This book introduces new methods for analyzing vertex-varying graph signals. In many real-world scenarios, the data sensing domain is not a regular grid but a more complex network consisting of sensing points (vertices) and edges (relating the sensing points). Furthermore, the sensing geometry or signal properties define the relations among the sensed signal points. Even for data sensed in a well-defined time or space domain, introducing new relationships among the sensing points may yield new insights and lead to more advanced data processing techniques. In the cases discussed in this book, the data domain is defined by a graph, which captures the fundamental relations among the data points. The processing of signals whose sensing domains are defined by graphs has given rise to graph signal processing, an emerging field within signal processing. Although signal processing techniques for the analysis of time-varying signals are well established, the corresponding graph signal processing approaches are still in their infancy. This book presents novel approaches to the analysis of vertex-varying graph signals. The vertex-frequency analysis methods use the Laplacian or adjacency matrix to establish connections between the vertex and spectral (frequency) domains, so that local signal behavior can be analyzed, with edge connections used for graph signal localization. The book applies combined concepts from the time-frequency and wavelet analyses of classical signal processing to the analysis of graph signals. Covering analytical tools for vertex-varying applications, the book is of interest to researchers and practitioners in fields such as engineering, science, neuroscience, and genome processing, to name just a few. It is also a valuable resource for postgraduate students and researchers looking to expand their knowledge of vertex-frequency analysis theory and its applications. The book consists of 15 chapters contributed by 41 leading researchers in the field.
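For readers approaching the subject for the first time, the following minimal sketch (not taken from the book; the graph, the signal, and the window parameter tau are illustrative assumptions) shows the graph Fourier transform that underlies vertex-frequency analysis, with the eigenvectors of the graph Laplacian playing the role that complex exponentials play in classical frequency analysis:

```python
# A minimal sketch of the graph Fourier transform behind vertex-frequency
# analysis. The graph, signal, and spectral window are assumed examples.
import numpy as np

# Adjacency matrix of a small undirected graph (illustrative data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian L = D - A

lam, U = np.linalg.eigh(L)            # graph frequencies and Fourier basis

x = np.array([1.0, 0.5, -0.2, 0.8])   # a signal on the vertices
x_hat = U.T @ x                       # graph Fourier transform
x_rec = U @ x_hat                     # inverse transform recovers x

# A simple windowed (vertex-frequency) analysis: localize the signal around
# each vertex with a spectral window h(lambda) = exp(-tau * lambda), then
# transform, in the spirit of classical time-frequency analysis.
tau = 1.0
H = U @ np.diag(np.exp(-tau * lam)) @ U.T              # localization operator
S = np.array([U.T @ (x * H[:, n]) for n in range(len(x))])  # one spectrum per vertex
print(np.allclose(x, x_rec), S.shape)
```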
This monograph is centered on mathematical modeling, innovative numerical algorithms, and adaptive concepts for dealing with fracture phenomena in multiphysics. State-of-the-art phase-field fracture models are complemented with prototype explanations and rigorous numerical analysis. These developments are embedded in a carefully designed balance between scientific computing aspects and the numerical modeling of nonstationary coupled variational inequality systems. The focus is on nonlinear solvers, goal-oriented error estimation, predictor-corrector adaptivity, and interface conditions. Engineering applications show the potential for tackling practical problems in solid mechanics, porous media, and fluid-structure interaction.
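For orientation, phase-field fracture models of the kind treated here are typically built on an energy functional of Ambrosio-Tortorelli type; the following is one common illustrative form (conventions vary, and the monograph's precise formulation may differ):

\[
E_\ell(u,\varphi) \;=\; \int_\Omega \big((1-\kappa)\varphi^2 + \kappa\big)\,\psi(e(u))\,\mathrm{d}x \;+\; \frac{G_c}{2}\int_\Omega \Big(\frac{(1-\varphi)^2}{\ell} + \ell\,|\nabla\varphi|^2\Big)\,\mathrm{d}x,
\]

where u is the displacement, \varphi the phase field (1 in intact material, 0 in the crack), \psi the elastic energy density, e(u) the strain, G_c the critical energy release rate, \ell the regularization length, and \kappa a small residual stiffness. The crack irreversibility constraint \partial_t \varphi \le 0 is what makes the coupled problem a variational inequality system.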
The book provides a state-of-the-art overview of computational methods for nonlinear aeroelasticity and load analysis, focusing on key techniques and fundamental principles for CFD/CSD coupling in the time domain. CFD/CSD coupling software design and applications of CFD/CSD coupling techniques are also discussed in detail. It is an essential reference for researchers and students in mechanics and applied mathematics.
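To make the coupling idea concrete, here is a minimal sketch of a loosely coupled (partitioned) CFD/CSD time-stepping loop, with the fluid and structural solvers reduced to toy scalar surrogates so that it runs standalone; the models, coefficients, and time step are assumptions made purely for illustration, not the book's algorithms:

```python
# A toy partitioned CFD/CSD loop: fluid and structure solvers exchange data
# once per time step. Both "solvers" are scalar stand-ins for illustration.
def fluid_load(displacement, q=1.0):
    """Toy aerodynamic load: stiffness-like response to the structural shape."""
    return -q * displacement

def structure_step(u, v, load, dt, k=4.0, c=0.1):
    """One semi-implicit Euler step of a damped oscillator u'' + c u' + k u = load."""
    v = v + dt * (load - c * v - k * u)
    u = u + dt * v
    return u, v

u, v = 0.1, 0.0          # assumed initial displacement and velocity
dt, history = 0.01, []
for _ in range(1000):
    load = fluid_load(u)                    # CFD surrogate: loads from current shape
    u, v = structure_step(u, v, load, dt)   # CSD surrogate: advance the structure
    history.append(u)
print(history[-1])
```

The defining feature of loose coupling in the time domain is visible in the loop body: the two solvers exchange data once per time step, each advancing with the other's most recent output.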
Perturbative Algebraic Quantum Field Theory (pAQFT), the subject of this book, is a complete and mathematically rigorous treatment of perturbative quantum field theory (pQFT) that does not require the use of divergent quantities and works on a large class of Lorentzian manifolds. We discuss in detail the examples of scalar fields, gauge theories, and effective quantum gravity. pQFT models describe a wide range of physical phenomena and show remarkable agreement with experimental results. Despite this success, the theory suffers from many conceptual problems. pAQFT is a good candidate to solve many, if not all, of them. Chapters 1-3 provide background in mathematics and physics. Chapter 4 concerns the classical theory of the scalar field, which is subsequently quantized in Chapters 5 and 6. Chapter 7 covers gauge theory, and Chapter 8 discusses effective quantum gravity. The book aims to be accessible to researchers and graduate students who are interested in the mathematical foundations of pQFT.
In this monograph we study the problem of constructing asymptotic solutions of equations for functions whose number of arguments tends to infinity as the small parameter tends to zero. Such equations arise in statistical physics and in the quantum theory of a large number of fields. We consider the problem of renormalization of quantum field theory in the Hamiltonian formalism, which encounters additional difficulties related to the Stückelberg divergences and the Haag theorem. Asymptotic methods for solving pseudodifferential equations with a small parameter multiplying the derivatives, as well as the asymptotic methods developed in the present monograph for solving problems in statistical physics and quantum field theory, can be considered from a unified viewpoint if one introduces the notion of an abstract canonical operator. The book will be of interest to researchers specializing in asymptotic methods, statistical physics, and quantum field theory, as well as to graduate and undergraduate students in these specialities.
This book was written to serve as a graduate-level textbook for special topics classes in mathematics, statistics, and economics, to introduce these topics to other researchers, and for use in short courses. It is an introduction to the theory of majorization and related notions, and contains detailed material on economic applications of majorization and the Lorenz order, investigating the theoretical aspects of these two interrelated orderings. Revising and expanding on an earlier monograph, Majorization and the Lorenz Order: A Brief Introduction, the authors provide a straightforward development and explanation of majorization concepts, address the historical development of the topics, and provide up-to-date coverage of families of Lorenz curves. The exposition of multivariate Lorenz orderings sets the book apart from existing treatments of these topics. Mathematicians, theoretical statisticians, economists, and other social scientists who already recognize the utility of the Lorenz order in income inequality contexts will find the book useful for its sound development of relevant concepts, rigorously linked both to the majorization literature and to the even more extensive body of research on economic applications. Barry C. Arnold, PhD, is Distinguished Professor in the Statistics Department at the University of California, Riverside. He is a Fellow of the American Statistical Association, the American Association for the Advancement of Science, and the Institute of Mathematical Statistics, and is an elected member of the International Statistical Institute. He is the author of more than two hundred publications and eight books. Jose Maria Sarabia, PhD, is Professor of Statistics and Quantitative Methods in Business and Economics in the Department of Economics at the University of Cantabria, Spain. He is the author of more than one hundred and fifty publications and ten books, and is an associate editor of several journals, including TEST, Communications in Statistics, and Journal of Statistical Distributions and Applications.
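For a concrete handle on the two central notions, here is a minimal sketch (not from the book) of a majorization test and an empirical Lorenz curve; the example income vectors are made up:

```python
# x is majorized by y when the decreasing-order partial sums of x never
# exceed those of y and the totals agree; the Lorenz curve plots cumulative
# income shares against cumulative population shares.
import numpy as np

def majorizes(y, x, tol=1e-12):
    """Return True if y majorizes x (x is majorized by y), equal lengths assumed."""
    xs = np.sort(x)[::-1]                     # decreasing order
    ys = np.sort(y)[::-1]
    if abs(xs.sum() - ys.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

def lorenz_curve(incomes):
    """Points (p_i, L(p_i)) of the empirical Lorenz curve."""
    v = np.sort(np.asarray(incomes, dtype=float))   # increasing order
    cum = np.cumsum(v) / v.sum()
    p = np.arange(1, len(v) + 1) / len(v)
    return np.concatenate([[0.0], p]), np.concatenate([[0.0], cum])

# (1,1,1,1) is majorized by (2,1,1,0): the more equal vector sits lower.
print(majorizes([2, 1, 1, 0], [1, 1, 1, 1]))   # True
p, Lp = lorenz_curve([2, 1, 1, 0])
print(Lp)                                      # [0. 0. 0.25 0.5 1.]
```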
This book is the first part of a two volume anthology comprising a selection of 49 articles that illustrate the depth, breadth and scope of Nigel Kalton's research. Each article is accompanied by comments from an expert on the respective topic, which serves to situate the article in its proper context, to successfully link past, present and hopefully future developments of the theory, and to help readers grasp the extent of Kalton's accomplishments. Kalton's work represents a bridge to the mathematics of tomorrow, and this book will help readers to cross it. Nigel Kalton (1946-2010) was an extraordinary mathematician who made major contributions to an amazingly diverse range of fields over the course of his career.
This book focuses on the finite-time control of attitude stabilization, attitude tracking for individual spacecraft, and finite-time control of attitude synchronization. It discusses formation reconfiguration for multiple spacecraft in complex networks, and provides a new fast nonsingular terminal sliding mode surface (FNTSMS). Further, it presents newly designed controllers and several control laws to enhance the performance of spacecraft systems and meet related demands, such as strong disturbance rejection and high-precision control. As such, the book establishes a fundamental framework for these topics, while also highlighting the importance of integrated analysis. It is a useful resource for all researchers and students who are interested in this field, as well as engineers whose work involves designing flight vehicles.
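As background for readers unfamiliar with the terminology, terminal sliding surfaces in the literature commonly take a form like the following (an illustrative example only, not necessarily the FNTSMS proposed in this book):

\[
s = \dot{e} + \alpha\,e + \beta\,|e|^{\gamma}\operatorname{sign}(e), \qquad \alpha,\beta > 0,\; 0 < \gamma < 1,
\]

where e is the attitude tracking error. The fractional-power term enforces finite-time convergence near the origin, the linear term keeps convergence fast far from it, and nonsingular variants are constructed to avoid the negative exponents that cause singularities in classical terminal sliding mode designs.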
This book addresses a broad range of problems commonly encountered in financial analysis, logistics, and supply chain management, such as the use of big data analytics in the banking sector. The book is divided into twenty chapters; contemporary topics discussed include cooperative/non-cooperative supply chain models for imperfect-quality items with trade-credit financing; a non-dominated sorting water cycle algorithm for the cardinality-constrained portfolio problem; and methods for determining initial basic feasible solutions for transportation problems (the "supply demand reparation method" and the "continuous allocation method"). In addition, the book delves into a comparative study of exponential smoothing and the ARIMA model for fuel prices; an optimal policy for Weibull-distributed deteriorating items with a ramp-type demand rate and shortages; an inventory model with shortages and deterioration for three different demand rates; outlier-labeling methods for medical data; a garbage disposal plant as a validated model of a fault-tolerant system; the design of a "least cost ration formulation application for cattle"; a preservation technology model for deteriorating items with advertisement-dependent demand and trade credit; a time series model for stock price forecasting in India; and asset pricing using capital market curves. The book is a valuable asset for researchers and industry practitioners working in these areas, giving them a feel for the latest developments and encouraging further research in this direction.
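As a small taste of one listed topic, here is a minimal sketch (illustrative only) of simple exponential smoothing, one of the forecasting methods the book compares with ARIMA; the smoothing constant and price data are made-up assumptions:

```python
# Simple exponential smoothing: each smoothed value is a weighted average of
# the newest observation and the previous smoothed value.
def exponential_smoothing(series, alpha=0.3):
    """Return the smoothed values for a price series (alpha is assumed)."""
    smoothed = [series[0]]                    # initialize with first observation
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

fuel_prices = [92.1, 93.4, 95.0, 94.2, 96.8]  # made-up example data
print(exponential_smoothing(fuel_prices))
```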
This monograph describes advances in the theory of extremal problems in classes of functions defined by a majorizing modulus of continuity ω. In particular, an extensive account is given of the structural, limiting, and extremal properties of perfect ω-splines, generalizing the standard polynomial perfect splines of the theory of Sobolev classes. In this context, special attention is paid to the qualitative description of Chebyshev ω-splines and ω-polynomials associated with the Kolmogorov problem of n-widths, and to sharp additive inequalities between the norms of intermediate derivatives in functional classes with a bounding modulus of continuity. Since, as a rule, the techniques of the theory of Sobolev classes are inapplicable in such classes, novel geometrical methods are developed, based on entirely new ideas. The book can be used profitably by pure or applied scientists looking for mathematical approaches to the solution of practical problems for which standard methods do not work. The scope of problems treated in the monograph ranges from the maximization of integral functionals and the characterization of the structure of equimeasurable functions to the construction of Chebyshev splines through applications of fixed point theorems and the solution of integral equations related to the classical Euler equation; it will appeal to mathematicians specializing in approximation theory, functional and convex analysis, optimization, topology, and integral equations.
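As background, the classical prototype of such inequalities between norms of intermediate derivatives is the Landau-Kolmogorov inequality on the real line, stated here in its multiplicative form purely for orientation (the monograph's additive inequalities for classes with a bounding modulus of continuity refine results of this kind):

\[
\|f^{(k)}\|_{\infty} \;\le\; C_{n,k}\,\|f\|_{\infty}^{\,1-k/n}\,\|f^{(n)}\|_{\infty}^{\,k/n}, \qquad 0 < k < n.
\]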
This thesis presents a pioneering method for gleaning the maximum information from the deepest images of the far-infrared universe obtained with the Herschel satellite, reaching galaxies fainter by an order of magnitude than in previous studies. Using these high-quality measurements, the author first demonstrates that the vast majority of galaxy star formation did not take place in merger-driven starbursts over 90% of the history of the universe, which suggests that galaxy growth is instead dominated by a steady infall of matter. The author further demonstrates that massive galaxies suffer a gradual decline in their star formation activity, providing an alternative path for galaxies to stop star formation. One of the key unsolved questions in astrophysics is how galaxies acquired their mass in the course of cosmic time. In the standard theory, the merging of galaxies plays a major role in forming new stars. Then, old galaxies abruptly stop forming stars through an unknown process. Investigating this theory requires an unbiased measure of the star formation intensity of galaxies, which has been unavailable due to the dust obscuration of stellar light.
The aim of this book is to give a physical treatment of the kinetic theory of gases and magnetoplasmas, covering the standard material as simply as possible, using mean-free-path arguments where possible, and identifying problem areas where received theory has either failed or fallen short of expectations. Examples of such problem areas are provided by strong shock waves, ultrasonic waves (high Knudsen numbers), and transport across strong magnetic fields. One of the paradoxes arising in kinetic theory concerns the fluid pressure. Collisions are necessary for a fluid force to result, yet standard kinetic theory does not entail this, being satisfied to bypass Newton's equations by defining pressure as a momentum flux. This omission usually has no adverse consequences, but with increasing Knudsen number it leads to errors. This text pays particular attention to pressure, explaining the importance of allowing for its collisional nature from the outset in developing kinetic theory.
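For reference, the standard kinetic-theory definition that the text takes issue with expresses pressure as a momentum flux: with f(\mathbf{r},\mathbf{v},t) the velocity distribution function and \mathbf{c} = \mathbf{v} - \mathbf{u} the peculiar velocity relative to the fluid velocity \mathbf{u},

\[
p_{ij} = m \int c_i\,c_j\,f\,\mathrm{d}^3v, \qquad p = \tfrac{1}{3}\,p_{ii},
\]

a purely kinematic definition in which collisions make no explicit appearance.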
This thesis proposes a reliable and repeatable method for implementing Spoof Surface Plasmon (SSP) modes in the design of various circuit components. It also presents the first equivalent circuit model for plasmonic structures, which serves as an insightful guide to designing SSP-based circuits. Today, electronic circuits and systems are developing rapidly and becoming an indispensable part of our daily life; however, the issue of compactness in integrated circuits remains a formidable challenge. Recently, SSP modes have been proposed as a novel platform for highly compact electronic circuits. Despite extensive research efforts in this area, there is still an urgent need for a systematic design method for plasmonic circuits. In this thesis, different SSP-based transmission lines, antenna feeding networks, and antennas are designed and experimentally evaluated. With their high field confinement, SSPs do not suffer from the compactness limitations of traditional circuits and are capable of providing an alternative platform for the future generation of electronic circuits and electromagnetic systems.
The book covers the fundamentals of the theory of optimal methods for solving ill-posed problems, as well as ways to obtain accurate and accurate-by-order error estimates for these methods. The methods described are used to solve a number of inverse problems in mathematical physics.
Contents:
- Modulus of continuity of the inverse operator and methods for solving ill-posed problems
- Lavrent'ev methods for constructing approximate solutions of linear operator equations of the first kind
- Tikhonov regularization method
- Projection-regularization method
- Inverse heat exchange problems
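To make the idea concrete, here is a minimal sketch (not from the book) of Tikhonov regularization for an ill-posed linear system Au = f: one minimizes ||Au - f||^2 + alpha ||u||^2, which leads to the normal equations (A^T A + alpha I) u = A^T f; the test matrix, noise level, and parameter alpha are illustrative assumptions:

```python
# Tikhonov regularization stabilizes an ill-conditioned linear inverse
# problem; a naive solve amplifies the data noise enormously.
import numpy as np

def tikhonov(A, f, alpha):
    """Solve the regularized normal equations (A^T A + alpha I) u = A^T f."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f)

# An ill-conditioned example: a small Hilbert matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
u_true = np.ones(n)
f = A @ u_true + 1e-6 * np.random.default_rng(0).standard_normal(n)  # noisy data

print(np.linalg.norm(tikhonov(A, f, 1e-8) - u_true))   # regularized: small error
print(np.linalg.norm(np.linalg.solve(A, f) - u_true))  # naive solve: error blows up
```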
This book introduces readers to MesoBioNano (MBN) Explorer - a multi-purpose software package designed to model molecular systems at various levels of size and complexity. In addition, it presents a specially designed multi-task toolkit and interface - the MBN Studio - which enables the set-up of input files, controls the simulations, and supports the subsequent visualization and analysis of the results obtained. The book subsequently provides a systematic description of the capabilities of this universal and powerful software package within the framework of computational molecular science, and guides readers through its applications in numerous areas of research in bio- and chemical physics and material science - ranging from the nano- to the mesoscale. MBN Explorer is particularly suited to computing the system's energy, to optimizing molecular structure, and to exploring the various facets of molecular and random walk dynamics. The package allows the use of a broad variety of interatomic potentials and can, e.g., be configured to select any subset of a molecular system as rigid fragments, whenever a significant reduction in the number of dynamical degrees of freedom is required for computational practicalities. MBN Studio enables users to easily construct initial geometries for the molecular, liquid, crystalline, gaseous and hybrid systems that serve as input for the subsequent simulations of their physical and chemical properties using MBN Explorer. Despite its universality, the computational efficiency of MBN Explorer is comparable to that of other, more specialized software packages, making it a viable multi-purpose alternative for the computational modeling of complex molecular systems. A number of detailed case studies presented in the second part of this book demonstrate MBN Explorer's usefulness and efficiency in the fields of atomic clusters and nanoparticles, biomolecular systems, nanostructured materials, composite materials and hybrid systems, crystals, liquids and gases, as well as in providing modeling support for novel and emerging technologies. Last but not least, with the release of the 3rd edition of MBN Explorer in spring 2017, a free trial version will be available from the MBN Research Center website (mbnresearch.com).
This book outlines a possible future theoretical perspective for systemics, together with its conceptual morphology and landscape, while the Good-Old-Fashioned-Systemics (GOFS) era is still under way. As the book's title indicates, the change from GOFS to future systemics can be represented by the conceptual shift from Collective Beings to Quasi-systems. With the advancements, problems, and approaches occurring in contemporary science, systemics is moving beyond the traditional frameworks used in the past. From Collective Beings to Coherent Quasi-Systems outlines a conceptual morphology and landscape for a new theoretical perspective on systemics, introducing the concept of Quasi-systems. Advances in domains such as theoretical physics, philosophy of science, cell biology, neuroscience, experimental economics, and network science, among many others, offer new concepts and technical tools to support the creation of a fully transdisciplinary General Theory of Change. This circumstance requires a deep reformulation of systemics, without forgetting the achievements of established conventions. The book is divided into two parts. Part I examines classic systemic issues from new theoretical perspectives and approaches. A new general unified framework is introduced to help deal with topics such as dynamic structural coherence and Quasi-systems. This new theoretical framework is compared and contrasted with the traditional approaches. Part II focuses on the process of translating the theoretical principles, models, and approaches introduced in Part I into social culture. This translation is urgent in post-industrial societies, where emergent processes and problems are still dealt with using the classical or non-systemic knowledge of the industrial phase.
This volume offers a fundamentally different way of conceptualizing time and reality. Today, we see time predominantly as the linear-sequential order of events, and reality accordingly as consisting of facts that can be ordered along sequential time. But what if this conceptualization has us mistaking the "exhausts" for the "real thing", i.e. if we miss the best, the actual taking place of reality as it occurs in a very differently structured, primordial form of time, the time-space of the present? In this new conceptual framework, both the sequential aspect of time and the factual aspect of reality are emergent phenomena that come into being only after reality has actually taken place. In the new view, facts are just the "traces" that the actual taking place of reality leaves behind on the co-emergent "canvas" of local spacetime. Local spacetime itself emerges only as facts come into being, and only facts can be adequately localized in it. But how, then, does reality actually occur? It is conceived as a "constellatory self-unfolding", characterized by strong self-referentiality, and taking place in the primordial form of time, the not yet sequentially structured "time-space of the present". Time is seen here as an ontophainetic platform, i.e. as the stage on which reality can first occur. This view of time (and, thus, also space) seems to be very much in accordance with what we encounter in quantum physics before the so-called collapse of the wave function. In parallel, classical and relativistic physics largely operate within the factual portrait of reality and the sequential aspect of time, respectively. Only singularities constitute an important exception: here the canvas of local spacetime, which emerged together with factization, melts down again. In the novel framework, quantum reduction and singularities can be seen and addressed as inverse transitions: in quantum physical state reduction, reality "gains" the chrono-ontological format of facticity, and the sequential aspect of time becomes applicable. In singularities, by contrast, the inverse happens: reality loses its local spacetime formation and reverts back into its primordial, pre-local shape, in this way making the use of causality relations, Boolean logic, and the dichotomization of subject and object obsolete. For our understanding of the relation between quantum and relativistic physics, this new view opens up fundamentally new perspectives: both are legitimate views of time and reality; they just address very different chrono-ontological portraits, and thus should not lead us to erroneously subordinate one view to the other. The task of the book is to provide a formal framework in which this radically different view of time and reality can be addressed properly. The mathematical approach is based on the logical and topological features of the Borromean Rings. It draws upon concepts and methods of algebraic and geometric topology, especially the theory of sheaves and links, group theory, logic, and information theory, in relation to the standard constructions employed in quantum mechanics and general relativity, shedding new light on the pestilential problems of their compatibility. The intended audience includes physicists, mathematicians, and philosophers with an interest in the conceptual and mathematical foundations of modern physics.
This book explores the use of numerical relativity (NR) methods to solve cosmological problems, and describes one of the first uses of NR to study inflationary physics. NR consists of solving Einstein's equation of general relativity, which governs the evolution of matter and energy on cosmological scales and in systems with strong gravitational effects, such as around black holes. To date, NR has mainly been used for simulating binary black hole and neutron star mergers like those recently detected by LIGO. Its use as a tool in fundamental problems of gravity and cosmology is novel, but rapidly gaining interest. In this thesis, the author investigates the initial condition problem in early universe cosmology - whether an inflationary expansion period could have "got going" from initially inhomogeneous conditions - and identifies criteria for predicting the robustness of particular models. State-of-the-art numerical relativity tools are developed to address this question; these tools are now publicly available.
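For reference, the equation being solved numerically is Einstein's field equation (here in units with c = 1),

\[
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; 8\pi G\,T_{\mu\nu},
\]

which NR codes evolve by splitting spacetime into a stack of spatial slices (a 3+1 decomposition) and integrating forward in time.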
This workbook is designed to supplement optics textbooks and covers all the traditional topics of geometrical optics. Terms, equations, definitions, and concepts are discussed briefly and explained through a series of problems that are worked out in a step-by-step manner which simplifies the problem-solving process. Additional practice problems are provided at the end of each chapter.
- An indispensable tool when studying for the state and National Boards
- An ideal supplement to optics textbooks
- Covers the traditional topics of geometrical optics
This work uses techniques of optimization and operations research to develop the first comprehensive survey of the entire field of the optimization of resource, production, and distribution systems. Sten Thore proposes an "economic logistics" that is similar to the well-known concept of military logistics, but expanded to include such features as the optimal location of plants, inventories, and retail outlets, and the management of hierarchical multi-echelon production, inventory, and distribution systems. The study of individual features of this supply process is familiar from operations research, but Thore joins these elements together into larger analytic structures encompassing the production and distribution system of an entire industry. Following an introductory chapter and a review of saddle-point theory, coauthored with W. W. Cooper, Thore explores the three dimensions of the supply process synthesis: the spatial dimension (as in simple transportation systems), the vertical dimension (extending from resources to finished consumer goods, as in activity analysis), and the time dimension (as in inventory accumulation and investment). The combination of these then leads to models of such diverse subjects as regional warehouse systems, activity analysis and activity networks, multi-stage warehouse systems for intermediate goods, distribution networks, and spatial equilibrium. Each chapter contains its own exercises, which are solved numerically and discussed in great detail, illustrating such optimization techniques as linear and nonlinear programming, goal programming and goal focusing, chance-constrained programming, and infinite games. The work is designed for use in graduate courses in economics and mathematical modeling, and will also be a useful addition to college and university library collections.
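As an illustration of the spatial dimension, here is a minimal sketch (not from the book) of the simple transportation problem solved as a linear program with SciPy; the costs, supplies, and demands are made-up data:

```python
# Ship goods from plants to markets at minimum cost, subject to supply
# capacities and demand requirements: the classic transportation LP.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],      # unit shipping cost, plant i -> market j
                 [5.0, 3.0, 7.0]])
supply = np.array([50.0, 60.0])        # capacity at each plant
demand = np.array([30.0, 40.0, 40.0])  # requirement at each market

m, n = cost.shape
# Variables x[i, j] >= 0, flattened row-major into a vector of length m*n.
A_supply = np.kron(np.eye(m), np.ones(n))   # row sums    <= supply
A_demand = np.kron(np.ones(m), np.eye(n))   # column sums >= demand
res = linprog(c=cost.ravel(),
              A_ub=np.vstack([A_supply, -A_demand]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None))
print(res.x.reshape(m, n), res.fun)    # optimal shipments and total cost
```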
This book presents a collection of invited research and review contributions on recent advances in (mainly) theoretical condensed matter physics, theoretical chemistry, and theoretical physics. The volume celebrates the 90th birthday of N.H. March (Emeritus Professor, Oxford University, UK), a prominent figure in all of these fields. Given the broad range of interests in the research activity of Professor March, who collaborated with a number of eminent scientists in physics and chemistry, the volume embraces quite diverse topics in physics and chemistry, at various dimensions and energy scales. One thread connecting all these topics is correlation in aggregated states of matter, ranging from nuclear physics to molecules, clusters, disordered condensed phases such as the liquid state, and solid state physics, and the various phase transitions, both structural and electronic, occurring therein. A final chapter leaps to an even larger scale of matter aggregation, namely the universe and gravitation. A further no less important common thread is methodological, with the application of theoretical physics and chemistry, particularly density functional theory and statistical field theory, to both nuclear and condensed matter.
The increasing complexity of models used to predict real-world systems creates a need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This two-volume handbook covers methods as well as applications. This first volume focuses on real-time control theory, data assimilation, real-time visualization, high-dimensional state spaces, and the interaction of different reduction techniques.
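To illustrate what such a reduction can look like, here is a minimal sketch (not from the handbook) of one classical technique, proper orthogonal decomposition, in which snapshots of an expensive model are compressed into a low-dimensional basis via the SVD; the snapshot data are synthetic:

```python
# Proper orthogonal decomposition (POD): find a small basis that captures
# almost all of the energy in a collection of high-dimensional snapshots.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))
# ^ assumed data: 200 states of dimension 1000 that secretly live in 5 modes

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
basis = U[:, :r]                         # reduced basis (here r == 5)

reduced = basis.T @ snapshots            # project: 1000-dim -> r-dim
error = np.linalg.norm(snapshots - basis @ reduced) / np.linalg.norm(snapshots)
print(r, error)                          # tiny reconstruction error
```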
The thesis presents a tool for creating rubble pile asteroid simulants for use in numerical impact experiments, and provides evidence that the asteroid disruption threshold and the resultant fragment size distribution are sensitive to the distribution of internal voids. It represents an important step towards a deeper understanding of fragmentation processes in the asteroid belt, and provides a tool for inferring the interior structure of rubble pile asteroids. Most small asteroids are 'rubble piles' - re-accumulated fragments of debris from earlier disruptive collisions. The study of fragmentation processes for rubble pile asteroids plays an essential part in understanding their collisional evolution. An important unanswered question is: what is the distribution of void space inside rubble pile asteroids? As a result of this thesis, numerical impact experiments can now be used to link surface features to internal structure and thus help to answer this question. Applying this model to asteroid Steins, which was imaged at close range by the Rosetta spacecraft, the author shows that a large hill-like structure is most likely primordial, while a catena of pits can be interpreted as evidence of the fracturing of pre-existing internal voids.
This book provides an up-to-date account of the methods needed to establish the existence of solutions to certain nonlinear boundary value problems. All important and interesting aspects of the theory of periodic solutions of ordinary differential equations related to the physical and mathematical question of resonance are treated. As a model example, the author has chosen the periodic problem for a second-order scalar differential equation. In a pedagogical style, the author takes the reader step by step from the basics to the most advanced existence results in the field.
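For orientation, the model problem in question is the periodic boundary value problem for a second-order scalar equation, typically written in a form such as

\[
u'' + g(t,u) = 0, \qquad u(0) = u(T), \quad u'(0) = u'(T),
\]

with resonance occurring when the nonlinearity interacts with the spectrum \{(2\pi k/T)^2 : k = 0, 1, 2, \dots\} of the linear operator -u'' under periodic conditions.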
This thesis describes the application of a Monte Carlo radiative transfer code to accretion disc winds in two types of systems spanning 9 orders of magnitude in mass and size. In both cases, the results provide important new insights. On small scales, the presence of disc winds in accreting white dwarf binary systems has long been inferred from the presence of ultraviolet absorption lines. Here, the thesis shows that the same winds can also produce optical emission lines and a recombination continuum. On large scales, the thesis constructs a simple model of disc winds in quasars that is capable of explaining both the observed absorption and emission signatures - a crucial advance that supports a disc-wind based unification scenario for quasars. Lastly, the thesis includes a theoretical investigation into the equivalent width distribution of the emission lines in quasars, which reveals a major challenge to all unification scenarios.