Perturbative Algebraic Quantum Field Theory (pAQFT), the subject of this book, is a complete and mathematically rigorous treatment of perturbative quantum field theory (pQFT) that does not require the use of divergent quantities and works on a large class of Lorentzian manifolds. The examples of scalar fields, gauge theories and effective quantum gravity are discussed in detail. pQFT models describe a wide range of physical phenomena and show remarkable agreement with experimental results. Despite this success, the theory suffers from many conceptual problems, and pAQFT is a good candidate to solve many, if not all, of them. Chapters 1-3 provide some background in mathematics and physics. Chapter 4 concerns the classical theory of the scalar field, which is subsequently quantized in chapters 5 and 6. Chapter 7 covers gauge theory and chapter 8 discusses effective quantum gravity. The book aims to be accessible to researchers and graduate students interested in the mathematical foundations of pQFT.
Limit theorems for stochastic processes are an important part of probability theory and mathematical statistics, and one model that has attracted the attention of many researchers working in the area is that of limit theorems for randomly stopped stochastic processes. This volume is the first to present a state-of-the-art overview of this field, with many of the results published for the first time. It covers the general conditions as well as the basic applications of the theory, and it demystifies the vast, and technically demanding, Russian literature in detail. A survey of the literature and an extended bibliography of works in the area are also provided. The coverage is thorough, streamlined and arranged according to difficulty for use as an upper-level text if required. It is an essential reference for theoretical and applied researchers in probability and statistics, and it will contribute to the continuing extensive studies in the area and remain relevant for years to come.
This encyclopedia contains more than 5000 integer sequences, over half of which have never before been catalogued. Because the sequences are presented in the most natural form, and arranged for easy reference, this book is easier to use than the authors' earlier classic, "A Handbook of Integer Sequences". The Encyclopedia gives the name, mathematical description, and citations to the literature for each sequence. Following sequences of particular interest, there are essays on their origins, uses, and connections to related sequences (all cross-referenced). A valuable new feature of this text is the inclusion of a number of interesting diagrams and illustrations related to selected sequences.
This book conveys the theoretical and experimental basics of a well-founded measurement technique for high DC, AC and surge voltages, as well as the corresponding high currents. Additional chapters explain the acquisition of partial discharges and of the electrical measurands. Equipment used for the transmission and distribution of electrical energy is exposed to very high voltages and currents, and is therefore tested for reliability before commissioning using standardized as well as emerging test and measurement procedures. The book accordingly also covers procedures for calibrating measurement systems and determining measurement uncertainties, and discusses the current state of measurement technology with electro-optical and magneto-optical sensors.
Survey Sampling Theory and Applications offers a comprehensive overview of survey sampling, including the basics of sampling theory and practice, as well as research-based topics and examples of emerging trends. The text is useful for basic and advanced survey sampling courses. Many other books available for graduate students do not contain material on recent developments in the area of survey sampling. The book covers a wide spectrum of topics on the subject, including repetitive sampling over two occasions with varying probabilities, ranked set sampling, Fay's method for balanced repeated replication, mirror-match bootstrap, and controlled sampling procedures. Many topics discussed here are not available in other textbooks. In each section, the theory is illustrated with numerical examples, and at the end of each chapter both theoretical and numerical exercises are provided to help graduate students.
This book consists of selected peer-reviewed papers presented at the NAFEMS India Regional Conference (NIRC 2018). It covers current topics related to advances in computer aided design and manufacturing. The book focuses on the latest developments in engineering modelling and simulation, and its application to various complex engineering systems. Finite element method/finite element analysis, computational fluid dynamics, and additive manufacturing are some of the key topics covered in this book. The book aims to provide a better understanding of contemporary product design and analyses, and hence will be useful for researchers, academicians, and professionals.
This thesis develops novel numerical techniques for simulating quantum transport in the time domain and applies them to pertinent physical systems such as flying qubits in electronic interferometers and superconductor/semiconductor junctions hosting Majorana bound states (the key ingredient for topological quantum computing). In addition to exploring the rich new physics brought about by time dependence, the thesis also develops software that can be used to simulate nanoelectronic systems with arbitrary geometry and time dependence, offering a veritable toolbox for exploring this rapidly growing domain.
The proceedings of the 2017 Symposium on Chaos, Complexity and Leadership illuminate current research results and academic work from the fields of physics, mathematics, education and economics, as well as management and the social sciences. The text explores chaotic and complex systems, as well as chaos and complexity theory, in view of their applicability to management and leadership. These proceedings also explore non-linearity, data modelling and simulation in order to uncover new approaches and perspectives, and no effort is spared in bringing theory into practice while exploring leadership and management-laden concepts. The book covers the analysis of chaotic developments from different fields within the concepts of chaos and complexity theory. Researchers and students in the field will find answers to questions surrounding these intertwined and compelling fields.
This book is devoted to recent developments concerning linear operators, covering topics such as the Cauchy problem, Riesz bases, frames, spectral theory and applications to the Gribov operator in Bargmann space. Integral and integro-differential equations, as well as applications to problems in mathematical physics and mechanics, are also discussed. Contents: Introduction; Linear operators; Basic notations and results; Bases; Semi-groups; Discrete operator and denseness of the generalized eigenvectors; Frames in Hilbert spaces; Summability of series; -convergence operators; -hypercyclic set of linear operators; Analytic operators in Bela Szoekefalvi-Nagy's sense; Bases of the perturbed operator T( ); Frame of the perturbed operator T( ); Perturbation method for sound radiation by a vibrating plate in a light fluid; Applications to mathematical models; Reggeon field theory.
This monograph presents urban simulation methods that help in better understanding urban dynamics. Over historical times, cities have progressively absorbed a larger part of human population and will concentrate three quarters of humankind before the end of the century. This "urban transition" that has totally transformed the way we inhabit the planet is globally understood in its socio-economic rationales but is less frequently questioned as a spatio-temporal process. However, the cities, because they are intrinsically linked in a game of competition for resources and development, self organize in "systems of cities" where their future becomes more and more interdependent. The high frequency and intensity of interactions between cities explain that urban systems all over the world exhibit large similarities in their hierarchical and functional structure and rather regular dynamics. They are complex systems whose emergence, structure and further evolution are widely governed by the multiple kinds of interaction that link the various actors and institutions investing in cities their efforts, capital, knowledge and intelligence. Simulation models that reconstruct this dynamics may help in better understanding it and exploring future plausible evolutions of urban systems. This would provide better insight about how societies can manage the ecological transition at local, regional and global scales. The author has developed a series of instruments that greatly improve the techniques of validation for such models of social sciences that can be submitted to many applications in a variety of geographical situations. Examples are given for several BRICS countries, Europe and United States. The target audience primarily comprises research experts in the field of urban dynamics, but the book may also be beneficial for graduate students.
As the sequel to the proceedings of the International Conference of Continuum Mechanics Focusing on Singularities (CoMFoS15), the proceedings of CoMFoS16 present further advances and new topics in mathematical theory and numerical simulations related to various aspects of continuum mechanics. These include fracture mechanics, shape optimization, modeling of earthquakes, material structure, interface dynamics and complex systems. The authors are leading researchers with a profound knowledge of mathematical analysis from the fields of applied mathematics, physics, seismology, engineering, and industry. The book helps readers to understand how mathematical theory can be applied to various industrial problems, and conversely, how industrial problems lead to new mathematical challenges.
This book introduces methods of robust optimization in multivariate adaptive regression splines (MARS) and Conic MARS in order to handle uncertainty and non-linearity. The proposed techniques are implemented and explained in two-model regulatory systems that can be found in the financial sector and in the contexts of banking, environmental protection, system biology and medicine. The book provides necessary background information on multi-model regulatory networks, optimization and regression. It presents the theory of and approaches to robust (conic) multivariate adaptive regression splines - R(C)MARS - and robust (conic) generalized partial linear models - R(C)GPLM - under polyhedral uncertainty. Further, it introduces spline regression models for multi-model regulatory networks and interprets (C)MARS results based on different datasets for the implementation. It explains robust optimization in these models in terms of both the theory and methodology. In this context it studies R(C)MARS results with different uncertainty scenarios for a numerical example. Lastly, the book demonstrates the implementation of the method in a number of applications from the financial, energy, and environmental sectors, and provides an outlook on future research.
Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally refereed to the high quality standards of leading journals in the field.
This book offers a timely overview of theories and methods developed by an authoritative group of researchers to understand the link between criticality and brain functioning. Cortical information processing in particular and brain function in general rely heavily on the collective dynamics of neurons and networks distributed over many brain areas. A key concept for characterizing and understanding brain dynamics is the idea that networks operate near a critical state, which offers several potential benefits for computation and information processing. However, there is still a large gap between research on criticality and understanding brain function. For example, cortical networks are not homogeneous but highly structured, they are not in a state of spontaneous activation but strongly driven by changing external stimuli, and they process information with respect to behavioral goals. So far, the questions of how critical dynamics may support computation in this complex setting, and whether they can outperform other information-processing schemes, remain open. Based on the workshop "Dynamical Network States, Criticality and Cortical Function", held in March 2017 at the Hanse Institute for Advanced Studies (HWK) in Delmenhorst, Germany, the book provides readers with extensive information on these topics, as well as tools and ideas to answer the above-mentioned questions. It is meant for physicists, computational and systems neuroscientists, and biologists.
This is the second part of a two volume anthology comprising a selection of 49 articles that illustrate the depth, breadth and scope of Nigel Kalton's research. Each article is accompanied by comments from an expert on the respective topic, which serve to situate the article in its proper context, to successfully link past, present and hopefully future developments of the theory and to help readers grasp the extent of Kalton's accomplishments. Kalton's work represents a bridge to the mathematics of tomorrow, and this book will help readers to cross it. Nigel Kalton (1946-2010) was an extraordinary mathematician who made major contributions to an amazingly diverse range of fields over the course of his career.
The work presented in this book is based on the proton-proton collision data from the Large Hadron Collider at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector in 2015 and 2016. The research program of the ATLAS experiment includes the precise measurement of the parameters of the Standard Model, and the search for signals of physics beyond the SM. Both these approaches are pursued in this thesis, which presents two different analyses: the measurement of the Higgs boson mass in the di-photon decay channel, and the search for production of supersymmetric particles (gluinos, squarks or winos) in a final state containing two photons and missing transverse momentum. Finally, ATLAS detector performance studies, which are key ingredients for the two analyses outlined before, are also carried out and described.
This book analyzes the updated principles and applications of nonlinear approaches to solve engineering and physics problems. A knowledge of nonlinearity and a command of nonlinear approaches are indispensable for future engineers and scientists, making this an ideal book for engineers, engineering students, and researchers in engineering, physics, and mathematics. Chapters are of specific interest to readers who seek expertise in optimization, nonlinear analysis, mathematical modeling of complex forms, and non-classical engineering problems. The book covers methodologies and applications from diverse areas such as vehicle dynamics, surgery simulation, path planning, mobile robots, contact and scratch analysis at the micro and nano scale, sub-structuring techniques, ballistic projectiles, and many more.
One of the key unsolved questions in astrophysics is how galaxies acquired their mass over the course of cosmic time. In the standard theory, the merging of galaxies plays a major role in forming new stars; old galaxies then abruptly stop forming stars through an unknown process. Investigating this theory requires an unbiased measure of the star formation intensity of galaxies, which has been unavailable due to the dust obscuration of stellar light. This thesis presents a pioneering method for gleaning the maximum information from the deepest images of the far-infrared universe obtained with the Herschel satellite, reaching galaxies an order of magnitude fainter than in previous studies. Using these high-quality measurements, the author first demonstrates that the vast majority of galaxy star formation did not take place in merger-driven starbursts over 90% of the history of the universe, which suggests that galaxy growth is instead dominated by a steady infall of matter. The author further demonstrates that massive galaxies undergo a gradual decline in their star formation activity, providing an alternative path by which galaxies stop forming stars.
This book is a collection of papers presented at the 23rd International Conference on Domain Decomposition Methods in Science and Engineering, held on Jeju Island, Korea, on July 6-10, 2015. Domain decomposition methods solve boundary value problems by splitting them into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. Domain decomposition methods have considerable potential for parallelization of finite element methods, and serve as a basis for distributed, parallel computations.
The book focuses on the following fields of computer science: combinatorial optimization, scheduling theory, decision theory, and computer-aided production management systems. It also offers a quick introduction to the theory of PSC-algorithms, a new class of efficient methods for intractable problems of combinatorial optimization. A PSC-algorithm comprises: sufficient conditions for the optimality of a feasible solution that can be checked during the construction of that solution, the construction itself being carried out by a polynomial algorithm (the first polynomial component of the PSC-algorithm); an approximation algorithm of polynomial complexity (the second polynomial component); and, for NP-hard combinatorial optimization problems, an exact subalgorithm which, if the sufficient conditions are fulfilled during execution, turns the whole into a polynomial-complexity algorithm. Practitioners and software developers will find the book useful for implementing advanced methods of production organization in planning (including operative planning) and decision making. Scientists, graduate and master's students, and system engineers interested in problems of combinatorial optimization, decision making with poorly formalized overall goals, or multiple regression construction will also benefit from this book.
This book gathers the peer-reviewed proceedings of the 12th Annual Meeting of the Bulgarian Section of the Society for Industrial and Applied Mathematics, BGSIAM'17, held in Sofia, Bulgaria, in December 2017. The general theme of BGSIAM'17 was industrial and applied mathematics, with a particular focus on: high-performance computing, numerical methods and algorithms, analysis of partial differential equations and their applications, mathematical biology, control and uncertain systems, stochastic models, molecular dynamics, neural networks, genetic algorithms, metaheuristics for optimization problems, generalized nets, and Big Data.
This book gathers the main recent results on positive trigonometric polynomials within a unified framework. The book has two parts: theory and applications. The theory of sum-of-squares trigonometric polynomials is presented in a unified way, based on the concept of the Gram matrix (extended to the Gram pair or Gram set). The applications part is organized as a collection of related problems that make systematic use of the theoretical results.
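For orientation, the Gram-matrix characterization at the heart of this theory can be sketched as follows; the notation here is the standard one in the literature and is assumed, not quoted from the book:

```latex
% A trigonometric polynomial with conjugate-symmetric coefficients,
R(\omega) = \sum_{k=-n}^{n} r_k e^{-jk\omega}, \qquad r_{-k} = r_k^*,
% is nonnegative for all \omega if and only if it is a sum of squares,
% which in turn holds if and only if there exists a positive semidefinite
% (Gram) matrix Q \succeq 0 such that
r_k = \operatorname{tr}\left(\Theta_k Q\right), \qquad k = 0, \dots, n,
% where \Theta_k is the elementary Toeplitz matrix with ones on its
% k-th diagonal and zeros elsewhere.
```

Because positivity reduces to the existence of such a Q, problems involving positive trigonometric polynomials become semidefinite programs, which is what makes the applications part computationally tractable.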
This book introduces readers to MesoBioNano (MBN) Explorer - a multi-purpose software package designed to model molecular systems at various levels of size and complexity. In addition, it presents a specially designed multi-task toolkit and interface - the MBN Studio - which enables the set-up of input files, controls the simulations, and supports the subsequent visualization and analysis of the results obtained. The book subsequently provides a systematic description of the capabilities of this universal and powerful software package within the framework of computational molecular science, and guides readers through its applications in numerous areas of research in bio- and chemical physics and material science - ranging from the nano- to the mesoscale. MBN Explorer is particularly suited to computing the system's energy, to optimizing molecular structure, and to exploring the various facets of molecular and random walk dynamics. The package allows the use of a broad variety of interatomic potentials and can, e.g., be configured to select any subset of a molecular system as rigid fragments, whenever a significant reduction in the number of dynamical degrees of freedom is required for computational practicalities. MBN Studio enables users to easily construct initial geometries for the molecular, liquid, crystalline, gaseous and hybrid systems that serve as input for the subsequent simulations of their physical and chemical properties using MBN Explorer. Despite its universality, the computational efficiency of MBN Explorer is comparable to that of other, more specialized software packages, making it a viable multi-purpose alternative for the computational modeling of complex molecular systems. 
A number of detailed case studies presented in the second part of this book demonstrate MBN Explorer's usefulness and efficiency in the fields of atomic clusters and nanoparticles, biomolecular systems, nanostructured materials, composite materials and hybrid systems, crystals, liquids and gases, as well as in providing modeling support for novel and emerging technologies. Last but not least, with the release of the 3rd edition of MBN Explorer in spring 2017, a free trial version will be available from the MBN Research Center website (mbnresearch.com).
This volume offers a fundamentally different way of conceptualizing time and reality. Today, we see time predominantly as the linear-sequential order of events, and reality accordingly as consisting of facts that can be ordered along sequential time. But what if this conceptualization has us mistaking the "exhausts" for the "real thing", i.e. if we miss the best, the actual taking place of reality as it occurs in a very differently structured, primordial form of time, the time-space of the present? In this new conceptual framework, both the sequential aspect of time and the factual aspect of reality are emergent phenomena that come into being only after reality has actually taken place. In the new view, facts are just the "traces" that the actual taking place of reality leaves behind on the co-emergent "canvas" of local spacetime. Local spacetime itself emerges only as facts come into being - and only facts can be adequately localized in it. But how does reality then actually occur? It is conceived as a "constellatory self-unfolding", characterized by strong self-referentiality, and taking place in the primordial form of time, the not yet sequentially structured "time-space of the present". Time is seen here as an ontophainetic platform, i.e. as the stage on which reality can first occur. This view of time (and, thus, also space) seems to be very much in accordance with what we encounter in quantum physics before the so-called collapse of the wave function. In parallel, classical and relativistic physics largely operate within the factual portrait of reality, and the sequential aspect of time, respectively. Only singularities constitute an important exception: here the canvas of local spacetime - which emerged together with factization - melts down again.
In the novel framework, quantum reduction and singularities can be seen and addressed as inverse transitions: in quantum physical state reduction, reality "gains" the chrono-ontological format of facticity, and the sequential aspect of time becomes applicable. In singularities, by contrast, the inverse happens: reality loses its local spacetime formation and reverts back into its primordial, pre-local shape, thereby rendering the use of causality relations, Boolean logic and the dichotomization of subject and object obsolete. For our understanding of the relation between quantum and relativistic physics, this new view opens up fundamentally new perspectives: both are legitimate views of time and reality, they just address very different chrono-ontological portraits, and thus should not lead us to erroneously subjugate one view under the other. The task of the book is to provide a formal framework in which this radically different view of time and reality can be addressed properly. The mathematical approach is based on the logical and topological features of the Borromean Rings. It draws upon concepts and methods of algebraic and geometric topology - especially the theory of sheaves and links, group theory, logic and information theory - in relation to the standard constructions employed in quantum mechanics and general relativity, shedding new light on the persistent problems of their compatibility. The intended audience includes physicists, mathematicians and philosophers with an interest in the conceptual and mathematical foundations of modern physics.
Written by the pioneer and foremost authority on the subject, this new book is both a comprehensive university textbook and professional/research reference on the finite-difference time-domain (FD-TD) computational solution method for Maxwell's equations. It presents in-depth discussions of: the revolutionary Berenger PML absorbing boundary condition; FD-TD modelling of nonlinear, dispersive, and gain optical materials used in lasers and optical microchips; unstructured FD-TD meshes for modelling of complex systems; 2.5-dimensional body-of-revolution FD-TD algorithms; linear and nonlinear electronic circuit models, including a seamless tie-in to SPICE; digital signal postprocessing of FD-TD data; FD-TD modelling of microlaser cavities; and FD-TD software development for the latest Intel and Cray massively parallel computers.
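The FD-TD method itself is compact enough to sketch: in one dimension, Maxwell's curl equations reduce to a leapfrog update of electric and magnetic fields stored on staggered grids. The following free-space illustration in normalized units is our own sketch, not code from the book; the grid sizes, source position and pulse parameters are arbitrary:

```python
import numpy as np

nx, nt = 400, 250       # grid cells, time steps
ez = np.zeros(nx)       # electric field, sampled at integer grid points
hy = np.zeros(nx)       # magnetic field, sampled at half-integer points
S = 1.0                 # Courant number; S = 1 is the "magic" 1D time step

for n in range(nt):
    # leapfrog: H advances half a step using the spatial difference (curl) of E ...
    hy[:-1] += S * (ez[1:] - ez[:-1])
    # ... then E advances using the spatial difference (curl) of H
    ez[1:] += S * (hy[1:] - hy[:-1])
    # soft Gaussian source injected additively at one grid point
    ez[100] += np.exp(-((n - 30) / 10.0) ** 2)
```

At S = 1 the scheme propagates the pulse exactly one cell per step; the untreated grid edges act as perfect reflectors, which is precisely the problem that absorbing boundary conditions such as Berenger's PML, discussed at length in the book, are designed to solve.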