This thesis describes the stand-alone discovery and measurement of the Higgs boson in its decays to two W bosons using the Run-I ATLAS dataset. This is the most precise measurement of gluon-fusion Higgs boson production and is among the most significant results attained at the LHC. The thesis provides an exceptionally clear exposition of a complicated analysis performed by a large team of researchers. Aspects of the analysis performed by the author are explained in detail; these include new methods for evaluating uncertainties on the jet binning used in the analysis and for estimating the background due to associated production of a W boson and an off-shell photon. The thesis also describes a measurement of the WW cross section, an essential background to Higgs boson production. The primary motivation of the LHC was to prove or disprove the existence of the Higgs boson. In 2012, CERN announced this discovery, and the resulting ATLAS publication covered three decay channels: γγ, ZZ, and WW.
This book covers topics in portfolio management and multicriteria decision analysis (MCDA), presenting a transparent and unified methodology for the portfolio construction process. The most important feature of the book is the proposed methodological framework, which integrates two individual subsystems: the portfolio selection subsystem and the portfolio optimization subsystem. An additional highlight is the detailed, step-by-step implementation of the proposed multicriteria algorithms in Python. The implementation is presented in detail; each step is elaborately described, from the input of the data to the extraction of the results. Algorithms are organized into small cells of code, accompanied by targeted remarks and comments, to help the reader fully understand their mechanics. Readers are provided with a link to access the source code through GitHub. The work may also serve as a reference presenting state-of-the-art research on portfolio construction with multiple and complex investment objectives and constraints. The book consists of eight chapters. A brief introduction is provided in Chapter 1. The fundamental issues of modern portfolio theory are discussed in Chapter 2. In Chapter 3, the various multicriteria decision aid methods, either discrete or continuous, are concisely described. Chapter 4 offers a comprehensive review of the published literature in the field of multicriteria portfolio management. In Chapter 5, an integrated and original multicriteria portfolio construction methodology is developed. Chapter 6 presents the web-based information system in which the suggested methodological framework has been implemented. In Chapter 7, the experimental application of the proposed methodology is discussed, and in Chapter 8 the authors provide overall conclusions.
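The weighted-sum aggregation at the heart of many discrete MCDA methods can be sketched in a few lines of Python. This is a generic illustration, not code from the book: the stock names, criteria, and weights below are invented for the example.

```python
# Hypothetical weighted-sum multicriteria scoring of candidate stocks.
# All criteria are treated as benefit criteria (higher is better).

def minmax_normalize(values):
    """Scale a list of criterion values to the interval [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_sum_ranking(alternatives, weights):
    """alternatives: dict name -> list of criterion values.
    weights: non-negative weights, one per criterion, summing to 1.
    Returns (name, score) pairs sorted by descending aggregate score."""
    names = list(alternatives)
    # Normalize each criterion column across all alternatives.
    columns = list(zip(*(alternatives[n] for n in names)))
    norm_cols = [minmax_normalize(list(col)) for col in columns]
    scores = {}
    for i, name in enumerate(names):
        scores[name] = sum(w * norm_cols[j][i] for j, w in enumerate(weights))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three stocks scored on (expected return, liquidity, ESG rating).
ranking = weighted_sum_ranking(
    {"A": [0.12, 0.8, 7.0], "B": [0.09, 0.9, 9.0], "C": [0.15, 0.4, 5.0]},
    weights=[0.5, 0.2, 0.3],
)
```

The ranking produced by such a score would typically feed a downstream portfolio optimization step that allocates weights subject to the investor's constraints.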
The book is aimed at a diverse readership, including fund managers, risk managers, investment advisors, bankers, private investors, analytics scientists, operations research scientists, and computer engineers, to name just a few. Portions of the book may be used as instructional material for either advanced undergraduate or postgraduate courses in investment analysis, portfolio engineering, decision science, computer science, or financial engineering.
The principal aim of the book is to present a self-contained, modern account of similarity and symmetry methods, which are important mathematical tools for physicists, engineers and applied mathematicians alike. The idea is to provide a balanced presentation of the mathematical techniques and applications of symmetry methods in mathematics, physics and engineering. That is why it includes recent developments and many examples of systematically finding conservation laws and local and nonlocal symmetries for ordinary and partial differential equations. The role of continuous symmetries in classical and quantum field theories is presented at a technical level accessible even to non-specialists. The importance of symmetries in continuum mechanics and mechanics of materials is highlighted through recent developments, such as the construction of constitutive models for various materials combining Lie symmetries with experimental data. As a whole, this book is a unique collection of contributions from experts in the field, including specialists in the mathematical treatment of symmetries and researchers using symmetries from a fundamental, applied or numerical viewpoint. The book is a fascinating overview of symmetry methods aimed at graduate students in physics, mathematics and engineering, as well as researchers wishing either to enter the field or to keep abreast of recent developments and applications of symmetry methods in different scientific fields.
Thurston maps are topological generalizations of postcritically-finite rational maps. This book provides a comprehensive study of the ergodic theory of expanding Thurston maps, focusing on the measure of maximal entropy, as well as a more general class of invariant measures, called equilibrium states, and certain weak expansion properties of such maps. In particular, we present equidistribution results for iterated preimages and periodic points with respect to the unique measure of maximal entropy by investigating the number and locations of fixed points. We then use the thermodynamical formalism to establish the existence, uniqueness, and various other properties of the equilibrium state for a Hölder continuous potential on the sphere equipped with a visual metric. After studying some weak expansion properties of such maps, we obtain certain large deviation principles for iterated preimages and periodic points under an additional assumption on the critical orbits of the maps. This enables us to obtain general equidistribution results for such points with respect to the equilibrium states under the same assumption.
In 1983, O.J. Boxma and the author published a research monograph on Boundary Value Problems in Queueing System Analysis. The present monograph describes the continuation of that research. The technique developed in the 1983 monograph has proved to be quite powerful in the construction of explicit expressions for the generating functions and/or Laplace-Stieltjes transforms of characteristic distributions needed in the performance evaluation of stochastic models stemming from computer system and telecommunication engineering. The book covers the following topics: One-Dimensional Random Walks; Two-Dimensional Random Walks; the Two-Dimensional Workload Process; and the N-Dimensional Random Walk.
This work illustrates research conducted over a ten-year timespan and addresses a fundamental issue in reliability theory. Reliability still appears to be an empirically disorganized field, and the book suggests employing a deductive base in order to develop reliability into a science. The study is in line with the fundamental work of Gnedenko. Boris Vladimirovich Gnedenko (1912-1995) was a Soviet mathematician who made significant contributions in various scientific areas. His name is especially associated with studies of dependability, for which he is often recognized as the 'father' of reliability theory. In the last few decades, this area has expanded in new directions such as safety, security and risk analysis, yet the book 'Mathematical Methods in Reliability Theory', written by Gnedenko with Alexander Soloviev and Yuri Belyaev, still towers as a pillar of the reliability sector's configuration and identity. The present book proceeds in the direction opened by the cultural project of the Russian authors; in particular, it identifies different trends in the hazard rate functions by means of deductive logic and demonstrations. Further, it arrives at multiple results by means of the entropy function, an original mathematical tool in the reliability domain. As such, it will greatly benefit all specialists in the field who are interested in unconventional solutions.
A zebrafish, the hull of a miniature ship, a mathematical equation and a food chain - what do these things have in common? They are examples of models used by scientists to isolate and study particular aspects of the world around us. This book begins by introducing the concept of a scientific model from an intuitive perspective, drawing parallels to mental models and artistic representations. It then recounts the history of modelling from the 16th century up until the present day. The iterative process of model building is described and discussed in the context of complex models with high predictive accuracy versus simpler models that provide more of a conceptual understanding. To illustrate the diversity of opinions within the scientific community, the book also presents the results of an interview study in which ten scientists from different disciplines describe their views on modelling and how models feature in their work. Lastly, it includes a number of worked examples that span different modelling approaches and techniques. The book provides a comprehensive introduction to scientific models, shows how models are constructed and used in modern science, and addresses the approach to, and the culture surrounding, modelling in different scientific disciplines. It serves as an inspiration for model building and facilitates interdisciplinary collaborations by showing how models are used in different scientific fields. The book is aimed primarily at students in the sciences and engineering, as well as students at teacher training colleges, but will also appeal to interested readers wanting to get an overview of scientific modelling in general and different modelling approaches in particular.
This book is a systematic summary of some new advances in the area of nonlinear analysis and design in the frequency domain, focusing on application-oriented theory and methods based on the GFRF concept, developed mainly by the author over the past eight years. The main results are formulated uniformly with a parametric characteristic approach, which provides a convenient and novel insight into nonlinear influence on system output response in terms of characteristic parameters and thus facilitates nonlinear analysis and design in the frequency domain. The book starts with a brief introduction to the background of nonlinear analysis in the frequency domain, followed by recursive algorithms for the computation of GFRFs for different parametric models, and nonlinear output frequency properties. Thereafter the parametric characteristic analysis method is introduced, which leads to a new understanding and formulation of the GFRFs, the nonlinear characteristic output spectrum (nCOS), and the nCOS-based analysis and design method. Based on the parametric characteristic approach, nonlinear influence in the frequency domain can be investigated with a novel insight, i.e., alternating series, which is followed by some application results in vibration control. Magnitude bounds of frequency response functions of nonlinear systems can also be studied with a parametric characteristic approach, resulting in novel parametric convergence criteria for any given parametric nonlinear model whose input-output relationship allows a convergent Volterra series expansion. This book targets readers working in areas related to nonlinear analysis and design, nonlinear signal processing, nonlinear system identification, nonlinear vibration control, and so on. It particularly serves as a good reference for those who are studying frequency domain methods for nonlinear systems.
This book fills an important gap in studies on D. D. Kosambi. For the first time, the mathematical work of Kosambi is described, collected and presented in a manner accessible to non-mathematicians as well. A number of his papers in these areas that are difficult to obtain are made available here. In addition, there are essays by Kosambi that have not been published earlier, as well as some of his lesser-known works. Each of the twenty-four papers is prefaced by a commentary on the significance of the work and, where possible, extracts from technical reviews by other mathematicians.
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike.
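As a generic illustration of the row-blocking idea behind many parallel dense matrix kernels (a sketch of the general technique, not an algorithm from the book), a matrix-vector product can be partitioned into contiguous blocks of rows and the blocks computed concurrently. Note that in CPython the GIL limits actual speed-up for pure-Python arithmetic, so this example shows only the decomposition, not realistic performance.

```python
# Row-blocked parallel matrix-vector product y = A x.
from concurrent.futures import ThreadPoolExecutor

def matvec_block(A, x, rows):
    """Compute the entries of A @ x for the given slice of row indices."""
    return [sum(a * b for a, b in zip(A[i], x)) for i in rows]

def parallel_matvec(A, x, num_workers=4):
    n = len(A)
    # Split the row indices into num_workers contiguous blocks.
    blocks = [range(k * n // num_workers, (k + 1) * n // num_workers)
              for k in range(num_workers)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        parts = pool.map(lambda rows: matvec_block(A, x, rows), blocks)
    # Concatenate the per-block results in row order.
    return [entry for part in parts for entry in part]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
y = parallel_matvec(A, x)   # same result as a serial product
```

In production codes the same decomposition is applied with compiled kernels (BLAS) per block, where the concurrency does pay off.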
The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.
By focusing on the most widely used variational methods, this monograph aspires to give a unified description and comparison of various ways of constructing conserved quantities for perturbations and to study symmetries in general relativity and modified theories of gravity. The main emphasis lies on the field-theoretical covariant formulation of perturbations, the canonical Noether approach and the Belinfante procedure of symmetrisation. The general formalism is applied to build the gauge-invariant cosmological perturbation theory, conserved currents and superpotentials to describe physically important solutions of gravity theories. Meticulous attention is given to the construction of conserved quantities in asymptotically flat spacetimes as well as in asymptotically constant-curvature spacetimes such as Anti-de Sitter space. A significant part of the book can be used in graduate courses on conservation laws in general relativity. THE SERIES: DE GRUYTER STUDIES IN MATHEMATICAL PHYSICS. The series is devoted to the publication of monographs and high-level texts in mathematical physics. They cover topics and methods in fields of current interest, with an emphasis on didactical presentation. The series will enable readers to understand, apply, and develop further, with sufficient rigor, mathematical methods to given problems in physics. The works in this series are aimed at advanced students and researchers in mathematical and theoretical physics. They can also serve as secondary reading for lectures and seminars at advanced levels.
This book provides a broad yet detailed introduction to neural networks and machine learning in a statistical framework. A single, comprehensive resource for study and further research, it explores the major popular neural network models and statistical learning approaches with examples and exercises and allows readers to gain a practical working understanding of the content. This updated new edition presents recently published results and includes six new chapters that correspond to the recent advances in computational learning theory, sparse coding, deep learning, big data and cloud computing. Each chapter features state-of-the-art descriptions and significant research findings. The topics covered include: the multilayer perceptron; the Hopfield network; associative memory models; clustering models and algorithms; the radial basis function network; recurrent neural networks; nonnegative matrix factorization; independent component analysis; probabilistic and Bayesian networks; and fuzzy sets and logic. Focusing on the prominent accomplishments and their practical aspects, this book provides academic and technical staff, as well as graduate students and researchers, with a solid foundation and comprehensive reference on the fields of neural networks, pattern recognition, signal processing, and machine learning.
This book discusses recent developments in semigroup theory and its applications in areas such as operator algebras, operator approximations and category theory. All contributing authors are eminent researchers in their respective fields, from across the world. Their papers, presented at the 2014 International Conference on Semigroups, Algebras and Operator Theory in Cochin, India, focus on recent developments in semigroup theory and operator algebras. They highlight current research activities on the structure theory of semigroups as well as the role of semigroup theoretic approaches to other areas such as rings and algebras. The deliberations and discussions at the conference point to future research directions in these areas. This book presents 16 unpublished, high-quality and peer-reviewed research papers on areas such as structure theory of semigroups, decidability vs. undecidability of word problems, regular von Neumann algebras, operator theory and operator approximations. Interested researchers will find several avenues for exploring the connections between semigroup theory and the theory of operator algebras.
This thesis demonstrates the first use of high-speed ultrasound imaging to non-invasively probe how the interior of a dense suspension responds to impact. Suspensions of small solid particles in a simple liquid can generate a rich set of dynamic phenomena that are of fundamental scientific interest because they do not conform to the typical behavior expected of either solids or liquids. Most remarkable is the highly counter-intuitive ability of concentrated suspensions to strongly thicken and even solidify when sheared or impacted. The understanding of the mechanism driving this solidification is, however, still limited, especially for the important transient stage while the response develops as a function of time. In this thesis, high-speed ultrasound imaging is introduced to track, for the first time, the transition from the flowing to the solidified state and directly observe the shock-like shear fronts that accompany this transition. A model is developed that agrees quantitatively with the experimental measurements. The combination of imaging techniques, experimental design, and modeling in this thesis represents a major breakthrough for the understanding of the dynamic response of dense suspensions, with important implications for a wide range of applications ranging from the handling of slurries to additive manufacturing.
The book presents research that contributes to the development of intelligent dialog systems to simplify diverse aspects of everyday life, such as medical diagnosis and entertainment. Covering major thematic areas: machine learning and artificial neural networks; algorithms and models; and social and biometric data for applications in human-computer interfaces, it discusses processing of audio-visual signals for the detection of user-perceived states, the latest scientific discoveries in processing verbal (lexicon, syntax, and pragmatics), auditory (voice, intonation, vocal expressions) and visual signals (gestures, body language, facial expressions), as well as algorithms for detecting communication disorders, remote health-status monitoring, sentiment and affect analysis, social behaviors and engagement. Further, it examines neural and machine learning algorithms for the implementation of advanced telecommunication systems, communication with people with special needs, emotion modulation by computer contents, advanced sensors for tracking changes in real-life and automatic systems, as well as the development of advanced human-computer interfaces. The book does not focus on solving a particular problem, but instead describes the results of research that has positive effects in different fields and applications.
This thesis discusses the random Euclidean bipartite matching problem, i.e., the matching problem between two different sets of points randomly generated on a Euclidean domain. The presence of both randomness and Euclidean constraints makes the study of the average properties of the solution highly relevant. The thesis reviews a number of known results about both matching problems and Euclidean matching problems. It then goes on to provide a complete and general solution for the one-dimensional problem in the case of convex cost functionals and, moreover, discusses a potential approach to the average optimal matching cost and its finite size corrections in the quadratic case. The correlation functions of the optimal matching map in the thermodynamic limit are also analyzed. Lastly, using a functional approach, the thesis puts forward a general recipe for the computation of the correlation function of the optimal matching in any dimension and in a generic domain.
This collection of papers offers a broad synopsis of state-of-the-art mathematical methods used in modeling the interaction between tumors and the immune system. These papers were presented at the four-day workshop on Mathematical Models of Tumor-Immune System Dynamics held in Sydney, Australia from January 7th to January 10th, 2013. The workshop brought together applied mathematicians, biologists, and clinicians actively working in the field of cancer immunology to share their current research and to increase awareness of the innovative mathematical tools that are applicable to the growing field of cancer immunology. Recent progress in cancer immunology and advances in immunotherapy suggest that the immune system plays a fundamental role in host defense against tumors and could be utilized to prevent or cure cancer. Although theoretical and experimental studies of tumor-immune system dynamics have a long history, there are still many unanswered questions about the mechanisms that govern the interaction between the immune system and a growing tumor. The multidimensional nature of these complex interactions requires a cross-disciplinary approach to capture more realistic dynamics of the essential biology. The papers presented in this volume explore these issues and the results will be of interest to graduate students and researchers in a variety of fields within mathematical and biological sciences.
This book offers a detailed investigation of breakdowns in traffic and transportation networks. It shows empirically that transitions from free flow to so-called synchronized flow, initiated by local disturbances at network bottlenecks, display a nucleation-type behavior: while small disturbances in free flow decay, larger ones grow further and lead to breakdowns at the bottlenecks. Further, it discusses in detail the significance of this nucleation effect for traffic and transportation theories, and the consequences this has for future automatic driving, traffic control, dynamic traffic assignment, and optimization in traffic and transportation networks. Starting from a large volume of field traffic data collected from various sources obtained solely through measurements in real world traffic, the author develops his insights, with an emphasis less on reviewing existing methodologies, models and theories, and more on providing a detailed analysis of empirical traffic data and drawing consequences regarding the minimum requirements for any traffic and transportation theories to be valid. The book
- proves the empirical nucleation nature of traffic breakdown in networks,
- discusses the origin of the failure of classical traffic and transportation theories,
- shows that the three-phase theory is incommensurable with the classical traffic theories, and
- explains why current state-of-the-art dynamic traffic assignments tend to provoke heavy traffic congestion,
making it a valuable reference resource for a wide audience of scientists and postgraduate students interested in the fundamental understanding of empirical traffic phenomena and related data-driven phenomenology, as well as for practitioners working in the fields of traffic and transportation engineering.
In this text, a theory for general linear parabolic partial differential equations is established which covers equations with inhomogeneous symbol structure as well as mixed-order systems. Typical applications include several variants of the Stokes system and free boundary value problems. We show well-posedness in Lp-Lq Sobolev spaces in time and space for the linear problems (i.e., maximal regularity), which is the key step for the treatment of nonlinear problems. The theory is based on the concept of the Newton polygon and can cover equations which are not accessible by standard methods such as semigroup theory. Results are obtained in different types of non-integer Lp Sobolev spaces such as Besov spaces, Bessel potential spaces, and Triebel-Lizorkin spaces. The last-mentioned class appears in a natural way as traces of Lp-Lq Sobolev spaces. We also present a selection of applications in the whole space and on half-spaces. Among others, we prove well-posedness of the linearizations of the generalized thermoelastic plate equation, the two-phase Navier-Stokes equations with Boussinesq-Scriven surface, and the Lp-Lq two-phase Stefan problem with Gibbs-Thomson correction.
Presented in this book is a class of deterministic models describing the dynamics of two plant species whose characteristics are common to the majority of annual plants that have a seedbank. Formulated in terms of elementary dynamical systems, these models were developed in response to four major questions on the long-term outcomes of binary mixtures of plant species: Is ultimate coexistence possible? If not, which strain will win? Does the mixture approach an equilibrium? If so, how long does the mixture take to attain it? The book gives a detailed account of model construction, analysis and application to field data obtained from long-term trials. In the particular case study modelled, the species involved are two pasture strains whose dynamics have critical agricultural and economic implications for the areas in which they are found, including North America, the Mediterranean region and Australia. This study will be valuable to researchers and students in mathematical biology and to agronomists and botanists interested in population dynamics.
This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and industry.
The papers in this volume start with a description of the construction of reduced models through a review of Proper Orthogonal Decomposition (POD) and reduced basis models, including their mathematical foundations and some challenging applications, followed by a description of a new generation of simulation strategies based on the use of separated representations (space-parameters, space-time, space-time-parameters, space-space, ...), which have led to what is known as Proper Generalized Decomposition (PGD) techniques. The models can be enriched by treating parameters as additional coordinates, leading to fast and inexpensive online calculations based on richer offline parametric solutions. Separated representations are analyzed in detail in the course, from their mathematical foundations to their most spectacular applications. It is also shown how such an approximation could evolve into a new paradigm in computational science, enabling one to circumvent various computational issues in a vast array of applications in engineering science.
This important textbook provides an introduction to the concepts of the newly developed extended finite element method (XFEM) for fracture analysis of structures, as well as for other related engineering applications.
The series is designed to bring together those mathematicians who are seriously interested in getting new challenging stimuli from economic theories with those economists who are seeking effective mathematical tools for their research. Many economic problems can be formulated as constrained optimization or equilibrium problems. Various mathematical theories have been supplying economists with indispensable machinery for the problems arising in economic theory. Conversely, mathematicians have been stimulated by various mathematical difficulties raised by economic theories.
A fresh approach to bridging research design with statistical analysis. While good social science requires both research design and statistical analysis, most books treat these two areas separately. "Understanding and Applying Research Design" introduces an accessible approach to integrating design and statistics, focusing on the processes of posing, testing, and interpreting research questions in the social sciences. The authors analyze real-world data using SPSS software, guiding readers through the overall process of science and focusing on the premises, procedures, and designs of social scientific research. Three clearly organized sections move seamlessly from theoretical topics to statistical techniques at the heart of research procedures, and finally, to practical application of research design:
- Premises of Research introduces the research process and the capabilities of SPSS, with coverage of ethics, Empirical Generalization, and Chi-Square and Contingency Table Analysis
- Procedures of Research explores key quantitative methods in research design, including measurement, correlation, regression, and causation
- Designs of Research outlines various design frameworks, with discussion of survey research, aggregate research, and experiments
Throughout the book, SPSS software is used to showcase the discussed techniques, and detailed appendices provide guidance on key statistical procedures and tips for data management. Numerous exercises allow readers to test their comprehension of the presented material, and a related website features additional data sets and SPSS code. "Understanding and Applying Research Design" is an excellent book for social science and education courses on research methods at the upper-undergraduate level. It is also an insightful reference for professionals who would like to learn how to pose, test, and interpret research questions with confidence.
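As a minimal illustration of the chi-square contingency-table analysis mentioned above (shown here in Python rather than the SPSS used by the book; the survey counts are invented for the example), the Pearson statistic compares observed cell counts against the counts expected under independence:

```python
# Pearson chi-square statistic for a 2-D contingency table of counts.

def chi_square_statistic(table):
    """Sum over cells of (observed - expected)^2 / expected, where the
    expected count is (row total * column total) / grand total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical survey: rows = two respondent groups, columns = agree/disagree.
table = [[30, 10], [20, 40]]
stat = chi_square_statistic(table)
```

The resulting statistic is then compared against a chi-square critical value with (rows - 1) * (columns - 1) degrees of freedom to decide whether the two variables are associated.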