This book addresses the mathematical aspects of modern image processing methods, with a special emphasis on the underlying ideas and concepts. It discusses a range of modern mathematical methods used to accomplish basic imaging tasks such as denoising, deblurring, enhancing, edge detection and inpainting. In addition to elementary methods like point operations, linear and morphological methods, and methods based on multiscale representations, the book also covers more recent methods based on partial differential equations and variational methods. Review of the German Edition: The overwhelming impression of the book is that of a very professional presentation of an appropriately developed and motivated textbook for a course like an introduction to fundamentals and modern theory of mathematical image processing. Additionally, it belongs to the bookcase of any office where someone is doing research/application in image processing. It has the virtues of a good and handy reference manual. (zbMATH, reviewer: Carl H. Rohwer, Stellenbosch)
This thesis describes the stand-alone discovery and measurement of the Higgs boson in its decays to two W bosons using the Run-I ATLAS dataset. This is the most precise measurement of gluon-fusion Higgs boson production and is among the most significant results attained at the LHC. The thesis provides an exceptionally clear exposition on a complicated analysis performed by a large team of researchers. Aspects of the analysis performed by the author are explained in detail; these include new methods for evaluating uncertainties on the jet binning used in the analysis and for estimating the background due to associated production of a W boson and an off-shell photon. The thesis also describes a measurement of the WW cross section, an essential background to Higgs boson production. The primary motivation of the LHC was to prove or disprove the existence of the Higgs boson. In 2012, CERN announced this discovery, and the resultant ATLAS publication contained three decay channels: γγ, ZZ, and WW.
This book focuses on the synthesis of lower-mobility parallel manipulators, presenting a group-theory-based method that has the advantage of being geometrically intrinsic. Rotations and translations of a rigid body as well as a combination of the two can be expressed and handled elegantly using the group algebraic structure of the set of rigid-body displacements. The book gathers the authors' research results, which were previously scattered in various journals and conference proceedings, presenting them in a unified form. Using the presented method, it reveals numerous novel architectures of lower-mobility parallel manipulators, which are of interest to those in the robotics community. More importantly, readers can use the method and tool to develop new types of lower-mobility parallel manipulators independently.
This book fills an important gap in studies on D. D. Kosambi. For the first time, the mathematical work of Kosambi is described, collected and presented in a manner that is accessible to non-mathematicians as well. A number of his papers in these areas that are difficult to obtain are made available here. In addition, there are essays by Kosambi that have not been published earlier, as well as some of his lesser-known works. Each of the twenty-four papers is prefaced by a commentary on the significance of the work and, where possible, extracts from technical reviews by other mathematicians.
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike.
The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.
This thesis discusses the random Euclidean bipartite matching problem, i.e., the matching problem between two different sets of points randomly generated on a Euclidean domain. The presence of both randomness and Euclidean constraints makes the study of the average properties of the solution highly relevant. The thesis reviews a number of known results about both matching problems and Euclidean matching problems. It then goes on to provide a complete and general solution for the one-dimensional problem in the case of convex cost functionals and, moreover, discusses a potential approach to the average optimal matching cost and its finite-size corrections in the quadratic case. The correlation functions of the optimal matching map in the thermodynamic limit are also analyzed. Lastly, using a functional approach, the thesis puts forward a general recipe for the computation of the correlation function of the optimal matching in any dimension and in a generic domain.
This book offers a detailed investigation of breakdowns in traffic and transportation networks. It shows empirically that transitions from free flow to so-called synchronized flow, initiated by local disturbances at network bottlenecks, display a nucleation-type behavior: while small disturbances in free flow decay, larger ones grow further and lead to breakdowns at the bottlenecks. Further, it discusses in detail the significance of this nucleation effect for traffic and transportation theories, and the consequences this has for future automatic driving, traffic control, dynamic traffic assignment, and optimization in traffic and transportation networks. Starting from a large volume of field traffic data collected from various sources and obtained solely through measurements in real-world traffic, the author develops his insights, with an emphasis less on reviewing existing methodologies, models and theories, and more on providing a detailed analysis of empirical traffic data and drawing consequences regarding the minimum requirements for any traffic and transportation theory to be valid. The book:
- proves the empirical nucleation nature of traffic breakdown in networks,
- discusses the origin of the failure of classical traffic and transportation theories,
- shows that the three-phase theory is incommensurable with the classical traffic theories, and
- explains why current state-of-the-art dynamic traffic assignments tend to provoke heavy traffic congestion.
These features make it a valuable reference resource for a wide audience of scientists and postgraduate students interested in the fundamental understanding of empirical traffic phenomena and related data-driven phenomenology, as well as for practitioners working in the fields of traffic and transportation engineering.
In this text, a theory for general linear parabolic partial differential equations is established which covers equations with inhomogeneous symbol structure as well as mixed-order systems. Typical applications include several variants of the Stokes system and free boundary value problems. We show well-posedness in Lp-Lq Sobolev spaces in time and space for the linear problems (i.e., maximal regularity), which is the key step for the treatment of nonlinear problems. The theory is based on the concept of the Newton polygon and can cover equations which are not accessible by standard methods such as, e.g., semigroup theory. Results are obtained in different types of non-integer Lp Sobolev spaces such as Besov spaces, Bessel potential spaces, and Triebel-Lizorkin spaces. The last-mentioned class appears in a natural way as traces of Lp-Lq Sobolev spaces. We also present a selection of applications in the whole space and on half-spaces. Among others, we prove well-posedness of the linearizations of the generalized thermoelastic plate equation, the two-phase Navier-Stokes equations with Boussinesq-Scriven surface, and the Lp-Lq two-phase Stefan problem with Gibbs-Thomson correction.
This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and industry.
By focusing on the most widely used variational methods, this monograph aims to give a unified description and comparison of various ways of constructing conserved quantities for perturbations and to study symmetries in general relativity and modified theories of gravity. The main emphasis lies on the field-theoretical covariant formulation of perturbations, the canonical Noether approach and the Belinfante procedure of symmetrisation. The general formalism is applied to build the gauge-invariant cosmological perturbation theory, and conserved currents and superpotentials to describe physically important solutions of gravity theories. Meticulous attention is given to the construction of conserved quantities in asymptotically-flat spacetimes as well as in asymptotically constant curvature spacetimes such as Anti-de Sitter space. A significant part of the book can be used in graduate courses on conservation laws in general relativity. THE SERIES: DE GRUYTER STUDIES IN MATHEMATICAL PHYSICS The series is devoted to the publication of monographs and high-level texts in mathematical physics. They cover topics and methods in fields of current interest, with an emphasis on didactical presentation. The series will enable readers to understand, apply, and further develop, with sufficient rigor, mathematical methods for given problems in physics. The works in this series are aimed at advanced students and researchers in mathematical and theoretical physics. They can also serve as secondary reading for lectures and seminars at advanced levels.
The series is designed to bring together those mathematicians who are seriously interested in getting new challenging stimuli from economic theories with those economists who are seeking effective mathematical tools for their research. Many economic problems can be formulated as constrained optimizations and equilibrations of their solutions. Various mathematical theories have been supplying economists with indispensable machinery for these problems arising in economic theory. Conversely, mathematicians have been stimulated by various mathematical difficulties raised by economic theories.
A fresh approach to bridging research design with statistical analysis. While good social science requires both research design and statistical analysis, most books treat these two areas separately. "Understanding and Applying Research Design" introduces an accessible approach to integrating design and statistics, focusing on the processes of posing, testing, and interpreting research questions in the social sciences. The authors analyze real-world data using SPSS software, guiding readers through the overall process of science and focusing on the premises, procedures, and designs of social scientific research. Three clearly organized sections move seamlessly from theoretical topics to statistical techniques at the heart of research procedures, and finally, to practical application of research design:
- Premises of Research introduces the research process and the capabilities of SPSS, with coverage of ethics, empirical generalization, and chi-square and contingency table analysis
- Procedures of Research explores key quantitative methods in research design, including measurement, correlation, regression, and causation
- Designs of Research outlines various design frameworks, with discussion of survey research, aggregate research, and experiments
Throughout the book, SPSS software is used to showcase the discussed techniques, and detailed appendices provide guidance on key statistical procedures and tips for data management. Numerous exercises allow readers to test their comprehension of the presented material, and a related website features additional data sets and SPSS code. "Understanding and Applying Research Design" is an excellent book for social sciences and education courses on research methods at the upper-undergraduate level. The book is also an insightful reference for professionals who would like to learn how to pose, test, and interpret research questions with confidence.
This edited volume addresses the vast challenges of adapting Online Social Media (OSM) to developing research methods and applications. The topics cover generating realistic social network topologies, awareness of user activities, topic and trend generation, estimation of user attributes from their social content, behavior detection, mining social content for common trends, identifying and ranking social content sources, building friend-comprehension tools, and many others. Each of the ten chapters tackles one or more of these issues by proposing new analysis methods, new visualization techniques, or both, for well-known OSM applications such as Twitter and Facebook. Online Social Media has become part of the daily lives of hundreds of millions of users, generating an immense amount of 'social content'. Addressing the challenges that stem from this wide adoption of OSM is what makes this book a valuable contribution to the field of social networks.
This thesis presents theoretical and numerical studies on the phenomenological description of the quark-gluon plasma (QGP), a many-body system of elementary particles. The author formulates a causal theory of hydrodynamics for systems with net charges from the law of increasing entropy and a momentum expansion method. The derived equations can be applied not only to collider physics, but also to the early universe and ultra-cold atoms. The author also develops novel off-equilibrium hydrodynamic models for the longitudinal expansion of the QGP on the basis of these equations. Numerical estimations show that convection and entropy production during the hydrodynamic evolution are key to explaining the excess charged-particle production recently observed at the Large Hadron Collider. Furthermore, the analyses at finite baryon density indicate that the energy available for QGP production is larger than the amount conventionally assumed.
This thesis provides a detailed and comprehensive description of the search for New Physics at the Large Hadron Collider (LHC) in the mono-jet final state, using the first 3.2 fb⁻¹ of data collected at a proton-proton centre-of-mass energy of 13 TeV with the ATLAS experiment. The results are interpreted as limits in different theoretical contexts, such as compressed supersymmetric models, theories that foresee extra spatial dimensions, and dark matter scenarios. In the latter, the limits are then compared with those obtained by other ATLAS analyses and by experiments based on completely different experimental techniques, highlighting the role of the mono-jet results in the context of dark matter searches. Lastly, a set of possible analysis improvements is proposed to reduce the main uncertainties that affect the signal region and to increase the discovery potential by further exploiting the information on the final state.
The book is primarily intended as a textbook on modern algebra for undergraduate mathematics students. It is also useful for those who are interested in supplementary reading at a higher level. The text is designed in such a way that it encourages independent thinking and motivates students towards further study. The book covers all major topics in group, ring, vector space and module theory that are usually contained in a standard modern algebra text. In addition, it studies semigroups, group actions, Hopf's groups, topological groups and Lie groups with their actions, and applications of ring theory to algebraic geometry, defining the Zariski topology, as well as applications of module theory to the structure theory of rings and homological algebra. Algebraic aspects of classical number theory and algebraic number theory are also discussed with an eye to developing modern cryptography. Topics on applications to algebraic topology, category theory, algebraic geometry, algebraic number theory, cryptography and theoretical computer science interlink the subject with different areas. Each chapter discusses individual topics, starting from the basics, with the help of illustrative examples. This comprehensive text with a broad variety of concepts, applications, examples, exercises and historical notes represents a valuable and unique resource.
This book explores precisely how mathematics allows us to model and predict the behaviour of physical systems, to an amazing degree of accuracy. One of the oldest explanations for this is that, in some profound way, the structure of the world is mathematical. The ancient Pythagoreans stated that "everything is number". However, while exploring the Pythagorean method, this book chooses to add a second principle of the universe: the mind. This work defends the proposition that mind and mathematical structure are the grounds of reality.
This work focuses on a new electromagnetic decay mode in nuclear physics. The first part of the thesis presents the observation of two-photon decay for a transition where the one-photon decay is allowed. In the second part, so-called quadrupole mixed symmetry is investigated in inelastic proton scattering experiments. In 1930, Nobel laureate M. Goeppert-Mayer was the first to discuss the two-photon decay of an excited state, in her doctoral thesis. This process has been observed many times in atomic physics; in nuclear physics, however, data is sparse. Here this decay mode had previously been observed only for the special case of a transition between nuclear states with spin and parity quantum numbers 0+. For such a transition, the one-photon decay, the main experimental obstacle to observing the two-photon decay, is forbidden. Furthermore, the energy sharing and angular distributions were measured, allowing conclusions to be drawn about the multipoles contributing to the two-photon transition. Quadrupole mixed-symmetry states are an excitation mode in spherical nuclei that is sensitive to the strength of the quadrupole residual interaction. A new signature for these interesting states is presented which allows the identification of mixed-symmetry states independently of electromagnetic transition strengths. Furthermore, this signature represents a valuable additional observable for testing model predictions for mixed-symmetry states.
This book employs computer simulations of 'artificial' Universes to investigate the properties of two popular alternatives to the standard candidates for dark matter (DM) and dark energy (DE). It confronts the predictions of theoretical models with observations using a sophisticated semi-analytic model of galaxy formation. Understanding the nature of DM and DE is one of the most central problems in modern cosmology. While their important role in the evolution of the Universe has been well established (namely, that DM serves as the building blocks of galaxies, and that DE accelerates the expansion of the Universe), their true nature remains elusive. In the first half, the authors consider 'sterile neutrino' DM, motivated by recent claims that these particles may have finally been detected. Using sophisticated models of galaxy formation, the authors find that future observations of the high-redshift Universe and faint dwarf galaxies in the Local Group can place strong constraints on the sterile neutrino scenario. In the second half, the authors propose and test novel numerical algorithms for simulating Universes with a 'modified' theory of gravity, as an alternative explanation for accelerated expansion. The authors' techniques improve the efficiency of these simulations by more than a factor of 20 compared to previous methods, inviting the readers into a new era for precision cosmological tests of gravity.
This textbook provides a detailed introduction to the use of software in combination with simple and economical hardware (a sound level meter with calibrated AC output and a digital recording system) to obtain sophisticated measurements usually requiring expensive equipment. It emphasizes the use of free, open-source, and multiplatform software. Many commercial acoustical measurement systems use software algorithms as an integral component; however, the methods are not disclosed. This book enables the reader to develop useful algorithms and provides insight into the use of digital audio editing tools to document features in the signal. Topics covered include acoustical measurement principles, an in-depth critical study of uncertainty applied to acoustical measurements, digital signal processing from the basics, and metrologically-oriented spectral and statistical analysis of signals. The student will gain a deep understanding of the use of software for measurement purposes; the ability to implement software-based measurement systems; familiarity with the hardware necessary to acquire and store signals; an appreciation for the key issue of long-term preservation of signals; and a full grasp of the often neglected issue of uncertainty in acoustical measurements. Pedagogical features include in-text worked-out examples, end-of-chapter problems, a glossary of metrology terms, and extensive appendices covering statistics, proofs, additional examples, file formats, and underlying theory.
Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility to improve previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications. This book is comprehensive, well written, and will surely be regarded as a standard text in both astrostatistics and physical statistics. Joseph M. Hilbe, President, International Astrostatistics Association, Professor Emeritus, University of Hawaii, and Adjunct Professor of Statistics, Arizona State University
This book - specifically developed as a novel textbook on elementary classical mechanics - shows how analytical and numerical methods can be seamlessly integrated to solve physics problems. This approach allows students to solve more advanced and applied problems at an earlier stage and equips them to deal with real-world examples well beyond the typical special cases treated in standard textbooks. Another advantage of this approach is that students are brought closer to the way physics is actually discovered and applied, as they are introduced right from the start to a more exploratory way of understanding phenomena and of developing their physical concepts. While not a requirement, it is advantageous for the reader to have some prior knowledge of scientific programming with a scripting-type language. This edition of the book uses Matlab, and a chapter devoted to the basics of scientific programming with Matlab is included. A parallel edition using Python instead of Matlab is also available. Last but not least, each chapter is accompanied by an extensive set of course-tested exercises and solutions.
Strong pulsed magnetic fields are important in several fields of physics and engineering, such as power generation and accelerator facilities. The volume covers the basic aspects of techniques for generating strong and superstrong pulsed magnetic fields, including the physics and hydrodynamics of the conductors interacting with the field, as well as an account of the significant progress in the generation of strong magnetic fields using the magnetic accumulation technique. Results of computer simulations and a survey of available field technology complete the volume.
The present book includes a set of selected papers from the tenth "International Conference on Informatics in Control, Automation and Robotics" (ICINCO 2013), held in Reykjavik, Iceland, from 29 to 31 July 2013. The conference was organized in four simultaneous tracks: "Intelligent Control Systems and Optimization", "Robotics and Automation", "Signal Processing, Sensors, Systems Modeling and Control" and "Industrial Engineering, Production and Management". The book is based on the same structure. ICINCO 2013 received 255 paper submissions from 50 countries across all continents. After a double-blind review performed by the Program Committee, only 30% were accepted for publication and oral presentation. A further refinement was made after the conference, based also on the assessment of presentation quality, so that this book includes the extended and revised versions of the very best papers of ICINCO 2013.
This monograph presents the latest findings from a long-term research project intended to identify the physics behind Quantum Mechanics. A fundamental theory for quantum mechanics is constructed from first physical principles, revealing quantization as an emergent phenomenon arising from a deeper stochastic process. As such, it offers the vibrant community working on the foundations of quantum mechanics an alternative contribution open to discussion. The book starts with a critical summary of the main conceptual problems that still beset quantum mechanics. The basic consideration is then introduced that any material system is an open system in permanent contact with the random zero-point radiation field, with which it may reach a state of equilibrium. Working from this basis, a comprehensive and self-consistent theoretical framework is then developed. The pillars of the quantum-mechanical formalism are derived, as well as the radiative corrections of nonrelativistic QED, while revealing the underlying physical mechanisms. The genesis of some of the central features of quantum theory is elucidated, such as atomic stability, the spin of the electron, quantum fluctuations, quantum nonlocality and entanglement. The theory developed here reaffirms fundamental scientific principles such as realism, causality, locality and objectivity.