The purpose of this book is to thoroughly prepare the reader for research in string theory at an intermediate level. As such, it is not a compendium of results but is intended as a textbook, in the sense that most of the material is organized in a pedagogical and self-contained fashion.
This book looks at the increasing interest in running microscopy processing algorithms on big image data by presenting the theoretical and architectural underpinnings of a web image processing pipeline (WIPP). Software-based methods and infrastructure components for processing big data microscopy experiments are presented to demonstrate how the information processing of repetitive, laborious, and tedious analyses can be automated with a user-friendly system. Interactions of web system components and their impact on computational scalability, provenance information gathering, interactive display, and computing are explained in a top-down presentation of technical details. Web Microanalysis of Big Image Data includes descriptions of WIPP functionalities, use cases, and components of the web software system (web server and client architecture, algorithms, and hardware-software dependencies). The book comes with test image collections and a web software system to increase the reader's understanding and to provide practical tools for conducting big image experiments. Graduate students, postdoctoral researchers, and scientists will benefit from the practical experiences, as well as theoretical insights, offered by these educational materials and software tools at the intersection of microscopy image analysis and computational science. Furthermore, the book provides software and test data, empowering students and scientists with tools to make discoveries with higher statistical significance. Once they become familiar with the web image processing components, they can extend and re-purpose the existing software for new types of analyses. Each chapter follows a top-down presentation, starting with a short introduction and a classification of related methods. Next, a description of the specific method used in the accompanying software is presented. For several topics, examples of how the specific method is applied to a dataset (parameters, RAM requirements, CPU efficiency) are shown. Some tips are provided as practical suggestions to improve accuracy or computational performance.
The celebrated Parisi solution of the Sherrington-Kirkpatrick model for spin glasses is one of the most important achievements in the field of disordered systems. Over the last three decades, through the efforts of theoretical physicists and mathematicians, the essential aspects of the Parisi solution were clarified and proved mathematically. The core ideas of the theory that emerged are the subject of this book, including the recent solution of the Parisi ultrametricity conjecture and a conceptually simple proof of the Parisi formula for the free energy. The treatment is self-contained and should be accessible to graduate students with a background in probability theory, with no prior knowledge of spin glasses. The methods involved in the analysis of the Sherrington-Kirkpatrick model also serve as a good illustration of such classical topics in probability as the Gaussian interpolation and concentration of measure, Poisson processes, and representation results for exchangeable arrays.
The first volume of the proceedings of the 7th conference on "Finite Volumes for Complex Applications" (Berlin, June 2014) covers topics that include convergence and stability analysis, as well as investigations of these methods from the point of view of compatibility with physical principles. It collects the focused invited papers, as well as the reviewed contributions from internationally leading researchers in the field of analysis of finite volume and related methods. Altogether, a rather comprehensive overview is given of the state of the art in the field. The finite volume method in its various forms is a space discretization technique for partial differential equations based on the fundamental physical principle of conservation. Recent decades have brought significant success in the theoretical understanding of the method. Many finite volume methods preserve further qualitative or asymptotic properties, including maximum principles, dissipativity, monotone decay of free energy, and asymptotic stability. Due to these properties, finite volume methods belong to the wider class of compatible discretization methods, which preserve qualitative properties of continuous problems at the discrete level. This structural approach to the discretization of partial differential equations becomes particularly important for multiphysics and multiscale applications. Researchers, PhD and master's level students in numerical analysis, scientific computing, and related fields such as partial differential equations will find this volume useful, as will engineers working in numerical modeling and simulations.
The last decades have seen the emergence of Complex Networks as the language with which a wide range of complex phenomena in fields as diverse as Physics, Computer Science, and Medicine (to name just a few) can be properly described and understood. This book provides a view of the state of the art in this dynamic field and covers topics including network controllability, social structure, online behavior, recommendation systems, and network structure. This book includes the peer-reviewed works presented at the 7th Workshop on Complex Networks, CompleNet 2016, which was hosted by the Universite de Bourgogne, France, on March 23-25, 2016. The 28 carefully reviewed and selected contributions in this book address many topics related to complex networks and have been organized in seven major groups: (1) Theory of Complex Networks, (2) Multilayer networks, (3) Controllability of networks, (4) Algorithms for networks, (5) Community detection, (6) Dynamics and spreading phenomena on networks, (7) Applications of Networks.
The factorization method is a relatively new method for solving certain types of inverse scattering problems and problems in tomography. Aimed at students and researchers in Applied Mathematics, Physics and Engineering, this text introduces the reader to this promising approach for solving important classes of inverse problems. The wide applicability of this method is discussed by choosing typical examples, such as inverse scattering problems for the scalar Helmholtz equation, a scattering problem for Maxwell's equations, and a problem in impedance and optical tomography. The last section of the book compares the Factorization Method to established sampling methods (the Linear Sampling Method, the Singular Source Method, and the Probe Method).
This book, authored by a well-known researcher and expositor in meteorology, focuses on the direct link between molecular dynamics and atmospheric variation. Uniting molecular dynamics, turbulence theory, fluid mechanics and non-equilibrium statistical mechanics, it is relevant to the fields of applied mathematics, physics and atmospheric sciences, and focuses on fluid flow and turbulence, as well as on temperature, radiative transfer and chemistry. With extensive references and a glossary, this is an ideal text for graduates and researchers in meteorology, applied mathematics and physical chemistry.
This book provides an introduction to vector optimization with variable ordering structures, i.e., to optimization problems with a vector-valued objective function where the elements in the objective space are compared based on a variable ordering structure: instead of a partial ordering defined by a convex cone, we see a whole family of convex cones, one attached to each element of the objective space. The book starts by presenting several applications that have recently sparked new interest in these optimization problems, and goes on to discuss fundamentals and important results on a wide range of topics. The theory developed includes various optimality notions, linear and nonlinear scalarization functionals, optimality conditions of Fermat and Lagrange type, existence and duality results. The book closes with a collection of numerical approaches for solving these problems in practice.
Tjonnie Li's thesis covers two applications of Gravitational Wave astronomy: tests of General Relativity in the strong-field regime and cosmological measurements. The first part of the thesis focuses on the so-called TIGER, i.e. Test Infrastructure for General Relativity, an innovative Bayesian framework for performing hypothesis tests of modified gravity using ground-based GW data. After developing the framework, Li simulates a variety of General Relativity deviations and demonstrates the ability of TIGER to measure them. The advantages of the method are nicely shown and compared to other, less generic methods. Given the extraordinary implications that would result from any measured deviation from General Relativity, it is extremely important that a rigorous statistical approach for supporting such results be in place before the first gravitational wave detections are made. In developing TIGER, Tjonnie Li shows a large amount of creativity and originality, and his contribution is an important step toward a possible discovery of a deviation (if any) from General Relativity. In another section, Li's thesis deals with cosmology, describing an exploratory study that evaluates the possibility of measuring cosmological parameters through gravitational wave compact binary coalescence signals associated with electromagnetic counterparts. In particular, the study explores the capabilities of the future Einstein Telescope observatory. Although applicable only in the very long term, this is again a thorough investigation, nicely placed in the context of current and future observational cosmology.
This book contains a collection of papers presented at the 2nd Tbilisi-Salerno Workshop on Mathematical Modeling in March 2015. The focus is on applications of mathematics in physics, electromagnetics, biochemistry and botany, and covers such topics as multimodal logic, fractional calculus, special functions, Fourier-like solutions for PDEs, Rvachev-functions and linear dynamical systems. Special chapters focus on recent uniform analytic descriptions of natural and abstract shapes using the Gielis Formula. The book is intended for a wide audience with interest in the application of mathematics to modeling in the natural sciences.
This monograph explains the theory of quantum waveguides, that is, dynamics of quantum particles confined to regions in the form of tubes, layers, networks, etc. The focus is on relations between the confinement geometry on the one hand and the spectral and scattering properties of the corresponding quantum Hamiltonians on the other. Perturbations of such operators, in particular, by external fields are also considered. The volume provides a unique summary of twenty-five years of research activity in this area and indicates ways in which the theory can develop further. The book is fairly self-contained. While it requires some broader mathematical physics background, all the basic concepts are properly explained and proofs of most theorems are given in detail, so there is no need for additional sources. Without a parallel in the literature, the monograph by Exner and Kovarik guides the reader through this new and exciting field.
This vital new resource offers engineers and researchers a window on important new technology that will supersede the barcode and is destined to change the face of logistics and product data handling. In the last two decades, radio-frequency identification has grown rapidly, with accelerated take-up of RFID into the mainstream through its adoption by key users such as Wal-Mart, K-Mart and the US Department of Defense. RFID has many potential applications due to its flexibility, capability to operate out of line of sight, and its high data-carrying capacity. Yet despite optimistic projections of a market worth $25 billion by 2018, potential users are concerned about costs and investment returns. Clearly demonstrating the need for a fully printable chipless RFID tag as well as a powerful and efficient reader to assimilate the tag's data, this book moves on to describe both. Introducing the general concepts in the field, including technical data, it then describes how a chipless RFID tag can be made using a planar disc-loaded monopole antenna and an asymmetrical coupled spiral multi-resonator. The tag encodes data via the "spectral signature" technique and is now in its third-generation version, with an ultra-wide band (UWB) reader operating between 5 and 10.7 GHz.
The main body of this book is devoted to statistical physics, whereas much less emphasis is given to thermodynamics. In particular, the idea is to present the most important outcomes of thermodynamics - most notably, the laws of thermodynamics - as conclusions from derivations in statistical physics. Special emphasis is on subjects that are vital to engineering education. These include, first of all, quantum statistics, like the Fermi-Dirac distribution, as well as diffusion processes, both of which are fundamental to a sound understanding of semiconductor devices. Another important issue for electrical engineering students is understanding of the mechanisms of noise generation and stochastic dynamics in physical systems, most notably in electric circuitry. Accordingly, the fluctuation-dissipation theorem of statistical mechanics, which is the theoretical basis for understanding thermal noise processes in systems, is presented from a signals-and-systems point of view, in a way that is readily accessible for engineering students and in relation with other courses in the electrical engineering curriculum, like courses on random processes.
The primary objective of this book is to study some of the research topics in the area of analysis of complex surveys which have not been covered in any book yet. It discusses the analysis of categorical data using three models: a full model, a log-linear model and a logistic regression model. It is a valuable resource for survey statisticians and practitioners in the fields of sociology, biology, economics, psychology and other areas who have to use these procedures in their day-to-day work. It is also useful for courses on sampling and complex surveys at the upper-undergraduate and graduate levels. The importance of sample surveys today cannot be overstated. From voters' behaviour to fields such as industry, agriculture, economics, sociology and psychology, investigators generally resort to survey sampling to obtain an assessment of the behaviour of the population they are interested in. Many large-scale sample surveys collect data using complex survey designs like multistage stratified cluster designs. The observations using these complex designs are not independently and identically distributed - an assumption on which the classical procedures of inference are based. This means that if classical tests are used for the analysis of such data, the inferences obtained will be inconsistent and often invalid. For this reason, many modified test procedures have been developed for this purpose over the last few decades.
Toward the late 1990s, several research groups independently began developing new, related theories in mathematical finance. These theories did away with the standard stochastic geometric diffusion "Samuelson" market model (also known as the Black-Scholes model because it is used in that most famous theory), instead opting for models that allowed minimax approaches to complement or replace stochastic methods. Among the most fruitful models were those utilizing game-theoretic tools and the so-called interval market model. Over time, these models have slowly but steadily gained influence in the financial community, providing a useful alternative to classical methods. A self-contained monograph, The Interval Market Model in Mathematical Finance: Game-Theoretic Methods assembles some of the most important results, old and new, in this area of research. Written by seven of the most prominent pioneers of the interval market model and game-theoretic finance, the work provides a detailed account of several closely related modeling techniques for an array of problems in mathematical economics. The book is divided into five parts, which successively address:
* probability-free Black-Scholes theory;
* the fair-price interval of an option;
* representation formulas and fast algorithms for option pricing;
* rainbow options;
* the tychastic approach to mathematical finance based upon viability theory.
This book provides a welcome addition to the literature, complementing myriad titles on the market that take a classical approach to mathematical finance. It is a worthwhile resource for researchers in applied mathematics and quantitative finance, and has also been written in a manner accessible to financially-inclined readers with a limited technical background.
This book covers essential Microsoft Excel(R) computational skills while analyzing introductory physics projects. Topics of numerical analysis include: multiple graphs on the same sheet; calculation of descriptive statistical parameters; 3-point interpolation; the Euler and Runge-Kutta methods for solving equations of motion; the Fourier transform for calculating the normal modes of a double pendulum; matrix calculations for solving the coupled linear equations of a DC circuit; animation of waves and Lissajous figures; electric and magnetic field calculations from the Poisson equation and their 3D surface graphs; and variational calculus, such as Fermat's principle of least travel time and the least action principle. Nelson's stochastic quantum dynamics is also introduced to draw quantum particle trajectories.
The book provides a detailed exposition of the calculus of variations on fibre bundles and graded manifolds. It presents applications in such areas as non-relativistic mechanics, gauge theory, gravitation theory and topological field theory, with emphasis on energy and energy-momentum conservation laws. Within this general context the first and second Noether theorems are treated in the very general setting of reducible degenerate graded Lagrangian theory.
Problems of Point Blast Theory covers all the main topics of modern theory with the exception of applications to nova and supernova outbursts. All the presently known theoretical results are given and problems which are still to be resolved are indicated. A special feature of the book is the sophisticated mathematical approach. Of interest to specialists and graduate students working in hydrodynamics, explosion theory, plasma physics, mathematical physics, and applied mathematics.
This book presents the latest research advances in complex network structure analytics based on computational intelligence (CI) approaches, particularly evolutionary optimization. Most, if not all, network issues are actually optimization problems, which are mostly NP-hard and challenge conventional optimization techniques. To effectively and efficiently solve these hard optimization problems, CI based network structure analytics offer significant advantages over conventional network analytics techniques. Meanwhile, using CI techniques may facilitate smart decision making by providing multiple options to choose from, while conventional methods can only offer a decision maker a single suggestion. In addition, CI based network structure analytics can greatly facilitate network modeling and analysis. Employing CI techniques to resolve network issues is also likely to inspire other fields of study such as recommender systems, systems biology, etc., which will in turn expand CI's scope and applications. As a comprehensive text, the book covers a range of key topics, including network community discovery, evolutionary optimization, network structure balance analytics, network robustness analytics, community-based personalized recommendation, influence maximization, and biological network alignment. Offering a rich blend of theory and practice, the book is suitable for students, researchers and practitioners interested in network analytics and computational intelligence, both as a textbook and as a reference work.
Starting from fundamentals of classical stability theory, an overview is given of the transition phenomena in subsonic, wall-bounded shear flows. At first, the consideration focuses on elementary small-amplitude velocity perturbations of laminar shear layers, i.e. instability waves, in the simplest canonical configurations of a plane channel flow and a flat-plate boundary layer. Then the linear stability problem is expanded to include the effects of pressure gradients, flow curvature, boundary-layer separation, wall compliance, etc. related to applications. Beyond the amplification of instability waves, the non-modal growth of local stationary and non-stationary shear-flow perturbations is discussed as well. The volume continues with a key aspect of the transition process, namely the receptivity of convectively unstable shear layers to external perturbations, summarizing the main paths of excitation of laminar flow disturbances. The remainder of the book addresses the instability phenomena found at late stages of transition. These include secondary instabilities and nonlinear features of boundary-layer perturbations that lead to the final breakdown to turbulence. Thus, the reader is provided with a step-by-step approach that covers the milestones and recent advances in laminar-turbulent transition. Special aspects of instability and transition discussed throughout the book are intended for research scientists, while the book's main target is students learning the fundamentals of fluid mechanics. Computational guides, recommended exercises, and PowerPoint multimedia notes based on results of real scientific experiments supplement the monograph. These are especially helpful for newcomers seeking a solid foundation in hydrodynamic stability. To access the supplementary material go to extras.springer.com and type in the ISBN for this volume.
This book provides a comprehensive overview of the theoretical concepts and experimental applications of planar waveguides and other confined geometries, such as optical fibres. Covering a broad array of advanced topics, it begins with a sophisticated discussion of planar waveguide theory, and covers subjects including efficient production of planar waveguides, materials selection, nonlinear effects, and applications including species analytics down to single-molecule identification, and thermo-optical switching using planar waveguides. Written by specialists in the techniques and applications covered, this book will be a useful resource for advanced graduate students and researchers studying planar waveguides and optical fibers.
Stochastic analysis has a variety of applications to biological systems as well as physical and engineering problems, and its applications to finance and insurance have bloomed exponentially in recent times. The goal of this book is to present a broad overview of the range of applications of stochastic analysis and some of its recent theoretical developments. This includes numerical simulation, error analysis, parameter estimation, as well as control and robustness properties for stochastic equations. The book also covers the areas of backward stochastic differential equations via the (non-linear) G-Brownian motion and the case of jump processes. Concerning the applications to finance, many of the articles deal with the valuation and hedging of credit risk in various forms, and include recent results on markets with transaction costs.
This volume presents the state-of-the-art in selected topics across modern nuclear physics, covering fields of central importance to research and illustrating their connection to many different areas of physics. It describes recent progress in the study of superheavy and exotic nuclei, which is pushing our knowledge to ever heavier elements and increasingly neutron-rich isotopes. Extending nuclear physics to systems that are many times denser than even the core of an atomic nucleus, one enters the realm of the physics of neutron stars and possibly quark stars, a topic that is intensively investigated with many ground-based and outer-space research missions as well as numerous theoretical works. By colliding two nuclei at ultra-relativistic energies one can create a fireball of extremely hot matter, reminiscent of the universe very shortly after the big bang, leading to a phase of melted hadrons and free quarks and gluons, the so-called quark-gluon plasma. These studies tie in with effects of crucial importance in other fields. During the collision of heavy ions, electric fields of extreme strength are produced, potentially destabilizing the vacuum of the atomic physics system, subsequently leading to the decay of the vacuum state and the emission of positrons. In neutron stars the ultra-dense matter might support extremely high magnetic fields, far beyond anything that can be produced in the laboratory, significantly affecting the stellar properties. At very high densities general relativity predicts the stellar collapse to a black hole. However, a number of current theoretical activities, modifying Einstein's theory, point to possible alternative scenarios, where this collapse might be avoided. These and related topics are addressed in this book in a series of highly readable chapters.
In addition, the book includes fundamental analyses of the practicalities involved in transitioning to an electricity supply mainly based on renewable energies, investigating this scenario less from an engineering and more from a physics point of view. While the topics comprise a large scope of activities, the contributions also show an extensive overlap in the methodology and in the analytical and numerical tools involved in tackling these diverse research fields that are at the forefront of modern science.
"Networks of Echoes: Imitation, Innovation and Invisible Leaders" is a mathematically rigorous and data-rich book on a fascinating area of the science and engineering of social webs. There are hundreds of complex network phenomena whose statistical properties are described by inverse power laws. The phenomena of interest are not arcane events that we encounter only fleetingly, but are events that dominate our lives. We examine how this intermittent statistical behavior intertwines itself with what appears to be the organized activity of social groups. The book is structured as answers to a sequence of questions such as: How are decisions reached in elections and boardrooms? How is the stability of a society undermined by zealots and committed minorities, and how is that stability re-established? Can we learn to answer such questions about human behavior by studying the way flocks of birds retain their formation when eluding a predator? These questions and others are answered using a generic model of a complex dynamic network, one whose global behavior is determined by a symmetric interaction among individuals based on social imitation. The complexity of the network is manifest in time series resulting from self-organized critical dynamics that have divergent first and second moments, and are non-stationary, non-ergodic and non-Poisson. How phase transitions in the network dynamics influence such activity as decision making is a fascinating story and provides a context for introducing many of the mathematical ideas necessary for understanding complex networks in general. The decision making model (DMM) is selected to emphasize that there are features of complex webs that supersede specific mechanisms and need to be understood from a general perspective. This insightful overview of recent tools and their uses may serve as an introduction and curriculum guide in related courses.
This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference "Computational Analysis and Optimization" (CAO 2011) held in Jyvaskyla, Finland, June 9-11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmaki on the occasion of his sixtieth birthday. It consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmaki.