This book provides a comprehensive mathematical description and analysis of the delegate allocation processes in the US Democratic and Republican presidential primaries, focusing on the role of apportionment methods and the effect of thresholds: the minimum levels of support required to receive delegates. The analysis involves a variety of techniques, including theoretical arguments, simplicial geometry, Monte Carlo simulation, and examination of presidential primary data from 2004 to 2020. The book is divided into two parts: Part I defines the classical apportionment problem and explains how the implementation and goals of delegate apportionment differ from those of apportionment for state representation in the US House of Representatives and for party representation in legislatures based on proportional representation. The authors then describe how delegates are assigned to states and congressional districts and formally define the delegate apportionment methods used in each state by the two major parties to allocate delegates to presidential candidates. Part II analyzes and compares the apportionment methods introduced in Part I based on their level of bias and adherence to various notions of proportionality. It explores how often the methods satisfy the quota condition and quantifies their biases in favor of or against the strongest and weakest candidates. Because the methods are quota-based, they are susceptible to classical paradoxes like the Alabama and population paradoxes. They also suffer from other paradoxes that are more relevant in the context of delegate apportionment, such as the elimination and aggregation paradoxes. The book evaluates the extent to which each method is susceptible to each paradox. Finally, it discusses the apportionment of delegates based on divisor methods and notions of regressive proportionality.
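The quota-based allocation with a threshold described above can be sketched in a few lines. This is a generic largest-remainder method with a 15% threshold for illustration; the function name, candidate data, and delegate counts are hypothetical and not taken from the book:

```python
# Illustrative sketch (not from the book): largest-remainder delegate
# apportionment with a support threshold. All names and numbers are made up.

def apportion(votes, seats, threshold=0.15):
    """Allocate `seats` delegates among candidates by largest remainders,
    counting only candidates at or above `threshold` of the total vote."""
    total = sum(votes.values())
    # candidates below the threshold receive no delegates at all
    eligible = {c: v for c, v in votes.items() if v / total >= threshold}
    qualified_total = sum(eligible.values())
    # exact (fractional) quota of each qualifying candidate
    quotas = {c: seats * v / qualified_total for c, v in eligible.items()}
    alloc = {c: int(q) for c, q in quotas.items()}  # lower quotas first
    leftover = seats - sum(alloc.values())
    # hand out remaining seats in order of largest fractional remainder
    for c in sorted(eligible, key=lambda c: quotas[c] - alloc[c],
                    reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

result = apportion({"A": 50_000, "B": 30_000, "C": 12_000}, seats=10)
# C falls below 15% of the total vote and is eliminated before allocation
```

Because every candidate receives either the floor or the ceiling of its quota, a method like this satisfies the quota condition by construction, which is exactly why its susceptibility to the Alabama-type paradoxes becomes the interesting question.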
This book appeals to scholars and students interested in mathematical economics and political science, with an emphasis on apportionment and social choice theory.
In the last decade, there has been an increasing convergence of interest and methods between theoretical physics and fields as diverse as probability, machine learning, optimization and compressed sensing. In particular, many theoretical and applied works in statistical physics and computer science have relied on the use of message passing algorithms and their connection to the statistical physics of spin glasses. The aim of this book, especially adapted to PhD students, post-docs, and young researchers, is to present the background necessary for entering this fast developing field.
This book presents both the theoretical background and the applications of fuzzy, intuitionistic fuzzy, rough, and fuzzy rough sets in the area of data science. It covers various individual, soft computing, optimization and hybridization techniques of fuzzy and intuitionistic fuzzy sets with rough sets and their applications, including data handling and type-2 fuzzy systems. Machine learning techniques are effectively implemented to solve a diversity of problems in pattern recognition, data mining and bioinformatics. To handle problems of different natures, including uncertainty, the book highlights the theory and recent developments on uncertainty, fuzzy systems, feature extraction, text categorization, multiscale modeling, soft computing, machine learning, deep learning, SMOTE, data handling, decision making, Diophantine fuzzy soft sets, data envelopment analysis, centrality measures, social networks, the Volterra–Fredholm integro-differential equation, the Caputo fractional derivative, interval optimization, and classification problems. This book is predominantly envisioned for researchers and students of data science, medical scientists and professional engineers.
This volume of Advances in Nuclear Physics addresses two very different frontiers of contemporary nuclear physics - one highly theoretical and the other solidly phenomenological. The first article by Matthias Burkardt provides a pedagogical overview of the timely topic of light front quantization. Although introduced decades ago by Dirac, light front quantization has been a central focus in theoretical nuclear and particle physics in recent years for two major reasons. The first, as discussed in detail by Burkardt, is that light-cone coordinates are the natural coordinates for describing high-energy scattering. The wealth of data in recent years on nucleon and nucleus structure functions from high-energy lepton and hadron scattering thus provides a strong impetus for understanding QCD on the light cone. Second, as theorists have explored light front quantization, a host of deep and intriguing theoretical questions have arisen associated with the triviality of the vacuum, the role of zero modes, rotational invariance, and renormalization. These issues are so compelling that they are now intensively investigated on their own merit, independent of the particular application to high-energy scattering. This article provides an excellent introduction and overview of the motivation from high-energy scattering, an accessible description of the basic ideas, an insightful discussion of the open problems, and a helpful guide to the specialized literature. It is an ideal opportunity for those with a spectator's acquaintance to develop a deeper understanding of this important field.
The book discusses the potential of higher-order interactions to model real-world relational systems. Over the last decade, networks have emerged as the paradigmatic framework to model complex systems. Yet, as simple collections of nodes and links, they are intrinsically limited to pairwise interactions, restricting our ability to describe, understand, and predict complex phenomena which arise from higher-order interactions. Here we introduce the new modeling framework of higher-order systems, where hypergraphs and simplicial complexes are used to describe complex patterns of interactions among any number of agents. This book is intended both as a first introduction and an overview of the state of the art of this rapidly emerging field, serving as a reference for network scientists interested in better modeling the interconnected world we live in.
This textbook, apart from introducing the basic aspects of applied mathematics, focuses on recent topics such as information data manipulation, information coding, data approximation, data dimensionality reduction, data compression, time-frequency and time scale bases, image manipulation, and image noise removal. The methods treated in more detail include spectral representation and "frequency" of the data, providing valuable information for, e.g. data compression and noise removal. Furthermore, a special emphasis is also put on the concept of "wavelets" in connection with the "multi-scale" structure of data-sets. The presentation of the book is elementary and easily accessible, requiring only some knowledge of elementary linear algebra and calculus. All important concepts are illustrated with examples, and each section contains between 10 and 25 exercises. A teaching guide, depending on the level and discipline of instruction, is included for classroom teaching and self-study.
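As a toy illustration of the spectral ideas mentioned above (not an example taken from the textbook), keeping only the largest Fourier coefficients of a noisy signal both compresses it and removes noise; the signal and parameters below are invented for the sketch:

```python
# Illustrative sketch: compression/denoising in a spectral basis.
# A clean tone plus noise is recovered by keeping only the dominant
# Fourier coefficients. All data here is synthetic.
import numpy as np

def compress(signal, keep):
    """Zero out all but the `keep` largest-magnitude Fourier coefficients."""
    coeffs = np.fft.fft(signal)
    small = np.argsort(np.abs(coeffs))[:-keep]  # indices of the small coefficients
    coeffs[small] = 0
    return np.fft.ifft(coeffs).real

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                # a single tone, 5 cycles
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
denoised = compress(noisy, keep=2)               # the tone lives in 2 coefficients
```

Storing 2 coefficients instead of 256 samples is the compression; discarding the remaining coefficients, where only noise lives, is the noise removal. Wavelet bases play the same role for signals whose structure varies across scales.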
Global optimization is concerned with the computation and characterization of global optima of nonlinear functions. During the past three decades the field of global optimization has been growing at a rapid pace, and the number of publications on all aspects of global optimization has been increasing steadily. Many applications, as well as new theoretical, algorithmic, and computational contributions have resulted. The Handbook of Global Optimization is the first comprehensive book to cover recent developments in global optimization. Each contribution in the Handbook is essentially expository in nature, but scholarly in its treatment. The chapters cover optimality conditions, complexity results, concave minimization, DC programming, general quadratic programming, nonlinear complementarity, minimax problems, multiplicative programming, Lipschitz optimization, fractional programming, network problems, trajectory methods, homotopy methods, interval methods, and stochastic approaches. The Handbook of Global Optimization is addressed to researchers in mathematical programming, as well as all scientists who use optimization methods to model and solve problems.
This volume treats linear regression diagnostics as a tool for the application of linear regression models to real-life data. The presentation makes extensive use of examples to illustrate theory. The text assesses the effect of measurement errors on the estimated coefficients, which is not accounted for in a standard least squares estimate, but is important where regression coefficients are used to apportion effects due to different variables. The robustness of the regression fit is assessed qualitatively and numerically.
This volume presents state-of-the-art complementarity applications, algorithms, extensions and theory in the form of eighteen papers. These invited papers were presented at the International Conference on Complementarity 99 (ICCP99) held in Madison, Wisconsin during June 9-12, 1999 with support from the National Science Foundation under Grant DMS-9970102. Complementarity is becoming more widely used in a variety of application areas. In this volume, there are papers studying the impact of complementarity in such diverse fields as deregulation of electricity markets, engineering mechanics, optimal control and asset pricing. Furthermore, applications of complementarity and optimization ideas to related problems in the burgeoning fields of machine learning and data mining are also covered in a series of three articles. In order to effectively process the complementarity problems that arise in such applications, various algorithmic, theoretical and computational extensions are covered in this volume. Nonsmooth analysis has an important role to play in this area as can be seen from articles using these tools to develop Newton and path following methods for constrained nonlinear systems and complementarity problems. Convergence issues are covered in the context of active set methods, global algorithms for pseudomonotone variational inequalities, successive convex relaxation and proximal point algorithms. Theoretical contributions to the connectedness of solution sets and constraint qualifications in the growing area of mathematical programs with equilibrium constraints are also presented. A relaxation approach is given for solving such problems. Finally, computational issues related to preprocessing mixed complementarity problems are addressed.
Iterative Methods for Queuing and Manufacturing Systems introduces the recent advances and developments in iterative methods for solving Markovian queuing and manufacturing problems. Key highlights include: an introduction to simulation and simulation software packages; Markovian models with applications in inventory control and supply chains; and future research directions. With numerous exercises and fully-worked examples, this book will be essential reading for anyone interested in the formulation and computation of queuing and manufacturing systems, but it will be of particular interest to students, practitioners and researchers in Applied Mathematics, Scientific Computing and Operational Research.
The three volumes of Interest Rate Modeling present a comprehensive and up-to-date treatment of techniques and models used in the pricing and risk management of fixed income securities. Written by two leading practitioners and seasoned industry veterans, this unique series combines finance theory, numerical methods, and approximation techniques to provide the reader with an integrated approach to the process of designing and implementing industrial-strength models for fixed income security valuation and hedging. Aiming to bridge the gap between advanced theoretical models and real-life trading applications, the pragmatic, yet rigorous, approach taken in this book will appeal to students, academics, and professionals working in quantitative finance. Volume I provides the theoretical and computational foundations for the series, emphasizing the construction of efficient grid- and simulation-based methods for contingent claims pricing. The second part of Volume I is dedicated to local-stochastic volatility modeling and to the construction of vanilla models for individual swap and Libor rates. Although the focus is eventually turned toward fixed income securities, much of the material in this volume applies to generic financial markets and will be of interest to anybody working in the general area of asset pricing.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' Jules Verne

'The series is divergent; therefore we may be able to do something with it.' O. Heaviside

One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'. Eric T. Bell

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
This monograph draws on two traditions: the algebraic formulation of quantum mechanics as well as quantum field theory, and the geometric theory of classical mechanics. These are combined in a unified treatment of the theory of Poisson algebras of observables and pure state spaces with a transition probability, which leads on to a discussion of the theory of quantization and the classical limit from this perspective. A prototype of quantization comes from the analogy between the C*-algebra of a Lie groupoid and the Poisson algebra of the corresponding Lie algebroid. The parallel between reduction of symplectic manifolds in classical mechanics and induced representations of groups and C*-algebras in quantum mechanics plays an equally important role. Examples from physics include constrained quantization, curved spaces, magnetic monopoles, gauge theories, massless particles, and theta-vacua. Accessible to mathematicians with some prior knowledge of classical and quantum mechanics, and to mathematical physicists and theoretical physicists with some background in functional analysis.
This text is an introduction to harmonic analysis on symmetric spaces, focusing on advanced topics such as higher rank spaces, positive definite matrix space and generalizations. It is intended for beginning graduate students in mathematics or researchers in physics or engineering. As with the introductory book entitled "Harmonic Analysis on Symmetric Spaces - Euclidean Space, the Sphere, and the Poincare Upper Half Plane", the style is informal with an emphasis on motivation, concrete examples, history, and applications. The symmetric spaces considered here are quotients X=G/K, where G is a non-compact real Lie group, such as the general linear group GL(n,R) of all n x n non-singular real matrices, and K=O(n), the maximal compact subgroup of orthogonal matrices. Other examples are Siegel's upper half "plane" and the quaternionic upper half "plane". In the case of the general linear group, one can identify X with the space Pn of n x n positive definite symmetric matrices. Many corrections and updates have been incorporated in this new edition. Updates include discussions of random matrix theory and quantum chaos, as well as recent research on modular forms and their corresponding L-functions in higher rank. Many applications have been added, such as the solution of the heat equation on Pn, the central limit theorem of Donald St. P. Richards for Pn, results on densest lattice packing of spheres in Euclidean space, and GL(n)-analogs of the Weyl law for eigenvalues of the Laplacian in plane domains.
Topics featured throughout the text include inversion formulas for Fourier transforms, central limit theorems, fundamental domains in X for discrete groups (such as the modular group GL(n,Z) of n x n matrices with integer entries and determinant +/-1), connections with the problem of finding densest lattice packings of spheres in Euclidean space, automorphic forms, Hecke operators, L-functions, and the Selberg trace formula and its applications in spectral theory as well as number theory.
This book provides awareness of methods used for functional encryption in the academic and professional communities. The book covers functional encryption algorithms and their modern applications in developing secure systems via entity authentication, message authentication, software security, cyber security, hardware security, the Internet of Things (IoT), cloud security, smart card technology, CAPTCHA, digital signatures, and digital watermarking. This book is organized into fifteen chapters; topics include foundations of functional encryption, the impact of group theory in cryptosystems, elliptic curve cryptography, the XTR algorithm, pairing-based cryptography, NTRU algorithms, ring units, Cocks IBE schemes, Boneh-Franklin IBE, Sakai-Kasahara IBE, hierarchical identity-based encryption, attribute-based encryption, extensions of IBE and related primitives, and digital signatures. It explains the latest functional encryption algorithms in a simple way with examples; includes applications of functional encryption in information security, application security, and network security; and is relevant to academics, research scholars, software developers, etc.
Shafarevich's Basic Algebraic Geometry has been a classic and universally used introduction to the subject since its first appearance over 40 years ago. As the translator writes in a prefatory note, ``For all [advanced undergraduate and beginning graduate] students, and for the many specialists in other branches of math who need a liberal education in algebraic geometry, Shafarevich's book is a must.'' The second volume is in two parts: Book II is a gentle cultural introduction to scheme theory, with the first aim of putting abstract algebraic varieties on a firm foundation; a second aim is to introduce Hilbert schemes and moduli spaces, that serve as parameter spaces for other geometric constructions. Book III discusses complex manifolds and their relation with algebraic varieties, Kahler geometry and Hodge theory. The final section raises an important problem in uniformising higher dimensional varieties that has been widely studied as the ``Shafarevich conjecture''. The style of Basic Algebraic Geometry 2 and its minimal prerequisites make it to a large extent independent of Basic Algebraic Geometry 1, and accessible to beginning graduate students in mathematics and in theoretical physics.
The aim of this work is to present several topics in time-frequency analysis as subjects in abelian group theory. The algebraic point of view predominates as questions of convergence are not considered. Our approach emphasizes the unifying role played by group structures on the development of theory and algorithms. This book consists of two main parts. The first treats Weyl-Heisenberg representations over finite abelian groups and the second deals with multirate filter structures over free abelian groups of finite rank. In both, the methods are dimensionless and coordinate-free and apply to one and multidimensional problems. The selection of topics is not motivated by mathematical necessity but rather by simplicity. We could have developed Weyl-Heisenberg theory over free abelian groups of finite rank or more generally developed both topics over locally compact abelian groups. However, except for having to discuss conditions for convergence, Haar measures, and other standard topics from analysis the underlying structures would essentially be the same. A recent collection of papers [17] provides an excellent review of time-frequency analysis over locally compact abelian groups. A further reason for limiting the scope of generality is that our results can be immediately applied to the design of algorithms and codes for time-frequency processing.
Relative entropy has played a significant role in various fields of mathematics and physics as the quantum version of the Kullback-Leibler divergence in classical theory. Many variations of relative entropy have been introduced so far with applications to quantum information and related subjects. Typical examples are three different classes, called the standard, the maximal, and the measured f-divergences, all of which are defined in terms of (operator) convex functions f on (0, ∞) and have respective mathematical and information theoretical backgrounds. The α-Rényi relative entropy and its new version, called the sandwiched α-Rényi relative entropy, have also been useful in recent developments of quantum information. In the first half of this monograph, the different types of quantum f-divergences and the Rényi-type divergences mentioned above in the general von Neumann algebra setting are presented for study. While quantum information has been developing mostly in the finite-dimensional setting, it is widely believed that von Neumann algebras provide the most suitable framework in studying quantum information and related subjects. Thus, the advance of quantum divergences in von Neumann algebras will be beneficial for further development of quantum information. Quantum divergences are functions of two states (or more generally, two positive linear functionals) on a quantum system and measure the difference between the two states. They are often utilized to address such problems as state discrimination, error correction, and reversibility of quantum operations. In the second half of the monograph, the reversibility/sufficiency theory for quantum operations (quantum channels) between von Neumann algebras via quantum f-divergences is explained, thus extending and strengthening Petz's previous work. For the convenience of the reader, an appendix including concise accounts of von Neumann algebras is provided.
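In the finite-dimensional matrix case, the standard quantum relative entropy reduces to the Umegaki formula S(ρ||σ) = Tr ρ(log ρ − log σ). A minimal numpy sketch of that special case (the function and the example states are illustrative assumptions, not material from the monograph, which works in general von Neumann algebras):

```python
# Illustrative finite-dimensional sketch: Umegaki relative entropy
# S(rho || sigma) = Tr rho (log rho - log sigma) for density matrices.
import numpy as np

def rel_entropy(rho, sigma):
    """Umegaki relative entropy of density matrices (natural log),
    assuming both states have full rank."""
    def log_psd(m):
        # matrix logarithm via eigendecomposition of a Hermitian matrix
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.conj().T
    return float(np.trace(rho @ (log_psd(rho) - log_psd(sigma))).real)

rho = np.diag([0.75, 0.25])
sigma = np.diag([0.5, 0.5])
# For commuting states this reduces to the classical Kullback-Leibler divergence
```

As the comment notes, when ρ and σ commute the eigenvalues behave like classical probability distributions, which is exactly the sense in which these divergences generalize the Kullback-Leibler divergence mentioned above.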
This is a book written primarily for graduate students and early researchers in the fields of Analysis and Partial Differential Equations (PDEs). Coverage of the material is essentially self-contained, extensive and novel with great attention to details and rigour. The strength of the book primarily lies in its clear and detailed explanations, scope and coverage, highlighting and presenting deep and profound inter-connections between different related and seemingly unrelated disciplines within classical and modern mathematics and above all the extensive collection of examples, worked-out and hinted exercises. There are well over 700 exercises of varying level leading the reader from the basics to the most advanced levels and frontiers of research. The book can be used either for independent study or for a year-long graduate level course. In fact it has its origin in a year-long graduate course taught by the author in Oxford in 2004-5 and various parts of it in other institutions later on. A good number of distinguished researchers and faculty in mathematics worldwide have started their research career from the course that formed the basis for this book.
During the last decades, geosciences and geoengineering were influenced by two essential scenarios. First, the technological progress has changed completely the observational and measurement techniques. Modern high speed computers and satellite-based techniques are entering more and more all (geo)disciplines. Second, there is a growing public concern about the future of our planet, its climate, its environment, and about an expected shortage of natural resources. Obviously, both aspects, viz. (i) efficient strategies of protection against threats of a changing Earth and (ii) the exceptional situation of getting terrestrial, airborne as well as spaceborne, data of better and better quality explain the strong need for new mathematical structures, tools, and methods. In consequence, mathematics concerned with geoscientific problems, i.e., geomathematics, is becoming more and more important. Nowadays, geomathematics may be regarded as the key technology to build the bridge between real Earth processes and their scientific understanding. In fact, it is the intrinsic and indispensable means to handle geoscientifically relevant data sets of high quality within high accuracy and to improve significantly modeling capabilities in Earth system research.
This book gathers the insight of leading experts on corruption and anti-corruption studies working at the scientific frontier of this phenomenon using the multidisciplinary tools of data and network science, in order to present current theoretical, empirical, and operational efforts to curb this problem. The research results strengthen the importance of evidence-based approaches in the fight against corruption in all its forms, and foster the discussion about the best ways to convert the obtained knowledge into public policy. The contributed chapters provide comprehensive and multidisciplinary approaches to handle the non-trivial structural and dynamical aspects that characterize the modern social, economic, political and technological systems where corruption takes place. This book will serve a broad multidisciplinary audience, from natural and social scientists to applied mathematicians, as well as law and policymakers.