This is an advanced book on modular forms. While many books have been published about modular forms, most are written at an elementary level and are not so interesting from the viewpoint of a reader who already knows the basics. This book offers something new, which may satisfy such a reader's desire. However, we state every definition and every essential fact concerning classical modular forms of one variable. One of the principal new features of this book is the theory of modular forms of half-integral weight; another is the discussion of theta functions and Eisenstein series of holomorphic and nonholomorphic types. Thus the book is presented so that the reader can learn such theories systematically.
This book contains selected papers from NSC08, the 2nd Conference on Nonlinear Science and Complexity, held 28-31 July 2008 in Porto, Portugal. It focuses on fundamental theories and principles, analytical and symbolic approaches, and computational techniques in nonlinear physics and mathematics. Topics treated include:
* Chaotic Dynamics and Transport in Classic and Quantum Systems
* Complexity and Nonlinearity in Molecular Dynamics and Nano-Science
* Complexity and Fractals in Nonlinear Biological Physics and Social Systems
* Lie Group Analysis and Applications in Nonlinear Science
* Nonlinear Hydrodynamics and Turbulence
* Bifurcation and Stability in Nonlinear Dynamic Systems
* Nonlinear Oscillations and Control with Applications
* Celestial Physics and Deep Space Exploration
* Nonlinear Mechanics and Nonlinear Structural Dynamics
* Non-smooth Systems and Hybrid Systems
* Fractional Dynamical Systems
Model-based recursive partitioning (MOB) provides a powerful synthesis of machine-learning-inspired recursive partitioning methods and regression models. Hanna Birke extends this approach by additionally allowing for measurement error in covariates, as frequently occurs in biometric (or econometric) studies, for instance when measuring blood pressure or daily caloric intake. After an introduction to the background, the extended methodology is developed in detail for the Cox model and the Weibull model, carefully implemented in R, and investigated in a comprehensive simulation study.
On the 8th of August 1900 the outstanding German mathematician David Hilbert delivered a talk, "Mathematical Problems", at the Second International Congress of Mathematicians in Paris. The talk covered practically all directions of mathematical thought of that time and contained a list of 23 problems which determined the further development of mathematics in many respects [1, 119]. Hilbert's Sixteenth Problem (the second part) was stated as follows: Problem. To find the maximum number and to determine the relative position of limit cycles of the equation dy/dx = Q_n(x, y)/P_n(x, y), where P_n and Q_n are polynomials of the real variables x, y with real coefficients and of degree not greater than n. The study of limit cycles is an interesting and very difficult problem of the qualitative theory of differential equations. This theory originated at the end of the nineteenth century in the works of two geniuses of world science: the Russian mathematician A. M. Lyapunov and the French mathematician Henri Poincaré. A. M. Lyapunov set forth and completely solved, in a very wide class of cases, a special problem of the qualitative theory: the problem of motion stability [154]. In turn, H. Poincaré stated a general problem of qualitative analysis, formulated as follows: without integrating the differential equation and using only the properties of its right-hand sides, to give as complete information as possible on the qualitative behaviour of the integral curves defined by this equation [176].
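The polynomial systems in Hilbert's Sixteenth Problem cannot in general be handled in closed form, but a limit cycle is easy to observe numerically. The sketch below is an illustrative aside, not taken from the book: it integrates the van der Pol system, whose right-hand sides are polynomials of degree 3 and which has exactly one stable limit cycle, and checks that trajectories started inside and outside the cycle settle onto orbits of the same amplitude. The RK4 integrator and all parameter values are our own choices.

```python
# Illustrative aside, not from the book: the van der Pol system
#   dx/dt = y,  dy/dt = mu*(1 - x^2)*y - x
# has polynomial right-hand sides and exactly one stable limit cycle;
# trajectories started inside and outside both settle onto it.

def rk4_step(f, s, h):
    """One classical Runge-Kutta step for a 2-D autonomous system."""
    k1 = f(s)
    k2 = f([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = f([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = f([si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

mu = 1.0
f = lambda s: [s[1], mu * (1.0 - s[0] ** 2) * s[1] - s[0]]

def amplitude(state, transient=20000, sample=1500, h=0.01):
    """max |x| over roughly one period, after skipping the transient."""
    for _ in range(transient):
        state = rk4_step(f, state, h)
    peak = 0.0
    for _ in range(sample):
        state = rk4_step(f, state, h)
        peak = max(peak, abs(state[0]))
    return peak

a_inner = amplitude([0.1, 0.0])   # starts well inside the cycle
a_outer = amplitude([4.0, 0.0])   # starts well outside the cycle
print(abs(a_inner - a_outer) < 0.01, 1.9 < a_inner < 2.1)
```

Both trajectories converge to the same closed orbit, so both amplitudes come out near 2, the well-known amplitude of the van der Pol limit cycle for mu = 1.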
Along with finite differences and finite elements, spectral methods are one of the three main methodologies for solving partial differential equations on computers. This book provides a detailed presentation of basic spectral algorithms, as well as a systematic presentation of basic convergence theory and error analysis for spectral methods. Readers of this book will be exposed to a unified framework for designing and analyzing spectral algorithms for a variety of problems, including in particular high-order differential equations and problems in unbounded domains. The book contains a large number of figures designed to illustrate various concepts stressed in the book. A set of basic MATLAB codes has been made available online to help readers develop their own spectral codes for their specific applications.
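As an illustrative aside (the book's own MATLAB codes are online and are not reproduced here), the core idea of a Fourier spectral method fits in a few lines: differentiate a periodic function by multiplying its discrete Fourier coefficients by ik. The test function and grid size below are arbitrary choices.

```python
# Fourier spectral differentiation sketch (illustrative, not from the
# book): for a periodic function sampled on N points, the derivative of
# a band-limited function is exact to machine precision when each
# Fourier coefficient is multiplied by i*k.
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * x)                      # band-limited test function
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real

err = np.max(np.abs(du - 3 * np.cos(3 * x)))
print(err < 1e-10)
```

The same exactness is what gives spectral methods their "spectral accuracy" for smooth solutions, in contrast to the algebraic convergence rates of difference and element methods.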
Of the many different approaches to solving partial differential equations numerically, this book studies difference methods. Written for the beginning graduate student in applied mathematics and engineering, this text offers a means of coming out of a course with a large number of methods that provide both theoretical knowledge and numerical experience. The reader will learn that numerical experimentation is a part of the subject of numerical solution of partial differential equations, and will be shown some uses and taught some techniques of numerical experimentation.
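As a minimal illustration of the kind of method the text refers to (a generic textbook scheme, not an example taken from this book), the sketch below applies the explicit forward-time, centered-space difference scheme to the heat equation u_t = u_xx and compares the result with the exact solution; grid sizes are arbitrary, and the time step respects the stability bound dt <= dx^2/2.

```python
import math

# Explicit forward-time, centered-space scheme for u_t = u_xx on [0, 1]
# with u(0) = u(1) = 0; stable because dt <= dx**2 / 2.
nx = 20
dx = 1.0 / nx
dt = 0.4 * dx * dx
u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]  # initial data

t = 0.0
while t < 0.1:
    u = ([0.0]
         + [u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, nx)]
         + [0.0])
    t += dt

# The exact solution is exp(-pi^2 t) * sin(pi x); compare at x = 0.5.
exact = math.exp(-math.pi ** 2 * t) * math.sin(math.pi * 0.5)
err = abs(u[nx // 2] - exact)
print(err)
```

Doubling nx (and shrinking dt accordingly) roughly quarters the error, the kind of numerical experiment the text has in mind.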
Spontaneous potential (SP) well-logging is one of the most common and useful well-logging techniques in petroleum exploitation. This monograph is the first of its kind on the mathematical model of spontaneous potential well-logging and its numerical solutions. The mathematical model established in this book shows the necessity of introducing Sobolev spaces with fractional power, which considerably increases the difficulty of proving well-posedness and of proposing numerical solution schemes. In this book, the well-posedness of the corresponding mathematical model is proved in the axisymmetric situation, and three efficient schemes of numerical solution are proposed, supported by a number of numerical examples to meet practical computation needs.
When the DFG (Deutsche Forschungsgemeinschaft) launched its collaborative research centre, or SFB (Sonderforschungsbereich), 438 "Mathematical Modelling, Simulation, and Verification in Material-Oriented Processes and Intelligent Systems" in July 1997 at the Technische Universität München and at the Universität Augsburg, southern Bavaria got its second nucleus of the still young discipline of scientific computing. Whereas the first and older one, FORTWIHR, the Bavarian Consortium for High Performance Scientific Computing, had put its main emphasis on the supercomputing aspect, this new initiative was expected to focus on the mathematical part. Consequently, mathematical aspects play a predominant role throughout all five main research topics: (A) adaptive materials and thin layers, (B) adaptive materials in medicine, (C) robotics, aeronautics, and automobile technology, (D) microstructured devices and systems, and (E) transport processes in flows. The formation of the SFB 438 and its scientific program are inextricably linked with the name of Karl-Heinz Hoffmann. As full professor for applied mathematics in Augsburg (1981-1991) and in München (since 1992) and as dean of the faculty of mathematics at the TU München, he was the driving force behind this fascinating, but not always easy-to-realize, idea of bringing together scientists from mathematics, physics, engineering, informatics, and medicine for joint efforts in modern applied mathematics. However, work had scarcely begun when the successful captain was called to take command of a bigger boat.
These days, the nature of services and the volume of demand in the telecommunication industry are changing radically, with the replacement of analog transmission and traditional copper cables by digital technology and fiber optic transmission equipment. Moreover, we see increasing competition among providers of telecommunication services, and the development of a broad range of new services for users, combining voice, data, graphics and video. Telecommunication network planning has thus become an important problem area for developing and applying optimization models. Telephone companies have initiated extensive modeling and planning efforts to expand and upgrade their transmission facilities, which are, for most national telecommunication networks, divided into three main levels (see Balakrishnan et al. [5]), namely: 1. the long-distance or backbone network that typically connects city pairs through gateway nodes; 2. the inter-office or switching center network within each city, which interconnects switching centers in different subdivisions (clusters of customers) and provides access to the gateway node(s); 3. the local access network that connects individual subscribers belonging to a cluster to the corresponding switching center. These three levels differ in several ways, including their design criteria. Ideally, the design of a telecommunication network should simultaneously account for all three levels. However, to simplify the planning task, the overall planning problem is decomposed by considering each level separately.
This book is intended to provide economists with the mathematical tools necessary to handle the concepts of evolution under uncertainty and adaptation arising in economics, pursuing the Arrow-Debreu-Hahn legacy. It applies the techniques of viability theory to the study of economic systems evolving under contingent uncertainty, faced with scarcity constraints, and obeying various implementations of the inertia principle. The book illustrates how these new tools can be used to move from static analysis, built on concepts of optima, equilibria and attractors, to a contingent dynamic framework.
Developed over a period of two years at the University of Utah Department of Computer Science, this course has been designed to encourage the integration of computation into the science and engineering curricula. Intended as an introductory course in computing expressly for science and engineering students, the course was created to satisfy the standard programming requirement, while preparing students to immediately exploit the broad power of modern computing in their science and engineering courses.
In this book we analyze the error caused by numerical schemes for the approximation of semilinear stochastic evolution equations (SEEq) in a Hilbert space-valued setting. The numerical schemes considered combine Galerkin finite element methods with Euler-type temporal approximations. Starting from a precise analysis of the spatio-temporal regularity of the mild solution to the SEEq, we derive and prove optimal error estimates of the strong error of convergence in the first part of the book. The second part deals with a new approach to the so-called weak error of convergence, which measures the distance between the law of the numerical solution and the law of the exact solution. This approach is based on Bismut's integration by parts formula and the Malliavin calculus for infinite dimensional stochastic processes. These techniques are developed and explained in a separate chapter, before the weak convergence is proven for linear SEEq.
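The strong error of convergence mentioned above can be illustrated in a drastically simplified, finite-dimensional stand-in for the book's Hilbert-space setting (our own sketch, not the book's scheme): Euler-Maruyama for the scalar SDE dX = -X dt + dW, with the strong error at the final time estimated by Monte Carlo against a fine-grid reference path driven by the same Brownian increments.

```python
import math
import random

random.seed(1)
T, FINE, M = 1.0, 2 ** 10, 200

def strong_error(nsteps):
    """Monte Carlo estimate of E|X_h(T) - X_ref(T)| for Euler-Maruyama
    applied to dX = -X dt + dW, X(0) = 1, against a fine-grid reference
    driven by the same Brownian increments."""
    total = 0.0
    for _ in range(M):
        dW = [random.gauss(0.0, math.sqrt(T / FINE)) for _ in range(FINE)]
        x_ref = 1.0                      # reference path on the fine grid
        for inc in dW:
            x_ref += -x_ref * (T / FINE) + inc
        dt, ratio, x = T / nsteps, FINE // nsteps, 1.0
        for n in range(nsteps):          # coarse path, coarsened increments
            x += -x * dt + sum(dW[n * ratio:(n + 1) * ratio])
        total += abs(x - x_ref)
    return total / M

e_coarse, e_fine = strong_error(8), strong_error(64)
print(e_fine < e_coarse)   # refining the time step shrinks the strong error
```

The book's setting replaces the scalar state by a Galerkin finite element approximation in space, but the pathwise comparison against a reference solution is the same idea behind strong error estimates.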
This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms and practical computational advice for analysing single-physics and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as are tools like analytical and numerical homogenization and the fast multipole method.
Line and hyperplane location problems play an important role not only in operations research and location theory, but also in computational geometry and robust statistics. This book provides a survey on line and hyperplane location combining analytical and geometrical methods. The major portion of the text presents new results on this topic, including the extension of some special cases to all distances derived from norms and a discussion of restricted problems in the plane. Almost all results are proven in the text and most of them are illustrated by examples. Furthermore, relations to classical facility location and to problems in computational geometry are pointed out. Audience: The book is suitable for researchers, lecturers, and graduate students working in the fields of location theory or computational geometry.
The emphasis throughout the present volume is on the practical application of theoretical mathematical models helping to unravel the underlying mechanisms involved in processes from mathematical physics and the biosciences. It has been conceived as a unique collection of abstract methods dealing especially with nonlinear partial differential equations (either stationary or evolutionary) that are applied to understand concrete processes, with important applications to phenomena such as boundary layer phenomena for viscous fluids, population dynamics, dead core phenomena, etc. It addresses researchers and post-graduate students working at the interplay between mathematics and other fields of science and technology. A comprehensive introduction to the theory of nonlinear partial differential equations and its main principles, the volume also presents their real-life applications in various contexts: mathematical physics, chemistry, mathematical biology, and population genetics. Based on the authors' original work, it provides an overview of the field, with examples suitable for researchers but also for graduate students entering research. The method of presentation appeals to readers with diverse backgrounds in partial differential equations and functional analysis. Each chapter includes detailed heuristic arguments, providing thorough motivation for the material developed later in the text. The content firmly demonstrates that partial differential equations can be used to address a large variety of phenomena occurring in and influencing our daily lives. The extensive reference list and index make this book a valuable resource for researchers working in a variety of fields who are interested in phenomena modeled by nonlinear partial differential equations.
Algorithms for the numerical computation of definite integrals have been proposed for more than 300 years, but practical considerations have led to problems of ever-increasing complexity, so that, even with current computing speeds, numerical integration may be a difficult task. High dimension and complicated structure of the region of integration and singularities of the integrand are the main sources of difficulties.
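A small sketch makes the point about singular integrands concrete (our own illustration; the functions and tolerances are arbitrary choices): the same composite midpoint rule that is very accurate for a smooth integrand loses several orders of accuracy when the integrand has an endpoint singularity.

```python
import math

def midpoint(f, n):
    """Composite midpoint rule for the integral of f over [0, 1]."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

# Smooth integrand: integral of cos(x) over [0, 1] is sin(1).
smooth_err = abs(midpoint(math.cos, 1000) - math.sin(1.0))

# Singular integrand: integral of 1/sqrt(x) over [0, 1] is 2, but the
# singularity at 0 degrades the rule's accuracy by orders of magnitude.
singular_err = abs(midpoint(lambda x: 1.0 / math.sqrt(x), 1000) - 2.0)

print(smooth_err < 1e-6 and singular_err > 1e-3)
```

High-dimensional regions pose the other difficulty the paragraph mentions: a product rule with n points per axis costs n^d evaluations, which is why Monte Carlo and sparse-grid methods take over in high dimension.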
This book offers a mathematical update of the state of the art of research in the field of mathematical and numerical models of the circulatory system. It is structured into different chapters, written by outstanding experts in the field. Many fundamental issues are considered, such as: the mathematical representation of vascular geometries extracted from medical images, modelling blood rheology and the complex multilayer structure of the vascular tissue and its possible pathologies, the mechanical and chemical interaction between blood and vascular walls, and the different scales coupling local and systemic dynamics. All of these topics introduce challenging mathematical and numerical problems demanding advanced analysis and efficient simulation techniques, while paying constant attention to applications of relevant clinical interest. This book is addressed to graduate students and researchers in the fields of bioengineering, applied mathematics and medicine who wish to engage in the fascinating task of modeling the cardiovascular system or, more broadly, physiological flows.
In the spectrum of mathematics, graph theory, which studies a mathematical structure on a set of elements with a binary relation, is, as a recognized discipline, a relative newcomer. In the last three decades this exciting and rapidly growing area of the subject has abounded with new mathematical developments and significant applications to real-world problems. More and more colleges and universities have made it a required course for senior or beginning postgraduate students who are majoring in mathematics, computer science, electronics, scientific management and other fields. This book provides an introduction to graph theory for these students. The richness of the theory and the wideness of its applications make it impossible to include all topics in graph theory in a textbook for one semester. All the material presented in this book, however, I believe, is the most classical, fundamental, interesting and important. Our method of dealing with the material is to lay particular stress on digraphs, regarding undirected graphs as their special cases. My own experience from teaching the subject for more than ten years at the University of Science and Technology of China (USTC) shows that this treatment hardly makes the course difficult, and accords much more with the essence and the development trend of the subject.
The focus of most Virtual Reality (VR) systems lies mainly on the visual immersion of the user. But an emphasis only on visual perception is insufficient for some applications, as the user is limited in his interactions within the VR. Therefore this textbook presents the principles and theoretical background needed to develop a VR system able to create a link between physical simulations and haptic rendering, which requires update rates of 1 kHz for the force feedback. Special attention is given to the modeling and computation of contact forces in a two-finger grasp of textiles. Further addressing the perception of small-scale surface properties like roughness, novel algorithms are presented that are not only able to consider the highly dynamic behaviour of textiles but are also capable of computing the small forces needed for tactile rendering at the contact point. A final analysis of the entire VR system shows the problems encountered and the solutions found in the work.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point alle.' ('And I, ..., if I had known how to come back, I would never have gone.') Jules Verne. One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'. Eric T. Bell. The series is divergent; therefore we may be able to do something with it. O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
At first sight discrete and fractional programming techniques appear to be two completely unrelated fields in operations research. We will show how techniques in both fields can be applied, separately and in combined form, to particular models in location analysis. Location analysis deals with the problem of deciding where to locate facilities, considering the clients to be served, in such a way that a certain criterion is optimized. The term "facilities" immediately suggests factories, warehouses, schools, etc., while the term "clients" refers to depots, retail units, students, etc. Three basic classes can be identified in location analysis: continuous location, network location and discrete location. The differences between these fields arise from the structure of the set of possible locations for the facilities. Hence, locating facilities in the plane or in another continuous space corresponds to a continuous location model, while finding optimal facility locations on the edges or vertices of a network corresponds to a network location model. Finally, if the possible set of locations is a finite set of points, we have a discrete location model. Each of these fields has been actively studied, arousing intense discussion on the advantages and disadvantages of each of them. The usual requirement that every point in the plane or on the network must be a candidate location point is one of the arguments most often used "against" continuous and network location models.
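To make the third class concrete, here is a toy discrete location model (our own illustration, with made-up coordinates): a 1-median problem in which the candidate sites form a finite set and we pick the site minimizing total Manhattan distance to the clients.

```python
# Toy discrete location model: among a finite set of candidate sites,
# choose the one minimizing total Manhattan distance to the clients
# (a 1-median problem; all coordinates here are made up).
clients = [(0, 0), (4, 0), (0, 3), (5, 5)]
candidates = [(1, 1), (4, 4), (6, 0)]

def total_dist(site):
    """Sum of Manhattan distances from the site to every client."""
    return sum(abs(site[0] - cx) + abs(site[1] - cy) for cx, cy in clients)

best = min(candidates, key=total_dist)
print(best, total_dist(best))
```

In the continuous variant, `best` could be any point of the plane; restricting it to a finite candidate set is exactly what turns the model into a discrete location problem.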
Everything should be made as simple as possible, but not simpler. (Albert Einstein, Readers Digest, 1977) The modern practice of creating technical systems and technological processes of high effi.ciency besides the employment of new principles, new materials, new physical effects and other new solutions ( which is very traditional and plays the key role in the selection of the general structure of the object to be designed) also includes the choice of the best combination for the set of parameters (geometrical sizes, electrical and strength characteristics, etc.) concretizing this general structure, because the Variation of these parameters ( with the structure or linkage being already set defined) can essentially affect the objective performance indexes. The mathematical tools for choosing these best combinations are exactly what is this book about. With the advent of computers and the computer-aided design the pro bations of the selected variants are usually performed not for the real examples ( this may require some very expensive building of sample op tions and of the special installations to test them ), but by the analysis of the corresponding mathematical models. The sophistication of the mathematical models for the objects to be designed, which is the natu ral consequence of the raising complexity of these objects, greatly com plicates the objective performance analysis. Today, the main (and very often the only) available instrument for such an analysis is computer aided simulation of an object's behavior, based on numerical experiments with its mathematical model.
This book is a thoroughly revised result, updated to mid-1995, of the NATO Advanced Research Workshop on "Intelligent Learning Environments: the case of geometry", held in Grenoble, France, November 13-16, 1989. The main aim of the workshop was to foster exchanges among researchers who were concerned with the design of intelligent learning environments for geometry. The problem of student modelling was chosen as a central theme of the workshop, insofar as geometry cannot be reduced to procedural knowledge and because the significance of its complexity makes it of interest for intelligent tutoring system (ITS) development. The workshop centred around the following themes: modelling the knowledge domain, modelling student knowledge, designing "didactic interaction", and learner control. This book contains revised versions of the papers presented at the workshop. All of the chapters that follow have been written by participants at the workshop. Each formed the basis for a scheduled presentation and discussion. Many are suggestive of research directions that will be carried out in the future. There are four main issues running through the papers presented in this book: * knowledge about geometry is not knowledge about the real world, and the materialization of geometrical objects implies a reification of geometry which is amplified in the case of its implementation in a computer, since objects can be manipulated directly and relations are the results of actions (Laborde, Schumann). This aspect is well exemplified by research projects focusing on the design of geometric microworlds (Guin, Laborde).