This volume contains the description of an EC-sponsored program to
study all relevant aspects of shock/boundary-layer interaction
control, the latter designed to improve aircraft performance at
design (cruise) and off-design conditions. The work presented
includes a discussion of basic control experiments and the
corresponding physical modeling, to account for shock control, and a
discussion of the airfoil experiments conducted for code validation
and control assessment, in conjunction with the basic experiments
and computations. The contents comprise a section giving a
broad overview of the research carried out and more detailed
individual contributions by the participants in the research.
Most real-world spectrum analysis problems involve the computation of the real-data discrete Fourier transform (DFT), a unitary transform that maps elements of the linear space of real-valued N-tuples, R^N, to elements of its complex-valued counterpart, C^N. When carried out in hardware, it is conventionally achieved via a real-from-complex strategy using a complex-data version of the fast Fourier transform (FFT), the generic name given to the class of fast algorithms used for the efficient computation of the DFT. Such algorithms are typically derived by exploiting the property of symmetry, whether it exists just in the transform kernel or, in certain circumstances, in the input data and/or output data as well. In order to make effective use of a complex-data FFT, however, via the chosen real-from-complex strategy, the input data to the DFT must first be converted from elements of R^N to elements of C^N. The reason for choosing the computational domain of real-data problems such as this to be C^N, rather than R^N, is due in part to the fact that computing equipment manufacturers have invested so heavily in producing digital signal processing (DSP) devices built around the design of the complex-data fast multiplier and accumulator (MAC), an arithmetic unit ideally suited to the implementation of the complex-data radix-2 butterfly, the computational unit used by the familiar class of recursive radix-2 FFT algorithms.
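As a minimal sketch of the real-from-complex strategy described above, one standard variant packs two real N-point sequences into a single complex FFT and separates the two transforms afterwards using the conjugate symmetry of real-data DFTs. The helper name is ours, and NumPy stands in for the hardware FFT:

    import numpy as np

    def two_real_ffts(x, y):
        """DFTs of two real N-point sequences from one complex N-point FFT,
        separated via the conjugate symmetry X[k] = conj(X[(N-k) mod N])
        enjoyed by real-data transforms."""
        z = np.fft.fft(np.asarray(x) + 1j * np.asarray(y))   # one complex FFT
        zr = np.conj(np.roll(z[::-1], 1))                    # conj(Z[(N-k) mod N])
        return (z + zr) / 2, (z - zr) / (2j)

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    X, Y = two_real_ffts(x, y)
    print(np.allclose(X, np.fft.fft(x)), np.allclose(Y, np.fft.fft(y)))  # True True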
A new translation makes this classic and important text more generally accessible. The text is placed in its contemporary context, but also related to the interests of practising mathematicians today. This book will be of interest to mathematical historians, researchers, and numerical analysts.
An original motivation for algebraic geometry was to understand curves and surfaces in three dimensions. Recent theoretical and technological advances in areas such as robotics, computer vision, computer-aided geometric design and molecular biology, together with the increased availability of computational resources, have brought these original questions once more into the forefront of research. One particular challenge is to combine applicable methods from algebraic geometry with proven techniques from piecewise-linear computational geometry (such as Voronoi diagrams and hyperplane arrangements) to develop tools for treating curved objects. These research efforts may be summarized under the term nonlinear computational geometry. This volume grew out of an IMA workshop on Nonlinear Computational Geometry in May/June 2007 (organized by I.Z. Emiris, R. Goldman, F. Sottile, T. Theobald) which gathered leading experts in this emerging field. The research and expository articles in the volume are intended to provide an overview of nonlinear computational geometry. Since the topic involves computational geometry, algebraic geometry, and geometric modeling, the volume has contributions from all of these areas. By addressing a broad range of issues, from purely theoretical and algorithmic problems to implementation and practical applications, this volume conveys the spirit of the IMA workshop.
Probably the first book to describe computational methods for numerically computing steady state and Hopf bifurcations. Requiring only a basic knowledge of calculus, and using detailed examples, problems, and figures, this is an ideal textbook for graduate students.
This book constitutes the refereed proceedings of the 11th International Symposium on Experimental Algorithms, SEA 2012, held in Bordeaux, France, in June 2012. The 31 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 64 submissions and present current research in the area of design, analysis, and experimental evaluation and engineering of algorithms, as well as in various aspects of computational optimization and its applications.
The papers presented here describe research to improve the general understanding of the application of SAMR to practical problems, to identify issues critical to efficient and effective implementation on high performance computers and to stimulate the development of a community code repository for software including benchmarks to assist in the evaluation of software and compiler technologies. The ten chapters have been divided into two parts reflecting two major issues in the topic: programming complexity of SAMR algorithms and the applicability and numerical challenges of SAMR methods.
This is a self-contained introduction to algebraic curves over finite fields and geometric Goppa codes. There are four main divisions in the book. The first is a brief exposition of basic concepts and facts of the theory of error-correcting codes (Part I). The second is a complete presentation of the theory of algebraic curves, especially the curves defined over finite fields (Part II). The third is a detailed description of the theory of classical modular curves and their reduction modulo a prime number (Part III). The fourth (and basic) is the construction of geometric Goppa codes and the production of asymptotically good linear codes coming from algebraic curves over finite fields (Part IV). The theory of geometric Goppa codes is a fascinating topic where two extremes meet: the highly abstract and deep theory of algebraic (specifically modular) curves over finite fields and the very concrete problems in the engineering of information transmission. At the present time there are two essentially different ways to produce asymptotically good codes coming from algebraic curves over a finite field with an extremely large number of rational points. The first way, developed by M. A. Tsfasman, S. G. Vladut and Th. Zink [210], is rather difficult and assumes a serious acquaintance with the theory of modular curves and their reduction modulo a prime number. The second way, proposed recently by A.
The study of optimal shape design can be arrived at by asking the following question: "What is the best shape for a physical system?" This book is an applications-oriented study of such physical systems; in particular, those which can be described by an elliptic partial differential equation and where the shape is found by the minimum of a single criterion function. There are many problems of this type in high-technology industries. In fact, most numerical simulations of physical systems are solved not to gain better understanding of the phenomena but to obtain better control and design. Problems of this type are described in Chapter 2. Traditionally, optimal shape design has been treated as a branch of the calculus of variations and more specifically of optimal control. This subject interfaces with no less than four fields: optimization, optimal control, partial differential equations (PDEs), and their numerical solution; this last is the most difficult aspect of the subject. Each of these fields is reviewed briefly: PDEs (Chapter 1), optimization (Chapter 4), optimal control (Chapter 5), and numerical methods (Chapters 1 and 4).
Energy levels, resonances, vibrations, feature extraction, factor analysis - the names vary from discipline to discipline; however, all involve eigenvalue/eigenvector computations. An engineer or physicist who is modeling a physical process, structure, or device is constrained to select a model for which the subsequently-required computations can be performed. This constraint often leads to reduced order or reduced size models which may or may not preserve all of the important characteristics of the system being modeled. Ideally, the modeler should not be forced to make such a priori reductions. It is our intention to provide here procedures which will allow the direct and successful solution of many large 'symmetric' eigenvalue problems, so that at least in problems where the computations are of this type there will be no need for model reduction. Matrix eigenelement computations can be classified as small, medium, or large scale, in terms of their relative degrees of difficulty as measured by the amount of computer storage and time required to complete the desired computations. A matrix eigenvalue problem is said to be small scale if the given matrix has order smaller than 100. Well-documented and reliable FORTRAN programs exist for small scale eigenelement computations, see in particular EISPACK [1976, 1977]. Typically those programs explicitly transform the given matrix into a simpler canonical form. The eigenelement computations are then performed on the canonical form.
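The procedures the blurb refers to are Lanczos-based. As a hedged modern illustration of solving a large sparse symmetric eigenproblem directly, with no transformation to a dense canonical form, one can call SciPy's ARPACK wrapper eigsh, itself an implicitly restarted Lanczos method; the test matrix here is our own choice:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Sparse symmetric test matrix: the 1-D discrete Laplacian of order 10000.
    n = 10_000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

    # A few extreme eigenvalues via a Lanczos-type iteration; the matrix is
    # only ever touched through matrix-vector products.
    vals = eigsh(A, k=4, which="LA", return_eigenvectors=False)
    print(np.sort(vals))   # close to, but below, 4 (the spectrum lies in (0, 4))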
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy from 24 to 26 June, 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. Editors, January 1997
The study of scan statistics and their applications to many different scientific and engineering problems has received considerable attention in the literature recently. In addition to challenging theoretical problems, the area of scan statistics has also found exciting applications in diverse disciplines such as archaeology, astronomy, epidemiology, geography, material science, molecular biology, reconnaissance, reliability and quality control, sociology, and telecommunication. This will be clearly evident when one goes through this volume. In this volume, we have brought together a collection of experts working in this area of research in order to review some of the developments that have taken place over the years and also to present their new works and point out some open problems. With this in mind, we selected authors for this volume with some having theoretical interests and others being primarily concerned with applications of scan statistics. Our sincere hope is that this volume will thus provide a comprehensive survey of all the developments in this area of research and hence will serve as a valuable source as well as reference for theoreticians and applied researchers. Graduate students interested in this area will find this volume to be particularly useful as it points out many open challenging problems that they could pursue. This volume will also be appropriate for teaching a graduate-level special course on this topic.
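As a tiny illustration of the object under study: the basic one-dimensional scan statistic, the largest number of events falling in any window of fixed length w, admits a linear-time two-pointer computation. A sketch, with names of our own choosing:

    def scan_statistic(times, w):
        """Largest number of events in any window [t, t + w] (1-D scan statistic)."""
        times = sorted(times)
        best, j = 0, 0
        for i in range(len(times)):
            while j < len(times) and times[j] <= times[i] + w:
                j += 1
            best = max(best, j - i)   # events falling in [times[i], times[i] + w]
        return best

    print(scan_statistic([0.1, 0.15, 0.3, 0.9, 0.95, 0.97], w=0.1))  # 3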
The history of continued fractions is certainly one of the longest among those of mathematical concepts, since it begins with Euclid's algorithm for the greatest common divisor at least three centuries B.C. As is often the case, and like Monsieur Jourdain in Moliere's "Le Bourgeois gentilhomme" (who was speaking in prose though he did not know he was doing so), continued fractions were used for many centuries before their real discovery. The history of continued fractions and Pade approximants is also quite important, since they played a leading role in the development of some branches of mathematics. For example, they were the basis for the proof of the transcendence of π in 1882, an open problem for more than two thousand years, and also for our modern spectral theory of operators. Actually they still are of great interest in many fields of pure and applied mathematics and in numerical analysis, where they provide computer approximations to special functions and are connected to some convergence acceleration methods. Continued fractions are also used in number theory, computer science, automata, electronics, etc.
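The connection with Euclid's algorithm is easy to exhibit: the partial quotients of a rational number are exactly the successive quotients produced by the algorithm. A small sketch, with an illustrative function name of our own:

    from fractions import Fraction

    def continued_fraction(x: Fraction):
        """Partial quotients of a rational x, via Euclid's algorithm."""
        a, p, q = [], x.numerator, x.denominator
        while q:
            a.append(p // q)          # the quotient is the next partial quotient
            p, q = q, p % q           # one step of Euclid's algorithm
        return a

    # 355/113, the classical rational approximation to pi:
    print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]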
At the beginning we would like to introduce a refinement. The term 'VLSI planarization' means planarization of a circuit of VLSI, i.e. the embedding of a VLSI circuit in the plane by different criteria such as the minimum number of connectors, the minimum total length of connectors, the minimum number of over-the-element routes, etc. A connector is designed to connect the broken sections of a net. It can be implemented in different ways depending on the technology. Connectors for a bipolar VLSI are implemented by diffused tunnels, for instance. By over-the-element route we shall mean a connection which intersects the enclosing rectangle of an element (or a cell). The possibility of constructing such connections during circuit planarization is reflected in element models and can be ensured, for example, by the availability of areas within the rectangles where connections may be routed. VLSI planarization is one of the basic stages (others will be discussed below) of the so-called topological (in the mathematical sense) approach to VLSI design. This approach does not lie in the direction of the classical approach to automation of VLSI layout design. In the classical approach to computer aided design the placement and routing problems are solved successively. The topological approach, in contrast, allows one to solve both problems at the same time. This is achieved by constructing a planar embedding of a circuit and obtaining the proper VLSI layout on the basis of it.
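A first step of such an approach can be sketched with a plain planarity test: if the circuit graph is non-planar, at least one net must be broken and its sections rejoined by a connector. A toy check using networkx, with a hypothetical circuit of our own:

    import networkx as nx

    # Toy "circuit": elements as vertices, nets as edges; K5 is the
    # smallest non-planar graph, so it cannot be embedded as drawn.
    circuit = nx.complete_graph(5)
    is_planar, cert = nx.check_planarity(circuit)
    if not is_planar:
        # Some net must be broken and rejoined with a connector
        # (e.g. a diffused tunnel in a bipolar VLSI).
        print("non-planar: at least one connector is required")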
This volume is a selection from the 281 published papers of Joseph Leonard Walsh, former US Naval Officer and professor at the University of Maryland and Harvard University. The nine broad sections are ordered following the evolution of his work. Commentaries and discussions of subsequent developments are appended to most of the sections. Also included is one of Walsh's most influential works, "A Closed Set of Normal Orthogonal Functions," which introduced what are now known as Walsh functions.
The course of lectures on numerical methods (Part I) given by the author to students in the numerical third of the course of the mathematics-mechanics department of Leningrad State University is set down in this volume. Only the topics which, in the opinion of the author, are of the greatest value for numerical methods are considered in this book. This permits making the book comparatively small in size and, the author hopes, accessible to a sufficiently wide circle of readers. The book may be used not only by students in daily classes, but also by students taking correspondence courses and persons connected with practical computation who desire to improve their theoretical background. The author is deeply grateful to V. I. Krylov, the organizer of the course on numerical methods (Part I) at Leningrad State University, for his considerable assistance and constant interest in the work on this book, and also for his attentive review of the manuscript. The author is very grateful to G. P. Akilov and I. K. Daugavet for a series of valuable suggestions and observations. The Author. Chapter I, Numerical Solution of Equations: in this chapter, methods for the numerical solution of equations of the form P(x) = 0 will be considered, where P(x) is in general a complex-valued function.
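As a minimal illustration of that chapter's subject, here is a sketch of Newton's method for P(x) = 0, which works equally well when P is complex-valued; the names are ours, not the book's:

    def newton(P, dP, z0, tol=1e-12, maxit=100):
        """Newton's method for P(z) = 0; P may be complex-valued."""
        z = z0
        for _ in range(maxit):
            dz = P(z) / dP(z)
            z = z - dz                 # Newton step: z <- z - P(z)/P'(z)
            if abs(dz) < tol:
                return z
        raise RuntimeError("no convergence")

    # z^2 + 1 = 0 has the complex roots +/- i:
    print(newton(lambda z: z * z + 1, lambda z: 2 * z, 1 + 1j))   # ~1j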
The Third International Symposium on Multivariate Approximation Theory was held at the Oberwolfach Mathematical Research Institute, Black Forest, February 8-12, 1982. The preceding conferences on this topic were held in 1976* and 1979**. The conference brought together 50 mathematicians from 14 countries. These Proceedings form a record of most of the papers presented at the Symposium. The topics treated cover different problems of multivariate approximation theory such as new results concerning approximation by polynomials in Sobolev spaces, biorthogonal systems and orthogonal series of functions in several variables, multivariate spline functions, group theoretic and functional analytic methods, positive linear operators, error estimates for approximation procedures and cubature formulae, Boolean methods in multivariate interpolation and the numerical application of summation procedures. Special emphasis was placed on the application of multivariate approximation in various fields of science. One mathematician was sorely missed at the Symposium. Professor Arthur Sard, who had actively taken part in the earlier conferences, passed away in August of 1980. Since he was a friend of many of the participants, the editors wish to dedicate these Proceedings to the memory of this distinguished mathematician. A brief appreciation of his life and mathematical work appears as well. *"Constructive Theory of Functions of Several Variables". Edited by W. Schempp and Karl Zeller. Lecture Notes in Mathematics, Vol.
Mathematical modelling of many physical processes involves rather complex differential, integral, and integro-differential equations which can be solved directly only in a number of cases. Therefore, as a first step, an original problem has to be considerably simplified in order to get a preliminary knowledge of the most important qualitative features of the process under investigation and to estimate the effect of various factors. Sometimes a solution of the simplified problem can be obtained in the analytical form convenient for further investigation. At this stage of the mathematical modelling it is useful to apply various special functions. Many model problems of atomic, molecular, and nuclear physics, electrodynamics, and acoustics may be reduced to equations of hypergeometric type,

    σ(x)y'' + τ(x)y' + λy = 0,    (0.1)

where σ(x) and τ(x) are polynomials of at most the second and first degree respectively and λ is a constant [E7, A1, N18]. Some solutions of (0.1) are functions extensively used in mathematical physics such as classical orthogonal polynomials (the Jacobi, Laguerre, and Hermite polynomials) and hypergeometric and confluent hypergeometric functions.
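For instance, the physicists' Hermite polynomial H_n solves (0.1) with σ(x) = 1, τ(x) = -2x and λ = 2n. A quick numerical check with NumPy, our own illustration rather than the book's:

    import numpy as np
    from numpy.polynomial import hermite as H

    n = 5
    c = np.zeros(n + 1); c[n] = 1.0            # H_n expressed in the Hermite basis
    x = np.linspace(-2.0, 2.0, 7)
    y   = H.hermval(x, c)                      # H_n(x)
    yp  = H.hermval(x, H.hermder(c, 1))        # H_n'(x)
    ypp = H.hermval(x, H.hermder(c, 2))        # H_n''(x)
    residual = ypp - 2 * x * yp + 2 * n * y    # sigma*y'' + tau*y' + lambda*y
    print(np.max(np.abs(residual)))            # zero up to round-off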
The book is a revised and updated version of the lectures given by the author at the University of Timișoara during the academic year 1990-1991. Its goal is to present in detail some old and new aspects of the geometry of symplectic and Poisson manifolds and to point out some of their applications in Hamiltonian mechanics and geometric quantization. The material is organized as follows. In Chapter 1 we collect some general facts about symplectic vector spaces, symplectic manifolds and symplectic reduction. Chapter 2 deals with the study of Hamiltonian mechanics. We present here the general theory of Hamiltonian mechanical systems, the theory of the corresponding Poisson bracket and also some examples of infinite-dimensional Hamiltonian mechanical systems. Chapter 3 starts with some standard facts concerning the theory of Lie groups and Lie algebras and then continues with the theory of momentum mappings and the Marsden-Weinstein reduction. The theory of Hamilton-Poisson mechanical systems makes the object of Chapter 4. Chapter 5 is dedicated to the study of the stability of the equilibrium solutions of the Hamiltonian and the Hamilton-Poisson mechanical systems. We present here some of the remarkable results due to Holm, Marsden, Ratiu and Weinstein. Next, Chapters 6 and 7 are devoted to the theory of geometric quantization where we try to solve, in a geometrical way, the so-called Dirac problem from quantum mechanics. We follow here the construction given by Kostant and Souriau around 1964.
Computational aspects of geometry of numbers have been revolutionized by the Lenstra-Lenstra-Lovász lattice reduction algorithm (LLL), which has led to breakthroughs in fields as diverse as computer algebra, cryptology, and algorithmic number theory. After its publication in 1982, LLL was immediately recognized as one of the most important algorithmic achievements of the twentieth century, because of its broad applicability and apparent simplicity. Its popularity has kept growing since, as testified by the hundreds of citations of the original article, and the ever more frequent use of LLL as a synonym for lattice reduction. As an unfortunate consequence of the pervasiveness of the LLL algorithm, researchers studying and applying it belong to diverse scientific communities, and seldom meet. While discussing that particular issue with Damien Stehlé at the 7th Algorithmic Number Theory Symposium (ANTS VII) held in Berlin in July 2006, John Cremona accurately remarked that 2007 would be the 25th anniversary of LLL and this deserved a meeting to celebrate that event. The year 2007 was also involved in another arithmetical story. In 2003 and 2005, Ali Akhavi, Fabien Laguillaumie, and Brigitte Vallée with other colleagues organized two workshops on cryptology and algorithms with a strong emphasis on lattice reduction: CAEN '03 and CAEN '05, CAEN denoting both the location and the content (Cryptologie et Algorithmique En Normandie). Very quickly after the ANTS conference, Ali Akhavi, Fabien Laguillaumie, and Brigitte Vallée were thus readily contacted and reacted very enthusiastically about organizing the LLL birthday conference. The organization committee was formed.
The requirement of causality in system theory is inevitably accompanied by the appearance of certain mathematical operations, namely the Riesz projection, the Hilbert transform, and the spectral factorization mapping. A classical example illustrating this is the determination of the so-called Wiener filter (the linear, minimum mean square error estimation filter for stationary stochastic sequences [88]). If the filter is not required to be causal, the transfer function of the Wiener filter is simply given by H(ω) = Φ_xy(ω)/Φ_xx(ω), where Φ_xy(ω) and Φ_xx(ω) are certain given functions. However, if one requires that the estimation filter is causal, the transfer function of the optimal filter is given by

    H(ω) = 1/[Φ_xx]_+(ω) · P_+ [ Φ_xy(ω) / [Φ_xx]_-(ω) ],    ω ∈ (-π, π].

Here [Φ_xx]_+ and [Φ_xx]_- represent the so-called spectral factors of Φ_xx, and P_+ is the so-called Riesz projection. Thus, compared to the non-causal filter, two additional operations are necessary for the determination of the causal filter, namely the spectral factorization mapping Φ_xx → ([Φ_xx]_+, [Φ_xx]_-) and the Riesz projection P_+.
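A hedged sketch of the spectral factorization mapping itself, using Kolmogorov's cepstral method: take the logarithm of Φ_xx, apply the Riesz projection P_+ to its Fourier coefficients, and exponentiate. NumPy stands in for the abstract operators, and the function name is ours:

    import numpy as np

    def spectral_factor(phi):
        """Causal (minimum-phase) spectral factor [phi]_+ of a positive, even
        spectrum phi sampled on N uniform frequencies (cepstral method)."""
        N = len(phi)
        c = np.fft.ifft(np.log(phi)).real   # Fourier coefficients of log(phi)
        d = np.zeros(N)
        d[0] = c[0] / 2                     # split the zeroth coefficient evenly
        d[1:N // 2] = c[1:N // 2]           # Riesz projection P_+: causal part only
        return np.exp(np.fft.fft(d))        # [phi]_+ evaluated on the grid

    # phi(w) = |1 + 0.5 e^{-iw}|^2 should factor back into 1 + 0.5 e^{-iw}:
    w = 2 * np.pi * np.arange(64) / 64
    phi = np.abs(1 + 0.5 * np.exp(-1j * w)) ** 2
    print(np.max(np.abs(spectral_factor(phi) - (1 + 0.5 * np.exp(-1j * w)))))  # tiny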
This text is an introduction to methods of grid generation technology in scientific computing. Special attention is given to methods developed by the author for the treatment of singularly-perturbed equations, e.g. in modeling high Reynolds number flows. Functionals of conformality, orthogonality, energy and alignment are discussed.
Many phenomena of interest for applications are represented by differential equations which are defined in a domain whose boundary is a priori unknown, and is accordingly named a "free boundary." A further quantitative condition is then provided in order to exclude indeterminacy. Free boundary problems thus encompass a broad spectrum which is represented in this state-of-the-art volume by a variety of contributions of researchers in mathematics and applied fields like physics, biology and material sciences. Special emphasis has been reserved for mathematical modelling and for the formulation of new problems.
The classical theories of Linear Elasticity and Newtonian Fluids, though triumphantly elegant as mathematical structures, do not adequately describe the deformation and flow of most real materials. Attempts to characterize the behaviour of real materials under the action of external forces gave rise to the science of Rheology. Early rheological studies isolated the phenomena now labelled as viscoelastic. Weber (1835, 1841), researching the behaviour of silk threads under load, noted an instantaneous extension, followed by a further extension over a long period of time. On removal of the load, the original length was eventually recovered. He also deduced that the phenomena of stress relaxation and damping of vibrations should occur. Later investigators showed that similar effects may be observed in other materials. The German school referred to these as "Elastische Nachwirkung" or "the elastic aftereffect" while the British school, including Lord Kelvin, spoke of the "viscosity of solids." The universal adoption of the term "Viscoelasticity," intended to convey behaviour combining properties both of a viscous liquid and an elastic solid, is of recent origin, not being used for example by Love (1934), though Alfrey (1948) uses it in the context of polymers. The earliest attempts at mathematically modelling viscoelastic behaviour were those of Maxwell (1867) (actually in the context of his work on gases; he used this model for calculating the viscosity of a gas) and Meyer (1874).
One of the major concerns of theoretical computer science is the classification of problems in terms of how hard they are. The natural measure of difficulty of a function is the amount of time needed to compute it (as a function of the length of the input). Other resources, such as space, have also been considered. In recursion theory, by contrast, a function is considered to be easy to compute if there exists some algorithm that computes it. We wish to classify functions that are hard, i.e., not computable, in a quantitative way. We cannot use time or space, since the functions are not even computable. We cannot use Turing degree, since this notion is not quantitative. Hence we need a new notion of complexity, much like time or space, that is quantitative and yet in some way captures the level of difficulty (such as the Turing degree) of a function.