This book constitutes the refereed proceedings of the 11th International Symposium on Experimental Algorithms, SEA 2012, held in Bordeaux, France, in June 2012. The 31 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 64 submissions. They present current research in the design, analysis, experimental evaluation, and engineering of algorithms, as well as in various aspects of computational optimization and its applications.
The papers presented here describe research to improve the general understanding of the application of SAMR to practical problems, to identify issues critical to efficient and effective implementation on high performance computers and to stimulate the development of a community code repository for software including benchmarks to assist in the evaluation of software and compiler technologies. The ten chapters have been divided into two parts reflecting two major issues in the topic: programming complexity of SAMR algorithms and the applicability and numerical challenges of SAMR methods.
This is a self-contained introduction to algebraic curves over finite fields and geometric Goppa codes. There are four main divisions in the book. The first is a brief exposition of basic concepts and facts of the theory of error-correcting codes (Part I). The second is a complete presentation of the theory of algebraic curves, especially the curves defined over finite fields (Part II). The third is a detailed description of the theory of classical modular curves and their reduction modulo a prime number (Part III). The fourth (and basic) is the construction of geometric Goppa codes and the production of asymptotically good linear codes coming from algebraic curves over finite fields (Part IV). The theory of geometric Goppa codes is a fascinating topic where two extremes meet: the highly abstract and deep theory of algebraic (specifically modular) curves over finite fields and the very concrete problems in the engineering of information transmission. At the present time there are two essentially different ways to produce asymptotically good codes coming from algebraic curves over a finite field with an extremely large number of rational points. The first way, developed by M. A. Tsfasman, S. G. Vladut and Th. Zink [210], is rather difficult and assumes a serious acquaintance with the theory of modular curves and their reduction modulo a prime number. The second way, proposed recently by A.
The study of optimal shape design can be arrived at by asking the following question: "What is the best shape for a physical system?" This book is an applications-oriented study of such physical systems; in particular, those which can be described by an elliptic partial differential equation and where the shape is found by the minimum of a single criterion function. There are many problems of this type in high-technology industries. In fact, most numerical simulations of physical systems are solved not to gain better understanding of the phenomena but to obtain better control and design. Problems of this type are described in Chapter 2. Traditionally, optimal shape design has been treated as a branch of the calculus of variations and more specifically of optimal control. This subject interfaces with no less than four fields: optimization, optimal control, partial differential equations (PDEs), and their numerical solution; this is the most difficult aspect of the subject. Each of these fields is reviewed briefly: PDEs (Chapter 1), optimization (Chapter 4), optimal control (Chapter 5), and numerical methods (Chapters 1 and 4).
Energy levels, resonances, vibrations, feature extraction, factor analysis - the names vary from discipline to discipline; however, all involve eigenvalue/eigenvector computations. An engineer or physicist who is modeling a physical process, structure, or device is constrained to select a model for which the subsequently-required computations can be performed. This constraint often leads to reduced order or reduced size models which may or may not preserve all of the important characteristics of the system being modeled. Ideally, the modeler should not be forced to make such a priori reductions. It is our intention to provide here procedures which will allow the direct and successful solution of many large 'symmetric' eigenvalue problems, so that at least in problems where the computations are of this type there will be no need for model reduction. Matrix eigenelement computations can be classified as small, medium, or large scale, in terms of their relative degrees of difficulty as measured by the amount of computer storage and time required to complete the desired computations. A matrix eigenvalue problem is said to be small scale if the given matrix has order smaller than 100. Well-documented and reliable FORTRAN programs exist for small scale eigenelement computations, see in particular EISPACK [1976, 1977]. Typically those programs explicitly transform the given matrix into a simpler canonical form. The eigenelement computations are then performed on the canonical form.
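The small/large distinction drawn above - dense canonical-form transformations for small matrices, iterative methods for large sparse symmetric ones - survives in today's standard libraries. As a hedged sketch (SciPy's Lanczos-type `eigsh`, not the book's own codes; the 1-D Laplacian test matrix is our illustrative choice):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Large sparse symmetric matrix: second-difference (1-D Laplacian) of order n.
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

# A few eigenvalues nearest zero via a shift-invert Lanczos-type iteration;
# no explicit transformation of A into a dense canonical form is needed.
vals = np.sort(eigsh(A, k=4, sigma=0, return_eigenvectors=False))

# This matrix has known eigenvalues 4 sin^2(k*pi / (2*(n+1))), k = 1..n,
# which lets us verify the computed extremes.
exact = 4 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1)))**2
```

For a matrix of order 2000 this runs in a fraction of a second, whereas a dense reduction would already be wasteful; for truly large problems it is the only practical route.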
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy from 24 to 26 June, 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and Oracle DBMS reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. Editors, January 1997
The study of scan statistics and their applications to many different scientific and engineering problems has received considerable attention in the literature recently. In addition to challenging theoretical problems, the area of scan statistics has also found exciting applications in diverse disciplines such as archaeology, astronomy, epidemiology, geography, material science, molecular biology, reconnaissance, reliability and quality control, sociology, and telecommunication. This will be clearly evident when one goes through this volume. In this volume, we have brought together a collection of experts working in this area of research in order to review some of the developments that have taken place over the years and also to present their new works and point out some open problems. With this in mind, we selected authors for this volume with some having theoretical interests and others being primarily concerned with applications of scan statistics. Our sincere hope is that this volume will thus provide a comprehensive survey of all the developments in this area of research and hence will serve as a valuable source as well as reference for theoreticians and applied researchers. Graduate students interested in this area will find this volume to be particularly useful as it points out many open challenging problems that they could pursue. This volume will also be appropriate for teaching a graduate-level special course on this topic.
The history of continued fractions is certainly one of the longest among those of mathematical concepts, since it begins with Euclid's algorithm for the greatest common divisor at least three centuries B.C. As is often the case, and like Monsieur Jourdain in Moliere's "Le bourgeois gentilhomme" (who was speaking in prose though he did not know he was doing so), continued fractions were used for many centuries before their real discovery. The history of continued fractions and Pade approximants is also quite important, since they played a leading role in the development of some branches of mathematics. For example, they were the basis for the proof of the transcendence of π in 1882, an open problem for more than two thousand years, and also for our modern spectral theory of operators. Actually they still are of great interest in many fields of pure and applied mathematics and in numerical analysis, where they provide computer approximations to special functions and are connected to some convergence acceleration methods. Continued fractions are also used in number theory, computer science, automata, electronics, etc.
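The link between Euclid's algorithm and continued fractions mentioned above is concrete: the quotients produced while computing gcd(p, q) are exactly the partial quotients of the continued fraction of p/q. A minimal sketch (the function name is our own):

```python
def continued_fraction(p, q):
    """Partial quotients of p/q, read off from Euclid's gcd algorithm."""
    cf = []
    while q:
        a, r = divmod(p, q)   # one Euclid step: p = a*q + r
        cf.append(a)
        p, q = q, r
    return cf

# 649/200 = 3 + 1/(4 + 1/(12 + 1/4)), so the quotients are [3, 4, 12, 4].
quotients = continued_fraction(649, 200)
```

The remainders shrink exactly as in the gcd computation, so the expansion of a rational number always terminates.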
At the beginning we would like to introduce a refinement. The term 'VLSI planarization' means planarization of a VLSI circuit, i.e. the embedding of a VLSI circuit in the plane by different criteria such as the minimum number of connectors, the minimum total length of connectors, the minimum number of over-the-element routes, etc. A connector is designed to connect the broken sections of a net. It can be implemented in different ways depending on the technology. Connectors for a bipolar VLSI are implemented by diffused tunnels, for instance. By over-the-element route we shall mean a connection which intersects the enclosing rectangle of an element (or a cell). The possibility of constructing such connections during circuit planarization is reflected in element models and can be ensured, for example, by the availability of areas within the rectangles where connections may be routed. VLSI planarization is one of the basic stages (others will be discussed below) of the so-called topological (in the mathematical sense) approach to VLSI design. This approach does not lie in the direction of the classical approach to automation of VLSI layout design. In the classical approach to computer aided design the placement and routing problems are solved successively. The topological approach, in contrast, allows one to solve both problems at the same time. This is achieved by constructing a planar embedding of a circuit and obtaining the proper VLSI layout on the basis of it.
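Whether a circuit graph admits a planar embedding at all can be tested in linear time. As a hedged illustration (using the general-purpose networkx library, not any VLSI-specific tool from the book): the complete graph K4 is planar, while K5 is not, so a K5-like net structure would force at least one connector.

```python
import networkx as nx

# K4 (4 mutually connected nodes) can be drawn in the plane without crossings;
# check_planarity also returns a combinatorial planar embedding.
is_planar_k4, embedding = nx.check_planarity(nx.complete_graph(4))

# K5 cannot: by Kuratowski's theorem it is a minimal non-planar graph,
# so in a circuit containing it some net must be broken and rejoined.
is_planar_k5, _ = nx.check_planarity(nx.complete_graph(5))
```

In the topological approach sketched above, such an embedding (when it exists) is the object from which both placement and routing are derived simultaneously.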
This volume is a selection from the 281 published papers of Joseph Leonard Walsh, former US Naval Officer and professor at the University of Maryland and Harvard University. The nine broad sections are ordered following the evolution of his work. Commentaries and discussions of subsequent developments are appended to most of the sections. Also included is one of Walsh's most influential works, "A closed set of normal orthogonal functions," which introduced what are now known as Walsh functions.
The course of lectures on numerical methods (Part I) given by the author to students in the numerical third of the course of the mathematics-mechanics department of Leningrad State University is set down in this volume. Only the topics which, in the opinion of the author, are of the greatest value for numerical methods are considered in this book. This permits making the book comparatively small in size and, the author hopes, accessible to a sufficiently wide circle of readers. The book may be used not only by students in daily classes, but also by students taking correspondence courses and persons connected with practical computation who desire to improve their theoretical background. The author is deeply grateful to V. I. Krylov, the organizer of the course on numerical methods (Part I) at Leningrad State University, for his considerable assistance and constant interest in the work on this book, and also for his attentive review of the manuscript. The author is very grateful to G. P. Akilov and I. K. Daugavet for a series of valuable suggestions and observations. The Author. Chapter I, Numerical Solution of Equations: in this chapter, methods for the numerical solution of equations of the form P(x) = 0 will be considered, where P(x) is in general a complex-valued function.
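The excerpt does not list the book's specific methods, but a classical representative of methods for P(x) = 0, valid for complex-valued P as well, is Newton's iteration. A minimal sketch (function names are ours) finding a complex root of x^2 + 1 = 0:

```python
def newton(P, dP, x0, tol=1e-12, max_iter=100):
    """Newton's method for P(x) = 0; works for complex-valued P as well."""
    x = x0
    for _ in range(max_iter):
        step = P(x) / dP(x)   # Newton correction: P(x) / P'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# x^2 + 1 = 0 has no real roots; starting near i, the iteration converges to i.
root = newton(lambda x: x**2 + 1, lambda x: 2*x, 0.5 + 1.0j)
```

Starting from a point in the lower half-plane would instead converge to the conjugate root -i, which is why the choice of starting value matters for complex equations.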
The Third International Symposium on Multivariate Approximation Theory was held at the Oberwolfach Mathematical Research Institute, Black Forest, February 8-12, 1982. The preceding conferences on this topic were held in 1976* and 1979**. The conference brought together 50 mathematicians from 14 countries. These Proceedings form a record of most of the papers presented at the Symposium. The topics treated cover different problems on multivariate approximation theory such as new results concerning approximation by polynomials in Sobolev spaces, biorthogonal systems and orthogonal series of functions in several variables, multivariate spline functions, group theoretic and functional analytic methods, positive linear operators, error estimates for approximation procedures and cubature formulae, Boolean methods in multivariate interpolation and the numerical application of summation procedures. Special emphasis was placed on the application of multivariate approximation in various fields of science. One mathematician was sorely missed at the Symposium. Professor Arthur Sard, who had actively taken part in the earlier conferences, passed away in August of 1980. Since he was a friend of many of the participants, the editors wish to dedicate these Proceedings to the memory of this distinguished mathematician. A brief appreciation of his life and mathematical work appears as well. *"Constructive Theory of Functions of Several Variables". Edited by W. Schempp and Karl Zeller. Lecture Notes in Mathematics, Vol.
Mathematical modelling of many physical processes involves rather complex differential, integral, and integro-differential equations which can be solved directly only in a number of cases. Therefore, as a first step, an original problem has to be considerably simplified in order to get a preliminary knowledge of the most important qualitative features of the process under investigation and to estimate the effect of various factors. Sometimes a solution of the simplified problem can be obtained in the analytical form convenient for further investigation. At this stage of the mathematical modelling it is useful to apply various special functions. Many model problems of atomic, molecular, and nuclear physics, electrodynamics, and acoustics may be reduced to equations of hypergeometric type, σ(x)y'' + τ(x)y' + λy = 0, (0.1) where σ(x) and τ(x) are polynomials of at most the second and first degree respectively and λ is a constant [E7, AI, N18]. Some solutions of (0.1) are functions extensively used in mathematical physics such as classical orthogonal polynomials (the Jacobi, Laguerre, and Hermite polynomials) and hypergeometric and confluent hypergeometric functions.
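For instance, the Hermite polynomials arise from (0.1) with σ(x) = 1, τ(x) = -2x, and λ = 2n. A quick symbolic check (using SymPy, our choice of tool, not the book's) confirms the first few cases:

```python
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    y = sp.hermite(n, x)  # Hermite polynomial H_n(x)
    # Equation of hypergeometric type (0.1) with sigma = 1, tau = -2x, lam = 2n:
    residual = sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + 2*n*y
    assert sp.simplify(residual) == 0
```

The Jacobi and Laguerre families satisfy (0.1) similarly, with σ of degree two and one respectively, which is exactly the classification by the degree of σ mentioned above.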
Many practical applications require the reconstruction of a multivariate function from discrete, unstructured data. This book gives a self-contained, complete introduction to this subject. It concentrates on truly meshless methods such as radial basis functions, moving least squares, and partitions of unity. The book starts with an overview of typical applications of scattered data approximation, coming from surface reconstruction, fluid-structure interaction, and the numerical solution of partial differential equations. It then leads the reader from basic properties to the current state of research, addressing all important issues, such as existence, uniqueness, approximation properties, numerical stability, and efficient implementation. Each chapter ends with a section giving information on the historical background and hints for further reading. Complete proofs are included, making the book perfectly suited for graduate courses on multivariate approximation; it can also support courses in computer aided geometric design and meshless methods for partial differential equations.
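As a small illustration of meshless scattered-data approximation with radial basis functions (a sketch using SciPy's RBFInterpolator, not code from the book; the sample function and point count are arbitrary choices):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(seed=0)
sites = rng.uniform(-1.0, 1.0, size=(200, 2))       # unstructured sample sites
values = np.sin(sites[:, 0]) * np.cos(sites[:, 1])  # data to reconstruct

# Radial basis function interpolant through the scattered data; no mesh needed.
rbf = RBFInterpolator(sites, values, kernel='thin_plate_spline')

query = np.array([[0.25, -0.3]])
approx = rbf(query)[0]
true = np.sin(0.25) * np.cos(-0.3)
```

Note that the interpolant is defined by the data sites alone, with no grid or triangulation, which is precisely the "truly meshless" property emphasized above.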
The book is a revised and updated version of the lectures given by the author at the University of Timisoara during the academic year 1990-1991. Its goal is to present in detail some old and new aspects of the geometry of symplectic and Poisson manifolds and to point out some of their applications in Hamiltonian mechanics and geometric quantization. The material is organized as follows. In Chapter 1 we collect some general facts about symplectic vector spaces, symplectic manifolds and symplectic reduction. Chapter 2 deals with the study of Hamiltonian mechanics. We present here the general theory of Hamiltonian mechanical systems, the theory of the corresponding Poisson bracket and also some examples of infinite-dimensional Hamiltonian mechanical systems. Chapter 3 starts with some standard facts concerning the theory of Lie groups and Lie algebras and then continues with the theory of momentum mappings and the Marsden-Weinstein reduction. The theory of Hamilton-Poisson mechanical systems makes the object of Chapter 4. Chapter 5 is dedicated to the study of the stability of the equilibrium solutions of the Hamiltonian and the Hamilton-Poisson mechanical systems. We present here some of the remarkable results due to Holm, Marsden, Ratiu and Weinstein. Next, Chapters 6 and 7 are devoted to the theory of geometric quantization, where we try to solve, in a geometrical way, the so-called Dirac problem from quantum mechanics. We follow here the construction given by Kostant and Souriau around 1964.
Computational aspects of the geometry of numbers have been revolutionized by the Lenstra-Lenstra-Lovász lattice reduction algorithm (LLL), which has led to breakthroughs in fields as diverse as computer algebra, cryptology, and algorithmic number theory. After its publication in 1982, LLL was immediately recognized as one of the most important algorithmic achievements of the twentieth century, because of its broad applicability and apparent simplicity. Its popularity has kept growing since, as testified by the hundreds of citations of the original article, and the ever more frequent use of LLL as a synonym for lattice reduction. As an unfortunate consequence of the pervasiveness of the LLL algorithm, researchers studying and applying it belong to diverse scientific communities, and seldom meet. While discussing that particular issue with Damien Stehlé at the 7th Algorithmic Number Theory Symposium (ANTS VII) held in Berlin in July 2006, John Cremona accurately remarked that 2007 would be the 25th anniversary of LLL and this deserved a meeting to celebrate that event. The year 2007 was also involved in another arithmetical story. In 2003 and 2005, Ali Akhavi, Fabien Laguillaumie, and Brigitte Vallée with other colleagues organized two workshops on cryptology and algorithms with a strong emphasis on lattice reduction: CAEN '03 and CAEN '05, CAEN denoting both the location and the content (Cryptologie et Algorithmique En Normandie). Very quickly after the ANTS conference, Ali Akhavi, Fabien Laguillaumie, and Brigitte Vallée were thus readily contacted and reacted very enthusiastically about organizing the LLL birthday conference. The organization committee was formed.
The requirement of causality in system theory is inevitably accompanied by the appearance of certain mathematical operations, namely the Riesz projection, the Hilbert transform, and the spectral factorization mapping. A classical example illustrating this is the determination of the so-called Wiener filter (the linear, minimum mean square error estimation filter for stationary stochastic sequences [88]). If the filter is not required to be causal, the transfer function of the Wiener filter is simply given by H(ω) = Φ_xy(ω)/Φ_xx(ω), where Φ_xy(ω) and Φ_xx(ω) are certain given functions. However, if one requires that the estimation filter is causal, the transfer function of the optimal filter is given by H(ω) = (1/[Φ_xx]_+(ω)) P_+ [Φ_xy/[Φ_xx]_-](ω), for ω ∈ (-π, π]. Here [Φ_xx]_+ and [Φ_xx]_- represent the so-called spectral factors of Φ_xx, and P_+ is the so-called Riesz projection. Thus, compared to the non-causal filter, two additional operations are necessary for the determination of the causal filter, namely the spectral factorization mapping Φ_xx ↦ ([Φ_xx]_+, [Φ_xx]_-), and the Riesz projection P_+.
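One standard numerical route to the spectral factor is the cepstral (Kolmogorov) method: take the log of the spectrum, keep only its causal part in the "quefrency" domain, and exponentiate back. The discretized sketch below is our illustration on a uniform frequency grid, not the book's construction:

```python
import numpy as np

def spectral_factor(phi):
    """Minimum-phase factor F with |F|^2 = phi (phi > 0 on an even-length grid)."""
    N = len(phi)
    c = np.fft.ifft(0.5 * np.log(phi))  # cepstrum of log sqrt(phi)
    w = np.zeros(N)
    w[0] = 1.0                          # keep the zero quefrency,
    w[1:N // 2] = 2.0                   # double the causal part,
    w[N // 2] = 1.0                     # keep Nyquist; the rest stays zero.
    return np.exp(np.fft.fft(w * c))    # exponentiate back to the frequency domain

# Check against a known minimum-phase filter h = (1, 0.5): Phi = |H|^2,
# and the recovered factor should reproduce H itself.
N = 64
H = np.fft.fft([1.0, 0.5], N)
phi = np.abs(H)**2
F = spectral_factor(phi)
```

The same exp/log trick underlies the close relationship, noted above, between spectral factorization, the Hilbert transform, and the Riesz projection: the causal windowing step is exactly a discrete Riesz projection applied to log Φ.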
This text is an introduction to methods of grid generation technology in scientific computing. Special attention is given to methods developed by the author for the treatment of singularly-perturbed equations, e.g. in modeling high Reynolds number flows. Functionals of conformality, orthogonality, energy and alignment are discussed.
Many phenomena of interest for applications are represented by differential equations which are defined in a domain whose boundary is a priori unknown, and is accordingly named a "free boundary." A further quantitative condition is then provided in order to exclude indeterminacy. Free boundary problems thus encompass a broad spectrum which is represented in this state-of-the-art volume by a variety of contributions of researchers in mathematics and applied fields like physics, biology and material sciences. Special emphasis has been reserved for mathematical modelling and for the formulation of new problems.
The classical theories of Linear Elasticity and Newtonian Fluids, though triumphantly elegant as mathematical structures, do not adequately describe the deformation and flow of most real materials. Attempts to characterize the behaviour of real materials under the action of external forces gave rise to the science of Rheology. Early rheological studies isolated the phenomena now labelled as viscoelastic. Weber (1835, 1841), researching the behaviour of silk threads under load, noted an instantaneous extension, followed by a further extension over a long period of time. On removal of the load, the original length was eventually recovered. He also deduced that the phenomena of stress relaxation and damping of vibrations should occur. Later investigators showed that similar effects may be observed in other materials. The German school referred to these as "Elastische Nachwirkung" or "the elastic aftereffect" while the British school, including Lord Kelvin, spoke of the "viscosity of solids." The universal adoption of the term "Viscoelasticity," intended to convey behaviour combining properties both of a viscous liquid and an elastic solid, is of recent origin, not being used for example by Love (1934), though Alfrey (1948) uses it in the context of polymers. The earliest attempts at mathematically modelling viscoelastic behaviour were those of Maxwell (1867) (actually in the context of his work on gases; he used this model for calculating the viscosity of a gas) and Meyer (1874).
One of the major concerns of theoretical computer science is the classification of problems in terms of how hard they are. The natural measure of difficulty of a function is the amount of time needed to compute it (as a function of the length of the input). Other resources, such as space, have also been considered. In recursion theory, by contrast, a function is considered to be easy to compute if there exists some algorithm that computes it. We wish to classify functions that are hard, i.e., not computable, in a quantitative way. We cannot use time or space, since the functions are not even computable. We cannot use Turing degree, since this notion is not quantitative. Hence we need a new notion of complexity, much like time or space, that is quantitative and yet in some way captures the level of difficulty (such as the Turing degree) of a function.
This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19-21, 2013. The material in the book includes theory and applications covering the design, analysis, and modeling of the key areas. The book will be useful for students, researchers, and professionals, as well as academicians, in understanding current research trends and findings and the future scope of research in computational intelligence, cyber security, and computational models.
The chapters in this volume, written by international experts from different fields of mathematics, are devoted to honoring George Isac, a renowned mathematician. These contributions focus on recent developments in complementarity theory, variational principles, stability theory of functional equations, nonsmooth optimization, and several other important topics at the forefront of nonlinear analysis and optimization.
This textbook is designed to give graduate students an understanding of integrable systems via the study of Riemann surfaces, loop groups, and twistors. The book has its origins in a series of lecture courses given by the authors, all of whom are internationally known mathematicians and renowned expositors. It is written in an accessible and informal style, and fills a gap in the existing literature. The introduction by Nigel Hitchin addresses the meaning of integrability: how do we recognize an integrable system? His own contribution then develops connections with algebraic geometry, and includes an introduction to Riemann surfaces, sheaves, and line bundles. Graeme Segal takes the Korteweg-de Vries and nonlinear Schroedinger equations as central examples, and explores the mathematical structures underlying the inverse scattering transform. He explains the roles of loop groups, the Grassmannian, and algebraic curves. In the final part of the book, Richard Ward explores the connection between integrability and the self-dual Yang-Mills equations, and describes the correspondence between solutions to integrable equations and holomorphic vector bundles over twistor space.
A "Sonderforschungsbereich" (SFB) is a programme of the "Deutsche Forschungsgemeinschaft" to financially support a concentrated research effort of a number of scientists located principally at one University, Research Laboratory, or a number of these situated in close proximity to one another so that active interaction among individual scientists is easily possible. Such SFB are devoted to a topic, in our case "Deformation and Failure in Metallic and Granular Materials", and financing is based on a peer reviewed proposal for three (now four) years with the intention of several prolongations after evaluation of intermediate progress and continuation reports. An SFB is terminated in general by a formal workshop, in which the state of the art of the achieved results is presented in oral and/or poster communications to which also guests are invited with whom the individual project investigators may have collaborated. Moreover, a research report in book form is produced, in which a number of articles from these lectures are selected and collected, which present those research results that withstood a rigorous reviewing process (with generally two or three referees). The theme deformation and failure of materials is presented here in two volumes of the Lecture Notes in Applied and Computational Mechanics by Springer Verlag, and the present volume is devoted to granular and porous continua. The complementary volume (Lecture Notes in Applied and Computational Mechanics, vol. 10, Eds. K. HUTTER & H.
You may like...
Kirstenbosch - A Visitor's Guide
Colin Paterson-Jones, John Winter
Paperback
Human and Machine Perception 2…
Virginio Cantoni, Vito di Gesu, …
Hardcover
R4,199
Discovery Miles 41 990
JBoss 3.2 Deployment and Administration
Meeraj Kunnumpurath
Paperback