Structural optimization - a survey.
Mathematical optimization: an introduction.
Design optimization with the finite element program ANSYS(R).
B&B: a FE-program for cost minimization in concrete design.
The CAOS system.
Shape optimization with program CARAT.
DYNOPT: a program system for structural optimization - weight minimum design with respect to various constraints.
MBB-Lagrange: a computer aided structural design system.
The OASIS-ALADDIN structural optimization system.
The structural optimization system OPTSYS.
SAPOP: an optimization procedure for multicriteria structural design.
SHAPE: a structural shape optimization program.
STARS: mathematical foundations.
Real-life phenomena in the engineering, natural, or medical sciences are often described by a mathematical model, with the goal of analyzing the behaviour of the system numerically. Advantages of mathematical models are their cheap availability, the possibility of studying extreme situations that cannot be handled by experiments, and the possibility of simulating real systems during the design phase, before a first prototype is constructed. Moreover, they serve to verify decisions, to avoid expensive and time-consuming experimental tests, to analyze, understand, and explain the behaviour of systems, and to optimize design and production. As soon as a mathematical model contains differential dependencies on an additional parameter, typically the time, we call it a dynamical model. Two key questions always arise in a practical environment: (1) Is the mathematical model correct? (2) How can model parameters that cannot be measured directly be quantified? In principle, both questions are easily answered as soon as some experimental data are available. The idea is to compare measured data with predicted model function values and to minimize the differences over the whole parameter space. We have to reject a model if we are unable to find a reasonably accurate fit. To summarize, parameter estimation, or data fitting, is extremely important in all practical situations where a mathematical model and corresponding experimental data are available to describe the behaviour of a dynamical system.
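As a rough illustration of the data fitting idea sketched above (not taken from the book itself): a simple dynamical model dy/dt = -k*y is fitted to a handful of invented measurements by minimizing the squared differences between predicted and measured values. The model, data values, and parameter names are assumptions made for this sketch, and SciPy's least_squares routine merely stands in for a dedicated data fitting code.

# Minimal sketch: estimate the parameters (k, y0) of dy/dt = -k*y, y(0) = y0
# by minimizing the differences between measured and predicted values.
# All numbers below are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_meas = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
y_meas = np.array([2.05, 1.57, 1.24, 0.76, 0.47, 0.17])   # hypothetical measurements

def model(params):
    """Integrate dy/dt = -k*y and return y(t) at the measurement times."""
    k, y0 = params
    sol = solve_ivp(lambda t, y: -k * y, (t_meas[0], t_meas[-1]), [y0],
                    t_eval=t_meas, rtol=1e-8, atol=1e-10)
    return sol.y[0]

def residuals(params):
    """Differences between predicted and measured values."""
    return model(params) - y_meas

fit = least_squares(residuals, x0=[1.0, 1.0])      # initial guess for (k, y0)
print("estimated k, y0:", fit.x)
print("residual norm:", np.linalg.norm(fit.fun))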
This book contains the written versions of the main lectures presented at the Advanced Study Institute (ASI) on Computational Mathematical Programming, which was held in Bad Windsheim, Federal Republic of Germany, from July 23 to August 2, 1984, under the sponsorship of NATO. The ASI was organized by the Committee on Algorithms (COAL) of the Mathematical Programming Society. Co-directors were Karla Hoffmann (National Bureau of Standards, Washington, U.S.A.) and Jan Teigen (Rabobank Nederland, Zeist, The Netherlands). Ninety participants coming from about 20 different countries attended the ASI and contributed their efforts to achieve a highly interesting and stimulating meeting. Since 1947, when the first linear programming technique was developed, the importance of optimization models and their mathematical solution methods has steadily increased, and they now play a leading role in applied research areas. The basic idea of optimization theory is to minimize (or maximize) a function of several variables subject to certain restrictions. This general mathematical concept covers a broad class of possible practical applications arising in mechanical, electrical, or chemical engineering, physics, economics, medicine, biology, etc. There are both industrial applications (e.g. design of mechanical structures, production plans) and applications in the natural, engineering, and social sciences (e.g. chemical equilibrium problems, crystallography problems).
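The general problem form mentioned in the blurb (minimizing a function of several variables subject to restrictions) can be made concrete with a small sketch. The two-variable objective, the two linear inequality constraints, and the bounds below are invented for illustration, and SciPy's SLSQP routine serves only as a readily available sequential quadratic programming solver, not as one of the codes discussed at the Institute.

# Minimal sketch of a constrained nonlinear program:
# minimize (x1 - 1)^2 + (x2 - 2.5)^2 subject to two linear inequalities
# and nonnegativity bounds. Data are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # f(x) = (x1 - 1)^2 + (x2 - 2.5)^2
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},   # x1 - 2*x2 + 2 >= 0
    {"type": "ineq", "fun": lambda x: -x[0] - 2.0 * x[1] + 6.0},  # -x1 - 2*x2 + 6 >= 0
]
bounds = [(0.0, None), (0.0, None)]                               # x1, x2 >= 0

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print("solution:", result.x)       # approximately (1.4, 1.7)
print("objective:", result.fun)    # approximately 0.8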
This collection of 188 nonlinear programming test examples is a supplement to the test problem collection published by Hock and Schittkowski [2]. As in the former case, the intention is to present an extensive set of nonlinear programming problems that were used by other authors in the past to develop, test, or compare optimization algorithms. There is no distinction between an "easy" or "difficult" test problem, since any related classification must depend on the underlying algorithm and test design. For instance, a nonlinear least squares problem may be solved easily by a special purpose code within a few iterations, but the same problem can be unsolvable for a general nonlinear programming code due to ill-conditioning. Thus one should consider both collections as a possible offer from which to choose suitable problems for a specific test frame. One difference between the new collection and the former one published by Hock and Schittkowski [2] is the attempt to present some more realistic or "real world" problems. Moreover, a couple of nonlinear least squares test problems were collected, which can be used e.g. to test data fitting algorithms. The presentation of the test problems is somewhat simplified, and numerical solutions are computed by only one nonlinear programming code, the sequential quadratic programming algorithm NLPQL of Schittkowski [3]. But both test problem collections are implemented in the same way, in the form of special FORTRAN subroutines, so that the same test programs can be used.
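The distinction drawn above between a special purpose least squares code and a general nonlinear programming code can be illustrated with a small sketch: the same invented exponential-decay fitting problem is handed once to a residual-based least squares solver and once to a general minimizer that only sees the scalar sum of squares. The data, model, and solver choices (SciPy's least_squares and a BFGS-based minimize) are assumptions for illustration; neither is the NLPQL code referred to in the text, and on this well-conditioned toy problem both approaches succeed.

# The same data fitting problem posed two ways: once as a structured
# residual problem for a least squares solver, once as a plain scalar
# objective for a general minimizer. Data are invented for illustration.
import numpy as np
from scipy.optimize import least_squares, minimize

t = np.linspace(0.0, 4.0, 20)
y = 3.0 * np.exp(-1.3 * t) + 0.02 * np.sin(40.0 * t)   # synthetic "measurements"

def residuals(p):
    """r_i(p) = model(t_i; p) - y_i for the model a*exp(-b*t)."""
    a, b = p
    return a * np.exp(-b * t) - y

# special purpose: exploits the residual structure r(p)
fit_ls = least_squares(residuals, x0=[1.0, 1.0])

# general purpose: only sees the scalar objective sum_i r_i(p)^2
fit_gen = minimize(lambda p: np.sum(residuals(p)**2), x0=[1.0, 1.0],
                   method="BFGS")

print("least squares solver:", fit_ls.x)
print("general minimizer:   ", fit_gen.x)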
The performance of a nonlinear programming algorithm can only be ascertained by numerical experiments, requiring the collection and implementation of test examples depending on the desired performance criterion. This book should be considered as an assistance for a test designer, since it presents an extensive collection of nonlinear programming problems which have been used in the past to test or compare optimization programs. He will be informed about the optimal solution, about the structure of the problem in the neighbourhood of the solution, and, in addition, about the usage of the corresponding FORTRAN subroutines if he is interested in obtaining them on a magnetic tape. Chapter I shows how the test examples are documented. In particular, the evaluation of computable information about the solution of a problem is outlined. It is explained how the optimal solution, the optimal Lagrange multipliers, and the condition number of the projected Hessian of the Lagrangian are obtained. Furthermore, a classification number is defined allowing a formal description of a test problem, and the documentation scheme is described which is used in Chapter IV to present the problems.
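As a rough sketch of how the documented quantities could be recovered at a known solution (an assumed reading of the scheme, with an invented two-variable example rather than one of the book's problems): the Lagrange multipliers are estimated from the stationarity condition grad f(x*) = A(x*)^T lambda by linear least squares, and the condition number of the projected Hessian of the Lagrangian is computed over a null-space basis of the active constraint Jacobian.

# Sketch: multipliers and projected Hessian condition number at a known
# solution of an invented problem: minimize x1^2 + x2^2 s.t. x1 + x2 - 2 = 0.
import numpy as np

x_opt = np.array([1.0, 1.0])                 # known optimal point

grad_f = 2.0 * x_opt                         # gradient of the objective at x*
A = np.array([[1.0, 1.0]])                   # Jacobian of the active constraint at x*

# stationarity: grad_f = A.T @ lam  ->  solve for lam in the least squares sense
lam, *_ = np.linalg.lstsq(A.T, grad_f, rcond=None)
print("Lagrange multipliers:", lam)          # approximately [2.0]

# Hessian of the Lagrangian L = f - lam^T g (the constraint is linear,
# so only the objective contributes)
H_L = 2.0 * np.eye(2)

# Z spans the null space of the active constraint Jacobian A
_, _, Vt = np.linalg.svd(A)
Z = Vt[A.shape[0]:].T                        # columns form a null-space basis

projected = Z.T @ H_L @ Z
print("condition number of projected Hessian:", np.linalg.cond(projected))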
The increasing importance of mathematical programming for the solution of complex nonlinear systems arising in practical situations requires the development of qualified optimization software. In recent years, a lot of effort has been made to implement efficient and reliable optimization programs, and we can observe a wide distribution of these programs both for research and for industrial applications. In spite of their practical importance, only a few attempts have been made in the past to come to comparative conclusions and to give a designer the possibility to decide which optimization program could solve his individual problems in the most desirable way. Box [BO 1966], Huang, Levy [HL 1970], Himmelblau [HI 1971], Dumitru [DU 1974], and Moré, Garbow, Hillstrom [MG 1978], for example, compared algorithms for unrestricted minimization, while Bard [BD 1970], McKeown [MK 1975], and Ramsin, Wedin [RW 1977] studied codes for nonlinear least squares problems. Codes for the linear case are compared by Bartels [BA 1975] and Schittkowski, Stoer [SS 1979]. Extensive tests for geometric programming algorithms are found in Dembo [DE 1976b], Rijckaert [RI 1977], and Rijckaert, Martens [RM 1978].