This collection of challenging and well-designed test problems arising in the literature covers a wide spectrum of applications, including pooling/blending operations, heat exchanger network synthesis, homogeneous azeotropic separation, and dynamic optimization and optimal control problems.
In the fall of 1992, Professor Dr. M. Altar, chairman of the newly established department of Management with Computer Science at the Romanian-American University in Bucharest (a private university), introduced in the curriculum a course on Differential Equations and Optimal Control, asking us to teach such a course. It was an interesting challenge, since for the first time we had to teach such a mathematical course to students with an economic background and interests. It was a natural idea to start by looking at economic models which were described by differential equations and for which problems in decision making did arise. Since many such models were described in discrete time, we decided to develop in parallel the theory of differential equations and that of discrete-time systems, and also control theory in continuous and discrete time. The present book is the result of our teaching experience with this course. It is an enlarged version of the actual lectures where, depending on the background of the students, not all proofs could be given in detail. We would like to express our gratitude to the Board of the Romanian-American University, personally to the Rector, Professor Dr. Ion Smedescu, for support, encouragement and readiness to accept advanced ideas in the curriculum. The authors express their warmest thanks to Mrs. Monica Stan-Necula for the excellent processing of the manuscript.
This volume summarizes and synthesizes an aspect of research work that has been done in the area of Generalized Convexity over the past few decades. Specifically, the book focuses on V-invex functions in vector optimization that have grown out of the work of Jeyakumar and Mond in the 1990s. The authors integrate related research into the book and demonstrate the wide context from which the area has grown and continues to grow.
Multiwavelength Optical Networks systematically studies the major
research issues in WDM (Wavelength Division Multiplexing) optical
networks, such as routing and wavelength assignment, QoS multicast
routing, design of logical topologies, and placement of wavelength
converters. The book consists of two parts. The first part studies
the fundamental concepts and principles of WDM networks. The second
part discusses advanced and research issues of WDM networks.
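Routing and wavelength assignment, the first research issue named above, requires that lightpaths sharing a fiber link receive distinct wavelengths. A minimal greedy sketch of that constraint (the network and path names are illustrative, not from the book):

```python
def assign_wavelengths(paths):
    """Greedy wavelength assignment: two lightpaths that share a fiber
    link must carry distinct wavelengths (wavelength continuity along a
    path is assumed, i.e. no converters). Each path is a set of links."""
    assignment = {}
    for name, links in paths.items():
        used = {assignment[other] for other, other_links in paths.items()
                if other in assignment and links & other_links}
        wavelength = 0
        while wavelength in used:
            wavelength += 1
        assignment[name] = wavelength
    return assignment

# Hypothetical line network A-B-C-D with three lightpaths
paths = {
    "p1": {("A", "B"), ("B", "C")},
    "p2": {("B", "C"), ("C", "D")},  # shares link B-C with p1
    "p3": {("C", "D")},              # shares link C-D with p2
}
wls = assign_wavelengths(paths)
```

Because p3 conflicts only with p2, it can reuse p1's wavelength, which is exactly the reuse that makes WDM capacity planning a hard combinatorial problem.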
Operations research often solves deterministic optimization problems based on elegant and concise representations where all parameters are precisely known. In the face of uncertainty, probability theory is the traditional tool to appeal to, and stochastic optimization is actually a significant sub-area in operations research. However, the systematic use of prescribed probability distributions so as to cope with imperfect data is partially unsatisfactory. First, going from a deterministic to a stochastic formulation, a problem may become intractable. A good example is going from deterministic to stochastic scheduling problems like PERT. From the inception of the PERT method in the 1950s, it was acknowledged that data concerning activity duration times are generally not perfectly known, and the study of stochastic PERT was launched quite early. Even if the power of today's computers enables the stochastic PERT to be addressed to a large extent, its solutions still often require simplifying assumptions of some kind. Another difficulty is that stochastic optimization problems produce solutions on average. For instance, the criterion to be maximized is more often than not expected utility. This is not always a meaningful strategy. In the case when the underlying process is not repeated many times, let alone being one-shot, it is not clear whether this criterion is realistic, in particular if probability distributions are subjective. Expected utility was proposed as a rational criterion from first principles by Savage. In his view, the subjective probability distribution was basically an artefact useful to implement a certain ordering of solutions.
Optimum envelope-constrained filter design is concerned with time-domain synthesis of a filter such that its response to a specific input signal stays within prescribed upper and lower bounds, while minimizing the impact of input noise on the filter output or the impact of the shaped signal on other systems depending on the application. In many practical applications, such as in TV channel equalization, digital transmission, and pulse compression applied to radar, sonar and detection, the soft least square approach, which attempts to match the output waveform with a specific desired pulse, is not the most suitable one. Instead, it becomes necessary to ensure that the response stays within the hard envelope constraints defined by a set of continuous inequality constraints. The main advantage of using the hard envelope-constrained filter formulation is that it admits a whole set of allowable outputs. From this set one can then choose the one which results in the minimization of a cost function appropriate to the application at hand. The signal shaping problems so formulated are semi-infinite optimization problems. This monograph presents in a unified manner results that have been generated over the past several years and are scattered in the research literature. The material covered in the monograph includes problem formulation, numerical optimization algorithms, filter robustness issues and practical examples of the application of envelope constrained filter design. Audience: Postgraduate students, researchers in optimization and telecommunications engineering, and applied mathematicians.
Paul Williams, a leading authority on modeling in integer programming, has written a concise, readable introduction to the science and art of using modeling in logic for integer programming. Written for graduate and postgraduate students, as well as academics and practitioners, the book is divided into four chapters that all avoid the typical format of definitions, theorems and proofs and instead introduce concepts and results within the text through examples. References are given at the end of each chapter to the more mathematical papers and texts on the subject, and exercises are included to reinforce and expand on the material in the chapter. Methods of solving with both logic and IP are given and their connections are described. Applications in diverse fields are discussed, and Williams shows how IP models can be expressed as satisfiability problems and solved as such.
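The satisfiability connection mentioned at the end rests on a standard translation: each logical clause becomes a 0-1 linear constraint. A small sketch of that translation (my illustration, not the book's notation):

```python
from itertools import product

def clause_to_ip(clause):
    """Encode a SAT clause as the 0-1 linear constraint
    sum(literals) >= 1: a positive literal on variable i contributes
    x_i, a negated one contributes (1 - x_i). `clause` is a list of
    (variable_index, is_positive) pairs."""
    def satisfied(x):
        return sum(x[i] if positive else 1 - x[i]
                   for i, positive in clause) >= 1
    return satisfied

# (x0 OR NOT x1 OR x2): check the IP constraint against the logic
constraint = clause_to_ip([(0, True), (1, False), (2, True)])
for bits in product([0, 1], repeat=3):
    logical = bits[0] == 1 or bits[1] == 0 or bits[2] == 1
    assert constraint(bits) == logical
```

The constraint agrees with the clause on every 0-1 point, so a conjunction of clauses becomes a system of such inequalities over binary variables.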
Computing has become essential for the modeling, analysis, and
optimization of systems. This book is devoted to algorithms,
computational analysis, and decision models. The chapters are
organized in two parts: optimization models of decisions and models
of pricing and equilibria.
System Modeling and Optimization is an indispensable reference for anyone interested in the recent advances in these two disciplines. The book collects, for the first time, selected articles from the 21st and most recent IFIP TC 7 conference in Sophia Antipolis, France. Applied mathematicians and computer scientists can attest to the ever-growing influence of these two subjects. The practical applications of system modeling and optimization can be seen in a number of fields: environmental science, transport and telecommunications, image analysis, free boundary problems, bioscience, and non-cylindrical evolution control, to name just a few. New developments in each of these fields have contributed to a more complex understanding of both system modeling and optimization. Editors John Cagnol and Jean-Paul Zolésio, chairs of the conference, have assembled System Modeling and Optimization to present the most up-to-date developments to professionals and academics alike.
The problem of "Shortest Connectivity," which is discussed here, has a long and convoluted history. Many scientists from many fields, as well as laymen, have stepped on its stage. Usually, the problem is known as Steiner's Problem, and it can be described more precisely in the following way: given a finite set of points in a metric space, search for a network that connects these points with the shortest possible length. This shortest network must be a tree and is called a Steiner Minimal Tree (SMT). It may contain vertices different from the points which are to be connected. Such points are called Steiner points. Steiner's Problem seems disarmingly simple, but it is rich with possibilities and difficulties, even in the simplest case, the Euclidean plane. This is one of the reasons that an enormous volume of literature has been published, starting in the seventeenth century and continuing until today. The difficulty is that we look for the shortest network overall. Minimum spanning networks have been well studied and solved completely in the case where only the given points must be connected. The novelty of Steiner's Problem is that new points, the Steiner points, may be introduced so that an interconnecting network of all these points will be shorter. This also shows that it is impossible to solve the problem with combinatorial and geometric methods alone.
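For three terminals in the Euclidean plane whose triangle has all angles below 120 degrees, the SMT uses a single Steiner point, the Fermat point. A minimal sketch approximating it with Weiszfeld's iteratively reweighted averaging (my illustration, not an algorithm from the book):

```python
import math

def fermat_point(pts, iters=200):
    """Approximate the Fermat (single Steiner) point of three planar
    terminals with Weiszfeld's iteratively reweighted averaging."""
    x = sum(p[0] for p in pts) / len(pts)   # start at the centroid
    y = sum(p[1] for p in pts) / len(pts)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for px, py in pts:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:                   # iterate hit a terminal
                return px, py
            num_x += px / d
            num_y += py / d
            den += 1.0 / d
        x, y = num_x / den, num_y / den
    return x, y

def star_length(pts, s):
    """Total length of the star joining point s to every terminal."""
    return sum(math.hypot(s[0] - p[0], s[1] - p[1]) for p in pts)

# Unit equilateral triangle: the Steiner tree (length sqrt(3), about
# 1.732) beats the minimum spanning tree on the terminals alone (2.0).
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
smt = star_length(pts, fermat_point(pts))
mst = 2.0
```

The gap between `smt` and `mst` is precisely the saving that introducing a Steiner point buys, which is why no method restricted to the given terminals can find the optimum.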
Continuous optimization is the study of problems in which we wish to optimize (either maximize or minimize) a continuous function (usually of several variables), often subject to a collection of restrictions on these variables. It has its foundation in the development of calculus by Newton and Leibniz in the 17th century. Nowadays, continuous optimization problems are widespread in the mathematical modelling of real-world systems for a very broad range of applications. Solution methods for large multivariable constrained continuous optimization problems using computers began with the work of Dantzig in the late 1940s on the simplex method for linear programming problems. Recent research in continuous optimization has produced a variety of theoretical developments, solution methods and new areas of applications. It is impossible to give a full account of the current trends and modern applications of continuous optimization. It is our intention to present a number of topics in order to show the spectrum of current research activities and the development of numerical methods and applications.
This book concentrates on providing technical tools to make the user of Multiple Criteria Decision Making (MCDM) methodologies independent of bulky optimization computations. These bulky computations have been a necessary, but limiting, characteristic of interactive MCDM methodologies and algorithms. The book removes these limitations of MCDM problems by reducing a problem's computational complexity. The result is a wider and more functional general framework for presenting, teaching, implementing and applying a wide range of MCDM methodologies.
This book is a comprehensive survey of the mathematical concepts and principles of industrial mathematics. Its purpose is to provide students and professionals with an understanding of the fundamental mathematical principles used in Industrial Mathematics/OR in modeling problems and application solutions. All the concepts presented in each chapter have undergone the learning scrutiny of the author and his students. The illustrative material throughout the book was refined for student comprehension as the manuscript developed through its iterations, and the chapter exercises are refined from the previous year's exercises.
This book deals with the theory and applications of the Reformulation-Linearization/Convexification Technique (RLT) for solving nonconvex optimization problems. A unified treatment of discrete and continuous nonconvex programming problems is presented using this approach. In essence, the bridge between these two types of nonconvexities is made via a polynomial representation of discrete constraints. For example, the binariness of a 0-1 variable x_j can be equivalently expressed as the polynomial constraint x_j(1 - x_j) = 0. The motivation for this book is the role of tight linear/convex programming representations or relaxations in solving such discrete and continuous nonconvex programming problems. The principal thrust is to commence with a model that affords a useful representation and structure, and then to further strengthen this representation through automatic reformulation and constraint generation techniques. As mentioned above, the focal point of this book is the development and application of RLT for use as an automatic reformulation procedure, and also to generate strong valid inequalities. The RLT operates in two phases. In the Reformulation Phase, certain types of additional implied polynomial constraints, which include the aforementioned constraints in the case of binary variables, are appended to the problem. The resulting problem is subsequently linearized in the Linearization/Convexification Phase, except that certain convex constraints are sometimes retained in particular special cases. This is done via the definition of suitable new variables to replace each distinct variable-product term. The higher-dimensional representation yields a linear (or convex) programming relaxation.
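The simplest instance of the two-phase scheme described above is the linearization of a single bilinear product over the unit box, obtained by multiplying the bound-factor constraints pairwise and substituting a new variable for the product term. The sketch below illustrates that special case; it is my example, not code from the book:

```python
def rlt_bounds(x1, x2):
    """RLT linearization of the product w = x1*x2 for x1, x2 in [0, 1]:
    multiply the bound factors (x >= 0, 1 - x >= 0) pairwise, then
    replace every product x1*x2 by the new variable w. The four
    resulting linear inequalities bound w as follows."""
    lower = max(0.0, x1 + x2 - 1.0)  # from x1*x2 >= 0 and (1-x1)*(1-x2) >= 0
    upper = min(x1, x2)              # from x1*(1-x2) >= 0 and (1-x1)*x2 >= 0
    return lower, upper

# At every binary point the relaxation is exact (lower == upper == x1*x2)
for x1 in (0.0, 1.0):
    for x2 in (0.0, 1.0):
        lo, hi = rlt_bounds(x1, x2)
        assert lo == hi == x1 * x2

# At a fractional point it is a proper relaxation of the true product 0.25
lo, hi = rlt_bounds(0.5, 0.5)
```

Exactness at the binary points is what lets the same linear relaxation serve both the discrete and the continuous sides of the nonconvexity bridge.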
Optimization methods have been considered in many articles, monographs, and handbooks. However, experts continue to experience difficulties in correctly stating optimization problems in engineering. These troubles typically emerge when trying to define the set of feasible solutions, i.e. the constraints imposed on the design variables, functional relationships, and criteria. The Parameter Space Investigation (PSI) method was developed specifically for the correct statement and solution of engineering optimization problems. It is implemented in the MOVI 1.0 software package, a tutorial version of which is included in this book. The PSI method and MOVI 1.0 software package have a wide range of applications. The PSI method can be successfully used for the statement and solution of the following multicriteria problems: design, identification, design with control, the optional development of prototypes, finite element models, and the decomposition and aggregation of large-scale systems. Audience: The PSI method will be of interest to researchers, graduate students, and engineers who work in engineering, mathematical modelling and industrial mathematics, and in computer and information science.
This work examines all the fuzzy multicriteria methods recently
developed, such as fuzzy AHP, fuzzy TOPSIS, interactive fuzzy
multiobjective stochastic linear programming, fuzzy multiobjective
dynamic programming, grey fuzzy multiobjective optimization, fuzzy
multiobjective geometric programming, and more. Each of the 22
chapters includes practical applications along with new
developments/results.
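As a point of reference for the fuzzy variants surveyed, classic (crisp) TOPSIS ranks alternatives by their relative closeness to an ideal solution; the book's fuzzy versions extend these same steps to fuzzy numbers. A minimal sketch under illustrative data (my example, not the book's):

```python
import math

def topsis(matrix, weights, benefit):
    """Classic (crisp) TOPSIS: rank alternatives by relative closeness
    to the ideal solution. matrix[i][j] scores alternative i on
    criterion j; benefit[j] is True when larger values are better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal, worst = [], []
    for j in range(n):
        col = [v[i][j] for i in range(m)]
        ideal.append(max(col) if benefit[j] else min(col))
        worst.append(min(col) if benefit[j] else max(col))
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical decision: two criteria, cost (lower is better) and
# quality (higher is better), over three alternatives.
scores = topsis([[250, 7], [200, 9], [300, 8]],
                weights=[0.5, 0.5], benefit=[False, True])
```

The second alternative dominates on both criteria, so it attains the maximum closeness score of 1.0.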
Since I started working in the area of nonlinear programming and, later on, variational inequality problems, I have frequently been surprised to find that many algorithms, however scattered in numerous journals, monographs and books, and described rather differently, are closely related to each other. This book is meant to help the reader understand and relate algorithms to each other in some intuitive fashion, and represents, in this respect, a consolidation of the field. The framework of algorithms presented in this book is called Cost Approximation. (The preface of the Ph.D. thesis [Pat93d] explains the background to the work that led to the thesis, and ultimately to this book.) It describes, for a given formulation of a variational inequality or nonlinear programming problem, an algorithm by means of approximating mappings and problems, a principle for the update of the iteration points, and a merit function which guides and monitors the convergence of the algorithm. One purpose of this book is to offer this framework as an intuitively appealing tool for describing an algorithm. One of the advantages of the framework, or any reasonable framework for that matter, is that two algorithms may be easily related and compared through its use. This framework is particular in that it covers a vast number of methods, while still being fairly detailed; the level of abstraction is in fact the same as that of the original problem statement.
This is the first book devoted entirely to Particle Swarm Optimization (PSO), a general-purpose metaheuristic in the same family as evolutionary algorithms, tabu search and ant colony methods. Since its original development in 1995, PSO has mainly been applied to strongly non-linear numerical optimization problems with heterogeneous continuous and discrete variables, and it is thus used almost everywhere in the world. Its convergence rate also makes it a preferred tool in dynamic optimization.
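The core PSO update blends each particle's velocity with attractions toward its own best position and the swarm's best. A minimal global-best sketch (parameter values are common illustrative defaults, not the book's recommendations):

```python
import random

random.seed(0)  # fixed seed only to make this sketch reproducible

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function sum(x^2) on [-5, 5]^2
best, val = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

The inertia weight `w` trades exploration against convergence speed; shrinking it over the run is a common refinement.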
Invexity and Optimization presents results on invex functions and their properties in smooth and nonsmooth cases, pseudolinearity and eta-pseudolinearity. Results on optimality and duality for a nonlinear scalar programming problem are presented, second- and higher-order duality results are given for a nonlinear scalar programming problem, and saddle point results are also presented. Invexity in multiobjective programming is treated as well: Kuhn-Tucker optimality conditions are given for a multiobjective programming problem, Wolfe and Mond-Weir type dual models are formulated, and the usual duality results are presented in the presence of invex functions. Continuous-time multiobjective problems are also discussed. Quadratic and fractional programming problems are given for invex functions. Symmetric duality results are also given for scalar and vector cases.
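For reference, the standard defining inequality of invexity from the literature (my summary, not a quotation from the book): a differentiable function f is invex with respect to a kernel eta if

```latex
f(x) - f(u) \;\ge\; \nabla f(u)^{\top}\,\eta(x, u)
\qquad \text{for all } x, u,
% choosing eta(x, u) = x - u recovers ordinary differentiable convexity
```

Every stationary point of an invex function is a global minimizer, which is what makes the class useful in the optimality and duality theory the book develops.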
The aim of stochastic programming is to find optimal decisions
in problems which involve uncertain data. This field is currently
developing rapidly with contributions from many disciplines
including operations research, mathematics, and probability. At the
same time, it is now being applied in a wide variety of subjects
ranging from agriculture to financial planning and from industrial
engineering to computer networks. This textbook provides a first
course in stochastic programming suitable for students with a basic
knowledge of linear programming, elementary analysis, and
probability. The authors aim to present a broad overview of the
main themes and methods of the subject. Its prime goal is to help
students develop an intuition on how to model uncertainty into
mathematical problems, what uncertainty changes bring to the
decision process, and what techniques help to manage uncertainty in
solving the problems. The book is highly illustrated with chapter summaries and many
examples and exercises. Students, researchers and practitioners in
operations research and the optimization area will find it
particularly of interest. Review of the First Edition: "The discussion on modeling issues, the large number of examples used to illustrate the material, and the breadth of the coverage make 'Introduction to Stochastic Programming' an ideal textbook for the area." (Interfaces, 1998)
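The modeling intuition the authors aim for shows up already in the smallest two-stage example: a newsvendor-style problem solved by scenario enumeration, where the first-stage order is fixed before demand is revealed. The data and names below are illustrative, not from the book:

```python
def expected_profit(order, scenarios, price=5.0, cost=3.0):
    """Expected profit of a first-stage (here-and-now) order quantity;
    the second-stage recourse is simply selling min(order, demand)."""
    return sum(prob * (price * min(order, demand) - cost * order)
               for demand, prob in scenarios)

# Hypothetical demand scenarios as (demand, probability) pairs
scenarios = [(50, 0.3), (100, 0.4), (150, 0.3)]

# Optimize the single first-stage decision against the expectation
best_order = max(range(0, 201), key=lambda q: expected_profit(q, scenarios))
```

Note that the optimal order matches neither the lowest nor the highest demand scenario: hedging against uncertainty changes the decision itself, not just its evaluation.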
The era of interior point methods (IPMs) was initiated by N. Karmarkar's 1984 paper, which triggered turbulent research and reshaped almost all areas of optimization theory and computational practice. This book offers comprehensive coverage of IPMs. It details the main results of more than a decade of IPM research. Numerous exercises are provided to aid in understanding the material.
Reliability theory is a major concern for engineers and managers engaged in making high quality products and designing highly reliable systems. "Advanced Reliability Models and Maintenance Policies" is a survey of new research topics in reliability theory and optimization techniques in reliability engineering. The book introduces partition and redundant problems within reliability models, and provides optimization techniques. The book also indicates how to perform maintenance in a finite time span and at failure detection, and to apply recovery techniques for computer systems. New themes such as reliability complexity and service reliability in reliability theory are theoretically proposed, and optimization problems in management science using reliability techniques are presented. The book is an essential guide for graduate students and researchers in reliability theory, and a valuable reference for reliability engineers engaged both in maintenance work and in management and computer systems.
This book deals with combinatorial aspects of epistasis, a notion that existed for years in genetics and appeared in the field of evolutionary algorithms in the early 1990s. Even though the first chapter puts epistasis in the perspective of evolutionary algorithms and artificial intelligence, and applications occasionally pop up in other chapters, this book is essentially about mathematics, about combinatorial techniques to compute in an efficient and mathematically elegant way what will be defined as normalized epistasis. Some of the material in this book finds its origin in the PhD theses of Hugo Van Hove [97] and Dominique Suys [95]. The sixth chapter also contains material that appeared in the dissertation of Luk Schoofs [84]. Together with that of M. Teresa Iglesias [36], these dissertations form the backbone of a decade of mathematical ventures in the world of epistasis. The authors wish to acknowledge support from the Flemish Fund of Scientific Research (FWO-Vlaanderen) and of the Xunta de Galicia. They also wish to explicitly mention the intellectual and moral support they received throughout the preparation of this work from their family and their colleagues Emilio Villanueva, José María Barja and Arnold Beckelheimer, as well as our local TeXpert Jan Adriaenssens.
THIRTY FIVE YEARS OF AUTOMATING MATHEMATICS: DEDICATED TO 35 YEARS OF DE BRUIJN'S AUTOMATH. N. G. de Bruijn was a well-established mathematician before deciding in 1967, at the age of 49, to work on a new direction related to Automating Mathematics. By then, his contributions in mathematics were numerous and extremely influential. His book on advanced asymptotic methods, North Holland 1958, was a classic and was subsequently turned into a book in the well-known Dover book series. His work on combinatorics yielded influential notions and theorems, of which we mention the de Bruijn sequences of 1946 and the de Bruijn-Erdos theorem of 1948. De Bruijn's contributions to mathematics also included his work on generalized function theory, analytic number theory, optimal control, quasicrystals, the mathematical analysis of games and much more. In the 1960s de Bruijn became fascinated by the new computer technology and, as a result, decided to start the new AUTOMATH project, where he could check, with the help of the computer, the correctness of books of mathematics. In each area that de Bruijn approached, he shed a new light and was known for his originality and for making deep intellectual contributions. And when it came to automating mathematics, he again did it his way and introduced the highly influential AUTOMATH. In the past decade he has also been working on theories of the human brain.
Computer Aided Tolerancing (CAT) is an important topic in any field of design and production where parts move relative to one another and/or are assembled together. Geometric variations from specified dimensions and form always occur when parts are manufactured. Improvements in production systems can cause the amounts of the variations to become smaller, but their presence does not disappear. To shorten the time from concept to market of a product, it has been increasingly important to take clearances and the tolerancing of manufacturing variations into consideration right from the beginning, at the stage of design. Hence, geometric models are defined that represent both the complete array of geometric variations possible during manufacture and also the influence of geometry on the function of individual parts and on assemblies of them. The contents of this book originate from a collection of selected papers presented at the 9th CIRP International Seminar on CAT that was held from April 10-12, 2005 at Arizona State University, USA. The CIRP (College International pour la Recherche en Production, or International Institution for Production Engineering Research) plans this seminar every two years, and the book is one in a series of Proceedings on CAT. The book is organized into seven parts: Models for Tolerance Representation and Specification, Tolerance Analysis, Tolerance Synthesis, Computational Metrology and Verification, Tolerances in Manufacturing, Applications to Machinery, and Incorporating Elasticity in Tolerance Models.