The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community growing year after year. This volume is a kind of continuation of the previously published Springer volume "Artificial Intelligence Techniques for Computer Graphics". Nowadays, intelligent techniques are used more and more in Computer Graphics, not only to optimise the processing time, but also to find more accurate solutions to many Computer Graphics problems than traditional methods can. What are intelligent techniques for Computer Graphics? Mainly, they are techniques based on Artificial Intelligence. Thus, problem resolution (especially constraint satisfaction) techniques, as well as evolutionary techniques, are used in declarative scene modelling; heuristic search techniques, as well as strategy game techniques, are currently used in scene understanding and in virtual world exploration; multi-agent techniques and evolutionary algorithms are used in behavioural animation; and so on. However, even if in most cases the intelligent techniques used come from Artificial Intelligence, sometimes simple human intelligence can find interesting solutions in cases where traditional Computer Graphics techniques, even combined with Artificial Intelligence ones, cannot propose any satisfactory solution. A good example of such a case is scene understanding when several parts of the scene are impossible to access.
In this edition, the scope and character of the monograph did not change with respect to the first edition. Taking into account the rapid development of the field, we have, however, considerably enlarged its contents. Chapter 4 includes two additional sections, 4.4 and 4.6, on the theory and algorithms of D.C. programming. Chapter 7, on Decomposition Algorithms in Nonconvex Optimization, is completely new. Besides this, we added several exercises and corrected errors and misprints in the first edition. We are grateful for valuable suggestions and comments that we received from several colleagues. R. Horst, P.M. Pardalos and N.V. Thoai, March 2000. Preface to the First Edition: Many recent advances in science, economics and engineering rely on numerical techniques for computing globally optimal solutions to corresponding optimization problems. Global optimization problems are extraordinarily diverse, and they include economic modeling, fixed charges, finance, networks and transportation, databases and chip design, image processing, nuclear and mechanical design, chemical engineering design and control, molecular biology, and environmental engineering. Due to the existence of multiple local optima that differ from the global solution, all these problems are beyond the reach of classical nonlinear programming techniques. During the past three decades, however, many new theoretical, algorithmic, and computational contributions have helped to solve globally multiextreme problems arising from important practical applications.
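To see why multiple local optima defeat classical local methods, here is a minimal sketch (the function, step size, and start points are illustrative assumptions, not taken from the book): plain gradient descent converges to whichever local minimum its starting point lies near, so a global strategy such as multistart must search across basins.

```python
# Why local nonlinear programming fails on multiextreme problems:
# gradient descent only finds the local minimum of its starting basin.

def f(x):
    return x**4 - 4*x**2 + x     # two local minima with different values

def grad(x):
    return 4*x**3 - 8*x + 1

def gradient_descent(x, step=0.01, iters=2000):
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Different starts land in different basins:
for x0 in (-2.0, 2.0):
    x = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x = {x:+.4f}, f(x) = {f(x):+.4f}")

# A crude global strategy: restart the local method across the domain.
best = min((gradient_descent(-2 + 0.5 * k) for k in range(9)), key=f)
print(f"multistart best: x = {best:+.4f}, f(x) = {f(best):+.4f}")
```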
Semi-infinite optimization is a vivid field of active research. Recently, semi-infinite optimization in a general form has attracted a lot of attention, not only because of its surprising structural aspects, but also due to the large number of applications which can be formulated as general semi-infinite programs. The aim of this book is to highlight structural aspects of general semi-infinite programming, to formulate optimality conditions which take this structure into account, and to give a conceptually new solution method. In fact, under certain assumptions general semi-infinite programs can be solved efficiently when their bi-level structure is exploited appropriately. After a brief introduction with some historical background in Chapter 1, we begin our presentation with a motivation for the appearance of standard and general semi-infinite optimization problems in applications. Chapter 2 lists a number of problems from engineering and economics which give rise to semi-infinite models, including (reverse) Chebyshev approximation, minimax problems, robust optimization, design centering, defect minimization problems for operator equations, and disjunctive programming.
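In standard notation (a sketch, not quoted from the book), a general semi-infinite program (GSIP) constrains x by infinitely many inequalities whose index set may itself depend on x:

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{s.t.} \quad g(x, y) \le 0 \;\; \text{for all } y \in Y(x).
```

The bi-level structure mentioned above arises because checking feasibility of a given x amounts to solving the lower-level problem \(\max_{y \in Y(x)} g(x, y)\); in standard semi-infinite programming the index set Y is fixed rather than x-dependent.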
S is a high-level language for manipulating, analysing and displaying data. It forms the basis of two highly acclaimed and widely used data analysis software systems, the commercial S-PLUS(R) and the Open Source R. This book provides an in-depth guide to writing software in the S language under either or both of those systems. It is intended for readers who have some acquaintance with the S language and want to know how to use it more effectively, for example to build re-usable tools for streamlining routine data analysis or to implement new statistical methods. One of the most outstanding strengths of the S language is the ease with which it can be extended by users. S is a functional language, and functions written by users are first-class objects treated in the same way as functions provided by the system. S code is eminently readable, and so a good way to document precisely what algorithms were used; and as many of the implementations are themselves written in S, they can be studied as models and used to understand their subtleties. The current implementations also provide easy ways for S functions to call compiled code written in C, Fortran and similar languages; this is documented here in depth. Increasingly, S is being used for statistical or graphical analysis within larger software systems or for whole vertical-market applications. The interface facilities are most developed on Windows(R), and these are covered with worked examples. The authors have written the widely adopted 'Modern Applied Statistics with S-PLUS', now in its third edition, and several software libraries that enhance S-PLUS and R; these and the examples used in both books are available on the Internet. Dr. W.N. Venables is a Senior Statistician with the CSIRO/CMIS Environmetrics Project in Australia, having been at the Department of Statistics, University of Adelaide for many years previously. Professor B.D. Ripley holds the Chair of Applied Statistics at the University of Oxford, and is the author of four other books on spatial statistics, simulation, pattern recognition and neural networks. Both authors are known and respected throughout the international S and R communities for their books, workshops, short courses, freely available software, and their extensive contributions to the S-news and R mailing lists.
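The book's S code is not reproduced here; as a rough analogy in Python (which also treats functions as first-class values), a user-written function is an ordinary object that can be passed around exactly like a system-provided one:

```python
# Analogy in Python, not S: user-defined functions are first-class
# values, interchangeable with functions provided by the system.

def trimmed_mean(x, trim=0.2):
    """User-written summary statistic, usable wherever a built-in is."""
    xs = sorted(x)
    k = int(len(xs) * trim)
    core = xs[k:len(xs) - k] or xs
    return sum(core) / len(core)

data = [1.0, 2.0, 2.5, 3.0, 100.0]

# Built-in and user functions are applied the same way:
for summary in (max, min, trimmed_mean):
    print(summary.__name__, summary(data))
```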
Both Java and .NET use the idea of a virtual machine (VM) rather than a true executable. While very useful for some purposes, VMs make your source code, and hence your intellectual property (IP), inherently less secure, because the compilation process can be reversed, or decompiled. This book is useful because you must understand how decompilation works in order to properly protect your IP. Anyone interested in protecting Java code from prying eyes will want to buy this one-of-a-kind book, as it separates fact from fiction about just how ineffective obfuscators are at protecting your corporate secrets. While it is very easy for anyone to decompile Java code, and almost as easy to run it through an obfuscation protection tool, there is very little information on just what happens when you do this. How secure is your code after you run an obfuscator, for example? This book redresses the imbalance by providing insights into the features and limitations of today's decompilers and obfuscators, as well as offering a more detailed look at bytecodes and at what Java Virtual Machines (JVMs) actually do than any book yet published. "Virtual machine" is the computer science term used when (most often in an attempt to gain greater portability) you create an abstract virtual processor and write code for it, instead of having your compiler generate actual machine language for a chip like the Pentium 4. A virtual machine interpreter is then needed on every platform where you want the code to run; it translates the virtual machine language into the real machine language of your processor. The intermediary code for the virtual machine is what can more easily be decompiled, with a corresponding loss of security, since in order for the code to be converted to real machine language it must be relatively transparent and not just a sequence of 0s and 1s.
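Java bytecode itself is not shown here; as an analogy, Python is also a VM-based language, and its standard-library disassembler makes the point vividly: VM instructions preserve names, constants, and structure, which is exactly what a decompiler exploits.

```python
# Analogy in Python rather than Java: virtual machine code is structured
# and self-describing, which is what makes decompilation practical.
import dis

def royalty(price, rate=0.07):
    return price * rate

# The VM instruction stream still carries variable names and constants;
# a decompiler essentially inverts this listing back into source code.
dis.dis(royalty)
```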
This thesis deals with the evaluation of surface and groundwater quality changes in periods of water scarcity in river catchment areas. The work can be divided into six parts. Existing methods of drought assessment are discussed in the first part, followed by a brief description of the software package HydroOffice, designed by the author. The software is dedicated to the analysis of hydrological data (separation of baseflow, estimation of hydrological drought parameters, recession curve analysis, time series analysis). The capabilities of the software are currently used by scientists from more than 30 countries around the world. The third section is devoted to a comprehensive regional assessment of hydrological drought on Slovak rivers, followed by an evaluation of the occurrence, course and character of drought in precipitation, discharges, baseflow, groundwater head and spring yields in the pilot area of the Nitra River basin. The fifth part is focused on the assessment of changes in surface and groundwater quality during drought periods within the pilot area. Finally, the results are summarized, interpreted, and rounded off with an outlook on future research.
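HydroOffice's own routines are not reproduced here; as an illustration of one task the package covers, this is a minimal sketch of the one-parameter Lyne-Hollick recursive digital filter, a standard technique for baseflow separation (the filter parameter 0.925 is a conventional default, not a value from the thesis):

```python
# Sketch of the Lyne-Hollick one-parameter recursive filter for
# separating baseflow from a discharge series (alpha = 0.925 is the
# conventional default and an assumption here).

def baseflow_lyne_hollick(discharge, alpha=0.925):
    quick = 0.0                     # filtered quickflow component
    prev_q = discharge[0]
    baseflow = []
    for q in discharge:
        quick = alpha * quick + (1 + alpha) / 2 * (q - prev_q)
        quick = max(quick, 0.0)     # quickflow cannot be negative,
        baseflow.append(q - quick)  # so baseflow never exceeds discharge
        prev_q = q
    return baseflow

flows = [5.0, 5.2, 9.8, 14.0, 8.5, 6.3, 5.6, 5.3]
print([round(b, 2) for b in baseflow_lyne_hollick(flows)])
```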
This book can be presented in two different ways. Firstly, it introduces a particular methodology for building adaptive Web sites; secondly, it presents the main concepts behind Web mining and then applies them to adaptive Web sites. In this case, the adaptive Web site is the case study used to exemplify the tools introduced in the text. The authors start by introducing the Web and motivating the need for adaptive Web sites. The second chapter introduces the main concepts behind a Web site: its operation, its associated data and structure, user sessions, etc. Chapter three explains the Web mining process and the tools used to analyze Web data, mainly focused on machine learning. The fourth chapter looks at how to store and manage data. Chapter five looks at the three main and distinct mining tasks: content, links and usage. The following chapter covers Web personalization, a crucial topic if we want to adapt our site to specific groups of people. Chapter seven shows how to use information extraction techniques to find user behavior patterns. The subsequent chapter explains how to acquire and maintain the knowledge extracted in the previous phase. Finally, chapter nine contains the case study, where all the previous concepts are applied to present a framework for building adaptive Web sites. In other words, the authors have taken care to write a self-contained book for people who want to learn and apply personalization and adaptation in Web sites. This is commendable considering the large and increasing bibliography in these and related topics. The writing is easy to follow and, although the coverage is not exhaustive, the main concepts and topics are all covered.
Game Sound Technology and Player Interaction: Concepts and Developments researches both how game sound affects a player psychologically, emotionally, and physiologically, and how this relationship itself impacts the design of computer game sound and the development of technology. This compilation also applies beyond the realm of video games to other types of immersive sound, such as soundscape design, gambling machines, and emotive and fantastical sound, to name a few. The applications of this research are wide-ranging, interdisciplinary, and of primary importance for academics and practitioners searching for the right sounds.
The overall aim of the book is to introduce students to the typical course followed by a data analysis project in earth sciences. A project usually involves searching relevant literature, reviewing and ranking published books and journal articles, extracting relevant information from the literature in the form of text, data, or graphs, searching for and processing the relevant original data using MATLAB, and compiling and presenting the results as posters, abstracts, and oral presentations using graphics design software. The text of this book includes numerous examples of the use of internet resources, of the visualization of data with MATLAB, and of the preparation of scientific presentations. As with its sister book, MATLAB Recipes for Earth Sciences (3rd edition, 2010), which demonstrates the use of statistical and numerical methods on earth science data, this book uses state-of-the-art software packages, including MATLAB and the Adobe Creative Suite, to process and present geoscientific information collected during the course of an earth science project. The book's supplementary electronic material (available online through the publisher's website) includes color versions of all figures, recipes with all the MATLAB commands featured in the book, the example data, exported MATLAB graphics, and screenshots of the most important steps involved in processing the graphics.
Genetic algorithms provide a powerful range of methods for solving complex engineering search and optimization problems. Their power can also lead to difficulty for new researchers and students who wish to apply such evolution-based methods. "Applied Evolutionary Algorithms in Java" offers a practical, hands-on guide to applying such algorithms to engineering and scientific problems. The concepts are illustrated through clear examples, ranging from simple to more complex problem domains, all based on real-world industrial problems. Examples are taken from image processing, fuzzy-logic control systems, mobile robots, and telecommunication network optimization problems. The Java-based toolkit provides an easy-to-use and essential visual interface, with integrated graphing and analysis tools. Topics and features: inclusion of a complete Java toolkit for exploring evolutionary algorithms; strong use of visualization techniques to increase understanding; coverage of all major evolutionary algorithms in common usage; a broad range of industrially based example applications; and examples and an appendix based on fuzzy logic. This book is intended for students, researchers, and professionals interested in using evolutionary algorithms in their work. No mathematics beyond basic algebra and Cartesian graphs is required, as the aim is to encourage applying the Java toolkit to develop the power of these techniques.
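The book's Java toolkit is not reproduced here; the following minimal sketch (the "count the 1-bits" fitness function and all parameters are illustrative assumptions) shows the core evolutionary loop of selection, crossover, and mutation that such toolkits automate:

```python
import random

# Minimal genetic algorithm sketch on the OneMax toy problem:
# evolve bit strings toward the all-ones string.
GENES, POP, GENERATIONS, MUT = 20, 30, 60, 0.02

def fitness(bits):
    return sum(bits)                      # number of 1-bits

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)   # selection

def crossover(a, b):
    cut = random.randrange(1, GENES)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - g if random.random() < MUT else g for g in bits]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

print("best fitness:", fitness(max(pop, key=fitness)), "of", GENES)
```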
The contributors present the main results and techniques of their specialties in an easily accessible way, accompanied by many references: historical notes, hints for complete proofs or solutions to exercises, and directions for further research. This volume contains applications which have not appeared in any collection of this type. The book is a general source of information in computation theory, at both the undergraduate and research levels.
Principles of Verilog PLI is a 'how to do' text on the Verilog Programming Language Interface. The primary focus of the book is on how to use the PLI for problem solving. Both PLI 1.0 and PLI 2.0 are covered. Particular emphasis has been put on adopting a generic step-by-step approach to creating fully functional PLI code. Numerous examples were carefully selected so that a variety of problems can be solved through their use. A separate chapter on the Bus Functional Model (BFM), one of the most widely used commercial applications of the PLI, is included. Principles of Verilog PLI is written for the professional engineer who uses Verilog for ASIC design and verification. It will also be of interest to students who are learning Verilog.
Broadly-scoped requirements such as security, privacy, and response time are a major source of complexity in modern software systems, due to their tangled inter-relationships with, and effects on, other requirements. Aspect-Oriented Requirements Engineering (AORE) aims to facilitate the modularisation of such broadly-scoped requirements, so that software developers are able to reason about them in isolation, one at a time. AORE also captures these inter-relationships and effects in well-defined composition specifications and, in so doing, exposes the causes of potential conflicts and trade-offs and the roots of key early architectural decisions. Over the last decade, significant work has been carried out in the field of AORE. With this book the editors aim to provide a consolidated overview of these efforts and results. The individual contributions discuss how aspects can be identified, represented, composed and reasoned about, as well as how they are used in specific domains and in industry. Thus, the book does not present one particular AORE approach, but conveys a broad understanding of the aspect-oriented perspective on requirements engineering. The chapters are organized into five sections: concern identification in requirements, concern modelling and composition, domain-specific use of AORE, aspect interactions, and AORE in industry. This book provides readers with the most comprehensive coverage of AORE and the capabilities it offers to those grappling with the complexity arising from broadly-scoped requirements, a phenomenon that is, without doubt, universal across software systems. Software engineers and related professionals in industry, as well as advanced undergraduate and post-graduate students and researchers, will benefit from these comprehensive descriptions and the industrial case studies.
Experience gained during a ten-year-long involvement in modelling, programming and applications in nonlinear optimization helped me to arrive at the conclusion that, in the interest of having successful applications and efficient software production, knowing the structure of the problem to be solved is indispensable. This is the reason why I have chosen the field in question as the sphere of my research. Since in applications, mainly from among the nonconvex optimization models, the differentiable ones proved to be the most efficient in modelling, especially in solving them with computers, I started to deal with the structure of smooth optimization problems. The book, which is a result of more than a decade of research, can be equally useful for researchers and students showing interest in the domain, since the elementary notions necessary for understanding the book constitute a part of the university curriculum. I intended to deal with the key questions of optimization theory, an endeavour which, obviously, cannot bear all the marks of completeness. What I consider the most crucial point is the uniform, differential geometric treatment of various questions, which provides the reader with opportunities for learning the structure of a wide range of optimization problems. I am grateful to my family for affording me tranquil, productive circumstances. I express my gratitude to F.
Complementarity theory is a new domain in applied mathematics concerned with the study of complementarity problems. These problems represent a wide class of mathematical models related to optimization, game theory, economic engineering, mechanics, fluid mechanics, stochastic optimal control, etc. The book is dedicated to the study of nonlinear complementarity problems by topological methods. Audience: mathematicians, engineers, economists, specialists working in operations research, and anybody interested in applied mathematics or mathematical modeling.
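As a sketch of the standard formulation (standard notation, not quoted from the book), the nonlinear complementarity problem for a mapping \(F: \mathbb{R}^n \to \mathbb{R}^n\) asks for a point where x and F(x) are both nonnegative and orthogonal:

```latex
\text{find } x \in \mathbb{R}^n \text{ such that} \quad
x \ge 0, \qquad F(x) \ge 0, \qquad \langle x, F(x) \rangle = 0 .
```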
For courses in C++ Programming. The best-selling C++ How to Program is accessible to readers with little or no programming experience, yet comprehensive enough for the professional programmer. The Deitels' signature live-code approach presents the concepts in the context of full working programs followed by sample executions. The early objects approach gets readers thinking about objects immediately, allowing them to more thoroughly master the concepts. Emphasis is placed on achieving program clarity and building well-engineered software. Interesting, entertaining, and challenging exercises encourage students to make a difference and use computers and the Internet to work on problems. To keep readers up to date with leading-edge computing technologies, the 10th Edition conforms to the C++11 standard and the new C++14 standard.
Component-based software development regards software construction in terms of conventional engineering disciplines, where the assembly of systems from readily available prefabricated parts is the norm. Because both component-based systems themselves and the stakeholders in component-based development projects differ from those of traditional software development, component-based testing also needs to deviate from traditional software testing approaches. Gross first describes the specific challenges related to component-based testing, such as the lack of internal knowledge of a component or the use of a component in diverse contexts. He argues that only built-in contract testing, a test organization for component-based applications founded on building test artifacts directly into components, can prevent catastrophic failures like the one that caused the now-famous ARIANE 5 crash in 1996. Since building testing into components has implications for component development, built-in contract testing is integrated with, and made to complement, a model-driven development method. Here UML models are used to derive the testing architecture for an application, the testing interfaces and the component testers. The method also provides a process and guidelines for modeling and developing these artifacts. This book is the first comprehensive treatment of the intricacies of testing component-based software systems. With its strong modeling background, it appeals to researchers and graduate students specializing in component-based software engineering. Professionals architecting and developing component-based systems will profit from the UML-based methodology and the implementation hints based on the XUnit and JUnit frameworks.
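The book's UML-driven method is not reproduced here; this is only a minimal sketch of the built-in contract testing idea (hypothetical component names, in Python rather than the book's Java/XUnit setting): a client component carries its own test cases and checks a server's contract when the two are configured together.

```python
# Sketch of built-in contract testing (illustrative, not the book's
# method): the client embeds test cases for the server behaviour it
# relies on, and runs them when the server is plugged in.

class Stack:
    """Hypothetical server component."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def size(self):
        return len(self._items)

class ReportGenerator:
    """Hypothetical client component with a built-in contract tester."""
    def __init__(self, stack):
        if not self._server_contract_ok(stack):
            raise RuntimeError("server component violates contract")
        self._stack = stack

    @staticmethod
    def _server_contract_ok(stack):
        # Built-in test: exactly the behaviour this client depends on.
        stack.push("probe")
        return stack.size() == 1 and stack.pop() == "probe" and stack.size() == 0

client = ReportGenerator(Stack())   # contract checked at configuration time
print("server component accepted")
```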
The author's aim in this textbook is to provide students with a clear understanding of the relationship between the principles of object-oriented programming and software engineering. Professor Zeigler takes an approach to formal specification based on state representation. Consequently, this book is unique through its: emphasis on formulating primitives from which all other functionality can be built; integral use of a semi-formal behaviour specification language based on state transition concepts; differentiation between behaviour and implementation; reusable heterogeneous container class library; and ability to show the elegance and power of ensemble methods with non-trivial examples. As a result, students studying software engineering will find this a distinctive and valuable approach to programming and systems engineering.
This textbook provides a comprehensive modeling, reformulation and optimization approach for solving production planning and supply chain planning problems, covering topics from a basic introduction to planning systems, mixed integer programming (MIP) models and algorithms through to the advanced description of mathematical results in polyhedral combinatorics required to solve these problems. Based on twenty years' worth of research in which the authors have played a significant role, the book addresses real-life industrial production planning problems (involving complex production structures with multiple production stages) using an MIP modeling and reformulation approach. The book provides an introduction to MIP modeling and to planning systems, a unique collection of reformulation results, and an easy-to-use problem-solving library. This approach is demonstrated through a series of real-life case studies, exercises and detailed illustrations. Review by Jakub Marecek (Computer Journal): "The emphasis put on mixed integer rounding and mixing sets, heuristics built into general-purpose integer programming solvers, as well as on decompositions and heuristics using integer programming should be praised... There is no doubt that this volume offers the present best introduction to integer programming formulations of lot-sizing problems, encountered in production planning." (2007)
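As a sketch of the kind of model treated in the book (the textbook single-item uncapacitated lot-sizing formulation, not quoted from it), with demand \(d_t\), production \(x_t\), setup indicator \(y_t\), stock \(s_t\), and unit costs \(p_t, f_t, h_t\):

```latex
\min \sum_{t=1}^{T} \bigl( p_t x_t + f_t y_t + h_t s_t \bigr)
\quad \text{s.t.} \quad
s_{t-1} + x_t = d_t + s_t, \qquad
x_t \le M y_t, \qquad
x_t, s_t \ge 0, \; y_t \in \{0,1\}, \quad t = 1, \dots, T.
```

Much of the reformulation machinery the book describes amounts to replacing the weak big-M linking constraint with stronger valid inequalities whose linear relaxation is closer to the convex hull of feasible solutions.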
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein's and Goedel's views on the nature of mathematics in the light of information theory, and sustains the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
For courses in C++ Programming Fundamentals of C++ for Novices and Experienced Programmers Alike Intended for use in a two-term, three-term, or accelerated one-term C++ programming sequence, this 9th Edition of Starting Out with C++: Early Objects introduces the fundamentals of C++ to novices and experienced students alike. In clear, easy-to-understand terms, the text introduces all of the necessary topics for beginning C++ programmers. Real-world examples allow students to apply their knowledge in understanding how, why, and when to implement the features of C++. The text is organised in a progressive, step-by-step fashion that allows for flexibility. Building on the popularity of previous editions, the 9th Edition has been updated and enhanced with new material, including C++11 topics and recent changes in technology.
Reasoning under uncertainty is always based on a specified language or formalism, including its particular syntax and semantics, but also on its associated inference mechanism. In the present volume of the handbook the last of these aspects, the algorithmic aspects of uncertainty calculi, is presented. Theory has sufficiently advanced to unfold some generally applicable fundamental structures and methods. On the other hand, particular features of specific formalisms and approaches to uncertainty of course still strongly influence the computational methods to be used. Both general and specific methods are included in this volume. Broadly speaking, symbolic or logical approaches to uncertainty and numerical approaches are often distinguished. Although this distinction is somewhat misleading, it is used as a means to structure the present volume. This is even to some degree reflected in the first two chapters, which treat fundamental, general methods of computation in systems designed to represent uncertainty. It was noted early on by Shenoy and Shafer that computations in different domains have an underlying common structure. Essentially, pieces of knowledge or information are to be combined together and then focused on some particular question or domain. This can be captured in an algebraic structure called a valuation algebra, which is described in the first chapter. Here the basic operations of combination and focusing (marginalization) of knowledge and information are modeled abstractly, subject to simple axioms.
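As a sketch in standard notation (not quoted from this volume): a valuation algebra assigns each valuation \(\varphi\) a domain \(d(\varphi)\) and provides combination \(\otimes\) and marginalization \(\downarrow\), subject to axioms such as:

```latex
d(\varphi \otimes \psi) = d(\varphi) \cup d(\psi), \qquad
\bigl(\varphi^{\downarrow y}\bigr)^{\downarrow x} = \varphi^{\downarrow x}
\;\; \text{for } x \subseteq y \subseteq d(\varphi), \qquad
(\varphi \otimes \psi)^{\downarrow d(\varphi)}
  = \varphi \otimes \psi^{\downarrow d(\varphi) \cap d(\psi)} .
```

The last (combination) axiom is what licenses local computation: a marginal of a combination can be computed without ever building the full joint valuation.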
This book covers the dominant theoretical approaches to the approximate solution of hard combinatorial optimization and enumeration problems. It contains elegant combinatorial theory, useful and interesting algorithms, and deep results about the intrinsic complexity of combinatorial problems. Its clarity of exposition and excellent selection of exercises will make it accessible and appealing to all those with a taste for mathematics and algorithms. (Richard Karp, University Professor, University of California at Berkeley) Following the development of basic combinatorial optimization techniques in the 1960s and 1970s, a main open question was to develop a theory of approximation algorithms. In the 1990s, parallel developments in techniques for designing approximation algorithms as well as methods for proving hardness of approximation results have led to a beautiful theory. The need to solve truly large instances of computationally hard problems, such as those arising from the Internet or the human genome project, has also increased interest in this theory. The field is currently very active, with the toolbox of approximation algorithm design techniques always getting richer. It is a pleasure to recommend Vijay Vazirani's well-written and comprehensive book on this important and timely topic. I am sure the reader will find it most useful both as an introduction to approximability as well as a reference to the many aspects of approximation algorithms. (László Lovász, Senior Researcher, Microsoft Research)
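As a classic concrete instance of the theory the book presents, here is the standard maximal-matching 2-approximation for minimum vertex cover (the example graph is an assumption): any cover must pick at least one endpoint of each matched edge, so taking both endpoints is at most twice optimal.

```python
# Classic 2-approximation for minimum vertex cover: take both endpoints
# of every edge not yet covered (i.e., of a greedily built maximal matching).

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.update((u, v))                # take both endpoints
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(sorted(vertex_cover_2approx(edges)))  # [0, 1, 2, 3]; optimum {0, 3} has size 2
```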
It is, indeed, widely accepted today that nowhere is it more important to focus on the improvement of software quality than in the case of systems with requirements in the areas of safety and reliability, especially for distributed, real-time and embedded systems. Thus, much research work is in progress in these fields, since software process improvement impinges directly on achieved levels of quality, and many application experiments aim to show quantitative results demonstrating the efficacy of particular approaches. Requirements for safety and reliability, like other so-called non-functional requirements for computer-based systems, are often stated in imprecise and ambiguous terms, or not at all. Specifications focus on functional and technical aspects, with issues like safety covered only implicitly, or not addressed directly because they are felt to be obvious; unfortunately, what is obvious to an end user or system user is progressively less so to others, to the extent that a software developer may not even be aware that safety is an issue. Therefore, there is growing evidence of the need to encourage greater understanding of safety and reliability requirements issues, right across the spectrum from end user to software developer; not just in traditional safety-critical areas (e.g. nuclear, aerospace) but also in acknowledging the need for such things as heart pacemakers and other medical and robotic systems to be highly dependable.