This book covers the dominant theoretical approaches to the approximate solution of hard combinatorial optimization and enumeration problems. It contains elegant combinatorial theory, useful and interesting algorithms, and deep results about the intrinsic complexity of combinatorial problems. Its clarity of exposition and excellent selection of exercises will make it accessible and appealing to all those with a taste for mathematics and algorithms. (Richard Karp, University Professor, University of California at Berkeley) Following the development of basic combinatorial optimization techniques in the 1960s and 1970s, a main open question was to develop a theory of approximation algorithms. In the 1990s, parallel developments in techniques for designing approximation algorithms as well as methods for proving hardness of approximation results have led to a beautiful theory. The need to solve truly large instances of computationally hard problems, such as those arising from the Internet or the human genome project, has also increased interest in this theory. The field is currently very active, with the toolbox of approximation algorithm design techniques always getting richer. It is a pleasure to recommend Vijay Vazirani's well-written and comprehensive book on this important and timely topic. I am sure the reader will find it most useful both as an introduction to approximability and as a reference to the many aspects of approximation algorithms. (László Lovász, Senior Researcher, Microsoft Research)
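As a taste of the subject matter (an illustration of ours, not an excerpt from the book), the classic factor-2 approximation for minimum vertex cover fits in a few lines of Python:

```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for minimum vertex cover.

    Repeatedly pick an uncovered edge and add both of its endpoints.
    The picked edges form a matching, and any cover must contain at
    least one endpoint of each matched edge, so the result is at most
    twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path 1-2-3-4: the algorithm may return {1, 2, 3, 4}; the optimum is {2, 3}.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```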
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein's and Gödel's views on the nature of mathematics in the light of information theory, and sustains the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
Algorithmic discrete mathematics plays a key role in the development of information and communication technologies, and methods that arise in computer science, mathematics and operations research (in particular in algorithms, computational complexity, distributed computing and optimization) are vital to modern services such as mobile telephony, online banking and VoIP. This book examines communication networking from a mathematical viewpoint. The contributing authors took part in the European COST Action 293, a four-year programme of multidisciplinary research on this subject. In this book they offer introductory overviews and state-of-the-art assessments of current and future research in the fields of broadband, optical, wireless and ad hoc networks. Particular topics of interest are design, optimization, robustness and energy consumption. The book will be of interest to graduate students, researchers and practitioners in the areas of networking, theoretical computer science, operations research, distributed computing and mathematics.
The importance of benchmarking in the service sector is well recognized, as it helps drive continuous improvement in products and work processes. Through benchmarking, companies have strived to implement best practices in order to remain competitive in the product market in which they operate. However, studies on benchmarking, particularly in the software development sector, have neglected using multiple variables and therefore have not been as comprehensive. Information Theory and Best Practices in the IT Industry fills this void by examining benchmarking in the business of software development and studying how it is affected by development process, application type, hardware platforms used, and many other variables. The book begins by examining practices of benchmarking productivity and critically appraising them. Next, it identifies the different variables that affect productivity and those that affect quality, developing useful equations that explain their relationships. Finally, these equations and findings are applied to case studies. Using this book, practitioners can decide what emphasis to attach to different variables in their own companies while seeking to optimize productivity and defect density.
These are my lecture notes from CS681: Design and Analysis of Algorithms, a one-semester graduate course I taught at Cornell for three consecutive fall semesters from '88 to '90. The course serves a dual purpose: to cover core material in algorithms for graduate students in computer science preparing for their PhD qualifying exams, and to introduce theory students to some advanced topics in the design and analysis of algorithms. The material is thus a mixture of core and advanced topics. At first I meant these notes to supplement and not supplant a textbook, but over the three years they gradually took on a life of their own. In addition to the notes, I depended heavily on the texts

* A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Addison-Wesley, 1975.
* M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
* R. E. Tarjan, Data Structures and Network Algorithms. SIAM Regional Conference Series in Applied Mathematics 44, 1983.

and still recommend them as excellent references.
The contributions in this volume are written by the foremost international researchers and practitioners in the GP arena. They examine the similarities and differences between theoretical and empirical results on real-world problems. The text explores the synergy between theory and practice, producing a comprehensive view of the state of the art in GP application. Topics include: FINCH: A System for Evolving Java, Practical Autoconstructive Evolution, The Rubik Cube and GP Temporal Sequence Learning, Ensemble classifiers: AdaBoost and Orthogonal Evolution of Teams, Self-modifying Cartesian GP, Abstract Expression Grammar Symbolic Regression, Age-Fitness Pareto Optimization, Scalable Symbolic Regression by Continuous Evolution, Symbolic Density Models, GP Transforms in Linear Regression Situations, Protein Interactions in a Computational Evolution System, Composition of Music and Financial Strategies via GP, and Evolutionary Art Using Summed Multi-Objective Ranks. Readers will discover large-scale, real-world applications of GP to a variety of problem domains via in-depth presentations of the latest and most significant results in GP.
Mathematical Summary for Digital Signal Processing Applications with Matlab covers mathematics that is not usually dealt with in the core DSP subject but is used in DSP applications. Matlab programs with illustrations are given for selected topics, such as the generation of multivariate Gaussian distributed sample outcomes, the bacterial foraging algorithm, Newton's iteration, and the steepest descent algorithm, each in a separate chapter. The book is written so as to be suitable for non-mathematical readers, and is particularly well suited to beginners doing research in digital signal processing.
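For instance, Newton's iteration, one of the topics listed above, can be sketched as follows (in Python rather than the book's Matlab; an illustration of ours, not one of the book's programs):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's iteration did not converge")

# Square root of 2 as the root of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.4142135623730951
```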
Describing a new optimization algorithm, "Teaching-Learning-Based Optimization (TLBO)," in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. The algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners' results. The elitist version of the TLBO algorithm (ETLBO) is described, along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and use of advanced optimization algorithms.
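A minimal sketch of the basic TLBO scheme as commonly described in the literature (an illustrative toy of ours, not the book's ETLBO implementation) might look like this:

```python
import random

def tlbo(f, dim, bounds, pop_size=20, iters=100):
    """Minimal Teaching-Learning-Based Optimization sketch (minimization).

    Teacher phase: each learner moves toward the best solution and away
    from the class mean. Learner phase: each learner moves toward (or
    away from) a random classmate depending on who is fitter. A move is
    kept only if it improves the learner.
    """
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):  # teacher phase
            tf = random.choice((1, 2))  # teaching factor
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
        for i, x in enumerate(pop):  # learner phase
            other = pop[random.randrange(pop_size)]
            sign = 1.0 if f(other) < f(x) else -1.0
            cand = [x[d] + sign * random.random() * (other[d] - x[d])
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

# Minimize the sphere function in 5 dimensions.
print(tlbo(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0)))
```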
In "Physical Unclonable Functions in Theory and Practice," the authorspresent an in-depth overview ofvarious topics concerning PUFs, providing theoretical background and application details. This book concentrates on the practical issues of PUF hardware design, focusing on dedicated microelectronic PUF circuits. Additionally, the authors discuss the whole process of circuit design, layout and chip verification. The book also offers coverage of: Different published approaches focusing on dedicated microelectronic PUF circuits Specification of PUF circuits General design issues Minimizing error rate from the circuit s perspective Transistor modeling issues of Montecarlo mismatch simulation and solutions Examples of PUF circuits including an accurate description of the circuits and testing/measurement resultsDifferent error rate reducing pre-selection techniques This monographgives insight into PUFs in general and provides knowledge in the field of PUF circuit design and implementation. It could be of interest for all circuit designers confronted with PUF design, and also for professionals and students being introduced to the topic."
Floating-point arithmetic is the most widely used way of implementing real-number arithmetic on modern computers. However, making such an arithmetic reliable and portable, yet fast, is a very difficult task. As a result, floating-point arithmetic is far from being exploited to its full potential. This handbook aims to provide a complete overview of modern floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program. The handbook is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research.
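A small Python illustration of the kind of pitfall and remedy such a handbook addresses (our example, not taken from the book):

```python
import math

# 0.1 has no finite binary representation, so each literal below is rounded.
print(0.1 + 0.2 == 0.3)       # False: 0.1 + 0.2 == 0.30000000000000004
print(sum([0.1] * 10))        # 0.9999999999999999: rounding errors accumulate
print(math.fsum([0.1] * 10))  # 1.0: exact summation, correctly rounded result
print(math.ulp(1.0))          # 2.220446049250313e-16, float spacing near 1.0
```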
It gives me immense pleasure to introduce this timely handbook to the research and development communities in the field of signal processing systems (SPS). It is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the IC in the 1950s by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components are made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law has turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer science and Internet technologies have also catalyzed large-scale system integration. All this has led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospects of humanity. (It is hard to imagine life today without mobile phones or the Internet.) The success of SPS requires a well-concerted, integrated approach from multiple disciplines, such as device, design, and application.
Object-oriented database management systems (OODBMS) are used to implement and maintain large object databases on persistent storage. Regardless of whether the underlying database model follows the object-oriented, the relational or the object-relational paradigm, a key feature of any DBMS product is content-based access to data sets. On the one hand this feature provides user-friendly query interfaces based on predicates that describe the desired data. On the other hand it poses challenging questions regarding DBMS design and implementation as well as the application development process on top of the DBMS. The reason for the latter is that the actual query performance depends on a technically meaningful use of access support mechanisms. In particular, if chosen and applied properly, such a mechanism speeds up the execution of predicate-based queries. In the object-oriented world, such queries may involve arbitrarily complex terms referring to inheritance hierarchies and aggregation paths. These features are attractive at the application level; however, they increase the complexity of appropriate access support mechanisms, which are known to be technically non-trivial in the relational world.
THIS BOOK CONTAINS a most comprehensive text that presents syntax-directed and compositional methods for the formal verification of programs. The approach is not language-bound, in the sense that it covers a large variety of programming models and features that appear in most modern programming languages. It covers the classes of sequential and parallel, deterministic and non-deterministic, distributed and object-oriented programs. For each of the classes it presents the various criteria of correctness that are relevant for these classes, such as interference freedom, deadlock freedom, and appropriate notions of liveness for parallel programs. Also, special proof rules appropriate for each class of programs are presented. In spite of this diversity due to the rich program classes considered, there exists a uniform underlying theory of verification which is syntax-oriented and promotes compositional approaches to verification, leading to scalability of the methods. The text strikes the proper balance between mathematical rigor and didactic introduction of increasingly complex rules in an incremental manner, adequately supported by state-of-the-art examples. As a result it can serve as a textbook for a variety of courses at different levels and of varying durations. It can also serve as a reference book for researchers in the theory of verification, in particular since it contains much material that has never before appeared in book form. This is especially true for the treatment of object-oriented programs, which is entirely novel and strikingly elegant.
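One of the correctness criteria named above, interference freedom, can be made concrete with the standard rule for parallel composition in the Owicki-Gries style. The following is a common textbook formulation, offered for orientation rather than as the book's exact rule:

```latex
% Parallel composition with interference freedom (Owicki-Gries style):
% when the component proof outlines cannot invalidate one another's
% assertions, their pre- and postconditions may simply be conjoined.
\[
\frac{\{p_1\}\, S_1\, \{q_1\},\ \ldots,\ \{p_n\}\, S_n\, \{q_n\}
      \quad\text{are interference-free proof outlines}}
     {\{p_1 \wedge \cdots \wedge p_n\}\ [S_1 \parallel \cdots \parallel S_n]\ \{q_1 \wedge \cdots \wedge q_n\}}
\]
```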
Graph Separators with Applications is devoted to techniques for obtaining upper and lower bounds on the sizes of graph separators, upper bounds being obtained via decomposition algorithms. The book surveys the main approaches to obtaining good graph separations, while its main focus is on techniques for deriving lower bounds on the sizes of graph separators. This asymmetry in focus reflects our perception that the work on upper bounds, or algorithms, for graph separation is much better represented in the standard theory literature than is the work on lower bounds, which we perceive as being much more scattered throughout the literature on application areas. Given the multitude of notions of graph separator that have been developed and studied over the past (roughly) three decades, there is a need for a central, theory-oriented repository for the mass of results. The need is absolutely critical in the area of lower-bound techniques for graph separators, since these techniques have virtually never appeared in articles having the word 'separator' or any of its near-synonyms in the title. Graph Separators with Applications fills this need.
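For orientation, here is one standard notion of a balanced separator together with a classical upper bound; these are background facts, not material quoted from the book:

```latex
% S is a (2/3-balanced) separator of G = (V, E) if removing S splits the
% rest of the graph into two parts with no edges between them:
\[
V \setminus S = A \uplus B, \qquad |A|, |B| \le \tfrac{2}{3}|V|,
\qquad \text{no edge of } E \text{ joins } A \text{ and } B.
\]
% A classical upper bound: every n-vertex planar graph has such a
% separator with |S| = O(\sqrt{n}) (Lipton and Tarjan, 1979).
```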
Bioinspired computation methods such as evolutionary algorithms and ant colony optimization are being applied successfully to complex engineering problems and to problems from combinatorial optimization, and with this comes the requirement to more fully understand the computational complexity of these search heuristics. This is the first textbook covering the most important results achieved in this area. The authors study the computational complexity of bioinspired computation and show how runtime behavior can be analyzed in a rigorous way using some of the best-known combinatorial optimization problems -- minimum spanning trees, shortest paths, maximum matching, covering and scheduling problems. A feature of the book is the separate treatment of single- and multiobjective problems, the latter a domain where the development of the underlying theory seems to be lagging practical successes. This book will be very valuable for teaching courses on bioinspired computation and combinatorial optimization. Researchers will also benefit as the presentation of the theory covers the most important developments in the field over the last 10 years. Finally, with a focus on well-studied combinatorial optimization problems rather than toy problems, the book will also be very valuable for practitioners in this field.
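The kind of algorithm analyzed in this line of work fits in a few lines. The following (1+1) evolutionary algorithm on the OneMax problem is the standard entry point of runtime analysis (our illustration, not code from the book):

```python
import random

def one_plus_one_ea(n, fitness, optimum, max_steps=10**6):
    """(1+1) EA: flip each bit independently with probability 1/n and
    keep the offspring if it is at least as fit. On OneMax (count the
    ones) the expected number of steps to the optimum is O(n log n)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for step in range(1, max_steps + 1):
        y = [b ^ 1 if random.random() < 1 / n else b for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == optimum:
            return step  # number of iterations until the optimum was hit
    return max_steps

# OneMax: fitness is the number of ones; the optimal value is n.
print(one_plus_one_ea(50, fitness=sum, optimum=50))
```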
Numbers, Information and Complexity is a collection of about 50 articles in honour of Rudolf Ahlswede. His main areas of research are represented in the three sections, 'Numbers and Combinations', 'Information Theory (Channels and Networks, Combinatorial and Algebraic Coding, Cryptology, with the related fields Data Compression, Entropy Theory, Symbolic Dynamics, Probability and Statistics)', and 'Complexity'. Special attention was paid to the interplay between the fields. Surveys on topics of current interest are included as well as new research results. The book features surveys on combinatorial topics, such as intersection theorems, that are not yet covered in textbooks, several contributions by leading experts in data compression, and a discussion of relations to the natural sciences.
This book represents the refereed proceedings of the Ninth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at the University of Warsaw (Poland) in August 2010. These biennial conferences are major events for Monte Carlo and the premier events for quasi-Monte Carlo research. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. The reader is provided with information on the latest developments in these very active areas. The book is an excellent reference for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance and statistics.
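As background for readers new to the area (our illustration, not material from the proceedings), the plain Monte Carlo estimator at the heart of these conferences is tiny:

```python
import random

def mc_estimate(f, dim, n):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dim.
    The error shrinks like O(n^(-1/2)) regardless of dimension, which is
    why the method suits high-dimensional problems; quasi-Monte Carlo
    replaces the random points with low-discrepancy sequences to obtain
    better rates for sufficiently smooth integrands."""
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n

# The integral of f(x) = x_1 + ... + x_10 over the 10-dim unit cube is 5.
print(mc_estimate(sum, dim=10, n=100_000))  # close to 5.0
```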
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
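To give a flavor of the functional view of parsing, here is a minimal parser-combinator sketch in Python. It is a far simpler device than the recursive ascent parsers the book develops, and is offered only as an illustration of parsers as first-class functions:

```python
# A parser is a function: (string, position) -> set of positions after a match.
def lit(c):
    return lambda s, i: {i + 1} if i < len(s) and s[i] == c else set()

def seq(p, q):  # p followed by q
    return lambda s, i: {k for j in p(s, i) for k in q(s, j)}

def alt(p, q):  # p or q
    return lambda s, i: p(s, i) | q(s, i)

def parses(p, s):  # does p match all of s?
    return len(s) in p(s, 0)

# Grammar for the language a^n b^n (n >= 1): S -> 'a' S 'b' | 'a' 'b'
def S(s, i):
    return alt(seq(lit('a'), seq(S, lit('b'))), seq(lit('a'), lit('b')))(s, i)

print(parses(S, "aaabbb"))  # True
print(parses(S, "aabbb"))   # False
```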
Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals, temporal reasoning and maintenance in medicine, time in clinical tasks, and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of the book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
The research and outcomes presented in this collection focus on various aspects of high-performance computing (HPC) software and its development, which is confronted with numerous challenges as today's supercomputer technology heads towards exascale computing. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The collection thereby highlights pioneering research findings as well as innovative concepts in exascale software development that have been conducted under the umbrella of the priority programme "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG) and that were presented at the SPPEXA Symposium, January 25-27, 2016, in Munich. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.
This thesis introduces novel and significant results regarding the analysis and synthesis of positive systems, especially under l1 and L1 performance. It describes stability analysis, controller synthesis, and bounding positivity-preserving observer and filtering design for a variety of both discrete and continuous positive systems. It subsequently derives computationally efficient solutions based on linear programming in terms of matrix inequalities, as well as a number of analytical solutions obtained for special cases. The thesis applies a range of novel approaches and fundamental techniques to the further study of positive systems, thus contributing significantly to the theory of positive systems, a "hot topic" in the field of control.
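For readers new to the area, the following standard background fact (not a result of the thesis itself) indicates why linear programming suffices for positive systems:

```latex
% For a discrete-time positive system, stability admits a linear test:
% if A is elementwise nonnegative, asymptotic stability of
%     x(k+1) = A x(k)
% is equivalent to the existence of a linear copositive Lyapunov
% function V(x) = v^T x:
\[
\rho(A) < 1 \iff \exists\, v > 0 :\ A^{\mathsf{T}} v < v,
\]
% a condition checkable by linear programming, in contrast to the
% semidefinite programming needed for general linear systems.
```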
Chaos-based cryptography, which has attracted many researchers in the past decade, is a research field spanning two disciplines: chaos (nonlinear dynamical systems) and cryptography (computer and data security). Chaos properties such as randomness and ergodicity have been shown to be suitable for designing means of data protection. The book gives a thorough description of chaos-based cryptography, covering basic chaos theory, chaos properties suitable for cryptography, chaos-based cryptographic techniques, and various secure applications based on chaos. Additionally, it covers both the latest research results and some open issues and hot topics. The book is a collection of high-quality chapters contributed by leading experts in the related fields. It embraces a wide variety of aspects of the related subject areas and provides a scientifically and scholarly sound treatment of state-of-the-art techniques for students, researchers, academics, law enforcement personnel and IT practitioners who are interested or involved in the study, research, use, design and development of techniques related to chaos-based cryptography.
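To see why chaotic maps attract cryptographers, consider a deliberately naive Python toy built on the logistic map. It is emphatically not a secure design; it only exhibits the pseudo-randomness and key sensitivity mentioned above:

```python
def logistic_keystream(x0, r=3.99, n=16):
    """Toy keystream from the logistic map x <- r*x*(1-x), chaotic for r
    near 4: iterates look random, and tiny changes in the 'key' x0
    diverge quickly. A didactic toy only, NOT a secure cipher."""
    x = x0
    out = []
    for _ in range(n):
        for _ in range(10):  # extra iterations between emitted bytes
            x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # crude quantization to one byte
    return bytes(out)

key = 0.3141592653589793
ct = bytes(p ^ k for p, k in zip(b"chaotic cipher!", logistic_keystream(key)))
pt = bytes(c ^ k for c, k in zip(ct, logistic_keystream(key)))  # XOR twice decrypts
print(pt)  # b'chaotic cipher!'
```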
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows proofs that prediction is impossible in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
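The flavor of these algorithms can be conveyed by a small sketch. The following simplified conformal predictor uses a 1-nearest-neighbour nonconformity score (an illustrative toy of ours, not the book's algorithms):

```python
def conformal_predict(train, x_new, labels, eps=0.25):
    """Return every label whose conformal p-value exceeds eps. Under
    exchangeability, the true label is excluded with probability at
    most eps. Nonconformity score: distance to the nearest other
    example carrying the same label."""
    def score(point, label, bag):
        same = [abs(point - p) for p, l in bag
                if l == label and (p, l) != (point, label)]
        return min(same) if same else float("inf")

    prediction = set()
    for y in labels:
        bag = train + [(x_new, y)]  # tentatively give the new point label y
        s_new = score(x_new, y, bag)
        scores = [score(p, l, bag) for p, l in bag]
        p_value = sum(s >= s_new for s in scores) / len(bag)
        if p_value > eps:
            prediction.add(y)
    return prediction

train = [(1.0, "a"), (1.2, "a"), (5.0, "b"), (5.3, "b")]
print(conformal_predict(train, 1.1, labels=("a", "b")))  # {'a'}
```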