Algorithmic discrete mathematics plays a key role in the development of information and communication technologies, and methods that arise in computer science, mathematics and operations research (in particular in algorithms, computational complexity, distributed computing and optimization) are vital to modern services such as mobile telephony, online banking and VoIP. This book examines communication networking from a mathematical viewpoint. The contributing authors took part in the European COST Action 293, a four-year program of multidisciplinary research on this subject. In this book they offer introductory overviews and state-of-the-art assessments of current and future research in the fields of broadband, optical, wireless and ad hoc networks. Particular topics of interest are design, optimization, robustness and energy consumption. The book will be of interest to graduate students, researchers and practitioners in the areas of networking, theoretical computer science, operations research, distributed computing and mathematics.
The importance of benchmarking in the service sector is well recognized, as it helps drive continuous improvement in products and work processes. Through benchmarking, companies have strived to implement best practices in order to remain competitive in the product market in which they operate. However, studies on benchmarking, particularly in the software development sector, have neglected the use of multiple variables and have therefore not been comprehensive. Information Theory and Best Practices in the IT Industry fills this void by examining benchmarking in the business of software development and studying how it is affected by development process, application type, hardware platforms used, and many other variables. The book begins by examining practices for benchmarking productivity and critically appraises them. Next, it identifies the different variables that affect productivity and quality, developing useful equations that explain their relationships. Finally, these equations and findings are applied to case studies. Using this book, practitioners can decide what emphasis to attach to different variables in their own companies while seeking to optimize productivity and defect density.
Floating-point arithmetic is the most widely used way of implementing real-number arithmetic on modern computers. However, making such an arithmetic reliable and portable, yet fast, is a very difficult task. As a result, floating-point arithmetic is far from being exploited to its full potential. This handbook aims to provide a complete overview of modern floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program. The handbook is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research.
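The handbook's own examples aside, here is a minimal sketch (in Python, written for this overview and not taken from the book) of two classic pitfalls that motivate careful floating-point design: decimal fractions with no exact binary representation, and small addends absorbed by a large running sum, with Kahan compensated summation as a standard remedy.

def naive_sum(values):
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    # Compensated summation: carry the rounding error of each addition forward.
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y   # the low-order bits lost in computing total + y
        total = t
    return total

if __name__ == "__main__":
    print(0.1 + 0.2 == 0.3)              # False: 0.1 and 0.2 are not exact in binary
    data = [1.0] + [1e-16] * 1_000_000   # exact sum is 1.0000000001
    print(naive_sum(data))               # 1.0: every tiny addend is absorbed
    print(kahan_sum(data))               # ~1.0000000001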
These are my lecture notes from CS681: Design and Analysis of Algorithms, a one-semester graduate course I taught at Cornell for three consecutive fall semesters from '88 to '90. The course serves a dual purpose: to cover core material in algorithms for graduate students in computer science preparing for their PhD qualifying exams, and to introduce theory students to some advanced topics in the design and analysis of algorithms. The material is thus a mixture of core and advanced topics. At first I meant these notes to supplement and not supplant a textbook, but over the three years they gradually took on a life of their own. In addition to the notes, I depended heavily on the texts * A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Addison-Wesley, 1975. * M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979. * R. E. Tarjan, Data Structures and Network Algorithms. SIAM Regional Conference Series in Applied Mathematics 44, 1983. and still recommend them as excellent references.
It gives me immense pleasure to introduce this timely handbook to the research and development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the integrated circuit in the 1950s by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since then grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer sciences and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospect of humanity. (It is hard to imagine life today without mobiles or the Internet.) The success of SPS requires a well-concerted integrated approach from multiple disciplines, such as device, design, and application.
Describing a new optimization algorithm, the "Teaching-Learning-Based Optimization (TLBO)," in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners' results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
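As a rough illustration of that principle (a sketch in Python, not the authors' code), the snippet below runs TLBO's teacher and learner phases on the sphere function; the population size, search bounds and test function are assumptions chosen for the example.

import random

def sphere(x):
    return sum(v * v for v in x)

def tlbo_step(pop, f):
    # One teacher phase plus one learner phase over the whole population.
    dim = len(pop[0])
    # Teacher phase: pull each learner toward the best solution (the teacher),
    # relative to a scaled population mean.
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    tf = random.choice((1, 2))                       # teaching factor
    for i, x in enumerate(pop):
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        if f(cand) < f(x):                           # greedy acceptance
            pop[i] = cand
    # Learner phase: each learner interacts with a randomly chosen classmate.
    for i, x in enumerate(pop):
        other = pop[random.choice([k for k in range(len(pop)) if k != i])]
        sign = 1.0 if f(x) < f(other) else -1.0
        cand = [x[d] + sign * random.random() * (x[d] - other[d])
                for d in range(dim)]
        if f(cand) < f(x):
            pop[i] = cand
    return pop

if __name__ == "__main__":
    population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
    for _ in range(100):
        population = tlbo_step(population, sphere)
    print(min(sphere(x) for x in population))        # close to 0 after a few steps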
Mathematical summary for Digital Signal Processing Applications with Matlab covers the mathematics that is not usually dealt with in the core DSP subject but is used in DSP applications. Matlab programs with illustrations are given in a separate chapter for selected topics such as the generation of multivariate Gaussian distributed sample outcomes, the bacterial foraging algorithm, Newton's iteration and the steepest descent algorithm. The book is written in such a way that it is suitable for non-mathematical readers and for beginners doing research in Digital Signal Processing.
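The book's programs are written in Matlab; purely as an illustration of two of the schemes named above, here is a short Python sketch of Newton's iteration (computing a square root) and fixed-step steepest descent on a toy quadratic chosen for the example.

def newton_sqrt(a, x0=1.0, iters=20):
    # Newton's iteration for f(x) = x^2 - a, i.e. an approximation of sqrt(a).
    x = x0
    for _ in range(iters):
        x = x - (x * x - a) / (2 * x)
    return x

def steepest_descent(grad, x0, step=0.1, iters=200):
    # Fixed-step steepest descent: repeatedly move against the gradient.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

if __name__ == "__main__":
    print(newton_sqrt(2.0))                        # ~1.41421356
    # Minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2; its gradient is given below.
    grad = lambda p: [2 * (p[0] - 1), 4 * (p[1] + 3)]
    print(steepest_descent(grad, [0.0, 0.0]))      # ~[1.0, -3.0]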
This book represents the refereed proceedings of the Ninth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at the University of Warsaw (Poland) in August 2010. These biennial conferences are major events for Monte Carlo and the premier event for quasi-Monte Carlo research. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. The reader is provided with information on the latest developments in these very active areas. The book is an excellent reference for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance and statistics.
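As a toy contrast between the two approaches (an illustration written for this overview, not drawn from the proceedings), the sketch below estimates the one-dimensional integral of x^2 over [0, 1] (exact value 1/3) with pseudo-random Monte Carlo points and with a van der Corput low-discrepancy sequence.

import random

def van_der_corput(n, base=2):
    # Radical inverse of n: the classic one-dimensional low-discrepancy sequence.
    q, denom = 0.0, base
    while n > 0:
        q += (n % base) / denom
        n //= base
        denom *= base
    return q

f = lambda x: x * x
N = 4096
mc = sum(f(random.random()) for _ in range(N)) / N
qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
print(abs(mc - 1 / 3), abs(qmc - 1 / 3))   # the quasi-Monte Carlo error is usually far smaller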
Graph Separators with Applications is devoted to techniques for obtaining upper and lower bounds on the sizes of graph separators - upper bounds being obtained via decomposition algorithms. The book surveys the main approaches to obtaining good graph separations, while the main focus of the book is on techniques for deriving lower bounds on the sizes of graph separators. This asymmetry in focus reflects our perception that the work on upper bounds, or algorithms, for graph separation is much better represented in the standard theory literature than is the work on lower bounds, which we perceive as being much more scattered throughout the literature on application areas. Given the multitude of notions of graph separator that have been developed and studied over the past (roughly) three decades, there is a need for a central, theory-oriented repository for the mass of results. The need is absolutely critical in the area of lower-bound techniques for graph separators, since these techniques have virtually never appeared in articles having the word 'separator' or any of its near-synonyms in the title. Graph Separators with Applications fills this need.
Object-oriented database management systems (OODBMS) are used to implement and maintain large object databases on persistent storage. Regardless of whether the underlying database model follows the object-oriented, the relational or the object-relational paradigm, a key feature of any DBMS product is content-based access to data sets. On the one hand, this feature provides user-friendly query interfaces based on predicates that describe the desired data. On the other hand, it poses challenging questions regarding DBMS design and implementation, as well as the application development process on top of the DBMS. The reason for the latter is that the actual query performance depends on a technically meaningful use of access support mechanisms. In particular, if chosen and applied properly, such a mechanism speeds up the execution of predicate-based queries. In the object-oriented world, such queries may involve arbitrarily complex terms referring to inheritance hierarchies and aggregation paths. These features are attractive at the application level; however, they increase the complexity of appropriate access support mechanisms, which are known to be technically non-trivial in the relational world.
This book contains a most comprehensive text that presents syntax-directed and compositional methods for the formal verification of programs. The approach is not language-bounded in the sense that it covers a large variety of programming models and features that appear in most modern programming languages. It covers the classes of sequential and parallel, deterministic and non-deterministic, distributed and object-oriented programs. For each of the classes it presents the various criteria of correctness that are relevant for these classes, such as interference freedom, deadlock freedom, and appropriate notions of liveness for parallel programs. Also, special proof rules appropriate for each class of programs are presented. In spite of this diversity due to the rich program classes considered, there exists a uniform underlying theory of verification which is syntax-oriented and promotes compositional approaches to verification, leading to scalability of the methods. The text strikes the proper balance between mathematical rigor and didactic introduction of increasingly complex rules in an incremental manner, adequately supported by state-of-the-art examples. As a result it can serve as a textbook for a variety of courses on different levels and of varying durations. It can also serve as a reference book for researchers in the theory of verification, in particular since it contains much material that has never before appeared in book form. This is especially true for the treatment of object-oriented programs, which is entirely novel and strikingly elegant.
In "Physical Unclonable Functions in Theory and Practice," the authorspresent an in-depth overview ofvarious topics concerning PUFs, providing theoretical background and application details. This book concentrates on the practical issues of PUF hardware design, focusing on dedicated microelectronic PUF circuits. Additionally, the authors discuss the whole process of circuit design, layout and chip verification. The book also offers coverage of: Different published approaches focusing on dedicated microelectronic PUF circuits Specification of PUF circuits General design issues Minimizing error rate from the circuit s perspective Transistor modeling issues of Montecarlo mismatch simulation and solutions Examples of PUF circuits including an accurate description of the circuits and testing/measurement resultsDifferent error rate reducing pre-selection techniques This monographgives insight into PUFs in general and provides knowledge in the field of PUF circuit design and implementation. It could be of interest for all circuit designers confronted with PUF design, and also for professionals and students being introduced to the topic."
Numbers, Information and Complexity is a collection of about 50 articles in honour of Rudolf Ahlswede. His main areas of research are represented in the three sections, `Numbers and Combinations', `Information Theory (Channels and Networks, Combinatorial and Algebraic Coding, Cryptology, with the related fields Data Compression, Entropy Theory, Symbolic Dynamics, Probability and Statistics)', and `Complexity'. Special attention was paid to the interplay between the fields. Surveys on topics of current interest are included as well as new research results. The book features surveys on combinatorics, covering topics such as intersection theorems that are not yet treated in textbooks, several contributions by leading experts in data compression, and discussions of the relations to the natural sciences.
Chaos-based cryptography, which has attracted many researchers in the past decade, is a research field spanning two areas: chaos (nonlinear dynamic systems) and cryptography (computer and data security). Chaos' properties, such as randomness and ergodicity, have been proved to be suitable for designing means of data protection. The book gives a thorough description of chaos-based cryptography, covering basic chaos theory, chaos properties suitable for cryptography, chaos-based cryptographic techniques, and various secure applications based on chaos. Additionally, it covers both the latest research results and some open issues and hot topics. The book is a collection of high-quality chapters contributed by leading experts in the related fields. It embraces a wide variety of aspects of the related subject areas and provides a scientifically and scholarly sound treatment of state-of-the-art techniques for students, researchers, academics, personnel of law enforcement and IT practitioners who are interested or involved in the study, research, use, design and development of techniques related to chaos-based cryptography.
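Purely as an illustration of the underlying idea, and emphatically not a secure design, the Python sketch below iterates the logistic map and uses its orbit as a keystream for a toy XOR cipher; the map parameter, key and message are invented for the example, and naive schemes of this kind are known to be breakable.

def logistic_keystream(key, length, r=3.9999):
    # Iterate the chaotic logistic map x -> r*x*(1 - x), seeded by key in (0, 1).
    x = key
    for _ in range(100):                  # discard the initial transient
        x = r * x * (1.0 - x)
    stream = bytearray()
    for _ in range(length):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_cipher(data, key):
    ks = logistic_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

if __name__ == "__main__":
    msg = b"chaos-based cryptography"
    ct = xor_cipher(msg, key=0.3141592653589793)
    print(ct.hex())
    print(xor_cipher(ct, key=0.3141592653589793))   # XOR is symmetric: recovers msg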
Bioinspired computation methods such as evolutionary algorithms and ant colony optimization are being applied successfully to complex engineering problems and to problems from combinatorial optimization, and with this comes the requirement to more fully understand the computational complexity of these search heuristics. This is the first textbook covering the most important results achieved in this area. The authors study the computational complexity of bioinspired computation and show how runtime behavior can be analyzed in a rigorous way using some of the best-known combinatorial optimization problems -- minimum spanning trees, shortest paths, maximum matching, covering and scheduling problems. A feature of the book is the separate treatment of single- and multiobjective problems, the latter a domain where the development of the underlying theory seems to be lagging practical successes. This book will be very valuable for teaching courses on bioinspired computation and combinatorial optimization. Researchers will also benefit as the presentation of the theory covers the most important developments in the field over the last 10 years. Finally, with a focus on well-studied combinatorial optimization problems rather than toy problems, the book will also be very valuable for practitioners in this field.
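One of the canonical objects of such runtime analyses is the (1+1) evolutionary algorithm on the OneMax problem, whose expected optimization time is Theta(n log n); the Python sketch below (an illustration, not code from the book) implements it so the number of iterations to reach the optimum can be observed empirically.

import random

def one_max(bits):
    return sum(bits)

def one_plus_one_ea(n, max_iters=1_000_000):
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        # Standard bit mutation: flip each bit independently with probability 1/n.
        y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
        if one_max(y) >= one_max(x):       # accept the offspring if it is not worse
            x = y
        if one_max(x) == n:
            return t                       # iterations until the all-ones optimum
    return max_iters

if __name__ == "__main__":
    print(one_plus_one_ea(100))            # typically on the order of n * ln(n)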
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
The research and its outcomes presented in this collection focus on various aspects of high-performance computing (HPC) software and its development which is confronted with various challenges as today's supercomputer technology heads towards exascale computing. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The collection thereby highlights pioneering research findings as well as innovative concepts in exascale software development that have been conducted under the umbrella of the priority programme "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG) and that have been presented at the SPPEXA Symposium, Jan 25-27 2016, in Munich. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of the book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows for proofs of the impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
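As a small illustration of how such predictions carry validity guarantees under the randomness assumption alone, the Python sketch below implements split (inductive) conformal regression, one of the simplest instances of the framework; the data and the trivial least-squares point predictor are assumptions made for the example.

import math
import random

def split_conformal_interval(train, calib, x_new, alpha=0.1):
    # Prediction interval for y at x_new with roughly (1 - alpha) coverage.
    # Point predictor fitted on the proper training set: y ~ a*x + b.
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    a = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
    b = my - a * mx
    predict = lambda x: a * x + b
    # Nonconformity scores on the calibration set: absolute residuals.
    scores = sorted(abs(y - predict(x)) for x, y in calib)
    k = math.ceil((len(calib) + 1) * (1 - alpha)) - 1     # conformal quantile index
    q = scores[min(k, len(scores) - 1)]
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

if __name__ == "__main__":
    data = [(x, 2 * x + 1 + random.gauss(0, 0.5))
            for x in (random.uniform(0, 10) for _ in range(400))]
    lo, hi = split_conformal_interval(data[:200], data[200:], x_new=5.0)
    print(lo, hi)   # an interval around ~11 covering the true y with ~90% probability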
This thesis introduces a new integrated algorithm for the detection of lane-level irregular driving. To date, there has been very little improvement in the ability to detect lane level irregular driving styles, mainly due to a lack of high performance positioning techniques and suitable driving pattern recognition algorithms. The algorithm combines data from the Global Positioning System (GPS), Inertial Measurement Unit (IMU) and lane information using advanced filtering methods. The vehicle state within a lane is estimated using a Particle Filter (PF) and an Extended Kalman Filter (EKF). The state information is then used within a novel Fuzzy Inference System (FIS) based algorithm to detect different types of irregular driving. Simulation and field trial results are used to demonstrate the accuracy and reliability of the proposed irregular driving detection method.
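As a toy illustration of one ingredient of that pipeline, the Python sketch below runs a one-dimensional bootstrap particle filter (predict, weight by measurement likelihood, resample); the motion model, noise levels and measurements are invented here and are far simpler than the GPS/IMU/lane fusion developed in the thesis.

import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.2, meas_noise=0.5):
    # Predict: move each particle according to the control input plus process noise.
    particles = [p + control + random.gauss(0, motion_noise) for p in particles]
    # Update: weight each particle by the Gaussian likelihood of the measurement.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
               for p in particles]
    if sum(weights) == 0.0:               # degenerate case: fall back to uniform weights
        weights = [1.0] * len(particles)
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

if __name__ == "__main__":
    true_pos = 0.0
    particles = [random.uniform(-5, 5) for _ in range(500)]
    for _ in range(30):
        true_pos += 1.0                                # vehicle advances 1 m per step
        z = true_pos + random.gauss(0, 0.5)            # noisy position fix
        particles = particle_filter_step(particles, 1.0, z)
    print(sum(particles) / len(particles), true_pos)   # the estimate tracks the truth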
The 14 contributed chapters in this book survey the most recent developments in high-performance algorithms for NGS data, offering fundamental insights and technical information specifically on indexing, compression and storage; error correction; alignment; and assembly. The book will be of value to researchers, practitioners and students engaged with bioinformatics, computer science, mathematics, statistics and life sciences.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses the societal effects of black-box versus transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
Written for developers with some understanding of deep learning algorithms. Experience with reinforcement learning is not required. Grokking Deep Reinforcement Learning introduces this powerful machine learning approach, using examples, illustrations, exercises, and crystal-clear teaching. You'll love the perfectly paced teaching and the clever, engaging writing style as you dig into this awesome exploration of reinforcement learning fundamentals, effective deep learning techniques, and practical applications in this emerging field. We all learn through trial and error. We avoid the things that cause us to experience pain and failure. We embrace and build on the things that give us reward and success. This common pattern is the foundation of deep reinforcement learning: building machine learning systems that explore and learn based on the responses of the environment.
* Foundational reinforcement learning concepts and methods
* The most popular deep reinforcement learning agents solving high-dimensional environments
* Cutting-edge agents that emulate human-like behavior and techniques for artificial general intelligence
Deep reinforcement learning is a form of machine learning in which AI agents learn optimal behavior on their own from raw sensory input. The system perceives the environment, interprets the results of its past decisions and uses this information to optimize its behavior for maximum long-term return.
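As a minimal illustration of that trial-and-error loop (tabular Q-learning on a five-state corridor rather than a deep agent, and not an example from the book), the Python sketch below shows the perceive, act, and learn-from-reward cycle.

import random

N_STATES, ACTIONS = 5, (0, 1)            # action 0 moves left, action 1 moves right
q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0       # reward only at the goal
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma, eps = 0.1, 0.9, 0.3
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration: sometimes act randomly, otherwise act greedily.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: move toward reward plus discounted best future value.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

print([round(max(row), 2) for row in q])  # values grow toward the goal (terminal stays 0)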
The book presented to the reader is devoted to time-dependent scheduling. Scheduling problems, in general, consist in the allocation of resources over time in order to perform a set of jobs. Any allocation that meets all requirements concerning the jobs and resources is called a feasible schedule. The quality of a schedule is measured by a criterion function. The aim of scheduling is to find, among all feasible schedules, a schedule that optimizes the criterion function. A solution to an arbitrary scheduling problem consists in giving a polynomial-time algorithm generating either an optimal schedule or a schedule that is close to the optimal one, if the given scheduling problem has been proved to be computationally intractable. Scheduling problems are the subject of scheduling theory, which originated in the mid-fifties of the twentieth century. The theory has been developing dynamically and new research areas constantly come into existence. The subject of this book, time-dependent scheduling, is one such area. In time-dependent scheduling, the processing time of a job is variable and depends on the starting time of the job. This crucial assumption allows us to apply scheduling theory to a broader spectrum of problems. For example, in the framework of time-dependent scheduling theory we may consider the problems of repayment of multiple loans, fire fighting and maintenance assignments. In this book, we will discuss algorithms and complexity issues concerning various time-dependent scheduling problems.
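To make that central assumption concrete, the Python sketch below (an illustration, not from the book) uses linearly deteriorating jobs, with processing time p_j(t) = a_j + b_j * t when a job starts at time t, and brute-forces all orders on a single machine to show that the makespan depends on the job sequence; the job data are invented.

from itertools import permutations

def makespan(order, jobs):
    # Single machine, no idle time: return the completion time of the last job.
    t = 0.0
    for j in order:
        a, b = jobs[j]
        t += a + b * t            # the processing time grows with the start time t
    return t

jobs = [(2.0, 0.1), (5.0, 0.5), (1.0, 0.3), (4.0, 0.2)]      # (a_j, b_j) pairs
best = min(permutations(range(len(jobs))), key=lambda o: makespan(o, jobs))
print(best, round(makespan(best, jobs), 3))                  # the best order found
print(round(makespan(tuple(range(len(jobs))), jobs), 3))     # a worse, arbitrary order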
A collection of surveys and research papers on mathematical software and algorithms. The common thread is that the field of mathematical applications lies on the border between algebra and geometry. Topics include polyhedral geometry, elimination theory, algebraic surfaces, Gröbner bases, triangulations of point sets, and their mutual relationships. This diversity is accompanied by an abundance of available software systems, which often handle only special mathematical aspects. This is why the volume also focuses on solutions to the integration of mathematical software systems. This includes low-level and XML-based high-level communication channels as well as general frameworks for modular systems.