Numbers, Information and Complexity is a collection of about 50 articles in honour of Rudolf Ahlswede. His main areas of research are represented in the three sections, 'Numbers and Combinations', 'Information Theory (Channels and Networks, Combinatorial and Algebraic Coding, Cryptology, with the related fields Data Compression, Entropy Theory, Symbolic Dynamics, Probability and Statistics)', and 'Complexity'. Special attention was paid to the interplay between the fields. Surveys on topics of current interest are included as well as new research results. The book features surveys on combinatorial topics, such as intersection theorems, that are not yet covered in textbooks, several contributions by leading experts in data compression, and discussions of the relations to the natural sciences.
Chaos-based cryptography, which has attracted many researchers over the past decade, is an interdisciplinary field spanning chaos (nonlinear dynamical systems) and cryptography (computer and data security). Chaos' properties, such as randomness and ergodicity, have been shown to be suitable for designing means of data protection. The book gives a thorough description of chaos-based cryptography, covering basic chaos theory, the properties of chaos suitable for cryptography, chaos-based cryptographic techniques, and various secure applications based on chaos. Additionally, it covers both the latest research results and a number of open issues and hot topics. The book is a collection of high-quality chapters contributed by leading experts in the related fields. It embraces a wide variety of aspects of the subject areas and provides a scientifically and scholarly sound treatment of state-of-the-art techniques to students, researchers, academics, law enforcement personnel and IT practitioners who are interested or involved in the study, research, use, design and development of techniques related to chaos-based cryptography.
Bioinspired computation methods such as evolutionary algorithms and ant colony optimization are being applied successfully to complex engineering problems and to problems from combinatorial optimization, and with this comes the requirement to more fully understand the computational complexity of these search heuristics. This is the first textbook covering the most important results achieved in this area. The authors study the computational complexity of bioinspired computation and show how runtime behavior can be analyzed in a rigorous way using some of the best-known combinatorial optimization problems -- minimum spanning trees, shortest paths, maximum matching, covering and scheduling problems. A feature of the book is the separate treatment of single- and multiobjective problems, the latter a domain where the development of the underlying theory seems to be lagging practical successes. This book will be very valuable for teaching courses on bioinspired computation and combinatorial optimization. Researchers will also benefit as the presentation of the theory covers the most important developments in the field over the last 10 years. Finally, with a focus on well-studied combinatorial optimization problems rather than toy problems, the book will also be very valuable for practitioners in this field.
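To give a flavour of the kind of search heuristic whose runtime such analyses address, here is a minimal sketch (not taken from the book) of the classic (1+1) evolutionary algorithm on the toy OneMax function; the function names, bit-string length and iteration cap are illustrative assumptions only.

```python
import random

def one_max(x):
    # OneMax: number of 1-bits; the standard toy fitness function in runtime analysis
    return sum(x)

def one_plus_one_ea(n=50, max_iters=200_000, seed=0):
    # (1+1) EA: flip each bit independently with probability 1/n,
    # accept the offspring if its fitness is not worse than the parent's.
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if one_max(x) == n:
            return t                      # iterations until the optimum was found
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        if one_max(y) >= one_max(x):
            x = y
    return max_iters                      # optimum not reached within the cap

print(one_plus_one_ea())
```

Rigorous analyses of exactly this kind of process (for instance, the expected O(n log n) optimization time of the (1+1) EA on OneMax) are the starting point for the combinatorial optimization problems treated in the book.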
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
The research and its outcomes presented in this collection focus on various aspects of high-performance computing (HPC) software and its development which is confronted with various challenges as today's supercomputer technology heads towards exascale computing. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The collection thereby highlights pioneering research findings as well as innovative concepts in exascale software development that have been conducted under the umbrella of the priority programme "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG) and that have been presented at the SPPEXA Symposium, Jan 25-27 2016, in Munich. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of this book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
This thesis introduces a new integrated algorithm for the detection of lane-level irregular driving. To date, there has been very little improvement in the ability to detect lane-level irregular driving styles, mainly due to a lack of high-performance positioning techniques and suitable driving pattern recognition algorithms. The algorithm combines data from the Global Positioning System (GPS), Inertial Measurement Unit (IMU) and lane information using advanced filtering methods. The vehicle state within a lane is estimated using a Particle Filter (PF) and an Extended Kalman Filter (EKF). The state information is then used within a novel Fuzzy Inference System (FIS) based algorithm to detect different types of irregular driving. Simulation and field trial results are used to demonstrate the accuracy and reliability of the proposed irregular driving detection method.
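The thesis's actual PF/EKF pipeline is not reproduced here, but the basic idea of fusing an IMU-propagated estimate with a GPS fix can be illustrated by a one-dimensional Kalman-filter sketch; the variable names and noise variances below are illustrative assumptions, not values from the thesis.

```python
def kalman_step(x, P, u, q, z, r):
    # Predict: propagate the lateral-offset estimate with an IMU-derived increment u
    x_pred = x + u
    P_pred = P + q                 # q: process-noise variance of the IMU propagation
    # Update: correct the prediction with a GPS lateral-offset measurement z
    K = P_pred / (P_pred + r)      # Kalman gain (r: GPS measurement-noise variance)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Hypothetical usage: start at the lane centre and fuse one IMU/GPS pair
x, P = 0.0, 1.0
x, P = kalman_step(x, P, u=0.12, q=0.05, z=0.30, r=0.25)
print(x, P)
```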
The 14 contributed chapters in this book survey the most recent developments in high-performance algorithms for NGS data, offering fundamental insights and technical information specifically on indexing, compression and storage; error correction; alignment; and assembly. The book will be of value to researchers, practitioners and students engaged with bioinformatics, computer science, mathematics, statistics and life sciences.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses the societal effects of black-box versus transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
A collection of surveys and research papers on mathematical software and algorithms. The common thread is that the field of mathematical applications lies on the border between algebra and geometry. Topics include polyhedral geometry, elimination theory, algebraic surfaces, Gröbner bases, triangulations of point sets, and their mutual relationships. This diversity is accompanied by an abundance of available software systems, which often handle only specific mathematical aspects. This is why the volume also focuses on solutions to the integration of mathematical software systems, including low-level and XML-based high-level communication channels as well as general frameworks for modular systems.
This thesis introduces novel and significant results regarding the analysis and synthesis of positive systems, especially under l1 and L1 performance. It describes stability analysis, controller synthesis, and bounding positivity-preserving observer and filtering design for a variety of both discrete and continuous positive systems. It subsequently derives computationally efficient solutions based on linear programming in terms of matrix inequalities, as well as a number of analytical solutions obtained for special cases. The thesis applies a range of novel approaches and fundamental techniques to the further study of positive systems, thus contributing significantly to the theory of positive systems, a "hot topic" in the field of control.
This book contains selected papers from the International Conference on Extreme Learning Machine (ELM) 2017, held in Yantai, China, October 4-7, 2017. The book covers theories, algorithms and applications of ELM. Extreme Learning Machines (ELMs) aim to enable pervasive learning and pervasive intelligence. As advocated by ELM theories, it is exciting to see the convergence of machine learning and biological learning from a long-term point of view. ELM may be one of the fundamental 'learning particles' filling the gaps between machine learning and biological learning (whose activation functions may even be unknown). ELM represents a suite of (machine and biological) learning techniques in which hidden neurons need not be tuned: they may be inherited from their ancestors or randomly generated. ELM learning theories show that effective learning algorithms can be derived from randomly generated hidden neurons (biological neurons, artificial neurons, wavelets, Fourier series, etc.) as long as they are nonlinear and piecewise continuous, independently of training data and application environments. Increasingly, evidence from neuroscience suggests that similar principles apply in biological learning systems. ELM theories and algorithms argue that "random hidden neurons" capture an essential aspect of biological learning mechanisms, as well as the intuitive sense that the efficiency of biological learning need not rely on the computing power of neurons. ELM theories thus hint at possible reasons why the brain is more intelligent and effective than current computers. The conference provided a forum for academics, researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the ELM technique and brain learning. The book gives readers a glimpse of the most recent advances in ELM.
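As a rough illustration of the "untuned random hidden neurons" idea, the following is a minimal single-hidden-layer ELM sketch (not code from the book; the hidden-layer size, tanh activation and regression target are assumptions made for the example):

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    # Random, untouched hidden-layer parameters: the core ELM assumption
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never tuned)
    b = rng.normal(size=n_hidden)                 # random biases (never tuned)
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical usage: fit a noisy sine curve
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).normal(size=X.shape)
W, b, beta = elm_train(X, T)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Only the output weights are computed from data; the hidden layer stays exactly as randomly generated, which is the point the ELM theories formalize.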
This volume contains the invited and regular papers presented at TCS 2010, the 6th IFIP International Conference on Theoretical Computer Science, organised by IFIP Technical Committee 1 (Foundations of Computer Science) and IFIP WG 2.2 (Formal Descriptions of Programming Concepts) in association with SIGACT and EATCS. TCS 2010 was part of the World Computer Congress held in Brisbane, Australia, during September 20-23, 2010. TCS 2010 is composed of two main areas: (A) Algorithms, Complexity and Models of Computation, and (B) Logic, Semantics, Specification and Verification. The selection process led to the acceptance of 23 papers out of 39 submissions, each of which was reviewed by three Programme Committee members. The Programme Committee discussion was held electronically using Easychair. The invited speakers at TCS 2010 are: Rob van Glabbeek (NICTA, Australia), Bart Jacobs (Nijmegen, The Netherlands), Catuscia Palamidessi (INRIA and LIX, Paris, France), and Sabina Rossi (Venice, Italy). James Harland (Australia) and Barry Jay (Australia) acted as TCS 2010 Chairs. We take this occasion to thank the members of the Programme Committees and the external reviewers for the professional and timely work; the conference Chairs for their support; the invited speakers for their scholarly contribution; and of course the authors for submitting their work to TCS 2010.
This book constitutes the refereed proceedings of the 27th IFIP TC 11 International Information Security Conference, SEC 2012, held in Heraklion, Crete, Greece, in June 2012. The 42 revised full papers presented together with 11 short papers were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on attacks and malicious code, security architectures, system security, access control, database security, privacy attitudes and properties, social networks and social engineering, applied cryptography, anonymity and trust, usable security, security and trust models, security economics, and authentication and delegation.
The book is a collection of invited papers on Computational Intelligence for Privacy and Security. The majority of the chapters are extended versions of works presented at the special session on Computational Intelligence for Privacy and Security of the International Joint Conference on Neural Networks (IJCNN 2010), held in July 2010 in Barcelona, Spain. It provides an overview of the most recent advances in the Computational Intelligence techniques being developed for Privacy and Security. The book will be of interest to researchers in industry and academia, and to post-graduate students interested in the latest advances and developments in the field of Computational Intelligence for Privacy and Security.
Data mining is a very active research area with many successful real-world applications. It consists of a set of concepts and methods used to extract interesting or useful knowledge (or patterns) from real-world datasets, providing valuable support for decision making in industry, business, government, and science. Although there are already many types of data mining algorithms available in the literature, it is still difficult for users to choose the best possible data mining algorithm for their particular data mining problem. In addition, data mining algorithms have been manually designed; therefore they incorporate human biases and preferences. This book proposes a new approach to the design of data mining algorithms. Instead of relying on the slow and ad hoc process of manual algorithm design, this book proposes systematically automating the design of data mining algorithms with an evolutionary computation approach. More precisely, we propose a genetic programming system (a type of evolutionary computation method that evolves computer programs) to automate the design of rule induction algorithms, a type of classification method that discovers a set of classification rules from data. We focus on genetic programming in this book because it is the paradigmatic type of machine learning method for automating the generation of programs and because it has the advantage of performing a global search in the space of candidate solutions (data mining algorithms in our case), but in principle other types of search methods for this task could be investigated in the future.
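The book's system evolves rule induction algorithms rather than regression models, and that system is not reproduced here; the following toy, mutation-only genetic programming sketch for symbolic regression only illustrates the general flavour of evolving programs by fitness-based selection. All function sets, constants and parameters are assumptions made for the example.

```python
import random, operator

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    # Grow a random expression tree over the function and terminal sets
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f, arity = random.choice(FUNCS)
    return (f, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    f, children = tree
    return f(*(evaluate(c, x) for c in children))

def error(tree, data):
    # Sum of squared errors on the training data (lower is fitter)
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

def mutate(tree, depth=3):
    # Crude mutation: occasionally replace the candidate with a fresh random tree
    return random_tree(depth) if random.random() < 0.3 else tree

def evolve(data, pop_size=60, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: error(t, data))      # selection by fitness
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda t: error(t, data))

# Hypothetical target: y = x*x + 1
data = [(x, x * x + 1.0) for x in range(-5, 6)]
best = evolve(data)
print(error(best, data))
```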
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different scenarios of experimental analysis. The first part overviews the main issues in the experimental analysis of algorithms, and discusses the experimental cycle of algorithm development; the second part treats the characterization by means of statistical distributions of algorithm performance in terms of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment of optimization algorithms and, consequently, their design.
This book investigates the susceptibility of intrinsic physically unclonable function (PUF) implementations on reconfigurable hardware to optical semi-invasive attacks from the chip backside. It explores different classes of optical attacks, particularly photonic emission analysis, laser fault injection, and optical contactless probing. By applying these techniques, the book demonstrates that the secrets generated by a PUF can be predicted, manipulated or directly probed without affecting the behavior of the PUF. It subsequently discusses the cost and feasibility of launching such attacks against the very latest hardware technologies in a real scenario. The author discusses why PUFs are not tamper-evident in their current configuration, and therefore, PUFs alone cannot raise the security level of key storage. The author then reviews the potential and already implemented countermeasures, which can remedy PUFs' security-related shortcomings and make them resistant to optical side-channel and optical fault attacks. Lastly, by making selected modifications to the functionality of an existing PUF architecture, the book presents a prototype tamper-evident sensor for detecting optical contactless probing attempts.
The authors give a detailed summary of the fundamentals and the historical background of digital communication. This includes an overview of the encoding principles and algorithms for textual information, audio information, as well as images, graphics, and video on the Internet. Furthermore, the fundamentals of computer networking, digital security and cryptography are covered. Thus, the book provides a well-founded introduction to the communication technology of computer networks, the Internet and the WWW. Numerous pictures and images, a subject index, a detailed list of historical personalities and a glossary for each chapter increase the practical benefit of this book, which is well suited for undergraduate students as well as for working practitioners.
Articles in this book examine various materials and how to determine directly the limit state of a structure, in the sense of limit analysis and shakedown analysis. Apart from classical applications in mechanical and civil engineering contexts, the book reports on the emerging field of material design beyond the elastic limit, which has further industrial design and technological applications. Readers will discover that "Direct Methods" and the techniques presented here can in fact be used to numerically estimate the strength of structured materials such as composites or nano-materials, which represent fruitful fields of future applications. Leading researchers outline the latest computational tools and optimization techniques and explore the possibility of obtaining information on the limit state of a structure whose post-elastic loading path and constitutive behavior are not well defined or well known. Readers will discover how Direct Methods allow rapid and direct access to requested information in mathematically constructive manners without cumbersome step-by-step computation. Both researchers already interested or involved in the field and practical engineers who want to have a panorama of modern methods for structural safety assessment will find this book valuable. It provides the reader with the latest developments and a significant amount of references on the topic.
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
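By way of a small, self-contained illustration of the parity-check viewpoint underlying LDPC codes (the matrix below is a made-up toy example, not one of the book's constructions), a word is a codeword exactly when its syndrome over GF(2) is zero:

```python
import numpy as np

# Toy sparse parity-check matrix H for a length-6 binary code (illustrative only)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(word):
    # Syndrome s = H * word over GF(2); all-zero means every parity check is satisfied
    return (H @ np.asarray(word)) % 2

print(syndrome([0, 0, 0, 0, 0, 0]))  # the all-zero word is always a codeword
print(syndrome([1, 0, 0, 0, 0, 0]))  # a single-bit word violates two of the checks
```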
Mixed-signal embedded microcontrollers are commonly used to integrate the analog components needed to control non-digital electronic systems. They are used in automatically controlled devices and products, such as automobile engine control systems, wireless remote controllers, office machines, home appliances, power tools, and toys. Microcontrollers make it economical to digitally control even more devices and processes by reducing size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices. In many undergraduate and post-graduate courses, teaching mixed-signal microcontrollers and their use for project work has become compulsory. Students face many difficulties when they have to interface a microcontroller with the electronics they deal with. This book addresses some issues of interfacing microcontrollers and describes some project implementations with the Silicon Lab C8051F020 mixed-signal microcontroller. The intended readers are college and university students specializing in electronics, computer systems engineering, and electrical and electronics engineering; researchers involved with electronics-based systems; practitioners; technicians; and, in general, anybody interested in microcontroller-based projects.
This book provides the most up-to-date information on how membrane lipids mediate protein signaling, drawn from studies carried out in animal and plant cells. Some chapters go further and examine these protein-lipid interactions at the structural level. The book begins with a literature review of investigations associated with sphingolipids, followed by studies that describe the role of phosphoinositides in signaling, and closes with the function of other key lipids in signaling at the plasma membrane and intracellular organelles.
"Optimal Design of Distributed Control and Embedded Systems "focuses on the design of special control and scheduling algorithms based on system structural properties as well as on analysis of the influence of induced time-delay on systems performances. It treats the optimal design of distributed and embedded control systems (DCESs) with respect to communication and calculation-resource constraints, quantization aspects, and potential time-delays induced by the associated communication and calculation model. Particular emphasis is put on optimal control signal scheduling based on the system state. In order to render this complex optimization problem feasible in real time, a time decomposition is based on periodicity induced by the static scheduling is operated. The authors present a co-design approach which subsumes the synthesis of the optimal control laws and the generation of an optimal schedule of control signals on real-time networks as well as the execution of control tasks on a single processor. The authors also operate a control structure modification or a control switching based on a thorough analysis of the influence of the induced time-delay system influence on stability and system performance in order to optimize DCES performance in case of calculation and communication resource limitations. Although the richness and variety of classes of DCES preclude a completely comprehensive treatment or a single best method of approaching them all, this co-design approach has the best chance of rendering this problem feasible and finding the optimal or some sub-optimal solution. The text is rounded out with references to such applications as car suspension and unmanned vehicles. "Optimal Design of Distributed Control and Embedded Systems" will be of most interest to academic researchers working on the mathematical theory of DCES but the wide range of environments in which they are used also promotes the relevance of the text for control practitioners working in the avionics, automotive, energy-production, space exploration and many other industries."
In the research area of computer science, practitioners are constantly searching for faster platforms that deliver pertinent results. With analytics that span environmental development to computer hardware emulation, problem-solving algorithms are in high demand. The Field-Programmable Gate Array (FPGA) is a promising computing platform that can be significantly faster for some applications and can be applied to a variety of fields. FPGA Algorithms and Applications in the IoT, AI, and High-Performance Computing provides emerging research exploring the theoretical and practical aspects of computable algorithms and applications within robotics and electronics development. Featuring coverage of a broad range of topics such as neuroscience, bioinformatics, and artificial intelligence, this book is ideally designed for computer science specialists, researchers, professors, and students seeking current research on cognitive analytics and advanced computing.