This new book, the first of its kind, examines the use of algorithmic techniques to compress random and non-random sequential strings found in chains of polymers. The book is an introduction to algorithmic complexity. Examples taken from current research in the polymer sciences illustrate the compression of like-natured properties found along a polymer chain. Both theoretical and applied aspects of algorithmic compression are reviewed. A description of the types of polymers and their uses is followed by a chapter on various types of compression systems that can be used to compress polymer chains into manageable units. The work is intended for graduate and postgraduate university students in the physical sciences and engineering.
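A toy sketch (ours, not the book's) makes the compressibility idea concrete: run-length encoding, one of the simplest compression schemes, shrinks a regular sequence of repeated symbols dramatically while leaving an irregular, high-complexity sequence almost unchanged.

```python
from itertools import groupby

def run_length_encode(s: str):
    """Compress a string into (symbol, run-length) pairs."""
    return [(symbol, len(list(run))) for symbol, run in groupby(s)]

# A highly regular "polymer chain" compresses to a handful of pairs...
regular = "AAAAABBBBBCCCCC"
print(run_length_encode(regular))          # [('A', 5), ('B', 5), ('C', 5)]

# ...while an irregular chain barely compresses at all, echoing the idea
# that high algorithmic complexity means low compressibility.
irregular = "ABCABCBACBACABB"
print(len(run_length_encode(irregular)))   # 14 pairs for 15 symbols
```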
This book focuses on the implementation, evaluation and application of DNA/RNA-based genetic algorithms in connection with neural network modeling, fuzzy control, the Q-learning algorithm and CNN deep learning classifiers. It presents several DNA/RNA-based genetic algorithms and their modifications, which are tested using benchmarks, as well as detailed information on the implementation steps and program code. In addition to single-objective optimization, genetic algorithms are also used here to solve multi-objective optimization problems for neural network modeling, fuzzy control, model predictive control and PID control. In closing, new topics such as Q-learning and CNN are introduced. The book offers a valuable reference guide for researchers and designers in system modeling and control, and for senior undergraduate and graduate students at colleges and universities.
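The blurb reproduces no program code; as a rough, generic illustration of the kind of genetic algorithm the book builds on, here is a minimal sketch (the OneMax benchmark, population size and mutation rate are our invented choices, not the book's):

```python
import random

def fitness(bits):
    """Benchmark fitness: OneMax, the number of 1-bits (a standard GA test)."""
    return sum(bits)

def genetic_algorithm(n=30, pop_size=20, generations=100):
    p_mut = 1.0 / n   # conventional per-bit mutation rate
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Binary tournament selection: the fitter of two random picks wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(fitness(best))   # typically at or near the optimum of 30
```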
The Design and Analysis of Computer Algorithms introduces the basic data structures and programming techniques often used in efficient algorithms. It covers the use of lists, push-down stacks, queues, trees, and graphs.
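As a minimal modern illustration of two of the structures covered (the book itself long predates Python), a push-down stack and a queue:

```python
from collections import deque

# Push-down stack: last in, first out (LIFO).
stack = []
stack.append(1); stack.append(2); stack.append(3)
print(stack.pop())      # 3 -- the most recently pushed item

# Queue: first in, first out (FIFO); deque gives O(1) operations at both ends.
queue = deque()
queue.append(1); queue.append(2); queue.append(3)
print(queue.popleft())  # 1 -- the earliest enqueued item
```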
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every researcher and student of complex networks. This book is about specifying, classifying, designing, and implementing mostly sequential, and also parallel and distributed, algorithms that can be used to analyze the static properties of complex networks. Providing a focused scope consisting of graph theory and algorithms for complex networks, the book identifies and describes a repertoire of algorithms that may be useful for any complex network. The book:
- Provides the basic background in terms of graph theory
- Supplies a survey of the key algorithms for the analysis of complex networks
- Presents case studies of complex networks that illustrate the implementation of algorithms in real-world networks, including protein interaction networks, social networks, and computer networks

Requiring only a basic background in discrete mathematics and algorithms, the book supplies guidance that is accessible to beginning researchers and students with little background in complex networks. To help beginners in the field, most of the algorithms are provided in ready-to-be-executed form. While it is not a primary textbook, the author has included pedagogical features such as learning objectives, end-of-chapter summaries, and review questions.
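As a generic taste of the static-property analysis described above (this sketch is ours and reproduces none of the book's algorithms), computing node degrees and local clustering coefficients for a toy adjacency-list graph:

```python
# Toy undirected graph as an adjacency list (an invented example network).
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

# Degree of each node: a basic static property of a complex network.
degrees = {v: len(nbrs) for v, nbrs in graph.items()}
print(degrees)  # {'A': 2, 'B': 3, 'C': 2, 'D': 1}

def local_clustering(v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(graph[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

print(local_clustering("B"))  # 1/3: of B's neighbour pairs, only (A, C) is connected
```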
Many machine learning tasks involve solving complex optimization problems, such as working on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning applies evolutionary algorithms to address optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors solid theoretical approaches. Recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
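A flavour of the running-time tools in Part II is given by a classic result from this literature (standard background, not quoted in the blurb): the (1+1) evolutionary algorithm, which flips each bit with probability 1/n and keeps the offspring if it is no worse, optimizes OneMax in expected time O(n log n). The fitness-level argument in brief:

```latex
% Fitness-level argument for the (1+1) EA on $\mathrm{OneMax}(x) = \sum_i x_i$.
% With $i$ zero-bits remaining, flipping exactly one of them (and no other bit)
% improves the fitness; this happens with probability at least
% $p_i \ge i \cdot \tfrac{1}{n}\bigl(1 - \tfrac{1}{n}\bigr)^{n-1} \ge \tfrac{i}{en}$.
% Summing the expected waiting times over all fitness levels:
\[
  \mathbb{E}[T] \;\le\; \sum_{i=1}^{n} \frac{1}{p_i}
                \;\le\; \sum_{i=1}^{n} \frac{e\,n}{i}
                \;=\; e\,n\,H_n \;=\; O(n \log n).
\]
```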
This work presents the latest developments in the field of computational intelligence for advancing Big Data and Cloud Computing applications in medical diagnosis. As a forum for academics and professionals, it covers state-of-the-art research challenges and issues in digital information and knowledge management, along with the concerns and solutions adopted in these fields.
For 60 years the International Federation for Information Processing (IFIP) has been advancing research in Information and Communication Technology (ICT). This book looks into both past experiences and future perspectives using the core of IFIP's competence, its Technical Committees (TCs) and Working Groups (WGs). Soon after IFIP was founded, it established TCs and related WGs to foster the exchange and development of the scientific and technical aspects of information processing. IFIP TCs are as diverse as the different aspects of information processing, but they share the following aims:
- To establish and maintain liaison with national and international organizations with allied interests and to foster cooperative action, collaborative research, and information exchange.
- To identify subjects and priorities for research, to stimulate theoretical work on fundamental issues, and to foster fundamental research which will underpin future development.
- To provide a forum for professionals with a view to promoting the study, collection, exchange, and dissemination of ideas, information, and research findings and thereby to promote the state of the art.
- To seek and use the most effective ways of disseminating information about IFIP's work, including the organization of conferences, workshops and symposia and the timely production of relevant publications.
- To have special regard for the needs of developing countries and to seek practicable ways of working with them.
- To encourage communication and to promote interaction between users, practitioners, and researchers.
- To foster interdisciplinary work and, in particular, to collaborate with other Technical Committees and Working Groups.

The 17 contributions in this book describe the scientific, technical, and further work in TCs and WGs and in many cases also assess the future consequences of the work's results. These contributions explore the developments of IFIP and the ICT profession now and over the next 60 years. The contributions are arranged per TC and conclude with the chapter on the IFIP code of ethics and conduct.
Hardware-intrinsic security is a young field dealing with secure secret key storage. By generating the secret keys from the intrinsic properties of the silicon, e.g., from intrinsic Physical Unclonable Functions (PUFs), no permanent secret key storage is required anymore, and the key is only present in the device for a minimal amount of time. The field is extending to hardware-based security primitives and protocols such as block ciphers and stream ciphers entangled with the hardware, thus improving IC security. At the application level, meanwhile, there is growing interest in hardware security for RFID systems and the necessary accompanying system architectures. This book brings together contributions from researchers and practitioners in academia and industry, an interdisciplinary group with backgrounds in physics, mathematics, cryptography, coding theory and processor theory. It will serve as important background material for students and practitioners, and will stimulate much further research and development.
Software has become a key component of contemporary life and algorithms that rank, classify, or recommend are everywhere. Building on the philosophy of Gilbert Simondon and the cultural techniques tradition, this book examines the constructive and cumulative character of software and retraces the historical trajectories of a series of algorithmic techniques that have become the building blocks for contemporary practices of ordering. Developed in opposition to centuries of library tradition, these techniques instantiate dynamic, perspectivist, and interested forms of knowing. Embedded in technical infrastructures and economic logics, they have become engines of order that transform how we arrange information, ideas, and people.
This book provides a systematic and comparative description of the vast number of research issues related to the quality of data and information. It does so by delivering a sound, integrated and comprehensive overview of the state of the art and future development of data and information quality in databases and information systems. To this end, it presents an extensive description of the techniques that constitute the core of data and information quality research, including record linkage (also called object identification), data integration, error localization and correction, and examines the related techniques in a comprehensive and original methodological framework. Quality dimension definitions and adopted models are also analyzed in detail, and differences between the proposed solutions are highlighted and discussed. Furthermore, while systematically describing data and information quality as an autonomous research area, paradigms and influences deriving from other areas, such as probability theory, statistical data analysis, data mining, knowledge representation, and machine learning are also included. Last but not least, the book also highlights very practical solutions, such as methodologies, benchmarks for the most effective techniques, case studies, and examples. The book has been written primarily for researchers in the fields of databases and information management or in natural sciences who are interested in investigating properties of data and information that have an impact on the quality of experiments, processes and on real life. The material presented is also sufficiently self-contained for master's or PhD-level courses, and it covers all the fundamentals and topics without the need for other textbooks. Data and information system administrators and practitioners, who deal with systems exposed to data-quality issues and as a result need a systematization of the field and practical methods in the area, will also benefit from the combination of concrete practical approaches with sound theoretical formalisms.
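Record linkage, one of the core techniques named above, can be sketched in a few lines (the records, the similarity measure and the 0.7 threshold are all invented for illustration; real systems use far richer comparison and blocking strategies):

```python
from difflib import SequenceMatcher

# Two small "databases" describing overlapping real-world entities.
db_a = ["John Smith, 12 Main St", "Mary Jones, 4 Oak Ave"]
db_b = ["Jon Smith, 12 Main Street", "M. Jones, 4 Oak Avenue", "Ann Lee, 9 Elm Rd"]

def similarity(r1, r2):
    """Normalized edit-based similarity between two record strings."""
    return SequenceMatcher(None, r1.lower(), r2.lower()).ratio()

THRESHOLD = 0.7  # invented cutoff: above it, declare the records a match

for a in db_a:
    for b in db_b:
        s = similarity(a, b)
        if s >= THRESHOLD:
            print(f"match ({s:.2f}): {a!r} <-> {b!r}")
```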
Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. Actually, we observe a real technical boom connected with achievements in nanoelectronics. It results in the development of very complex integrated circuits, particularly the field programmable logic devices (FPLD). Up-to-date FPLD chips are so huge that a single chip is enough to implement a really complex digital system including a datapath and a control unit. Because of the extreme complexity of modern microchips, it is very important to develop effective design methods oriented on particular properties of logic elements. The development of digital systems with FPLD microchips is not possible without the use of different hardware description languages (HDL), such as VHDL and Verilog. Different computer-aided design (CAD) tools are widely used to develop digital system hardware. As the majority of researchers point out, the design process is now very similar to the process of program development. It allows a researcher to pay more attention to some specific problems where there are no standard formal methods of solution. But application of all these achievements does not guarantee per se the development of a competitive electronic product, especially in an acceptable time-to-market. Solving this problem is possible only if a researcher possesses fundamental knowledge of the design process and knows exactly the mode of operation of the industrial CAD tools in use. As is known, any digital system can be represented as a composition of a datapath and a control unit.
Descriptive complexity theory establishes a connection between the computational complexity of algorithmic problems (the computational resources required to solve the problems) and their descriptive complexity (the language resources required to describe the problems). This groundbreaking book approaches descriptive complexity from the angle of modern structural graph theory, specifically graph minor theory. It develops a 'definable structure theory' concerned with the logical definability of graph theoretic concepts such as tree decompositions and embeddings. The first part starts with an introduction to the background, from logic, complexity, and graph theory, and develops the theory up to first applications in descriptive complexity theory and graph isomorphism testing. It may serve as the basis for a graduate-level course. The second part is more advanced and mainly devoted to the proof of a single, previously unpublished theorem: properties of graphs with excluded minors are decidable in polynomial time if, and only if, they are definable in fixed-point logic with counting.
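The connection can be made concrete with the founding result of descriptive complexity, Fagin's theorem (a standard fact, not quoted in this blurb): a property of finite structures is decidable in NP exactly when it is expressible in existential second-order logic. For example, 3-colourability of a graph with edge relation $E$ is in NP and, matching the theorem, is definable by an existential second-order sentence:

```latex
% Fagin's theorem: existential second-order logic captures NP.
\[
  \mathrm{NP} \;=\; \exists\mathrm{SO}
\]
% 3-colourability: there exist colour classes R, G, B such that every vertex
% gets a colour and no edge joins two vertices sharing a colour.
\[
  \exists R\,\exists G\,\exists B\;\;\forall u\,\forall v\,
  \Bigl( (Ru \lor Gu \lor Bu) \;\land\;
  \bigl( Euv \rightarrow \neg\bigl((Ru \land Rv) \lor (Gu \land Gv) \lor (Bu \land Bv)\bigr) \bigr) \Bigr)
\]
```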
This comprehensive textbook presents a clean and coherent account of the most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments of the field, including application of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discussing a certain algorithmic paradigm. The material covered in this part can be used for an introductory course on fixed-parameter tractability. Part II discusses more advanced and specialized algorithmic ideas, bringing the reader to the cutting edge of current research. Part III presents complexity results and lower bounds, giving negative evidence by way of W[1]-hardness, the Exponential Time Hypothesis, and kernelization lower bounds. All the results and concepts are introduced at a level accessible to graduate students and advanced undergraduate students. Every chapter is accompanied by exercises, many with hints, while the bibliographic notes point to original publications and related work.
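The branching paradigm from Part I can be sketched on the area's canonical example, Vertex Cover (a generic illustration in our own code, not an excerpt from the book): for any edge, one of its endpoints must join the cover, so a search tree of depth at most k decides the problem in O(2^k · |E|) time, i.e. fixed-parameter tractable in k.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of size <= k.

    Bounded search tree: pick any remaining edge (u, v); some endpoint must be
    in the cover, so branch on u and on v. Depth <= k gives O(2^k * |E|) time.
    """
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]
    rest_u = [(a, b) for a, b in edges if u not in (a, b)]  # take u into cover
    rest_v = [(a, b) for a, b in edges if v not in (a, b)]  # take v into cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# A 4-cycle needs exactly 2 vertices to cover all of its edges.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(vertex_cover(cycle, 2))  # True
print(vertex_cover(cycle, 1))  # False
```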
In recent years, popular media have inundated audiences with sensationalised headlines recounting data breaches, new forms of surveillance and other dangers of our digital age. Despite their regularity, such accounts treat each case as unprecedented and unique. This book proposes a radical rethinking of the history, present and future of our relations with the digital, spatial technologies that increasingly mediate our everyday lives. From smartphones to surveillance cameras, to navigational satellites, these new technologies offer visions of integrated, smooth and efficient societies, even as they directly conflict with the ways users experience them. Recognising the potential for both control and liberation, the authors argue against both acquiescence to and rejection of these technologies. Through intentional use of the very systems that monitor them, activists from Charlottesville to Hong Kong are subverting, resisting and repurposing geographic technologies. Using examples as varied as writings on the first telephones to the experiences of a feminist collective for migrant women in Spain, the authors present a revolution of everyday technologies. In the face of the seemingly inevitable dominance of corporate interests, these technologies allow us to create new spaces of affinity, and a new politics of change.
When it comes to artificial intelligence, we either hear of a paradise on earth or of our imminent extinction. It's time we stand face-to-digital-face with the true powers and limitations of the algorithms that already automate important decisions in healthcare, transportation, crime, and commerce. Hello World is indispensable preparation for the moral quandaries of a world run by code, and with the unfailingly entertaining Hannah Fry as our guide, we'll be discussing these issues long after the last page is turned.
Written in easy-to-understand language, this self-explanatory guide introduces the fundamentals of finite element methods and their application to differential equations. Beginning with a brief introduction to Sobolev spaces and elliptic scalar problems, the text progresses through an explanation of finite element spaces and estimates for the interpolation error. The concepts of finite element methods for scalar parabolic problems, object-oriented finite element algorithms, efficient implementation techniques, and high-dimensional parabolic problems are presented in different chapters. Recent advances in finite element methods, including non-conforming finite elements for boundary value problems of higher order and approaches for solving differential equations in high-dimensional domains, are explained for the benefit of the reader. Numerous solved examples and mathematical theorems are interspersed throughout the text for enhanced learning.
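A minimal sketch (ours, not the book's; it assumes a uniform mesh, homogeneous Dirichlet boundary conditions and the simplest quadrature) shows the finite element method on the 1D model problem -u'' = f:

```python
import numpy as np

# Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using piecewise-linear
# finite elements on a uniform mesh of n interior nodes.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Stiffness matrix for linear elements: tridiagonal (-1, 2, -1) / h.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Load vector via the simplest quadrature: f(x_i) * h.
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi x)
b = f(x) * h

u = np.linalg.solve(A, b)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```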
This fourth edition of Robert Sedgewick and Kevin Wayne's Algorithms is the leading textbook on algorithms today and is widely used in colleges and universities worldwide. This book surveys the most important computer algorithms currently in use and provides a full treatment of data structures and algorithms for sorting, searching, graph processing, and string processing, including fifty algorithms every programmer should know. In this edition, new Java implementations are written in an accessible modular programming style, where all of the code is exposed to the reader and ready to use. The algorithms in this book represent a body of knowledge developed over the last 50 years that has become indispensable, not just for professional programmers and computer science students but for any student with interests in science, mathematics, and engineering, not to mention students who use computation in the liberal arts. The companion web site, algs4.cs.princeton.edu, contains:
- An online synopsis
- Full Java implementations
- Test data
- Exercises and answers
- Dynamic visualizations
- Lecture slides
- Programming assignments with checklists
- Links to related material

The MOOC related to this book is accessible via the "Online Course" link at algs4.cs.princeton.edu. The course offers more than 100 video lecture segments that are integrated with the text, extensive online assessments, and the large-scale discussion forums that have proven so valuable. Offered each fall and spring, this course regularly attracts tens of thousands of registrants. Robert Sedgewick and Kevin Wayne are developing a modern approach to disseminating knowledge that fully embraces technology, enabling people all around the world to discover new ways of learning and teaching. By integrating their textbook, online content, and MOOC, all at the state of the art, they have built a unique resource that greatly expands the breadth and depth of the educational experience.
In machine learning applications, practitioners must take into account the costs associated with an algorithm, from the cost of acquiring data during training to the costs incurred when the system must make predictions for new samples.
Cost-Sensitive Machine Learning is one of the first books to provide an overview of the current research efforts and problems in this area. It discusses real-world applications that incorporate the cost of learning into the modeling process. The first part of the book presents the theoretical underpinnings of cost-sensitive machine learning. It describes well-established machine learning approaches for reducing data acquisition costs during training as well as approaches for reducing costs when systems must make predictions for new samples. The second part covers real-world applications that effectively trade off different types of costs. These applications not only use traditional machine learning approaches, but they also incorporate cutting-edge research that advances beyond the constraining assumptions by analyzing the application needs from first principles. Spurring further research on several open problems, this volume highlights the often implicit assumptions in machine learning techniques that were not fully understood in the past. The book also illustrates the commercial importance of cost-sensitive machine learning through its coverage of the rapid application developments made by leading companies and academic research labs.
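One standard device from this literature (general background, not necessarily this book's formulation) folds misclassification costs into the decision threshold: if a false positive costs C_FP and a false negative costs C_FN, expected cost is minimized by predicting positive whenever the estimated probability exceeds C_FP / (C_FP + C_FN). A sketch with invented costs:

```python
def cost_sensitive_decision(p_positive, cost_fp, cost_fn):
    """Predict the class that minimizes expected misclassification cost.

    Expected cost of predicting positive: (1 - p) * cost_fp
    Expected cost of predicting negative:       p * cost_fn
    So predict positive iff p > cost_fp / (cost_fp + cost_fn).
    """
    threshold = cost_fp / (cost_fp + cost_fn)
    return "positive" if p_positive > threshold else "negative"

# Invented example: missing a disease (false negative) is 9x costlier
# than a false alarm, so the threshold drops from 0.5 to 0.1.
print(cost_sensitive_decision(p_positive=0.2, cost_fp=1.0, cost_fn=9.0))  # positive
print(cost_sensitive_decision(p_positive=0.2, cost_fp=1.0, cost_fn=1.0))  # negative
```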
Algorithms and Theory of Computation Handbook, Second Edition: General Concepts and Techniques provides an up-to-date compendium of fundamental computer science topics and techniques. It also illustrates how the topics and techniques come together to deliver efficient solutions to important practical problems. Along with updating and revising many of the existing chapters, this second edition contains four new chapters that cover external memory and parameterized algorithms as well as computational number theory and algorithmic coding theory. This best-selling handbook continues to help computer professionals and engineers find significant information on various algorithmic topics. The expert contributors clearly define the terminology, present basic results and techniques, and offer a number of current references to the in-depth literature. They also provide a glimpse of the major research issues concerning the relevant topics.
Proofs play a central role in advanced mathematics and theoretical computer science, yet many students struggle the first time they take a course in which proofs play a significant role. This bestselling text's third edition helps students transition from solving problems to proving theorems by teaching them the techniques needed to read and write proofs. Featuring over 150 new exercises and a new chapter on number theory, this new edition introduces students to the world of advanced mathematics through the mastery of proofs. The book begins with the basic concepts of logic and set theory to familiarize students with the language of mathematics and how it is interpreted. These concepts are used as the basis for an analysis of techniques that can be used to build up complex proofs step by step, using detailed 'scratch work' sections to expose the machinery of proofs about numbers, sets, relations, and functions. Assuming no background beyond standard high school mathematics, this book will be useful to anyone interested in logic and proofs: computer scientists, philosophers, linguists, and, of course, mathematicians.
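The step-by-step construction the book teaches can be glimpsed in a small worked proof (our example, not the book's): induction showing that the first n odd numbers sum to n².

```latex
\textbf{Claim.} For every $n \ge 1$: $\sum_{k=1}^{n} (2k - 1) = n^2$.

\textbf{Proof.} By induction on $n$.
\emph{Base case:} for $n = 1$ the sum is $1 = 1^2$.
\emph{Inductive step:} assume $\sum_{k=1}^{n} (2k - 1) = n^2$. Then
\[
  \sum_{k=1}^{n+1} (2k - 1) \;=\; n^2 + \bigl(2(n+1) - 1\bigr) \;=\; n^2 + 2n + 1 \;=\; (n+1)^2,
\]
which is the claim for $n + 1$. $\square$
```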
This revised and extensively expanded edition of "Computability and Complexity Theory" comprises essential materials that are core knowledge in the theory of computation. The book is self-contained, with a preliminary chapter describing key mathematical concepts and notations. Subsequent chapters move from the qualitative aspects of classical computability theory to the quantitative aspects of complexity theory. Dedicated chapters on undecidability, NP-completeness, and relative computability focus on the limitations of computability and the distinctions between feasible and intractable. Substantial new content in this edition includes:
- A chapter on nonuniformity studying Boolean circuits, advice classes and the important result of Karp and Lipton
- A chapter studying properties of the fundamental probabilistic complexity classes
- A study of the alternating Turing machine and uniform circuit classes
- An introduction of counting classes, proving the famous results of Valiant and Vazirani and of Toda
- A thorough treatment of the proof that IP is identical to PSPACE

With its accessibility and well-devised organization, this text/reference is an excellent resource and guide for those looking to develop a solid grounding in the theory of computing. Beginning graduates, advanced undergraduates, and professionals involved in theoretical computer science, complexity theory, and computability will find the book an essential and practical learning tool. Topics and features:
- Concise, focused materials cover the most fundamental concepts and results in the field of modern complexity theory, including the theory of NP-completeness, NP-hardness, the polynomial hierarchy, and complete problems for other complexity classes
- Contains information that otherwise exists only in research literature and presents it in a unified, simplified manner
- Provides key mathematical background information, including sections on logic and number theory and algebra
- Supported by numerous exercises and supplementary problems for reinforcement and self-study purposes
A One-Stop Source of Known Results, a Bibliography of Papers on the Subject, and Novel Research Directions

Focusing on a very active area of research in the last decade, Combinatorics of Compositions and Words provides an introduction to the methods used in the combinatorics of pattern avoidance and pattern enumeration in compositions and words. It also presents various tools and approaches that are applicable to other areas of enumerative combinatorics. After a historical perspective on research in the area, the text introduces techniques to solve recurrence relations, including iteration and generating functions. It then focuses on enumeration of basic statistics for compositions. The text goes on to present results on pattern avoidance for subword, subsequence, and generalized patterns in compositions and then applies these results to words. The authors also cover automata, the ECO method, generating trees, and asymptotic results via random compositions and complex analysis. Highlighting both established and new results, this book explores numerous tools for enumerating patterns in compositions and words. It includes a comprehensive bibliography and incorporates the use of the computer algebra systems Maple and Mathematica®, as well as C++ to perform computations.
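The generating-function technique mentioned above is visible in the area's most basic example (a classical fact, not quoted from the book): counting all compositions of n, i.e. ordered sequences of positive integers summing to n.

```latex
% Each part of a composition contributes x + x^2 + x^3 + ... = x/(1 - x),
% and a composition is any finite sequence of parts, so
\[
  C(x) \;=\; \sum_{k \ge 0} \left( \frac{x}{1 - x} \right)^{\!k}
        \;=\; \frac{1}{1 - \frac{x}{1 - x}}
        \;=\; \frac{1 - x}{1 - 2x}
        \;=\; 1 + \sum_{n \ge 1} 2^{\,n-1} x^n,
\]
% recovering the classical fact that n has exactly 2^{n-1} compositions.
```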
Multiagent systems (MAS) are one of the most exciting and fastest-growing domains in intelligent resource management and agent-oriented technology, dealing with the modeling of autonomous decision-making entities. Recent developments have produced very encouraging results in the novel approach of handling multiplayer interactive systems. In particular, the multiagent system approach is adapted to model, control, manage or test the operations and management of several system applications, including multi-vehicle systems, microgrids and multi-robot teams, where agents represent individual entities in the network. Each participant is modeled as an autonomous participant with independent strategies and responses to outcomes. Participants are able to operate autonomously and interact pro-actively with their environment. Recent works address the problem of information consensus, where a team of vehicles communicates to agree on key pieces of information that enable them to work together in a coordinated fashion. The problem is challenging because communication channels have limited range and there are possibilities of fading and dropout. The book comprises chapters on synchronization and consensus in multiagent systems. It shows that the joint presentation of synchronization and consensus enables readers to learn about the similarities and differences of both concepts. It reviews the cooperative control of multi-agent dynamical systems interconnected by a communication network topology. Using the terminology of cooperative control, each system is endowed with its own state variable and dynamics. A fundamental problem in multi-agent dynamical systems on networks is the design of distributed protocols that guarantee consensus or synchronization, in the sense that the states of all the systems reach the same value.
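The distributed protocols described above can be illustrated with the standard linear consensus update (a textbook scheme, not necessarily the book's): each agent repeatedly nudges its state toward those of its neighbours, x_i(k+1) = x_i(k) + eps * sum over j in N_i of (x_j(k) - x_i(k)). A toy simulation on an invented four-agent line network:

```python
# Four agents on a line graph: 0 -- 1 -- 2 -- 3 (invented topology).
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [0.0, 10.0, 4.0, 6.0]   # initial (disagreeing) states
eps = 0.3                   # step size; eps < 1/(max degree) suffices for convergence

for step in range(100):
    # Every agent moves toward the average of its neighbours' states.
    x = [xi + eps * sum(x[j] - xi for j in neighbours[i])
         for i, xi in enumerate(x)]

# The symmetric update preserves the mean, so all states converge to it.
print([round(v, 3) for v in x])  # all four values near the average, 5.0
```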
You may like...
Moomin: Dangerous Journey (Foiled Blank…
Flame Tree Studio
Notebook / blank book