This book describes the essential components of the SCION secure Internet architecture, the first architecture designed foremost for strong security and high availability. Among its core features, SCION also provides route control, explicit trust information, multipath communication, scalable quality-of-service guarantees, and efficient forwarding. The book includes functional specifications of the network elements, communication protocols among these elements, data structures, and configuration files. In particular, the book offers a specification of a working prototype. The authors provide a comprehensive description of the main design features for achieving a secure Internet architecture. They guide the reader throughout, structuring the book so that the technical detail gradually increases, and supporting the text with a glossary, an index, a list of abbreviations, answers to frequently asked questions, and special highlighting for examples and for sections that explain important research, engineering, and deployment features. The book is suitable for researchers, practitioners, and graduate students who are interested in network security.
Fast Solvers for Mesh-Based Computations presents an alternative way of constructing multi-frontal direct solver algorithms for mesh-based computations. It also describes how to design and implement those algorithms. The book's structure follows that of the matrices, starting from tri-diagonal matrices resulting from one-dimensional mesh-based methods, through multi-diagonal or block-diagonal matrices, and ending with general sparse matrices. Each chapter explains how to design and implement a parallel sparse direct solver specific to a particular structure of the matrix. All the solvers presented are either designed from scratch or based on previously designed and implemented solvers. Each chapter also derives the complete JAVA or Fortran code of the parallel sparse direct solver. The exemplary JAVA codes can be used as a reference for designing parallel direct solvers in more efficient languages for specific architectures of parallel machines. The author also derives exemplary element frontal matrices for different one-, two-, or three-dimensional mesh-based computations. These matrices can be used as references for testing the developed parallel direct solvers. Based on more than 10 years of the author's experience in the area, this book is a valuable resource for researchers and graduate students who would like to learn how to design and implement parallel direct solvers for mesh-based computations.
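The book derives its own Java and Fortran solver code; purely as an illustration of the kind of structure-exploiting elimination its one-dimensional chapters begin with, here is a minimal sketch of the classical Thomas algorithm for tridiagonal systems. Class and method names are placeholders, not code from the book.

```java
// Minimal sketch of a tridiagonal (Thomas) direct solver, the kind of
// structure-exploiting elimination used for one-dimensional mesh problems.
// Illustrative only; not the book's code.
public class TridiagonalSolver {

    // Solves a system with sub-diagonal a[1..n-1], diagonal b[0..n-1],
    // super-diagonal c[0..n-2], and right-hand side d[0..n-1].
    public static double[] solve(double[] a, double[] b, double[] c, double[] d) {
        int n = b.length;
        double[] cp = new double[n];
        double[] dp = new double[n];
        double[] x  = new double[n];

        // Forward elimination: remove the sub-diagonal.
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = (i < n - 1) ? c[i] / m : 0.0;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }

        // Back substitution.
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--) {
            x[i] = dp[i] - cp[i] * x[i + 1];
        }
        return x;
    }

    public static void main(String[] args) {
        // 1D mesh example: -u'' = f discretized with the [-1 2 -1] stencil.
        double[] a = {0, -1, -1, -1};
        double[] b = {2, 2, 2, 2};
        double[] c = {-1, -1, -1, 0};
        double[] d = {1, 1, 1, 1};
        System.out.println(java.util.Arrays.toString(solve(a, b, c, d)));
    }
}
```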
This book explains deep learning concepts and derives semi-supervised learning and nuclear learning frameworks based on cognition mechanisms and Lie group theory. Lie group machine learning is a theoretical basis for brain intelligence, neuromorphic learning (NL), advanced machine learning, and advanced artificial intelligence. The book further discusses algorithms and applications in tensor learning, spectrum estimation learning, Finsler geometry learning, homology boundary learning, and prototype theory. With abundant case studies, this book can be used as a reference book for senior college students and graduate students as well as college teachers and scientific and technical personnel involved in computer science, artificial intelligence, machine learning, automation, mathematics, management science, cognitive science, financial management, and data analysis. In addition, this text can be used as the basis for teaching the principles of machine learning. Li Fanzhang is a professor at Soochow University, China. He is the director of the network security engineering laboratory in Jiangsu Province and is also the director of the Soochow Institute of Industrial Big Data. He has published more than 200 papers, 7 academic monographs, and 4 textbooks. Zhang Li is a professor at the School of Computer Science and Technology of Soochow University. She has published more than 100 papers in journals and conferences, and holds 23 patents. Zhang Zhao is currently an associate professor at the School of Computer Science and Technology of Soochow University. He has authored and co-authored more than 60 technical papers.
The new computing environment enabled by advances in service-oriented architectures, mashups, and cloud computing will consist of service spaces comprising data, applications, and infrastructure resources distributed over the Web. This environment embraces a holistic paradigm in which users, services, and resources establish on-demand interactions, possibly in real time, to realise useful experiences. Such interactions obtain relevant services that are targeted to the time and place of the user requesting the service and to the device used to access it. The benefit of such an environment originates from the added value generated by the possible interactions at large scale rather than by the capabilities of its individual components separately. This offers tremendous automation opportunities in a variety of application domains including forecasting, office tasks, travel support, intelligent information gathering and analysis, environment monitoring, healthcare, e-business, community-based systems, e-science and e-government. A key feature of this environment is the ability to dynamically compose services to realise user tasks. While recent advances in service discovery, composition and Semantic Web technologies contribute the necessary first steps to facilitate this task, the benefits of composition are still too limited to take advantage of large-scale ubiquitous environments. Mainstream composition techniques and technologies rely on human understanding and manual programming to compose and aggregate services. Recent advances improve composition by leveraging search technologies and flow-based composition languages, as in mashups and process-centric service composition.
Fuzzy social choice theory is useful for modeling the uncertainty and imprecision prevalent in social life yet it has been scarcely applied and studied in the social sciences. Filling this gap, Application of Fuzzy Logic to Social Choice Theory provides a comprehensive study of fuzzy social choice theory. The book explains the concept of a fuzzy maximal subset of a set of alternatives, fuzzy choice functions, the factorization of a fuzzy preference relation into the "union" (conorm) of a strict fuzzy relation and an indifference operator, fuzzy non-Arrowian results, fuzzy versions of Arrow's theorem, and Black's median voter theorem for fuzzy preferences. It examines how unambiguous and exact choices are generated by fuzzy preferences and whether exact choices induced by fuzzy preferences satisfy certain plausible rationality relations. The authors also extend known Arrowian results involving fuzzy set theory to results involving intuitionistic fuzzy sets as well as the Gibbard-Satterthwaite theorem to the case of fuzzy weak preference relations. The final chapter discusses Georgescu's degree of similarity of two fuzzy choice functions.
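As a small illustration of how an exact choice can be extracted from a fuzzy preference relation, the sketch below computes Orlovsky's fuzzy non-dominance degrees. This is a standard formulation used here only for orientation; it is not necessarily the definition of a fuzzy maximal subset adopted in the book.

```java
// Orlovsky-style non-dominance degrees for a fuzzy preference relation.
// Illustrative formulation only; names and the exact definition are assumptions.
public class FuzzyChoice {

    // r[x][y] in [0,1] is the degree to which alternative x is preferred to y.
    // Returns nd[x] = 1 - max_y max(r[y][x] - r[x][y], 0),
    // the degree to which x is not dominated by any other alternative.
    public static double[] nonDominance(double[][] r) {
        int n = r.length;
        double[] nd = new double[n];
        for (int x = 0; x < n; x++) {
            double worst = 0.0;
            for (int y = 0; y < n; y++) {
                worst = Math.max(worst, Math.max(r[y][x] - r[x][y], 0.0));
            }
            nd[x] = 1.0 - worst;
        }
        return nd;
    }

    public static void main(String[] args) {
        double[][] r = {
            {0.0, 0.7, 0.6},
            {0.3, 0.0, 0.8},
            {0.4, 0.2, 0.0}
        };
        // Alternatives with non-dominance degree 1.0 form the exact choice set.
        System.out.println(java.util.Arrays.toString(nonDominance(r)));
    }
}
```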
Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.
This new book, the first of its kind, examines the use of algorithmic techniques to compress random and non-random sequential strings found in chains of polymers. The book is an introduction to algorithmic complexity. Examples taken from current research in the polymer sciences are used for compression of like-natured properties as found on a chain of polymers. Both theory and applied aspects of algorithmic compression are reviewed. A description of the types of polymers and their uses is followed by a chapter on various types of compression systems that can be used to compress polymer chains into manageable units. The work is intended for graduate and postgraduate university students in the physical sciences and engineering.
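As a toy instance of the compression-as-regularity idea, the sketch below run-length encodes a monomer sequence: the more regular the chain, the shorter the encoding. It is a generic example, not a compression system taken from the book.

```java
// Minimal run-length encoder over a monomer sequence. A shorter output
// suggests a more regular (less algorithmically complex) chain.
// Generic illustration only; not drawn from the book.
public class MonomerRLE {

    // Encodes e.g. "AAABBC" as "A3B2C1".
    public static String encode(String chain) {
        if (chain.isEmpty()) return "";
        StringBuilder out = new StringBuilder();
        char current = chain.charAt(0);
        int run = 1;
        for (int i = 1; i < chain.length(); i++) {
            if (chain.charAt(i) == current) {
                run++;
            } else {
                out.append(current).append(run);
                current = chain.charAt(i);
                run = 1;
            }
        }
        out.append(current).append(run);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("AAAAABBBCCAAAA")); // A5B3C2A4: highly regular
        System.out.println(encode("ABABABAB"));       // alternating chain: RLE does not help
    }
}
```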
This book focuses on the implementation, evaluation and application of DNA/RNA-based genetic algorithms in connection with neural network modeling, fuzzy control, the Q-learning algorithm and CNN deep learning classifier. It presents several DNA/RNA-based genetic algorithms and their modifications, which are tested using benchmarks, as well as detailed information on the implementation steps and program code. In addition to single-objective optimization, here genetic algorithms are also used to solve multi-objective optimization for neural network modeling, fuzzy control, model predictive control and PID control. In closing, new topics such as Q-learning and CNN are introduced. The book offers a valuable reference guide for researchers and designers in system modeling and control, and for senior undergraduate and graduate students at colleges and universities.
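For orientation, the sketch below is a bare-bones binary genetic algorithm on the OneMax benchmark. The book's DNA/RNA-based algorithms build richer encodings and operators on top of this basic loop; nothing here is drawn from the book's program code.

```java
import java.util.Random;

// Bare-bones binary genetic algorithm maximizing the number of 1-bits (OneMax).
// Baseline sketch only; parameter values and names are illustrative.
public class SimpleGA {
    static final int LEN = 32, POP = 20, GENS = 100;
    static final double CROSS = 0.9, MUT = 1.0 / LEN;
    static final Random RNG = new Random(42);

    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    // Binary tournament selection.
    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] g : pop)
            for (int i = 0; i < LEN; i++) g[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENS; gen++) {
            boolean[][] next = new boolean[POP][LEN];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                boolean[] child = new boolean[LEN];
                boolean cross = RNG.nextDouble() < CROSS;
                int cut = RNG.nextInt(LEN);
                for (int i = 0; i < LEN; i++) {
                    child[i] = (cross && i >= cut) ? p2[i] : p1[i]; // one-point crossover
                    if (RNG.nextDouble() < MUT) child[i] = !child[i]; // bit-flip mutation
                }
                next[k] = child;
            }
            pop = next;
        }
        int best = 0;
        for (boolean[] g : pop) best = Math.max(best, fitness(g));
        System.out.println("best fitness = " + best + " / " + LEN);
    }
}
```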
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every researcher and student of complex networks. This book is about specifying, classifying, designing, and implementing mostly sequential and also parallel and distributed algorithms that can be used to analyze the static properties of complex networks. Providing a focused scope consisting of graph theory and algorithms for complex networks, the book identifies and describes a repertoire of algorithms that may be useful for any complex network. It provides the basic background in graph theory, supplies a survey of the key algorithms for the analysis of complex networks, and presents case studies of complex networks that illustrate the implementation of algorithms in real-world networks, including protein interaction networks, social networks, and computer networks. Requiring only a basic background in discrete mathematics and algorithms, the book supplies guidance that is accessible to beginning researchers and students with little background in complex networks. To help beginners in the field, most of the algorithms are provided in ready-to-be-executed form. While not a primary textbook, the book includes pedagogical features such as learning objectives, end-of-chapter summaries, and review questions.
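As a taste of the static properties such algorithms compute, the sketch below derives node degrees and local clustering coefficients from an adjacency list. The class and method names are illustrative, not taken from the book.

```java
import java.util.*;

// Computes node degree and local clustering coefficient for an undirected
// network stored as an adjacency list. Illustrative sketch only.
public class NetworkStats {

    // clustering(v) = (edges among neighbours of v) / (deg(v) choose 2)
    static double clustering(Map<Integer, Set<Integer>> adj, int v) {
        List<Integer> nb = new ArrayList<>(adj.get(v));
        int k = nb.size();
        if (k < 2) return 0.0;
        int links = 0;
        for (int i = 0; i < k; i++)
            for (int j = i + 1; j < k; j++)
                if (adj.get(nb.get(i)).contains(nb.get(j))) links++;
        return 2.0 * links / (k * (k - 1));
    }

    public static void main(String[] args) {
        // Tiny toy network: a triangle 0-1-2 plus a pendant node 3.
        int[][] edges = {{0, 1}, {1, 2}, {0, 2}, {2, 3}};
        Map<Integer, Set<Integer>> adj = new HashMap<>();
        for (int[] e : edges) {
            adj.computeIfAbsent(e[0], x -> new HashSet<>()).add(e[1]);
            adj.computeIfAbsent(e[1], x -> new HashSet<>()).add(e[0]);
        }
        for (int v : adj.keySet())
            System.out.printf("node %d: degree %d, clustering %.2f%n",
                    v, adj.get(v).size(), clustering(adj, v));
    }
}
```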
Many machine learning tasks involve solving complex optimization problems, such as working on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning applies evolutionary algorithms to address optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors solid theoretical approaches. Recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
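To give a flavour of the running-time tools presented in Part II, here is a classic fitness-level argument (a standard result in the area, not quoted from the book) for the (1+1) evolutionary algorithm maximizing OneMax over {0,1}^n: if the current search point has i one-bits, a fitness improvement occurs whenever exactly one of the n - i zero-bits flips and all other bits are kept, so the success probability s_i and the expected optimization time E[T] satisfy

```latex
s_i \;\ge\; (n-i)\cdot\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{e\,n},
\qquad
\mathbb{E}[T] \;\le\; \sum_{i=0}^{n-1}\frac{1}{s_i}
\;\le\; e\,n\sum_{i=0}^{n-1}\frac{1}{n-i}
\;=\; e\,n\,H_n \;=\; O(n\log n).
```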
This work presents the latest developments in the field of computational intelligence to advance Big Data and Cloud Computing applications in medical diagnosis. As a forum for academics and professionals, it covers state-of-the-art research challenges and issues in digital information and knowledge management, along with the solutions adopted in these fields.
This fourth edition of Robert Sedgewick and Kevin Wayne's Algorithms is the leading textbook on algorithms today and is widely used in colleges and universities worldwide. This book surveys the most important computer algorithms currently in use and provides a full treatment of data structures and algorithms for sorting, searching, graph processing, and string processing -- including fifty algorithms every programmer should know. In this edition, new Java implementations are written in an accessible modular programming style, where all of the code is exposed to the reader and ready to use. The algorithms in this book represent a body of knowledge developed over the last 50 years that has become indispensable, not just for professional programmers and computer science students but for any student with interests in science, mathematics, and engineering, not to mention students who use computation in the liberal arts. The companion web site, algs4.cs.princeton.edu, contains an online synopsis, full Java implementations, test data, exercises and answers, dynamic visualizations, lecture slides, programming assignments with checklists, and links to related material. The MOOC related to this book is accessible via the "Online Course" link at algs4.cs.princeton.edu. The course offers more than 100 video lecture segments that are integrated with the text, extensive online assessments, and the large-scale discussion forums that have proven so valuable. Offered each fall and spring, this course regularly attracts tens of thousands of registrants. Robert Sedgewick and Kevin Wayne are developing a modern approach to disseminating knowledge that fully embraces technology, enabling people all around the world to discover new ways of learning and teaching. By integrating their textbook, online content, and MOOC, all at the state of the art, they have built a unique resource that greatly expands the breadth and depth of the educational experience.
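To illustrate the modular programming style mentioned above, here is a generic binary search written in that spirit: a static method on plain arrays with a small test client. It is a sketch for orientation only, not code reproduced from the book or from algs4.cs.princeton.edu.

```java
import java.util.Arrays;

// Generic binary search in a modular style: a small reusable static method
// plus a test client. Illustrative only.
public class BinarySearch {

    // Returns the index of key in the sorted array a, or -1 if not present.
    public static int indexOf(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi) / 2
            if      (key < a[mid]) hi = mid - 1;
            else if (key > a[mid]) lo = mid + 1;
            else return mid;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] whitelist = {10, 23, 35, 42, 58, 71, 99};
        Arrays.sort(whitelist);                     // precondition: sorted input
        System.out.println(indexOf(whitelist, 42)); // 3
        System.out.println(indexOf(whitelist, 7));  // -1
    }
}
```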
Hardware-intrinsic security is a young field dealing with secure secret key storage. By generating the secret keys from the intrinsic properties of the silicon, e.g., from intrinsic Physical Unclonable Functions (PUFs), no permanent secret key storage is required anymore, and the key is only present in the device for a minimal amount of time. The field is extending to hardware-based security primitives and protocols such as block ciphers and stream ciphers entangled with the hardware, thus improving IC security. At the application level, there is a growing interest in hardware security for RFID systems and the necessary accompanying system architectures. This book brings together contributions from researchers and practitioners in academia and industry, an interdisciplinary group with backgrounds in physics, mathematics, cryptography, coding theory and processor theory. It will serve as important background material for students and practitioners, and will stimulate much further research and development.
Software has become a key component of contemporary life and algorithms that rank, classify, or recommend are everywhere. Building on the philosophy of Gilbert Simondon and the cultural techniques tradition, this book examines the constructive and cumulative character of software and retraces the historical trajectories of a series of algorithmic techniques that have become the building blocks for contemporary practices of ordering. Developed in opposition to centuries of library tradition, these techniques instantiate dynamic, perspectivist, and interested forms of knowing. Embedded in technical infrastructures and economic logics, they have become engines of order that transform how we arrange information, ideas, and people.
For 60 years the International Federation for Information Processing (IFIP) has been advancing research in Information and Communication Technology (ICT). This book looks into both past experiences and future perspectives using the core of IFIP's competence, its Technical Committees (TCs) and Working Groups (WGs). Soon after IFIP was founded, it established TCs and related WGs to foster the exchange and development of the scientific and technical aspects of information processing. IFIP TCs are as diverse as the different aspects of information processing, but they share the following aims: To establish and maintain liaison with national and international organizations with allied interests and to foster cooperative action, collaborative research, and information exchange. To identify subjects and priorities for research, to stimulate theoretical work on fundamental issues, and to foster fundamental research which will underpin future development. To provide a forum for professionals with a view to promoting the study, collection, exchange, and dissemination of ideas, information, and research findings and thereby to promote the state of the art. To seek and use the most effective ways of disseminating information about IFIP's work including the organization of conferences, workshops and symposia and the timely production of relevant publications. To have special regard for the needs of developing countries and to seek practicable ways of working with them. To encourage communication and to promote interaction between users, practitioners, and researchers. To foster interdisciplinary work and - in particular - to collaborate with other Technical Committees and Working Groups. The 17 contributions in this book describe the scientific, technical, and further work in TCs and WGs and in many cases also assess the future consequences of the work's results. These contributions explore the developments of IFIP and the ICT profession now and over the next 60 years. The contributions are arranged per TC and conclude with the chapter on the IFIP code of ethics and conduct.
This book provides a systematic and comparative description of the vast number of research issues related to the quality of data and information. It does so by delivering a sound, integrated and comprehensive overview of the state of the art and future development of data and information quality in databases and information systems. To this end, it presents an extensive description of the techniques that constitute the core of data and information quality research, including record linkage (also called object identification), data integration, error localization and correction, and examines the related techniques in a comprehensive and original methodological framework. Quality dimension definitions and adopted models are also analyzed in detail, and differences between the proposed solutions are highlighted and discussed. Furthermore, while systematically describing data and information quality as an autonomous research area, paradigms and influences deriving from other areas, such as probability theory, statistical data analysis, data mining, knowledge representation, and machine learning are also included. Last but not least, the book also highlights very practical solutions, such as methodologies, benchmarks for the most effective techniques, case studies, and examples. The book has been written primarily for researchers in the fields of databases and information management or in natural sciences who are interested in investigating properties of data and information that have an impact on the quality of experiments, processes and on real life. The material presented is also sufficiently self-contained for masters or PhD-level courses, and it covers all the fundamentals and topics without the need for other textbooks. Data and information system administrators and practitioners, who deal with systems exposed to data-quality issues and as a result need a systematization of the field and practical methods in the area, will also benefit from the combination of concrete practical approaches with sound theoretical formalisms.
Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. We are observing a real technical boom connected with achievements in nanoelectronics. It has resulted in the development of very complex integrated circuits, particularly field programmable logic devices (FPLD). Up-to-date FPLD chips are so huge that a single chip is enough to implement a really complex digital system including a datapath and a control unit. Because of the extreme complexity of modern microchips, it is very important to develop effective design methods oriented towards the particular properties of logic elements. The development of digital systems using FPLD microchips is not possible without the use of different hardware description languages (HDL), such as VHDL and Verilog. Different computer-aided design (CAD) tools are widely used to develop digital system hardware. As the majority of researchers point out, the design process is now very similar to the process of program development. It allows a researcher to pay more attention to specific problems for which there are no standard formal methods of solution. But application of all these achievements does not per se guarantee the development of a competitive electronic product, especially within an acceptable time-to-market. This problem can be solved only if a researcher possesses fundamental knowledge of the design process and knows exactly the mode of operation of the industrial CAD tools in use. As is known, any digital system can be represented as a composition of a datapath and a control unit.
This open access book presents the results of three years of collaboration between earth scientists and data scientists in developing and applying data science methods for scientific discovery. The book will be highly beneficial for other researchers at senior and graduate level who are interested in applying visual data exploration, computational approaches and scientific workflows.
Descriptive complexity theory establishes a connection between the computational complexity of algorithmic problems (the computational resources required to solve the problems) and their descriptive complexity (the language resources required to describe the problems). This groundbreaking book approaches descriptive complexity from the angle of modern structural graph theory, specifically graph minor theory. It develops a 'definable structure theory' concerned with the logical definability of graph theoretic concepts such as tree decompositions and embeddings. The first part starts with an introduction to the background, from logic, complexity, and graph theory, and develops the theory up to first applications in descriptive complexity theory and graph isomorphism testing. It may serve as the basis for a graduate-level course. The second part is more advanced and mainly devoted to the proof of a single, previously unpublished theorem: properties of graphs with excluded minors are decidable in polynomial time if, and only if, they are definable in fixed-point logic with counting.
In recent years, popular media have inundated audiences with sensationalised headlines recounting data breaches, new forms of surveillance and other dangers of our digital age. Despite their regularity, such accounts treat each case as unprecedented and unique. This book proposes a radical rethinking of the history, present and future of our relations with the digital, spatial technologies that increasingly mediate our everyday lives. From smartphones to surveillance cameras, to navigational satellites, these new technologies offer visions of integrated, smooth and efficient societies, even as they directly conflict with the ways users experience them. Recognising the potential for both control and liberation, the authors argue against both acquiescence to and rejection of these technologies. Through intentional use of the very systems that monitor them, activists from Charlottesville to Hong Kong are subverting, resisting and repurposing geographic technologies. Using examples as varied as writings on the first telephones to the experiences of a feminist collective for migrant women in Spain, the authors present a revolution of everyday technologies. In the face of the seemingly inevitable dominance of corporate interests, these technologies allow us to create new spaces of affinity, and a new politics of change.
This comprehensive textbook presents a clean and coherent account of most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments of the field, including application of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discussing a certain algorithmic paradigm. The material covered in this part can be used for an introductory course on fixed-parameter tractability. Part II discusses more advanced and specialized algorithmic ideas, bringing the reader to the cutting edge of current research. Part III presents complexity results and lower bounds, giving negative evidence by way of W[1]-hardness, the Exponential Time Hypothesis, and kernelization lower bounds. All the results and concepts are introduced at a level accessible to graduate students and advanced undergraduate students. Every chapter is accompanied by exercises, many with hints, while the bibliographic notes point to original publications and related work.
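As a concrete taste of the branching paradigm covered in Part I, the sketch below is the standard bounded-search-tree algorithm for Vertex Cover: pick any uncovered edge (u, v) and branch on putting u or v into the cover, giving O(2^k * poly(n)) time. It is a generic illustration, not code from the book.

```java
import java.util.*;

// Bounded-search-tree (branching) algorithm for Vertex Cover.
// Illustrative sketch only; not the book's code.
public class VertexCoverFPT {

    // edges: list of undirected edges {u, v}. Returns true iff a vertex cover
    // of size at most k exists.
    static boolean hasCover(List<int[]> edges, int k) {
        if (edges.isEmpty()) return true;   // nothing left to cover
        if (k == 0) return false;           // budget exhausted but edges remain
        int[] e = edges.get(0);
        return hasCover(remove(edges, e[0]), k - 1)   // branch: take u
            || hasCover(remove(edges, e[1]), k - 1);  // branch: take v
    }

    // Returns the edge list with all edges incident to vertex v removed.
    static List<int[]> remove(List<int[]> edges, int v) {
        List<int[]> rest = new ArrayList<>();
        for (int[] e : edges)
            if (e[0] != v && e[1] != v) rest.add(e);
        return rest;
    }

    public static void main(String[] args) {
        List<int[]> path = Arrays.asList(new int[]{0, 1}, new int[]{1, 2},
                                         new int[]{2, 3}, new int[]{3, 4});
        System.out.println(hasCover(new ArrayList<>(path), 2)); // true: {1, 3}
        System.out.println(hasCover(new ArrayList<>(path), 1)); // false
    }
}
```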
Written in easy-to-understand language, this self-explanatory guide introduces the fundamentals of finite element methods and their application to differential equations. Beginning with a brief introduction to Sobolev spaces and elliptic scalar problems, the text progresses through an explanation of finite element spaces and estimates for the interpolation error. The concepts of finite element methods for scalar parabolic problems, object-oriented finite element algorithms, efficient implementation techniques, and high-dimensional parabolic problems are presented in different chapters. Recent advances in finite element methods, including non-conforming finite elements for boundary value problems of higher order and approaches for solving differential equations in high-dimensional domains, are explained for the benefit of the reader. Numerous solved examples and mathematical theorems are interspersed throughout the text for enhanced learning.
Gather and analyze data successfully, identify trends, and then create overarching strategies and actionable next steps - all through Excel. This book will show even those who lack a technical background how to make advanced interactive reports with only Excel at hand. Advanced visualization is available to everyone, and this step-by-step guide will show you how. The information in this book is presented in an accessible and understandable way for everyone, regardless of the level of technical skills and proficiency in MS Excel. The dashboard development process is given in the format of step-by-step instructions, taking you through each step in detail. Universal checklists and recommendations of a practicing business analyst and trainer will help in solving various tasks when working with data visualization. Illustrations will help you perceive information easily and quickly. Make Your Data Speak will show you how to master the main rules, techniques and tricks of professional data visualization in just a few days. What you'll learn: how interactive dashboards can be useful for a business; the basic rules for building dashboards; why it's important to pay attention to colors and fonts when developing a dashboard; and how to create interactive management reports in Excel. Who this book is for: company executives and divisional managers, middle managers, and business analysts.