In recent years, the United Kingdom's Home Office has started using automated systems to make immigration decisions. These systems promise faster, more accurate, and cheaper decision-making, but in practice they have exposed people to distress, disruption, and even deportation. This book identifies a pattern of risky experimentation with automated systems in the Home Office. It analyses three recent case studies: a voice recognition system used to detect fraud in English-language testing; an algorithm for identifying 'risky' visa applications; and automated decision-making in the EU Settlement Scheme. The book argues that a precautionary approach is essential to ensure that society benefits from government automation without exposing individuals to unacceptable risks.
Many machine learning tasks involve solving complex optimization problems, such as working on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning applies evolutionary algorithms to address optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors solid theoretical approaches. Recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
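For readers new to the area, here is a minimal sketch of the kind of algorithm whose running time such analyses address: a (1+1) evolutionary algorithm maximizing the OneMax function (the number of ones in a bit string). The code is illustrative Python, not drawn from the book.

    import random

    def one_max(bits):
        # Fitness: the number of ones in the bit string.
        return sum(bits)

    def one_plus_one_ea(n=20, max_iters=10000, seed=0):
        # (1+1)-EA: keep a single parent; flip each bit independently with
        # probability 1/n; accept the child if it is at least as fit.
        rng = random.Random(seed)
        parent = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(max_iters):
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            if one_max(child) >= one_max(parent):
                parent = child
            if one_max(parent) == n:
                break
        return parent

    # With n = 20, the optimum is typically reached well within the budget.
    print(one_max(one_plus_one_ea()))

Runtime results of exactly this kind, such as the expected O(n log n) optimization time of the (1+1)-EA on OneMax, are representative of the analyses that the general tools in Part II support.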
Fuzzy social choice theory is useful for modeling the uncertainty and imprecision prevalent in social life, yet it has scarcely been applied and studied in the social sciences. Filling this gap, Application of Fuzzy Logic to Social Choice Theory provides a comprehensive study of fuzzy social choice theory. The book explains the concept of a fuzzy maximal subset of a set of alternatives, fuzzy choice functions, the factorization of a fuzzy preference relation into the "union" (conorm) of a strict fuzzy relation and an indifference operator, fuzzy non-Arrowian results, fuzzy versions of Arrow's theorem, and Black's median voter theorem for fuzzy preferences. It examines how unambiguous and exact choices are generated by fuzzy preferences and whether exact choices induced by fuzzy preferences satisfy certain plausible rationality relations. The authors also extend known Arrowian results involving fuzzy set theory to results involving intuitionistic fuzzy sets, as well as the Gibbard-Satterthwaite theorem to the case of fuzzy weak preference relations. The final chapter discusses Georgescu's degree of similarity of two fuzzy choice functions.
The new computing environment enabled by advances in service-oriented architectures, mashups, and cloud computing will consist of service spaces comprising data, applications, and infrastructure resources distributed over the Web. This environment embraces a holistic paradigm in which users, services, and resources establish on-demand interactions, possibly in real time, to realise useful experiences. Such interactions obtain relevant services that are targeted to the time and place of the user requesting the service and to the device used to access it. The benefit of such an environment originates from the added value generated by the possible interactions at large scale, rather than from the capabilities of its individual components separately. This offers tremendous automation opportunities in a variety of application domains, including forecasting, office tasks, travel support, intelligent information gathering and analysis, environment monitoring, healthcare, e-business, community-based systems, e-science and e-government. A key feature of this environment is the ability to dynamically compose services to realise user tasks. While recent advances in service discovery, composition, and Semantic Web technologies contribute the necessary first steps to facilitate this task, the benefits of composition are still too limited to take advantage of large-scale ubiquitous environments. The mainstream composition techniques and technologies rely on human understanding and manual programming to compose and aggregate services. Recent advances improve composition by leveraging search technologies and flow-based composition languages, as in mashups and process-centric service composition.
Vast holdings and assessment of consumer data by large companies are not new phenomena. What is new, and on an unprecedented scale, is firms' ability to leverage the data to reach customers in targeted campaigns and gain market share. Major companies have moved from serving as data or inventory storehouses, suppliers, and exchange mechanisms to monetizing their data and expanding the products they offer. Such changes have implications for both firms and consumers in the coming years. In Success with Big Data, Russell Walker investigates the use of internal Big Data to stimulate innovations for operational effectiveness, and the ways in which external Big Data is developed for gauging, or even prompting, customer buying decisions. Walker examines the nature of Big Data, the novel measures it creates for market activity, and the payoffs it can offer from the connectedness of the business and social world. With case studies from Apple, Netflix, Google, and Amazon, Walker both explores the market transformations that are changing perceptions of Big Data and provides a framework for assessing and evaluating Big Data. Although the world appears to be moving toward a marketplace where consumers will be able to "pull" offers from firms, rather than simply receiving offers, Walker observes that such changes will require careful consideration of legal and unspoken business practices as they affect consumer privacy. Rigorous and meticulous, Success with Big Data is a valuable resource for graduate students and professionals with an interest in Big Data, digital platforms, and analytics.
This in-depth guide covers a wide range of topics, including chapters on linear algebra, root finding, curve fitting, differentiation and integration, solving differential equations, random numbers and simulation, a whole suite of unconstrained and constrained optimization algorithms, statistics, regression, and time series analysis. The mathematical concepts behind the algorithms are clearly explained, with plenty of code examples and illustrations to help even beginners get started. In this book, you'll implement numerical algorithms in Kotlin using NM Dev, an object-oriented and high-performance programming library for applied and industrial mathematics. Discover how Kotlin has many advantages over Java in speed and, in some cases, ease of use, and see how it can help you easily create solutions for your complex engineering and data science problems. After reading this book, you'll come away with the knowledge to create your own numerical models and algorithms using the Kotlin programming language. What You Will Learn Program in Kotlin using a high-performance numerical library Learn the mathematics necessary for a wide range of numerical computing algorithms Convert ideas and equations into code Put together algorithms and classes to build your own engineering solutions Build solvers for industrial optimization problems Perform data analysis using basic and advanced statistics Who This Book Is For Programmers, data scientists, and analysts with prior experience programming in any language, especially Kotlin or Java.
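As a taste of the root-finding material covered by such a guide, Newton's method iterates

    \[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \]

which converges quadratically to a root of f when started sufficiently close to a root at which f' is nonzero. The equation is a standard illustration, not taken from the book.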
Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although results from graph theory have proven powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical, algorithmic, and graph-theoretic knowledge needed by every researcher and student of complex networks. This book is about specifying, classifying, designing, and implementing mostly sequential, and also parallel and distributed, algorithms that can be used to analyze the static properties of complex networks. With a focused scope consisting of graph theory and algorithms for complex networks, the book identifies and describes a repertoire of algorithms that may be useful for any complex network. It provides the basic background in graph theory, supplies a survey of the key algorithms for the analysis of complex networks, and presents case studies that illustrate the implementation of these algorithms in real-world networks, including protein interaction networks, social networks, and computer networks. Requiring only a basic background in discrete mathematics and algorithms, the book supplies guidance accessible to beginning researchers and students with little background in complex networks. To help beginners in the field, most of the algorithms are provided in ready-to-be-executed form. While not a primary textbook, the author has included pedagogical features such as learning objectives, end-of-chapter summaries, and review questions.
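As a flavour of the static-property algorithms such a book surveys, the sketch below computes node degrees and a local clustering coefficient from an adjacency list. The code and names are illustrative Python, not the book's own.

    from itertools import combinations

    def degrees(adj):
        # Degree of each node in an undirected graph {node: set(neighbors)}.
        return {v: len(nbrs) for v, nbrs in adj.items()}

    def clustering(adj, v):
        # Local clustering coefficient: fraction of neighbor pairs that are
        # themselves linked by an edge.
        nbrs = adj[v]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        return 2.0 * links / (k * (k - 1))

    triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
    print(degrees(triangle), clustering(triangle, 1))  # every node: degree 2, coefficient 1.0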
A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
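The heart of the Q-learning mentioned above is a one-line temporal-difference update. The following minimal tabular sketch uses illustrative names and a toy call, not code from the book.

    from collections import defaultdict

    def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
        # Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
        target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    Q = defaultdict(float)  # zero-initialized action values
    q_learning_step(Q, s=0, a=1, r=1.0, s_next=0, actions=[0, 1])
    print(Q[(0, 1)])  # 0.1 after one update

'Deep' Q-learning replaces the table Q with a neural network; the update principle is the same.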
This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important for real-time processing. It covers a recent study on the kernel APA, which extends the APA so that it is applicable to identification of not only linear systems but also nonlinear systems. The last chapter gives an overview of current topics on variable parameter APAs. The book is self-contained, and is suitable for graduate students and researchers who are interested in advanced theory of adaptive filtering.
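For reference, the standard regularized APA update, with projection order p, step size \mu, and regularization \delta, takes the form

    \[ \mathbf{e}_k = \mathbf{d}_k - \mathbf{X}_k^{\mathsf{T}} \mathbf{w}_k, \qquad \mathbf{w}_{k+1} = \mathbf{w}_k + \mu \, \mathbf{X}_k \left( \mathbf{X}_k^{\mathsf{T}} \mathbf{X}_k + \delta \mathbf{I} \right)^{-1} \mathbf{e}_k, \]

where \mathbf{X}_k stacks the p most recent input vectors and \mathbf{d}_k the corresponding desired responses; setting p = 1 recovers the NLMS update. The notation follows common usage and may differ from the book's.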
Recursion is one of the most fundamental concepts in computer science and a key programming technique that allows computations to be carried out repeatedly. Despite the importance of recursion for algorithm design, most programming books do not cover the topic in detail, even though numerous computer programming professors and researchers in the field of computer science education agree that recursion is difficult for novice students. Introduction to Recursive Programming provides a detailed and comprehensive introduction to recursion. This text will serve as a useful guide for anyone who wants to learn how to think and program recursively, by analyzing a wide variety of computational problems of diverse difficulty. It contains specific chapters on the most common types of recursion (linear, tail, and multiple), as well as on the algorithm design paradigms in which recursion is prevalent (divide and conquer, and backtracking). It can therefore be used in introductory programming courses and in more advanced classes on algorithm design. The book also covers lower-level topics related to iteration and program execution, and includes a rich chapter on the theoretical analysis of the computational cost of recursive programs, offering readers the possibility of learning some basic mathematics along the way. It also incorporates several elements aimed at helping students master the material. First, it contains a large collection of simple problems to provide a solid foundation in the core concepts before diving into more complex material. In addition, one of the book's main assets is the use of a step-by-step methodology, together with specially designed diagrams, for guiding and illustrating the process of developing recursive algorithms. Furthermore, the book covers combinatorial problems and mutual recursion. These topics can broaden students' understanding of recursion by forcing them to apply the learned concepts differently, or in a more sophisticated manner. The code examples have been written in Python 3 but should be straightforward to understand for students with experience in other programming languages. Finally, worked-out solutions to over 120 end-of-chapter exercises are available for instructors.
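In the spirit of the book's Python 3 examples, the snippet below (illustrative, not taken from the text) contrasts two of the recursion types it covers: linear and tail recursion.

    def sum_list(a):
        # Linear recursion: the addition happens after the recursive call returns.
        return 0 if not a else a[0] + sum_list(a[1:])

    def sum_list_tail(a, acc=0):
        # Tail recursion: the recursive call is the last action, with the
        # running total carried in an accumulator parameter.
        return acc if not a else sum_list_tail(a[1:], acc + a[0])

    print(sum_list([1, 2, 3, 4]), sum_list_tail([1, 2, 3, 4]))  # 10 10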
This text discusses the applications and optimization of emerging smart technologies in the field of healthcare. It further explains different modeling scenarios for the latest technologies in the healthcare system and compares the results, to better understand the nature and progression of disease in the human body, enabling earlier diagnosis and better treatment with the help of distributed technology. Covers implementation models using technologies such as artificial intelligence, machine learning, and deep learning with distributed systems for better diagnosis and treatment of diseases. Gives an in-depth review of technological advancements such as advanced sensing technologies (e.g., plasmonic sensors), the use of RFIDs, and electronic diagnostic tools in healthcare engineering. Discusses possibilities of augmented reality and virtual reality interventions for providing unique solutions in medical science, clinical research, psychology, and neurological disorders. Highlights the future challenges and risks involved in the application of smart technologies such as cloud computing, fog computing, IoT, and distributed computing in healthcare. Shows how to utilize AI, ML, and associated aids in healthcare sectors in the post-COVID-19 era to revitalize medical set-ups. The contributions included in the book will motivate technological developers and researchers to develop new algorithms and protocols in the healthcare field. It will serve as a rich source of knowledge on healthcare delivery, healthcare management, healthcare governance, and health-monitoring approaches using distributed environments. It will serve as an ideal reference text for graduate students and researchers in diverse engineering fields, including electrical, electronics and communication, computer, and biomedical engineering.
Revealing the flaws in human decision-making, this book explores how AI can be used to optimise decisions for improved business outcomes and efficiency, and looks ahead to the significant contributions Decision Intelligence (DI) can make to society and the ethical challenges it may raise. Offering a comprehensive framework for DI, from the theories and concepts used to design autonomous intelligent agents, to the technologies that power DI systems, to the ways in which companies use decision-making building blocks to build DI solutions that enable businesses to democratise AI, this book provides a systematic approach to AI-driven decision-making and human involvement. Replete with case studies of DI applications, as well as wider discussions of the social implications of the technology, this book appeals both to students of AI and data solutions and to businesses considering DI adoption.
The big challenge for a successful AI project isn't deciding which problems you can solve. It's deciding which problems you should solve. In Managing Successful AI Projects, author and AI consultant Veljko Krunic reveals secrets for succeeding in AI that he developed with Fortune 500 companies, early-stage start-ups, and other businesses across multiple industries. Key Features * Selecting the right AI project to meet specific business goals * Economizing resources to deliver the best value for money * Measuring the success of your AI efforts in business terms * Predicting whether you are on the right track to deliver your intended business results For executives, managers, team leaders, and business-focused data scientists. No specific technical knowledge or programming skills required. About the technology Companies small and large are initiating AI projects, investing vast sums of money in software, developers, and data scientists. Too often, these AI projects focus on technology at the expense of actionable or tangible business results, producing scattershot outcomes and wasted investment. Managing Successful AI Projects sets out a blueprint for AI projects to ensure they are predictable, successful, and profitable. It's filled with practical techniques for running data science programs that ensure they're cost-effective and focused on the right business goals. Veljko Krunic is an independent data science consultant who has worked with companies that range from start-ups to Fortune 10 enterprises. He holds a PhD in Computer Science and an MS in Engineering Management, both from the University of Colorado at Boulder. He is also a Six Sigma Master Black Belt.
Hardware-intrinsic security is a young field dealing with secure secret-key storage. By generating secret keys from the intrinsic properties of the silicon, e.g., from intrinsic Physical Unclonable Functions (PUFs), no permanent secret-key storage is required anymore, and the key is only present in the device for a minimal amount of time. The field is extending to hardware-based security primitives and protocols such as block ciphers and stream ciphers entangled with the hardware, thus improving IC security. At the application level, there is growing interest in hardware security for RFID systems and the necessary accompanying system architectures. This book brings together contributions from researchers and practitioners in academia and industry, an interdisciplinary group with backgrounds in physics, mathematics, cryptography, coding theory and processor theory. It will serve as important background material for students and practitioners, and will stimulate much further research and development.
This book provides a systematic and comparative description of the vast number of research issues related to the quality of data and information. It does so by delivering a sound, integrated and comprehensive overview of the state of the art and future development of data and information quality in databases and information systems. To this end, it presents an extensive description of the techniques that constitute the core of data and information quality research, including record linkage (also called object identification), data integration, error localization and correction, and examines the related techniques in a comprehensive and original methodological framework. Quality dimension definitions and adopted models are also analyzed in detail, and differences between the proposed solutions are highlighted and discussed. Furthermore, while systematically describing data and information quality as an autonomous research area, paradigms and influences deriving from other areas, such as probability theory, statistical data analysis, data mining, knowledge representation, and machine learning are also included. Last but not least, the book also highlights very practical solutions, such as methodologies, benchmarks for the most effective techniques, case studies, and examples. The book has been written primarily for researchers in the fields of databases and information management or in natural sciences who are interested in investigating properties of data and information that have an impact on the quality of experiments, processes and on real life. The material presented is also sufficiently self-contained for master's or PhD-level courses, and it covers all the fundamentals and topics without the need for other textbooks. Data and information system administrators and practitioners, who deal with systems exposed to data-quality issues and as a result need a systematization of the field and practical methods in the area, will also benefit from the combination of concrete practical approaches with sound theoretical formalisms.
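As a flavour of the record-linkage (object identification) techniques the book describes, the sketch below compares two records with a generic token-based Jaccard similarity; real systems use more sophisticated matchers, and this code is not from the book.

    def jaccard(a, b):
        # Jaccard similarity between the token sets of two record strings.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

    r1 = "John A. Smith 42 Main Street Springfield"
    r2 = "Smith John 42 Main St Springfield"
    # Declare a link when similarity exceeds a chosen threshold, e.g. 0.5.
    print(jaccard(r1, r2) > 0.5)  # True (similarity 0.625)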
This rigorous introduction to network science presents random graphs as models for real-world networks. Such networks have distinctive empirical properties and a wealth of new models have emerged to capture them. Classroom tested for over ten years, this text places recent advances in a unified framework to enable systematic study. Designed for a master's-level course, where students may only have a basic background in probability, the text covers such important preliminaries as convergence of random variables, probabilistic bounds, coupling, martingales, and branching processes. Building on this base - and motivated by many examples of real-world networks, including the Internet, collaboration networks, and the World Wide Web - it focuses on several important models for complex networks and investigates key properties, such as the connectivity of nodes. Numerous exercises allow students to develop intuition and experience in working with the models.
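As a concrete example of one classical model treated in such courses, the Erdős-Rényi random graph G(n, p) includes each possible edge independently with probability p. The sketch below (illustrative code, not from the text) samples a graph and checks the connectivity property mentioned above; connectivity famously emerges around p = log(n)/n.

    import math
    import random
    from itertools import combinations

    def gnp(n, p, seed=0):
        # Sample G(n, p): each of the n*(n-1)/2 possible edges appears
        # independently with probability p.
        rng = random.Random(seed)
        adj = {v: set() for v in range(n)}
        for u, v in combinations(range(n), 2):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
        return adj

    def is_connected(adj):
        # Breadth-first search from node 0 must reach every node.
        seen, frontier = {0}, [0]
        while frontier:
            v = frontier.pop()
            for w in adj[v] - seen:
                seen.add(w)
                frontier.append(w)
        return len(seen) == len(adj)

    n = 100
    # p is well above the ~log(n)/n threshold, so a sample is very likely connected.
    print(is_connected(gnp(n, 2 * math.log(n) / n)))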
Shadow Algorithms Data Miner provides a high-level understanding of the complete set of shadow concepts and algorithms, addressing their usefulness from a larger graphics system perspective. It discusses the applicability and limitations of all the direct illumination approaches for shadow generation. With an emphasis on shadow fundamentals, the book gives an organized picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. It helps readers select the most relevant algorithms for their needs by placing the shadow algorithms in real-world contexts. As a result, readers know where to start for their application needs, which algorithms to begin considering, and which papers and supplemental material to consult for further details.
Combinatorial Scientific Computing explores the latest research on creating algorithms and software tools to solve key combinatorial problems on large-scale high-performance computing architectures. It includes contributions from international researchers who are pioneers in designing software and applications for high-performance computing systems. The book offers a state-of-the-art overview of the latest research, tool development, and applications. It focuses on load balancing and parallelization on high-performance computers, large-scale optimization, algorithmic differentiation of numerical simulation code, sparse matrix software tools, and combinatorial challenges and applications in large-scale social networks. The authors unify these seemingly disparate areas through a common set of abstractions and algorithms based on combinatorics, graphs, and hypergraphs. Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations and their importance continues to grow with the demands of new applications and advanced architectures. By addressing current challenges in the field, this volume sets the stage for the accelerated development and deployment of fundamental enabling technologies in high-performance scientific computing.
Computer science and economics have engaged in a lively interaction over the past fifteen years, resulting in the new field of algorithmic game theory. Many problems that are central to modern computer science, ranging from resource allocation in large networks to online advertising, involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, and concepts from computer science are increasingly important in economics. This book grew out of the author's Stanford University course on algorithmic game theory, and aims to give students and other newcomers a quick and accessible introduction to many of the most important concepts in the field. The book also includes case studies on online advertising, wireless spectrum auctions, kidney exchange, and network management.
In machine learning applications, practitioners must take into account the costs associated with an algorithm, from acquiring data during training to making predictions for new samples.
Cost-Sensitive Machine Learning is one of the first books to provide an overview of the current research efforts and problems in this area. It discusses real-world applications that incorporate the cost of learning into the modeling process. The first part of the book presents the theoretical underpinnings of cost-sensitive machine learning. It describes well-established machine learning approaches for reducing data acquisition costs during training as well as approaches for reducing costs when systems must make predictions for new samples. The second part covers real-world applications that effectively trade off different types of costs. These applications not only use traditional machine learning approaches, but they also incorporate cutting-edge research that advances beyond the constraining assumptions by analyzing the application needs from first principles. Spurring further research on several open problems, this volume highlights the often implicit assumptions in machine learning techniques that were not fully understood in the past. The book also illustrates the commercial importance of cost-sensitive machine learning through its coverage of the rapid application developments made by leading companies and academic research labs.
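One standard result in this area, due to Elkan's analysis of cost-sensitive learning, is that the cost-minimizing decision threshold on the predicted probability of the positive class follows directly from the cost matrix. The sketch below is a generic illustration, not taken from this volume.

    def optimal_threshold(c_fp, c_fn):
        # Predict positive when P(y = 1 | x) exceeds this threshold; it
        # minimizes expected cost for false-positive cost c_fp and
        # false-negative cost c_fn (correct decisions assumed cost-free).
        return c_fp / (c_fp + c_fn)

    # If a false negative is nine times as costly as a false positive,
    # flag anything with predicted probability above 0.1.
    print(optimal_threshold(c_fp=1.0, c_fn=9.0))  # 0.1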
A revealing look at how negative biases against women of color are embedded in search engine results and algorithms. Run a Google search for "black girls": what will you find? "Big Booty" and other sexually explicit terms are likely to come up as top search terms. But if you type in "white girls," the results are radically different. The suggested porn sites and unmoderated discussions about "why black women are so sassy" or "why black women are so angry" present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches, as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance, operating as a source for email, a major vehicle for primary and secondary school learning, and beyond, understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Gather and analyze data successfully, identify trends, and then create overarching strategies and actionable next steps - all through Excel. This book will show even those who lack a technical background how to make advanced interactive reports with only Excel at hand. Advanced visualization is available to everyone, and this step-by-step guide will show you how. The information in this book is presented in an accessible and understandable way for everyone, regardless of their level of technical skill and proficiency in MS Excel. The dashboard development process is given as step-by-step instructions, taking you through each step in detail. Universal checklists and recommendations from a practicing business analyst and trainer will help in solving various tasks when working with data visualization. Illustrations will help you perceive information easily and quickly. Make Your Data Speak will show you how to master the main rules, techniques, and tricks of professional data visualization in just a few days. What You'll Learn See how interactive dashboards can be useful for a business Review basic rules for building dashboards Understand why it's important to pay attention to colors and fonts when developing a dashboard Create interactive management reports in Excel Who This Book Is For Company executives, divisional managers, middle managers, and business analysts
This book describes how we can design and build efficient processors for high-performance computing, AI, and data science. Although there are many textbooks on the design of processors, we do not have a widely accepted definition of the efficiency of a general-purpose computer architecture. Without such a definition, it is difficult to take a scientific approach to processor design. In this book, a clear definition of efficiency is given, making a scientific approach to processor design possible. In chapter 2, the history of the development of high-performance processors is reviewed to identify a quantity we can use to measure their efficiency. The proposed quantity is the ratio between the minimum possible energy consumption and the actual energy consumption for a given application using a given semiconductor technology. Chapter 3 discusses whether this quantity can be used in practice for many real-world applications. Chapter 4 examines past and present general-purpose processors from this viewpoint. Chapter 5 describes how we can actually design processors with near-optimal efficiencies, and chapter 6 how we can program such processors. This book gives a new way to look at the field of high-performance processor design.
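In the book's terms, the proposed efficiency measure can be written as

    \[ \eta = \frac{E_{\min}}{E_{\text{actual}}}, \]

the ratio between the minimum possible energy consumption and the actual energy consumption for a given application using a given semiconductor technology, so that \eta \le 1 and values closer to 1 indicate a more efficient design. The symbol \eta is ours; the book may use different notation.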