Justice apps - mobile and web-based programmes that can assist individuals with legal tasks - are being produced, improved, and accessed at an unprecedented rate. These technologies have the potential to reshape the justice system, improve access to justice, and demystify legal institutions. Using artificial intelligence techniques, apps can even facilitate the resolution of common legal disputes. However, these opportunities must be assessed in light of the many challenges associated with app use in the justice sector. These include the digital divide and other accessibility issues; the ethical challenges raised by the dehumanisation of legal processes; and various privacy, security, and confidentiality risks. Surveying the landscape of this emergent industry, this book explores the objectives, opportunities, and challenges presented by apps across all areas of the justice sector. Detailed consideration is also given to the use of justice apps in specific legal contexts, including the family law and criminal law sectors. As the first book to engage with justice apps, it will appeal to a wide range of legal scholars, students, practitioners, and policy-makers.
Metaheuristic algorithms are generic optimization tools that can solve very complex problems characterized by very large search spaces. Metaheuristic methods reduce the effective size of the search space through the use of effective search strategies. Book features:
- Provides a unified view of the most popular metaheuristic methods currently in use
- Includes the concepts needed for readers to implement and modify known metaheuristic methods to solve problems
- Covers design aspects and implementation in MATLAB®
- Contains numerous examples of problems and solutions that demonstrate the power of these optimization methods
The material has been written from a teaching perspective and, for this reason, the book is primarily intended for undergraduate and postgraduate students of artificial intelligence, metaheuristic methods, and/or evolutionary computation. The objective is to bridge the gap between metaheuristic techniques and complex optimization problems that profit from the convenient properties of metaheuristic approaches. Practising engineers who are not familiar with metaheuristic computation will therefore appreciate that the techniques discussed are more than theoretical tools, since they have been adapted to solve significant problems that commonly arise in their areas.
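To give a concrete flavour of the methods surveyed (the book's own implementations are in MATLAB®), here is a minimal simulated annealing sketch in Python; the objective function, neighbourhood move, and cooling schedule are illustrative assumptions, not taken from the book.

```python
import math
import random

def simulated_annealing(f, x0, steps=10000, t0=1.0, cooling=0.999):
    """Minimize f from x0 using a simple geometric cooling schedule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0, 1)      # random neighbour of the current point
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability,
        # which lets the search escape local minima early on.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                       # gradually reduce the temperature
    return best, fbest

# Example: a multimodal 1-D objective with many local minima
print(simulated_annealing(lambda x: x**2 + 10 * math.sin(3 * x), x0=5.0))
```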
The introduction of public key cryptography (PKC) was a critical advance in IT security. In contrast to symmetric key cryptography, it enables confidential communication between entities in open networks, in particular the Internet, without prior contact. Beyond this, PKC also enables protection techniques that have no analogue in traditional cryptography, most importantly digital signatures, which, for example, support Internet security by authenticating software downloads and updates. Although PKC does not require the confidential exchange of secret keys, proper management of the private and public keys used in PKC is still of vital importance: the private keys must remain private, and the public keys must be verifiably authentic. So understanding so-called public key infrastructures (PKIs) that manage key pairs is at least as important as studying the ingenious mathematical ideas underlying PKC. In this book the authors explain the most important concepts underlying PKIs and discuss relevant standards, implementations, and applications. The book is structured into chapters on the motivation for PKI, certificates, trust models, private keys, revocation, validity models, certification service providers, certificate policies, certification paths, and practical aspects of PKI. It is a suitable textbook for advanced undergraduate and graduate courses in computer science, mathematics, engineering, and related disciplines, complementing introductory courses on cryptography. The authors assume only basic computer science prerequisites, and they include exercises in all chapters and solutions in an appendix. They also include detailed pointers to relevant standards and implementation guidelines, so the book is also appropriate for self-study and reference by industrial and academic researchers and practitioners.
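To make the signature idea concrete, here is a minimal sketch using the third-party Python `cryptography` package (my example, not from the book): generate an RSA key pair, sign a message with the private key, and verify it with the public key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an RSA key pair. The private key must remain private; the public
# key must be distributed in a verifiably authentic way: that is the PKI's job.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"software update v1.2.3"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was tampered with.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```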
Realism and Complexity in Social Science is an argument for a new approach to investigating the social world, that of complex realism. Complex realism brings together a number of strands of thought in scientific realism, complexity science, probability theory and social research methodology. It proposes that the reality of the social world is probabilistic, yet there exists enough invariance to make the discovery and explanation of social objects and causal mechanisms possible. This forms the basis for the development of a complex realist foundation for social research that utilises a number of novel approaches to investigation alongside the more traditional corpus of quantitative and qualitative methods. Research examples are drawn from sociology, epidemiology, criminology, social policy and human geography. The book assumes no prior knowledge of realism, probability or complexity; in the early chapters, the reader is introduced to these concepts and the arguments against them. Although the book is grounded in philosophical reasoning, this is presented in a direct and accessible style that will appeal both to social researchers with a methodological interest and to philosophers with an interest in social investigation.
Designed for a proof-based course on linear algebra, this rigorous and concise textbook intentionally introduces vector spaces, inner products, and vector and matrix norms before Gaussian elimination and eigenvalues so that students can quickly discover the singular value decomposition (SVD), arguably the most enlightening and useful of all matrix factorizations. Gaussian elimination is then introduced after the SVD and the four fundamental subspaces, and is presented in the context of vector spaces rather than as a computational recipe. This allows the authors to use linear independence, spanning sets and bases, and the four fundamental subspaces to explain and exploit Gaussian elimination and the LU factorization, as well as the solution of overdetermined linear systems in the least squares sense and eigenvalues and eigenvectors. This unique textbook also includes examples and problems focused on concepts rather than on the mechanics of linear algebra. The problems at the end of each chapter and on an associated website encourage readers to explore how to use the notions introduced in the chapter in a variety of ways; additional problems, quizzes, and exams will be posted on the accompanying website and updated regularly. The Less Is More Linear Algebra of Vector Spaces and Matrices is for students and researchers interested in learning linear algebra who have the mathematical maturity to appreciate abstract concepts that generalize intuitive ideas. The early introduction of the SVD makes the book particularly useful for those interested in using linear algebra in applications such as scientific computing and data science. It is appropriate for a first proof-based course in linear algebra.
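For readers who want to see the SVD before opening the book, here is a minimal NumPy sketch (my example, not the authors'): factor a matrix and confirm that the factors reconstruct it.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])                        # a 3x2 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: A = U @ diag(s) @ Vt
print("singular values:", s)

# The reconstruction should match A to floating-point accuracy.
print(np.allclose(A, U @ np.diag(s) @ Vt))        # True
```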
Provides a complete update and reorganization of the previous books, with some material moving online; includes new problems, projects, and exercises; includes interactive coding resources to accompany the book, including examples in the text, exercises, projects, and reflection questions.
This book focuses on lattice-based cryptosystems, widely considered to be among the most promising post-quantum cryptosystems, and provides fundamental insights into how to construct provably secure cryptosystems from hard lattice problems. The concept of provable security is used to inform the choice of lattice tool for designing cryptosystems, including public-key encryption, identity-based encryption, attribute-based encryption, key exchange and digital signatures. Given its depth of coverage, the book especially appeals to graduate students and young researchers who plan to enter this research area.
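As a taste of how a hard lattice problem yields encryption, here is a toy sketch of single-bit encryption based on learning with errors (LWE); the parameters are deliberately tiny and insecure, and the scheme is a textbook illustration, not one of the book's constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 3329, 16, 64   # toy modulus, secret dimension, sample count (insecure!)

# Key generation: secret s; public key (A, b = A @ s + e mod q) with small error e.
s = rng.integers(0, q, n)
A = rng.integers(0, q, (m, n))
e = rng.integers(-2, 3, m)
b = (A @ s + e) % q

def encrypt(bit):
    r = rng.integers(0, 2, m)            # random 0/1 combination of the samples
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q     # hide the bit near 0 or near q/2
    return u, v

def decrypt(u, v):
    # v - u @ s = r @ e + bit * (q // 2); the noise r @ e is small, so the
    # result lands close to 0 for bit 0 and close to q/2 for bit 1.
    d = (v - u @ s) % q
    return int(q // 4 < d < 3 * q // 4)

u, v = encrypt(1)
print(decrypt(u, v))   # 1
```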
This book combines the advantages of high-dimensional data visualization and machine learning in the context of identifying complex n-D data patterns. It vastly expands the class of reversible lossless 2-D and 3-D visualization methods, which preserve the n-D information. This class of visual representations, called the General Lines Coordinates (GLCs), is accompanied by a set of algorithms for n-D data classification, clustering, dimension reduction, and Pareto optimization. The mathematical and theoretical analyses and methodology of GLC are included, and the usefulness of this new approach is demonstrated in multiple case studies. These include the Challenger disaster, world hunger data, health monitoring, image processing, text classification, market forecasts for a currency exchange rate, computer-aided medical diagnostics, and others. As such, the book offers a unique resource for students, researchers, and practitioners in the emerging field of Data Science.
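GLCs generalize classic reversible representations such as parallel coordinates; as a simple point of reference (my sketch, not taken from the book), here is how an n-D point survives losslessly as a polyline across n parallel axes.

```python
import numpy as np
import matplotlib.pyplot as plt

# Three 4-D points; each becomes a polyline across four parallel axes.
# The mapping is lossless: the original coordinates can be read back off the axes.
X = np.array([[0.2, 0.8, 0.5, 0.1],
              [0.9, 0.3, 0.7, 0.6],
              [0.4, 0.4, 0.2, 0.9]])

for row in X:
    plt.plot(range(X.shape[1]), row, marker="o")
plt.xticks(range(X.shape[1]), [f"x{i + 1}" for i in range(X.shape[1])])
plt.title("Parallel coordinates: a reversible 2-D view of 4-D data")
plt.show()
```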
Metaheuristic optimization has become a prime alternative for solving complex optimization problems in several areas, and practitioners and researchers have been paying extensive attention to metaheuristic algorithms that are based on natural phenomena. However, few books deal with the theoretical and experimental sides of these algorithms in a friendly manner, so this book presents a complete description of the most important metaheuristic optimization algorithms as well as a new metaheuristic named earthquake optimization. The book contains several practical exercises, and a toolbox for MATLAB® and a toolkit for LabVIEW are integrated as complementary material; these toolkits allow readers to move very quickly from a simulation environment to an experimental one. The book is suitable for researchers, students, and professionals in several areas, such as economics, architecture, computer science, electrical engineering, and control systems. Its unique features are as follows:
- Developed for researchers, undergraduate and graduate students, and practitioners
- A friendly description of the main metaheuristic optimization algorithms
- Theoretical and practical optimization examples
- A new earthquake optimization algorithm
- An updated state of the art and research optimization projects
The authors are multidisciplinary/interdisciplinary lecturers and researchers who have written a structured, friendly learning methodology for understanding each metaheuristic optimization algorithm presented in this book.
Soft computing methods such as neural networks and genetic algorithms draw on the problem-solving strategies of the natural world, which differ fundamentally from the mathematically based computing methods normally used in engineering. Human brains are highly effective computers with capabilities far beyond those of the most sophisticated electronic computers, and the 'soft computing' methods they use can solve very difficult inverse problems based on reduction in disorder. This book outlines these methods and applies them to a range of difficult engineering problems, including applications in computational mechanics, earthquake engineering, and engineering design. Most of these are difficult inverse problems - especially in engineering design - and are treated in depth.
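As a minimal illustration of the genetic-algorithm idea (my sketch, not an application from the book), the following evolves bit strings toward an all-ones target through selection, crossover, and mutation.

```python
import random

LENGTH, POP, GENS = 20, 30, 200

def fitness(bits):             # number of 1s; the maximum is LENGTH
    return sum(bits)

def crossover(a, b):           # single-point crossover of two parents
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):   # flip each bit with a small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == LENGTH:
        break
    parents = pop[: POP // 2]  # truncation selection: keep the fitter half
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP - len(parents))]

print("generation:", gen, "best fitness:", fitness(pop[0]))
```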
Modern computing relies on future and emergent technologies which have been conceived via interaction between computer science, engineering, chemistry, physics and biology. This highly interdisciplinary book presents advances in the fields of parallel, distributed and emergent information processing and computation. The book represents major breakthroughs in parallel quantum protocols, elastic cloud servers, structural properties of interconnection networks, internet of things, morphogenetic collective systems, swarm intelligence and cellular automata, unconventionality in parallel computation, algorithmic information dynamics, localized DNA computation, graph-based cryptography, slime mold inspired nano-electronics and cytoskeleton computers. Features:
- Truly interdisciplinary, spanning computer science, electronics, mathematics and biology
- Covers widely popular topics of future and emergent computing technologies: cloud computing, parallel computing, DNA computation, security and network analysis, cryptography, and theoretical computer science
- Provides unique chapters written by top experts in theoretical and applied computer science, information processing and engineering
From Parallel to Emergent Computing provides a visionary statement on how computing will advance in the next 25 years and what new fields of science will be involved in computing engineering. This book is a valuable resource for computer scientists working today, and in years to come.
This book provides insights into contemporary issues and challenges in soft computing applications and techniques in healthcare. It will be a useful guide for identifying, categorising and assessing the role of different soft computing techniques in disease diagnosis and prediction in light of technological advancements. The book explores applications in soft computing and covers empirical properties of artificial neural networks (ANNs), evolutionary computing, fuzzy logic and statistical techniques. It presents basic and advanced concepts to help beginners and industry professionals get up to speed on the latest developments in soft computing and healthcare systems. It incorporates the latest methodologies and challenges facing soft computing, examines descriptive, predictive and social network techniques and discusses analytics tools and their role in providing effective solutions for science and technology. The primary users of the book include researchers, academicians, postgraduate students, specialists and practitioners. Dr. Ashish Mishra is a professor in the Department of Computer Science and Engineering, Gyan Ganga Institute of Technology and Sciences, Jabalpur, Madhya Pradesh, India. He has contributed to organising the INSPIRE Science Internship Camp. He is a member of the Institute of Electrical and Electronics Engineers and a life member of the Computer Society of India. His research interests include the Internet of Things, data mining, cloud computing, image processing and knowledge-based systems. He holds nine patents filed with Intellectual Property India. He has authored four books in the areas of data mining, image processing and LaTeX. Dr. G. Suseendran is an assistant professor in the Department of Information Technology, School of Computing Sciences, Vels Institute of Science, Technology & Advanced Studies (VISTAS), Chennai, Tamil Nadu, India. His research interests include ad-hoc networks, the Internet of Things, data mining, cloud computing, image processing, knowledge-based systems, and Web information exploration. He has published more than 75 research papers in international journals indexed in the Science Citation Index and Scopus, in Springer book chapters, in IEEE Access, and in UGC-refereed journals. Prof. Trung-Nghia Phung is an associate professor and Head of Academic Affairs at Thai Nguyen University of Information and Communication Technology (ICTU). He has published more than 60 research papers. His main research interest lies in the field of speech, audio, and biomedical signal processing. He serves as a technical program committee member, track chair, session chair, and reviewer for many international conferences and journals. He was a co-chair of the International Conference on Advances in Information and Communication Technology 2016 (ICTA 2016) and a session chair of the 4th International Conference on Information System Design and Intelligent Applications (INDIA 2017).
This compendium provides a detailed account of the lognormality principle characterizing human motor behavior, summarizing a sound theoretical framework for modeling such behavior and introducing the most recent algorithms for extracting the lognormal components of complex movements in 2, 2.5 and 3 dimensions. It also vividly reports the most advanced applications to handwriting analysis and recognition, signature and writer verification, gesture recognition and calligraphy generation, evaluation of motor skills, improvement/degradation with aging, handwriting learning, education and developmental deficits, prescreening of children with ADHD (attention deficit hyperactivity disorder), monitoring of concussion recovery, diagnosis and monitoring of Alzheimer's and Parkinson's diseases, and aging effects in speech and handwriting. The volume provides a unique and useful source of references on the lognormality principle, an update on the most recent advances and an outlook on the most promising future developments in e-Security, e-Learning and e-Health.
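The core of the lognormality principle is that the speed profile of a rapid stroke follows a lognormal curve in time; here is a minimal sketch of one such profile (the parameter values are illustrative assumptions, not taken from the book).

```python
import numpy as np

def lognormal_speed(t, D=1.0, t0=0.0, mu=-1.0, sigma=0.3):
    """Lognormal speed profile of a single stroke: D scales the amplitude,
    t0 is the command onset time, and mu/sigma shape the log-time response."""
    dt = t - t0
    v = np.zeros_like(t)
    m = dt > 0                                  # the profile starts after onset
    v[m] = (D / (sigma * np.sqrt(2 * np.pi) * dt[m])
            * np.exp(-(np.log(dt[m]) - mu) ** 2 / (2 * sigma ** 2)))
    return v

t = np.linspace(0, 1.5, 300)
v = lognormal_speed(t)
print("peak speed %.3f at t = %.3f s" % (v.max(), t[v.argmax()]))
```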
The vast circulation of mobile devices, sensors and data means that the social world is now defined by a complex interweaving of human and machine agency. Key to this is the growing power of algorithms - the decision-making parts of code - in our software-dense and data-rich environments. Algorithms can shape how we are treated, what we know, who we connect with and what we encounter, and they present us with some important questions about how society operates and how we understand it. This book offers a series of concepts, approaches and ideas for understanding the relations between algorithms and power. Each chapter provides a unique perspective on the integration of algorithms into the social world. As such, this book directly tackles some of the most important questions facing the social sciences today. This book was originally published as a special issue of Information, Communication & Society.
The first edition of Exercises in Programming Style was honored as an ACM Notable Book and praised as "The best programming book of the decade." This new edition retains the same presentation but has been upgraded to Python 3, and there is a new section on neural network styles. Using a simple computational task (term frequency) to illustrate different programming styles, Exercises in Programming Style helps readers understand the various ways of writing programs and designing systems. It is designed to be used in conjunction with code provided on an online repository. The book complements and explains the raw code in a way that is accessible to anyone who regularly practices the art of programming. The book can also be used in advanced programming courses in computer science and software engineering programs. The book contains 40 different styles for writing the term frequency task. The styles are grouped into ten categories: historical, basic, function composition, objects and object interactions, reflection and metaprogramming, adversity, data-centric, concurrency, interactivity, and neural networks. The author states the constraints in each style and explains the example programs. Each chapter first presents the constraints of the style, next shows an example program, and then gives a detailed explanation of the code. Most chapters also have sections focusing on the use of the style in systems design as well as sections describing the historical context in which the programming style emerged.
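The book's running task is term frequency; a plain imperative rendition (my sketch, not one of the book's 40 styles verbatim) looks like this:

```python
import collections
import re
import sys

STOPWORDS = frozenset({"the", "a", "an", "of", "and", "to", "in", "is"})

def term_frequency(text, top=25):
    """Count word occurrences, ignoring case and a small stopword list."""
    words = re.findall(r"[a-z]{2,}", text.lower())
    counts = collections.Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for word, n in term_frequency(f.read()):
            print(word, n)
```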
Disk-Based Algorithms for Big Data is a product of recent advances in the areas of big data, data analytics, and the underlying file systems and data management algorithms used to support the storage and analysis of massive data collections. The book discusses hard disks and their impact on data management, since hard disk drives continue to be common in large data clusters. It also explores ways to store and retrieve data through primary and secondary indices. This includes a review of different in-memory sorting and searching algorithms that build a foundation for more sophisticated on-disk approaches like mergesort, B-trees, and extendible hashing. Following this introduction, the book transitions to more recent topics, including advanced storage technologies like solid-state drives and holographic storage; peer-to-peer (P2P) communication; large file systems and query languages like Hadoop/HDFS, Hive, Cassandra, and Presto; and NoSQL databases like Neo4j for graph structures and MongoDB for unstructured document data. Designed for senior undergraduate and graduate students, as well as professionals, this book is useful for anyone interested in understanding the foundations of and advances in big data storage, management, and analytics. About the author: Dr. Christopher G. Healey is a tenured Professor in the Department of Computer Science and the Goodnight Distinguished Professor of Analytics in the Institute for Advanced Analytics, both at North Carolina State University in Raleigh, North Carolina. He has published over 50 articles in major journals and conferences in the areas of visualization, visual and data analytics, computer graphics, and artificial intelligence. He is a recipient of the National Science Foundation's CAREER Early Faculty Development Award and the North Carolina State University Outstanding Instructor Award. He is a Senior Member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and an Associate Editor of ACM Transactions on Applied Perception, the leading worldwide journal on the application of human perception to issues in computer science.
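To give a taste of the on-disk approaches covered, here is a minimal external mergesort sketch (my illustration, not code from the book): sort chunks that fit in memory, spill each sorted run to a temporary file, then k-way merge the runs.

```python
import heapq
import random
import tempfile

def _spill(sorted_chunk):
    """Write one sorted run to a temporary file and return its path."""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.writelines(f"{x}\n" for x in sorted_chunk)
    f.close()
    return f.name

def external_sort(numbers, chunk_size=1000):
    """Sort ints that would not fit in memory by spilling sorted runs to disk."""
    runs, chunk = [], []
    for x in numbers:
        chunk.append(x)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    files = [open(r) for r in runs]
    try:
        streams = [(int(line) for line in f) for f in files]
        yield from heapq.merge(*streams)   # k-way merge of the sorted runs
    finally:
        for f in files:
            f.close()

data = [random.randrange(10**6) for _ in range(5000)]
print(list(external_sort(data)) == sorted(data))   # True
```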
"Ask not what your compiler can do for you, ask what you can do for your compiler." --John Levesque, Director of Cray's Supercomputing Centers of Excellence The next decade of computationally intense computing lies with more powerful multi/manycore nodes where processors share a large memory space. These nodes will be the building block for systems that range from a single node workstation up to systems approaching the exaflop regime. The node itself will consist of 10's to 100's of MIMD (multiple instruction, multiple data) processing units with SIMD (single instruction, multiple data) parallel instructions. Since a standard, affordable memory architecture will not be able to supply the bandwidth required by these cores, new memory organizations will be introduced. These new node architectures will represent a significant challenge to application developers. Programming for Hybrid Multi/Manycore MPP Systems attempts to briefly describe the current state-of-the-art in programming these systems, and proposes an approach for developing a performance-portable application that can effectively utilize all of these systems from a single application. The book starts with a strategy for optimizing an application for multi/manycore architectures. It then looks at the three typical architectures, covering their advantages and disadvantages. The next section of the book explores the other important component of the target-the compiler. The compiler will ultimately convert the input language to executable code on the target, and the book explores how to make the compiler do what we want. The book then talks about gathering runtime statistics from running the application on the important problem sets previously discussed. How best to utilize available memory bandwidth and virtualization is covered next, along with hybridization of a program. The last part of the book includes several major applications, and examines future hardware advancements and how the application developer may prepare for those advancements.
Find the right algorithm for your image processing application. Exploring achievements from the mid-1990s onward, Circular and Linear Regression: Fitting Circles and Lines by Least Squares explains how to use modern algorithms to fit geometric contours (circles and circular arcs) to observed data in image processing and computer vision. The author covers all facets of the methods: geometric, statistical, and computational. He looks at how the numerical algorithms relate to one another through underlying ideas, compares the strengths and weaknesses of each algorithm, and illustrates how to combine the algorithms to achieve the best performance. After introducing errors-in-variables (EIV) regression analysis and its history, the book summarizes the solution of the linear EIV problem and highlights its main geometric and statistical properties. It next describes the theory of fitting circles by least squares, before focusing on practical geometric and algebraic circle fitting methods. The text then covers the statistical analysis of curve and circle fitting methods. The last chapter presents a sample of "exotic" circle fits, including some mathematically sophisticated procedures that use complex numbers and conformal mappings of the complex plane. Essential for understanding the advantages and limitations of the practical schemes, this book thoroughly addresses the theoretical aspects of the fitting problem. It also identifies obscure issues that may be relevant in future research.
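One classic algebraic fit of the kind the book discusses, the Kasa fit, reduces circle fitting to ordinary least squares; a minimal sketch (the test data and function name are mine):

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kasa) circle fit.

    A circle (x - a)^2 + (y - b)^2 = r^2 rearranges to the linear model
    x^2 + y^2 = 2ax + 2by + c with c = r^2 - a^2 - b^2, solvable by least squares.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)   # center (a, b) and radius r

# Noisy samples of a circle centered at (1, -2) with radius 3
theta = np.linspace(0, 2 * np.pi, 50)
x = 1 + 3 * np.cos(theta) + np.random.normal(0, 0.05, theta.size)
y = -2 + 3 * np.sin(theta) + np.random.normal(0, 0.05, theta.size)
print(fit_circle_kasa(x, y))
```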
Combining knowledge with strategies, Data Structure Practice for Collegiate Programming Contests and Education presents the first comprehensive book on data structures in programming contests. This book is designed for training collegiate programming contest teams in the nuances of data structures and for helping college students in computer-related majors to gain a deeper understanding of data structures. Based on successful experiences in many world-level contests, the book includes 204 typical problems and detailed analyses selected from the ACM International Collegiate Programming Contest and other major programming contests since 1990. It is divided into four sections that focus on:
- Fundamental programming skills
- Experiments for linear lists
- Experiments for trees
- Experiments for graphs
Each chapter contains a set of problems and includes hints. The book also provides test data for most problems, as well as sources and IDs for online judges that help with improving programming skills. Introducing a multi-options model and considerations of context, Data Structure Practice for Collegiate Programming Contests and Education encourages students to think creatively in solving programming problems. By taking readers through practical contest problems from analysis to implementation, it provides a complete source for enhancing understanding and polishing skills in programming.
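As a flavour of the contest-style structures such a book drills, here is a compact disjoint-set (union-find) implementation with a typical use, counting connected components (my example, not one of the book's 204 problems).

```python
class DisjointSet:
    """Union-find with path halving and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra                 # attach the smaller tree to the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

ds = DisjointSet(5)
for a, b in [(0, 1), (1, 2), (3, 4)]:
    ds.union(a, b)
print(len({ds.find(i) for i in range(5)}))   # 2 connected components
```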
With the growing interest in and use of big data analytics in many industries and in many research fields around the globe, this new volume addresses the need for a comprehensive resource on the core concepts of big data analytics along with the tools, techniques, and methodologies. The book gives the why and the how of big data analytics in an organized and straightforward manner, using both theoretical and practical approaches. The book's authors have organized the contents in a systematic manner, starting with an introduction and overview of big data analytics and then delving into pre-processing methods, feature selection methods and algorithms, big data streams, and big data classification. Such terms and methods as swarm intelligence, data mining, the bat algorithm and genetic algorithms, big data streams, and many more are discussed. The authors explain how deep learning and machine learning along with other methods and tools are applied in big data analytics. The last section of the book presents a selection of illustrative case studies that show examples of the use of data analytics in industries such as health care, business, education, and social media. Research Practitioner's Handbook on Big Data Analytics will be a valuable addition to the libraries of practitioners in data collection in many industries along with research scholars and faculty in the domain of big data analytics. The book can also serve as a handy textbook for courses in data collection, data mining, and big data analytics.
Recommender systems provide users (businesses or individuals) with personalized online recommendations of products or information, to address the problem of information overload and improve personalized services. Recent successful applications of recommender systems are providing solutions to transform online services for e-government, e-business, e-commerce, e-shopping, e-library, e-learning, e-tourism, and more. This unique compendium not only describes theoretical research but also reports on new application developments, prototypes, and real-world case studies of recommender systems. The comprehensive volume provides readers with a timely snapshot of how new recommendation methods and algorithms can overcome challenging issues. Furthermore, the monograph systematically presents three dimensions of recommender systems - basic recommender system concepts, advanced recommender system methods, and real-world recommender system applications. By providing state-of-the-art knowledge, this excellent reference text will immensely benefit researchers, managers, and professionals in business, government, and education to understand the concepts, methods, algorithms and application developments in recommender systems.
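As a minimal sketch of one classic method (the toy data and naming are mine), item-based collaborative filtering scores an unrated item by a cosine-similarity-weighted average of the user's own ratings:

```python
import numpy as np

# Toy user-item rating matrix (rows = users, columns = items); 0 means unrated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4.0]])

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)

def predict(user, item):
    """Predict a rating as a similarity-weighted average of the user's ratings."""
    rated = R[user] > 0
    w = S[item, rated]
    return float(w @ R[user, rated] / w.sum())

print(predict(user=1, item=1))   # predicted rating for an item user 1 has not rated
```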
Examines classic algorithms, geometric diagrams, and mechanical principles for enhancing visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming.
This is a how-to book for solving geometric problems robustly, or error free, in actual practice. The contents and accompanying source code are based on feature requests and feedback received from industry professionals and academics who want both descriptions and source code for implementations of geometric algorithms. The book provides a framework for geometric computing using several arithmetic systems and describes how to select the appropriate system for the problem at hand. Key features:
- A framework of arithmetic systems that can be applied to many geometric algorithms to obtain robust or error-free implementations
- Detailed derivations for algorithms that lead to implementable code
- Guidance on using the book's concepts to derive algorithms in the reader's field of application
- The Geometric Tools Library, a repository of well-tested code at the Geometric Tools website, https://www.geometrictools.com, that implements the book's concepts
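To illustrate what choosing an arithmetic system buys (my sketch in Python; the book's own library is C++), compare a floating-point orientation test with an exact rational one on a near-degenerate input:

```python
from fractions import Fraction

def orient2d(a, b, c):
    """Sign of (b - a) x (c - a): +1 left turn, -1 right turn, 0 collinear."""
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def exact(p):
    return tuple(Fraction(v) for v in p)   # Fraction(float) converts exactly

# A nearly collinear triple: double precision rounds the tiny signed area to zero,
# while exact rational arithmetic recovers the true (negative) orientation.
a, b, c = (0.5 + 2**-53, 0.5), (12.0, 12.0), (24.0, 24.0)
print("float:", orient2d(a, b, c))                        # 0, wrongly "collinear"
print("exact:", orient2d(exact(a), exact(b), exact(c)))   # -1, a right turn
```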
Multi-objective optimization problems (MOPs) and uncertain optimization problems (UOPs), which widely exist in real life, are challenging problems in the fields of decision making, system design, and scheduling, amongst others. Decomposition exploits the ideas of 'making things simple' and 'divide and conquer' to transform a complex problem into a series of simple ones, with the aim of reducing the computational complexity. In order to tackle these two types of complicated optimization problems, this book introduces the decomposition strategy and conducts a systematic study to perfect the usage of decomposition in the field of multi-objective optimization and extend its usage in the field of uncertain optimization.
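The decomposition idea can be shown in a few lines (an illustrative sketch, not the book's algorithm): scalarize a two-objective problem into many single-objective subproblems, one per weight vector, and solve each independently.

```python
import numpy as np

# Two conflicting objectives on x in [0, 1] (a classic bi-objective toy problem).
f1 = lambda x: x**2
f2 = lambda x: (x - 1)**2

xs = np.linspace(0, 1, 1001)
front = []
for w in np.linspace(0, 1, 11):           # one subproblem per weight vector (w, 1 - w)
    g = w * f1(xs) + (1 - w) * f2(xs)     # weighted-sum scalarization
    x_best = xs[np.argmin(g)]             # each subproblem is a simple 1-D minimization
    front.append((f1(x_best), f2(x_best)))

for p in front:
    print("%.3f  %.3f" % p)               # sampled points along the Pareto front
```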
This book acquaints readers with recent developments in dynamical systems theory and its applications, with a strong focus on the control and estimation of nonlinear systems. Several algorithms are proposed and worked out for a set of model systems, in particular so-called input-affine or bilinear systems, which can serve to approximate a wide class of nonlinear control systems. These can either take the form of state space models or be represented by an input-output equation. The approach taken here further highlights the role of modern mathematical and conceptual tools, including differential algebraic theory, observer design for nonlinear systems and generalized canonical forms.
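For orientation, the standard definitions of the model classes mentioned (general definitions, not specific to this book) are:

\dot{x}(t) = f(x(t)) + \sum_{i=1}^{m} g_i(x(t))\, u_i(t) \quad \text{(input-affine)}

\dot{x}(t) = A\, x(t) + \sum_{i=1}^{m} u_i(t)\, N_i\, x(t) + B\, u(t) \quad \text{(bilinear)}

Bilinear systems are the special case in which the vector fields are linear in the state, which is what makes them convenient approximants for a wide class of nonlinear control systems.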