Metaheuristic algorithms are generic optimization tools that can solve very complex problems characterized by very large search spaces; metaheuristic methods reduce the effective size of the search space through effective search strategies. Book features:
- Provides a unified view of the most popular metaheuristic methods currently in use
- Includes the concepts readers need to implement and modify known metaheuristic methods to solve their own problems
- Covers design aspects and implementation in MATLAB (R)
- Contains numerous examples of problems and solutions that demonstrate the power of these optimization methods
The material has been written from a teaching perspective, so the book is primarily intended for undergraduate and postgraduate students of artificial intelligence, metaheuristic methods, and evolutionary computation. The objective is to bridge the gap between metaheuristic techniques and complex optimization problems that profit from the convenient properties of metaheuristic approaches. Practicing engineers who are not familiar with metaheuristic computation will also appreciate that the techniques discussed go beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in their fields.
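To make the idea of a guided search strategy concrete, here is a minimal simulated annealing loop, one of the classic metaheuristics of the kind such books survey. This is an illustrative Python sketch (the book itself works in MATLAB); the function names and toy cost function are invented for the example.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    """Minimize `cost` starting from x0 by annealed local search."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)              # propose a nearby candidate
        fy = cost(y)
        # Always accept improvements; accept uphill moves with a
        # temperature-dependent probability so we can escape local optima.
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                 # gradually reduce exploration
    return best, fbest

# Toy usage: minimize a bumpy one-dimensional function.
f = lambda x: x * x + 10 * math.sin(x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, x0=5.0))
```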
Extremal Optimization: Fundamentals, Algorithms, and Applications introduces state-of-the-art extremal optimization (EO) and modified EO (MEO) solutions from fundamentals, methodologies, and algorithms to applications, based on numerous classic publications and the authors' recent original research results. It promotes the movement of EO from academic study to practical applications. The book covers four aspects, beginning with a general review of real-world optimization problems and popular solutions, with a focus on computational complexity, such as "NP-hard" problems and the "phase transitions" occurring on the search landscape. Next, it introduces computational extremal dynamics and its applications in EO, from principles, mechanisms, and algorithms to experiments on benchmark problems such as TSP, spin glass, Max-SAT (maximum satisfiability), and graph partition. It then presents studies on the fundamental features of search dynamics and mechanisms in EO, with a focus on self-organized optimization, evolutionary probability distribution, and structure features (e.g., backbones), based on the authors' recent research results. Finally, it discusses applications of EO and MEO in multiobjective optimization, systems modeling, intelligent control, and production scheduling. The authors present the advanced features of EO in solving NP-hard problems through problem formulation, algorithms, and simulation studies on popular benchmarks and industrial applications, and also focus on the development of MEO and its applications. This book can be used as a reference by graduate students, research developers, and practical engineers who develop optimization solutions for complex systems whose hardness cannot be addressed by mathematical optimization or other computational intelligence techniques, such as evolutionary computation.
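For readers new to EO, the standard tau-EO scheme (rank the solution's components from worst to best local fitness, then mutate one chosen by a power law over ranks) can be sketched in a few lines. The toy problem below, a one-dimensional ferromagnetic spin chain whose ground state is all spins aligned, is invented for illustration and is not code from the book.

```python
import random

def tau_eo(n=64, tau=1.4, steps=5000):
    """tau-EO on a periodic 1-D spin chain; ground-state energy is -n."""
    s = [random.choice([-1, 1]) for _ in range(n)]

    def local_fitness(i):
        # Fraction of this spin's two bonds that are satisfied (aligned).
        nb = [s[i - 1], s[(i + 1) % n]]
        return sum(1 for v in nb if v == s[i]) / len(nb)

    def energy():
        return -sum(s[i] * s[(i + 1) % n] for i in range(n))

    best = energy()
    for _ in range(steps):
        ranked = sorted(range(n), key=local_fitness)        # worst first
        # Power-law rank selection: rank k picked with probability ~ k^(-tau).
        weights = [(k + 1) ** -tau for k in range(n)]
        i = random.choices(ranked, weights=weights)[0]
        s[i] = -s[i]                # unconditionally mutate the chosen spin
        best = min(best, energy())
    return best

print(tau_eo())   # approaches -n as the extremal dynamics self-organize
```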
Disk-Based Algorithms for Big Data is a product of recent advances in the areas of big data, data analytics, and the underlying file systems and data management algorithms used to support the storage and analysis of massive data collections. The book discusses hard disks and their impact on data management, since hard disk drives continue to be common in large data clusters. It also explores ways to store and retrieve data through primary and secondary indices. This includes a review of different in-memory sorting and searching algorithms that build a foundation for more sophisticated on-disk approaches like mergesort, B-trees, and extendible hashing. Following this introduction, the book transitions to more recent topics, including advanced storage technologies like solid-state drives and holographic storage; peer-to-peer (P2P) communication; large file systems and query languages like Hadoop/HDFS, Hive, Cassandra, and Presto; and NoSQL databases like Neo4j for graph structures and MongoDB for unstructured document data. Designed for senior undergraduate and graduate students, as well as professionals, this book is useful for anyone interested in understanding the foundations of and advances in big data storage, management, and analytics. About the author: Dr. Christopher G. Healey is a tenured Professor in the Department of Computer Science and the Goodnight Distinguished Professor of Analytics in the Institute for Advanced Analytics, both at North Carolina State University in Raleigh, North Carolina. He has published over 50 articles in major journals and conferences in the areas of visualization, visual and data analytics, computer graphics, and artificial intelligence. He is a recipient of the National Science Foundation's CAREER Early Faculty Development Award and the North Carolina State University Outstanding Instructor Award. He is a Senior Member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and an Associate Editor of ACM Transactions on Applied Perception, the leading worldwide journal on the application of human perception to issues in computer science.
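As a taste of the on-disk approaches mentioned above, here is a hedged Python sketch of external mergesort: sort bounded-size runs in memory, spill each run to disk, then k-way merge the runs. The one-integer-per-line file format and run size are assumptions made for the example.

```python
import heapq
import os
import tempfile

def external_sort(infile, outfile, run_size=100_000):
    """Sort a huge file of one integer per line using bounded memory."""
    runs = []
    with open(infile) as f:
        while True:
            chunk = [line for _, line in zip(range(run_size), f)]
            if not chunk:
                break
            chunk = [l if l.endswith("\n") else l + "\n" for l in chunk]
            chunk.sort(key=int)                    # in-memory sort of one run
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.writelines(chunk)
            tmp.close()
            runs.append(tmp.name)
    # k-way merge of the sorted runs; heapq.merge streams lazily from disk.
    streams = [open(r) for r in runs]
    with open(outfile, "w") as out:
        out.writelines(heapq.merge(*streams, key=int))
    for s in streams:
        s.close()
    for r in runs:
        os.remove(r)
```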
Find the right algorithm for your image processing application. Exploring the advances made since the mid-1990s, Circular and Linear Regression: Fitting Circles and Lines by Least Squares explains how to use modern algorithms to fit geometric contours (circles and circular arcs) to observed data in image processing and computer vision. The author covers all facets of the methods: geometric, statistical, and computational. He looks at how the numerical algorithms relate to one another through underlying ideas, compares the strengths and weaknesses of each algorithm, and illustrates how to combine the algorithms to achieve the best performance. After introducing errors-in-variables (EIV) regression analysis and its history, the book summarizes the solution of the linear EIV problem and highlights its main geometric and statistical properties. It next describes the theory of fitting circles by least squares, before focusing on practical geometric and algebraic circle fitting methods. The text then covers the statistical analysis of curve and circle fitting methods. The last chapter presents a sample of "exotic" circle fits, including some mathematically sophisticated procedures that use complex numbers and conformal mappings of the complex plane. Essential for understanding the advantages and limitations of the practical schemes, this book thoroughly addresses the theoretical aspects of the fitting problem. It also identifies obscure issues that may be relevant in future research.
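One of the classic algebraic fits in this literature, the Kasa fit, is easy to sketch: rewrite the circle equation (x-a)^2 + (y-b)^2 = r^2 as the linear model 2ax + 2by + (r^2 - a^2 - b^2) = x^2 + y^2 and solve by ordinary least squares. The NumPy code below is a generic illustration of that one method, not the author's implementation.

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic least-squares circle fit (Kasa method)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)        # c = r^2 - a^2 - b^2
    return a, b, r

# Noisy points on a circle centered at (1, 2) with radius 3.
t = np.linspace(0, 2 * np.pi, 50)
x = 1 + 3 * np.cos(t) + np.random.normal(0, 0.05, t.size)
y = 2 + 3 * np.sin(t) + np.random.normal(0, 0.05, t.size)
print(fit_circle_kasa(x, y))            # approximately (1, 2, 3)
```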
The first edition of Exercises in Programming Style was honored as an ACM Notable Book and praised as "The best programming book of the decade." This new edition retains the same presentation but has been upgraded to Python 3, and there is a new section on neural network styles. Using a simple computational task (term frequency) to illustrate different programming styles, Exercises in Programming Style helps readers understand the various ways of writing programs and designing systems. It is designed to be used in conjunction with code provided on an online repository. The book complements and explains the raw code in a way that is accessible to anyone who regularly practices the art of programming. The book can also be used in advanced programming courses in computer science and software engineering programs. The book contains 40 different styles for writing the term frequency task. The styles are grouped into ten categories: historical, basic, function composition, objects and object interactions, reflection and metaprogramming, adversity, data-centric, concurrency, interactivity, and neural networks. The author states the constraints in each style and explains the example programs. Each chapter first presents the constraints of the style, next shows an example program, and then gives a detailed explanation of the code. Most chapters also have sections focusing on the use of the style in systems design as well as sections describing the historical context in which the programming style emerged.
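To give a flavor of the book's approach, here is the term-frequency task written in two contrasting styles in Python 3. The style names and constraints below are paraphrases invented for this illustration, not the book's exact chapters.

```python
import re
import sys
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "it"}

# Style 1: "pipeline" -- a chain of small pure functions and expressions.
def term_freq_pipeline(text):
    words = re.findall(r"[a-z]{2,}", text.lower())
    return Counter(w for w in words if w not in STOP).most_common(25)

# Style 2: "monolithic" -- one imperative block, everything inline.
def term_freq_monolithic(text):
    freqs = {}
    for w in re.findall(r"[a-z]{2,}", text.lower()):
        if w not in STOP:
            freqs[w] = freqs.get(w, 0) + 1
    return sorted(freqs.items(), key=lambda kv: -kv[1])[:25]

if __name__ == "__main__":
    text = (open(sys.argv[1]).read() if len(sys.argv) > 1
            else "the quick brown fox and the lazy dog and the fox")
    print(term_freq_pipeline(text))
    print(term_freq_monolithic(text))    # same answer, different shape
```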
Metaheuristic optimization has become a prime alternative for solving complex optimization problems in several areas, so practitioners and researchers have been paying extensive attention to metaheuristic algorithms that are mainly based on natural phenomena. However, few books deal with both the theoretical and experimental sides of these algorithms in a friendly manner. This book therefore presents a novel structure that includes a complete description of the most important metaheuristic optimization algorithms, as well as a newly proposed metaheuristic named earthquake optimization. The book also offers several practical exercises, and a toolbox for MATLAB (R) and a toolkit for LabVIEW are integrated as complementary material; these toolkits allow readers to move from a simulation environment to an experimental one very quickly. This book is suitable for researchers, students, and professionals in several areas, such as economics, architecture, computer science, electrical engineering, and control systems. The unique features of this book are as follows:
- Developed for researchers, undergraduate and graduate students, and practitioners
- A friendly description of the main metaheuristic optimization algorithms
- Theoretical and practical optimization examples
- A new earthquake optimization algorithm
- Updated state-of-the-art and research optimization projects
The authors are multidisciplinary/interdisciplinary lecturers and researchers who have written a structured, friendly learning methodology for understanding each metaheuristic optimization algorithm presented in this book.
This volume comprises six contributed chapters reporting the latest findings on applications of machine learning for big data analytics. Big data is a term for data sets that are so large or complex that traditional data processing software is inadequate to deal with them. The possible challenges in this direction include capture, storage, analysis, data curation, search, sharing, transfer, visualization, querying, updating and information privacy. Big data analytics is the process of examining large and varied data sets - i.e., big data - to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. This volume is intended to be used as a reference by undergraduate and postgraduate students of the disciplines of computer science, electronics and telecommunication, information science and electrical engineering. THE SERIES: FRONTIERS IN COMPUTATIONAL INTELLIGENCE The series Frontiers in Computational Intelligence is envisioned to provide comprehensive coverage and understanding of cutting-edge research in computational intelligence. It intends to augment the scholarly discourse on all topics relating to advances in artificial life and machine learning in the form of metaheuristics, approximate reasoning, and robotics. The latest research findings are coupled with applications to varied domains of engineering and computer science. This field is growing steadily, especially with the advent of novel machine learning algorithms being applied to different domains of engineering and technology. The series brings together leading researchers who intend to continue advancing the field and creating broad knowledge about the most recent research.
This is a how-to book for solving geometric problems robustly or error-free in actual practice. The contents and accompanying source code are based on the feature requests and feedback received from industry professionals and academics who want both descriptions and source code for implementations of geometric algorithms. The book provides a framework for geometric computing using several arithmetic systems and describes how to select the appropriate system for the problem at hand. Key features:
- A framework of arithmetic systems that can be applied to many geometric algorithms to obtain robust or error-free implementations
- Detailed derivations for algorithms that lead to implementable code
- Guidance on using the book's concepts to derive algorithms in the reader's own field of application
- The Geometric Tools Library, a repository of well-tested code at the Geometric Tools website, https://www.geometrictools.com, that implements the book's concepts
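The robustness theme is easy to demonstrate. The classic 2-D orientation predicate, evaluated in exact rational arithmetic, always returns the true sign of the determinant, whereas a naive floating-point evaluation can round a tiny value to the wrong side of zero on nearly collinear inputs. This Python sketch only illustrates the idea of choosing an arithmetic system for the problem; the Geometric Tools Library itself is C++.

```python
from fractions import Fraction

def orient2d(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right
    turn, 0 collinear. Exact rational arithmetic cannot misjudge the sign."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

# These doubles are nearly, but not exactly, collinear once rounded to
# binary floating point, so the determinant is tiny but nonzero.
pts = [(0.1, 0.3), (0.2, 0.6), (0.3, 0.9)]
print("exact sign:", orient2d(*pts))
fd = ((pts[1][0] - pts[0][0]) * (pts[2][1] - pts[0][1])
      - (pts[1][1] - pts[0][1]) * (pts[2][0] - pts[0][0]))
print("float det :", fd)   # tiny float value whose sign is not trustworthy
```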
"Ask not what your compiler can do for you, ask what you can do for your compiler." --John Levesque, Director of Cray's Supercomputing Centers of Excellence The next decade of computationally intense computing lies with more powerful multi/manycore nodes where processors share a large memory space. These nodes will be the building block for systems that range from a single node workstation up to systems approaching the exaflop regime. The node itself will consist of 10's to 100's of MIMD (multiple instruction, multiple data) processing units with SIMD (single instruction, multiple data) parallel instructions. Since a standard, affordable memory architecture will not be able to supply the bandwidth required by these cores, new memory organizations will be introduced. These new node architectures will represent a significant challenge to application developers. Programming for Hybrid Multi/Manycore MPP Systems attempts to briefly describe the current state-of-the-art in programming these systems, and proposes an approach for developing a performance-portable application that can effectively utilize all of these systems from a single application. The book starts with a strategy for optimizing an application for multi/manycore architectures. It then looks at the three typical architectures, covering their advantages and disadvantages. The next section of the book explores the other important component of the target-the compiler. The compiler will ultimately convert the input language to executable code on the target, and the book explores how to make the compiler do what we want. The book then talks about gathering runtime statistics from running the application on the important problem sets previously discussed. How best to utilize available memory bandwidth and virtualization is covered next, along with hybridization of a program. The last part of the book includes several major applications, and examines future hardware advancements and how the application developer may prepare for those advancements.
Combining knowledge with strategies, Data Structure Practice for Collegiate Programming Contests and Education presents the first comprehensive book on data structures in programming contests. This book is designed for training collegiate programming contest teams in the nuances of data structures and for helping college students in computer-related majors gain a deeper understanding of data structures. Based on successful experiences in many world-level contests, the book includes 204 typical problems and detailed analyses selected from the ACM International Collegiate Programming Contest and other major programming contests since 1990. It is divided into four sections that focus on:
- Fundamental programming skills
- Experiments for linear lists
- Experiments for trees
- Experiments for graphs
Each chapter contains a set of problems and includes hints. The book also provides test data for most problems, as well as sources and IDs for online judges, which help with improving programming skills. Introducing a multi-options model and considerations of context, the book encourages students to think creatively in solving programming problems. By taking readers through practical contest problems from analysis to implementation, it provides a complete source for enhancing understanding and polishing skills in programming.
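Typical of the graph experiments such contests demand is the disjoint-set union (union-find) structure, a staple for connectivity queries and Kruskal's MST. The sketch below is a generic illustration, not a problem from the book.

```python
class DSU:
    """Disjoint-set union with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                # already in the same component
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach smaller tree under larger
        self.size[ra] += self.size[rb]
        return True

d = DSU(5)
d.union(0, 1); d.union(3, 4)
print(d.find(1) == d.find(0), d.find(2) == d.find(3))   # True False
```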
- Provides a comprehensive introduction to multi-robot systems planning and task allocation
- Explores multi-robot aerial planning; flight planning; orienteering and coverage; and deployment, patrolling, and foraging
- Includes real-world case studies
- Treats different aspects of cooperation in multi-agent systems
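One standard formulation of task allocation is the linear assignment problem. The sketch below solves a tiny robot-to-task instance with SciPy's Hungarian-algorithm solver; the coordinates and distance-based cost model are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

robots = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
tasks = np.array([[4.0, 1.0], [1.0, 4.0], [0.5, 0.5]])

# cost[i, j] = Euclidean distance from robot i to task j
cost = np.linalg.norm(robots[:, None, :] - tasks[None, :, :], axis=2)

# Optimal one-to-one assignment minimizing total travel cost.
r_idx, t_idx = linear_sum_assignment(cost)
for r, t in zip(r_idx, t_idx):
    print(f"robot {r} -> task {t} (cost {cost[r, t]:.2f})")
print("total cost:", cost[r_idx, t_idx].sum())
```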
The field of multidimensional data structures is large and growing very quickly. Here, for the first time, is a thorough treatment of multidimensional point data, object and image-based representations, intervals and small rectangles, and high-dimensional datasets. The book includes a thorough introduction; a comprehensive survey of spatial and multidimensional data structures and algorithms; and implementation details for the most useful data structures. Along with hundreds of worked exercises and hundreds of illustrations, the result is an excellent and valuable reference tool for professionals in many areas, including computer graphics, databases, geographic information systems (GIS), game programming, image processing, pattern recognition, solid modeling, similarity retrieval, and VLSI design. Winner of a 2006 Best Book Award in Professional and Scholarly Publishing from the Association of American Publishers. Morgan Kaufmann would like to congratulate Hanan Samet on receiving the UCGIS 2009 Research Award. Read the announcement here: http://www.ucgis.org/summer2009/researchaward.htm
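A minimal k-d tree, one of the multidimensional point structures treated in depth in this literature, shows the flavor: build by cycling split axes, then prune the nearest-neighbor search with the split-plane test. This is a generic Python sketch, not the book's code.

```python
def build(points, depth=0):
    """Build a 2-D k-d tree by median split, alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, q, best=None):
    """Return (point, squared_distance) of the nearest stored point to q."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], q))
    if best is None or d2 < best[1]:
        best = (node["point"], d2)
    diff = q[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)
    if diff * diff < best[1]:        # search ball crosses the split plane
        best = nearest(far, q, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (6, 3))[0])      # one of the closest stored points
```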
Bioinformatics is growing by leaps and bounds; theories, algorithms, and statistical techniques are constantly evolving. Nevertheless, a core body of algorithmic ideas has emerged, and researchers are beginning to adopt a "problem solving" approach to bioinformatics, wherein they use solutions to well-abstracted problems as building blocks to solve larger-scope problems. Problem Solving Handbook for Computational Biology and Bioinformatics is an edited volume contributed by world-renowned leaders in this field. This comprehensive handbook, with its problem-solving emphasis, covers all relevant areas of computational biology and bioinformatics. Web resources and related themes are highlighted at every opportunity in this central, easy-to-read reference. Designed for advanced-level students, researchers and professors in computer science and bioengineering as a reference or secondary text, this handbook is also suitable for professionals working in this industry.
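A representative "building block" in this spirit is the edit-distance dynamic program underlying sequence comparison. The sketch below is a textbook illustration, not code from the handbook.

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (Levenshtein distance)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                 # delete all of a's prefix
    for j in range(n + 1):
        dp[0][j] = j                 # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = dp[i - 1][j - 1] + (a[i - 1] != b[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[m][n]

print(edit_distance("GATTACA", "GCATGCU"))   # 4
```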
The authors' treatment of data structures in Data Structures and Algorithms is unified by an informal notion of "abstract data types," allowing readers to compare different implementations of the same concept. Algorithm design techniques are also stressed and basic algorithm analysis is covered. Most of the programs are written in Pascal.
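The ADT idea is easy to illustrate: one stack interface, two underlying representations, with client code untouched by the choice. The book's programs are in Pascal; this Python sketch is only an analogy.

```python
# One abstract data type, two implementations.
class ArrayStack:
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def pop(self): return self._items.pop()
    def empty(self): return not self._items

class LinkedStack:
    def __init__(self): self._head = None          # chain of (value, rest)
    def push(self, x): self._head = (x, self._head)
    def pop(self):
        x, self._head = self._head
        return x
    def empty(self): return self._head is None

# Client code depends only on the ADT's operations, not the representation.
for Stack in (ArrayStack, LinkedStack):
    s = Stack()
    for ch in "abc":
        s.push(ch)
    print(Stack.__name__, [s.pop() for _ in range(3)])   # ['c', 'b', 'a']
```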
After a decade of development, genetic algorithms and genetic programming have become a widely accepted toolkit for computational finance. Genetic Algorithms and Genetic Programming in Computational Finance is a pioneering volume devoted entirely to a systematic and comprehensive review of this subject. Chapters cover various areas of computational finance, including financial forecasting, trading strategies development, cash flow management, option pricing, portfolio management, volatility modeling, arbitraging, and agent-based simulations of artificial stock markets. Two tutorial chapters are also included to help readers quickly grasp the essence of these tools. Finally, a menu-driven software program, Simple GP, accompanies the volume, which will enable readers without a strong programming background to gain hands-on experience in dealing with much of the technical material introduced in this work.
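The core GA loop such tutorial chapters introduce looks roughly like the sketch below, here run on a toy bit-counting fitness. In a finance setting the fitness would instead score a decoded trading rule on historical data; all names are illustrative, not from the book or its Simple GP software.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, gens=100, p_mut=0.02):
    """Bare-bones GA: tournament selection, one-point crossover, bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                            # crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits; a finance fitness would backtest a rule.
print(sum(genetic_algorithm(fitness=sum)))   # near 20
```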
This book offers a clear and comprehensive introduction to broad learning, one of the novel learning problems studied in data mining and machine learning. Broad learning aims at fusing multiple large-scale information sources of diverse varieties together and carrying out synergistic data mining tasks across these fused sources in one unified analytic framework. This book takes online social networks as an application example to introduce the latest alignment and knowledge discovery algorithms. Besides the overview of broad learning, machine learning and social network basics, specific topics covered in this book include network alignment, link prediction, community detection, information diffusion, viral marketing, and network embedding.
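Among the link-prediction baselines in this literature is the common-neighbors score: rank non-adjacent node pairs by how many neighbors they share. The toy sketch below is a generic illustration, not an algorithm from the book.

```python
from itertools import combinations

graph = {                      # undirected adjacency sets (invented example)
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "e"},
    "d": {"a"},
    "e": {"c"},
}

# Score each currently-unlinked pair by the number of shared neighbors.
scores = {
    (u, v): len(graph[u] & graph[v])
    for u, v in combinations(graph, 2)
    if v not in graph[u]
}
for pair, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, s)             # highest-scoring pairs are predicted links
```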
This book explores inductive inference using the minimum message length (MML) principle, a Bayesian method which is a realisation of Ockham's Razor based on information theory. Accompanied by a library of software, the book can assist an applications programmer, student or researcher in the fields of data analysis and machine learning to write computer programs based upon this principle. MML inference has been around for 50 years, and yet only one highly technical book has been written about the subject. The majority of research in the field has been backed by specialised one-off programs, but this book includes a library of general MML-based software, in Java. The Java source code is available under the GNU GPL open-source license. The software library is documented using Javadoc, which produces extensively cross-referenced HTML manual pages. Every probability distribution and statistical model that is described in the book is implemented and documented in the software library. The library may contain a component that directly solves a reader's inference problem, or contain components that can be put together to solve the problem, or provide a standard interface under which a new component can be written to solve the problem. This book will be of interest to application developers in the fields of machine learning and statistics, as well as academics, postdocs, programmers and data scientists. It could also be used by third- or fourth-year undergraduate or postgraduate students.
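The two-part message idea behind MML can be shown in a few lines: total length = bits to state the model plus bits to encode the data given the model, and the shorter total message is the preferred explanation. The fixed parameter-precision cost below is a crude stand-in for MML's proper parameter quantization, so treat this Python sketch as a caricature of the principle, not the book's method (the book's library is Java).

```python
import math

def bits(p):
    """Ideal code length, in bits, of an event with probability p."""
    return -math.log2(p)

def two_part_length(data, theta, param_bits):
    """Bits to state the model parameter plus bits to encode the data."""
    heads = sum(data)
    tails = len(data) - heads
    return param_bits + heads * bits(theta) + tails * bits(1 - theta)

data = [1] * 70 + [0] * 30           # 70 heads, 30 tails

# Hypothesis A: fair coin -- nothing extra to state.
len_fair = two_part_length(data, 0.5, param_bits=0)
# Hypothesis B: biased coin, theta stated to ~7 bits of precision.
len_biased = two_part_length(data, 0.7, param_bits=7)

print(f"fair: {len_fair:.1f} bits  biased: {len_biased:.1f} bits")
# The biased model wins despite its parameter cost: Ockham's Razor, quantified.
```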
Modelling Transitions shows what computational, formal and data-driven approaches can and could mean for sustainability transitions research, presenting the state of the art and exploring what lies beyond. Featuring contributions from many well-known authors, this book presents the various benefits of modelling for transitions research. More than just taking stock, it also critically examines what the modelling of transformative change means and could mean for transitions research and for other disciplines that study societal changes. This includes identifying a variety of approaches currently not part of the portfolios of transitions modellers. Far from only singing praise, the book makes critical methodological and philosophical introspection a key part of its contribution. This book speaks to modellers and non-modellers alike who value the development of robust knowledge on transitions to sustainability, including colleagues in congenial fields. Be they students, researchers or practitioners, everyone interested in transitions should find this book relevant as reference, resource and guide.
Examines classic algorithms, geometric diagrams, and mechanical principles for enhancing visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming.
Divided roughly into two sections, this book provides a brief history of the development of ECG and heart rate variability (HRV) algorithms, along with the engineering innovations of the last decade in this area. It reviews clinical research, presents an overview of the clinical field, and discusses the importance of heart rate variability in diagnosis. The book then discusses the use of particular ECG and HRV algorithms in the context of clinical applications.
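Two of the standard time-domain HRV statistics such algorithms compute are SDNN (overall variability of RR intervals) and RMSSD (beat-to-beat variability). The sketch below uses an invented RR-interval series purely for illustration.

```python
import math

def hrv_time_domain(rr_ms):
    """SDNN and RMSSD over a series of RR intervals in milliseconds."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    # SDNN: sample standard deviation of the RR intervals.
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive RR differences.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

rr = [812, 798, 840, 805, 820, 795, 830]   # illustrative RR series (ms)
sdnn, rmssd = hrv_time_domain(rr)
print(f"SDNN {sdnn:.1f} ms  RMSSD {rmssd:.1f} ms")
```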
This edited book first consolidates the results of the EU-funded EDISON project (Education for Data Intensive Science to Open New science frontiers), which developed training material and information to assist educators, trainers, employers, and research infrastructure managers in identifying, recruiting and inspiring the data science professionals of the future. It then deepens the presentation of the information and knowledge gained to allow for easier assimilation by the reader. The contributed chapters are presented in sequence, each chapter picking up from the end point of the previous one. After the initial book and project overview, the chapters present the relevant data science competencies and body of knowledge, the model curriculum required to teach the required foundations, profiles of professionals in this domain, and use cases and applications. The text is supported with appendices on related process models. The book can be used to develop new courses in data science, evaluate existing modules and courses, draft job descriptions, and plan and design efficient data-intensive research teams across scientific disciplines.
Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite dimensional ranges can rarely be found. Therefore, one must approximate such operators by finite rank operators and then solve the original eigenvalue problem approximately. Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite dimensional spaces. From a review of classical spectral theory through concrete approximation techniques to finite dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods but is simple to verify in practice. It also suggests a new stopping criterion for the QR Method and outlines advances in both the iterative refinement and acceleration techniques for improving the accuracy of approximations. The authors illustrate all definitions and results with elementary examples and include numerous exercises. Spectral Computations for Bounded Operators thus serves as both an outstanding text for second-year graduate students and as a source of current results for research scientists.
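The book's theme, replacing an operator by a finite rank approximation and solving the truncated eigenproblem, can be previewed numerically. The integral operator (Kf)(x) = ∫₀¹ min(x, y) f(y) dy has known eigenvalues 4/((2k-1)²π²) = 0.40528, 0.04503, 0.01621, ..., and a simple quadrature discretization recovers them as the rank grows. This NumPy sketch is a generic illustration, not one of the book's algorithms.

```python
import numpy as np

def approx_eigs(n, k=3):
    """Leading eigenvalues of an n x n (finite rank) surrogate of the
    integral operator with kernel min(x, y) on [0, 1]."""
    x = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes
    K = np.minimum.outer(x, x) / n          # kernel values times weight 1/n
    return np.sort(np.linalg.eigvalsh(K))[::-1][:k]

for n in (10, 100, 1000):
    print(n, approx_eigs(n))
# The leading eigenvalues converge to 0.40528, 0.04503, 0.01621 as n grows.
```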
This collection of essays explores the different ways the insights from complexity theory can be applied to law. Complexity theory - a variant of systems theory - views law as an emergent, complex, self-organising system comprised of an interactive network of actors and systems that operate with no overall guiding hand, giving rise to complex, collective behaviour in law communications and actions. Addressing such issues as the unpredictability of legal systems, the ability of legal systems to adapt to changes in society, the importance of context, and the nature of law, the essays look to the implications of a complexity theory analysis for the study of public policy and administrative law, international law and human rights, regulatory practices in business and finance, and the practice of law and legal ethics. These are areas where law, which craves certainty, encounters unending, irresolvable complexity. This collection shows the many ways complexity theory thinking can reshape and clarify our understanding of the various problems relating to the theory and practice of law.
Fuzzy social choice theory is useful for modeling the uncertainty and imprecision prevalent in social life yet it has been scarcely applied and studied in the social sciences. Filling this gap, Application of Fuzzy Logic to Social Choice Theory provides a comprehensive study of fuzzy social choice theory. The book explains the concept of a fuzzy maximal subset of a set of alternatives, fuzzy choice functions, the factorization of a fuzzy preference relation into the "union" (conorm) of a strict fuzzy relation and an indifference operator, fuzzy non-Arrowian results, fuzzy versions of Arrow's theorem, and Black's median voter theorem for fuzzy preferences. It examines how unambiguous and exact choices are generated by fuzzy preferences and whether exact choices induced by fuzzy preferences satisfy certain plausible rationality relations. The authors also extend known Arrowian results involving fuzzy set theory to results involving intuitionistic fuzzy sets as well as the Gibbard-Satterthwaite theorem to the case of fuzzy weak preference relations. The final chapter discusses Georgescu's degree of similarity of two fuzzy choice functions.
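One classical construction in this area, Orlovsky's fuzzy non-dominated set ND(x) = 1 - max over y of max(R(y,x) - R(x,y), 0), is easy to compute. The sketch below uses an invented fuzzy preference relation, and the book's own formalization of fuzzy maximal subsets may differ in detail.

```python
alts = ["a", "b", "c"]
R = {   # R[(x, y)]: degree to which x is at least as good as y (invented)
    ("a", "b"): 0.8, ("b", "a"): 0.3,
    ("a", "c"): 0.6, ("c", "a"): 0.5,
    ("b", "c"): 0.7, ("c", "b"): 0.4,
}

def nd(x):
    """Degree to which alternative x is non-dominated under R."""
    return 1 - max(max(R[(y, x)] - R[(x, y)], 0) for y in alts if y != x)

for x in alts:
    print(x, nd(x))      # a: 1.0, b: 0.5, c: 0.7
# Alternatives with the highest ND degree form the fuzzy maximal subset.
```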
Synthetic Aperture Radar Automatic Detection Algorithms (SARADA) for Oil Spills conveys the pivotal tools required to fully comprehend advanced algorithms in radar monitoring and detection of oil spills, with quantum computing and algorithms as a keystone for understanding the theories and algorithms behind radar imaging and detection of marine pollution. Bridging the gap between modern quantum mechanics and computing detection algorithms for oil spills, this book contains precise theories and techniques for automatic identification of oil spills from SAR measurements. Based on modern quantum physics, the book also includes a novel theory on the radar imaging mechanism of oil spills. Using precise quantum simulation of the trajectory movements of oil spills over a sequence of radar images, the book demonstrates SARADA as a promising novel technique for monitoring contamination by oil spills. Key features:
- Introduces basic concepts of radar remote sensing
- Fills a gap in the knowledge base of quantum theory and microwave remote sensing
- Discusses the important aspects of oil spill imaging in radar data in relation to quantum theory
- Provides recent developments and progress in automatic detection algorithms for oil spills from radar data
- Presents 2-D oil spill radar data in 4-D images
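For context, the simplest form of automatic detection is dark-spot screening: oil films damp capillary waves, so slicks appear as low-backscatter (dark) regions in SAR intensity images. The generic threshold step below, run on a synthetic speckled image, is only a first-pass illustration and is emphatically not the book's quantum-based SARADA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic speckled "sea" intensity image (gamma-distributed speckle).
img = rng.gamma(shape=4.0, scale=25.0, size=(200, 200))
img[80:120, 60:140] *= 0.25        # insert a synthetic dark slick

# Flag pixels much darker than the scene; real detectors would refine
# these candidates with region shape, texture, and contextual features.
mean, std = img.mean(), img.std()
candidates = img < (mean - 1.0 * std)
print("flagged pixels:", int(candidates.sum()))
```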
You may like...
Advances in Intelligent Computing, J K Mandal, Paramartha Dutta, … (Hardcover, R3,106)
Computational Methods and Algorithms for…, Kwok Tai Chui, Miltiadis D Lytras (Hardcover, R6,044)