State-of-the-art airbag algorithms decide whether to fire restraint systems in a crash by evaluating the deceleration of the entire vehicle during the individual events of the accident. To meet the ever-increasing requirements of consumer test organizations and global legislators, detailed knowledge of the nature and direction of the crash would be of great benefit, yet the algorithms used in current vehicles can provide this only to a limited extent. Andre Leschke presents a completely different algorithm concept to solve these problems: in addition to vehicle deceleration, the chronological sequence of an accident and the associated local and temporal destruction of the vehicle are possible indicators of an accident's severity. About the Author: Dr. Andre Leschke earned his doctoral degree from Tor Vergata University of Rome, Italy. He currently heads a team of vehicle safety developers in the German automotive industry.
Every other day we hear about new ways to put deep learning to good use: improved medical imaging, accurate credit card fraud detection, long range weather forecasting, and more. PyTorch puts these superpowers in your hands, providing a comfortable Python experience that gets you started quickly and then grows with you as you, and your deep learning skills, become more sophisticated. Deep Learning with PyTorch teaches you how to implement deep learning algorithms with Python and PyTorch. This book takes you into a fascinating case study: building an algorithm capable of detecting malignant lung tumors using CT scans. As the authors guide you through this real example, you'll discover just how effective and fun PyTorch can be. Key features: * Using the PyTorch tensor API * Understanding automatic differentiation in PyTorch * Training deep neural networks * Monitoring training and visualizing results * Interoperability with NumPy Audience: Written for developers with some knowledge of Python as well as basic linear algebra skills. Some understanding of deep learning will be helpful, however no experience with PyTorch or other deep learning frameworks is required. About the technology: PyTorch is a machine learning framework with a strong focus on deep neural networks. Because it emphasizes GPU-based acceleration, PyTorch performs exceptionally well on readily available hardware and scales easily to larger systems. Eli Stevens has worked in Silicon Valley for the past 15 years as a software engineer, and the past 7 years as Chief Technical Officer of a startup making medical device software. Luca Antiga is co-founder and CEO of an AI engineering company located in Bergamo, Italy, and a regular contributor to PyTorch.
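For a quick feel for the tensor API and automatic differentiation that this blurb mentions, here is a minimal sketch (not taken from the book) that fits a tiny linear model with PyTorch's autograd; the data, learning rate, and number of steps are illustrative.

```python
import torch

# Illustrative data: y = 2x + 1 with a little noise (not from the book).
x = torch.linspace(0, 1, 20).unsqueeze(1)
y = 2.0 * x + 1.0 + 0.05 * torch.randn_like(x)

# Parameters tracked by autograd.
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(200):
    y_hat = x * w + b                  # forward pass with the tensor API
    loss = ((y_hat - y) ** 2).mean()   # mean squared error
    loss.backward()                    # autograd computes d(loss)/dw and d(loss)/db
    with torch.no_grad():              # plain gradient-descent update
        w -= 0.5 * w.grad
        b -= 0.5 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(f"learned w={w.item():.2f}, b={b.item():.2f}")  # close to 2 and 1
```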
This book is a comprehensive introduction to the methods and algorithms of modern data analytics. It provides a sound mathematical basis, discusses advantages and drawbacks of different approaches, and enables the reader to design and implement data analytics solutions for real-world applications. This book has been used for more than ten years in the Data Mining course at the Technical University of Munich. Much of the content is based on the results of industrial research and development projects at Siemens.
Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or for extracting useful features from measured variables. As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, most of them of little or no value. Many of these features may be useful only in combination with certain other features while being practically worthless alone or with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve them by presenting modern feature selection techniques and the code to implement them. These techniques include: forward selection component analysis; local feature selection; linking features and a target with a hidden Markov model; improvements on traditional stepwise selection; and nominal-to-ordinal conversion. All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code. The example code is in C++ and CUDA C, but Python or other code can be substituted; the algorithm is important, not the language used to implement it. What You Will Learn: Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set. Identify features that may have predictive power over only a small subset of the feature domain; such features can be profitably used by modern predictive models but may be missed by other feature selection methods. Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously; the memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets. Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, or the improvement obtained by adding a new variable, could be just the product of random good luck. Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input. Who This Book Is For: Intermediate to advanced data science programmers and analysts.
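To give a flavor of the stepwise-selection ideas described above, here is a minimal sketch of greedy forward selection scored with cross-validation; it uses scikit-learn and synthetic data, and is a generic illustration rather than the author's C++/CUDA code.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: only a few of the 30 candidate features are informative.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf

while remaining:
    # Try adding each remaining feature; keep the one with the best CV score.
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                 cv=5).mean() for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:   # stop when no candidate improves the score
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected features:", selected, "cv R^2: %.3f" % best_score)
```

Scoring candidates with cross-validation, as sketched here, is one simple way to keep the selected subset from growing just because a feature happens to fit the training data by luck.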
This book constitutes the thoroughly refereed proceedings of the 11th International Symposium on Intelligence Computation and Applications, ISICA 2019, held in Guangzhou, China, in November 2019. The 65 papers presented were carefully reviewed and selected from a total of 112 submissions. This volume features the most up-to-date research in evolutionary algorithms, parallel computing and quantum computing, evolutionary multi-objective and dynamic optimization, intelligent multimedia systems, virtualization and AI applications, smart scheduling, intelligent control, big data and cloud computing, deep learning, and hybrid machine learning systems. The papers are organized according to the following topical sections: new frontier in evolutionary algorithms; evolutionary multi-objective and dynamic optimization; intelligent multimedia systems; virtualization and AI applications; smart scheduling; intelligent control; big data and cloud computing; statistical learning.
ALGORITHMS IN BIOINFORMATICS Explore a comprehensive and insightful treatment of the practical application of bioinformatic algorithms in a variety of fields. Algorithms in Bioinformatics: Theory and Implementation delivers a thorough treatment of some of the main algorithms used to explain biological functions and relationships, and introduces readers to the art of algorithms in a practical manner linked with biological theory and interpretation. The book covers many key areas of bioinformatics, including global and local sequence alignment, forced alignment, detection of motifs, sequence logos, Markov chains, and information entropy. Other novel approaches are also described, such as self-sequence alignment, Objective Digital Stains (ODSs), Spectral Forecast, and the Discrete Probability Detector (DPD) algorithm. The text incorporates graphical illustrations that highlight and emphasize the technical details of the computational algorithms, furthering the reader's understanding and retention of the material. Throughout, the book is written in an accessible and practical manner, showing how algorithms can be implemented and used in JavaScript in internet browsers. The author has included more than 120 open-source implementations of the material, as well as 33 ready-to-use presentations. The book contains original material that has been class-tested by the author, and numerous cases are examined in a biological and medical context. Readers will also benefit from the inclusion of: a thorough introduction to biological evolution, including the emergence of life, classifications, and some known theories and molecular mechanisms; a detailed presentation of new methods, such as self-sequence alignment, Objective Digital Stains, and Spectral Forecast; a treatment of sequence alignment, including local, global, and forced sequence alignment with full implementations; discussions of position-specific weight matrices, including the count, weight, relative-frequency, and log-likelihood matrices; a detailed presentation of the methods related to Markov chains, with a description of their implementation in bioinformatics and adjacent fields; an examination of information and entropy, including sequence logos and explanations of their meaning; an exploration of the current state of bioinformatics, including what is known and what issues are usually avoided in the field; a chapter on philosophical transactions that allows the reader a broader view of the prediction process; native computer implementations in the context of the field of bioinformatics; and extensive worked examples with detailed case studies that point out the meaning of different results. Perfect for professionals and researchers in biology, medicine, engineering, and information technology, as well as upper-level undergraduate students in these fields, Algorithms in Bioinformatics: Theory and Implementation will also earn a place in the libraries of software engineers who wish to understand how to implement bioinformatic algorithms in their products.
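To illustrate one of the listed topics, information entropy as used in sequence logos, here is a small Python sketch (the book's own implementations are in JavaScript); the aligned sequences below are made up for the example.

```python
import math
from collections import Counter

# Made-up aligned DNA sequences, one string per sequence.
sequences = ["ATGCA", "ATGAA", "ATGCT", "ACGCA", "ATGCA"]

def column_information(column, alphabet="ACGT"):
    """Information content (bits) of one alignment column: log2(|alphabet|) - H."""
    counts = Counter(column)
    total = len(column)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return math.log2(len(alphabet)) - entropy

# The per-position information content is what sets the letter heights in a sequence logo.
for i in range(len(sequences[0])):
    col = [s[i] for s in sequences]
    print(f"position {i}: {column_information(col):.2f} bits")
```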
Implement practical data structures and algorithms for text search and discover how they are used inside other, larger applications. This unique in-depth guide explains string algorithms using the C programming language. String Algorithms in C teaches you the following algorithms and how to use them: classical exact search algorithms; tries and compact tries; suffix trees and arrays; approximative pattern searches; and more. In this book, author Thomas Mailund provides a library with all the algorithms and applicable source code that you can use in your own programs. There are implementations of all the algorithms presented in this book, so there are plenty of examples. You'll see that string algorithms are used in various applications such as image processing, computer vision, text analytics processing from data science to web applications, information retrieval from databases, network security, and much more. What You Will Learn: Use classical exact search algorithms, including naive search, borders/border search, Knuth-Morris-Pratt, and Boyer-Moore with or without Horspool. Search in trees, use tries and compact tries, and work with the Aho-Corasick algorithm. Process suffix trees, including the use and development of McCreight's algorithm. Work with suffix arrays, including binary searches, naive construction by sorting, construction via suffix trees, the skew algorithm, and the Burrows-Wheeler transform (BWT). Deal with enhanced suffix arrays, including the longest common prefix (LCP). Carry out approximative pattern searches among suffix trees and approximative BWT searches. Who This Book Is For: Readers with at least some prior programming experience in C or assembly, and some prior experience with programming algorithms.
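As a taste of the classical exact-search algorithms listed above, here is a minimal Knuth-Morris-Pratt sketch in Python (the book itself works in C); the pattern and text are illustrative.

```python
def kmp_search(text, pattern):
    """Yield the start index of every occurrence of pattern in text."""
    # Border (failure) table: border[i] = length of the longest proper border
    # of pattern[:i + 1].
    border = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = border[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        border[i] = k

    # Scan the text, reusing the border table to avoid re-comparing characters.
    q = 0
    for i, ch in enumerate(text):
        while q > 0 and ch != pattern[q]:
            q = border[q - 1]
        if ch == pattern[q]:
            q += 1
        if q == len(pattern):
            yield i - len(pattern) + 1
            q = border[q - 1]

print(list(kmp_search("abababca", "abab")))  # prints [0, 2]
```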
This two-volume set (CCIS 1159 and CCIS 1160) constitutes the proceedings of the 14th International Conference on Bio-inspired Computing: Theories and Applications, BIC-TA 2019, held in Zhengzhou, China, in November 2019. The 121 full papers presented in both volumes were selected from 197 submissions. The papers are organized according to the topical headings: evolutionary computation and swarm intelligence; bioinformatics and systems biology; complex networks; DNA and molecular computing; neural networks and artificial intelligence.
Realism and Complexity in Social Science is an argument for a new approach to investigating the social world, that of complex realism. Complex realism brings together a number of strands of thought in scientific realism, complexity science, probability theory, and social research methodology. It proposes that the social world is probabilistic in nature, yet exhibits enough invariance to make the discovery and explanation of social objects and causal mechanisms possible. This forms the basis for the development of a complex realist foundation for social research, one that utilises a number of novel approaches to investigation alongside the more traditional corpus of quantitative and qualitative methods. Research examples are drawn from sociology, epidemiology, criminology, social policy, and human geography. The book assumes no prior knowledge of realism, probability, or complexity, and in the early chapters the reader is introduced to these concepts and the arguments against them. Although the book is grounded in philosophical reasoning, it is written in a direct and accessible style that will appeal both to social researchers with a methodological interest and to philosophers with an interest in social investigation.
This book describes the mathematical modeling and soft computing techniques used in epidemiology, for example to model how infectious diseases progress, to show the likely outcome of an epidemic, and to contribute to public health interventions. It covers techniques used to study the spread of diseases, predict the future course of an outbreak, and evaluate epidemic control strategies. The book explores applications covering numerical and analytical solutions, presents basic and advanced concepts for beginners and industry professionals, and incorporates the latest methodologies and challenges in using mathematical modeling and soft computing techniques in epidemiology. Primary users of this book include researchers, academicians, postgraduate students, and specialists.
An Introduction to Parallel Programming, Second Edition presents a tried-and-true tutorial approach that shows students how to develop effective parallel programs with MPI, Pthreads and OpenMP. As the first undergraduate text to directly address compiling and running parallel programs on multi-core and cluster architectures, this second edition carries forward its clear explanations for designing, debugging and evaluating the performance of distributed and shared-memory programs while adding coverage of accelerators via new content on GPU programming and heterogeneous programming. New and improved user-friendly exercises teach students how to compile, run and modify example programs.
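For readers who want a quick feel for the message-passing style this kind of text teaches, here is a minimal sketch using mpi4py, a Python binding for MPI (the book itself works with the C APIs for MPI, Pthreads, and OpenMP); the script name and process count in the run command are illustrative.

```python
# A tiny message-passing sketch: each process integrates part of [0, 1] to
# estimate pi, then the partial sums are combined on rank 0.
# Run with, e.g.:  mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id
size = comm.Get_size()      # total number of processes

n = 1_000_000
local_sum = 0.0
for i in range(rank, n, size):          # midpoint rule, strided across ranks
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x)

pi = comm.reduce(local_sum / n, op=MPI.SUM, root=0)   # combine partial sums
if rank == 0:
    print(f"pi ~ {pi:.6f}")
```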
Handbook of IoT and Blockchain: Methods, Solutions, and Recent Advancements includes contributions from around the globe on recent advances and findings in the domains of the Internet of Things (IoT) and Blockchain. Chapters include theoretical analysis, practical implications, and extensive surveys with analysis of methods, algorithms, and processes for new product development. IoT and Blockchain are emerging topics in the current manufacturing scenario. This handbook covers recent advances; showcases the work of researchers around the globe; offers theoretical analysis and practical implications; presents extensive surveys with analysis, new contributions, and proposals on methods, algorithms, and processes; and draws on quantitative and qualitative articles, case studies, conceptual works, and theoretical backing. It will be of interest to graduate students, researchers, academicians, institutions, and professionals interested in exploring the areas of IoT and Blockchain.
This book gathers the peer-reviewed proceedings of the International Ethical Hacking Conference, eHaCON 2019, the second international conference of its kind, which was held in Kolkata, India, in August 2019. Bringing together the most outstanding research papers presented at the conference, the book shares new findings on computer network attacks and defenses, commercial security solutions, and hands-on, real-world security lessons learned. The respective sections include network security, ethical hacking, cryptography, digital forensics, cloud security, information security, mobile communications security, and cyber security.
Provides a complete update and reorganization of the previous books, with some material moving online; includes new problems, projects, and exercises; includes interactive coding resources to accompany the book, including examples in the text, exercises, projects, and reflection questions.
The Design and Analysis of Computer Algorithms introduces the basic data structures and programming techniques often used in efficient algorithms. It covers the use of lists, push-down stacks, queues, trees, and graphs.
Networks powered by algorithms are pervasive. Major contemporary technology trends - Internet of Things, Big Data, Digital Platform Power, Blockchain, and the Algorithmic Society - are manifestations of this phenomenon. The internet, which once seemed an unambiguous benefit to society, is now the basis for invasions of privacy, massive concentrations of power, and wide-scale manipulation. The algorithmic networked world poses deep questions about power, freedom, fairness, and human agency. The influential 1997 Federal Communications Commission whitepaper "Digital Tornado" hailed the "endless spiral of connectivity" that would transform society, and today, little remains untouched by digital connectivity. Yet fundamental questions remain unresolved, and even more serious challenges have emerged. This important collection, which offers a reckoning and a foretelling, features leading technology scholars who explain the legal, business, ethical, technical, and public policy challenges of building pervasive networks and algorithms for the benefit of humanity. This title is also available as Open Access on Cambridge Core.
This book constitutes the refereed post-conference proceedings of the Second International Conference on Cyber Security and Computer Science, ICONCS 2020, held in Dhaka, Bangladesh, in February 2020. The 58 full papers were carefully reviewed and selected from 133 submissions. The papers detail new ideas, inventions, and application experiences in cyber security systems. They are organized in topical sections on optimization problems; image steganography and risk analysis on web applications; machine learning in disease diagnosis and monitoring; computer vision and image processing in health care; text and speech processing; machine learning in health care; blockchain applications; computer vision and image processing in health care; malware analysis; computer vision; future technology applications; computer networks; machine learning on imbalanced data; computer security; Bangla language processing.
From the New York Times to Gawker, a behind-the-scenes look at how performance analytics are transforming journalism today, and how they might remake other professions tomorrow. Journalists today are inundated with data about which stories attract the most clicks, likes, comments, and shares. These metrics influence what stories are written, how news is promoted, and even which journalists get hired and fired. Do metrics make journalists more accountable to the public? Or are these data tools the contemporary equivalent of a stopwatch wielded by a factory boss, worsening newsroom working conditions and journalism quality? In All the News That's Fit to Click, Caitlin Petre takes readers behind the scenes at the New York Times, Gawker, and the prominent news analytics company Chartbeat to explore how performance metrics are transforming the work of journalism. Petre describes how digital metrics are a powerful but insidious new form of managerial surveillance and discipline. Real-time analytics tools are designed to win the trust and loyalty of wary journalists by mimicking key features of addictive games, including immersive displays, instant feedback, and constantly updated "scores" and rankings. Many journalists get hooked on metrics, and pressure themselves to work ever harder to boost their numbers. Yet this is not a simple story of managerial domination. Contrary to the typical perception of metrics as inevitably disempowering, Petre shows how some journalists leverage metrics to their advantage, using them to advocate for their professional worth and autonomy. An eye-opening account of data-driven journalism, All the News That's Fit to Click is also an important preview of how the metrics revolution may transform other professions.
Data has emerged as a key component that determines how interactions across the world are structured, mediated and represented. This book examines these new data publics and the areas in which they become operative, via analysis of politics, geographies, environments and social media platforms. By claiming to offer a mechanism to translate every conceivable occurrence into an abstract code that can be endlessly manipulated, digitally processed data has caused conventional reference systems, which hinge on our ability to mark points of origin, to rapidly implode. Authors from a range of disciplines provide insights into such a political economy of data capitalism; the political possibilities of techno-logics beyond data appropriation and data refusal; questions of visual, spatial and geographical organization; emergent ways of life and the environments that sustain them; and the current challenges of data publics, which are explored via case studies of three of the most influential platforms in the social media economy today: Facebook, Instagram and WhatsApp. Data Publics will be of great interest to academics and students in the fields of computer science, philosophy, sociology, media and communication studies, architecture, visual culture, art and design, and urban and cultural studies.
Machine learning, one of the top emerging sciences, has an extremely broad range of applications. However, many books on the subject provide only a theoretical approach, making it difficult for a newcomer to grasp the material. This book provides a more practical approach by explaining the concepts of machine learning algorithms and describing the areas of application for each algorithm, using simple practical examples to demonstrate each algorithm and to show how different issues related to these algorithms are addressed.
Metaheuristic optimization has become a prime alternative for solving complex optimization problems in several areas, and practitioners and researchers have been paying extensive attention to metaheuristic algorithms that are based mainly on natural phenomena. However, few books deal with the theoretical and experimental sides of these algorithms in a friendly manner, so this book presents a novel structure that includes a complete description of the most important metaheuristic optimization algorithms as well as a proposal for a new metaheuristic named earthquake optimization. The book also contains several practical exercises, and a MATLAB(R) toolbox and a LabVIEW toolkit are provided as complementary material; these toolkits allow readers to move from a simulation environment to an experimental one very quickly. The book is suitable for researchers, students, and professionals in several areas, such as economics, architecture, computer science, electrical engineering, and control systems. Its unique features are as follows: developed for researchers, undergraduate and graduate students, and practitioners; a friendly description of the main metaheuristic optimization algorithms; theoretical and practical optimization examples; a new earthquake optimization algorithm; and updated state-of-the-art and research optimization projects. The authors are multidisciplinary/interdisciplinary lecturers and researchers who have written a structured, friendly learning methodology for understanding each metaheuristic optimization algorithm presented in this book.
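As a minimal illustration of the kind of nature-inspired metaheuristic such a book surveys, here is a short simulated-annealing sketch in Python minimizing a toy test function; it is a generic example, not the authors' earthquake algorithm or their MATLAB/LabVIEW toolkits.

```python
import math
import random

def objective(x):
    # Simple one-dimensional test function with several local minima.
    return x * x + 10.0 * math.sin(3.0 * x)

def simulated_annealing(f, x0, temp=5.0, cooling=0.995, steps=5000):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        candidate = x + random.gauss(0.0, 0.5)     # random neighbor of the current point
        fc = f(candidate)
        # Always accept improvements; accept worse moves with a temperature-dependent probability.
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling                            # cool down gradually
    return best_x, best_f

random.seed(1)
x_best, f_best = simulated_annealing(objective, x0=4.0)
print(f"best x ~ {x_best:.3f}, f(x) ~ {f_best:.3f}")
```

The acceptance rule is the key design choice: early on, a high temperature lets the search escape local minima, while the cooling schedule gradually turns it into a greedy local search.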
The two volumes LNAI 11649 and 11650 constitute the refereed proceedings of the 20th Annual Conference "Towards Autonomous Robotics", TAROS 2019, held in London, UK, in July 2019. The 87 full papers and 12 short papers presented were carefully reviewed and selected from 101 submissions. The papers present and discuss significant findings and advances in autonomous robotics research and applications. They are organized in the following topical sections: robotic grippers and manipulation; soft robotics, sensing and mobile robots; robotic learning, mapping and planning; human-robot interaction; and robotic systems and applications.
This book constitutes the proceedings of the 22nd International Symposium on Fundamentals of Computation Theory, FCT 2019, held in Copenhagen, Denmark, in August 2019. The 21 full papers included in this volume were carefully reviewed and selected from 45 submissions. In addition, the book contains 3 invited talks in full-paper length. The papers were organized in topical sections named: formal methods, complexity, and algorithms.
This book introduces several algorithms designed exclusively for graph-based problems, such as combinatorial optimization and path formation problems. Each chapter introduces a basic, traditional nature-inspired algorithm and discusses a modified, discrete version of it, along with problems to which the discussed algorithm applies.