This book discusses one of the major applications of artificial intelligence: the use of machine learning to extract useful information from multimodal data. It discusses the optimization methods that help minimize the error in developing patterns and classifications, which further helps improve prediction and decision-making. The book also presents formulations of real-world machine learning problems, and discusses AI solution methodologies as standalone or hybrid approaches. Lastly, it proposes novel metaheuristic methods to solve complex machine learning problems. Featuring valuable insights, the book helps readers explore new avenues leading toward multidisciplinary research discussions.
This book constitutes the proceedings of the 16th International Conference on Web and Internet Economics, WINE 2020, held in Beijing, China, in December 2020. The 31 full papers presented together with 11 abstracts were carefully reviewed and selected from 136 submissions. Issues in theoretical computer science, artificial intelligence, and operations research are of particular importance for the Web and the Internet, which enable the interaction of large and diverse populations. The Conference on Web and Internet Economics (WINE) is an interdisciplinary forum for the exchange of ideas and results on incentives and computation arising from these various fields.
Genetic Programming Theory and Practice VI was developed from the sixth workshop at the University of Michigan's Center for the Study of Complex Systems to facilitate the exchange of ideas and information related to the rapidly advancing field of Genetic Programming (GP). Contributions from the foremost international researchers and practitioners in the GP arena examine the similarities and differences between theoretical and empirical results on real-world problems. The text explores the synergy between theory and practice, producing a comprehensive view of the state of the art in GP application. These contributions address several significant interdependent themes which emerged from this year's workshop, including: (1) making efficient and effective use of test data; (2) sustaining the long-term evolvability of our GP systems; (3) exploiting discovered subsolutions for reuse; and (4) increasing the role of a domain expert.
Artificial Intelligence is a seemingly neutral technology, but it is increasingly used to manage workforces and make decisions to hire and fire employees. Its proliferation in the workplace gives the impression of a fairer, more efficient system of management. A machine can't discriminate, after all. Augmented Exploitation explores the reality of the impact of AI on workers' lives. While the consensus is that AI is a completely new way of managing a workplace, the authors show that, on the contrary, AI is used as most technologies are used under capitalism: as a smokescreen that hides the deep exploitation of workers. Going beyond platform work and the gig economy, the authors explore emerging forms of algorithmic governance and AI-augmented apps that have been developed to utilise innovative ways to collect data about workers and consumers, as well as to keep wages and worker representation under control. They also show that workers are not taking this lying down, providing case studies of new and exciting forms of resistance that are springing up across the globe.
This book features selected papers from the 5th International Conference on Mathematics and Computing (ICMC 2019), organized by the School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, India, on February 6-9, 2019. Covering recent advances in the fields of mathematics, statistics and scientific computing, the book presents innovative work by leading academics, researchers and experts from industry.
Vast holdings and assessment of consumer data by large companies are not new phenomena. Firms' ability to leverage the data to reach customers in targeted campaigns and gain market share is, and on an unprecedented scale. Major companies have moved from serving as data or inventory storehouses, suppliers, and exchange mechanisms to monetizing their data and expanding the products they offer. Such changes have implications for both firms and consumers in the coming years. In Success with Big Data, Russell Walker investigates the use of internal Big Data to stimulate innovations for operational effectiveness, and the ways in which external Big Data is developed for gauging, or even prompting, customer buying decisions. Walker examines the nature of Big Data, the novel measures they create for market activity, and the payoffs they can offer from the connectedness of the business and social world. With case studies from Apple, Netflix, Google, and Amazon, Walker both explores the market transformations that are changing perceptions of Big Data, and provides a framework for assessing and evaluating Big Data. Although the world appears to be moving toward a marketplace where consumers will be able to "pull" offers from firms, rather than simply receiving offers, Walker observes that such changes will require careful consideration of legal and unspoken business practices as they affect consumer privacy. Rigorous and meticulous, Success with Big Data is a valuable resource for graduate students and professionals with an interest in Big Data, digital platforms, and analytics.
The contributions gathered in this book focus on modern methods for statistical learning and modeling in data analysis and present a series of engaging real-world applications. The book covers numerous research topics, ranging from statistical inference and modeling to clustering and factorial methods, from directional data analysis to time series analysis and small area estimation. The applications reflect new analyses in a variety of fields, including medicine, finance, engineering, marketing and cyber risk. The book gathers selected and peer-reviewed contributions presented at the 12th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society (CLADAG 2019), held in Cassino, Italy, on September 11-13, 2019. CLADAG promotes advanced methodological research in multivariate statistics with a special focus on data analysis and classification, and supports the exchange and dissemination of ideas, methodological concepts, numerical methods, algorithms, and computational and applied results. This book, true to CLADAG's goals, is intended for researchers and practitioners who are interested in the latest developments and applications in the field of data analysis and classification.
This book contains extended and revised versions of the best papers presented at the 27th IFIP WG 10.5/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2019, held in Cusco, Peru, in October 2019. The 15 full papers included in this volume were carefully reviewed and selected from the 28 papers (out of 82 submissions) presented at the conference. The papers discuss the latest academic and industrial results and developments as well as future trends in the field of System-on-Chip (SoC) design, considering the challenges of nano-scale, state-of-the-art and emerging manufacturing technologies. In particular, they address cutting-edge research fields such as heterogeneous, neuromorphic, brain-inspired, biologically inspired, and approximate computing systems.
Here, the authors propose a method for the formal development of parallel programs - or multiprograms as they prefer to call them. They accomplish this with a minimum of formal gear, i.e. with the predicate calculus and the well-established theory of Owicki and Gries. They show that the Owicki/Gries theory can be effectively put to work for the formal development of multiprograms, regardless of whether these algorithms are distributed or not.
The LNCS journal Transactions on Computational Science reflects recent developments in the field of Computational Science, conceiving the field not as a mere ancillary science but rather as an innovative approach supporting many other scientific disciplines. The journal focuses on original high-quality research in the realm of computational science in parallel and distributed environments, encompassing the facilitating theoretical foundations and the applications of large-scale computations and massive data processing. It addresses researchers and practitioners in areas ranging from aerospace to biochemistry, from electronics to geosciences, from mathematics to software architecture, presenting verifiable computational methods, findings, and solutions, and enabling industrial users to apply techniques of leading-edge, large-scale, high-performance computational methods. This, the 38th issue of the Transactions on Computational Science, is devoted to research on modelling, optimization, and graphs, with applications in 3D and sketch modelling, engineering design, evolutionary computing, and networks.
This two volume set (CCIS 1257 and 1258) constitutes the refereed proceedings of the 6th International Conference of Pioneering Computer Scientists, Engineers and Educators, ICPCSEE 2020 held in Taiyuan, China, in September 2020. The 98 papers presented in these two volumes were carefully reviewed and selected from 392 submissions. The papers are organized in topical sections: database, machine learning, network, graphic images, system, natural language processing, security, algorithm, application, and education. The chapter "Highly Parallel SPARQL Engine for RDF" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
Maintaining a practical perspective, Python Programming: A Practical Approach acquaints you with the wonderful world of programming. The book is a starting point for those who want to learn Python programming. The backbone of any programming, namely data structures and components such as strings, lists, etc., is illustrated with many examples and enough practice problems to instill a level of self-confidence in the reader. Drawing on knowledge gained directly from teaching Computer Science as a subject and from working on a wide range of projects related to ML, AI, deep learning, and blockchain, the authors have tried their best to present the necessary skills for a Python programmer. Once the foundation of Python programming is built and readers are aware of the exact structure, dimensions, processing, building blocks, and representation of data, they can readily take up specific problems from their area of interest and solve them with the help of Python. Topics include, but are not limited to, operators, control flow, strings, functions, module processing, object-oriented programming, exception and file handling, multithreading, synchronization, regular expressions, and Python database programming. This book on Python programming is specially designed to keep readers busy learning the fundamentals and generates a sense of confidence through the assignment problems. We firmly believe that dwelling on any particular technology distracts from learning the fundamentals of a programming language; this book instead focuses on helping readers apply the skills it imparts to implementations in their own areas of interest. We have attempted to present the real essence of Python programming, which you can confidently apply in real life by using Python as a tool. Salient features: based on real-world requirements and solutions; simple presentation that does not omit necessary details of the topic; executable programs on almost every topic; and plenty of exercise questions designed to test readers' skills and understanding. Purposefully designed to be instantly applicable, Python Programming: A Practical Approach provides implementation examples so that the described subject matter can be immediately put to use, thanks to the well-known versatility of Python in handling different data types with ease.
This book presents various computational and cognitive modeling approaches in the areas of health, education, finance, the environment, engineering, commerce and industry. Gathering selected conference papers presented at the International Conference on Trends in Computational and Cognitive Engineering (TCCE), it shares cutting-edge insights and ideas from mathematicians, engineers, scientists and researchers and discusses fresh perspectives on problem solving in a range of research areas.
Algorithmic graph theory has been expanding at an extremely rapid rate since the middle of the twentieth century, in parallel with the growth of computer science and the accompanying utilization of computers, where efficient algorithms have been a prime goal. This book presents material on developments on graph algorithms and related concepts that will be of value to both mathematicians and computer scientists, at a level suitable for graduate students, researchers and instructors. The fifteen expository chapters, written by acknowledged international experts on their subjects, focus on the application of algorithms to solve particular problems. All chapters were carefully edited to enhance readability and standardize the chapter structure as well as the terminology and notation. The editors provide basic background material in graph theory, and a chapter written by the book's Academic Consultant, Martin Charles Golumbic (University of Haifa, Israel), provides background material on algorithms as connected with graph theory.
This two-volume set of LNCS 12146 and 12147 constitutes the refereed proceedings of the 18th International Conference on Applied Cryptography and Network Security, ACNS 2020, held in Rome, Italy, in October 2020. The conference was held virtually due to the COVID-19 pandemic. The 46 revised full papers presented were carefully reviewed and selected from 214 submissions. The papers were organized in topical sections named: cryptographic protocols, cryptographic primitives, attacks on cryptographic primitives, encryption and signature, blockchain and cryptocurrency, secure multi-party computation, and post-quantum cryptography.
This two-volume set constitutes the refereed proceedings of the 16th International Conference on Collaborative Computing: Networking, Applications, and Worksharing, CollaborateCom 2020, held in Shanghai, China, in October 2020. The 61 full papers and 16 short papers presented were carefully reviewed and selected from 211 submissions. The papers reflect the conference sessions as follows: Collaborative Applications for Network and E-Commerce; Optimization for Collaborate System; Cloud and Edge Computing; Artificial Intelligence; AI Application and Optimization; Classification and Recommendation; Internet of Things; Collaborative Robotics and Autonomous Systems; Smart Transportation.
This book constitutes the refereed post-conference proceedings of the Second International Conference on Cyber Security and Computer Science, ICONCS 2020, held in Dhaka, Bangladesh, in February 2020. The 58 full papers were carefully reviewed and selected from 133 submissions. The papers detail new ideas, inventions, and application experiences in cyber security systems. They are organized in topical sections on optimization problems; image steganography and risk analysis on web applications; machine learning in disease diagnosis and monitoring; computer vision and image processing in health care; text and speech processing; machine learning in health care; blockchain applications; malware analysis; computer vision; future technology applications; computer networks; machine learning on imbalanced data; computer security; and Bangla language processing.
This three-volume set (CCIS 1237-1239) constitutes the proceedings of the 18th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2020, held in June 2020. The conference was scheduled to take place in Lisbon, Portugal, at the University of Lisbon, but due to the COVID-19 pandemic it was held virtually. The 173 papers were carefully reviewed and selected from 213 submissions. The papers are organized in topical sections: homage to Enrique Ruspini; invited talks; foundations and mathematics; decision making, preferences and votes; optimization and uncertainty; games; real world applications; knowledge processing and creation; machine learning I; machine learning II; XAI; image processing; temporal data processing; text analysis and processing; fuzzy interval analysis; theoretical and applied aspects of imprecise probabilities; similarities in artificial intelligence; belief function theory and its applications; aggregation: theory and practice; aggregation: pre-aggregation functions and other generalizations of monotonicity; aggregation: aggregation of different data structures; fuzzy methods in data mining and knowledge discovery; computational intelligence for logistics and transportation problems; fuzzy implication functions; soft methods in statistics and data analysis; image understanding and explainable AI; fuzzy and generalized quantifier theory; mathematical methods towards dealing with uncertainty in applied sciences; statistical image processing and analysis, with applications in neuroimaging; interval uncertainty; discrete models and computational intelligence; current techniques to model, process and describe time series; mathematical fuzzy logic and graded reasoning models; formal concept analysis, rough sets, general operators and related topics; computational intelligence methods in information modelling, representation and processing.
This book constitutes the refereed post-conference proceedings of two conferences: the 8th EAI International Conference on ArtsIT, Interactivity and Game Creation (ArtsIT 2019), and the 4th EAI International Conference on Design, Learning, and Innovation (DLI 2019). Both conferences were hosted in Aalborg, Denmark, and took place November 6-8, 2019. The 61 revised full papers presented were carefully selected from 98 submissions. The papers represent a forum for the dissemination of cutting-edge research results in the area of arts, design and technology, including related topics such as interactivity and game creation.
This volume constitutes the refereed post-conference proceedings of the 5th International Conference on Machine Learning and Intelligent Communications, MLICOM 2020, held in Shenzhen, China, in September 2020. Due to the COVID-19 pandemic the conference was held virtually. The 55 revised full papers were carefully selected from 133 submissions. The papers are organized thematically in intelligent resource (spectrum, power) allocation schemes; applications of neural networks and deep learning; decentralized learning for wireless communication systems; intelligent antenna design and dynamic configuration; intelligent communications; intelligent positioning and navigation systems; smart unmanned vehicular technology; intelligent space and terrestrial integrated networks; and machine learning algorithms and intelligent networks.
This book constitutes the proceedings of the 19th International Symposium on Intelligent Data Analysis, IDA 2021, which was planned to take place in Porto, Portugal. Due to the COVID-19 pandemic the conference was held online during April 26-28, 2021. The 35 papers included in this book were carefully reviewed and selected from 113 submissions. The papers were organized in topical sections named: modeling with neural networks; modeling with statistical learning; modeling language and graphs; and modeling special data formats.
This book constitutes extended and revised versions of the selected papers from the 13th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2020, held in Valletta, Malta, in February 2020. The 29 revised and extended full papers presented were carefully reviewed and selected from a total of 363 submissions. The papers are organized in topical sections on biomedical electronics and devices; bioimaging; bioinformatics models, methods and algorithms; bio-inspired systems and signal processing; and health informatics.
State-of-the-art airbag algorithms decide whether to fire restraint systems in a crash by evaluating the deceleration of the entire vehicle during the individual events of the accident. In order to meet the ever-increasing requirements of consumer test organizations and global legislators, detailed knowledge of the nature and direction of the crash would be of great benefit; the algorithms used in current vehicles can only provide this to a limited extent. Andre Leschke presents a completely different algorithm concept to solve these problems. In addition to vehicle deceleration, the chronological sequence of an accident and the associated local and temporal destruction of the vehicle are possible indicators of an accident's severity. About the Author: Dr. Andre Leschke earned his doctoral degree from Tor Vergata University of Rome, Italy. Currently, he is working as head of a team of vehicle safety developers in the German automotive industry.
Introducing a NEW addition to our growing library of computer science titles: Algorithm Design and Applications, by Michael T. Goodrich & Roberto Tamassia! Algorithms is a course required for all computer science majors, with a strong focus on theoretical topics. Students enter the course after gaining hands-on experience with computers and are expected to learn how algorithms can be applied to a variety of contexts. This new book integrates application with theory. Goodrich & Tamassia believe that the best way to teach algorithmic topics is to present them in a context motivated by applications in society, computer games, the computing industry, science, engineering, and the internet. The text teaches students about designing and using algorithms, illustrating connections between the topics being taught and their potential applications and thereby increasing engagement.
Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or for extracting useful features from measured variables. As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, most of them of little or no value. Many of these features may be useful only in combination with certain others while being practically worthless alone, and some may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve them by presenting modern feature selection techniques and the code to implement them, including: forward selection component analysis; local feature selection; linking features and a target with a hidden Markov model; improvements on traditional stepwise selection; and nominal-to-ordinal conversion. All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code. The example code is in C++ and CUDA C, but Python or other languages can be substituted; the algorithm is important, not the code used to write it. What you will learn: combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set; identify features that may have predictive power over only a small subset of the feature domain, which can be profitably used by modern predictive models but may be missed by other feature selection methods; find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously, whose inherent memory is especially valuable in high-noise applications such as prediction of financial markets; improve traditional stepwise selection in three ways, by examining a collection of "best-so-far" feature sets, by testing candidate features for inclusion with cross-validation to automatically and effectively limit model complexity, and by estimating at each step the probability that the results so far, or the improvement obtained by adding a new variable, could be just the product of random good luck; and take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model and assign to each category a sensible numeric value that can be used as a model input. Who this book is for: intermediate to advanced data science programmers and analysts.
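The forward stepwise selection idea surveyed in this blurb can be sketched generically. The following is a minimal, hypothetical illustration, not the book's C++/CUDA implementation: the scorer (leave-one-out 1-nearest-neighbor error) is a stand-in assumption, and the greedy loop simply stops when no remaining feature improves the best score so far.

```python
# Hedged sketch of forward stepwise feature selection. The scorer is a
# stand-in assumption (leave-one-out 1-NN regression error), not the
# book's actual cross-validation machinery.

def loo_knn_mse(X, y, feats):
    """Leave-one-out mean squared error of a 1-nearest-neighbor
    predictor, measuring distance using only the indices in `feats`."""
    n = len(X)
    total = 0.0
    for i in range(n):
        best_j, best_d = None, float("inf")
        for j in range(n):
            if j == i:
                continue
            d = sum((X[i][k] - X[j][k]) ** 2 for k in feats)
            if d < best_d:
                best_d, best_j = d, j
        total += (y[i] - y[best_j]) ** 2
    return total / n

def forward_select(X, y, max_feats):
    """Greedily add the feature that most lowers the score; stop early
    when no remaining candidate improves on the best score so far."""
    selected, best = [], float("inf")
    while len(selected) < max_feats:
        cand = None
        for k in range(len(X[0])):
            if k in selected:
                continue
            s = loo_knn_mse(X, y, selected + [k])
            if s < best:
                best, cand = s, k
        if cand is None:
            break  # adding any remaining feature would not help
        selected.append(cand)
    return selected, best

# Tiny deterministic demo: feature 0 tracks the target, 1 and 2 are noise.
X = [[i, (i * 7) % 5, (i * 3) % 4] for i in range(12)]
y = [float(i) for i in range(12)]
print(forward_select(X, y, 2))  # feature 0 is kept, the noise features rejected
```

The early stop mirrors the book's concern with limiting model complexity: a feature enters only if it strictly improves the validated score, so worthless candidates are rejected rather than accumulated.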
You may like...
Python Programming for Computations…
Computer Language
Hardcover