The future of music archiving and search engines lies in deep learning and big data. Music information retrieval algorithms automatically analyze musical features such as timbre, melody, rhythm, and musical form, and artificial intelligence then sorts and relates these features. At the first International Symposium on Computational Ethnomusicological Archiving, held on November 9 to 11, 2017 at the Institute of Systematic Musicology in Hamburg, Germany, a new Computational Phonogram Archiving standard was discussed as an interdisciplinary approach. Ethnomusicologists, music and computer scientists, systematic musicologists, music archivists, composers, and musicians presented tools, methods, and platforms and shared fieldwork and archiving experiences in musical acoustics, informatics, and music theory, as well as in music storage, reproduction, and metadata. The Computational Phonogram Archiving standard is also in high demand in the music market as a search engine for music consumers. This book offers a comprehensive overview of the field, written by leading researchers around the globe.
The Internet of Things has become a major influence on the development of new technologies and innovations. When combined with smart services, the end-user experience can be significantly enhanced. Novel Design and the Applications of Smart-M3 Platforms in the Internet of Things: Emerging Research and Opportunities provides an innovative outlook on the development of open source technology for the creation of smart spaces and services. Including a range of relevant topics such as interoperability, system architecture, and information processing, this book is an ideal reference source for academics, researchers, graduate students, and practitioners interested in the latest advancements in the Internet of Things.
The implementation of data and information analysis has become a trending solution within multiple professions. New tools and approaches for data analysis are continually being developed to solve the challenges that come with professional strategy. Pattern recognition is an innovative method that provides comparison techniques and defines new characteristics within the information acquisition process. Despite this recent attention, a considerable amount of research regarding pattern recognition and its various strategies is still lacking. Pattern Recognition Applications in Engineering is an essential reference source that discusses various strategies of pattern recognition algorithms within industrial and research applications and provides examples of results in different professional areas, including electronics, computation, and health monitoring. Featuring research on topics such as condition monitoring, data normalization, and bio-inspired developments, this book is ideally designed for analysts; researchers; civil, mechanical, and electronic engineers; computing scientists; chemists; academicians; and students.
This book is a collection of essays exploring adaptive systems from many perspectives, ranging from computational applications to models of adaptation in living and social systems. The essays on computation discuss the history, theory, applications, and possible threats of adaptive and evolving computational systems. The modeling chapters cover topics such as evolution in microbial populations, the evolution of cooperation, and how ideas about evolution relate to economics.
The World Wide Web is changing the way we use technology, bringing e-learning and teaching to a whole new dimension of collaboration and communication. Looking Toward the Future of Technology-Enhanced Education: Ubiquitous Learning and the Digital Native bridges the gap between technology and education by presenting innovative research on the future of education. An essential reference on e-learning, this scholarly publication examines current research in technology enhanced learning, provides new didactic models for education, and discusses the newest technologies and their impact on education.
Forecasting is one of the most important activities that form the basis for strategic, tactical, and operational decisions in all business organizations. Recently, neural networks have emerged as an important tool for business forecasting, and there is considerable interest in, and many applications of, forecasting with neural networks. Neural Networks in Business Forecasting provides researchers and practitioners with recent advances in applying neural networks to business forecasting. A number of case studies demonstrating innovative or successful applications of neural networks to many areas of business, as well as methods to improve neural network forecasting performance, are presented.
The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need not only to scale out traditional workloads but also to support these new applications. This book, a revised version of the 2014 ACM Dissertation Award-winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing. We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective. This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
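The data-sharing primitive described above can be illustrated with a short, hypothetical PySpark sketch (not taken from the dissertation; it assumes a local Spark installation, and the log file name and format are invented): an RDD is built once, cached in memory, and then reused by several computations.

```python
# Minimal, illustrative RDD sketch (not code from the book); assumes a local
# PySpark installation. The file "logs.txt" and its format are hypothetical.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# Build an RDD from the (hypothetical) log file and mark it for caching,
# so that several later computations share the same in-memory dataset.
errors = (sc.textFile("logs.txt")
            .filter(lambda line: "ERROR" in line)
            .cache())

# Two separate actions reuse the cached RDD instead of re-reading from disk.
total = errors.count()
by_first_token = (errors.map(lambda line: (line.split()[0], 1))
                        .reduceByKey(lambda a, b: a + b)
                        .collect())

print(total, by_first_token[:5])
sc.stop()
```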
Developments in Technologies for Human-Centric Mobile Computing and Applications is a comprehensive collection of knowledge and practice in the development of human-centric mobile technologies. This book focuses on the developmental aspects of mobile technology, bringing together researchers, educators, and practitioners to encourage readers to think outside of the box.
a) Provides basic concepts of Natural Language Processing for getting started from scratch.
b) Introduces advanced concepts for scaling, deep learning and real-world issues seen in the industry.
c) Provides applications of Natural Language Processing over a diverse set of 15 industry verticals.
d) Shares practical implementation including Python code, tools and techniques for a variety of Natural Language Processing applications and industrial products for a hands-on experience.
e) Gives readers a sense of all there is to build successful Natural Language Processing projects: the concepts, applications, opportunities and hands-on material.
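As a flavor of the kind of hands-on material described in item d), here is a minimal, hypothetical text-classification sketch in Python (not taken from the book; it assumes scikit-learn is available, and the example texts and labels are invented):

```python
# Toy sentiment-classification pipeline, illustrative only (not from the book);
# assumes scikit-learn is installed. Texts and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works well",
    "terrible support, waste of money",
    "fast delivery and solid build",
    "broke after one day",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["works well and arrived fast"]))  # expected: [1]
```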
The main purpose of this book is not only to present recent studies and advances in the field of social science research, but also to stimulate discussion on related practical issues concerning statistics, mathematics, and economics. Accordingly, a broad range of tools and techniques that can be used to solve problems on these topics are presented in detail in this book, which offers an ideal reference work for all researchers interested in effective quantitative and qualitative tools. The content is divided into three major sections. The first, which is titled "Social work", collects papers on problems related to the social sciences, e.g. social cohesion, health, and digital technologies. Papers in the second part, "Education and teaching issues," address qualitative aspects, education, learning, violence, diversity, disability, and ageing, while the book's final part, "Recent trends in qualitative and quantitative models for socio-economic systems and social work", features contributions on both qualitative and quantitative issues. The book is based on a scientific collaboration, in the social sciences, mathematics, statistics, and economics, among experts from the "Pablo de Olavide" University of Seville (Spain), the "University of Defence" of Brno (Czech Republic), the "G. D'Annunzio" University of Chieti-Pescara (Italy) and "Alexandru Ioan Cuza University" of Iasi (Romania). The contributions, which have been selected using a peer-review process, examine a wide variety of topics related to the social sciences in general, while also highlighting new and intriguing empirical research conducted in various countries. Given its scope, the book will appeal, in equal measure, to sociologists, mathematicians, statisticians and philosophers, and more generally to scholars and specialists in related fields.
Complexes of physically interacting proteins constitute fundamental functional units that drive almost all biological processes within cells. A faithful reconstruction of the entire set of protein complexes (the "complexosome") is therefore important not only to understand the composition of complexes but also the higher level functional organization within cells. Advances over the last several years, particularly through the use of high-throughput proteomics techniques, have made it possible to map substantial fractions of protein interactions (the "interactomes") from model organisms including Arabidopsis thaliana (a flowering plant), Caenorhabditis elegans (a nematode), Drosophila melanogaster (fruit fly), and Saccharomyces cerevisiae (budding yeast). These interaction datasets have enabled systematic inquiry into the identification and study of protein complexes from organisms. Computational methods have played a significant role in this context, by contributing accurate, efficient, and exhaustive ways to analyze the enormous amounts of data. These methods have helped to compensate for some of the limitations in experimental datasets including the presence of biological and technical noise and the relative paucity of credible interactions. In this book, we systematically walk through computational methods devised to date (approximately between 2000 and 2016) for identifying protein complexes from the network of protein interactions (the protein-protein interaction (PPI) network). We present a detailed taxonomy of these methods, and comprehensively evaluate them for protein complex identification across a variety of scenarios including the absence of many true interactions and the presence of false-positive interactions (noise) in PPI networks. Based on this evaluation, we highlight challenges faced by the methods, for instance in identifying sparse, sub-, or small complexes and in discerning overlapping complexes, and reveal how a combination of strategies is necessary to accurately reconstruct the entire complexosome.
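The general idea behind many of the surveyed methods, grouping densely interconnected proteins in the PPI graph into candidate complexes, can be sketched as follows (an illustrative stand-in, not one of the book's methods; it assumes the networkx library, and the interaction list is invented):

```python
# Illustrative sketch only (not a method from the book): treat the PPI network
# as a graph and cluster densely connected proteins as candidate complexes.
# Assumes networkx is installed; the interaction list below is invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [
    ("A", "B"), ("A", "C"), ("B", "C"),   # a dense triangle of interactions
    ("D", "E"), ("E", "F"), ("D", "F"),   # a second dense triangle
    ("C", "D"),                           # a sparse bridge between the two groups
]

G = nx.Graph(interactions)

# Modularity-based community detection as a simple stand-in for the complex-
# identification algorithms surveyed in the book; real methods must also cope
# with noisy interactions and overlapping complexes.
for candidate_complex in greedy_modularity_communities(G):
    print(sorted(candidate_complex))
```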
This book addresses the experimental calibration of best-estimate numerical simulation models. The results of measurements and computations are never exact. Therefore, knowing only the nominal values of experimentally measured or computed quantities is insufficient for applications, particularly since the respective experimental and computed nominal values seldom coincide. In the author's view, the objective of predictive modeling is to extract "best estimate" values for model parameters and predicted results, together with "best estimate" uncertainties for these parameters and results. To achieve this goal, predictive modeling combines imprecisely known experimental and computational data, which calls for reasoning on the basis of incomplete, error-rich, and occasionally discrepant information. The customary methods used for data assimilation combine experimental and computational information by minimizing an a priori, user-chosen, "cost functional" (usually a quadratic functional that represents the weighted errors between measured and computed responses). In contrast to these user-influenced methods, the BERRU (Best Estimate Results with Reduced Uncertainties) Predictive Modeling methodology developed by the author relies on the thermodynamics-based maximum entropy principle to eliminate the need for relying on minimizing user-chosen functionals, thus generalizing the "data adjustment" and/or the "4D-VAR" data assimilation procedures used in the geophysical sciences. The BERRU predictive modeling methodology also provides a "model validation metric" which quantifies the consistency (agreement/disagreement) between measurements and computations. This "model validation metric" (or "consistency indicator") is constructed from parameter covariance matrices, response covariance matrices (measured and computed), and response sensitivities to model parameters. Traditional methods for computing response sensitivities are hampered by the "curse of dimensionality," which makes them impractical for applications to large-scale systems that involve many imprecisely known parameters. Reducing the computational effort required for precisely calculating the response sensitivities is paramount, and the comprehensive adjoint sensitivity analysis methodology developed by the author shows great promise in this regard, as shown in this book. After discarding inconsistent data (if any) using the consistency indicator, the BERRU predictive modeling methodology provides best-estimate values for predicted parameters and responses along with best-estimate reduced uncertainties (i.e., smaller predicted standard deviations) for the predicted quantities. Applying the BERRU methodology yields optimal, experimentally validated, "best estimate" predictive modeling tools for designing new technologies and facilities, while also improving on existing ones.
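For concreteness, the user-chosen quadratic cost functional mentioned above typically has the weighted-least-squares form familiar from data adjustment and 3D/4D-VAR data assimilation (the notation here is illustrative, not necessarily the book's own):

```latex
% Typical quadratic cost functional minimized by customary data-assimilation
% methods; notation is illustrative rather than the book's:
%   x   = model parameters (or state), x_b = their prior ("background") values,
%   y   = measured responses, H(x) = computed responses,
%   B,R = parameter and measurement covariance matrices.
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```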
The Complete "Tool Kit" for the Hottest Area in RF/Wireless Design!
In today's society, the professional development of teachers is urgent due to the constant change in working conditions and the impact that information and communication technologies have on teaching practices. Online Learning Communities and Teacher Professional Development: Methods for Improved Education Delivery features innovative applications and solutions useful for teachers in developing knowledge and skills for the integration of technology into everyday teaching practices. This defining collection of field research discusses how technology itself can serve as an important resource in terms of providing arenas for professional development.
Recent research reveals that socioeconomic factors of the neighborhoods where road users live and where pedestrian-vehicle crashes occur are important in determining the severity of the crashes, with the former having a greater influence. Hence, road safety countermeasures, especially those focusing on the road users, should be targeted at these high-risk neighborhoods. Big Data Analytics in Traffic and Transportation Engineering: Emerging Research and Opportunities is an essential reference source that discusses access to transportation and examines vehicle-pedestrian crashes, specifically in relation to socioeconomic factors that influence them, main predictors, factors that contribute to crash severity, and the enhancement of pedestrian safety measures. Featuring research on topics such as public transport, accessibility, and spatial distribution, this book is ideally designed for policymakers, transportation engineers, road safety designers, transport planners and managers, professionals, academicians, researchers, and public administrators.
As today's world continues to advance, Artificial Intelligence (AI) has become a staple of technological development and has driven the advancement of numerous professional industries. An application within AI that has gained attention is machine learning. Machine learning uses statistical techniques and algorithms to give computer systems the ability to learn from data, and its popularity has spread through many trades. Understanding this technology and its countless implementations is pivotal for scientists and researchers across the world. The Handbook of Research on Emerging Trends and Applications of Machine Learning provides a high-level understanding of various machine learning algorithms along with modern tools and techniques using Artificial Intelligence. In addition, this book explores the critical role that machine learning plays in a variety of professional fields including healthcare, business, and computer science. While highlighting topics including image processing, predictive analytics, and smart grid management, this book is ideally designed for developers, data scientists, business analysts, information architects, finance agents, healthcare professionals, researchers, retail traders, professors, and graduate students seeking current research on the benefits, implementations, and trends of machine learning.
This book provides a comprehensive guide to the state-of-the-art in cardiovascular computing and highlights novel directions and challenges in this constantly evolving multidisciplinary field. The topics covered span a wide range of methods and clinical applications of cardiovascular computing, including advanced technologies for the acquisition and analysis of signals and images, cardiovascular informatics, and mathematical and computational modeling.
During the last two decades, computer and information technologies have forced great changes in the ways businesses manage operations to meet the desired quality of products and services, customer demands, competition, and other challenges.
The expansive growth and use of the Internet in recent years has led to computational networking and an increased use of e-collaborative technologies, opening up many possibilities, including collaboration on tasks from remote locations. Interdisciplinary Perspectives on E-Collaboration: Emerging Trends and Applications focuses on e-collaboration technologies that enable group-based interaction, and the impact that those technologies have on group work. A defining body of research, this reference addresses a range of e-collaboration topics, including interdisciplinary perspectives on e-collaboration and adaptation and creativity in e-collaboration.
This is an information science reference. Distance learning technologies have reshaped the diffusion of communication within the educational system. Within this expanding field, the possibilities for an interactive, cross-boundary education are endless. Strategic Applications of Distance Learning Technologies provides tactical uses of distance education technologies to assist instructors and researchers in their quest to provide a progressive, alternative approach to traditional education techniques. This collection of advanced research incorporates global challenges and opportunities of technology integration while outlining strategies for distance learning within developing countries.
This book highlights the emerging field of intelligent computing and developing smart systems. It includes chapters discussing the outcomes of challenging research related to distributed computing, smart machines and their security, and also covers next-generation communication techniques and the networking technologies that have the potential to build the future communication infrastructure. Bringing together computing, communications and other aspects of intelligent and smart computing, it contributes to developing a roadmap for future research on intelligent systems.