This Springer book provides a platform for chapters that discuss prospective developments and innovative ideas in artificial intelligence and machine learning techniques for the diagnosis of COVID-19. COVID-19 poses a huge challenge to humanity and the medical sciences. As of this writing, no medical solution (vaccine) has been found. Globally, however, we continue to manage our work, communications, analytics, and predictions through advances in data science, communication technologies (5G and the Internet), and AI. Research advances in data science, AI, machine learning, mobile apps, and related fields may therefore allow us to carry on and live safely until a medical solution such as a vaccine is found. Eleven chapters were selected after a rigorous review process. Each chapter demonstrates research contributions and research novelty, and each group of authors had to fulfill strict requirements.
This book provides a comprehensive set of characterization, prediction, optimization, evaluation, and evolution techniques for a diagnosis system for fault isolation in large electronic systems. Readers with a background in electronics design or system engineering can use this book as a reference to derive insightful knowledge from data analysis and use this knowledge as guidance for designing reasoning-based diagnosis systems. Moreover, readers with a background in statistics or data analytics can use this book as a practical case study for adapting data mining and machine learning techniques to electronic system design and diagnosis. This book identifies the key challenges in reasoning-based, board-level diagnosis system design and presents the solutions and corresponding results that have emerged from leading-edge research in this domain. It covers topics ranging from highly accurate fault isolation and adaptive fault isolation to diagnosis-system robustness assessment, system performance analysis and evaluation, knowledge discovery, and knowledge transfer. With its emphasis on these topics, the book provides an in-depth and broad view of reasoning-based fault diagnosis system design.
* Explains and applies optimized techniques from the machine-learning domain to solve the fault diagnosis problem in the realm of electronic system design and manufacturing;
* Demonstrates techniques based on industrial data and feedback from an actual manufacturing line;
* Discusses practical problems, including diagnosis accuracy, diagnosis time cost, evaluation of diagnosis systems, handling of missing syndromes in diagnosis, and the need for fast diagnosis-system development.
Jump-start your career as a data scientist: learn to develop datasets for exploration, analysis, and machine learning. SQL for Data Scientists: A Beginner's Guide for Building Datasets for Analysis is a resource dedicated to the Structured Query Language (SQL) and the dataset design skills that data scientists use most. Aspiring data scientists will learn how to construct datasets for exploration, analysis, and machine learning. You will also discover how to approach query design and develop SQL code to extract data insights while avoiding common pitfalls. You may be one of many people entering the field of data science from a range of professions and educational backgrounds, such as business analytics, social science, physics, economics, and computer science. Like many of them, you may have conducted analyses using spreadsheets as data sources, but never retrieved and engineered datasets from a relational database using SQL, a programming language designed for managing databases and extracting data. This guide for data scientists differs from other instructional guides on the subject: it doesn't cover SQL broadly. Instead, you'll learn the subset of SQL skills that data analysts and data scientists use frequently, and you'll gain practical advice and direction on how to think about constructing your dataset.
* Gain an understanding of relational database structure, query design, and SQL syntax
* Develop queries to construct datasets for use in applications like interactive reports and machine learning algorithms
* Review strategies and approaches so you can design analytical datasets
* Practice your techniques with the provided database and SQL code
In this book, author Renee Teate shares knowledge gained during a 15-year career working with data, in roles ranging from database developer to data analyst to data scientist. She guides you through SQL code and dataset design concepts from an industry practitioner's perspective, moving your data scientist career forward!
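The kind of dataset-building query described above can be sketched in a few lines. Here is a minimal illustration using Python's built-in sqlite3 module; the schema, table names, and data are hypothetical, not taken from the book:

```python
import sqlite3

# Build a tiny in-memory relational database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'east'), (2, 'west');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 10.0);
""")

# A typical dataset-construction query: join related tables, then
# aggregate to one row per entity of interest.
rows = conn.execute("""
    SELECT c.region,
           COUNT(o.id)   AS n_orders,
           SUM(o.amount) AS total_spend
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()

for region, n_orders, total_spend in rows:
    print(region, n_orders, total_spend)
```

The query joins two tables and aggregates to one row per group, which is the typical shape of an analysis-ready dataset.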
Data is everywhere - it's just not very well connected, which makes it super hard to relate dataset to dataset. Using graphs as the underlying glue, you can readily join data together and create navigation paths across diverse sets of data. Add Elixir, with its awesome power of concurrency, and you'll soon be mastering data networks. Learn how different graph models can be accessed and used from within Elixir and how you can build a robust semantics overlay on top of graph data structures. We'll start from the basics and examine the main graph paradigms. Get ready to embrace the world of connected data! Graphs provide an intuitive and highly flexible means for organizing and querying huge amounts of loosely coupled data items. These data networks, or graphs in math speak, are typically stored and queried using graph databases. Elixir, with its noted support for fault tolerance and concurrency, stands out as a language eminently suited to processing sparsely connected and distributed datasets. Using Elixir and graph-aware packages in the Elixir ecosystem, you'll easily be able to fit your data to graphs and networks, and gain new information insights.
* Build a testbed app for comparing native graph data with external graph databases.
* Develop a set of applications under a single umbrella app to drill down into graph structures.
* Build graph models in Elixir, and query graph databases of various stripes - using Cypher and Gremlin with property graphs and SPARQL with RDF graphs.
* Transform data from one graph modeling regime to another.
* Understand why property graphs are especially good at graph traversal problems, while RDF graphs shine at integrating different semantic models and can scale up to web proportions.
* Harness the outstanding power of concurrent processing in Elixir to work with distributed graph datasets and manage data at scale.
What You Need: To follow along with the book, you should have Elixir 1.10+ installed. The book will guide you through setting up an umbrella application for a graph testbed using a variety of graph databases, for which Java SDK 8+ is generally required. Instructions for installing the graph databases are given in an appendix.
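To make the graph traversal idea above concrete in a language-neutral way (the book itself works in Elixir), here is a minimal Python sketch of a property-graph-style structure and a breadth-first path query; the node names and edges are purely illustrative:

```python
from collections import deque

# A minimal in-memory graph: nodes with labels, edges as adjacency lists.
# (Illustrative stand-in for the graph databases the book queries.)
nodes = {"alice": {"label": "Person"},
         "bob":   {"label": "Person"},
         "carol": {"label": "Person"}}
edges = {"alice": ["bob"], "bob": ["carol"], "carol": []}

def shortest_path(start, goal):
    """Breadth-first search - the kind of traversal property graphs excel at."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no navigation path exists

print(shortest_path("alice", "carol"))
```

A graph query language such as Cypher expresses the same traversal declaratively; the point here is only the underlying navigation-path idea.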
This book presents the proceedings of the International Conference on Intelligent Systems and Networks (ICISN 2022), held in Hanoi, Vietnam. It includes peer-reviewed, high-quality articles on intelligent systems and networks. It brings together professionals and researchers in the area and presents a platform for the exchange of ideas and for fostering future collaboration. The topics covered in this book include: foundations of computer science; computational intelligence; language and speech processing; software engineering and software development methods; wireless communications and signal processing for communications; electronics, IoT and sensor systems, and embedded systems; etc.
This book constitutes the refereed proceedings of the 22nd International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2022, which took place in Warsaw, Poland, in September 2022; the event was sponsored by IFIP WG 5.4. The 39 full papers presented were carefully reviewed and selected from 43 submissions. They are organized in the following thematic sections: new perspectives of TRIZ; AI in systematic innovation; systematic innovations supporting IT and AI; TRIZ applications; TRIZ education and ecosystem.
Environmental information systems (EIS) are concerned with the management of data about the soil, the water, the air, and the species in the world around us. This first textbook on the topic gives a conceptual framework for EIS by structuring the data flow into four phases: data capture, storage, analysis, and metadata management. This flow corresponds to a complex aggregation process that gradually transforms the incoming raw data into concise documents suitable for high-level decision support. All relevant concepts are covered, including statistical classification, data fusion, uncertainty management, knowledge-based systems, GIS, spatial databases, multidimensional access methods, object-oriented databases, simulation models, and Internet-based information management. Several case studies present EIS in practice.
This book presents recent advances in knowledge discovery in databases (KDD), with a focus on the areas of market basket databases, time-stamped databases, and multiple related databases. Various interesting and intelligent algorithms for data mining tasks are reported. A large number of association measures are presented, which play significant roles in decision support applications. The book presents, discusses, and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, and local pattern analysis.
The growth of the Internet and the availability of enormous volumes of data in digital form have generated intense interest in techniques for assisting the user in locating data of interest. The Internet has over 350 million pages of data and is expected to reach over one billion pages by the year 2000. Buried on the Internet are both valuable nuggets for answering questions and large quantities of information the average person does not care about. The Digital Library effort is also progressing, with the goal of migrating from the traditional book environment to a digital library environment. Information Retrieval Systems: Theory and Implementation provides a theoretical and practical explanation of the latest advancements in information retrieval and their application to existing systems. It takes a system approach, discussing all aspects of an information retrieval system. The importance of the Internet and its associated hypertext-linked structure is put into perspective as a new type of information retrieval data structure. The total system approach also includes discussion of the human interface and the importance of information visualization for the identification of relevant information. The theoretical metrics used to describe information systems are expanded to discuss their practical application in the uncontrolled environment of real-world systems. Information Retrieval Systems: Theory and Implementation is suitable as a textbook for a graduate-level course on information retrieval, and as a reference for researchers and practitioners in industry.
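To make the notion of ranked retrieval concrete, here is a toy tf-idf scoring sketch in Python; the documents and the simple weighting scheme are illustrative assumptions, not taken from the book:

```python
import math
from collections import Counter

# Three toy documents (illustrative only).
docs = {"d1": "gold silver truck",
        "d2": "shipment of gold damaged in a fire",
        "d3": "delivery of silver arrived in a silver truck"}

def tf_idf_scores(query):
    """Score each document against the query with a simple tf-idf weight."""
    n = len(docs)
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter()                      # document frequency per term
    for terms in tokenized.values():
        df.update(set(terms))
    scores = {}
    for d, terms in tokenized.items():
        tf = Counter(terms)             # term frequency within the document
        scores[d] = sum(tf[t] * math.log(n / df[t])
                        for t in query.split() if df.get(t))
    return scores

scores = tf_idf_scores("silver truck")
best = max(scores, key=scores.get)
print(best)
```

Real systems add normalization, stemming, and index structures, but the core ranking idea is this weighted term-matching score.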
Service-oriented computing has become one of the predominant factors in current IT research and development. Web services seem to be the middleware solution of the future for highly interoperable distributed software solutions. In parallel, research on the Semantic Web provides the results required to exploit distributed machine-processable data. To combine these two research lines into industrial-strength applications, a number of research projects have been set up by organizations like W3C and the EU. Dieter Fensel and his coauthors deliver a profound introduction to one of the most promising approaches, the Web Service Modeling Ontology (WSMO). After a brief presentation of the underlying basic technologies and standards of the World Wide Web, the Semantic Web, and Web Services, they detail all the elements of WSMO from basic concepts to possible applications in e-commerce, e-government and e-banking, and they also describe its relation to other approaches like OWL-S or WSDL-S. While many of the related technologies and standards are still under development, this book already offers both a broad conceptual introduction and lots of pointers to future application scenarios for researchers in academia and industry as well as for developers of distributed Web applications.
This book, which has been in the making for some eighteen years, would never have begun were it not for Dr. David Dewhirst in 1976 kindly having shown the author a packet of papers in the archives of the Cambridge Observatories. These letters and miscellaneous papers of Fearon Fallows sparked an interest in the history of the Royal Observatory at the Cape of Good Hope which, after the diversion of producing several books on later phases of the Observatory, has finally resulted in a detailed study of the origin and first years of the Observatory's life. Publication of this book coincides with the 175th anniversary of the founding of the Royal Observatory, C.G.H. Observatories are built for the use of astronomers. They are built through astronomers, architects, engineers and contractors acting in concert (if not always in harmony). They are constructed, with whatever techniques and skills are available, from bricks, stones and mortar; but their construction may take a toll of personal relationships, patience, and flesh and blood.
This book introduces the properties of conservative extensions of First Order Logic (FOL) to new Intensional First Order Logic (IFOL). This extension allows for intensional semantics to be used for concepts, thus affording new and more intelligent IT systems. Insofar as it is conservative, it preserves software applications and constitutes a fundamental advance relative to the current RDB databases, Big Data with NewSQL, Constraint databases, P2P systems, and Semantic Web applications. Moreover, the many-valued version of IFOL can support the AI applications based on many-valued logics.
This book addresses a range of aging intensity functions, which make it possible to measure and compare aging trends for lifetime random variables. Moreover, they can be used for the characterization of lifetime distributions, also with bounded support. Stochastic orders based on the aging intensities, and their connections with some other orders, are also discussed. To demonstrate the applicability of aging intensity in reliability practice, the book analyzes both real and generated data. The estimated, properly chosen, aging intensity function is mainly recommended to identify data's lifetime distribution, and secondly, to estimate some of the parameters of the identified distribution. Both reliability researchers and practitioners will find the book a valuable guide and source of inspiration.
Creating scientific workflow applications is a very challenging task due to the complexity of the distributed computing environments involved, the complex control and data flow requirements of scientific applications, and the lack of high-level languages and tools support. Particularly, sophisticated expertise in distributed computing is commonly required to determine the software entities to perform computations of workflow tasks, the computers on which workflow tasks are to be executed, the actual execution order of workflow tasks, and the data transfer between them. Qin and Fahringer present a novel workflow language called Abstract Workflow Description Language (AWDL) and the corresponding standards-based, knowledge-enabled tool support, which simplifies the development of scientific workflow applications. AWDL is an XML-based language for describing scientific workflow applications at a high level of abstraction. It is designed in a way that allows users to concentrate on specifying such workflow applications without dealing with either the complexity of distributed computing environments or any specific implementation technology. This research monograph is organized into five parts: overview, programming, optimization, synthesis, and conclusion, and is complemented by an appendix and an extensive reference list. The topics covered in this book will be of interest to both computer science researchers (e.g. in distributed programming, grid computing, or large-scale scientific applications) and domain scientists who need to apply workflow technologies in their work, as well as engineers who want to develop distributed and high-throughput workflow applications, languages and tools.
Most applications generate large datasets: social networking and social influence programs, smart city applications, smart house environments, Cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance and to create a smart environment. The impact on data processing, transfer, and storage is the need to re-evaluate existing approaches and solutions to better answer user needs. A variety of solutions for specific applications and platforms exist, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is essential when designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, green houses, cyber-physical systems, etc.) are reviewed. Most of the current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, and the systems' resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics in different types of systems - Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing - all involving elements of heterogeneity and a large variety of tools and software to manage them. The main role of resource management techniques in this domain is to create suitable frameworks for the development of applications and their deployment in smart environments, with respect to high performance. The book focuses on topics covering algorithms, architectures, management models, high performance computing techniques, and large-scale distributed systems.
Extensive research and development has produced mutation tools for languages such as Fortran, Ada, C, and IDL; empirical evaluations comparing mutation with other test adequacy criteria; empirical evidence and theoretical justification for the coupling effect; and techniques for speeding up mutation testing using various types of high performance architectures. Mutation has received the attention of software developers and testers in such diverse areas as network protocols and nuclear simulation. Mutation Testing for the New Century brings together cutting-edge research results in mutation testing from a wide range of researchers. This book provides answers to key questions related to mutation and raises questions yet to be answered. It is an excellent resource for researchers, practitioners, and students of software engineering.
Praise for the First Edition "A very useful book for self study and reference." "Very well written. It is concise and really packs a lot of material in a valuable reference book." "An informative and well-written book . . . presented in an easy-to-understand style with many illustrative numerical examples taken from engineering and scientific studies." Practicing engineers and scientists often need to use statistical approaches to solve problems in an experimental setting, yet many have little formal training in statistics. Statistical Design and Analysis of Experiments gives such readers a carefully selected, practical background in the statistical techniques that are most useful to experimenters and data analysts who collect, analyze, and interpret data. The First Edition of this now-classic book garnered praise in the field. Now its authors update and revise their text, incorporating readers' suggestions as well as a number of new developments. Statistical Design and Analysis of Experiments, Second Edition emphasizes the strategy of experimentation, data analysis, and the interpretation of experimental results, presenting statistics as an integral component of experimentation from the planning stage to the presentation of conclusions, and giving an overview of the conceptual foundations of modern statistical practice.
Ideal for both students and professionals, this focused and cogent reference has proven to be an excellent classroom textbook with numerous examples. It deserves a place among the tools of every engineer and scientist working in an experimental setting.
This book examines the field of parallel database management systems and illustrates the great variety of solutions based on a shared-storage or a shared-nothing architecture. Constantly dropping memory prices and the desire to operate with low-latency responses on large sets of data paved the way for main memory-based parallel database management systems. However, this area is currently dominated by the shared-nothing approach, which preserves the in-memory performance advantage by processing data locally on each server. The main argument this book makes is that such a unilateral development will cease due to the combination of the following three trends:
a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory on a local server and on a remote server to a single order of magnitude, or even below.
b) Modern storage systems scale gracefully, are elastic, and provide high availability.
c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory.
Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The book demonstrates that the advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.
This book aims to help the reader better understand the importance of data analysis in project management. Moreover, it provides guidance by showing tools, methods, techniques and lessons learned on how to better utilize the data gathered from the projects. First and foremost, insight into the bridge between data analytics and project management aids practitioners looking for ways to maximize the practical value of data procured. The book equips organizations with the know-how necessary to adapt to a changing workplace dynamic through key lessons learned from past ventures. The book's integrated approach to investigating both fields enhances the value of research findings.
Modern information systems differ in essence from their predecessors. They support operations at multiple locations and different time zones, are distributed and network-based, and use multidimensional data analysis, data warehousing, knowledge discovery, knowledge management, mobile computing, and other modern information processing methods. This book considers fundamental issues of modern information systems. It discusses query processing, data quality, data mining, knowledge management, mobile computing, software engineering for information systems construction, and other topics. The book presents research results that are not available elsewhere. With more than 40 contributors, it is a solid source of information about the state of the art in the field of databases and information systems. It is intended for researchers, advanced students, and practitioners who are concerned with the development of advanced information systems.
This book provides a thorough overview of cutting-edge research on electronics applications relevant to industry, the environment, and society at large. It covers a broad spectrum of application domains, from automotive to space and from health to security, while devoting special attention to the use of embedded devices and sensors for imaging, communication and control. The volume is based on the 2021 ApplePies Conference, held online in September 2021, which brought together researchers and stakeholders to consider the most significant current trends in the field of applied electronics and to debate visions for the future. Areas addressed by the conference included information communication technology; biotechnology and biomedical imaging; space; secure, clean and efficient energy; the environment; and smart, green and integrated transport. As electronics technology continues to develop apace, constantly meeting previously unthinkable targets, further attention needs to be directed toward the electronics applications and the development of systems that facilitate human activities. This book, written by industrial and academic professionals, represents a valuable contribution in this endeavor.
The aim of the book is to help students become data scientists. Since this requires a series of courses over a considerable period of time, the book intends to accompany students from the beginning to an advanced understanding of the knowledge and skills that define a modern data scientist. The book presents a comprehensive overview of the mathematical foundations of the programming language R and of its applications to data science.