This book explores the concepts and role of green computing and its recent developments for making the environment sustainable. It focuses on green automation in disciplines such as computer science, nanoscience, information technology, and biochemistry. The book is characterized by descriptions of sustainability and green computing, their relevance to the environment and society, and their applications.
- Presents how to make the environment sustainable through engineering aspects and green computing
- Explores concepts and the role of green computing with recent developments
- Examines green automation linked with various disciplines such as nanoscience, information technology, and biochemistry
- Explains the concepts of green computing linked with a sustainable environment through information technology
This book will be of interest to researchers, libraries, students, and academicians interested in the concepts of green computing linked with green automation through information technology and their impact on the future.
Industrial Applications of Machine Learning shows how machine learning can be applied to address real-world problems in the fourth industrial revolution, and provides the knowledge and tools readers need to build their own solutions grounded in theory and practice. The book introduces the fourth industrial revolution and its current impact on organizations and society. It explores machine learning fundamentals and includes four case studies, each addressing a real-world problem in the manufacturing or logistics domain and approaching the machine learning solution from an application-oriented point of view. The book should be of special interest to researchers working on real-world industrial problems. Features:
- Describes the opportunities, challenges, issues, and trends offered by the fourth industrial revolution
- Provides a user-friendly introduction to machine learning with examples of cutting-edge applications in different industrial sectors
- Includes four case studies addressing real-world industrial problems solved with machine learning techniques
- A dedicated website for the book contains the datasets of the case studies so readers can reproduce the results, laying the groundwork for future problem-solving
- Uses three of the most widespread software packages and programming languages within the engineering and data science communities, namely R, Python, and Weka
Big Data: A Tutorial-Based Approach explores the tools and techniques used to bring about the marriage of structured and unstructured data. It focuses on Hadoop Distributed File System storage and MapReduce processing by covering (i) the tools and techniques of the Hadoop ecosystem, (ii) the Hadoop Distributed File System infrastructure, and (iii) efficient MapReduce processing. The book includes use cases and tutorials to provide an integrated approach that answers the 'What', 'How', and 'Why' of Big Data. Features:
- Identifies the primary drivers of Big Data
- Walks readers through the theory, methods, and technology of Big Data
- Explains how to handle the four V's of Big Data in order to extract value for better business decision making
- Shows how and why data connectors are critical and necessary for agile text analytics
- Includes in-depth tutorials to perform the necessary set-up, installation, configuration, and execution of important tasks
- Explains both the command-line and GUI interfaces to a powerful data exchange tool between Hadoop and legacy RDBMS databases
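The MapReduce model that the book's tutorials center on can be illustrated with a tiny in-memory analogue. This is plain Python standing in for Hadoop; the function names and data are illustrative, not Hadoop APIs:

```python
from collections import defaultdict

def map_phase(lines):
    """Map step: emit (word, 1) pairs, as a word-count mapper would per input split."""
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts per key, mirroring shuffle-and-reduce."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big ideas", "data tools"]
result = reduce_phase(map_phase(lines))
# result == {"big": 2, "data": 2, "ideas": 1, "tools": 1}
```

In Hadoop proper, the map and reduce steps run in parallel across the cluster and the intermediate pairs are shuffled over the network, but the data flow is the same.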
- Presents the knowledge and history of Bitcoin
- Offers recent blockchain applications
- Discusses developing working code for real-world blockchain applications
- Includes many real-life examples
- Covers going from the original Bitcoin protocol to the second-generation Ethereum platform
In this book, the authors first frame the research issues with a motivating scenario, then explore the principles and techniques of the challenging topics, and finally address the issues they raise by developing a series of methodologies. More specifically, the authors study query optimization and tackle query performance prediction for knowledge retrieval; they also handle unstructured data processing and data clustering for knowledge extraction. To optimize queries issued through interfaces against knowledge bases, the authors propose a cache-based optimization layer between consumers and the querying interface that facilitates querying and addresses the latency issue. The cache depends on a novel learning method that considers the querying patterns in individuals' historical queries without requiring knowledge of the knowledge base's backing systems. To predict query performance for appropriate query scheduling, the authors examine the queries' structural and syntactical features and apply multiple widely adopted prediction models. Their feature modelling approach eschews any knowledge requirement on either the querying language or the system. To extract knowledge from unstructured Web sources, the authors examine two kinds of Web sources containing unstructured data: source code from Web repositories and posts in programming question-answering communities. They use natural language processing techniques to pre-process the source code and obtain its natural language elements, then apply traditional knowledge extraction techniques. For the data from programming question-answering communities, the authors make a first attempt towards building a programming knowledge base, starting with the paraphrase identification problem and developing novel features to accurately identify duplicate posts.
For domain-specific knowledge extraction, the authors propose a clustering technique to separate knowledge into different groups. They focus on developing a new clustering algorithm that uses manifold constraints in the optimization task and achieves fast and accurate performance. For each model and approach presented in this dissertation, the authors have conducted extensive experiments to evaluate it using either public datasets or synthetic data they generated.
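The cache-based optimization layer described above can be sketched as a simple LRU cache sitting in front of a query endpoint. This is a minimal illustration with hypothetical names, not the authors' learned, pattern-aware method:

```python
from collections import OrderedDict

class QueryCache:
    """Minimal LRU cache between a client and a query endpoint (illustrative only)."""
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, query, run_query):
        if query in self.store:
            self.store.move_to_end(query)   # mark as most recently used
            return self.store[query]        # cache hit: no endpoint round-trip
        result = run_query(query)           # cache miss: execute against the endpoint
        self.store[query] = result
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
        return result

calls = []
def backend(q):                             # stand-in for the remote knowledge base
    calls.append(q)
    return f"rows for {q}"

cache = QueryCache(capacity=2)
cache.get("SELECT ?s WHERE { ?s ?p ?o }", backend)
cache.get("SELECT ?s WHERE { ?s ?p ?o }", backend)
# backend executed once; the repeated query is served from the cache
```

The authors' contribution goes further: rather than a fixed eviction policy, their cache learns which queries to retain from each consumer's historical querying patterns.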
This book compares four parameters of problems in arbitrary information systems: complexity of problem representation and complexity of deterministic, nondeterministic, and strongly nondeterministic decision trees for problem solving. Deterministic decision trees are widely used as classifiers, as a means of knowledge representation, and as algorithms. Nondeterministic (strongly nondeterministic) decision trees can be interpreted as systems of true decision rules that cover all objects (objects from one decision class). This book develops tools for the study of decision trees, including bounds on complexity and algorithms for construction of decision trees for decision tables with many-valued decisions. It considers two approaches to the investigation of decision trees for problems in information systems: local, when decision trees can use only attributes from the problem representation; and global, when decision trees can use arbitrary attributes from the information system. For both approaches, it describes all possible types of relationships among the four parameters considered and discusses the algorithmic problems related to decision tree optimization. The results presented are useful for researchers who apply decision trees and rules to algorithm design and to data analysis, especially those working in rough set theory, test theory and logical analysis of data. This book can also be used as the basis for graduate courses.
Epidemic trend analysis, timeline progression, prediction, and recommendation are critical for initiating effective public health control strategies, and AI and data analytics play an important role on the epidemiological, diagnostic, and clinical fronts. The focus of this book is data analytics for COVID-19, including an overview of COVID-19 as an epidemic/pandemic, data processing, and knowledge extraction. Data sources, storage, and platforms are discussed, along with data models, their performance, and different big data techniques, tools, and technologies. The book also addresses the challenges in applying analytics to pandemic scenarios, case studies, and control strategies. Aimed at data analysts, epidemiologists, and associated researchers, this book:
- discusses challenges of AI models for big data analytics in pandemic scenarios;
- explains how different big data analytics techniques can be implemented;
- provides a set of recommendations to minimize the infection rate of COVID-19;
- summarizes various techniques of data processing and knowledge extraction;
- enables users to understand the big data analytics techniques required for prediction purposes.
Enterprise Systems have been used for many years to integrate technology with the management of an organization, but rapid technological disruptions are now creating new challenges and opportunities that require urgent consideration. This book reappraises the implementation and management of Enterprise Systems in the digital age and investigates the vital link between business processes, information technology, and the Internet for an organization's competitive advantage and success. The book primarily focuses on the implementation, operation, management, and integration of Enterprise Systems with fast-emerging disruptive technologies such as blockchains, big data, cryptocurrencies, artificial intelligence, cloud computing, data mining, and data analytics. These disruptive technologies are now becoming mainstream, and the book proposes several innovations that organizations need to adopt to remain competitive within this rapidly changing landscape. In addition, it examines Enterprise Systems, their components, architecture, and applications, and enlightens readers on the benefits and shortcomings of implementing them. The book draws on primary research on organizations, includes case studies, and benchmarks ERP implementation against international best practice.
This comprehensive book unveils the working relationship of blockchain and fog/edge computing. The contents have been designed so that the reader will not only understand blockchain and fog/edge computing individually but also their co-existence and their collaborative power to solve a range of versatile problems. The first part of the book covers fundamental concepts and applications of blockchain-enabled fog and edge computing, including the Internet of Things, the Tactile Internet, Smart Cities, and E-challan in the Internet of Vehicles. The second part covers security- and privacy-related issues of blockchain-enabled fog and edge computing, including hardware-primitive-based Physical Unclonable Functions; secure management systems; the security of edge and cloud in the presence of blockchain; secure storage in fog using blockchain; and using differential privacy for edge-based Smart Grid over blockchain. This book is written for students, computer scientists, researchers, and developers who wish to work in the domain of blockchain and fog/edge computing. One of its unique features is its treatment of the issues, challenges, and future research directions associated with the blockchain-enabled fog and edge computing paradigm. We hope readers will consider this book a valuable addition to the domain of blockchain and fog/edge computing.
Automatic Indexing and Abstracting of Document Texts summarizes the latest techniques of automatic indexing and abstracting, and the results of their application. It also places the techniques in the context of the study of text, manual indexing and abstracting, and the use of the indexing descriptions and abstracts in systems that select documents or information from large collections. Important sections of the book consider the development of new techniques for indexing and abstracting. The techniques involve the following: using text grammars, learning of the themes of the texts including the identification of representative sentences or paragraphs by means of adequate cluster algorithms, and learning of classification patterns of texts. In addition, the book is an attempt to illuminate new avenues for future research. Automatic Indexing and Abstracting of Document Texts is an excellent reference for researchers and professionals working in the field of content management and information retrieval.
Diabetes mellitus (DM), commonly referred to as diabetes, is a metabolic disorder in which high blood sugar levels persist over a prolonged period. A lack of sufficient insulin causes excess sugar in the blood, so glucose levels in diabetic patients are higher than in healthy individuals. Symptoms include frequent urination, increased hunger, increased thirst, and high blood sugar. There are three main types of diabetes: type-1, type-2, and gestational diabetes. Type-1 DM occurs when the immune system mistakenly attacks and destroys the beta cells, and type-2 DM occurs due to insulin resistance. Gestational DM occurs in women during pregnancy, when pregnancy hormones block insulin. Among these three types, type-2 DM is the most prevalent, affecting many millions of people across the world. Classification and predictive systems are reliable tools in the health care sector for exploring hidden patterns in patient data. These systems aid medical professionals in enhancing their diagnosis, prognosis, and treatment planning. Even a small improvement in classifier predictive accuracy matters for medical diagnosis, where mistakes can cause serious harm to a patient's life. Hence, we need a more accurate classification system for the prediction of type-2 DM. Although most existing classification algorithms are efficient, they fail to provide good accuracy at low computational cost. In this book, we propose various classification algorithms using soft computing techniques such as Neural Networks (NNs), Fuzzy Systems (FS), and Swarm Intelligence (SI). The experimental results demonstrate that these algorithms are able to produce high classification accuracy at low computational cost. The contributions presented in this book address the following objectives using soft computing approaches for the identification of diabetes mellitus.
- Introducing an optimized RBFN model called Opt-RBFN
- Designing a cost-effective rule miner called SM-RuleMiner for type-2 diabetes diagnosis
- Generating more interpretable fuzzy rules for accurate diagnosis of type-2 diabetes using RST-BatMiner
- Developing accurate cascade ensemble frameworks called Diabetes-Network for type-2 diabetes diagnosis
- Proposing a multi-level ensemble framework called Dia-Net for improving the classification accuracy of type-2 diabetes diagnosis
- Designing an intelligent diabetes risk score model called Intelli-DRM to estimate the severity of diabetes mellitus
This book serves as a reference for scientific investigators who need to analyze disease data and/or numerical data, as well as researchers developing methodology in the soft computing field. It may also be used as a textbook for a graduate and postgraduate level course in machine learning or soft computing.
The idea behind this book is to simplify the journey of aspiring readers and researchers to understand Big Data, IoT, and Machine Learning. It also includes various real-time/offline applications and case studies in the fields of engineering, computer science, information security, and cloud computing using modern tools. The book consists of two sections: Section I covers topics related to applications of machine learning, and Section II addresses issues around Big Data, the cloud, and the Internet of Things. This brings all the related technologies into a single source so that undergraduate and postgraduate students, researchers, academicians, and people in industry can easily understand them. Features:
- Addresses the complete data science technologies workflow
- Explores basic and high-level concepts and services, serving as a manual for those in industry while helping beginners understand both basic and advanced aspects of machine learning
- Covers data processing and security solutions in IoT and Big Data applications
- Offers adaptive, robust, scalable, and reliable applications to develop solutions for day-to-day problems
- Presents security issues and data migration techniques of NoSQL databases
This book is about the rise of data as a driver of innovation and economic growth. It charts the evolution of business data as a valuable resource and explores some of the key business, economic and social issues surrounding the data-driven revolution we are currently going through. Readers will gain an understanding of the historical underpinnings of the data business and why the collection and use of data has been driven by commercial needs. Readers will also gain insights into the rise of the modern data-driven technology giants, their business models and the reasons for their success. Alongside this, some of the key social issues including privacy are considered and the challenges these pose to policymakers and regulators. Finally, the impact of pervasive computing and the Internet of Things (IoT) is explored in the context of the new sources of data that are being generated. This book is useful for students and practitioners wanting to better understand the origins and drivers of the current technological revolution and the key role that data plays in innovation and business success.
Social Networks with Rich Edge Semantics introduces a new mechanism for representing social networks in which pairwise relationships can be drawn from a range of realistic possibilities, including different types of relationships, different strengths in the directions of a pair, positive and negative relationships, and relationships whose intensities change with time. For each possibility, the book shows how to model the social network using spectral embedding. It also shows how to compose the techniques so that multiple edge semantics can be modeled together, and the modeling techniques are then applied to a range of datasets. Features:
- Introduces the reader to difficulties with current social network analysis, and the need for richer representations of relationships among nodes, including accounting for intensity, direction, type, positive/negative, and changing intensities over time
- Presents a novel mechanism to allow social networks with qualitatively different kinds of relationships to be described and analyzed
- Includes extensions to the important technique of spectral embedding, shows that they are mathematically well motivated, and proves that their results are appropriate
- Shows how to exploit embeddings to understand structures within social networks, including subgroups, positional significance, link or edge prediction, consistency of role in different contexts, and net flow of properties through a node
- Illustrates the use of the approach for real-world problems involving online social networks, criminal and drug smuggling networks, and networks where the nodes are themselves groups
Suitable for researchers and students in social network research, data science, statistical learning, and related areas, this book will help to provide a deeper understanding of real-world social networks.
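As a rough illustration of the spectral embedding at the heart of the approach, here is a minimal unnormalized-Laplacian sketch on a toy graph (assuming NumPy is available; this is not the book's extended, edge-semantics-aware technique):

```python
import numpy as np

# Toy network: two tightly knit triangles {0,1,2} and {3,4,5} joined by one edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A   # unnormalized graph Laplacian D - A
vals, vecs = np.linalg.eigh(L)   # eigenpairs, sorted by ascending eigenvalue
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue

# The sign of the Fiedler vector separates the two communities:
groups = fiedler > 0
# nodes 0-2 fall on one side and nodes 3-5 on the other
```

The book's contribution is to extend this basic machinery so that typed, directed, signed, and time-varying edges can all be embedded in a principled way.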
Biometrics in a Data Driven World: Trends, Technologies, and Challenges aims to inform readers about the modern applications of biometrics in the context of a data-driven society, to familiarize them with the rich history of biometrics, and to provide them with a glimpse into the future of biometrics. The first section of the book discusses the fundamentals of biometrics and provides an overview of common biometric modalities, namely face, fingerprints, iris, and voice. It also discusses the history of the field, and provides an overview of emerging trends and opportunities. The second section of the book introduces readers to a wide range of biometric applications. The next part of the book is dedicated to the discussion of case studies of biometric modalities currently used on mobile applications. As smartphones and tablet computers are rapidly becoming the dominant consumer computer platforms, biometrics-based authentication is emerging as an integral part of protecting mobile devices against unauthorized access, while enabling new and highly popular applications, such as secure online payment authorization. The book concludes with a discussion of future trends and opportunities in the field of biometrics, which will pave the way for advancing research in the area of biometrics, and for the deployment of biometric technologies in real-world applications. The book is designed for individuals interested in exploring the contemporary applications of biometrics, from students to researchers and practitioners working in this field. Both undergraduate and graduate students enrolled in college-level security courses will also find this book to be an especially useful companion.
Fully updated and expanded from the previous edition, A Practical Guide to Database Design, Second Edition is intended for those involved in the design or development of a database system or application. It begins by illustrating how to develop a Third Normal Form data model where data is placed "where it belongs". The reader is taken step-by-step through the Normalization process, first using a simple and then a more complex set of data requirements. Next, usage analysis for each Logical Data Model is reviewed and a Physical Data Model is produced that will satisfy user performance requirements. Finally, each Physical Data Model is used as input to create databases using both Microsoft Access and SQL Server. The book next shows how to use an industry-leading data modeling tool to define and manage logical and physical data models, and how to create Data Definition Language statements to create or update a database running in SQL Server, Oracle, or another type of DBMS. One chapter is devoted to illustrating how Microsoft Access can be used to create user interfaces to review and update underlying tables in that database as well as tables residing in SQL Server or Oracle. For users involved with Cyber activity or support, one chapter illustrates how to extract records of interest from a log file using PERL, then shows how to load these extracted records into one or more SQL Server "tracking" tables, adding status flags for analysts to use when reviewing activity of interest. These status flags are used to mark collected records as "Reviewed", "Pending" (currently being analyzed), and "Resolved". The last chapter then shows how to build a web-based GUI using PHP to query these tracking tables and allow an analyst to review new activity, flag items that need to be investigated, and finally flag items that have been investigated and resolved. Note that the book has complete code/scripts for both PERL and the PHP GUI.
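The log-extraction workflow described above can be sketched as follows. The book's own examples use PERL and SQL Server, so this Python/SQLite analogue, with a made-up log format and field names, is purely illustrative:

```python
import re
import sqlite3

# Hypothetical log lines for illustration; only "records of interest"
# (failed logins here) are extracted and loaded into the tracking table.
LOG_LINES = [
    "2023-01-05 10:02:11 LOGIN_FAIL user=alice src=10.0.0.7",
    "2023-01-05 10:02:15 LOGIN_OK   user=bob   src=10.0.0.9",
]
PATTERN = re.compile(r"(\S+ \S+) (LOGIN_FAIL)\s+user=(\S+)\s+src=(\S+)")

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tracking (
    ts TEXT, event TEXT, user TEXT, src TEXT,
    status TEXT DEFAULT 'Pending')""")  # new records start as 'Pending'

for line in LOG_LINES:
    m = PATTERN.match(line)
    if m:  # load only matching records of interest
        con.execute(
            "INSERT INTO tracking (ts, event, user, src) VALUES (?, ?, ?, ?)",
            m.groups())

# An analyst working the queue then updates the status flag:
con.execute("UPDATE tracking SET status = 'Reviewed' WHERE user = 'alice'")
rows = con.execute("SELECT user, status FROM tracking").fetchall()
# rows == [('alice', 'Reviewed')]
```

In the book the final step is a PHP GUI over these tables; the queue semantics of the Pending/Reviewed/Resolved flags are the same.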
From the Foreword: "[This book] provides a comprehensive overview of the fundamental concepts in healthcare process management as well as some advanced topics in the cutting-edge research of the closely related areas. This book is ideal for graduate students and practitioners who want to build the foundations and develop novel contributions in healthcare process modeling and management." --Christopher Yang, Drexel University Process modeling and process management are transversal disciplines which have earned more and more relevance over the last two decades. Several research areas are involved within these disciplines, including database systems, database management, information systems, ERP, operations research, formal languages, and logic. Process Modeling and Management for Healthcare provides the reader with an in-depth analysis of what process modeling and process management techniques can do in healthcare, the major challenges faced, and those challenges remaining to be faced. The book features contributions from leading authors in the field. The book is structured into two parts. Part one covers fundamentals and basic concepts in healthcare. It explores the architecture of a process management environment, the flexibility of a process model, and the compliance of a process model. It also features a real application domain of patients suffering from age-related macular degeneration. Part two of the book includes advanced topics from the leading frontiers of scientific research on process management and healthcare. This section of the book covers software metrics to measure features of the process model as a software artifact. It includes process analysis to discover the formal properties of the process model prior to deploying it in real application domains. Abnormal situations and exceptions, as well as temporal clinical guidelines, are also presented in depth.
Disk-Based Algorithms for Big Data is a product of recent advances in the areas of big data, data analytics, and the underlying file systems and data management algorithms used to support the storage and analysis of massive data collections. The book discusses hard disks and their impact on data management, since Hard Disk Drives continue to be common in large data clusters. It also explores ways to store and retrieve data through primary and secondary indices. This includes a review of different in-memory sorting and searching algorithms that build a foundation for more sophisticated on-disk approaches like mergesort, B-trees, and extendible hashing. Following this introduction, the book transitions to more recent topics, including advanced storage technologies like solid-state drives and holographic storage; peer-to-peer (P2P) communication; large file systems and query languages like Hadoop/HDFS, Hive, Cassandra, and Presto; and NoSQL databases like Neo4j for graph structures and MongoDB for unstructured document data. Designed for senior undergraduate and graduate students, as well as professionals, this book is useful for anyone interested in understanding the foundations and advances in big data storage and management, and big data analytics. About the Author Dr. Christopher G. Healey is a tenured Professor in the Department of Computer Science and the Goodnight Distinguished Professor of Analytics in the Institute for Advanced Analytics, both at North Carolina State University in Raleigh, North Carolina. He has published over 50 articles in major journals and conferences in the areas of visualization, visual and data analytics, computer graphics, and artificial intelligence. He is a recipient of the National Science Foundation's CAREER Early Faculty Development Award and the North Carolina State University Outstanding Instructor Award.
He is a Senior Member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and an Associate Editor of ACM Transactions on Applied Perception, the leading worldwide journal on the application of human perception to issues in computer science.
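The on-disk mergesort the book builds up to (sort runs that fit in memory, then k-way merge them) can be sketched with lists standing in for sorted run files on disk; this is an illustrative analogue, not the book's implementation:

```python
import heapq

def external_mergesort(records, run_size):
    """Sort data too large for memory in two passes:
    pass 1 sorts fixed-size runs (normally written to temp files on disk),
    pass 2 streams a k-way merge over all runs."""
    runs = [sorted(records[i:i + run_size])
            for i in range(0, len(records), run_size)]  # pass 1: sorted runs
    return list(heapq.merge(*runs))                     # pass 2: k-way merge

data = [9, 3, 7, 1, 8, 2, 6]
out = external_mergesort(data, run_size=3)
# out == [1, 2, 3, 6, 7, 8, 9]
```

The disk-based version reads and writes each record only a small number of times, which is the point: I/O, not comparisons, dominates the cost on rotating media.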
Event mining encompasses techniques for automatically and efficiently extracting valuable knowledge from historical event/log data. The field, therefore, plays an important role in data-driven system management. Event Mining: Algorithms and Applications presents state-of-the-art event mining approaches and applications with a focus on computing system management. The book first explains how to transform log data in disparate formats and contents into a canonical form as well as how to optimize system monitoring. It then shows how to extract useful knowledge from data. It describes intelligent and efficient methods and algorithms to perform data-driven pattern discovery and problem determination for managing complex systems. The book also discusses data-driven approaches for the detailed diagnosis of a system issue and addresses the application of event summarization in Twitter messages (tweets). Understanding the interdisciplinary field of event mining can be challenging as it requires familiarity with several research areas and the relevant literature is scattered in diverse publications. This book makes it easier to explore the field by providing both a good starting point for readers not familiar with the topics and a comprehensive reference for those already working in this area.
Optimize Your Chemical Database Design and Use of Relational Databases in Chemistry helps programmers and users improve their ability to search and manipulate chemical structures and information, especially when using chemical database "cartridges". It illustrates how the organizational, data integrity, and extensibility properties of relational databases are best utilized when working with chemical information. The author facilitates an understanding of existing relational database schemas and shows how to design new schemas that contain tables of data and chemical structures. By using database extension cartridges, he provides methods to properly store and search chemical structures. He explains how to download and install a fully functioning database using free, open-source chemical extension cartridges within PostgreSQL. The author also discusses how to access a database on a computer network using both new and existing applications. Through examples of good database design, this book shows you that relational databases are the best way to store, search, and operate on chemical information.
Learning analytics is one of the most important research issues in the field of educational technology. By analyzing logs and records in educational databases and systems, it can provide useful information to teachers, learners, and decision makers - information which they can use to improve teaching strategies, learning performances, and educational policies. However, it is a great challenge for most researchers to efficiently analyze educational data in a meaningful way. This book presents various learning analytics approaches and applications, including the process of determining the coding scheme, analyzing the collected data, and interpreting the findings. This book was originally published as a special issue of Interactive Learning Environments.
These proceedings gather cutting-edge papers exploring the principles, techniques, and applications of Microservices in Big Data Analytics. The ICETCE-2019 is the latest installment in a successful series of annual conferences that began in 2011. Every year since, it has significantly contributed to the research community in the form of numerous high-quality research papers. This year, the conference's focus was on the highly relevant area of Microservices in Big Data Analytics.
This book explores recent advances in the Internet of Things (IoT) via advanced technologies and provides an overview of most aspects relevant to advancing secure, distributed, decentralized blockchain technology in the Internet of Things, its applications, and industrial IoT. The book provides an in-depth analysis of the step-by-step evolution of IoT to create change by enhancing the productivity of industries. It introduces how connected things, data, and their communication (data sharing) environment build a transparent, reliable, secure environment for people, processes, systems, and services with the help of blockchain technology.
This edited book adopts a cognitive perspective to provide breadth and depth to state-of-the-art research related to understanding, analyzing, predicting, and improving one of the most prominent and important classes of behavior of modern humans: information search. It is timely, as the broader research areas of cognitive computing and cognitive technology have recently attracted much attention, and there has been a surge of interest in developing systems and technology that are more compatible with human cognitive abilities. Divided into three interlocking sections, the first introduces the foundational concepts of information search from a cognitive computing perspective to highlight the research questions and approaches shared among the contributing authors; relevant concepts from psychology, information science, and computing science are addressed. The second section discusses methods and tools used to understand and predict information search behavior, and how the cognitive perspective can provide unique insights into the complexities of the behavior in various contexts. The final part highlights a number of areas of application, of which education and training, collaboration, and conversational search interfaces are important ones. Understanding and Improving Information Search - A Cognitive Approach includes contributions from cognitive psychologists and information and computing scientists around the globe, including researchers from Europe (France, the Netherlands, Germany), the US, and Asia (India, Japan), providing their unique but coherent perspectives on the core issues and questions most relevant to our current understanding of information search behavior and to improving information search.