Big data is a field of research that is growing rapidly, and as the Covid-19 crisis has shown, health care is an area that could benefit greatly from its increased use and application. Big data, as derived partly from the internet of things and analysed according to specific algorithms, has a large and beneficial role to play in preventative medicine, in monitoring the health of specific groups, and in improving diagnostics. Big Data Analytics and Intelligence: A Perspective for Health Care focuses on various areas of health care, ranging from nutrition to cancer, and provides diverse perspectives on each. The book explores the entire life-cycle of big data, from information retrieval to analysis, and shows how big data's applications can enhance, streamline and improve services for patients and health-care professionals. Each chapter focuses on a specific area of health care and how big data applies to it, with background and current examples provided.
Intelligent Data Analysis for Biomedical Applications: Challenges and Solutions presents specialized statistical, pattern recognition, machine learning, data abstraction and visualization tools for the analysis of data and discovery of mechanisms that create data. It provides computational methods and tools for intelligent data analysis, with an emphasis on problem-solving relating to automated data collection, such as computer-based patient records, data warehousing tools, intelligent alarming, effective and efficient monitoring, and more. This book provides useful references for educational institutions, industry professionals, researchers, scientists, engineers and practitioners interested in intelligent data analysis, knowledge discovery, and decision support in databases.
This volume provides a comprehensive introduction to mHealth technology and is accessible to technology-oriented researchers and practitioners with backgrounds in computer science, engineering, statistics, and applied mathematics. The contributing authors include leading researchers and practitioners in the mHealth field. The book offers an in-depth exploration of the three key elements of mHealth technology: the development of on-body sensors that can identify key health-related behaviors (sensors to markers), the use of analytic methods to predict current and future states of health and disease (markers to predictors), and the development of mobile interventions which can improve health outcomes (predictors to interventions). Chapters are organized into sections, with the first section devoted to mHealth applications, followed by three sections devoted to the above three key technology areas. Each chapter can be read independently, but the organization of the entire book provides a logical flow from the design of on-body sensing technology, through the analysis of time-varying sensor data, to interactions with a user which create opportunities to improve health outcomes. This volume is a valuable resource to spur the development of this growing field, and ideally suited for use as a textbook in an mHealth course.
Social Network Analytics: Computational Research Methods and Techniques focuses on various technical concepts and aspects of social network analysis. The book features the latest developments and findings in this emerging area of research. In addition, it includes a variety of applications from several domains, such as scientific research and the business and industrial sectors. The technical aspects of analysis are covered in detail, including visualizing and modeling, network theory, mathematical models, the big data analytics of social networks, multidimensional scaling, and more. Analyzing social network data is rapidly gaining interest in the scientific research community because of the insights that can be culled from the wealth of data inherent in a network's many aspects; accordingly, the book provides guidance on measuring the relationships and flows between people, groups, organizations, computers, URLs, and more.
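Measurements of this kind are commonly computed with off-the-shelf graph libraries. As a minimal sketch, not drawn from the book itself, the following Python snippet uses the networkx package and its bundled karate-club example network to compute two standard centrality measures:

```python
# A minimal sketch (not from the book): measuring relationships in a
# small social network with networkx.
import networkx as nx

# Zachary's karate club, a classic small social network bundled with networkx
G = nx.karate_club_graph()

# Degree centrality: how connected each person is
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a person lies on shortest paths,
# a proxy for "flow" through the network
betweenness = nx.betweenness_centrality(G)

top = max(betweenness, key=betweenness.get)
print(f"Most central node by betweenness: {top} ({betweenness[top]:.3f})")
```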
Meta-Analytics: Consensus Approaches and System Patterns for Data Analysis presents an exhaustive set of patterns for data scientists to use on any machine-learning-based data analysis task. The book virtually ensures that at least one pattern will lead to better overall system behavior than the use of traditional analytics approaches. The book is 'meta' to analytics, covering general analytics in sufficient detail for readers to engage with, and understand, hybrid or meta-approaches. The book has relevance to machine translation, robotics, the biological and social sciences, medical and healthcare informatics, economics, business and finance. In addition, the analytics within can be applied to predictive algorithms for everyone from police departments to sports analysts.
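To make the idea of a consensus approach concrete, here is a minimal sketch of one generic pattern, majority voting across several models; this is an illustration of the general idea, not a pattern taken from the book:

```python
# A minimal sketch of one generic consensus pattern (majority voting);
# the book's own patterns go well beyond this illustration.
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Combine predictions from several models.

    predictions: array of shape (n_models, n_samples) with integer labels.
    Returns the per-sample label chosen by the most models.
    """
    n_classes = predictions.max() + 1
    # Count votes per class for each sample, then take the argmax
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions
    )
    return votes.argmax(axis=0)

# Three models disagree on some samples; the consensus resolves them
preds = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [1, 1, 1, 0]])
print(majority_vote(preds))  # -> [0 1 1 0]
```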
This textbook grew out of notes for the ECE143 Programming for Data Analysis class that the author has been teaching at the University of California, San Diego, where it is a requirement for both graduate and undergraduate degrees in Machine Learning and Data Science. This book is ideal for readers with some Python programming experience. The book covers key language concepts that must be understood to program effectively, especially for data analysis applications. Certain low-level language features are discussed in detail, especially Python memory management and data structures. Using Python effectively means taking advantage of its vast ecosystem. The book discusses Python package management and how to use third-party modules, as well as how to structure your own Python modules. The section on object-oriented programming explains features of the language that facilitate common programming patterns. After developing the key Python language features, the book moves on to third-party modules that are foundational for effective data analysis, starting with Numpy. The book develops key Numpy concepts and discusses internal Numpy array data structures and memory usage. Then, the author moves on to Pandas and details its many features for data processing and alignment. Because strong visualizations are important for communicating data analysis, key modules such as Matplotlib are developed in detail, along with web-based options such as Bokeh, Holoviews, Altair, and Plotly. The text is sprinkled with many tricks of the trade that help avoid common pitfalls. The author explains the internal logic embodied in the Python language so that readers can get into the Python mindset and make better design choices in their code, which is especially helpful for newcomers to both Python and data analysis. To get the most out of this book, open a Python interpreter and type along with the many code samples.
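As a flavor of this material (a minimal sketch, not an excerpt from the book), the following snippet touches two of the themes mentioned above: Numpy's internal array memory layout and Pandas' index-based data alignment:

```python
# A minimal sketch of two themes the book covers (not an excerpt):
# Numpy memory layout and Pandas index alignment.
import numpy as np
import pandas as pd

# Numpy: a transpose is a view with different strides, not a copy
a = np.arange(6, dtype=np.int64).reshape(2, 3)
print(a.strides)    # (24, 8): row-major layout, 8-byte elements
print(a.T.strides)  # (8, 24): same buffer, strides swapped

# Pandas: arithmetic aligns on index labels, not on position
s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20], index=["b", "c"])
print(s1 + s2)      # 'a' has no partner in s2, so it becomes NaN
```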
The book proposes a systematic approach to big data collection, documentation and the development of analytic procedures that foster collaboration on a large scale. This approach, designated "data factoring," emphasizes the need to think of each individual dataset developed by an individual project as part of a broader data ecosystem, easily accessible and exploitable by parties not directly involved with data collection and documentation. Furthermore, data factoring uses and encourages pre-analytic operations that add value to big data sets, especially recombining and repurposing. The book proposes a research-development agenda that can undergird an ideal data factory approach. Several programmatic chapters discuss specialized issues involved in data factoring (documentation, metadata specification, building flexible yet comprehensive data ontologies, usability issues involved in collaborative tools, etc.). The book also presents case studies for data factoring and processing that can lead to building better scientific collaboration and data sharing strategies and tools. Finally, the book discusses the teaching utility of data factoring and the ethical and privacy concerns related to it. Chapter 9 of this book is available open access under a CC BY 4.0 license at link.springer.com
As millions of people have been exposed to computing through the tremendous growth of microcomputers, there has developed an increasing appreciation of the history of data processing, which dates back many decades before the arrival of the computer. Stretching back to at least the 1860s, such early technologies as adding machines, punch cards, and the office appliance industry are now being recognized for their place in the history of the information processing industry. This work brings together a comprehensive list of sources that offer a general introduction to the literature of the industry. Divided into nine chapters covering topics and historical periods, the bibliography provides an annotated list of published materials describing both the history of the industry and significant items of general interest. Each chapter is introduced with a short review of historically important issues and comments on the literature, and contains contemporary publications as well as more recent material. To give the work a continuing usefulness, ongoing publications, such as computer magazines, are highlighted. Entries are grouped under nearly 100 subheadings, covering such material as contemporary descriptions of hardware and software of the past, seminal technical papers, industry surveys, programming languages, significant individuals and companies, and the role of Japan and microcomputing. All citations are annotated with a brief summary of either the work's contents or its historical importance, while two indexes provide both subject references and author citations. This bibliography will be an important reference source for courses in the history of data processing and business history, and a useful addition to public, college, and university libraries.
Learn the basics of data science through an easy-to-understand conceptual framework and immediately practice using the RapidMiner platform. Whether you are brand new to data science or working on your tenth project, this book will show you how to analyze data and uncover hidden patterns and relationships to aid important decisions and predictions. Data science has become an essential tool to extract value from data for any organization that collects, stores and processes data as part of its operations. This book is ideal for business users, data analysts, business analysts, engineers, analytics professionals and anyone who works with data. You'll be able to: gain the necessary knowledge of different data science techniques to extract value from data; master the concepts and inner workings of 30 commonly used, powerful data science algorithms; and implement a step-by-step data science process using RapidMiner, an open-source, GUI-based data science platform. Data science techniques covered: exploratory data analysis, visualization, decision trees, rule induction, k-nearest neighbors, naive Bayesian classifiers, artificial neural networks, deep learning, support vector machines, ensemble models, random forests, regression, recommendation engines, association analysis, k-means and density-based clustering, self-organizing maps, text mining, time series forecasting, anomaly detection, feature selection and more.
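For a taste of one of the listed techniques, here is a minimal sketch of k-nearest neighbors in Python with scikit-learn; note that the book itself builds this kind of model in RapidMiner's graphical interface rather than in code:

```python
# A minimal k-nearest-neighbors sketch using scikit-learn; the book
# builds equivalent models in RapidMiner's GUI rather than in code.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Classify each flower by a majority vote of its 5 nearest neighbors
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```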
This book uses a mathematical approach to deriving the laws of science and technology, based upon the concept of Fisher information. The approach that follows from these ideas is called the principle of Extreme Physical Information (EPI). The authors show how to use EPI to determine the theoretical input/output laws of unknown systems. The book will benefit readers whose math skill is at the level of an undergraduate science or engineering degree.
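For orientation, the underlying quantity has a standard textbook definition, given here only for context; the EPI derivations in the book go well beyond it. For a parameter $\theta$ and likelihood $p(x \mid \theta)$, the Fisher information is

$$ I(\theta) = \mathbb{E}\left[ \left( \frac{\partial}{\partial \theta} \ln p(x \mid \theta) \right)^{2} \right] = \int \frac{1}{p(x \mid \theta)} \left( \frac{\partial p(x \mid \theta)}{\partial \theta} \right)^{2} \, dx. $$

Roughly, EPI extremizes the difference between the information acquired in a measurement and the information bound in the phenomenon being measured.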
Dependence Analysis may be considered the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
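To illustrate the central notion, here is a minimal sketch, in Python rather than the FORTRAN-like do loops the book formally analyzes, and not using the book's formalism: a loop-carried dependence arises when one iteration writes an array element that another iteration reads.

```python
# A minimal sketch of loop-carried dependence (the book's subject),
# in Python rather than the FORTRAN-like do loops it analyzes.

def prefix_sum(a):
    # Flow (true) dependence: iteration i reads a[i-1], which iteration
    # i-1 wrote. The iterations cannot safely run in parallel as written.
    for i in range(1, len(a)):
        a[i] = a[i - 1] + a[i]
    return a

def scale(a, c):
    # No loop-carried dependence: each iteration touches only a[i],
    # so the iterations may execute in parallel in any order.
    for i in range(len(a)):
        a[i] = c * a[i]
    return a

print(prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
print(scale([1, 2, 3, 4], 2))    # [2, 4, 6, 8]
```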
Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
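For context, analytic performance estimators of this kind typically rest on simple cost models for individual operations; what follows is a generic illustration, not necessarily P3T's exact formulation. A common model for the time to transfer a message of $m$ bytes is

$$ t_{\mathrm{msg}}(m) = \alpha + \beta m, $$

where $\alpha$ is the per-message startup latency and $\beta$ is the per-byte transfer cost; summing such terms over the transfers the compiler predicts yields an estimate of a program's communication time.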
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of the book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. They then present representational models and data structures that can effectively capture these semantics while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data; these algorithms internally use the semantically rich data structures developed in the earlier part of the book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
This book gathers a collection of high-quality, peer-reviewed research papers presented at the International Conference on Big Data, IoT and Machine Learning (BIM 2021), held in Cox's Bazar, Bangladesh, during 23-25 September 2021. Covering the fields of big data, IoT and machine learning, the book will be helpful for active researchers and practitioners in the field.
This digital electronics text focuses on "how to" design, build, operate and adapt data acquisition systems. The material begins with basic logic gates and ends with a 40 kHz voltage measurer. The approach aims to cover a minimal number of topics in detail. The data acquisition circuits described communicate with a host computer through parallel I/O ports. The fundamental idea of the book is that parallel I/O ports (available for all popular computers) offer a superior balance of simplicity, low cost, speed, flexibility and adaptability. All circuits and software are thoroughly tested, and construction details and troubleshooting guidelines are included. This book is intended to serve people who teach or study one of the following: digital electronics, circuit design, software that interacts with outside hardware, the process of computer-based data acquisition, and the design, adaptation, construction and testing of measurement systems.
ELIA M. LEIBOWITZ, Director, Wise Observatory, Chair, Scientific Organizing Committee: The international symposium on "Astronomical Time Series" was held at the Tel Aviv University campus in Tel Aviv, from December 30, 1996 to January 1, 1997. It was organized in order to celebrate the 25th anniversary of the Florence and George Wise Observatory (WO) operated by Tel Aviv University. The site of the 1-meter telescope of the observatory is near the town of Mitzpe-Ramon, some 220 km south of Tel Aviv, at the center of the Israeli Negev highland. There were two major reasons for the choice of time series as the subject matter for our symposium. One is mainly concerned with the subject matter itself, and one is related particularly to the Wise Observatory. There is hardly any doubt that astronomical time series are among the most ancient concepts in human civilization and culture. One can even say that astronomical time series preceded astronomy itself, as the impression of the day/night cycle on Earth is probably the first and most fundamental effect that impresses a human being, or, in fact, most living creatures on this planet. An echo of this idea can be heard in the Biblical story of Creation, where the concept of night and day precedes the creation of the astronomical objects.
* Essay-based format weaves together technical details and case studies to cut through complexity
* Provides a strong background in business situations that companies face, to ensure that data analytics efforts are productively directed and organized
* Appropriate for both business and engineering students who need to understand the data analytics lifecycle
This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It shows how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.
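As a concrete taste of these building blocks (a minimal sketch, not an example from the book), the following Python snippet uses the rdflib package to build a tiny RDF graph and serialize it as Turtle, the kind of triple-based publishing that Linked Data rests on; the example.org namespace is hypothetical:

```python
# A minimal Linked Data sketch (not from the book): build a small RDF
# graph with rdflib and serialize it as Turtle.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
alice = EX["alice"]

# Triples: subject, predicate, object
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, EX["bob"]))

print(g.serialize(format="turtle"))
```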
Updated new edition of Ralph Kimball's groundbreaking book on dimensional modeling for data warehousing and business intelligence. The first edition of Ralph Kimball's "The Data Warehouse Toolkit" introduced the industry to dimensional modeling, and now his books are considered the most authoritative guides in this space. This new third edition is a complete library of updated dimensional modeling techniques, the most comprehensive collection ever. It covers new and enhanced star schema dimensional modeling patterns, adds two new chapters on ETL techniques, includes new and expanded business matrices for 12 case studies, and more.
* Authored by Ralph Kimball and Margy Ross, known worldwide as educators, consultants, and influential thought leaders in data warehousing and business intelligence
* Begins with fundamental design recommendations and progresses through increasingly complex scenarios
* Presents unique modeling techniques for business applications such as inventory management, procurement, invoicing, accounting, customer relationship management, big data analytics, and more
* Draws real-world case studies from a variety of industries, including retail sales, financial services, telecommunications, education, health care, insurance, e-commerce, and more
Design dimensional databases that are easy to understand and provide fast query response with "The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling, 3rd Edition."
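To ground the core idea, here is a minimal sketch of a star schema, a generic illustration rather than a design taken from the book: a central fact table of sales keyed to descriptive dimension tables, queried by joining facts to dimensions.

```python
# A minimal star-schema sketch (a generic illustration, not a design
# from the book): one fact table joined to two dimension tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        amount      REAL
    );
    INSERT INTO dim_date    VALUES (20240101, '2024-01-01');
    INSERT INTO dim_product VALUES (1, 'Widget');
    INSERT INTO fact_sales  VALUES (20240101, 1, 3, 29.97);
""")

# Typical dimensional query: slice facts by dimension attributes
for row in con.execute("""
    SELECT d.full_date, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.full_date, p.name
"""):
    print(row)
```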
Leverage the power of Talent Intelligence (TI) to make evidence-informed decisions that drive business performance by using data about people, skills, jobs, business functions and geographies. Improved access to people and business data has created huge opportunities for the HR function. However, simply having access to this data is not enough. HR professionals need to know how to analyse the data, what questions to ask of it, and where and how the insights from the data can add the most value. Talent Intelligence is a practical guide that explains everything HR professionals need to know to achieve this. It outlines what Talent Intelligence (TI) is, why it's important, and how to use it to improve business results, and includes guidance on how HR professionals can build the business case for it. This book also explains how and why talent intelligence is different from workforce planning, sourcing research and standard predictive HR analytics, and shows how to assess where in the organization talent intelligence can have the biggest impact and how to demonstrate the results to all stakeholders. Most importantly, this book covers KPIs and metrics for success, short-term and long-term TI goals, an outline of what success looks like and the skills needed for effective Talent Intelligence. It also features case studies from organizations including Philips, Barclays and Kimberly-Clark.
* Provides a concise review of the impacts of social media analytics
* Reviews associated risks in the form of data leakage, privacy, transparency, exploitation, and ownership
* Analyses tactics and growing vulnerabilities, exposure and cybercriminal expansion
* Reviews manipulation and new evolving technologies in social media analytics
* Presents innovative and emerging models to help develop strategic understanding
Nearly every large corporation and governmental agency is taking a fresh look at their current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the "Big Data Era"... and most see a critical need to revitalize their current capabilities. Whether they find the frustrating and business-impeding continuation of a long-standing "silos of data" problem, or an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, or a lack of progress in achieving the long-sought-after enterprise-wide "single version of the truth" - or all of the above - IT Directors, strategists, and architects find that they need to go back to the drawing board and produce a brand new BI/data warehousing roadmap to help move their enterprises from their current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human factors considerations in a manner that blends the visionary and the pragmatic.