At the dawn of the "Big Data Era", nearly every large corporation and governmental agency is taking a fresh look at its enterprise-scale business intelligence (BI) and data warehousing implementations, and most see a critical need to revitalize their current capabilities. Whether they face the frustrating, business-impeding persistence of a long-standing "silos of data" problem, an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, a lack of progress toward the long-sought enterprise-wide "single version of the truth", or all of the above, IT directors, strategists, and architects find that they need to go back to the drawing board and produce a brand-new BI/data warehousing roadmap: one that moves their enterprises from the current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable, enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive, step-by-step approach to building a best-practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human-factors considerations in a manner that blends the visionary and the pragmatic.
Features contributions from thought leaders across academia, industry, and government. Focuses on novel algorithms and practical applications.
Doing data science is difficult. Projects are typically very dynamic with requirements that change as data understanding grows. The data itself arrives piecemeal, is added to, replaced, contains undiscovered flaws and comes from a variety of sources. Teams also have mixed skill sets and tooling is often limited. Despite these disruptions, a data science team must get off the ground fast and begin demonstrating value with traceable, tested work products. This is when you need Guerrilla Analytics. In this book, you will learn about: The Guerrilla Analytics Principles: simple rules of thumb for maintaining data provenance across the entire analytics life cycle from data extraction, through analysis to reporting. Reproducible, traceable analytics: how to design and implement work products that are reproducible, testable and stand up to external scrutiny. Practice tips and war stories: 90 practice tips and 16 war stories based on real-world project challenges encountered in consulting, pre-sales and research. Preparing for battle: how to set up your team's analytics environment in terms of tooling, skill sets, workflows and conventions. Data gymnastics: over a dozen analytics patterns that your team will encounter again and again in projects.
Every day, more and more kinds of historical data become available, opening exciting new avenues of inquiry but also new challenges. This updated and expanded book describes and demonstrates the ways these data can be explored to construct cultural heritage knowledge, for research and in teaching and learning. It helps humanities scholars to grasp Big Data in order to do their work, whether that means understanding the underlying algorithms at work in search engines or designing and using their own tools to process large amounts of information. Demonstrating what digital tools have to offer and also what 'digital' does to how we understand the past, the authors introduce the many different tools and developing approaches in Big Data for historical and humanistic scholarship, show how to use them and what to be wary of, and discuss the kinds of questions and new perspectives this new macroscopic perspective opens up. Originally authored 'live' online with ongoing feedback from the wider digital history community, Exploring Big Historical Data breaks new ground and sets the direction for the conversation into the future. Exploring Big Historical Data should be the go-to resource for undergraduate and graduate students confronted by a vast corpus of data, and researchers encountering these methods for the first time. It will also offer a helping hand to the interested individual seeking to make sense of genealogical data or digitized newspapers, and even the local historical society trying to see the value in digitizing its holdings.
Developing and implementing a systematic analytics strategy can result in a sustainable competitive advantage within the sport business industry. This timely and relevant book provides practical strategies to collect data and then convert that data into meaningful, value-added information and actionable insights. Its primary objective is to help sport business organizations utilize data-driven decision-making to generate optimal revenue from such areas as ticket sales and corporate partnerships. To that end, the book includes in-depth case studies from such leading sports organizations as the Orlando Magic, Tampa Bay Buccaneers, Duke University, and the Aspire Group. The core purpose of sport business analytics is to convert raw data into information that enables sport business professionals to make strategic business decisions that result in improved company financial performance and a measurable and sustainable competitive advantage. Readers will learn about the role of big data and analytics in: ticket pricing; season ticket member retention; fan engagement; sponsorship valuation; customer relationship management; digital marketing; market research; and data visualization. This book examines changes in the ticketing marketplace and spotlights innovative ticketing strategies used in various sport organizations. It shows how to engage fans with social media and digital analytics, presents techniques to analyze engagement and marketing strategies, and explains how to utilize analytics to leverage fan engagement to enhance revenue for sport organizations. Filled with insightful case studies, this book benefits both sports business professionals and students. The concluding chapter on teaching sport analytics further enhances its value to academics.
The ethics of data and analytics, in many ways, is no different than any endeavor to find the "right" answer. When a business chooses a supplier, funds a new product, or hires an employee, managers are making decisions with moral implications. The decisions in business, like all decisions, have a moral component in that people can benefit or be harmed, rules are followed or broken, people are treated fairly or not, and rights are enabled or diminished. However, data analytics introduces wrinkles or moral hurdles in how to think about ethics. Questions of accountability, privacy, surveillance, bias, and power stretch standard tools to examine whether a decision is good, ethical, or just. Dealing with these questions requires different frameworks to understand what is wrong and what could be better. Ethics of Data and Analytics: Concepts and Cases does not search for a new, different answer or to ban all technology in favor of human decision-making. The text takes a more skeptical, ironic approach to current answers and concepts while identifying and having solidarity with others. Applying this to the endeavor to understand the ethics of data and analytics, the text emphasizes finding multiple ethical approaches as ways to engage with current problems to find better solutions rather than prioritizing one set of concepts or theories. The book works through cases to understand those marginalized by data analytics programs as well as those empowered by them. Three themes run throughout the book. First, data analytics programs are value-laden in that technologies create moral consequences, reinforce or undercut ethical principles, and enable or diminish rights and dignity. This places an additional focus on the role of developers in their incorporation of values in the design of data analytics programs. Second, design is critical. In the majority of the cases examined, the purpose is to improve the design and development of data analytics programs. 
Third, data analytics, artificial intelligence, and machine learning are about power. The discussion of power (who has it, who gets to keep it, and who is marginalized) weaves throughout the chapters, theories, and cases. In discussing ethical frameworks, the text focuses on critical theories that question power structures and default assumptions and seek to emancipate the marginalized.
Research findings and dissemination are making healthcare more effective. Electronic health records systems and advanced tools are making care delivery more efficient. Legislative reforms are striving to make care more affordable. Efforts still need to be focused on making healthcare more accessible. Clinical Videoconferencing in Telehealth takes a comprehensive and vital step forward in providing mental health and primary care services for those who cannot make traditional office visits, live in remote areas, have transportation or mobility issues or have competing demands. Practical, evidence-based information is presented in a step-by-step format at two levels: for administrators, including information regarding selecting the right videoconferencing technology, navigating regulatory issues, policy templates, boilerplate language for entering into care agreements with other entities, and practical solutions to multisite programming; and for clinicians, including protocols for safe, therapeutically sound practice, informed consent, and tips for overcoming common technical barriers to communication in clinical videoconferencing contexts. Checklists, tables, templates, links, vignettes and other tools help to equip professional readers for providing safe services that are streamlined and relevant while avoiding guesswork, false starts and waste.
The book takes a friendly-mentor approach to communication in areas such as: Logistics for administrators: clinical videoconferencing infrastructures and technologies; policy development, procedures, and tools for responsible and compliant programming; navigating issues related to providing services in multiple locations. Protocols for clinicians: the informed consent process in clinical videoconferencing; clinical assessment and safety planning for remote services; minimizing communication disruption and optimizing the therapeutic alliance. Clinical Videoconferencing in Telehealth aptly demonstrates the promise and potential of this technology for clinicians, clinic managers, administrators and others affiliated with mental health clinical practices. It is designed to be the comprehensive "one-stop" tool for clinical videoconferencing service development for programs and individual clinicians.
1) Discusses technical details of machine learning tools and techniques for different types of cancers. 2) Machine learning and data mining in healthcare is an important topic, so there is strong demand for such a book. 3) Compared to other titles, this book focuses on different types of cancer and their prediction strategies using machine learning and data mining.
Connects four contemporary areas of research: Artificial Intelligence, big data analytics, knowledge modelling, and healthcare. Covers a diverse list of topics related to healthcare and knowledge modelling. Summarizes the most important recent and valuable research related to big data analytics in the healthcare sector. Includes case studies related to the application of big data in healthcare. Highlights modern developments, challenges, opportunities, and future research directions in healthcare.
This book aims to explain data analytics for decision making in terms of models and algorithms, theoretical concepts, and applications and experiments in relevant domains or focused on specific issues. It explores the concepts of database technology, machine learning, knowledge-based systems, high-performance computing, information retrieval, finding patterns hidden in large datasets, and data visualization. It also presents various paradigms, including pattern mining, clustering, classification, and data analysis. The overall aim is to provide technical solutions in the field of data analytics and data mining. Features: Covers descriptive statistics with respect to predictive analytics and business analytics. Discusses different data analytics platforms for real-time applications. Explains SMART business models. Includes algorithms in data science along with automated methods and models. Explores varied challenges encountered by researchers and businesses in the realm of real-time analytics. This book is aimed at researchers and graduate students in data analytics, data science, data mining, and signal processing.
A well thought out, fit-for-purpose data strategy is vital to modern data-driven businesses. This book is your essential guide to planning, developing and implementing such a strategy, presenting a framework which takes you from data strategy definition to successful strategy delivery and execution with support and engagement from stakeholders. Key topics include data-driven business transformation, change enablers, benefits realisation and measurement.
Gain a thorough understanding of today's sometimes daunting, ever-changing world of technology as you learn how to apply the latest technology to your academic, professional and personal life with TECHNOLOGY FOR SUCCESS: COMPUTER CONCEPTS. Written by a team of best-selling technology authors and based on extensive research and feedback from students like you, this edition breaks each topic into brief, inviting lessons that address the "what, why and how" behind digital advancements to ensure deep understanding and application to today's real world. Optional online MindTap and SAM (Skills Assessment Manager) learning tools offer hands-on and step-by-step training, videos that cover the more difficult concepts and simulations that challenge you to solve problems in the actual world. You leave this course able to read the latest technology news and understand its impact on your daily life, the economy and society.
Recent years have seen an explosion in new kinds of data on infectious diseases, including data on social contacts, whole genome sequences of pathogens, biomarkers for susceptibility to infection, serological panel data, and surveillance data. The Handbook of Infectious Disease Data Analysis provides an overview of many key statistical methods that have been developed in response to such new data streams and the associated ability to address key scientific and epidemiological questions. A unique feature of the Handbook is the wide range of topics covered. Key features: Contributors include many leading researchers in the field. Divided into four main sections: Basic Concepts, Analysis of Outbreak Data, Analysis of Seroprevalence Data, and Analysis of Surveillance Data. Numerous case studies and examples throughout. Provides both introductory material and key reference material.
The need for analytics skills is driving the burgeoning growth in the number of analytics and decision science programs in higher education, developed to feed the need for capable employees in this area. The very size and continuing growth of this need mean that there is still room for new program development. Schools wishing to pursue business analytics programs should intentionally assess the maturity level of their programs and take steps to close the gap. Teaching Data Analytics: Pedagogy and Program Design is a reference for faculty and administrators seeking direction about adding or enhancing analytics offerings at their institutions. It provides guidance by examining best practices from the perspectives of faculty and practitioners. By emphasizing the connection of data analytics to organizational success, it reviews the position of analytics and decision science programs in higher education and the critical connection between this area of study and career opportunities. The book features: a variety of perspectives, ranging from the scholarly and theoretical to the practitioner-applied; an in-depth look into a wide breadth of skills, from the closely technology-focused to robustly soft human-connection skills; and resources for existing faculty to acquire and maintain additional analytics-relevant skills that can enrich their current course offerings. Acknowledging the dichotomy between data analytics and data science, this book emphasizes data analytics, although it does touch upon the data science realm. Starting with industry perspectives, the book covers the applied world of data analytics, covering necessary skills and applications as well as developing compelling visualizations. It then dives into pedagogical and program design approaches in data analytics education and concludes with ideas for program design tactics.
This reference is a launching point for discussions about how to connect industry's need for skilled data analysts to higher education's need to design a rigorous curriculum that promotes student critical thinking, communication, and ethical skills. It also provides insight into adding new elements to existing data analytics courses and into taking the next step in adding data analytics offerings, whether that means incorporating additional analytics assignments into existing courses, offering one course designed for undergraduates, or building an integrated program designed for graduate students.
The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a d-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences both for the design of parallel computer architectures and for applications of parallel processing.
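As a concrete illustration of the skewing schemes described above, a classic row-rotation scheme maps each element of a 2-dimensional matrix to a memory bank so that both a full row and a full column land in distinct banks and can be fetched without conflicts. This is a generic sketch of the idea, not code from the monograph; the function name and parameters are hypothetical.

```python
# Illustrative row-rotation skewing scheme: element (i, j) of a 2-D array
# is stored in bank (i + j) mod num_banks. Rotating each row by its row
# index spreads columns across banks as well as rows.

def skew_bank(i, j, num_banks):
    """Return the memory bank holding element (i, j) under row-rotation skewing."""
    return (i + j) % num_banks

# With 5 banks, every element of row 2 maps to a distinct bank...
banks_of_row = [skew_bank(2, j, 5) for j in range(5)]
print(banks_of_row)  # [2, 3, 4, 0, 1]

# ...and so does every element of column 3, so both access patterns
# proceed conflict-free.
banks_of_col = [skew_bank(i, 3, 5) for i in range(5)]
print(banks_of_col)  # [3, 4, 0, 1, 2]
```

With a naive row-major layout, all elements of a column would hit the same bank; the rotation is what makes column access parallel.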
The researcher in computer content analysis is often faced with a paucity of guidance in conducting a study. Published exemplars of best practice in computer content analysis are rare, and computer content analysis seems to have developed independently in a number of disciplines, with researchers in one field often unaware of new and innovative techniques developed by researchers in other areas. This volume contains numerous articles illustrating the current state of the art of computer content analysis. Research is presented by scholars in political science, natural resource management, mass communication, marketing, education, and other fields, with the aim of providing exemplars for further research on the computer analysis and understanding of textual materials. The studies presented in Applications of Computer Content Analysis offer a varied spectrum of exemplary studies. Due to the breadth of the studies presented here, researchers can find methodological, theoretical, and practical suggestions which will significantly ease the process of creating new research and significantly reduce the duplication of effort which has, until now, plagued computer content analytic research. Intended for an audience of graduate students, scholars, and in-field practitioners, this will serve as an invaluable resource, full of useful examples, for those interested in using computers to analyze newspaper articles, emails, mediated communication, or any other sort of digital communication.
Data is fundamentally changing the nature of businesses and organisations and the mechanisms for delivering products and services. This book is a practical guide to developing strategy and policy for data governance, in line with the developing ISO 38505 governance of data standards. It will assist an organisation wanting to become a more data-driven business by explaining how to assess the value, risks and constraints associated with collecting, using and distributing data.
This book introduces readers to a workload-aware methodology for large-scale graph algorithm optimization in graph-computing systems, and proposes several optimization techniques that can enable these systems to handle advanced graph algorithms efficiently. More concretely, it proposes a workload-aware cost model to guide the development of high-performance algorithms. On the basis of the cost model, the book subsequently presents a system-level optimization resulting in a partition-aware graph-computing engine, PAGE. In addition, it presents three efficient and scalable advanced graph algorithms - the subgraph enumeration, cohesive subgraph detection, and graph extraction algorithms. This book offers a valuable reference guide for junior researchers, covering the latest advances in large-scale graph analysis; and for senior researchers, sharing state-of-the-art solutions based on advanced graph algorithms. In addition, all readers will find a workload-aware methodology for designing efficient large-scale graph algorithms.
This book presents a step-by-step asset health management optimization approach using the Internet of Things (IoT). The authors provide a comprehensive study which includes descriptive, diagnostic, predictive, and prescriptive analysis in detail. The presentation focuses on the challenges of parameter selection, statistical data analysis, predictive algorithms, big data storage and selection, data pattern recognition, machine learning techniques, asset failure distribution estimation, reliability and availability enhancement, condition-based maintenance policy, failure detection, data-driven optimization algorithms, and a multi-objective optimization approach, all of which can significantly enhance the reliability and availability of the system.
A unique, integrated approach to exploratory data mining and data quality Data analysts at information-intensive businesses are frequently asked to analyze new data sets that are often dirty–composed of numerous tables possessing unknown properties. Prior to analysis, this data must be cleaned and explored–often a long and arduous task. Ensuring data quality is a notoriously messy problem that can only be addressed by drawing on methods from many disciplines, including statistics, exploratory data mining, database management, and metadata coding. Where other books on data mining and analysis focus primarily on the last stage of the analysis procedure, Exploratory Data Mining and Data Cleaning uses a uniquely integrated approach to data exploration and data cleaning to develop a suitable modeling strategy that will help analysts to more effectively determine and implement the final technique. The authors, both seasoned data analysts at a major corporation, draw on their own professional experience to:
A groundbreaking addition to the existing literature, Exploratory Data Mining and Data Cleaning serves as an important reference for data analysts who need to analyze large amounts of unfamiliar data, operations managers, and students in undergraduate or graduate-level courses dealing with data analysis and data mining.
Measuring the abundance of individuals and the diversity of species are core components of most ecological research projects and conservation monitoring. This book brings together in one place, for the first time, the methods used to estimate the abundance of individuals in nature. The statistical basis of each method is detailed along with practical considerations for survey design and data collection. Methods are illustrated using data ranging from Alaskan shrubs to Yellowstone grizzly bears, not forgetting Costa Rican ants and Prince Edward Island lobsters. Where necessary, example code for use with the open source software R is supplied. When appropriate, reference is made to other widely used programs. After opening with a brief synopsis of relevant statistical methods, the first section deals with the abundance of stationary items such as trees, shrubs, coral, etc. Following a discussion of the use of quadrats and transects in the contexts of forestry sampling and the assessment of plant cover, there are chapters addressing line-intercept sampling, the use of nearest-neighbour distances, and variable sized plots. The second section deals with individuals that move, such as birds, mammals, reptiles, fish, etc. Approaches discussed include double-observer sampling, removal sampling, capture-recapture methods and distance sampling. The final section deals with the measurement of species richness; species diversity; species-abundance distributions; and other aspects of diversity such as evenness, similarity, turnover and rarity. This is an essential reference for anyone involved in advanced undergraduate or postgraduate ecological research and teaching, or those planning and carrying out data analysis as part of conservation survey and monitoring programmes.
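The capture-recapture methods mentioned in this blurb can be sketched with the classic Lincoln-Petersen estimator in its bias-corrected (Chapman) form: mark a sample of animals, release them, capture a second sample, and infer population size from the fraction recaptured. This is an illustrative sketch in Python rather than the R code the book supplies; the function name and the example counts are hypothetical, not taken from the book.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator of population size:
# N-hat = (M + 1)(C + 1) / (R + 1) - 1
# where M = animals marked in the first sample, C = animals caught in the
# second sample, and R = marked animals recaptured in the second sample.

def lincoln_petersen(marked_first, caught_second, recaptured):
    """Estimate total population size from a two-sample capture-recapture study."""
    if min(marked_first, caught_second, recaptured) < 0:
        raise ValueError("counts must be non-negative")
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

# Hypothetical example: 100 animals marked, 80 caught in a second sample,
# 20 of which carried marks.
estimate = lincoln_petersen(100, 80, 20)
print(round(estimate))  # 389
```

The intuition: if a fifth of the second sample is marked, the 100 marked animals are roughly a fifth of the population; the +1 corrections reduce the small-sample bias of the naive MC/R estimate.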
This new edition covers some of the key topics relating to the latest version of MS Office through Excel 2019, including the creation of custom ribbons by injecting XML code into Excel workbooks and how to link Excel VBA macros to customize ribbon objects. It now also provides examples of using ADO, DAO, and SQL queries to retrieve data from databases for analysis. Operations such as fully automated linear and non-linear curve fitting, linear and non-linear mapping, charting, plotting, sorting, and filtering of data have been updated to leverage the newest Excel VBA object models. The text provides examples on automated data analysis and the preparation of custom reports suitable for legal archiving and dissemination. Functionality demonstrated in this edition includes: find and extract information from raw data files; format data in color (conditional formatting); perform non-linear and linear regressions on data; create custom functions for specific applications; generate datasets for regressions and functions; create custom reports for regulatory agencies; leverage email to send generated reports; return data to Excel using ADO, DAO, and SQL queries; create database files for processed data; create tables, records, and fields in databases; add data to databases in fields or records; leverage external computational engines; and call functions in MATLAB(R) and Origin(R) from Excel.
Comprehensive coverage of the entire area of classification. Research on the problem of classification tends to be fragmented across such areas as pattern recognition, databases, data mining, and machine learning. Addressing the work of these different communities in a unified way, Data Classification: Algorithms and Applications explores the underlying algorithms of classification as well as applications of classification in a variety of problem domains, including text, multimedia, social network, and biological data. This comprehensive book focuses on three primary aspects of data classification: Methods: The book first describes common techniques used for classification, including probabilistic methods, decision trees, rule-based methods, instance-based methods, support vector machine methods, and neural networks. Domains: The book then examines specific methods used for data domains such as multimedia, text, time-series, network, discrete sequence, and uncertain data. It also covers large data sets and data streams due to the recent importance of the big data paradigm. Variations: The book concludes with insight on variations of the classification process. It discusses ensembles, rare-class learning, distance function learning, active learning, visual learning, transfer learning, and semi-supervised learning, as well as evaluation aspects of classifiers.
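As a minimal illustration of the instance-based methods this blurb mentions, a 1-nearest-neighbour classifier assigns a new point the label of its closest training example. This is a generic sketch, not code from the book; the training data and helper names are hypothetical.

```python
# 1-nearest-neighbour classification: store the training points verbatim
# and label a query by its single closest neighbour in Euclidean distance.
from math import dist

def nn_predict(train_points, labels, query):
    """Return the label of the training point nearest to `query`."""
    nearest = min(train_points, key=lambda p: dist(p, query))
    return labels[nearest]

# Two hypothetical clusters: class "A" near the origin, class "B" near (5, 5).
train = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
labels = {(0.0, 0.0): "A", (1.0, 1.0): "A", (5.0, 5.0): "B", (6.0, 5.0): "B"}

print(nn_predict(train, labels, (0.5, 0.2)))  # A
print(nn_predict(train, labels, (5.4, 4.8)))  # B
```

Instance-based methods like this defer all work to query time, which is why the book treats distance function learning as a closely related variation.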
You may like...
Demystifying Graph Data Science - Graph… by Pethuru Raj, Abhishek Kumar, … (Hardcover)
Handbook of Big Data Analytics, Volume 1… by Vadlamani Ravi, Aswani Kumar Cherukuri (Hardcover)
Mathematical Methods in Data Science by Jingli Ren, Haiyan Wang (Paperback, R3,925)
Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
Cloud-Based Big Data Analytics in… by Ram Shringar Rao, Nanhay Singh, … (Hardcover, R6,677)
Machine Learning and Data Analytics for… by Manikant Roy, Lovi Raj Gupta (Hardcover, R10,591)
Cognitive and Soft Computing Techniques… by Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,583)