The last decade has witnessed the rise of big data in game development as the increasing proliferation of Internet-enabled gaming devices has made it easier than ever before to collect large amounts of player-related data. At the same time, the emergence of new business models and the diversification of the player base have exposed a broader potential audience, which attaches great importance to being able to tailor game experiences to a wide range of preferences and skill levels. This, in turn, has led to a growing interest in data mining techniques, as they offer new opportunities for deriving actionable insights to inform game design, to ensure customer satisfaction, to maximize revenues, and to drive technical innovation. By now, data mining and analytics have become vital components of game development. The amount of work being done in this area nowadays makes this an ideal time to put together a book on this subject. Data Analytics Applications in Gaming and Entertainment seeks to provide a cross section of current data analytics applications in game production. It is intended as a companion for practitioners, academic researchers, and students seeking knowledge on the latest practices in game data mining. The chapters have been chosen in such a way as to cover a wide range of topics and to provide readers with a glimpse at the variety of applications of data mining in gaming. A total of 25 authors from industry and academia have contributed 12 chapters covering topics such as player profiling, approaches for analyzing player communities and their social structures, matchmaking, churn prediction and customer lifetime value estimation, communication of analytical results, and visual approaches to game analytics. This book's perspectives and concepts will spark heightened interest in game analytics and foment innovative ideas that will advance the exciting field of online gaming and entertainment.
Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine-specific parameters, including cache characteristics, communication network indices, and benchmark data for computational operations, at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran 77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both the programmer and the compiler in parallelizing and optimizing parallel programs. A graphical user interface is described that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. In the printed book, the colors of the graphical user interface are reproduced in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
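The blurb above only names the parameters P3T predicts; purely as an illustration of what a machine-parameter-based estimate of one of them (transfer time) can look like, here is a minimal sketch using a generic latency-plus-bandwidth cost model. The formula, the parameter values and the function names are assumptions made for the sketch and are not taken from P3T or from the book.

```c
/* Illustrative only: a toy transfer-time estimate in the spirit of the
 * parameters listed above (number of transfers, data volume, machine
 * indices).  This is NOT P3T's model; the latency/bandwidth figures and
 * the linear cost formula are assumptions for the sketch. */
#include <stdio.h>

/* Hypothetical machine-specific parameters (assumed values). */
#define STARTUP_LATENCY_US   20.0   /* per-message startup cost in microseconds */
#define BANDWIDTH_MB_PER_S  300.0   /* sustained network bandwidth */

/* Linear cost model: T = n_msgs * latency + bytes / bandwidth. */
static double transfer_time_us(long n_msgs, long bytes)
{
    double bytes_per_us = BANDWIDTH_MB_PER_S;  /* 1 MB/s == 1 byte/us */
    return n_msgs * STARTUP_LATENCY_US + bytes / bytes_per_us;
}

int main(void)
{
    /* e.g. 128 messages carrying 4 MB in total */
    printf("estimated transfer time: %.1f us\n",
           transfer_time_us(128, 4L * 1024 * 1024));
    return 0;
}
```

In an estimator of the kind the blurb describes, the message count and data volume would come from compiler analysis of the communication pattern rather than being supplied by hand.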
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of this book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. Then the authors present representational models and data structures, which can effectively capture these semantics, while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data. These algorithms internally use the semantically rich data structures developed in the earlier part of this book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
This book gathers a collection of high-quality peer-reviewed research papers presented at the International Conference on Big Data, IoT and Machine Learning (BIM 2021), held in Cox's Bazar, Bangladesh, during 23-25 September 2021. The book covers research papers in the field of big data, IoT and machine learning. The book will be helpful for active researchers and practitioners in the field.
Big Data Analytics for Sensor-Network Collected Intelligence explores state-of-the-art methods for using advanced ICT technologies to perform intelligent analysis on sensor-collected data. The book shows how to develop systems that automatically detect natural and human-made events, how to examine people's behaviors, and how to unobtrusively provide better services. It begins by exploring big data architecture and platforms, covering the cloud computing infrastructure and how data is stored and visualized. The book then explores how big data is processed and managed, the key security and privacy issues involved, and the approaches used to ensure data quality. In addition, readers will find a thorough examination of big data analytics, analyzing statistical methods for data analytics and data mining, along with a detailed look at big data intelligence, ubiquitous and mobile computing, and designing intelligent systems based on context and situation. Indexing: The books of this series are submitted to EI-Compendex and SCOPUS.
This digital electronics text focuses on "how to" design, build, operate and adapt data acquisition systems. The material begins with basic logic gates and ends with a 40 kHz voltage measurer. The approach aims to cover a minimal number of topics in detail. The data acquisition circuits described communicate with a host computer through parallel I/O ports. The fundamental idea of the book is that parallel I/O ports (available for all popular computers) offer a superior balance of simplicity, low cost, speed, flexibility and adaptability. All circuits and software are thoroughly tested. Construction details and troubleshooting guidelines are included. This book is intended to serve people who teach or study one of the following: digital electronics, circuit design, software that interacts with outside hardware, the process of computer-based data acquisition, and the design, adaptation, construction and testing of measurement systems.
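To illustrate the parallel-port style of host communication the book is built around, here is a minimal polling sketch. It assumes Linux on x86/x86-64, the traditional LPT1 base address 0x378 and root privileges for ioperm; it is not code from the book.

```c
/* Minimal sketch of polling a legacy PC parallel port, as one example of
 * the parallel-I/O style of data acquisition described above.
 * Assumptions: Linux on x86/x86-64, LPT1 at the traditional base address
 * 0x378, and root privileges (required by ioperm).  Not code from the book. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

#define LPT1_BASE   0x378            /* data register (output)  */
#define LPT1_STATUS (LPT1_BASE + 1)  /* status register (input) */

int main(void)
{
    /* Request access to the three parallel-port registers. */
    if (ioperm(LPT1_BASE, 3, 1) != 0) {
        perror("ioperm (run as root)");
        return EXIT_FAILURE;
    }

    outb(0x00, LPT1_BASE);            /* drive all data lines low */

    for (int i = 0; i < 10; i++) {
        unsigned char status = inb(LPT1_STATUS);  /* read the status inputs */
        printf("status register: 0x%02x\n", status);
    }

    ioperm(LPT1_BASE, 3, 0);          /* release port access */
    return 0;
}
```

Build with gcc and run as root; the port's five status inputs (pins 10-13 and 15 of the DB-25 connector) appear in bits 3-7 of the status byte.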
ELIA M. LEIBOWITZ Director, Wise Observatory Chair, Scientific Organizing Committee The international symposium on "Astronomical Time Series" was held at the Tel Aviv University campus in Tel Aviv, from December 30, 1996 to January 1, 1997. It was organized in order to celebrate the 25th anniversary of the Florence and George Wise Observatory (WO) operated by Tel Aviv University. The site of the 1-meter telescope of the observatory is near the town of Mitzpe-Ramon, some 220 km south of Tel Aviv, at the center of the Israeli Negev highland. There were two major reasons for the choice of Time Series as the subject matter for our symposium. One is mainly concerned with the subject matter itself, and one is related particularly to the Wise Observatory. There is hardly any doubt that astronomical time series are among the most ancient concepts in human civilization and culture. One can even say that astronomical time series preceded astronomy itself, as the impression of the day/night cycle on Earth is probably the first and most fundamental effect that impresses a human being, or, in fact, most living creatures on this planet. An echo of this idea can be heard in the Biblical story of Creation, where the concept of night and day precedes the creation of the astronomical objects.
Leverage the power of Talent Intelligence (TI) to make evidence-informed decisions that drive business performance by using data about people, skills, jobs, business functions and geographies. Improved access to people and business data has created huge opportunities for the HR function. However, simply having access to this data is not enough. HR professionals need to know how to analyse the data, what questions to ask of it, and where and how the insights from the data can add the most value. Talent Intelligence is a practical guide that explains everything HR professionals need to know to achieve this. It outlines what Talent Intelligence (TI) is, why it's important and how to use it to improve business results, and includes guidance on how HR professionals can build the business case for it. This book also explains how and why talent intelligence is different from workforce planning, sourcing research and standard predictive HR analytics, and shows how to assess where in the organization talent intelligence can have the biggest impact and how to demonstrate the results to all stakeholders. Most importantly, this book covers KPIs and metrics for success, short-term and long-term TI goals, an outline of what success looks like and the skills needed for effective Talent Intelligence. It also features case studies from organizations including Philips, Barclays and Kimberly-Clark.
This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.
Provides a concise review of the impacts of social media analytics; reviews associated risks in the form of data leakage, privacy, transparency, exploitation, and ownership; analyses tactics and growing vulnerabilities, exposure and cybercriminal expansion; reviews manipulation and new evolving technologies in social media analytics; and presents innovative and emerging models to help develop strategic understanding.
Nearly every large corporation and governmental agency is taking a fresh look at their current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the "Big Data Era"...and most see a critical need to revitalize their current capabilities. Whether they find the frustrating and business-impeding continuation of a long-standing "silos of data" problem, or an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, or a lack of progress in achieving the long-sought-after enterprise-wide "single version of the truth" - or all of the above - IT Directors, strategists, and architects find that they need to go back to the drawing board and produce a brand new BI/data warehousing roadmap to help move their enterprises from their current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human factors considerations in a manner that blends the visionary and the pragmatic.
Features contributions from thought leaders across academia, industry, and government; focuses on novel algorithms and practical applications.
Doing data science is difficult. Projects are typically very dynamic with requirements that change as data understanding grows. The data itself arrives piecemeal, is added to, replaced, contains undiscovered flaws and comes from a variety of sources. Teams also have mixed skill sets and tooling is often limited. Despite these disruptions, a data science team must get off the ground fast and begin demonstrating value with traceable, tested work products. This is when you need Guerrilla Analytics. In this book, you will learn about: The Guerrilla Analytics Principles: simple rules of thumb for maintaining data provenance across the entire analytics life cycle from data extraction, through analysis to reporting. Reproducible, traceable analytics: how to design and implement work products that are reproducible, testable and stand up to external scrutiny. Practice tips and war stories: 90 practice tips and 16 war stories based on real-world project challenges encountered in consulting, pre-sales and research. Preparing for battle: how to set up your team's analytics environment in terms of tooling, skill sets, workflows and conventions. Data gymnastics: over a dozen analytics patterns that your team will encounter again and again in projects
Every day, more and more kinds of historical data become available, opening exciting new avenues of inquiry but also new challenges. This updated and expanded book describes and demonstrates the ways these data can be explored to construct cultural heritage knowledge, for research and in teaching and learning. It helps humanities scholars to grasp Big Data in order to do their work, whether that means understanding the underlying algorithms at work in search engines or designing and using their own tools to process large amounts of information. Demonstrating what digital tools have to offer and also what 'digital' does to how we understand the past, the authors introduce the many different tools and developing approaches in Big Data for historical and humanistic scholarship, show how to use them and what to be wary of, and discuss the kinds of questions and new perspectives this macroscopic view opens up. Originally authored 'live' online with ongoing feedback from the wider digital history community, Exploring Big Historical Data breaks new ground and sets the direction for the conversation into the future. Exploring Big Historical Data should be the go-to resource for undergraduate and graduate students confronted by a vast corpus of data, and researchers encountering these methods for the first time. It will also offer a helping hand to the interested individual seeking to make sense of genealogical data or digitized newspapers, and even the local historical society trying to see the value in digitizing its holdings.
The ethics of data and analytics, in many ways, is no different than any endeavor to find the "right" answer. When a business chooses a supplier, funds a new product, or hires an employee, managers are making decisions with moral implications. The decisions in business, like all decisions, have a moral component in that people can benefit or be harmed, rules are followed or broken, people are treated fairly or not, and rights are enabled or diminished. However, data analytics introduces wrinkles or moral hurdles in how to think about ethics. Questions of accountability, privacy, surveillance, bias, and power stretch standard tools to examine whether a decision is good, ethical, or just. Dealing with these questions requires different frameworks to understand what is wrong and what could be better. Ethics of Data and Analytics: Concepts and Cases does not search for a new, different answer, nor does it seek to ban all technology in favor of human decision-making. The text takes a more skeptical, ironic approach to current answers and concepts while identifying and having solidarity with others. Applying this to the endeavor to understand the ethics of data and analytics, the text emphasizes finding multiple ethical approaches as ways to engage with current problems to find better solutions rather than prioritizing one set of concepts or theories. The book works through cases to understand those marginalized by data analytics programs as well as those empowered by them. Three themes run throughout the book. First, data analytics programs are value-laden in that technologies create moral consequences, reinforce or undercut ethical principles, and enable or diminish rights and dignity. This places an additional focus on the role of developers in their incorporation of values in the design of data analytics programs. Second, design is critical. In the majority of the cases examined, the purpose is to improve the design and development of data analytics programs. Third, data analytics, artificial intelligence, and machine learning are about power. The discussion of power (who has it, who gets to keep it, and who is marginalized) weaves throughout the chapters, theories, and cases. In discussing ethical frameworks, the text focuses on critical theories that question power structures and default assumptions and seek to emancipate the marginalized.
Research findings and dissemination are making healthcare more effective. Electronic health records systems and advanced tools are making care delivery more efficient. Legislative reforms are striving to make care more affordable. Efforts still need to be focused on making healthcare more accessible. Clinical Videoconferencing in Telehealth takes a comprehensive and vital step forward in providing mental health and primary care services for those who cannot make traditional office visits, live in remote areas, have transportation or mobility issues or have competing demands. Practical, evidence-based information is presented in a step-by-step format at two levels: for administrators, including information regarding selecting the right videoconferencing technology, navigating regulatory issues, policy templates, boilerplate language for entering into care agreements with other entities and practical solutions to multisite programming; and for clinicians, including protocols for safe, therapeutically sound practice, informed consent and tips for overcoming common technical barriers to communication in clinical videoconferencing contexts. Checklists, tables, templates, links, vignettes and other tools help to equip professional readers for providing safe services that are streamlined and relevant while avoiding guesswork, false starts and waste. The book takes a friendly-mentor approach to communication in areas such as: logistics for administrators (clinical videoconferencing infrastructures and technologies; policy development, procedures and tools for responsible and compliant programming; navigating issues related to providing services in multiple locations) and protocols for clinicians (the informed consent process in clinical videoconferencing; clinical assessment and safety planning for remote services; minimizing communication disruption and optimizing the therapeutic alliance). Clinical Videoconferencing in Telehealth aptly demonstrates the promise and potential of this technology for clinicians, clinic managers, administrators and others affiliated with mental health clinical practices. It is designed to be the comprehensive "one-stop" tool for clinical videoconferencing service development for programs and individual clinicians.
1) Discusses technical details of machine learning tools and techniques for different types of cancer. 2) Machine learning and data mining in healthcare is an important topic, and hence there is demand for such a book. 3) Compared to other titles, this book focuses on different types of cancer and their prediction strategies using machine learning and data mining.
Connects four contemporary areas of research: Artificial Intelligence, big data analytics, knowledge modelling, and healthcare; covers a list of diverse topics related to healthcare and knowledge modelling; summarizes the most important recent and valuable research related to big data analytics in the healthcare sector; includes case studies related to the application of big data in healthcare; and highlights modern developments, challenges, opportunities, and future research directions in healthcare.
Gain a thorough understanding of today's sometimes daunting, ever-changing world of technology as you learn how to apply the latest technology to your academic, professional and personal life with TECHNOLOGY FOR SUCCESS: COMPUTER CONCEPTS. Written by a team of best-selling technology authors and based on extensive research and feedback from students like you, this edition breaks each topic into brief, inviting lessons that address the "what, why and how" behind digital advancements to ensure deep understanding and application to today's real world. Optional online MindTap and SAM (Skills Assessment Manager) learning tools offer hands-on and step-by-step training, videos that cover the more difficult concepts and simulations that challenge you to solve problems in the actual world. You leave this course able to read the latest technology news and understand its impact on your daily life, the economy and society.
A well thought out, fit-for-purpose data strategy is vital to modern data-driven businesses. This book is your essential guide to planning, developing and implementing such a strategy, presenting a framework which takes you from data strategy definition to successful strategy delivery and execution with support and engagement from stakeholders. Key topics include data-driven business transformation, change enablers, benefits realisation and measurement.
Recent years have seen an explosion in new kinds of data on infectious diseases, including data on social contacts, whole genome sequences of pathogens, biomarkers for susceptibility to infection, serological panel data, and surveillance data. The Handbook of Infectious Disease Data Analysis provides an overview of many key statistical methods that have been developed in response to such new data streams and the associated ability to address key scientific and epidemiological questions. A unique feature of the Handbook is the wide range of topics covered. Key features: contributors include many leading researchers in the field; divided into four main sections (Basic concepts, Analysis of Outbreak Data, Analysis of Seroprevalence Data, Analysis of Surveillance Data); numerous case studies and examples throughout; provides both introductory material and key reference material.
The need for analytics skills is driving the burgeoning growth in the number of analytics and decision science programs in higher education, programs developed to supply capable employees in this area. The very size and continuing growth of this need means that there is still space for new program development. Schools wishing to pursue business analytics programs need to intentionally assess the maturity level of their programs and take steps to close the gap. Teaching Data Analytics: Pedagogy and Program Design is a reference for faculty and administrators seeking direction about adding or enhancing analytics offerings at their institutions. It provides guidance by examining best practices from the perspectives of faculty and practitioners. Emphasizing the connection of data analytics to organizational success, it reviews the position of analytics and decision science programs in higher education and the critical connection between this area of study and career opportunities. The book features: a variety of perspectives, ranging from the scholarly theoretical to the practitioner applied; an in-depth look into a wide breadth of skills, from the closely technology-focused to robustly soft human connection skills; and resources for existing faculty to acquire and maintain additional analytics-relevant skills that can enrich their current course offerings. Acknowledging the dichotomy between data analytics and data science, this book emphasizes data analytics rather than data science, although it does touch upon the data science realm. Starting with industry perspectives, the book covers the applied world of data analytics, addressing necessary skills and applications as well as developing compelling visualizations. It then dives into pedagogical and program design approaches in data analytics education and concludes with ideas for program design tactics. This reference is a launching point for discussions about how to connect industry's need for skilled data analysts to higher education's need to design a rigorous curriculum that promotes student critical thinking, communication, and ethical skills. It also provides insight into adding new elements to existing data analytics courses and into taking the next step in expanding data analytics offerings, whether that means incorporating additional analytics assignments into existing courses, offering a single course designed for undergraduates, or building an integrated program designed for graduate students.
The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector-computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
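To make the notion of a skewing scheme concrete, the following small sketch (illustrative only, not taken from the monograph) uses the classic diagonal scheme for a 2-dimensional matrix: element (i, j) is stored in memory bank (i + j) mod M. With at least as many banks as the matrix dimension, every row and every column falls into distinct banks, so either can be fetched in parallel without bank conflicts.

```c
/* Illustrative sketch of a simple skewing scheme (not from the monograph):
 * map element (i, j) of an N x N matrix to memory bank (i + j) mod M.
 * With M >= N, every row and every column touches N distinct banks,
 * so rows and columns can be accessed conflict-free in parallel. */
#include <stdio.h>

#define N 4   /* matrix dimension (assumed for the sketch) */
#define M 5   /* number of memory banks, M >= N            */

static int bank(int i, int j) { return (i + j) % M; }

int main(void)
{
    printf("bank assignment for a %dx%d matrix over %d banks:\n", N, N, M);
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%2d ", bank(i, j));
        printf("\n");
    }
    return 0;
}
```

With an odd number of banks M >= N, as in this 5-bank example, the main diagonals (i, i + c) also map to distinct banks, which is one reason such schemes often use more banks than the matrix dimension.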