Books > Reference & Interdisciplinary > Communication studies > Data analysis
As the analysis of big datasets in sports performance becomes a more entrenched part of the sporting landscape, so the value of sport scientists and analysts with formal training in data analytics grows. Sports Analytics: Analysis, Visualisation and Decision Making in Sports Performance provides the most authoritative and comprehensive guide available to the use of analytics in sport and its application in sports performance, coaching, talent identification and sports medicine. Employing an approach-based structure and integrating problem-based learning throughout the text, the book clearly defines the difference between analytics and analysis and goes on to explain and illustrate methods including: interactive visualisation; simulation and modelling; geospatial data analysis; spatiotemporal analysis; machine learning; genomic data analysis; and social network analysis. With its mixed-methods case study chapter, no other book offers the same level of scientific grounding or practical application in sports data analytics. Sports Analytics is essential reading for all students of sports analytics, and useful supplementary reading for students and professionals in talent identification and development, sports performance analysis, sports medicine and applied computer science.
This book covers several new areas in the growing field of analytics, with some innovative applications in different business contexts, and consists of selected presentations at the 6th IIMA International Conference on Advanced Data Analysis, Business Analytics and Intelligence. The book is conceptually divided into seven parts. The first part gives expository briefs on some topics of current academic and practitioner interest, such as data streams, binary prediction and reliability shock models. In the second part, the contributions look at artificial intelligence applications, with chapters related to explainable AI, personalized search and recommendation, and customer retention management. The third part deals with credit risk analytics, with chapters on optimization of credit limits and mitigation of agricultural lending risks. In its fourth part, the book explores analytics and data mining in the retail context. In the fifth part, the book presents some applications of analytics to operations management. This part has chapters related to improvement of furnace operations, forecasting food indices and analytics for improving student learning outcomes. The sixth part has contributions related to adaptive designs in clinical trials, stochastic comparisons of systems with heterogeneous components and stacking of models. The seventh and final part contains chapters related to finance and economics topics, such as the role of infrastructure and taxation in the economic growth of countries and the connectedness of markets with heterogeneous agents. The different themes ensure that the book will be of great value to practitioners, post-graduate students, research scholars and faculty teaching advanced business analytics courses.
A guide to the principles and methods of data analysis that does not require knowledge of statistics or programming. A General Introduction to Data Analytics is an essential guide to understanding and using data analytics. This book is written using easy-to-understand terms and does not require familiarity with statistics or programming. The authors--noted experts in the field--explain the intuition behind the basic data analytics techniques. The text also contains exercises and illustrative examples. Designed to be easily accessible to non-experts, the book motivates the need to analyze data. It explains how to visualize and summarize data, and how to find natural groups and frequent patterns in a dataset. The book also explores predictive tasks, whether classification or regression. Finally, the book discusses popular data analytic applications, like mining the web, information retrieval, social network analysis, working with text, and recommender systems. The learning resources offer: A guide to the reasoning behind data mining techniques A unique illustrative example that extends throughout all the chapters Exercises at the end of each chapter and larger projects at the end of each of the text's two main parts Together with these learning resources, the book can be used in a 13-week course, one chapter per course topic. The book was written in a format that allows the understanding of the main data analytics concepts by non-mathematicians, non-statisticians and non-computer scientists interested in getting an introduction to data science. A General Introduction to Data Analytics is a basic guide to data analytics written in highly accessible terms.
Digital transformation is a vital practice for organizations trying to keep up with competitors, but with new digital approaches constantly promising to revolutionize the workplace it can feel impossible to keep up. Cut through the hype with this accessible guide to making end-to-end digital transformation happen. While technology offers the possibility for business improvement, successful digital transformation also requires an effective strategy, the right culture, change management, the ability to stimulate innovation and the knowledge of where to upskill and where to bring in new talent. The Practical Guide to Digital Transformation covers each of these factors and more by breaking the process down to 17 easy-to-follow and practical steps. Each chapter includes a case study of an organization getting it right, along with advice on putting the principle into action, key tips and tricks, and what you might say in your next meeting. This book also outlines how to start with the foundations of 'doing digital' and build from there, including data science, cyber security, workable technology, minimised stack duplication, data registers and good user experience. Quickly build confidence and make change happen with this actionable guide to the essentials of digital transformation.
More students study management and organization studies than ever, the number of business schools worldwide continues to rise, and more management research is being published in a greater number of journals than could have been imagined twenty years ago. Dennis Tourish looks beneath the surface of this progress to expose a field in crisis and in need of radical reform. He identifies the ways in which management research has lost its way, including a remoteness from the practical problems that managers and employees face, a failure to replicate key research findings, poor writing, endless obscure theorizing, and an increasing number of research papers being retracted for fraud and other forms of malpractice. Tourish suggests fundamental changes to remedy these issues, enabling management research to become more robust, more interesting and more valuable to society. A must read for academics, practising managers, university administrators and policy makers within higher education.
Leverage the power of Talent Intelligence (TI) to make evidence-informed decisions that drive business performance by using data about people, skills, jobs, business functions and geographies. Improved access to people and business data has created huge opportunities for the HR function. However, simply having access to this data is not enough. HR professionals need to know how to analyse the data, know what questions to ask of it and where and how the insights from the data can add the most value. Talent Intelligence is a practical guide that explains everything HR professionals need to know to achieve this. It outlines what Talent Intelligence (TI) is, why it's important and how to use it to improve business results, and includes guidance on how HR professionals can build the business case for it. This book also explains how and why talent intelligence is different from workforce planning, sourcing research and standard predictive HR analytics, and shows how to assess where in the organization talent intelligence can have the biggest impact and how to demonstrate the results to all stakeholders. Most importantly, this book covers KPIs and metrics for success, short-term and long-term TI goals, an outline of what success looks like and the skills needed for effective Talent Intelligence. It also features case studies from organizations including Philips, Barclays and Kimberly-Clark.
This book provides readers with a thorough understanding of various research areas within the field of data science. The book introduces readers to various techniques for data acquisition, extraction, and cleaning, data summarizing and modeling, data analysis and communication techniques, data science tools, deep learning, and various data science applications. Researchers can identify future ideas and topics that could result in potential publications or theses. Furthermore, this book contributes to data scientists' preparation and to enhancing their knowledge of the field. The book provides a rich collection of manuscripts in highly regarded data science topics, edited by professors with long experience in the field of data science. Introduces various techniques, methods, and algorithms adopted by data science experts Provides a detailed explanation of data science perceptions, reinforced by practical examples Presents a road map of future trends suitable for innovative data science research and practice
This book constitutes selected and revised papers from the First Mediterranean Forum - Data Science Conference, MeFDATA 2020, held in Sarajevo, Bosnia and Herzegovina, in October 2020. The 11 papers presented were carefully reviewed and selected from the 26 qualified submissions. The papers are organized in the topical sections on human behaviour and pandemic; applications in medicine; industrial applications; natural language processing.
This volume presents the latest advances in statistics and data science, including theoretical, methodological and computational developments and practical applications related to classification and clustering, data gathering, exploratory and multivariate data analysis, statistical modeling, and knowledge discovery and seeking. It includes contributions on analyzing and interpreting large, complex and aggregated datasets, and highlights numerous applications in economics, finance, computer science, political science and education. It gathers a selection of peer-reviewed contributions presented at the 16th Conference of the International Federation of Classification Societies (IFCS 2019), which was organized by the Greek Society of Data Analysis and held in Thessaloniki, Greece, on August 26-29, 2019.
Although longitudinal social network data are increasingly collected, there are few guides on how to navigate the range of available tools for longitudinal network analysis. The applied social scientist is left to wonder: Which model is most appropriate for my data? How should I get started with this modeling strategy? And how do I know if my model is any good? This book answers these questions. Author Scott Duxbury assumes that the reader is familiar with network measurement, description, and notation, and is versed in regression analysis, but is likely unfamiliar with statistical network methods. The goal of the book is to guide readers towards choosing, applying, assessing, and interpreting a longitudinal network model, and each chapter is organized with a specific data structure or research question in mind. A companion website includes data and R code to replicate the examples in the book.
This comprehensive book, rich with applications, offers a quantitative framework for the analysis of the various capture-recapture models for open animal populations, while also addressing associated computational methods. The state of our wildlife populations provides a litmus test for the state of our environment, especially in light of global warming and the increasing pollution of our land, seas, and air. In addition to monitoring our food resources such as fisheries, we need to protect endangered species from the effects of human activities (e.g. rhinos, whales, or encroachments on the habitat of orangutans). Pests must be controlled, whether insects or viruses, and we need to cope with growing feral populations such as opossums, rabbits, and pigs. Accordingly, we need to obtain information about a given population's dynamics, concerning e.g. mortality, birth, growth, breeding, sex, and migration, and determine whether the respective population is increasing, static, or declining. There are many methods for obtaining population information, but the most useful (and most work-intensive) is generically known as "capture-recapture," where we mark or tag a representative sample of individuals from the population and follow that sample over time using recaptures, resightings, or dead recoveries. Marks can be natural, such as stripes, fin profiles, and even DNA; or artificial, such as spots on insects. Attached tags can, for example, be simple bands or streamers, or more sophisticated variants such as radio and sonic transmitters. To estimate population parameters, sophisticated and complex mathematical models have been devised on the basis of recapture information and computer packages. This book addresses the analysis of such models. It is primarily intended for ecologists and wildlife managers who wish to apply the methods to the types of problems discussed above, though it will also benefit researchers and graduate students in ecology. Familiarity with basic statistical concepts is essential.
This volume focuses on the ethics of internet and social networking research exploring the challenges faced by researchers making use of social media and big data in their research. The internet, the world wide web and social media - indeed all forms of online communications - are attractive fields of research across a range of disciplines. They offer opportunities for methodological initiatives and innovations in research and easily accessed, massive amounts of primary and secondary data sources. This collection examines the new challenges posed by data generated online, explores how researchers are addressing those ethical challenges, and provides rich case studies of ethical decision making in the digital age.
Discover this multi-disciplinary and insightful work, which integrates machine learning, edge computing, and big data. Presents the basics of training machine learning models, key challenges and issues, as well as comprehensive techniques including edge learning algorithms, and system design issues. Describes architectures, frameworks, and key technologies for learning performance, security, and privacy, as well as incentive issues in training/inference at the network edge. Intended to stimulate fruitful discussions, inspire further research ideas, and inform readers from both academia and industry backgrounds. Essential reading for experienced researchers and developers, or for those who are just entering the field.
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of this book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. Then the authors present representational models and data structures, which can effectively capture these semantics, while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data. These algorithms internally use the semantically rich data structures developed in the earlier part of this book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering into relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
At first glance, the skills required to work in the data science field appear to be self-explanatory. Do not be fooled. Impactful data science demands an interdisciplinary knowledge of business philosophy, project management, salesmanship, presentation, and more. In Managing Your Data Science Projects, author Robert de Graaf explores important concepts that are frequently overlooked in much of the instructional literature that is available to data scientists new to the field. If your completed models are to be used and maintained most effectively, you must be able to present and sell them within your organization in a compelling way. The value of data science within an organization cannot be overstated. Thus, it is vital that strategies and communication between teams are dexterously managed. The three main ways that data science strategy is used in a company are to research its customers, assess risk analytics, and log operational measurements. These all require different managerial instincts, backgrounds, and experiences, and de Graaf cogently breaks down the unique reasons behind each. They must align seamlessly to eventually be adopted as dynamic models. Data science is a relatively new discipline, and as such, internal processes for it are not as well-developed within an operational business as others. With Managing Your Data Science Projects, you will learn how to create products that solve important problems for your customers and ensure that the initial success is sustained throughout the product's intended life. Your users will trust you and your models, and most importantly, you will be a more well-rounded and effectual data scientist throughout your career. Who This Book Is For Early-career data scientists, managers of data scientists, and those interested in entering the field of data science
The social sciences are becoming datafied. The questions once considered the domain of sociologists are now answered by data scientists operating on large datasets and breaking with methodological tradition, for better or worse. The traditional social sciences, such as sociology or anthropology, are under the double threat of becoming marginalized or even irrelevant, both from new methods of research which require more computational skills and from increasing competition from the corporate world which gains an additional advantage based on data access. However, unlike data scientists, sociologists and anthropologists have a long history of doing qualitative research. The more quantified datasets we have, the more difficult it is to interpret them without adding layers of qualitative interpretation. Big Data therefore needs Thick Data. This book presents the available arsenal of new methods and tools for studying society both quantitatively and qualitatively, opening ground for the social sciences to take the lead in analysing digital behaviour. It shows that Big Data can and should be supplemented and interpreted through thick data as well as cultural analysis. Thick Big Data is critically important for students and researchers in the social sciences to understand the possibilities of digital analysis, both in the quantitative and qualitative area, and to successfully build mixed-methods approaches.
Faculty members, scholars, and researchers often ask where they should publish their work; which outlets are most suitable to showcase their research? Which journals should they publish in to ensure their work is read and cited? How can the impact of their scholarly output be maximized? The answers to these and related questions affect not only individual scholars, but also academic and research institution stakeholders who are under constant pressure to create and implement organizational policies, evaluation measures and reward systems that encourage quality, high impact research from their members. The explosion of academic research in recent years, along with advances in information technology, has given rise to omnipresent and increasingly important scholarly metrics. These measures need to be assessed and used carefully, however, as their widespread availability often tempts users to jump to improper conclusions without considering several caveats. While various quantitative tools enable the ranking, evaluating, categorizing, and comparing of journals and articles, metrics such as author or article citation counts, journal impact factors, and related measures of institutional research output are somewhat inconsistent with traditional goals and objectives of higher education research and scholarly academic endeavors. This book provides guidance to individual researchers, research organizations, and academic institutions as they grapple with rapidly developing issues surrounding scholarly metrics and their potential value to both policy-makers, as evaluation and measurement tools, and individual scholars, as a way to identify colleagues for potential collaboration, promote their position as public intellectuals, and support intellectual community engagement.
Dirty data is a problem that costs businesses thousands, if not millions, every year. In organisations large and small across the globe you will hear talk of data quality issues. What you will rarely hear about is the consequences or how to fix it. Between the Spreadsheets: Classifying and Fixing Dirty Data draws on classification expert Susan Walsh's decade of experience in data classification to present a fool-proof method for cleaning and classifying your data. The book covers everything from the very basics of data classification to normalisation and taxonomies, and presents the author's proven COAT methodology, helping ensure an organisation's data is Consistent, Organised, Accurate and Trustworthy. A series of data horror stories outlines what can go wrong in managing data, and if it does, how it can be fixed. After reading this book, regardless of your level of experience, not only will you be able to work with your data more efficiently, but you will also understand the impact the work you do with it has, and how it affects the rest of the organisation. Written in an engaging and highly practical manner, Between the Spreadsheets gives readers of all levels a deep understanding of the dangers of dirty data and the confidence and skills to work more efficiently and effectively with it.
Reliable data analysis lies at the heart of scientific research, helping you to figure out what your data is really telling you. Yet the analysis of data can be a stumbling block for even the most experienced researcher - and can be a particularly daunting prospect when analyzing your own data for the first time. Drawing on the author's extensive experience of supporting project students, Scientific Data Analysis is a guide for any science undergraduate or beginning graduate who needs to analyse their own data, and wants a clear, step-by-step description of how to carry out their analysis in a robust, error-free way. With video content generated by the author to dovetail with the printed text, the resource not only describes the principles of data analysis and the strategies that should be adopted for a successful outcome but also shows you how to carry out that analysis - with the videos breaking down the process of analysis into easy-to-digest chunks. With guidance on the use of Minitab, SPSS and Excel, Scientific Data Analysis doesn't just support the use of one particular software package: it is the ideal guide to carrying out your own data analysis regardless of the software you have chosen.
A nuts-and-bolts guide to conducting your own professional-quality surveys without paying professional fees. How can you gauge public support for a cause or test the market for a product or service? What are the best methods for validating opinions for use in a paper or dissertation? A well-documented survey is the answer. But what if you don’t have thousands of dollars to commission one? No problem. How to Conduct Your Own Survey gives you everything you need to do it yourself! Without any prior training, you can learn expert techniques for conducting accurate, low-cost surveys. In step-by-step, down-to-earth language, Priscilla Salant and Don A. Dillman give you the tools you need to:
In this provocative yet practical guidebook Steve Morlidge demonstrates why the approach and methods of performance reporting that all information professionals have been taught fail, and what we need to do differently to help us make sense of the dynamic, complex and data-rich world in which we now live and work. Reporting on performance should not be treated as worthy but dull, requiring no more than routine comparisons of actual against targets. This traditional approach is based on the false premise that organisations can be managed as if they were a simple mechanical system operating in a predictable environment. And the methods associated with it, such as variance analyses and data tables that are used to measure and communicate performance, are completely inadequate. Instead, Morlidge argues performance reporting should be reconceived as an act of perception conducted on behalf of the organisation, helping to make sense of the sensory inputs (data) that it has at its disposal. And to do so effectively, performance reporters need to learn from and exploit the strengths of the human brain, compensate for its weaknesses and communicate in a way that makes it easy for their audience's brains to assimilate. Drawing on the latest insights from cognitive science, in this book you will learn: * how to bring a dynamic perspective into performance reporting * how to deploy a set of simple tools to help separate the signal from the noise inherent in large data sets and to make sound inferences * how to set goals intelligently * about the grammar of data visualization and how to use it to design powerful and simple reports In this way information professionals are uniquely charged with the responsibility for creating the shared consciousness that is a prerequisite for organisations to effectively respond and adapt to their environments.
Despite businesses often being based on creating desirable experiences, products and services for consumers, many fail to consider the end user in their planning and development processes. This book is here to change that. User experience research, also known as UX research, focuses on understanding user behaviours, needs and motivations through a range of observational techniques, task analysis and other methodologies. User Research is a practical guide that shows readers how to use the vast array of user research methods available. Written by one of the UK's leading UX research professionals, readers can benefit from in-depth knowledge that explores the fundamentals of user research. Covering all the key research methods including face-to-face user testing, card sorting, surveys, A/B testing and many more, the book gives expert insight into the nuances, advantages and disadvantages of each, while also providing guidance on how to interpret, analyze and share the data once it has been obtained. Now in its second edition, User Research provides a new chapter on research operations and infrastructure as well as new material on combining user research methodologies.
Like the three editions that preceded it, this new edition targets markets in health care practice and educational settings. It addresses practicing nurses and nursing students, together with nursing leadership and nursing faculty. It speaks to nursing informatics specialists and-in a departure from earlier editions of this title-to all nurses, regardless of their specialty, extending its usefulness as a text as noted below. In recognition of the evolving electronic health information environment and of interdisciplinary health care teams, the book is designed to be of interest to members of other health care professions (quality officers, administrators, etc.) as well as health information technology professionals (in health care facilities and in industry). The book will include numerous relevant case studies to illustrate the theories and principles discussed, making it an ideal candidate for use within nursing curricula (both undergraduate and graduate), as well as continuing education and staff development programs. This book honors the format established by the first three editions by including a content array and questions to guide the reader. This 4th edition also includes numerous brief case studies that help to illustrate the theories and practices described within the various chapters. Most of these "mini-cases" are provided by members of professional nursing organizations that comprise the TIGER Initiative. These mini-cases are listed in the front matter and highlighted via formatting throughout the text.