More students study management and organization studies than ever, the number of business schools worldwide continues to rise, and more management research is being published in a greater number of journals than could have been imagined twenty years ago. Dennis Tourish looks beneath the surface of this progress to expose a field in crisis and in need of radical reform. He identifies the ways in which management research has lost its way, including a remoteness from the practical problems that managers and employees face, a failure to replicate key research findings, poor writing, endless obscure theorizing, and an increasing number of research papers being retracted for fraud and other forms of malpractice. Tourish suggests fundamental changes to remedy these issues, enabling management research to become more robust, more interesting and more valuable to society. A must-read for academics, practising managers, university administrators and policy makers within higher education.
This book provides readers with a thorough understanding of various research areas within the field of data science. The book introduces readers to various techniques for data acquisition, extraction, and cleaning, data summarizing and modeling, data analysis and communication techniques, data science tools, deep learning, and various data science applications. Researchers can extract various future ideas and topics that could result in potential publications or theses. Furthermore, this book contributes to data scientists' preparation and to enhancing their knowledge of the field. The book provides a rich collection of manuscripts in highly regarded data science topics, edited by professors with long experience in the field of data science. Introduces various techniques, methods, and algorithms adopted by data science experts. Provides a detailed explanation of data science perceptions, reinforced by practical examples. Presents a road map of future trends suitable for innovative data science research and practice.
This book constitutes selected and revised papers from the First Mediterranean Forum - Data Science Conference, MeFDATA 2020, held in Sarajevo, Bosnia and Herzegovina, in October 2020. The 11 papers presented were carefully reviewed and selected from the 26 qualified submissions. The papers are organized in the topical sections on human behaviour and pandemic; applications in medicine; industrial applications; natural language processing.
This volume presents the latest advances in statistics and data science, including theoretical, methodological and computational developments and practical applications related to classification and clustering, data gathering, exploratory and multivariate data analysis, statistical modeling, and knowledge discovery and seeking. It includes contributions on analyzing and interpreting large, complex and aggregated datasets, and highlights numerous applications in economics, finance, computer science, political science and education. It gathers a selection of peer-reviewed contributions presented at the 16th Conference of the International Federation of Classification Societies (IFCS 2019), which was organized by the Greek Society of Data Analysis and held in Thessaloniki, Greece, on August 26-29, 2019.
Discover this multi-disciplinary and insightful work, which integrates machine learning, edge computing, and big data. Presents the basics of training machine learning models, key challenges and issues, as well as comprehensive techniques including edge learning algorithms, and system design issues. Describes architectures, frameworks, and key technologies for learning performance, security, and privacy, as well as incentive issues in training/inference at the network edge. Intended to stimulate fruitful discussions, inspire further research ideas, and inform readers from both academia and industry backgrounds. Essential reading for experienced researchers and developers, or for those who are just entering the field.
Although longitudinal social network data are increasingly collected, there are few guides on how to navigate the range of available tools for longitudinal network analysis. The applied social scientist is left to wonder: Which model is most appropriate for my data? How should I get started with this modeling strategy? And how do I know if my model is any good? This book answers these questions. Author Scott Duxbury assumes that the reader is familiar with network measurement, description, and notation, and is versed in regression analysis, but is likely unfamiliar with statistical network methods. The goal of the book is to guide readers towards choosing, applying, assessing, and interpreting a longitudinal network model, and each chapter is organized with a specific data structure or research question in mind. A companion website includes data and R code to replicate the examples in the book.
This comprehensive book, rich with applications, offers a quantitative framework for the analysis of the various capture-recapture models for open animal populations, while also addressing associated computational methods. The state of our wildlife populations provides a litmus test for the state of our environment, especially in light of global warming and the increasing pollution of our land, seas, and air. In addition to monitoring our food resources such as fisheries, we need to protect endangered species from the effects of human activities (e.g. rhinos, whales, or encroachments on the habitat of orangutans). Pests must be controlled, whether insects or viruses, and we need to cope with growing feral populations such as opossums, rabbits, and pigs. Accordingly, we need to obtain information about a given population's dynamics, concerning e.g. mortality, birth, growth, breeding, sex, and migration, and determine whether the respective population is increasing, static, or declining. There are many methods for obtaining population information, but the most useful (and most work-intensive) is generically known as "capture-recapture," where we mark or tag a representative sample of individuals from the population and follow that sample over time using recaptures, resightings, or dead recoveries. Marks can be natural, such as stripes, fin profiles, and even DNA; or artificial, such as spots on insects. Attached tags can, for example, be simple bands or streamers, or more sophisticated variants such as radio and sonic transmitters. To estimate population parameters, sophisticated and complex mathematical models have been devised on the basis of recapture information, along with computer packages to implement them. This book addresses the analysis of such models. It is primarily intended for ecologists and wildlife managers who wish to apply the methods to the types of problems discussed above, though it will also benefit researchers and graduate students in ecology. Familiarity with basic statistical concepts is essential.
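The mark-and-recapture idea described above can be made concrete with the simplest estimator in this family. The following Python sketch is illustrative only (it is not taken from the book): it implements the classical Lincoln-Petersen estimate and Chapman's small-sample correction, and the example numbers are invented.

```python
def lincoln_petersen(n1, n2, m2):
    """Naive Lincoln-Petersen estimate: N = n1 * n2 / m2,
    where n1 animals were marked, n2 were captured later,
    and m2 of those recaptures carried marks."""
    if m2 == 0:
        raise ValueError("need at least one marked recapture")
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Chapman's bias-corrected variant, preferred for small samples."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Invented example: 200 animals tagged; a later sample of 150 contains 30 tags.
print(lincoln_petersen(200, 150, 30))   # 1000.0
print(round(chapman(200, 150, 30), 1))  # 978.1
```

The open-population models the book actually covers go well beyond this closed-population sketch, adding survival, recruitment, and detection parameters estimated from repeated capture occasions.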
The 3rd edition of this popular textbook introduces the reader to the investigation of vegetation systems with an emphasis on data analysis. The book succinctly illustrates the various paths leading to high quality data suitable for pattern recognition, pattern testing, static and dynamic modelling and model testing including spatial and temporal aspects of ecosystems. Step-by-step introductions using small examples lead to more demanding approaches illustrated by real world examples aimed at explaining interpretations. All data sets and examples described in the book are available online and are written using the freely available statistical package R. This book will be of particular value to beginning graduate students and postdoctoral researchers of vegetation ecology, ecological data analysis, and ecological modelling, and experienced researchers needing a guide to new methods. A completely revised and updated edition of this popular introduction to data analysis in vegetation ecology. Includes practical step-by-step examples using the freely available statistical package R. Complex concepts and operations are explained using clear illustrations and case studies relating to real world phenomena. Emphasizes method selection rather than just giving a set of recipes.
This volume focuses on the ethics of internet and social networking research exploring the challenges faced by researchers making use of social media and big data in their research. The internet, the world wide web and social media - indeed all forms of online communications - are attractive fields of research across a range of disciplines. They offer opportunities for methodological initiatives and innovations in research and easily accessed, massive amounts of primary and secondary data sources. This collection examines the new challenges posed by data generated online, explores how researchers are addressing those ethical challenges, and provides rich case studies of ethical decision making in the digital age.
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of this book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. Then the authors present representational models and data structures, which can effectively capture these semantics, while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data. These algorithms internally use the semantically rich data structures developed in the earlier part of this book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering into relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
At first glance, the skills required to work in the data science field appear to be self-explanatory. Do not be fooled. Impactful data science demands an interdisciplinary knowledge of business philosophy, project management, salesmanship, presentation, and more. In Managing Your Data Science Projects, author Robert de Graaf explores important concepts that are frequently overlooked in much of the instructional literature that is available to data scientists new to the field. If your completed models are to be used and maintained most effectively, you must be able to present and sell them within your organization in a compelling way. The value of data science within an organization cannot be overstated. Thus, it is vital that strategies and communication between teams are dexterously managed. The three main ways that data science strategy is used in a company are to research its customers, assess risk, and log operational measurements. These all require different managerial instincts, backgrounds, and experiences, and de Graaf cogently breaks down the unique reasons behind each. They must align seamlessly to eventually be adopted as dynamic models. Data science is a relatively new discipline, and as such, internal processes for it are not as well-developed within an operational business as others. With Managing Your Data Science Projects, you will learn how to create products that solve important problems for your customers and ensure that the initial success is sustained throughout the product's intended life. Your users will trust you and your models, and most importantly, you will be a more well-rounded and effectual data scientist throughout your career. Who This Book Is For: Early-career data scientists, managers of data scientists, and those interested in entering the field of data science.
The social sciences are becoming datafied. The questions once considered the domain of sociologists are now answered by data scientists operating on large datasets and breaking with methodological tradition, for better or worse. The traditional social sciences, such as sociology or anthropology, are under the double threat of becoming marginalized or even irrelevant, both from new methods of research which require more computational skills and from increasing competition from the corporate world which gains an additional advantage based on data access. However, unlike data scientists, sociologists and anthropologists have a long history of doing qualitative research. The more quantified datasets we have, the more difficult it is to interpret them without adding layers of qualitative interpretation. Big Data therefore needs Thick Data. This book presents the available arsenal of new methods and tools for studying society both quantitatively and qualitatively, opening ground for the social sciences to take the lead in analysing digital behaviour. It shows that Big Data can and should be supplemented and interpreted through thick data as well as cultural analysis. Thick Big Data is critically important for students and researchers in the social sciences to understand the possibilities of digital analysis, both in the quantitative and qualitative area, and to successfully build mixed-methods approaches.
Statistics for Social Work with SPSS provides readers with a user-friendly, evidence-based, and practical resource to help them make sense of, organize, analyze, and interpret data in contemporary contexts. It incorporates one of the most well-known statistics software applications, the Statistical Package for the Social Sciences (SPSS), within each chapter to help readers integrate their knowledge either manually or with the assistance of technology. The book begins with a brief introduction to statistics and research, followed by chapters that address variables, frequency distributions, measures of central tendency, and measures of variability. Additional chapters cover probability and hypothesis testing; the normal distribution and z-scores; correlation; simple linear regression; one-way ANOVA; and more. Each chapter features concise, simple explanations of key terms, formulas, and calculations; study questions and answers; specific SPSS instructions on computerized computations; and evidence-based, practical examples to support the learning experience. Presenting students with highly accessible and universally understandable statistical concepts, Statistics for Social Work with SPSS is an ideal textbook for undergraduate and graduate-level courses in social work statistics, as well as research-based courses within the social and behavioral sciences.
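As a rough illustration of two of the concepts listed above (z-scores and correlation), the following standard-library Python sketch shows the underlying arithmetic directly. It is not SPSS output and the data are invented.

```python
import statistics

def z_scores(xs):
    """Standardize each score: (x - sample mean) / sample standard deviation."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def pearson_r(xs, ys):
    """Pearson correlation as the average product of paired z-scores."""
    zx, zy = z_scores(xs), z_scores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

exam = [70, 75, 80, 85, 90]   # invented exam scores
hours = [2, 3, 5, 7, 8]       # invented study hours
print([round(z, 2) for z in z_scores(exam)])  # [-1.26, -0.63, 0.0, 0.63, 1.26]
print(round(pearson_r(exam, hours), 3))       # 0.992
```

SPSS reports the same quantities (Analyze > Descriptive Statistics, and Analyze > Correlate > Bivariate); seeing the formula computed by hand is what textbooks like this one mean by integrating knowledge "manually or with the assistance of technology".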
Faculty members, scholars, and researchers often ask where they should publish their work: Which outlets are most suitable to showcase their research? Which journals should they publish in to ensure their work is read and cited? How can the impact of their scholarly output be maximized? The answers to these and related questions affect not only individual scholars, but also academic and research institution stakeholders who are under constant pressure to create and implement organizational policies, evaluation measures and reward systems that encourage quality, high impact research from their members. The explosion of academic research in recent years, along with advances in information technology, has given rise to omnipresent and increasingly important scholarly metrics. These measures need to be assessed and used carefully, however, as their widespread availability often tempts users to jump to improper conclusions without considering several caveats. While various quantitative tools enable the ranking, evaluating, categorizing, and comparing of journals and articles, metrics such as author or article citation counts, journal impact factors, and related measures of institutional research output are somewhat inconsistent with traditional goals and objectives of higher education research and scholarly academic endeavors. This book provides guidance to individual researchers, research organizations, and academic institutions as they grapple with rapidly developing issues surrounding scholarly metrics and their potential value to both policy-makers, as evaluation and measurement tools, and individual scholars, as a way to identify colleagues for potential collaboration, promote their position as public intellectuals, and support intellectual community engagement.
Reliable data analysis lies at the heart of scientific research, helping you to figure out what your data is really telling you. Yet the analysis of data can be a stumbling block for even the most experienced researcher - and can be a particularly daunting prospect when analyzing your own data for the first time. Drawing on the author's extensive experience of supporting project students, Scientific Data Analysis is a guide for any science undergraduate or beginning graduate who needs to analyse their own data, and wants a clear, step-by-step description of how to carry out their analysis in a robust, error-free way. With video content generated by the author to dovetail with the printed text, the resource not only describes the principles of data analysis and the strategies that should be adopted for a successful outcome but also shows you how to carry out that analysis - with the videos breaking down the process of analysis into easy-to-digest chunks. With guidance on the use of Minitab, SPSS and Excel, Scientific Data Analysis doesn't just support the use of one particular software package: it is the ideal guide to carrying out your own data analysis regardless of the software you have chosen.
A nuts-and-bolts guide to conducting your own professional-quality surveys without paying professional fees. How can you gauge public support for a cause or test the market for a product or service? What are the best methods for validating opinions for use in a paper or dissertation? A well-documented survey is the answer. But what if you don't have thousands of dollars to commission one? No problem. How to Conduct Your Own Survey gives you everything you need to do it yourself! Without any prior training, you can learn expert techniques for conducting accurate, low-cost surveys. In step-by-step, down-to-earth language, Priscilla Salant and Don A. Dillman give you the tools you need to design, conduct, and report a survey of your own.
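One concrete calculation every do-it-yourself surveyor faces is how many completed responses to collect. The sketch below is a generic illustration (not drawn from the book) of the standard sample-size formula at roughly 95% confidence, with a finite-population correction; the margins and population figures are invented.

```python
import math

def sample_size(margin_of_error, population=None, z=1.96, p=0.5):
    """Responses needed at ~95% confidence (z = 1.96), worst-case p = 0.5.
    n0 = z^2 * p * (1 - p) / e^2, shrunk when the target population is small."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # finite-population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size(0.05))        # 385 -- large (effectively infinite) population
print(sample_size(0.05, 2000))  # 323 -- a town of 2,000 needs fewer responses
```

Note the practical lesson buried in the formula: required sample size depends mostly on the margin of error you want, not on the size of the population, which is why national polls get by with about a thousand respondents.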
This textbook presents an innovative new perspective on the economics of development, including insights from a broad range of disciplines. It starts with the current state of affairs, a discussion of data availability, reliability, and analysis, and an historic overview of the deep influence of fundamental factors on human prosperity. Next, it focuses on the role of human interaction in terms of trade, capital, and knowledge flows, as well as the associated implications for institutions, contracts, and finance. The book also highlights differences in the development paths of emerging countries in order to provide a better understanding of the concepts of development and the Millennium Development Goals. Insights from other disciplines are used to help understand human development with regard to other issues, such as inequalities, health, demography, education, and poverty. The book concludes by emphasizing the importance of connections, location, and human interaction in determining future prosperity.
Despite businesses often being based on creating desirable experiences, products and services for consumers, many fail to consider the end user in their planning and development processes. This book is here to change that. User experience research, also known as UX research, focuses on understanding user behaviours, needs and motivations through a range of observational techniques, task analysis and other methodologies. User Research is a practical guide that shows readers how to use the vast array of user research methods available. Written by one of the UK's leading UX research professionals, readers can benefit from in-depth knowledge that explores the fundamentals of user research. Covering all the key research methods including face-to-face user testing, card sorting, surveys, A/B testing and many more, the book gives expert insight into the nuances, advantages and disadvantages of each, while also providing guidance on how to interpret, analyze and share the data once it has been obtained. Now in its second edition, User Research provides a new chapter on research operations and infrastructure as well as new material on combining user research methodologies.
Like the three editions that preceded it, this new edition targets markets in health care practice and educational settings. It addresses practicing nurses and nursing students, together with nursing leadership and nursing faculty. It speaks to nursing informatics specialists and, in a departure from earlier editions of this title, to all nurses, regardless of their specialty, extending its usefulness as a text as noted below. In recognition of the evolving electronic health information environment and of interdisciplinary health care teams, the book is designed to be of interest to members of other health care professions (quality officers, administrators, etc.) as well as health information technology professionals (in health care facilities and in industry). The book will include numerous relevant case studies to illustrate the theories and principles discussed, making it an ideal candidate for use within nursing curricula (both undergraduate and graduate), as well as continuing education and staff development programs. This book honors the format established by the first three editions by including a content array and questions to guide the reader. This 4th edition also includes numerous brief case studies that help to illustrate the theories and practices described within the various chapters. Most of these "mini-cases" are provided by members of professional nursing organizations that comprise the TIGER Initiative. These mini-cases are listed in the front matter and highlighted via formatting throughout the text.
A valuable new edition of a standard reference. The use of statistical methods for categorical data has increased dramatically, particularly for applications in the biomedical and social sciences. An Introduction to Categorical Data Analysis, Third Edition summarizes these methods and shows readers how to apply them using software. Readers will find a unified generalized linear models approach that connects logistic regression and loglinear models for discrete data with normal regression for continuous data. Adding to the value of the new edition are:
- Illustrations of the use of R software to perform all the analyses in the book
- A new chapter on alternative methods for categorical data, including smoothing and regularization methods (such as the lasso), classification methods such as linear discriminant analysis and classification trees, and cluster analysis
- New sections in many chapters introducing the Bayesian approach for the methods of that chapter
- More than 70 analyses of data sets to illustrate application of the methods, and about 200 exercises, many containing other data sets
- An appendix showing how to use SAS, Stata, and SPSS, and an appendix with short solutions to most odd-numbered exercises
Written in an applied, nontechnical style, this book illustrates the methods using a wide variety of real data, including medical clinical trials, environmental questions, drug use by teenagers, horseshoe crab mating, basketball shooting, correlates of happiness, and much more. An Introduction to Categorical Data Analysis, Third Edition is an invaluable tool for statisticians and biostatisticians as well as methodologists in the social and behavioral sciences, medicine and public health, marketing, education, and the biological and agricultural sciences.
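To make the generalized linear models connection mentioned above concrete, here is a minimal, purely illustrative Python sketch (not the book's R code): it fits a one-predictor logistic regression, the workhorse model for a binary categorical response, by gradient ascent on the log-likelihood. The toy data are invented.

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(y=1 | x) = 1 / (1 + exp(-(a + b*x))) by gradient ascent
    on the Bernoulli log-likelihood. Returns (intercept a, slope b)."""
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += y - p        # d(log-likelihood)/da
            gb += (y - p) * x  # d(log-likelihood)/db
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

# Invented toy data: success becomes likelier as x grows.
a, b = fit_logistic([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])
print(b > 0)  # True: the fitted slope is positive
```

In R, the book's software of choice, the equivalent one-liner is `glm(y ~ x, family = binomial)`; the GLM framing is what lets the same machinery cover loglinear models for counts and ordinary regression for continuous responses.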
Since long before computers were even thought of, data has been collected and organized by diverse cultures across the world. Once access to the Internet became a reality for large swathes of the world's population, the amount of data generated each day became huge, and continues to grow exponentially. It includes all our uploaded documents, video, and photos, all our social media traffic, our online shopping, even the GPS data from our cars. 'Big Data' represents a qualitative change, not simply a quantitative one. The term refers both to the new technologies involved, and to the way it can be used by business and government. Dawn E. Holmes uses a variety of case studies to explain how data is stored, analysed, and exploited by a variety of bodies from big companies to organizations concerned with disease control. Big data is transforming the way businesses operate, and the way medical research can be carried out. At the same time, it raises important ethical issues; Holmes discusses cases such as the Snowden affair, data security, and domestic smart devices which can be hijacked by hackers. ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
For increasingly data-savvy clients, lawyers can no longer give "it depends" answers rooted in anecdata. Clients insist that their lawyers justify their reasoning, and with more than a limited set of war stories. The considered judgment of an experienced lawyer is unquestionably valuable. However, on balance, clients would rather have the considered judgment of an experienced lawyer informed by the most relevant information required to answer their questions. Data-Driven Law: Data Analytics and the New Legal Services helps legal professionals meet the challenges posed by a data-driven approach to delivering legal services. Its chapters are written by leading experts who cover such topics as: Mining legal data Computational law Uncovering bias through the use of Big Data Quantifying the quality of legal services Data mining and decision-making Contract analytics and contract standards In addition to providing clients with data-based insight, legal firms can track a matter with data from beginning to end, from the marketing spend through to the type of matter, hours spent, billed, and collected, including metrics on profitability and success. Firms can organize and collect documents after a matter and even automate them for reuse. Data on marketing related to a matter can be an amazing source of insight about which practice areas are most profitable. Data-driven decision-making requires firms to think differently about their workflow. Most firms warehouse their files, never to be seen again after the matter closes. Running a data-driven firm requires lawyers and their teams to treat information about the work as part of the service, and to collect, standardize, and analyze matter data from cradle to grave. More than anything, using data in a law practice requires a different mindset about the value of this information. This book helps legal professionals to develop this data-driven mindset.
A comprehensive compilation of new developments in data linkage methodology. The increasing availability of large administrative databases has led to a dramatic rise in the use of data linkage, yet the standard texts on linkage are still those which describe the seminal work of the 1950s and 1960s, with some updates. Linkage and analysis of data across sources remains problematic due to lack of discriminatory and accurate identifiers, missing data and regulatory issues. Recent developments in data linkage methodology have concentrated on bias and analysis of linked data, novel approaches to organising relationships between databases and privacy-preserving linkage. Methodological Developments in Data Linkage brings together a collection of contributions from members of the international data linkage community, covering cutting-edge methodology in this field. It presents opportunities and challenges provided by linkage of large and often complex datasets, including analysis problems, legal and security aspects, models for data access and the development of novel research areas. New methods for handling uncertainty in analysis of linked data, solutions for anonymised linkage and alternative models for data collection are also discussed. Key Features:
- Presents cutting-edge methods for a topic of increasing importance to a wide range of research areas, with applications to data linkage systems internationally
- Covers the essential issues associated with data linkage today
- Includes examples based on real data linkage systems, highlighting the opportunities, successes and challenges that the increasing availability of linked data provides
- Takes a novel approach that incorporates technical aspects of both the linkage and the management and analysis of linked data
This book will be of core interest to academics, government employees, data holders, data managers, analysts and statisticians who use administrative data. It will also appeal to researchers in a variety of areas, including epidemiology, biostatistics, social statistics, informatics, policy and public health.
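The core difficulty this collection addresses, matching records across sources that share no unique identifier, can be sketched with a toy similarity-based matcher. The following Python example is purely illustrative: the field names, weights, and threshold are invented assumptions, not methods taken from the book.

```python
import difflib

def match_score(rec_a, rec_b):
    """Similarity in [0, 1]: fuzzy name match, penalised by birth-year gap."""
    name_sim = difflib.SequenceMatcher(
        None, rec_a["name"].lower(), rec_b["name"].lower()).ratio()
    year_penalty = min(abs(rec_a["year"] - rec_b["year"]), 5) / 5
    return name_sim * (1 - 0.5 * year_penalty)

def link(source_a, source_b, threshold=0.8):
    """For each record in A, accept its best match in B if above the threshold."""
    links = []
    for ra in source_a:
        best = max(source_b, key=lambda rb: match_score(ra, rb))
        if match_score(ra, best) >= threshold:
            links.append((ra["name"], best["name"]))
    return links

# Invented records from two hypothetical sources with no shared identifier.
hospital = [{"name": "Jon Smith", "year": 1980}, {"name": "Ann Lee", "year": 1955}]
registry = [{"name": "John Smith", "year": 1980}, {"name": "Anne Leigh", "year": 1990}]
print(link(hospital, registry))  # [('Jon Smith', 'John Smith')]
```

Production linkage systems replace this crude score with probabilistic match weights, explicit treatment of missing identifiers, and privacy-preserving encodings, which is precisely the methodology the volume surveys.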
"Modeling with Data" fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods. Klemens's accessible survey describes these models in a unified and nontraditional manner, providing alternative ways of looking at statistical concepts that often befuddle students. The book includes nearly one hundred sample programs of all kinds. Links to these programs will be available on this page at a later date. "Modeling with Data" will interest anyone looking for a comprehensive guide to these powerful statistical tools, including researchers and graduate students in the social sciences, biology, engineering, economics, and applied mathematics. |