Welcome to Loot.co.za!
What is information design? Which design disciplines play a role in it? And where are the interfaces with other disciplines such as usability engineering and information architecture? This compendium offers a comprehensive introduction to the theoretical and design foundations of information design, as well as its history and practice. In clear and vivid terms, the authors describe the subdisciplines and fields of work of information design: from interaction design, exhibition design, and signage, through corporate design, text design, and sound design, to information didactics and information psychology. Definitions of terms, tips, and examples from practice make the Kompendium Informationsdesign a handbook for students, lecturers, and practitioners.
Data Mining for Business Analytics: Concepts, Techniques, and Applications with JMP Pro(R) presents an applied and interactive approach to data mining. Featuring hands-on applications with JMP Pro(R), a statistical package from the SAS Institute, the book uses engaging, real-world examples to build a theoretical and practical understanding of key data mining methods, especially predictive models for classification and prediction. Topics include data visualization, dimension reduction techniques, clustering, linear and logistic regression, classification and regression trees, discriminant analysis, naive Bayes, neural networks, uplift modeling, ensemble models, and time series forecasting. The book also includes:
* Detailed summaries that supply an outline of key topics at the beginning of each chapter
* End-of-chapter examples and exercises that allow readers to expand their comprehension of the presented material
* Data-rich case studies to illustrate various applications of data mining techniques
* A companion website with over two dozen data sets, exercises and case study solutions, and slides for instructors
Data Mining for Business Analytics: Concepts, Techniques, and Applications with JMP Pro(R) is an excellent textbook for advanced undergraduate and graduate-level courses on data mining, predictive analytics, and business analytics. The book is also a one-of-a-kind resource for data scientists, analysts, researchers, and practitioners working with analytics in the fields of management, finance, marketing, information technology, healthcare, education, and any other data-rich field.
Galit Shmueli, PhD, is Distinguished Professor at National Tsing Hua University's Institute of Service Science. She has designed and instructed data mining courses since 2004 at the University of Maryland, Statistics.com, the Indian School of Business, and National Tsing Hua University, Taiwan. Professor Shmueli is known for her research and teaching in business analytics, with a focus on statistical and data mining methods in information systems and healthcare. She has authored over 70 journal articles, books, textbooks, and book chapters, including Data Mining for Business Analytics: Concepts, Techniques, and Applications in XLMiner(R), Third Edition, also published by Wiley.
Peter C. Bruce is President and Founder of the Institute for Statistics Education at www.statistics.com. He has written multiple journal articles and is the developer of Resampling Stats software. He is the author of Introductory Statistics and Analytics: A Resampling Perspective and co-author of Data Mining for Business Analytics: Concepts, Techniques, and Applications in XLMiner(R), Third Edition, both published by Wiley.
Mia Stephens is Academic Ambassador at JMP(R), a division of SAS Institute. Prior to joining SAS, she was an adjunct professor of statistics at the University of New Hampshire and a founding member of the North Haven Group LLC, a statistical training and consulting company. She is the co-author of three other books, including Visual Six Sigma: Making Data Analysis Lean, Second Edition, also published by Wiley.
Nitin R. Patel, PhD, is Chairman and cofounder of Cytel, Inc., based in Cambridge, Massachusetts. A Fellow of the American Statistical Association, Dr. Patel has also served as a Visiting Professor at the Massachusetts Institute of Technology and at Harvard University. He is a Fellow of the Computer Society of India and was a professor at the Indian Institute of Management, Ahmedabad, for 15 years. He is co-author of Data Mining for Business Analytics: Concepts, Techniques, and Applications in XLMiner(R), Third Edition, also published by Wiley.
Anomaly detection is the detective work of machine learning: finding the unusual, catching the fraud, discovering strange activity in large and complex datasets. But, unlike Sherlock Holmes, you may not know what the puzzle is, much less what "suspects" you're looking for. This O'Reilly report uses practical examples to explain how the underlying concepts of anomaly detection work. From banking security to the natural sciences, medicine, and marketing, anomaly detection has many useful applications in this age of big data. And the search for anomalies will intensify once the Internet of Things spawns even more new types of data. The concepts described in this report will help you tackle anomaly detection in your own project:
* Use probabilistic models to predict what's normal, and contrast that with what you observe
* Set an adaptive threshold to determine which data falls outside of the normal range, using the t-digest algorithm
* Establish normal fluctuations in complex systems and signals (such as an EKG) with a more adaptive probabilistic model
* Use historical data to discover anomalies in sporadic event streams, such as web traffic
* Learn how to use deviations in expected behavior to trigger fraud alerts
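The threshold-based approach described above can be sketched in a few lines. This is a minimal illustration, not the report's own code: the report uses the t-digest to estimate quantiles over streams, whereas the sketch below substitutes a plain offline empirical quantile, and the baseline data is synthetic.

```python
import numpy as np

def fit_threshold(history, quantile=0.999):
    """Estimate an anomaly threshold from historical 'normal' data.
    (A plain empirical quantile stands in here for the t-digest's
    streaming quantile estimate.)"""
    return np.quantile(history, quantile)

def flag_anomalies(values, threshold):
    """Return indices of observations that fall outside the normal range."""
    return [i for i, v in enumerate(values) if v > threshold]

rng = np.random.default_rng(0)
normal = rng.normal(loc=10.0, scale=2.0, size=10_000)  # historical baseline
threshold = fit_threshold(normal)

new_data = [9.8, 11.2, 35.0, 10.4]  # 35.0 lies far outside the baseline
print(flag_anomalies(new_data, threshold))  # -> [2]
```

The same pattern generalizes: replace the quantile estimate with a fitted probabilistic model, and flag observations whose likelihood under that model is too low.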
Since long before computers were even thought of, data has been collected and organized by diverse cultures across the world. Once access to the Internet became a reality for large swathes of the world's population, the amount of data generated each day became huge, and continues to grow exponentially. It includes all our uploaded documents, video, and photos, all our social media traffic, our online shopping, even the GPS data from our cars. 'Big Data' represents a qualitative change, not simply a quantitative one. The term refers both to the new technologies involved, and to the way it can be used by business and government. Dawn E. Holmes uses a variety of case studies to explain how data is stored, analysed, and exploited by a variety of bodies from big companies to organizations concerned with disease control. Big data is transforming the way businesses operate, and the way medical research can be carried out. At the same time, it raises important ethical issues; Holmes discusses cases such as the Snowden affair, data security, and domestic smart devices which can be hijacked by hackers. ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
This book presents the 68332 controller from Motorola's 68300 family. With its 32-bit architecture, extensive on-chip peripherals, and a 16 MB address space, it belongs to the upper performance class. Around 60 program examples and 30 exercises reinforce the material.
Knowing everything you can about each click to your Web site can help you make strategic decisions regarding your business. This book is about the why, not just the how, of web analytics and the rules for developing a "culture of analysis" inside your organization. Why you should collect various types of data. Why you need a strategy. Why it must remain flexible. Why your data must generate meaningful action. The authors answer these critical questions--and many more--using their decade of experience in Web analytics.
A theoretical and practical guide to using corpus linguistic techniques in stylistic analysis The use of corpora in stylistics has increased substantially in recent years but until now there has been no book detailing the theoretical basis and methodological practices of corpus stylistics. This book surveys the field and sets the agenda for this fast-developing area. Focusing on how to use off-the-shelf corpus software, such as AntConc, Wmatrix, and the Brigham Young University (BYU) corpus interface, this step-by-step guide explains the theory and practice of using corpus methods and tools for stylistic analysis. Eight original case studies demonstrate how to use corpus tools to analyse style in a range of texts, from the contemporary to the historical. McIntyre and Walker explain how to develop appropriate research questions for corpus stylistic analysis, construct and annotate corpora, make sense of statistics, and analyse corpus data. In addition, the book provides practical advice on how to manage the transition from quantitative results to qualitative analysis, and explores how theories, models and frameworks from stylistics can be used to enhance the qualitative phase of corpus analysis. Supported by detailed instructions on how to access and use relevant corpus software, this is a user's guide to doing corpus stylistic analysis. For students and researchers in stylistics new to the use of corpus methods and theories, the book presents a 'how-to' guide; for corpus linguists it opens the door to the theories, models and frameworks developed in stylistics that are of value to mainstream corpus linguistics.
The World Wide Web has a massive and permanent influence on our lives. Economy, industry, education, healthcare, public administration, entertainment - there is hardly any part of our daily lives which has not been pervaded by the Internet. Accordingly, modern Web applications are fully-fledged, complex software systems, and in order to be successful their development must be thorough and systematic. Web Engineering is the application of quantifiable approaches to the cost-effective requirements analysis, design, implementation, testing, operation and maintenance of high quality Web applications. Web Engineers face the same traditional concerns as Software Engineers: the risks of failure to meet business needs, project schedule delays, budget overruns and poor quality of deliverables. But in the Web environment new and complicated issues demand attention, too. Web Engineering addresses the problems associated with shorter lead times which require rapid prototyping and agile methods, the interactivity and visual nature of the medium which make HCI aspects highly significant, and multimedia features of Web applications. This well-organized guide takes a rigorous interdisciplinary approach to Web Engineering, covering Web development concepts, methods, tools and techniques, and is ideal for undergraduate and graduate students on Web-focused or Software Engineering courses, as well as Web software developers, Web designers and project managers.
Data Mining: Concepts and Techniques, Fourth Edition introduces concepts, principles, and methods for mining patterns, knowledge, and models from various kinds of data for diverse applications. Specifically, it delves into the processes for uncovering patterns and knowledge from massive collections of data, known as knowledge discovery from data, or KDD. It focuses on the feasibility, usefulness, effectiveness, and scalability of data mining techniques for large data sets. After an introduction to the concept of data mining, the authors explain the methods for preprocessing, characterizing, and warehousing data. They then partition the data mining methods into several major tasks, introducing concepts and methods for mining frequent patterns, associations, and correlations for large data sets; data classification and model construction; cluster analysis; and outlier detection. Concepts and methods for deep learning are systematically introduced in a dedicated chapter. Finally, the book covers the trends, applications, and research frontiers in data mining.
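To give a flavour of frequent-pattern mining, the sketch below implements the support-counting step at the heart of Apriori-style algorithms. It is an illustrative toy, not code from the book; the basket data is invented.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, k=2):
    """Count the support of every k-item combination and keep those
    meeting min_support (support = fraction of transactions
    containing the itemset)."""
    counts = Counter()
    for t in transactions:
        for combo in combinations(sorted(set(t)), k):
            counts[combo] += 1
    n = len(transactions)
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "eggs"},
]
# Each pair of items appears in exactly half of the baskets here.
print(frequent_itemsets(baskets, min_support=0.5))
```

Real frequent-pattern miners prune the candidate space level by level (the Apriori property: every subset of a frequent itemset must itself be frequent) rather than enumerating all combinations as this toy does.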
This book highlights advanced applications of geospatial data analytics to address real-world issues in urban society. In a connected world, we generate spatial data at unprecedented rates; harnessed well, it yields insightful analytics that shape how we analyze past events and set future directions. The book is an anthology of applications of spatial data, and of the analytics performed on them, for gaining insights that can be used for problem solving in an urban setting. Each chapter is contributed by spatially aware data scientists in the making, who present spatial perspectives drawn from spatial big data. The book will benefit mature researchers and students alike, presenting a variety of urban applications in which machine learning algorithms are applied to spatial big data for real-world problem solving.
This book gathers a collection of high-quality peer-reviewed research papers presented at the International Conference on Big Data, IoT and Machine Learning (BIM 2021), held in Cox's Bazar, Bangladesh, during 23-25 September 2021. The book covers research papers in the field of big data, IoT and machine learning. The book will be helpful for active researchers and practitioners in the field.
This book includes original unpublished contributions presented at the International Conference on Data Analytics and Management (ICDAM 2021), held at Jan Wyzykowski University, Poland, during June 2021. The book covers topics in data analytics, data management, big data, computational intelligence, and communication networks. It presents innovative work by leading academics, researchers, and experts from industry, which will be useful for young researchers and students.
Educational Data Analytics (EDA) has been credited with significant benefits for enhancing on-demand personalized educational support of individual learners, as well as reflective course (re)design for achieving more authentic teaching, learning, and assessment experiences integrated into real work-oriented tasks. This open access textbook is a tutorial for developing, practicing, and self-assessing core competences in educational data analytics for digital teaching and learning. It combines theoretical knowledge on core issues related to collecting, analyzing, interpreting, and using educational data, including ethics and privacy concerns. After each section, the textbook provides questions and teaching materials/learning activities, such as quizzes with multiple types of questions related to the topic studied or the video(s) referenced. These activities reproduce real-life contexts by using a suitable use-case scenario (storytelling), encouraging learners to link theory with practice; self-assessed assignments enable learners to apply their attained knowledge and acquired competences on EDL. By studying this book, you will:
* know where to locate useful educational data in different sources, and understand their limitations;
* know the basics of managing educational data to make them useful, understand relevant methods, and be able to use relevant tools;
* know the basics of organising, analysing, interpreting, and presenting learner-generated data within their learning context, understand relevant learning analytics methods, and be able to use relevant learning analytics tools;
* know the basics of analysing and interpreting educational data to facilitate educational decision making, including course and curricula design, understand relevant teaching analytics methods, and be able to use relevant teaching analytics tools;
* understand issues related to educational data ethics and privacy.
This book is intended for school leaders and teachers engaged in blended (using the flipped classroom model) and online (during the COVID-19 crisis and beyond) teaching and learning; e-learning professionals (such as instructional designers and e-tutors) of online and blended courses; instructional technologists; researchers; and undergraduate and postgraduate university students studying education, educational technology, and relevant fields.
Richly illustrated in color, Statistics and Data Analysis for Microarrays Using R and Bioconductor, Second Edition provides a clear and rigorous description of powerful analysis techniques and algorithms for mining and interpreting biological information. Omitting tedious details, heavy formalisms, and cryptic notations, the text takes a hands-on, example-based approach that teaches students the basics of R and microarray technology as well as how to choose and apply the proper data analysis tool to specific problems.
New to the Second Edition: Completely updated and double the size of its predecessor, this timely second edition replaces the commercial software with the open source R and Bioconductor environments. Fourteen new chapters cover such topics as the basic mechanisms of the cell, reliability and reproducibility issues in DNA microarrays, basic statistics and linear models in R, experiment design, multiple comparisons, quality control, data pre-processing and normalization, Gene Ontology analysis, pathway analysis, and machine learning techniques. Methods are illustrated with toy examples and real data, and the R code for all routines is available on an accompanying downloadable resource. With all the necessary prerequisites included, this best-selling book guides students from very basic notions to advanced analysis techniques in R and Bioconductor. The first half of the text presents an overview of microarrays and the statistical elements that form the building blocks of any data analysis. The second half introduces the techniques most commonly used in the analysis of microarray data.
Customers and products are the heart of any business, and corporations collect more data about them every year. However, just because you have data doesn't mean you can use it effectively. If not properly integrated, data can actually encourage false conclusions that result in bad decisions and lost opportunities. Entity Resolution (ER) is a powerful tool for transforming data into accurate, value-added information. Using entity resolution methods and techniques, you can identify equivalent records from multiple sources corresponding to the same real-world person, place, or thing. This emerging area of data management is clearly explained throughout the book. It teaches you the process of locating and linking information about the same entity - eliminating duplications - and making crucial business decisions based on the results. This book is an authoritative, vendor-independent technical reference for researchers, graduate students and practitioners, including architects, technical analysts, and solution developers. In short, Entity Resolution and Information Quality gives you the applied-level know-how you need to aggregate data from disparate sources and form accurate customer and product profiles that support effective marketing and sales. It is an invaluable guide for succeeding in today's info-centric environment.
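The core idea of linking records that refer to the same real-world entity can be sketched very simply. The example below is an illustrative toy, not the book's method: it builds a crude normalized match key, whereas production ER systems use much richer comparators (phonetic codes, edit distance, probabilistic matching). All names and data are invented.

```python
import re

def normalize(record):
    """Build a simple match key: lowercase the name and strip
    punctuation and spacing variation, paired with the ZIP code."""
    name = re.sub(r"[^a-z0-9]", "", record["name"].lower())
    return (name, record["zip"])

def resolve(records):
    """Group records whose match keys agree -- each group is one
    candidate real-world entity."""
    entities = {}
    for r in records:
        entities.setdefault(normalize(r), []).append(r)
    return entities

customers = [
    {"name": "J. P. Smith", "zip": "02139"},
    {"name": "JP Smith",    "zip": "02139"},
    {"name": "Jane Doe",    "zip": "10001"},
]
groups = resolve(customers)
print(len(groups))  # -> 2: the two Smith records resolve to one entity
```

Even this crude key illustrates the payoff the book describes: duplicate customer records collapse into a single profile before any downstream analysis.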
Data Analysis for Social Microblogging Platforms explores the nature of microblog datasets, also covering the larger field which focuses on information, data, and knowledge in the context of natural language processing. The book investigates a range of significant computational techniques which enable data and computer scientists to recognize patterns in these vast datasets, including machine learning, data mining algorithms, rough set and fuzzy set theory, evolutionary computations, combinatorial pattern matching, clustering, summarization, and classification. Chapters focus on basic online microblogging data-analysis research methodologies, community detection, summarization application development, performance evaluation, and their applications in big data.
Manufacturing execution systems (MES) are the tool with which production processes are made transparent and with which operations can be controlled in real time against defined targets. This book is intended to help companies introduce an MES in a goal-oriented way. It offers not only advice on the conceptual design, but also support for the "internal marketing" of the MES project, in the form of recommendations and cost-benefit analyses. It then provides guidance on writing a requirements specification and on tendering and vendor selection. Alongside tips covering everything from project start to system go-live, topics such as employee training and support are addressed. The book also shows how external MES consultants can support the introduction process. Two case studies illustrate how implementations proceeded in practice and what benefits the MES delivered. To improve use of the system, organisational measures are also described, such as involving employees through target agreements and bonus pay, which newer wage-framework models such as ERA (Entgeltrahmenabkommen) also provide for. A chapter with checklists, reading suggestions, and web links concludes the book.
Solutions Manual to accompany Statistical Data Analytics: Foundations for Data Mining, Informatics, and Knowledge Discovery A comprehensive introduction to statistical methods for data mining and knowledge discovery. Extensive solutions using actual data (with sample R programming code) are provided, illustrating diverse informatic sources in genomics, biomedicine, ecological remote sensing, astronomy, socioeconomics, marketing, advertising and finance, among many others.
At the intersection of computer science and healthcare, data analytics has emerged as a promising tool for solving problems across many healthcare-related disciplines. Supplying a comprehensive overview of recent healthcare analytics research, Healthcare Data Analytics provides a clear understanding of the analytical techniques currently available to solve healthcare problems. The book details novel techniques for acquiring, handling, retrieving, and making best use of healthcare data. It analyzes recent developments in healthcare computing and discusses emerging technologies that can help improve the health and well-being of patients. Written by prominent researchers and experts working in the healthcare domain, the book sheds light on many of the computational challenges in the field of medical informatics. Each chapter in the book is structured as a "survey-style" article discussing the prominent research issues and the advances made on that research topic. The book is divided into three major categories:
* Healthcare Data Sources and Basic Analytics - details the various healthcare data sources and the analytical techniques used in the processing and analysis of such data
* Advanced Data Analytics for Healthcare - covers advanced analytical methods, including clinical prediction models, temporal pattern mining methods, and visual analytics
* Applications and Practical Systems for Healthcare - covers the applications of data analytics to pervasive healthcare, fraud detection, and drug discovery, along with systems for medical imaging and decision support
Computer scientists are usually not trained in domain-specific medical concepts, whereas medical practitioners and researchers have limited exposure to the data analytics area. The contents of this book will help to bring together these diverse communities by carefully and comprehensively discussing the most relevant contributions from each domain.
It is not lost on commercial organisations that where we live colours how we view ourselves and others. That is why so many now place us into social groups on the basis of the type of postcode in which we live. Social scientists call this practice "commercial sociology". Richard Webber originated Acorn and Mosaic, the two most successful geodemographic classifications. Roger Burrows is a critical interdisciplinary social scientist. Together they chart the origins of this practice and explain the challenges it poses to long-established social scientific beliefs such as:
* the role of the questionnaire in an era of "big data"
* the primacy of theory
* the relationship between qualitative and quantitative modes of understanding
* the relevance of visual clues to lay understanding.
To help readers evaluate the validity of this form of classification, the book assesses how well geodemographic categories track the emergence of new types of residential neighbourhood, and subjects a number of key contemporary issues to geodemographic modes of analysis.
This comprehensive and authoritative guide will teach you the DAX language for business intelligence, data modeling, and analytics. Leading Microsoft BI consultants Marco Russo and Alberto Ferrari help you master everything from table functions through advanced code and model optimization. You'll learn exactly what happens under the hood when you run a DAX expression, how DAX behaves differently from other languages, and how to use this knowledge to write fast, robust code. If you want to leverage all of DAX's remarkable power and flexibility, this no-compromise "deep dive" is exactly what you need.
* Perform powerful data analysis with DAX for Microsoft SQL Server Analysis Services, Excel, and Power BI
* Master core DAX concepts, including calculated columns, measures, and error handling
* Understand evaluation contexts and the CALCULATE and CALCULATETABLE functions
* Perform time-based calculations: YTD, MTD, previous year, working days, and more
* Work with expanded tables, complex functions, and elaborate DAX expressions
* Perform calculations over hierarchies, including parent/child hierarchies
* Use DAX to express diverse and unusual relationships
* Measure DAX query performance with SQL Server Profiler and DAX Studio
This book provides an introduction to spatial analyses of disaggregated (or micro) spatial data. Particular emphasis is put on spatial data compilation and the structuring of the connections between the observations. Descriptive analysis methods for spatial data are presented in order to identify and measure spatial dependency, both global and local. The authors then focus on autoregressive spatial models, which address spatial dependency between the residuals of a basic linear statistical model - a dependency that contravenes one of the basic hypotheses of the ordinary least squares approach. This book is an accessible reference for students looking to work with spatial data who do not yet have an advanced theoretical grounding in statistics.
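A standard descriptive measure of the global spatial dependency mentioned above is Moran's I. The sketch below is illustrative only (the book's own notation and examples may differ): it computes Moran's I for four zones on a line, using a binary contiguity weights matrix.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), where z are
    mean-centred values and W is a spatial weights matrix with
    W[i, j] > 0 when zones i and j are neighbours. Positive values
    indicate clustering of similar values; negative, dispersion."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Four zones arranged on a line 0-1-2-3; neighbours share an edge.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

clustered = np.array([1.0, 1.0, 9.0, 9.0])  # similar values are adjacent
print(round(morans_i(clustered, W), 3))     # -> 0.333 (positive: clustering)
```

When such a statistic reveals significant spatial dependency, OLS residuals are no longer independent, which is exactly the motivation the book gives for moving to autoregressive spatial models.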
This book provides thorough and comprehensive coverage of most of the new and important quantitative methods of data analysis for graduate students and practitioners. In recent years, data analysis methods have exploded alongside advanced computing power, and it is critical to understand such methods to get the most out of data and to extract signal from noise. The book excels in explaining difficult concepts through simple explanations and detailed explanatory illustrations. Most distinctive is the focus on confidence limits for power spectra and their proper interpretation, something rare or completely missing in other books. Likewise, there is a thorough discussion of how to assess uncertainty via use of expectancy, and of the easy-to-apply and easy-to-understand bootstrap method. The book is written so that descriptions of each method are as self-contained as possible. Many examples are presented to clarify interpretations, as are user tips in highlighted boxes.
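The bootstrap idea mentioned above is easy to demonstrate: resample the data with replacement many times, recompute the statistic on each resample, and read confidence limits off the resulting distribution. The sketch below is a minimal percentile-bootstrap illustration with invented data, not a recipe from the book.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic each time,
    and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    boots = [stat(rng.choice(sample, size=len(sample), replace=True))
             for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

data = np.array([4.1, 5.2, 6.3, 5.8, 4.9, 5.5, 6.1, 4.7])
lo, hi = bootstrap_ci(data)
print(f"95% CI for the mean: [{lo:.2f}, {hi:.2f}]")
```

The same function works unchanged for other statistics (median, standard deviation) by passing a different `stat`, which is what makes the method so easy to apply.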
The different facets of the sharing economy offer numerous opportunities for businesses, particularly those that can be distinguished by their creative ideas and their ability to easily connect buyers and sellers of goods and services via digital platforms. From the beginning of this economy's growth, advanced digital technologies have generated billions of bytes of data, constituting what we call Big Data. This book underlines the facilitating role of Big Data analytics, explaining why and how data analysis algorithms can be integrated operationally in order to extract value and to improve the practices of the sharing economy. It examines the reasons why these new techniques are necessary for businesses in this economy and proposes a series of useful applications that illustrate the use of data in the sharing ecosystem.
The St. Gallen model for process-centred customer relationship management is based on practical experience documented in eight case studies of leading companies: holistic customer-retention marketing at Direkt Anlage Bank; the contact centre at Swisscom; campaign and customer management at cooperative banks; customer-centred processes and systems at Credit Suisse, LGT Bank in Liechtenstein, and Neue Zürcher Zeitung; and management of project and customer knowledge at SAP. The overall model describes customer, channel, process, and knowledge management as the essential instruments for a radical orientation toward customer processes. An overview of the eighteen most important implementation methods from the literature, from consulting, and from system vendors supports successful project execution.