Data mining essentially relies on several mathematical disciplines, many of which are presented in this second edition. Topics include partially ordered sets, combinatorics, general topology, metric spaces, linear spaces, and graph theory. To motivate the reader, a significant number of applications of these mathematical tools are included, ranging from association rules and clustering algorithms to classification, data constraints, and logical data analysis. The book is intended as a reference for researchers and graduate students. The current edition is a significant expansion of the first edition. We have strived to make the book self-contained, and only a general knowledge of mathematics is required. More than 700 exercises are included, and they form an integral part of the material. Many exercises are in reality supplemental material, and their solutions are included.
This book focuses on mobile data and its applications in the wireless networks of the future. Several topics form the basis of discussion, from a mobile data mining platform for collecting mobile data, to mobile data processing and mobile feature discovery. Usage of mobile data mining is addressed in the context of three applications: wireless communication optimization, applications of mobile data mining on the cellular networks of the future, and how mobile data shapes future cities. In the discussion of wireless communication optimization, both licensed and unlicensed spectra are exploited. Advanced topics include mobile offloading, resource sharing, user association, network selection, and network coexistence. Mathematical tools, such as traditional convex/non-convex optimization, stochastic processes, and game theory, are used to find objective solutions. Discussion of the applications of mobile data mining to cellular networks of the future includes topics such as green communication networks, 5G networks, and studies of the problems of cell zooming, power control, sleep/wake, and energy saving. The discussion of mobile data mining in the context of smart cities of the future covers applications in urban planning and environmental monitoring, using the technologies of deep learning, neural networks, complex networks, and network-embedded data mining. Mobile Data Mining and Applications will be of interest to wireless operators, companies, and governments, as well as interested end users.
With today's consumers spending more time on their mobiles than on their PCs, new methods of empirical stochastic modeling have emerged that can provide marketers with detailed information about the products, content, and services their customers desire. Data Mining Mobile Devices defines the collection of machine-sensed environmental data pertaining to human social behavior. It explains how the integration of data mining and machine learning can enable the modeling of conversation context, proximity sensing, and geospatial location throughout large communities of mobile users. The book:
- Examines the construction and leveraging of mobile sites
- Describes how to use mobile apps to gather key data about consumers' behavior and preferences
- Discusses mobile mobs, which can be differentiated as distinct marketplaces, including Apple (R), Google (R), Facebook (R), Amazon (R), and Twitter (R)
- Provides detailed coverage of mobile analytics via clustering, text, and classification AI software and techniques
Mobile devices serve as detailed diaries of a person, continuously and intimately broadcasting where, how, when, and what products, services, and content your consumers desire. The future is mobile: data mining starts and stops in consumers' pockets. Describing how to analyze Wi-Fi and GPS data from websites and apps, the book explains how to model mined data through the use of artificial intelligence software. It also discusses the monetization of mobile devices' desires and preferences, which can lead to the triangulated marketing of content, products, or services to billions of consumers, in a relevant, anonymous, and personal manner.
Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the "data-analytic thinking" necessary for extracting useful knowledge and business value from the data you collect. This guide also helps you understand the many data-mining techniques in use today. Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You'll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company's data science projects. You'll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making. The book will help you:
- Understand how data science fits in your organization, and how you can use it for competitive advantage
- Treat data as a business asset that requires careful investment if you're to gain real value
- Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
- Learn general concepts for actually extracting knowledge from data
- Apply data science principles when interviewing data science job candidates
A step-by-step guide to data mining applications in CRM. Following a handbook approach, this book bridges the gap between analytics and their use in everyday marketing, providing guidance on solving real business problems using data mining techniques. The book is organized into three parts. Part one provides a methodological roadmap, covering both the business and the technical aspects. The data mining process is presented in detail, along with specific guidelines for the development of optimized acquisition, cross-, deep- and up-selling, and retention campaigns, as well as effective customer segmentation schemes. In part two, some of the most useful data mining algorithms are explained in a simple and comprehensive way for business users with no technical expertise. Part three is packed with real-world case studies which employ three leading data mining tools: IBM SPSS Modeler, RapidMiner, and Data Mining for Excel. Case studies from industries including banking, retail, and telecommunications are presented in detail so as to serve as templates for developing similar applications. Key features:
- Includes numerous real-world case studies, presented step by step, demystifying the usage of data mining models and clarifying all the methodological issues
- Presents topics with the use of three leading data mining tools: IBM SPSS Modeler, RapidMiner, and Data Mining for Excel
- Is accompanied by a website featuring material from each case study, including datasets and relevant code
Combining data mining and business knowledge, this practical book provides all the necessary information for designing, setting up, executing, and deploying data mining techniques in CRM. Effective CRM using Predictive Analytics will benefit data mining practitioners and consultants, data analysts, statisticians, and CRM officers. The book will also be useful to academics and students interested in applied data mining.
"Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception"" "covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and multimodal fusion. Part II - Scalability Issues presents indexing and distributed methods for scaling up these components for high-dimensional data and Web-scale datasets. The book presents some real-world applications and remarks on future research and development directions. The book is designed for researchers, graduate students, and practitioners in the fields of Computer Vision, Machine Learning, Large-scale Data Mining, Database, and Multimedia Information Retrieval. Dr. Edward Y. Chang was a professor at the Department of Electrical & Computer Engineering, University of California at Santa Barbara, before he joined Google as a research director in 2006. Dr. Chang received his M.S. degree in Computer Science and Ph.D degree in Electrical Engineering, both from Stanford University.
Often considered more of an art than a science, books on clustering have been dominated by learning through example, with techniques chosen almost through trial and error. Even the two most popular, and most related, clustering methods (K-Means for partitioning and Ward's method for hierarchical clustering) have lacked the theoretical underpinning required to establish a firm relationship between the two methods and relevant interpretation aids. Other approaches, such as spectral clustering or consensus clustering, are considered absolutely unrelated to each other or to the two above-mentioned methods. Clustering: A Data Recovery Approach, Second Edition presents a unified modeling approach for the most popular clustering methods: the K-Means and hierarchical techniques, especially for divisive clustering. It significantly expands coverage of the mathematics of data recovery, and includes a new chapter covering more recent popular network clustering approaches (spectral, modularity, and uniform, additive, and consensus), treated within the same data recovery approach. Another added chapter covers cluster validation and interpretation, including recent developments for ontology-driven interpretation of clusters. Altogether, the insertions added a hundred pages to the book, even though fragments unrelated to the main topics were removed. Illustrated using a set of small real-world datasets and more than a hundred examples, the book is oriented towards students, practitioners, and theoreticians of cluster analysis. Covering topics that are beyond the scope of most texts, the author's explanations of data recovery methods, theory-based advice, pre- and post-processing issues, and his clear, practical instructions for real-world data mining make this book ideally suited for teaching, self-study, and professional reference.
"This text should be required reading for everyone in contemporary business." --Peter Woodhull, CEO, Modus21 "The one book that clearly describes and links Big Data concepts to business utility." --Dr. Christopher Starr, PhD "Simply, this is the best Big Data book on the market!" --Sam Rostam, Cascadian IT Group "...one of the most contemporary approaches I've seen to Big Data fundamentals..." --Joshua M. Davis, PhD The Definitive Plain-English Guide to Big Data for Business and Technology Professionals Big Data Fundamentals provides a pragmatic, no-nonsense introduction to Big Data. Best-selling IT author Thomas Erl and his team clearly explain key Big Data concepts, theory and terminology, as well as fundamental technologies and techniques. All coverage is supported with case study examples and numerous simple diagrams. The authors begin by explaining how Big Data can propel an organization forward by solving a spectrum of previously intractable business problems. Next, they demystify key analysis techniques and technologies and show how a Big Data solution environment can be built and integrated to offer competitive advantages. Discovering Big Data's fundamental concepts and what makes it different from previous forms of data analysis and data science Understanding the business motivations and drivers behind Big Data adoption, from operational improvements through innovation Planning strategic, business-driven Big Data initiatives Addressing considerations such as data management, governance, and security Recognizing the 5 "V" characteristics of datasets in Big Data environments: volume, velocity, variety, veracity, and value Clarifying Big Data's relationships with OLTP, OLAP, ETL, data warehouses, and data marts Working with Big Data in structured, unstructured, semi-structured, and metadata formats Increasing value by integrating Big Data resources with corporate performance monitoring Understanding how Big Data leverages distributed and parallel processing Using NoSQL and other technologies to meet Big Data's distinct data processing requirements Leveraging statistical approaches of quantitative and qualitative analysis Applying computational analysis methods, including machine learning
Customer and Business Analytics: Applied Data Mining for Business Decision Making Using R explains and demonstrates, via the accompanying open-source software, how advanced analytical tools can address various business problems. It also gives insight into some of the challenges faced when deploying these tools. Extensively classroom-tested, the text is ideal for students in customer and business analytics or applied data mining as well as professionals in small- to medium-sized organizations. The book offers an intuitive understanding of how different analytics algorithms work. Where necessary, the authors explain the underlying mathematics in an accessible manner. Each technique presented includes a detailed tutorial that enables hands-on experience with real data. The authors also discuss issues often encountered in applied data mining projects and present the CRISP-DM process model as a practical framework for organizing these projects. Showing how data mining can improve the performance of organizations, this book and its R-based software provide the skills and tools needed to successfully develop advanced analytics capabilities.
Leverage the full power of Bayesian analysis for competitive advantage. Bayesian methods can solve problems you can't reliably handle any other way. Building on your existing Excel analytics skills and experience, Microsoft Excel MVP Conrad Carlberg helps you make the most of Excel's Bayesian capabilities and move toward R to do even more. Step by step, with real-world examples, Carlberg shows you how to use Bayesian analytics to solve a wide array of real problems. Carlberg clarifies terminology that often bewilders analysts, and offers sample R code to take advantage of the rethinking package in R and its gateway to Stan. As you incorporate these Bayesian approaches into your analytical toolbox, you'll build a powerful competitive advantage for your organization, and yourself. The book shows you how to:
- Explore key ideas and strategies that underlie Bayesian analysis
- Distinguish prior, likelihood, and posterior distributions, and compare algorithms for driving sampling inputs
- Use grid approximation to solve simple univariate problems, and understand its limits as parameters increase
- Perform complex simulations and regressions with quadratic approximation and Richard McElreath's quap function
- Manage text values as if they were numeric
- Learn today's gold-standard Bayesian sampling technique: Markov Chain Monte Carlo (MCMC)
- Use MCMC to optimize execution speed in high-complexity problems
- Discover when frequentist methods fail and Bayesian methods are essential, and when to use both in tandem
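As a rough illustration of the grid approximation mentioned in the list above (a generic sketch, not an example from the book, which works in Excel and R), the following Python snippet computes the posterior of a binomial proportion on a fixed grid; the observed counts are made-up values for the example.

```python
import numpy as np

# Hypothetical data: 6 successes observed in 9 trials.
successes, trials = 6, 9

# Evaluate the posterior on a grid of candidate proportions.
grid = np.linspace(0, 1, 1000)            # candidate values of p
prior = np.ones_like(grid)                # flat prior over p
likelihood = grid**successes * (1 - grid)**(trials - successes)
unnormalized = prior * likelihood
posterior = unnormalized / unnormalized.sum()

# Posterior mean and a crude 90% credible interval read off the grid.
mean = np.sum(grid * posterior)
cdf = np.cumsum(posterior)
low, high = grid[np.searchsorted(cdf, [0.05, 0.95])]
print(f"posterior mean ~ {mean:.3f}, 90% interval ~ [{low:.3f}, {high:.3f}]")
```

The same idea scales poorly as the number of parameters grows, which is exactly the limitation the book's list point refers to before it moves on to quadratic approximation and MCMC.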
Data structures is a key course for computer science and related majors. This book presents a variety of practical or engineering cases and derives abstract concepts from concrete problems. Besides basic concepts and analysis methods, it introduces basic data types such as the sequential list, the tree, and the graph. The book can be used as an undergraduate textbook, a training textbook, or a self-study textbook for engineers.
Spectral Feature Selection for Data Mining introduces a novel feature selection technique that establishes a general platform for studying existing feature selection algorithms and developing new algorithms for emerging problems in real-world applications. This technique represents a unified framework for supervised, unsupervised, and semisupervised feature selection. The book explores the latest research achievements, sheds light on new research directions, and stimulates readers to make the next creative breakthroughs. It presents the intrinsic ideas behind spectral feature selection, its theoretical foundations, its connections to other algorithms, and its use in handling both large-scale data sets and small sample problems. The authors also cover feature selection and feature extraction, including basic concepts, popular existing algorithms, and applications. A timely introduction to spectral feature selection, this book illustrates the potential of this powerful dimensionality reduction technique in high-dimensional data processing. Readers learn how to use spectral feature selection to solve challenging problems in real-life applications and discover how general feature selection and extraction are connected to spectral feature selection.
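To give a concrete sense of the idea behind spectral feature selection, here is a minimal Python sketch of one widely used spectral criterion, a Laplacian-score-style ranking; it is an illustrative stand-in rather than the unified framework developed in the book, and the toy data and parameter choices are invented.

```python
import numpy as np

def laplacian_score(X, sigma=1.0):
    """Rank features by smoothness on a sample-similarity graph (lower = better)."""
    n, d = X.shape
    ones = np.ones(n)
    # RBF similarity between samples; a common (assumed) choice for the graph weights.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma**2))
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # graph Laplacian
    scores = []
    for j in range(d):
        f = X[:, j]
        # Remove the component along the degree-weighted constant vector.
        f = f - (f @ D @ ones) / (ones @ D @ ones)
        scores.append((f @ L @ f) / (f @ D @ f))
    return np.array(scores)

# Toy data: 50 samples, 5 features; only feature 0 carries cluster structure.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X[:25, 0] += 3.0
print(laplacian_score(X).round(3))        # feature 0 should get the lowest score
```

The sketch captures the shared intuition of spectral methods: features that vary smoothly over the similarity graph of the samples are ranked as more informative about the underlying structure.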
Machine Learning and Knowledge Discovery for Engineering Systems Health Management presents state-of-the-art tools and techniques for automatically detecting, diagnosing, and predicting the effects of adverse events in an engineered system. With contributions from many top authorities on the subject, this volume is the first to bring together the two areas of machine learning and systems health management. Divided into three parts, the book explains how the fundamental algorithms and methods of both physics-based and data-driven approaches effectively address systems health management. The first part of the text describes data-driven methods for anomaly detection, diagnosis, and prognosis of massive data streams and associated performance metrics. It also illustrates the analysis of text reports using novel machine learning approaches that help detect and discriminate between failure modes. The second part focuses on physics-based methods for diagnostics and prognostics, exploring how these methods adapt to observed data. It covers physics-based, data-driven, and hybrid approaches to studying damage propagation and prognostics in composite materials and solid rocket motors. The third part discusses the use of machine learning and physics-based approaches in distributed data centers, aircraft engines, and embedded real-time software systems. Reflecting the interdisciplinary nature of the field, this book shows how various machine learning and knowledge discovery techniques are used in the analysis of complex engineering systems. It emphasizes the importance of these techniques in managing the intricate interactions within and between the systems to maintain a high degree of reliability.
The latest inventions in internet technology influence most business and daily activities. Internet security, internet data management, web search, data grids, cloud computing, and web-based applications play vital roles, especially in business and industry, as more transactions go online and mobile. Issues related to ubiquitous computing are becoming critical. Internet technology and data engineering should reinforce the efficiency and effectiveness of business processes. These technologies should help people make better and more accurate decisions by presenting the necessary information and the possible consequences of their decisions. Intelligent information systems should help us better understand and manage information with ubiquitous data repositories and cloud computing. This book is a compilation of recent research findings in internet technology and data engineering. It provides state-of-the-art accounts of computational algorithms/tools, database management and database technologies, intelligent information systems, data engineering applications, internet security, internet data management, web search, data grids, cloud computing, web-based applications, and other related topics.
This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The difference between rule-based and machine learning-based methods, as well as between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book's closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
Data mining is one of the most rapidly growing research areas in computer science and statistics. In Volume 2 of this three-volume series, we have brought together contributions from some of the most prestigious researchers in theoretical data mining. Each of the chapters is self-contained. Statisticians and applied scientists/engineers will find this volume valuable. Additionally, it provides a sourcebook for graduate students interested in the current direction of research in data mining.
Biomarker discovery is an important area of biomedical research that may lead to significant breakthroughs in disease analysis and targeted therapy. Biomarkers are biological entities whose alterations are measurable and are characteristic of a particular biological condition. Discovering, managing, and interpreting knowledge of new biomarkers are challenging and attractive problems in the emerging field of biomedical informatics. This volume is a collection of state-of-the-art research into the application of data mining to the discovery and analysis of new biomarkers. Presenting new results, models, and algorithms, the included contributions focus on biomarker data integration, information retrieval methods, and statistical machine learning techniques. The volume is intended for students and researchers in bioinformatics, proteomics, and genomics, as well as engineers and applied scientists interested in the interdisciplinary application of data mining techniques.
This volume comprises the 6th IFIP International Conference on Intelligent Information Processing. As the world proceeds quickly into the Information Age, it encounters both successes and challenges, and it is well recognized nowadays that intelligent information processing provides the key to the Information Age and to mastering many of these challenges. Intelligent information processing supports the most advanced productive tools that are said to be able to change human life and the world itself. However, the path is never a straight one and every new technology brings with it a spate of new research problems to be tackled by researchers; as a result we are not running out of topics; rather, the demand is ever increasing. This conference provides a forum for engineers and scientists in academia and industry to present their latest research findings in all aspects of intelligent information processing. We received more than 50 papers, of which 35 are included in this program as regular papers and 4 as short papers. All papers submitted were reviewed by two referees. We are grateful for the dedicated work of both the authors and the referees, and we hope these proceedings will continue to bear fruit over the years to come. A conference such as this cannot succeed without help from the many individuals who contributed their valuable time and expertise.
The book provides an overview of the state of the art in map construction algorithms, which use tracking data in the form of trajectories to generate vector maps. The most common trajectory type is the GPS-based trajectory. It introduces three emerging algorithmic categories, outlines their general algorithmic ideas, and discusses three representative algorithms in greater detail. To quantitatively evaluate map construction algorithms, the authors include specific datasets and evaluation measures. The datasets, source code of map construction algorithms, and evaluation measures are publicly available on http://www.mapconstruction.org. The web site serves as a repository for map construction data and algorithms, and researchers can contribute by uploading their own code and benchmark data. Map Construction Algorithms is an excellent resource for professionals working in computational geometry, spatial databases, and GIS. Advanced-level students studying computer science, geography, and mathematics will also find this book a useful tool.
Data mining consists of attempting to discover novel and useful knowledge from data, trying to find patterns among datasets that can help in intelligent decision making. However, reports of real-world case studies are generally not detailed in the literature, because they are usually based on proprietary datasets, making it impossible to publish the results. This situation makes it hard to evaluate, in a precise way, the degree of effectiveness of data mining techniques in real-world applications. Researchers in this field, on the other hand, usually exploit public-domain datasets. This volume offers a wide spectrum of research work developed for data mining in real-world applications. In the following, we give a brief introduction to the chapters included in this book.
This book covers deep-learning-based approaches for sentiment analysis, a relatively new, but fast-growing research area, which has significantly changed in the past few years. The book presents a collection of state-of-the-art approaches, focusing on the best-performing, cutting-edge solutions for the most common and difficult challenges faced in sentiment analysis research. Providing detailed explanations of the methodologies, the book is a valuable resource for researchers as well as newcomers to the field.
Most life science researchers will agree that biology is not a truly theoretical branch of science. The hype around computational biology and bioinformatics beginning in the nineties of the 20th century was to be short-lived (1, 2). When almost no value of practical importance, such as the optimal dose of a drug or the three-dimensional structure of an orphan protein, can be computed from fundamental principles, it is still more straightforward to determine them experimentally. Thus, experiments and observations do generate the overwhelming part of insights into biology and medicine. The extrapolation depth and the prediction power of the theoretical argument in the life sciences still have a long way to go. Yet, two trends have qualitatively changed the way biological research is done today. The number of researchers has dramatically grown and they, armed with the same protocols, have produced lots of similarly structured data. Finally, high-throughput technologies such as DNA sequencing or array-based expression profiling have been around for just a decade. Nevertheless, with their high level of uniform data generation, they reach the threshold of totally describing a living organism at the biomolecular level for the first time in human history. Whereas getting exact data about living systems and the sophistication of experimental procedures have primarily absorbed the minds of researchers previously, the weight increasingly shifts to the problem of interpreting accumulated data in terms of biological function and biomolecular mechanisms.
Data engineering has grown rapidly in the past decade, leaving many software engineers, data scientists, and analysts looking for a comprehensive view of this practice. With this practical book, you will learn how to plan and build systems to serve the needs of your organization and customers by evaluating the best technologies available in the framework of the data engineering lifecycle. Authors Joe Reis and Matt Housley walk you through the data engineering lifecycle and show you how to stitch together a variety of cloud technologies to serve the needs of downstream data consumers. You will understand how to apply the concepts of data generation, ingestion, orchestration, transformation, storage, governance, and deployment that are critical in any data environment regardless of the underlying technology. This book will help you:
- Assess data engineering problems using an end-to-end data framework of best practices
- Cut through marketing hype when choosing data technologies, architecture, and processes
- Use the data engineering lifecycle to design and build a robust architecture
- Incorporate data governance and security across the data engineering lifecycle
This textbook offers a comprehensive introduction to machine learning techniques and algorithms. This Third Edition covers newer approaches that have become highly topical, including deep learning and auto-encoding, introductory information about temporal learning and hidden Markov models, and a much more detailed treatment of reinforcement learning. The book is written in an easy-to-understand manner, with many examples and pictures, and with a lot of practical advice and discussions of simple applications. The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, rule-induction programs, artificial neural networks, support vector machines, boosting algorithms, unsupervised learning (including Kohonen networks and auto-encoding), deep learning, reinforcement learning, temporal learning (including long short-term memory), hidden Markov models, and the genetic algorithm. Special attention is devoted to performance evaluation, statistical assessment, and many practical issues ranging from feature selection and feature construction to bias, context, multi-label domains, and the problem of imbalanced classes.