Welcome to Loot.co.za!
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
Web Intelligence is a new direction for scientific research and development that explores the fundamental roles as well as practical impacts of artificial intelligence and advanced information technology for the next generation of Web-empowered systems, services, and environments. Web Intelligence is regarded as the key research field for the development of the Wisdom Web (including the Semantic Web). As the first book devoted to Web Intelligence, this coherently written multi-author monograph provides a thorough introduction and a systematic overview of this new field. It presents both the current state of research and development as well as application aspects. The book will be a valuable and lasting source of reference for researchers and developers interested in Web Intelligence. Students and developers will additionally appreciate the numerous illustrations and examples.
The increasing availability of data in our current, information overloaded society has led to the need for valid tools for its modelling and analysis. Data mining and applied statistical methods are the appropriate tools to extract knowledge from such data. This book provides an accessible introduction to data mining methods in a consistent and application oriented statistical framework, using case studies drawn from real industry projects and highlighting the use of data mining methods in a variety of business applications. It:
- Introduces data mining methods and applications.
- Covers classical and Bayesian multivariate statistical methodology as well as machine learning and computational data mining methods.
- Includes many recent developments such as association and sequence rules, graphical Markov models, lifetime value modelling, credit risk, operational risk and web mining.
- Features detailed case studies based on applied projects within industry.
- Incorporates discussion of data mining software, with case studies analysed using R.
- Is accessible to anyone with a basic knowledge of statistics or data analysis.
- Includes an extensive bibliography and pointers to further reading within the text.
"Applied Data Mining for Business and Industry, 2nd edition" is aimed at advanced undergraduate and graduate students of data mining, applied statistics, database management, computer science and economics. The case studies will provide guidance to professionals working in industry on projects involving large volumes of data, such as customer relationship management, web design, risk management, marketing, economics and finance.
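As a concrete taste of one technique the blurb names, association rules, here is a minimal sketch of computing the support and confidence of a rule over a set of market-basket transactions. The transaction data is invented for illustration and is not from the book.

```python
# Hypothetical toy transactions (each one a set of purchased items).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, transactions):
    # Fraction of transactions that contain every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # Estimated P(consequent | antecedent): support of the combined
    # itemset divided by support of the antecedent alone.
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"bread"}, transactions))                 # 0.75
print(confidence({"bread"}, {"milk"}, transactions))    # 2/3
```

A rule such as bread -> milk would typically be reported only if both measures exceed user-chosen thresholds.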
This book offers a unique review of how astronomical information handling (in the broad sense) evolved in the course of the 20th century, and especially during its second half. This volume is a natural complement to the book Information handling in astronomy published in the same series. The scope of these two volumes includes not only dealing with professional astronomical data from the collecting instruments (ground-based and space-borne) to the users/researchers, but also publishing, education and public outreach. In short, the information flow in astronomy is thus illustrated from sources (cosmic objects) to end (mankind's knowledge). The experts contributing to this book have done their best to write in a way understandable to readers not necessarily hyperspecialized in astronomy while providing specific detailed information, as well as plenty of pointers and bibliographic elements. Especially enlightening are the 'lessons learned' sections.
Data Streams: Models and Algorithms primarily discusses issues related to the mining aspects of streams. Recent progress in hardware technology makes it possible for organizations to store and record large streams of transactional data. For example, even simple daily transactions, such as using the credit card or phone, result in automated data storage, which brings us to a fairly new topic called data streams. This volume covers mining aspects of data streams in a comprehensive style, in which each contributed chapter contains a survey on the topic, the key ideas in the field from that particular topic, and future research directions. Data Streams: Models and Algorithms is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for graduate-level students in computer science.
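A defining constraint of stream mining is that the data cannot all be stored. One classic illustration of working under that constraint (a standard technique, not taken from this book's chapters) is reservoir sampling, which maintains a uniform random sample of fixed size from a stream of unknown length:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using only O(k) memory (Algorithm R)."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # j < k with probability k/(i+1)
            if j < k:
                reservoir[j] = item         # replace a random slot
    return reservoir

sample = reservoir_sample(range(10_000), 5, random.Random(42))
print(sample)   # five items drawn uniformly from the whole stream
```

Each arriving item is seen exactly once, matching the one-pass access model that stream algorithms assume.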
Understanding sequence data, and the ability to utilize this hidden knowledge, creates a significant impact on many aspects of our society. Examples of sequence data include DNA, protein, customer purchase history, web surfing history, and more. Sequence Data Mining provides balanced coverage of the existing results on sequence data mining, as well as pattern types and associated pattern mining methods. While there are several books on data mining and sequence data analysis, currently there are no books that balance both of these topics. This professional volume fills in the gap, allowing readers to access state-of-the-art results in one place. Sequence Data Mining is designed for professionals working in bioinformatics, genomics, web services, and financial data analysis. This book is also suitable for advanced-level students in computer science and bioengineering. Foreword by Professor Jiawei Han, University of Illinois at Urbana-Champaign.
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
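Of the models the blurb lists, the vector space model is easy to show in miniature: documents become term-count vectors and their similarity is the cosine of the angle between them. This is a generic sketch of that idea, not code from the book:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity of two documents under the vector space
    model, using raw term counts (a minimal sketch; real systems
    would add tokenization, stop-word removal, and tf-idf weights)."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("mining of text data", "text mining"))
```

Identical documents score 1.0 and documents sharing no terms score 0.0, which is what makes the measure useful for clustering and categorizing document collections.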
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
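The message-passing style of the distributed-memory systems described above can be sketched in miniature: workers communicate only through explicit send/receive channels, never through a shared variable. Here threads and queues stand in for processors and network links, purely to illustrate the programming model (real distributed-memory code would use something like MPI):

```python
import threading
import queue

def worker(inbox, outbox):
    chunk = inbox.get()          # receive a message (a chunk of work)
    outbox.put(sum(chunk))       # send back a partial result

# One private inbox per "processor"; a shared channel for results.
inboxes = [queue.Queue() for _ in range(4)]
results = queue.Queue()
threads = [threading.Thread(target=worker, args=(ib, results)) for ib in inboxes]
for t in threads:
    t.start()

data = list(range(100))
for i, ib in enumerate(inboxes):         # scatter: one chunk per worker
    ib.put(data[i * 25:(i + 1) * 25])
for t in threads:
    t.join()

total = sum(results.get() for _ in range(4))   # gather the partial sums
print(total)   # 4950
```

All coordination happens through the queues, mirroring how distributed-memory processors cooperate without a single address space.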
This book thoroughly covers remote sensing visualization and analysis techniques based on computational imaging and vision in Earth science. Remote sensing is a significant information source for monitoring and mapping natural and man-made land cover, enabled by the improving sensor resolutions of different Earth observation platforms. The book includes related topics for the different systems, models, and approaches used in the visualization of remote sensing images. It offers flexible and sophisticated solutions for removing uncertainty from satellite data. It introduces real-time big data analytics for deriving intelligent systems in enterprise Earth science applications. Furthermore, the book integrates statistical concepts with computer-based geographic information systems (GIS). It focuses on image processing techniques for observing data together with the uncertainty introduced by the spectral, spatial, and positional accuracy of GPS data. The book addresses several advanced improvement models to guide engineers in developing different remote sensing visualization and analysis schemes. Highlights include advanced improvement models for supervised/unsupervised classification algorithms, support vector machines, artificial neural networks, fuzzy logic, decision-making algorithms, and time series modelling and forecasting. This book guides engineers, designers, and researchers in exploiting the intrinsic design of remote sensing systems. It gathers remarkable material from an international panel of experts to guide readers through the development of Earth big data analytics and its challenges.
The most important use of computing in the future will be in the context of the global "digital convergence", where everything becomes digital and everything is inter-networked. Applications will be dominated by storage, search, retrieval, analysis, exchange and updating of information in a wide variety of forms. Heavy demands will be placed on systems by many simultaneous requests. And, fundamentally, all this must be delivered at much higher levels of dependability, integrity and security. Increasingly, large parallel computing systems and networks are providing unique challenges to industry and academia in dependable computing, especially because of the higher failure rates intrinsic to these systems. The challenge in the last part of this decade is to build systems that are both inexpensive and highly available. A machine cluster built of commodity hardware parts, with each node running an OS instance and a set of applications extended to be fault resilient, can satisfy the new stringent high-availability requirements. The focus of this book is to present recent techniques and methods for implementing fault-tolerant parallel and distributed computing systems. Section I, Fault-Tolerant Protocols, considers basic techniques for achieving fault tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication, static total causal ordering protocols, and a fail-aware datagram service that supports communications by time.
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping, for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken, and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge about current issues related to cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and the avant-garde approach that multimedia offers.
The authors focus on the mathematical models and methods that support most data mining applications and solution techniques.
This book provides an overview of the theory and application of linear and nonlinear mixed-effects models in the analysis of grouped data, such as longitudinal data, repeated measures, and multilevel data. Over 170 figures are included in the book.
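The grouped structure that mixed-effects models exploit can be shown with a toy example: repeated measures cluster around subject-specific intercepts, which in turn scatter around a population mean. The data below is invented, and this is only the descriptive decomposition behind a random-intercept model, not a fitted mixed-effects model:

```python
from statistics import mean

# Hypothetical repeated measures: three observations per subject.
measurements = {
    "s1": [4.9, 5.1, 5.0],
    "s2": [6.8, 7.2, 7.0],
    "s3": [2.9, 3.1, 3.0],
}

# Population-level mean across all observations.
grand_mean = mean(y for ys in measurements.values() for y in ys)

# Each subject's deviation from the population mean -- the quantity a
# random-intercept model treats as a (shrunken) random effect.
subject_effects = {s: mean(ys) - grand_mean for s, ys in measurements.items()}

print(grand_mean)        # ~5.0
print(subject_effects)   # s1 near 0, s2 near +2, s3 near -2
```

A real analysis would fit such a model by maximum likelihood (e.g. with nlme in R, the software associated with this book), which additionally shrinks the subject effects toward zero according to the variance components.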
New state-of-the-art techniques for analyzing and managing Web data have emerged due to the need for dealing with huge amounts of data which are circulated on the Web. "Web Data Management Practices: Emerging Techniques and Technologies" provides a thorough understanding of major issues, current practices, and the main ideas in the field of Web data management, helping readers to identify current and emerging issues, as well as future trends in this area. It presents a complete overview of important aspects related to Web data management practices, such as Web mining and Web data clustering, and covers an extensive range of topics, including related issues about Web mining, Web caching and replication, Web services, and the XML standard.
Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run-time so as to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
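The simplest case the book starts from, a two-dimensional diamond-shaped DAG, already shows how data dependencies bound speedup: an n-by-n grid of unit tasks, where each task depends on its upper and left neighbours, has a longest dependency chain of 2n - 1 tasks. This generic sketch (not the book's own formulation) computes the resulting upper bound on speedup:

```python
def diamond_speedup(n):
    """Upper bound on speedup for an n x n diamond DAG of unit tasks:
    total work divided by the critical path length. Communication
    costs, which the book analyzes in detail, are ignored here."""
    work = n * n                 # total number of unit tasks
    critical_path = 2 * n - 1    # longest chain of dependent tasks
    return work / critical_path

print(diamond_speedup(10))   # 100 / 19, about 5.26
```

Even with unlimited processors, the dependency structure caps speedup at roughly n/2, which is why the quantification of such costs matters for choosing data distributions.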
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
When I was asked by the editors of this book to write a foreword, I was seized by panic. Obviously, I am neither an expert in Knowledge Representation in Fuzzy Databases, nor could I have been unaware beforehand that the book's contributors would be some of the most outstanding researchers in the field. However, Amparo Vila's gentle insistence gradually broke down my initial resistance, and panic then gave way to worry. Which paving stones did I have at my disposal for making an entrance to the book? After thinking about it for some time, I concluded that it would be pretentious on my part to focus on the subjects which are dealt with directly in the contributions presented, and that it would instead be better to confine myself to making some general reflections on the representation of imprecise information using fuzzy sets; reflections which have been suggested to me by some words in the following articles, such as: graded notions, fuzzy objects, uncertainty, fuzzy implications, fuzzy inference, empty intersection, etc.
This book explains how to perform data de-noising, at large scale, with a satisfactory level of accuracy. Three main issues are considered. Firstly, how to eliminate error propagation from one stage to the next while developing a filtered model. Secondly, how to maintain the positional importance of data whilst purifying it. Finally, preservation of memory in the data is crucial to extracting smart data from noisy big data. If, after the application of any form of smoothing or filtering, the memory of the corresponding data changes heavily, then the final data may lose important information, which may lead to wrong or erroneous conclusions. Yet even when anticipating some loss of information due to smoothing or filtering, the denoising step cannot be avoided, since any analysis of big data in the presence of noise can be misleading. The entire process therefore demands very careful execution, with efficient and smart models, in order to deal with it effectively.
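The trade-off described above, noise removal versus loss of information, is visible even in the simplest smoother. This illustrative sketch (invented data, not from the book) applies a moving-average filter; note how the spike at index 4 is flattened, which is desirable if it is noise and harmful if it is signal the data's "memory" should preserve:

```python
def moving_average(xs, window=3):
    """Smooth a sequence with a centered moving average, shrinking
    the window at the edges so output length equals input length."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 5.0, 1.0, 0.9]   # 5.0: noise, or real event?
print(moving_average(noisy))
```

Wider windows suppress more noise but erase more structure, which is exactly why the book argues that filter choice must be made with the downstream analysis in mind.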
This book and software package provide a complement to the traditional data analysis tools already widely available. It presents an introduction to the analysis of data using neural networks. Neural network functions discussed include multilayer feed-forward networks using error back-propagation, genetic algorithm-neural network hybrids, generalized regression neural networks, learning vector quantization networks, and self-organizing feature maps. In an easy-to-use, Windows-based environment it offers a wide range of data analytic tools which are not usually found together: these include genetic algorithms and probabilistic networks, as well as a number of related techniques that support these - notably, fractal dimension analysis, coherence analysis, and mutual information analysis. The text presents a number of worked examples and case studies using Simulnet, the software package which comes with the book. Readers are assumed to have a basic understanding of computers and elementary mathematics. With this background, readers will find themselves quickly conducting sophisticated hands-on analyses of data sets.
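The error back-propagation mentioned above reduces, in the smallest possible case, to gradient descent on a single sigmoid neuron. This toy sketch (invented data and learning rate, far simpler than the multilayer networks Simulnet supports) trains one neuron to compute logical OR:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data for OR: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 1.0   # weights, bias, learning rate

for _ in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - t                  # derivative of squared error w.r.t. y
        grad = err * y * (1 - y)     # chain rule through the sigmoid
        w[0] -= lr * grad * x1       # propagate the error back to each weight
        w[1] -= lr * grad * x2
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)   # [0, 1, 1, 1]
```

A multilayer network repeats the same chain-rule step backwards through each layer, which is all "error back-propagation" means.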
Data stewards in any organization are the backbone of a successful data governance implementation because they do the work to make data trusted, dependable, and high quality. Since the publication of the first edition, there have been critical new developments in the field, such as integrating Data Stewardship into project management, handling Data Stewardship in large international companies, handling "big data" and Data Lakes, and a pivot in the overall thinking around the best way to align data stewardship to the data-moving from business/organizational function to data domain. Furthermore, the role of process in data stewardship is now recognized as key and needed to be covered. Data Stewardship, Second Edition provides clear and concise practical advice on implementing and running data stewardship, including guidelines on how to organize based on organizational/company structure, business functions, and data ownership. The book shows data managers how to gain support for a stewardship effort, maintain that support over the long-term, and measure the success of the data stewardship effort. It includes detailed lists of responsibilities for each type of data steward and strategies to help the Data Governance Program Office work effectively with the data stewards.
Handbook of Economic Expectations discusses the state-of-the-art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications using data on subjective expectations that allows us to make progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze impacts on behavior and decision-making under uncertainty.