Hyperspectral Image Fusion is the first text dedicated to fusion techniques for hyperspectral data, which consist of a very large number of images. This monograph presents recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each based on a different framework and with its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands, exploiting the spatial correlation between successive bands of the hyperspectral data, to speed up the fusion process. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques. A dedicated chapter describes various fusion performance measures applicable to hyperspectral image fusion. The monograph also presents a notion of consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for the fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academics and practitioners working on hyperspectral image fusion, as well as on generic image fusion.
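As a rough illustration of the band selection and pixel-based fusion described above (a minimal sketch, not the authors' algorithms; the correlation threshold and the plain averaging step are assumptions made purely for illustration), successive bands of a hyperspectral cube are often highly correlated, so a band can be retained only when its correlation with the last retained band drops below a threshold, and the retained bands can then be fused pixel-wise:

    import numpy as np

    def select_bands(cube, corr_threshold=0.95):
        """Keep a band only when its correlation with the last kept band
        falls below corr_threshold. cube has shape (rows, cols, bands)."""
        kept = [0]
        for b in range(1, cube.shape[2]):
            corr = np.corrcoef(cube[:, :, kept[-1]].ravel(),
                               cube[:, :, b].ravel())[0, 1]
            if corr < corr_threshold:
                kept.append(b)
        return kept

    def fuse_bands(cube, bands):
        """Pixel-wise fusion of the selected bands; plain averaging stands in
        for the weighting schemes developed in the monograph."""
        return cube[:, :, bands].mean(axis=2)

    # Synthetic 100x100 cube with 50 spectrally smooth (hence correlated) bands.
    rng = np.random.default_rng(0)
    cube = np.cumsum(rng.random((100, 100, 50)), axis=2)
    selected = select_bands(cube)
    fused = fuse_bands(cube, selected)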
This book constitutes the refereed proceedings of the 17th International Conference on Engineering Applications of Neural Networks, EANN 2016, held in Aberdeen, UK, in September 2016. The 22 revised full papers and three short papers presented together with two tutorials were carefully reviewed and selected from 41 submissions. The papers are organized in topical sections on active learning and dynamic environments; semi-supervised modeling; classification applications; clustering applications; cyber-physical systems and cloud applications; time-series prediction; and learning algorithms.
This book constitutes the thoroughly refereed post-conference proceedings of the International Conference on Industrial IoT Technologies and Applications, IoT 2016, held in Guangzhou, China, in March 2016. The volume contains 26 papers, carefully reviewed and selected from 55 submissions, focusing on topics such as big data, cloud computing, and the Internet of Things (IoT).
This book explores how PPPM, clinical practice, and basic research could best be served by information technology (IT). A use case was developed for hepatocellular carcinoma (HCC). The subject was approached through four interrelated tasks: (1) reviewing clinical practices relating to HCC; (2) proposing an IT system for HCC, including clinical decision support and research needs; (3) determining how a clinical liver cancer center can contribute; and (4) examining the enhancements and impact that the first three tasks will have on the management of HCC. An IT System for Personalized Medicine (ITS-PM) for HCC will provide the means to identify and determine the relative value of a wide range of variables, including the clinical assessment of the patient (functional status, liver function, degree of cirrhosis, and comorbidities); tumor biology at the molecular, genetic and anatomic level; tumor burden and individual patient response; and medical and operative treatments and their outcomes.
This book examines the field of parallel database management systems and illustrates the great variety of solutions based on a shared-storage or a shared-nothing architecture. Constantly dropping memory prices and the desire to operate with low-latency responses on large sets of data paved the way for main memory-based parallel database management systems. However, this area is currently dominated by the shared-nothing approach, which preserves the in-memory performance advantage by processing data locally on each server. The main argument this book makes is that such a unilateral development will cease due to the combination of three trends: a) today's network technology features remote direct memory access (RDMA), narrowing the performance gap between accessing local main memory and the main memory of a remote server to a single order of magnitude or even less; b) modern storage systems scale gracefully, are elastic, and provide high availability; c) a modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The book demonstrates that the advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This two-volume set (CCIS 623 and 634) constitutes the refereed proceedings of the Second International Conference of Young Computer Scientists, Engineers and Educators, ICYCSEE 2016, held in Harbin, China, in August 2016. The 91 revised full papers presented were carefully reviewed and selected from 338 submissions. The papers are organized in topical sections on the Research Track (Part I) and the Education, Industry, and Demo Tracks (Part II), and cover a wide range of topics related to social computing, social media, social network analysis, social modeling, social recommendation, machine learning, and data mining.
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines, comparing their commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the KRA model, is presented to capture them. This model makes the notion of abstraction easily applicable through a set of abstraction operators and abstraction patterns that are reusable across different domains and applications. The impact of abstraction in Artificial Intelligence, Complex Systems and Machine Learning forms the core of the book. A general framework, based on the KRA model, is presented, and its pragmatic power is illustrated with three case studies: model-based diagnosis, cartographic generalization, and learning hierarchical hidden Markov models.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. The feasibility of these systems relies essentially on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This volume, the 26th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, focuses on Data Warehousing and Knowledge Discovery from Big Data, and contains extended and revised versions of four papers selected as the best papers from the 16th International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2014), held in Munich, Germany, during September 1-5, 2014. The papers focus on data cube computation, the construction and analysis of a data warehouse in the context of cancer epidemiology, pattern mining algorithms, and frequent item-set border approximation.
This book constitutes the thoroughly refereed proceedings of the 9th Russian Summer School on Information Retrieval, RuSSIR 2015, held in Saint Petersburg, Russia, in August 2015. The volume includes 5 tutorial papers, summarizing lectures given at the event, and 6 revised papers from the school participants. The papers focus on various aspects of information retrieval.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences (MCO 2015), held on May 11-13, 2015 at Lorraine University, France. This first part of the two-volume set includes articles devoted to combinatorial optimization and applications; DC programming and DCA: thirty years of developments; dynamic optimization; modelling and optimization in financial engineering; multiobjective programming; numerical optimization; spline approximation and optimization; and variational principles and applications.
Michael Nofer examines whether and to what extent Social Media can be used to predict stock returns. Market-relevant information is available on various platforms on the Internet, which largely consist of user-generated content. For instance, emotions can be extracted in order to identify investors' risk appetite and, in turn, their willingness to invest in stocks. Discussion forums also provide an opportunity to identify opinions on certain companies. Taking Social Media platforms as examples, the author examines the forecasting quality of user-generated content on the Internet.
This book addresses the need for a unified framework describing how soft computing and machine learning techniques can be judiciously formulated and used in building efficient pattern recognition models. The text reviews both established and cutting-edge research, providing a careful balance of theory, algorithms, and applications, with particular emphasis on applications in computational biology and bioinformatics. Features: integrates different soft computing and machine learning methodologies with pattern recognition tasks; discusses in detail the integration of different techniques for handling uncertainties in decision-making and for efficiently mining large biological datasets; places particular emphasis on real-life applications, such as microarray expression datasets and magnetic resonance images; includes numerous examples and experimental results to support the theoretical concepts described; concludes each chapter with directions for future research and a comprehensive bibliography.
This book constitutes the refereed proceedings of the 7th International Conference on Knowledge Engineering and the Semantic Web, KESW 2016, held in Prague, Czech Republic, in September 2016. The 17 revised full papers presented together with 9 short papers were carefully reviewed and selected from 53 submissions. The papers are organized in topical sections on ontologies; information and knowledge extraction; data management; and applications.
Using various examples ranging from the geosciences to the environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semi-heuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty, as a combination of random and systematic components, is only an approximation; presents a more adequate three-component model with an additional periodic error component; and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making) and explains how the computational complexity of uncertainty processing can be reduced. It also shows how to take into account that, in real life, information about uncertainty is often only partially known and, using several practical examples, explains how to extract the missing information about uncertainty from the available data.
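To make the three-component error model mentioned above concrete (a minimal sketch only; the Gaussian random term, the bounded systematic offset, the sinusoidal periodic term with unknown phase, and the Monte Carlo propagation are all illustrative assumptions, not the book's algorithms), uncertainty can be propagated through a measurement function by sampling each component:

    import numpy as np

    def propagate(f, x_nominal, sigma_rand, delta_syst, amp_per, n=100_000, seed=0):
        """Monte Carlo propagation of a three-component error model:
        random (zero-mean Gaussian), systematic (unknown offset within bounds),
        and periodic (sinusoid with unknown phase)."""
        rng = np.random.default_rng(seed)
        rand = rng.normal(0.0, sigma_rand, n)
        syst = rng.uniform(-delta_syst, delta_syst, n)
        periodic = amp_per * np.sin(rng.uniform(0.0, 2 * np.pi, n))
        y = f(x_nominal + rand + syst + periodic)
        return y.mean(), y.std()

    # Example: propagate a nominal reading of 10.0 through f(x) = x**2.
    mean, spread = propagate(lambda x: x ** 2, 10.0,
                             sigma_rand=0.1, delta_syst=0.05, amp_per=0.02)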
This book contains the refereed proceedings of the 19th International Conference on Business Information Systems, BIS 2016, held in Leipzig, Germany, in July 2016. The BIS conference series follows trends in academia and business research; thus the theme of the BIS 2016 conference was "Smart Business Ecosystems". This recognizes that no business is an island: competition increasingly takes place between business networks rather than between individual companies. A variety of aspects are relevant for designing and understanding smart business ecosystems, ranging from new business models, value chains and processes to all aspects of analytical, social and enterprise applications and platforms, as well as cyber-physical infrastructures. The 33 full papers and 1 short paper were carefully reviewed and selected from 87 submissions. They are grouped into sections on ecosystems; big and smart data; smart infrastructures; process management; business and enterprise modeling; service science; social media; and applications.
New technologies have enabled us to collect massive amounts of data in many fields. However, our pace of discovering useful information and knowledge from these data falls far behind our pace of collecting the data. Data Mining: Theories, Algorithms, and Examples introduces and explains a comprehensive set of data mining algorithms from various data mining fields. The book reviews the theoretical rationales and procedural details of a wide range of data mining algorithms, including those commonly found in the data mining literature and those not fully covered in most of the existing literature due to their considerable difficulty, using small data examples to explain and walk through the algorithms. The book presents a list of software packages that support the data mining algorithms, applications of the algorithms with references, and exercises, along with a solutions manual and PowerPoint lecture slides. The author takes a practical approach to data mining algorithms so that the data patterns produced can be fully interpreted. This approach enables students to understand both the theoretical and operational aspects of the algorithms and to execute them manually for a thorough understanding of the data patterns they produce.
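As an example of the kind of small-data walkthrough described above (a generic illustration, not an example taken from the book), the entropy-based split selection used in decision-tree induction can be executed by hand and checked with a few lines of code:

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def information_gain(records, labels, attribute):
        """Reduction in entropy obtained by splitting the records on one attribute."""
        remainder = 0.0
        for value in set(r[attribute] for r in records):
            subset = [l for r, l in zip(records, labels) if r[attribute] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return entropy(labels) - remainder

    # Tiny hand-checkable example: 4 records, 2 attributes, binary class.
    records = [{"outlook": "sunny", "windy": False},
               {"outlook": "sunny", "windy": True},
               {"outlook": "rain", "windy": False},
               {"outlook": "rain", "windy": True}]
    labels = ["no", "no", "yes", "yes"]
    print(information_gain(records, labels, "outlook"))  # 1.0
    print(information_gain(records, labels, "windy"))    # 0.0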
This book describes the fundamentals of data acquisition systems and how they enable users to sample signals that measure real physical conditions and to convert the resulting samples into digital, numeric values that can be analyzed by a computer. The author takes a problem-solving approach to data acquisition, providing the tools engineers need to apply the concepts introduced. Coverage includes sensors that convert physical parameters to electrical signals; signal conditioning circuitry that brings sensor signals into a form suitable for conversion; and analog-to-digital converters, which convert the conditioned signals to digital values. Readers will benefit from the hands-on approach, culminating in data acquisition projects, including the hardware and software needed to build data acquisition systems.
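A minimal sketch of the sensor-to-digital-value chain described above (the 3.3 V reference, 12-bit resolution, and linear temperature-sensor scale are assumed values for illustration, not taken from the book): the analog-to-digital converter reports an integer count, which is scaled back to a voltage and then to the physical quantity the sensor measures.

    def counts_to_voltage(counts, v_ref=3.3, resolution_bits=12):
        """Convert a raw ADC count to the conditioned signal voltage."""
        return counts * v_ref / (2 ** resolution_bits - 1)

    def voltage_to_temperature(voltage, scale_mv_per_degc=10.0, offset_mv=500.0):
        """Assumed linear temperature sensor: 500 mV at 0 degC, 10 mV per degC."""
        return (voltage * 1000.0 - offset_mv) / scale_mv_per_degc

    raw = 1861                     # example 12-bit ADC reading
    v = counts_to_voltage(raw)     # about 1.50 V
    t = voltage_to_temperature(v)  # about 100 degC
    print(f"{v:.3f} V -> {t:.1f} degC")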
The present book outlines a new approach to possibilistic clustering in which the sought clustering structure of the set of objects is based directly on the formal definition of a fuzzy cluster, and the possibilistic memberships are determined directly from the values of the pairwise similarity of objects. The proposed approach can be used for solving different classification problems. Some techniques that might be useful for this purpose are outlined, including a methodology for constructing a set of labeled objects for a semi-supervised clustering algorithm, a methodology for reducing the dimensionality of the analyzed attribute space, and a method for processing asymmetric data. Moreover, a technique for constructing a subset of the most appropriate alternatives for a set of weak fuzzy preference relations, which are defined on a universe of alternatives, is described in detail, and a method for rapidly prototyping Mamdani fuzzy inference systems is introduced. This book addresses engineers, scientists, professors, students and post-graduate students who are interested in and work with fuzzy clustering and its applications.
In education today, technology alone doesn't always lead to immediate success for students or institutions. In order to gauge the efficacy of educational technology, we need ways to measure the efficacy of educational practices in their own right. Through a better understanding of how learning takes place, we may work toward establishing best practices for students, educators, and institutions. These goals can be accomplished with learning analytics. Learning Analytics: From Research to Practice updates this emerging field with the latest in theories, findings, strategies, and tools from across education and technological disciplines. Guiding readers through preparation, design, and examples of implementation, this pioneering reference clarifies LA methods as not mere data collection but sophisticated, systems-based analysis with practical applicability inside the classroom and in the larger world. Case studies illustrate applications of LA throughout academic settings (e.g., intervention, advisement, technology design) and their resulting impact on pedagogy and learning. The goal is to bring greater efficiency and deeper engagement to individual students, learning communities, and educators, as chapters show diverse uses of learning analytics to: enhance student and faculty performance; improve student understanding of course material; assess and attend to the needs of struggling learners; improve accuracy in grading; allow instructors to assess and develop their own strengths; and encourage more efficient use of resources at the institutional level. Researchers and practitioners in educational technology, IT, and the learning sciences will hail the information in Learning Analytics: From Research to Practice as a springboard to new levels of student, instructor, and institutional success.
This book constitutes the refereed proceedings of the 7th International Symposium on Intelligence Computation and Applications, ISICA 2015, held in Guangzhou, China, in November 2015. The 77 revised full papers presented were carefully reviewed and selected from 189 submissions. The papers feature the most up-to-date research in analysis and theory of evolutionary computation; neural network architectures and learning; neuro-dynamics and neuro-engineering; fuzzy logic and control; collective intelligence and hybrid systems; deep learning; knowledge discovery; and learning and reasoning.
This book offers a thorough yet easy-to-read reference guide to various aspects of cloud computing security. It begins with an introduction to the general concepts of cloud computing, followed by a discussion of security aspects that examines how cloud security differs from conventional information security and reviews cloud-specific classes of threats and attacks. A range of varying threats in cloud computing are covered, from threats of data loss and data breaches, to threats to availability and threats posed by malicious insiders. Further, the book discusses attacks launched on different levels, including attacks on the hypervisor, and on the confidentiality of data. Newer types, such as side-channel attacks and resource-freeing attacks, are also described. The work closes by providing a set of general security recommendations for the cloud.
This two volume set LNCS 9049 and LNCS 9050 constitutes the refereed proceedings of the 20th International Conference on Database Systems for Advanced Applications, DASFAA 2015, held in Hanoi, Vietnam, in April 2015. The 63 full papers presented were carefully reviewed and selected from a total of 287 submissions. The papers cover the following topics: data mining; data streams and time series; database storage and index; spatio-temporal data; modern computing platform; social networks; information integration and data quality; information retrieval and summarization; security and privacy; outlier and imbalanced data analysis; probabilistic and uncertain data; query processing.
This book introduces a new logic-based multi-paradigm programming language that integrates logic programming, functional programming, dynamic programming with tabling, and scripting for use in solving combinatorial search problems. The language provides solver modules based on CP, SAT, and MIP (mixed integer programming), as well as a planning module implemented using tabling. The book is useful for undergraduate and graduate students, researchers, and practitioners.
This book constitutes the proceedings of the Fourth International Conference on Analysis of Images, Social Networks and Texts, AIST 2015, held in Yekaterinburg, Russia, in April 2015. The 24 full and 8 short papers were carefully reviewed and selected from 140 submissions. The papers are organized in topical sections on analysis of images and videos; pattern recognition and machine learning; social network analysis; text mining and natural language processing.
Data Mining and Multi-agent Integration aims to reflect state-of-the-art research and development in agent-mining interaction and integration (for short, agent mining). The book was motivated by increasing interest and work in agents and data mining, and vice versa. The interaction and integration arise from the intrinsic challenges faced by agent technology and data mining respectively; for instance, multi-agent systems face the problem of enhancing agent learning capability and avoiding the uncertainty of self-organization and intelligence emergence. Data mining, if integrated into agent systems, can greatly enhance the learning skills of agents and assist agents with prediction of future states, thus initiating follow-up action or intervention. The data mining community is now struggling with mining distributed, interactive and heterogeneous data sources. Agents can be used to manage such data sources for data access, monitoring, integration, and pattern merging from the infrastructure, gateway, message-passing and pattern-delivery perspectives. These two examples illustrate the potential of agent mining in handling challenges in the respective communities. There is an excellent opportunity to create innovative, dual agent-mining interaction and integration technology, tools and systems which will deliver results as one new technology.