Earth Observation interacts with space, remote sensing, communication, and information technologies, and plays an increasingly significant role in Earth-related scientific studies, resource management, homeland security, topographic mapping, and the development of a healthy, sustainable environment and community. Geospatial Technology for Earth Observation provides an in-depth and broad collection of recent progress in Earth observation. Contributed by leading experts in this field, the book covers satellite, airborne, and ground remote sensing systems and system integration, sensor orientation, remote sensing physics, image classification and analysis, information extraction, geospatial services, and various application topics, including cadastral mapping, land use change evaluation, water environment monitoring, flood mapping, and decision-making support. Geospatial Technology for Earth Observation serves as a valuable training source for researchers, developers, and practitioners in the geospatial science and technology industry. It is also suitable as a reference book for upper-level undergraduate and graduate students in geospatial technology, geosciences, resource management, and informatics.
In theory, there is no difference between theory and practice. But, in practice, there is. - Jan L. A. van de Snepscheut. The flow of academic ideas in the area of computational intelligence has penetrated industry with tremendous speed and persistence. Thousands of applications have proved the practical potential of fuzzy logic, neural networks, evolutionary computation, swarm intelligence, and intelligent agents even before their theoretical foundation is completely understood. And the popularity is rising. Some software vendors have pronounced the new machine learning gold rush to "Transfer Data into Gold". New buzzwords like "data mining", "genetic algorithms", and "swarm optimization" have enriched top executives' vocabulary, making them look more "visionary" for the 21st century. The phrase "fuzzy math" became political jargon after being used by US President George W. Bush in one of the 2000 election campaign debates. Even process operators discuss the performance of neural networks with the same passion as the performance of the Dallas Cowboys. However, for most of the engineers and scientists introducing computational intelligence technologies into practice, keeping track of the growing number of new approaches and understanding their theoretical principles and potential for value creation becomes a more and more difficult task.
This book constitutes the thoroughly revised selected papers of the 4th and 5th workshops on Big Data Benchmarks, Performance Optimization, and Emerging Hardware, BPOE 4 and BPOE 5, held respectively in Salt Lake City in March 2014 and in Hangzhou in September 2014. The 16 papers presented were carefully reviewed and selected from 30 submissions. Both workshops focus on architecture and system support for big data systems, covering topics such as benchmarking, workload characterization, performance optimization and evaluation, and emerging hardware.
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially. Thus, there is a critical need to understand the content of the data. As a result, data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. In this carefully edited volume, a theoretical foundation as well as important new directions for data mining research are presented. It brings together a set of well-respected data mining theoreticians and researchers with practical data mining experience. The presented theories will give data mining practitioners a scientific perspective on data mining and thus provide more insight into their problems, and the new data mining topics presented can be expected to stimulate further research in these important directions.
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
The abundance of information and increase in computing power currently enable researchers to tackle highly complicated and challenging computational problems. Solutions to such problems are now feasible using advances and innovations from the area of Artificial Intelligence. The general focus of the AIAI conference is to provide insights on how Artificial Intelligence may be applied in real-world situations and serve the study, analysis and modeling of theoretical and practical issues. This volume contains papers selected for presentation at the 6th IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI 2010), held in Larnaca, Cyprus, during October 6-7, 2010. IFIP AIAI 2010 was co-organized by the University of Cyprus and the Cyprus University of Technology and was sponsored by the Cyprus University of Technology, Frederick University and the Cyprus Tourism Organization. AIAI 2010 is the official conference of the WG12.5 "Artificial Intelligence Applications" working group of IFIP TC12, the International Federation for Information Processing Technical Committee on Artificial Intelligence (AI). AIAI is a conference that grows in significance every year, attracting researchers from different countries around the globe. It maintains high quality standards and welcomes research papers describing technical advances and engineering and industrial applications of intelligent systems. AIAI 2010 was not confined to introducing how AI may be applied in real-life situations, but also included innovative methods, techniques, tools and ideas of AI expressed at the algorithmic or systemic level.
This text reviews the evolution of the field of visualization, providing innovative examples from various disciplines, highlighting the important role that visualization plays in extracting and organizing the concepts found in complex data. Features: presents a thorough introduction to the discipline of knowledge visualization, its current state of affairs and possible future developments; examines how tables have been used for information visualization in historical textual documents; discusses the application of visualization techniques for knowledge transfer in business relationships, and for the linguistic exploration and analysis of sensory descriptions; investigates the use of visualization to understand orchestral music scores, the optical theory behind Renaissance art, and to assist in the reconstruction of a historic church; describes immersive 360-degree stereographic visualization, knowledge-embedded embodied interaction, and a novel methodology for the analysis of architectural forms.
This book constitutes the refereed proceedings of the 13th Pacific Rim Conference on Artificial Intelligence, PRICAI 2014, held in Gold Coast, Queensland, Australia, in December 2014. The 74 full papers and 20 short papers presented in this volume were carefully reviewed and selected from 203 submissions. The topics include inference; reasoning; robotics; social intelligence; AI foundations; applications of AI; agents; Bayesian networks; neural networks; Markov networks; bioinformatics; cognitive systems; constraint satisfaction; data mining and knowledge discovery; decision theory; evolutionary computation; games and interactive entertainment; heuristics; knowledge acquisition and ontology; knowledge representation; machine learning; multimodal interaction; natural language processing; planning and scheduling; probabilistic.
Enterprise Architecture, Integration, and Interoperability and the Networked Enterprise have become the theme of many conferences in the past few years. These conferences were organised by IFIP TC5 with the support of its two working groups: WG 5.12 (Architectures for Enterprise Integration) and WG 5.8 (Enterprise Interoperability), both concerned with aspects of the topic: how is it possible to architect and implement businesses that are flexible and able to change, to interact, and to use one another's services in a dynamic manner for the purpose of (joint) value creation. The original question of enterprise integration in the 1980s was: how can we achieve and integrate information and material flow in the enterprise? Various methods and reference models were developed or proposed, ranging from tightly integrated monolithic system architectures, through cell-based manufacturing, to on-demand interconnection of businesses to form virtual enterprises in response to market opportunities. Two camps have emerged in the endeavour to achieve the same goal, namely, to achieve interoperability between businesses (where interoperability is the ability to exchange information in order to use one another's services or to jointly implement a service). One school of researchers addresses the technical aspects of creating dynamic (and static) interconnections between disparate businesses (or parts thereof).
The book covers all aspects of computing, communication, general sciences and educational research presented at the Second International Conference on Computer & Communication Technologies, held during 24-26 July 2015 at Hyderabad. It was hosted by CMR Technical Campus in association with Division V (Education & Research) CSI, India. After a rigorous review, only quality papers were selected and included in this book. The entire work is divided into three volumes, which cover a variety of topics including medical imaging, networks, data mining, intelligent computing, software design, image processing, mobile computing, digital signals and speech processing, video surveillance and processing, web mining, wireless sensor networks, circuit analysis, fuzzy systems, antenna and communication systems, biomedical signal processing and applications, cloud computing, embedded systems applications, and cyber security and digital forensics. Readers of these volumes will benefit greatly from the technical content of the topics covered.
This book constitutes the refereed proceedings of the Second International Conference on Advanced Machine Learning Technologies and Applications, AMLTA 2014, held in Cairo, Egypt, in November 2014. The 49 full papers presented were carefully reviewed and selected from 101 initial submissions. The papers are organized in topical sections on machine learning in Arabic text recognition and assistive technology; recommendation systems for cloud services; machine learning in watermarking/authentication and virtual machines; features extraction and classification; rough/fuzzy sets and applications; fuzzy multi-criteria decision making; Web-based application and case-based reasoning construction; social networks and big data sets.
"Foundations of Large-Scale Multimedia Information Management and Retrieval: Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and multimodal fusion. Part II - Scalability Issues presents indexing and distributed methods for scaling up these components for high-dimensional data and Web-scale datasets. The book presents some real-world applications and remarks on future research and development directions. The book is designed for researchers, graduate students, and practitioners in the fields of Computer Vision, Machine Learning, Large-scale Data Mining, Database, and Multimedia Information Retrieval. Dr. Edward Y. Chang was a professor at the Department of Electrical & Computer Engineering, University of California at Santa Barbara, before he joined Google as a research director in 2006. Dr. Chang received his M.S. degree in Computer Science and Ph.D degree in Electrical Engineering, both from Stanford University.
The three-volume set LNCS 8834, LNCS 8835, and LNCS 8836 constitutes the proceedings of the 21st International Conference on Neural Information Processing, ICONIP 2014, held in Kuching, Malaysia, in November 2014. The 231 full papers presented were carefully reviewed and selected from 375 submissions. The selected papers cover major topics of theoretical research, empirical study, and applications of neural information processing research. The three volumes represent topical sections containing articles on cognitive science, neural networks and learning systems, theory and design, applications, kernel and statistical methods, evolutionary computation and hybrid intelligent systems, signal and image processing, and special sessions on intelligent systems for supporting decision-making processes, theories and applications, cognitive robotics, and learning systems for social network and web mining.
The main benefit of the book is that it explores available methodologies for both conducting in situ measurements and adequately exploring the results, based on a case study that illustrates the benefits and difficulties of concurrent methodologies. The case study corresponds to a set of 25 social housing dwellings, all located in the same quarter of a city, where an extensive in situ measurement campaign was conducted. Measurements included indoor temperature and relative humidity, logged continuously in different rooms of each dwelling, blower-door tests, and complete outdoor conditions provided by a nearby weather station. The book draws on a variety of scientific and engineering disciplines, such as building physics, probability and statistics, and civil engineering. It presents a synthesis of the current state of knowledge for the benefit of professional engineers and scientists.
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data, such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and the information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THIC-APFA7, titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
This book constitutes the refereed conference proceedings of the Third International Conference on Big Data Analytics, BDA 2014, held in New Delhi, India, in December 2014. The 11 revised full papers and 6 short papers were carefully reviewed and selected from 35 submissions and cover topics on media analytics; geospatial big data; semantics and data models; search and retrieval; graphics and visualization; application-specific big data.
This book constitutes the refereed proceedings of the 22nd International Symposium on String Processing and Information Retrieval, SPIRE 2015, held in London, UK, in September 2015. The 28 full and 6 short papers included in this volume were carefully reviewed and selected from 90 submissions. The papers cover research in all aspects of string processing, information retrieval, computational biology, pattern matching, semi-structured data, and related applications.
Beginning Apache Cassandra Development introduces you to one of the most robust and best-performing NoSQL database platforms on the planet. Apache Cassandra is a distributed, wide-column NoSQL database, specifically designed to manage large amounts of data across many commodity servers without there being any single point of failure. This design approach makes Apache Cassandra a robust and easy-to-implement platform when high availability is needed. Apache Cassandra can be used by developers in Java, PHP, Python, and JavaScript, the primary and most commonly used languages. In Beginning Apache Cassandra Development, author and Cassandra expert Vivek Mishra takes you through using Apache Cassandra from each of these primary languages. Mishra also covers the Cassandra Query Language (CQL), the Apache Cassandra analog to SQL. You'll learn to develop applications sourcing data from Cassandra, query that data, and deliver it at speed to your application's users. Cassandra is one of the leading NoSQL databases, meaning you get unparalleled throughput and performance without the sort of processing overhead that comes with traditional proprietary databases. Beginning Apache Cassandra Development will therefore help you create applications that generate search results quickly, stand up to high levels of demand, scale as your user base grows, ensure operational simplicity, and, not least, provide delightful user experiences.
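To give a flavour of how CQL is used from Python, here is a minimal sketch using the open-source DataStax driver (cassandra-driver); the keyspace and table names are invented for illustration, and a Cassandra node is assumed to be running locally. This is a sketch of the general driver workflow, not an example from the book.

from cassandra.cluster import Cluster  # pip install cassandra-driver

# Connect to a locally running node; the contact point is illustrative.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Hypothetical keyspace and table, purely for demonstration.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.users (user_id int PRIMARY KEY, name text)"
)

# CQL reads and writes look deliberately SQL-like.
session.execute("INSERT INTO demo.users (user_id, name) VALUES (%s, %s)", (1, "Ada"))
for row in session.execute("SELECT user_id, name FROM demo.users"):
    print(row.user_id, row.name)

cluster.shutdown()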
Modern terrorist networks pose an unprecedented threat to international security. The question of how to neutralize that threat is complicated radically by their fluid, non-hierarchical structures, religious and ideological motivations, and predominantly non-territorial objectives. Governments and militaries are crafting new policies and doctrines to combat terror, but they desperately need new technologies to make these efforts effective. This book collects a wide range of the most current computational research that addresses critical issues for countering terrorism, including: Finding, summarizing, and evaluating relevant information from large and changing data stores; Simulating and predicting enemy acts and outcomes; and Producing actionable intelligence by finding meaningful patterns hidden in huge amounts of noisy data. The book's four sections describe current research on discovering relevant information buried in vast amounts of unstructured data; extracting meaningful information from digitized documents in multiple languages; analyzing graphs and networks to shed light on adversaries' goals and intentions; and developing software systems that enable analysts to model, simulate, and predict the effects of real-world conflicts. The research described in this book is invaluable reading for governmental decision-makers designing new policies to counter terrorist threats, for members of the military, intelligence, and law enforcement communities devising counterterrorism strategies, and for researchers developing more effective methods for knowledge discovery in complicated and diverse datasets.
The book is a collection of high-quality peer-reviewed research papers presented at the International Conference on Soft Computing Systems (ICSCS 2015), held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.
The proceedings of the 5th International Workshop on Parallel Tools for High Performance Computing provide an overview of supportive software tools and environments in the fields of system management, parallel debugging and performance analysis. In its pursuit of continued exponential growth in the performance of high-performance computers, the HPC community is currently targeting exascale systems; initial planning for exascale began when the first petaflop system was delivered. Many challenges need to be addressed to reach the necessary performance: scalability, energy efficiency and fault tolerance need to be improved by orders of magnitude. The goal can only be achieved when advanced hardware is combined with a suitable software stack. In fact, the importance of software is growing rapidly, and as a result many international projects focus on the necessary software.
Statistics and hypothesis testing are routinely used in areas (such as linguistics) that are traditionally not mathematically intensive. In such fields, when faced with experimental data, many students and researchers tend to rely on commercial packages to carry out statistical data analysis, often without understanding the logic of the statistical tests they rely on. As a consequence, results are often misinterpreted, and users have difficulty in flexibly applying techniques relevant to their own research - they use whatever they happen to have learned. A simple solution is to teach the fundamental ideas of statistical hypothesis testing without using too much mathematics. This book provides a non-mathematical, simulation-based introduction to basic statistical concepts and encourages readers to try out the simulations themselves using the source code and data provided (the freely available programming language R is used throughout). Since the code presented in the text almost always requires the use of previously introduced programming constructs, diligent students also acquire basic programming abilities in R. The book is intended for advanced undergraduate and graduate students in any discipline, although the focus is on linguistics, psychology, and cognitive science. It is designed for self-instruction, but it can also be used as a textbook for a first course on statistics. Earlier versions of the book have been used in undergraduate and graduate courses in Europe and the US. "Vasishth and Broe have written an attractive introduction to the foundations of statistics. It is concise, surprisingly comprehensive, self-contained and yet quite accessible. Highly recommended." Harald Baayen, Professor of Linguistics, University of Alberta, Canada "By using the text students not only learn to do the specific things outlined in the book, they also gain a skill set that empowers them to explore new areas that lie beyond the book's coverage." Colin Phillips, Professor of Linguistics, University of Maryland, USA
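The book's simulations are written in R; purely as a language-neutral illustration of the simulation-based idea it teaches, the following short Python sketch estimates a p-value with a permutation test. The data are invented for the example.

# A permutation test sketched in Python; the book itself uses R,
# and these measurements are invented purely for illustration.
import random

group_a = [4.1, 3.8, 5.0, 4.6, 4.9]   # e.g. reading times, condition A
group_b = [5.2, 5.6, 4.8, 6.0, 5.5]   # e.g. reading times, condition B

observed = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

pooled = group_a + group_b
n_a = len(group_a)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)                       # break any real group structure
    diff = sum(pooled[n_a:]) / len(group_b) - sum(pooled[:n_a]) / n_a
    if abs(diff) >= abs(observed):               # two-sided test
        count += 1

print("estimated p-value:", count / trials)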
Data mining (DM) consists of extracting interesting knowledge from real-world, large and complex data sets, and is the core step of a broader process, called the knowledge discovery from databases (KDD) process. In addition to the DM step, which actually extracts knowledge from data, the KDD process includes several preprocessing (or data preparation) and post-processing (or knowledge refinement) steps. The goal of data preprocessing methods is to transform the data to facilitate the application of one or several given DM algorithms, whereas the goal of knowledge refinement methods is to validate and refine discovered knowledge. Ideally, discovered knowledge should be not only accurate, but also comprehensible and interesting to the user. The total process is highly computation intensive. The idea of automatically discovering knowledge from databases is a very attractive and challenging task, both for academia and for industry. Hence, there has been a growing interest in data mining in several AI-related areas, including evolutionary algorithms (EAs). The main motivation for applying EAs to KDD tasks is that they are robust and adaptive search methods, which perform a global search in the space of candidate solutions (for instance, rules or another form of knowledge representation).
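As a toy sketch of that idea (not a method from the book), the following Python snippet evolves a one-dimensional classification rule of the form "IF x > t THEN positive" using mutation and truncation selection; the data and parameters are invented for illustration.

# Toy evolutionary search for the rule "IF x > t THEN positive".
# Invented data and parameters; a sketch of the idea, not the book's method.
import random

xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, x > 6.0) for x in xs]               # hidden true threshold: 6.0

def fitness(t):
    # Fraction of examples the rule classifies correctly.
    return sum((x > t) == label for x, label in data) / len(data)

population = [random.uniform(0, 10) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # truncation selection
    children = [p + random.gauss(0, 0.5) for p in parents]  # Gaussian mutation
    population = parents + children

best = max(population, key=fitness)
print(f"best threshold {best:.2f}, accuracy {fitness(best):.3f}")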
This book constitutes the refereed proceedings of the Workshops held at the 8th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2012, in Halkidiki, Greece, in September 2012. The book includes a total of 66 interesting and innovative research papers from the following 8 workshops: the Second Artificial Intelligence Applications in Biomedicine Workshop (AIAB 2012), the First AI in Education Workshop: Innovations and Applications (AIeIA 2012), the Second International Workshop on Computational Intelligence in Software Engineering (CISE 2012), the First Conformal Prediction and Its Applications Workshop (COPA 2012), the First Intelligent Innovative Ways for Video-to-Video Communication in Modern Smart Cities Workshop (IIVC 2012), the Third Intelligent Systems for Quality of Life Information Services Workshop (ISQL 2012), the First Mining Humanistic Data Workshop (MHDW 2012), and the First Workshop on Algorithms for Data and Text Mining in Bioinformatics (WADTMB 2012).
You may like...
Are you sitting comfortably? The book… by Peyton Skipwith, James Russell (Hardcover)
Cache and Interconnect Architectures in… by Michel Dubois, Shreekant S. Thakkar (Hardcover), R2,808 (Discovery Miles 28 080)
Web Services - Concepts, Methodologies… by Information Reso Management Association (Hardcover), R8,957 (Discovery Miles 89 570)
Harnessing Performance Variability in… by William Fornaciari, Dimitrios Soudris (Hardcover), R2,692 (Discovery Miles 26 920)