This book constitutes the refereed conference proceedings of the Third International Conference on Big Data Analytics, BDA 2014, held in New Delhi, India, in December 2014. The 11 revised full papers and 6 short papers were carefully reviewed and selected from 35 submissions and cover topics on media analytics; geospatial big data; semantics and data models; search and retrieval; graphics and visualization; application-specific big data.
Earth Observation interacts with space, remote sensing, communication, and information technologies, and plays an increasingly significant role in Earth related scientific studies, resource management, homeland security, topographic mapping, and development of a healthy, sustainable environment and community. Geospatial Technology for Earth Observation provides an in-depth and broad collection of recent progress in Earth observation. Contributed by leading experts in this field, the book covers satellite, airborne and ground remote sensing systems and system integration, sensor orientation, remote sensing physics, image classification and analysis, information extraction, geospatial service, and various application topics, including cadastral mapping, land use change evaluation, water environment monitoring, flood mapping, and decision making support. Geospatial Technology for Earth Observation serves as a valuable training source for researchers, developers, and practitioners in geospatial science and technology industry. It is also suitable as a reference book for upper level college students and graduate students in geospatial technology, geosciences, resource management, and informatics.
Data-driven discovery is revolutionizing the modeling, prediction, and control of complex systems. This textbook brings together machine learning, engineering mathematics, and mathematical physics to integrate modeling and control of dynamical systems with modern methods in data science. It highlights many of the recent advances in scientific computing that enable data-driven methods to be applied to a diverse range of complex systems, such as turbulence, the brain, climate, epidemiology, finance, robotics, and autonomy. Aimed at advanced undergraduate and beginning graduate students in the engineering and physical sciences, the text presents a range of topics and methods from introductory to state of the art.
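One representative technique from this data-driven modeling of dynamical systems is identifying a linear model directly from measured snapshots, in the spirit of dynamic mode decomposition. The sketch below is illustrative only: the toy system matrix and trajectory are invented for this example and do not come from the book.

```python
import numpy as np

# Assumed toy system: a known discrete-time linear system x_{k+1} = A x_k,
# used here only to generate synthetic "measurement" data.
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])

# Collect a trajectory of state snapshots from one initial condition.
x = np.array([1.0, 1.0])
snapshots = [x.copy()]
for _ in range(30):
    x = A_true @ x
    snapshots.append(x.copy())

# Arrange snapshots as columns and form time-shifted matrices.
X = np.array(snapshots).T        # states at times 0..30
X1, X2 = X[:, :-1], X[:, 1:]     # X2 holds the successors of X1

# Data-driven model: least-squares fit A_est ≈ X2 · pinv(X1),
# so that X2 ≈ A_est @ X1.
A_est = X2 @ np.linalg.pinv(X1)
```

On clean data from a truly linear system, the recovered `A_est` matches the generating dynamics to numerical precision; on real measurements (turbulence, neural recordings, epidemiological counts) the same least-squares fit yields a best linear approximation of the dynamics.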
This book constitutes the refereed proceedings of the 11th International Workshop on Algorithms and Models for the Web Graph, WAW 2014, held in Beijing, China, in December 2014. The 12 papers presented were carefully reviewed and selected for inclusion in this volume. The aim of the workshop was to further the understanding of graphs that arise from the Web and various user activities on the Web, and stimulate the development of high-performance algorithms and applications that exploit these graphs. The workshop gathered the researchers who are working on graph-theoretic and algorithmic aspects of related complex networks, including social networks, citation networks, biological networks, molecular networks, and other networks arising from the Internet.
Modern terrorist networks pose an unprecedented threat to international security. The question of how to neutralize that threat is complicated radically by their fluid, non-hierarchical structures, religious and ideological motivations, and predominantly non-territorial objectives. Governments and militaries are crafting new policies and doctrines to combat terror, but they desperately need new technologies to make these efforts effective. This book collects a wide range of the most current computational research that addresses critical issues for countering terrorism, including: Finding, summarizing, and evaluating relevant information from large and changing data stores; Simulating and predicting enemy acts and outcomes; and Producing actionable intelligence by finding meaningful patterns hidden in huge amounts of noisy data. The book's four sections describe current research on discovering relevant information buried in vast amounts of unstructured data; extracting meaningful information from digitized documents in multiple languages; analyzing graphs and networks to shed light on adversaries' goals and intentions; and developing software systems that enable analysts to model, simulate, and predict the effects of real-world conflicts. The research described in this book is invaluable reading for governmental decision-makers designing new policies to counter terrorist threats, for members of the military, intelligence, and law enforcement communities devising counterterrorism strategies, and for researchers developing more effective methods for knowledge discovery in complicated and diverse datasets.
"In theory, there is no difference between theory and practice. But, in practice, there is." (Jan L. A. van de Snepscheut) The flow of academic ideas in the area of computational intelligence has penetrated industry with tremendous speed and persistence. Thousands of applications have proved the practical potential of fuzzy logic, neural networks, evolutionary computation, swarm intelligence, and intelligent agents even before their theoretical foundations were completely understood. And their popularity is rising. Some software vendors have pronounced the new machine learning gold rush to "Transfer Data into Gold". New buzzwords like "data mining", "genetic algorithms", and "swarm optimization" have enriched top executives' vocabulary to make them look more "visionary" for the 21st century. The phrase "fuzzy math" became political jargon after being used by US President George W. Bush in one of the debates of the 2000 election campaign. Even process operators are discussing the performance of neural networks with the same passion as the performance of the Dallas Cowboys. However, for most of the engineers and scientists introducing computational intelligence technologies into practice, keeping track of the growing number of new approaches and understanding their theoretical principles and potential for value creation becomes an increasingly difficult task.
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially, creating a critical need to understand the content of that data. As a result, data mining has become a popular research topic in recent years for treating the "data rich and information poor" syndrome. This carefully edited volume presents a theoretical foundation as well as important new directions for data mining research. It brings together a set of well-respected data mining theoreticians and researchers with practical data mining experience. The theories presented will give data mining practitioners a scientific perspective on data mining and thus more insight into their problems, and the new data mining topics provided can be expected to stimulate further research in these important directions.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This special issue contains extended and revised versions of 4 papers, selected from the 25 papers presented at the satellite events associated with the 17th East-European Conference on Advances in Databases and Information Systems (ADBIS 2013), held on September 1-4, 2013 in Genoa, Italy. The three satellite events were GID 2013, the Second International Workshop on GPUs in Databases; SoBI 2013, the First International Workshop on Social Business Intelligence: Integrating Social Content in Decision Making; and OAIS 2013, the Second International Workshop on Ontologies Meet Advanced Information Systems. The papers cover various topics in large-scale data and knowledge-centered systems, including GPU-accelerated database systems and GPU-based compression for large time series databases, design of parallel data warehouses, and schema matching. 
The special issue content, which combines both theoretical and application-based contributions, gives a useful overview of some of the current trends in large-scale data and knowledge management and will stimulate new ideas for further research and development within both the scientific and industrial communities.
This book constitutes the proceedings of the 10th International Conference on Advanced Data Mining and Applications, ADMA 2014, held in Guilin, China, in December 2014. The 48 regular papers and 10 workshop papers presented in this volume were carefully reviewed and selected from 90 submissions. They deal with the following topics: data mining, social networks and social media, recommender systems, databases, dimensionality reduction, advanced machine learning techniques, classification, big data and applications, clustering methods, machine learning, and data mining and databases.
The abundance of information and increase in computing power currently enable researchers to tackle highly complicated and challenging computational problems. Solutions to such problems are now feasible using advances and innovations from the area of Artificial Intelligence. The general focus of the AIAI conference is to provide insights on how Artificial Intelligence may be applied in real-world situations and serve the study, analysis and modeling of theoretical and practical issues. This volume contains papers selected for presentation at the 6th IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI 2010), held in Larnaca, Cyprus, during October 6-7, 2010. IFIP AIAI 2010 was co-organized by the University of Cyprus and the Cyprus University of Technology and was sponsored by the Cyprus University of Technology, Frederick University and the Cyprus Tourism Organization. AIAI 2010 is the official conference of the WG12.5 "Artificial Intelligence Applications" working group of IFIP TC12, the International Federation for Information Processing Technical Committee on Artificial Intelligence (AI). AIAI is a conference that grows in significance every year, attracting researchers from different countries around the globe. It maintains high quality standards and welcomes research papers describing technical advances and engineering and industrial applications of intelligent systems. AIAI 2010 was not confined to introducing how AI may be applied in real-life situations, but also included innovative methods, techniques, tools and ideas of AI expressed at the algorithmic or systemic level.
This text reviews the evolution of the field of visualization, providing innovative examples from various disciplines, highlighting the important role that visualization plays in extracting and organizing the concepts found in complex data. Features: presents a thorough introduction to the discipline of knowledge visualization, its current state of affairs and possible future developments; examines how tables have been used for information visualization in historical textual documents; discusses the application of visualization techniques for knowledge transfer in business relationships, and for the linguistic exploration and analysis of sensory descriptions; investigates the use of visualization to understand orchestral music scores, the optical theory behind Renaissance art, and to assist in the reconstruction of a historic church; describes immersive 360-degree stereographic visualization, knowledge-embedded embodied interaction, and a novel methodology for the analysis of architectural forms.
This book constitutes the refereed proceedings of the 13th Pacific Rim Conference on Artificial Intelligence, PRICAI 2014, held in Gold Coast, Queensland, Australia, in December 2014. The 74 full papers and 20 short papers presented in this volume were carefully reviewed and selected from 203 submissions. The topics include inference; reasoning; robotics; social intelligence; AI foundations; applications of AI; agents; Bayesian networks; neural networks; Markov networks; bioinformatics; cognitive systems; constraint satisfaction; data mining and knowledge discovery; decision theory; evolutionary computation; games and interactive entertainment; heuristics; knowledge acquisition and ontology; knowledge representation; machine learning; multimodal interaction; natural language processing; planning and scheduling; and probabilistic methods.
Enterprise Architecture, Integration, and Interoperability and the Networked Enterprise have become the theme of many conferences in the past few years. These conferences were organised by IFIP TC5 with the support of its two working groups: WG 5.12 (Architectures for Enterprise Integration) and WG 5.8 (Enterprise Interoperability), both concerned with aspects of the topic: how is it possible to architect and implement businesses that are flexible and able to change, to interact, and to use one another's services in a dynamic manner for the purpose of (joint) value creation? The original question of enterprise integration in the 1980s was: how can we achieve an integrated information and material flow in the enterprise? Various methods and reference models were developed or proposed, ranging from tightly integrated monolithic system architectures, through cell-based manufacturing, to on-demand interconnection of businesses to form virtual enterprises in response to market opportunities. Two camps have emerged in the endeavour to achieve the same goal, namely, interoperability between businesses (where interoperability is the ability to exchange information in order to use one another's services or to jointly implement a service). One school of researchers addresses the technical aspects of creating dynamic (and static) interconnections between disparate businesses (or parts thereof).
The book covers all aspects of computing, communication, general sciences and educational research presented at the Second International Conference on Computer & Communication Technologies, held during 24-26 July 2015 at Hyderabad. It was hosted by CMR Technical Campus in association with Division V (Education & Research) of CSI, India. After a rigorous review, only quality papers were selected and included in this book. The entire work is divided into three volumes, which cover a variety of topics including medical imaging, networks, data mining, intelligent computing, software design, image processing, mobile computing, digital signals and speech processing, video surveillance and processing, web mining, wireless sensor networks, circuit analysis, fuzzy systems, antenna and communication systems, biomedical signal processing and applications, cloud computing, embedded systems applications, and cyber security and digital forensics. Readers of these volumes will benefit greatly from the technical content of these topics.
The three volume set LNCS 8834, LNCS 8835, and LNCS 8836 constitutes the proceedings of the 21st International Conference on Neural Information Processing, ICONIP 2014, held in Kuching, Malaysia, in November 2014. The 231 full papers presented were carefully reviewed and selected from 375 submissions. The selected papers cover major topics of theoretical research, empirical study, and applications of neural information processing research. The 3 volumes represent topical sections containing articles on cognitive science, neural networks and learning systems, theory and design, applications, kernel and statistical methods, evolutionary computation and hybrid intelligent systems, signal and image processing, and special sessions on intelligent systems for supporting decision-making processes, theories and applications, cognitive robotics, and learning systems for social network and web mining.
This book constitutes the refereed proceedings of the 22nd International Symposium on String Processing and Information Retrieval, SPIRE 2015, held in London, UK, in September 2015. The 28 full and 6 short papers included in this volume were carefully reviewed and selected from 90 submissions. The papers cover research in all aspects of string processing, information retrieval, computational biology, pattern matching, semi-structured data, and related applications.
This book constitutes revised selected papers from the second ECML PKDD Workshop on Data Analytics for Renewable Energy Integration, DARE 2014, held in Nancy, France, in September 2014. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book.
Beginning Apache Cassandra Development introduces you to one of the most robust and best-performing NoSQL database platforms on the planet. Apache Cassandra is a distributed, wide-column NoSQL database. It is specifically designed to manage large amounts of data across many commodity servers without any single point of failure, a design approach that makes Apache Cassandra a robust and easy-to-implement platform when high availability is needed. Apache Cassandra can be used by developers in Java, PHP, Python, and JavaScript, the primary and most commonly used languages. In Beginning Apache Cassandra Development, author and Cassandra expert Vivek Mishra takes you through using Apache Cassandra from each of these primary languages. Mishra also covers the Cassandra Query Language (CQL), the Apache Cassandra analog to SQL. You'll learn to develop applications that source data from Cassandra, query that data, and deliver it at speed to your application's users. Cassandra is one of the leading NoSQL databases, giving you high throughput and performance without the processing overhead that comes with traditional proprietary databases. Beginning Apache Cassandra Development will therefore help you create applications that generate search results quickly, stand up to high levels of demand, scale as your user base grows, ensure operational simplicity, and, not least, provide delightful user experiences.
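To give a flavour of the CQL the book covers, the fragment below sketches its SQL-like shape. The keyspace, table, and column names are invented for this illustration and do not come from the book.

```sql
-- Illustrative CQL only: keyspace, table, and columns are assumptions.
CREATE KEYSPACE IF NOT EXISTS shop
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE IF NOT EXISTS shop.orders (
  user_id   uuid,
  order_ts  timestamp,
  total     decimal,
  PRIMARY KEY (user_id, order_ts)   -- partition key, then clustering column
) WITH CLUSTERING ORDER BY (order_ts DESC);

-- Unlike general SQL, queries are expected to be driven by the primary key:
-- the partition key routes the query to the right replicas.
SELECT order_ts, total FROM shop.orders WHERE user_id = ?;
```

The notable difference from relational SQL is that the primary key doubles as the physical data layout: the partition key determines which nodes hold a row, which is what lets Cassandra scale across commodity servers without a single point of failure.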
Imagine James Bond meets Sherlock Holmes: Counterterrorism and Cybersecurity is the sequel to Facebook Nation in the Total Information Awareness book series by Newton Lee. The book examines U.S. counterterrorism history, technologies, and strategies from a unique and thought-provoking approach that encompasses personal experiences, investigative journalism, historical and current events, ideas from great thought leaders, and even the make-believe of Hollywood. Demystifying Total Information Awareness, the author expounds on the U.S. intelligence community, artificial intelligence in data mining, social media and privacy, cyber attacks and prevention, causes and cures for terrorism, and longstanding issues of war and peace. The book offers practical advice for businesses, governments, and individuals to better secure the world and protect cyberspace. It quotes U.S. Navy Admiral and NATO's Supreme Allied Commander James Stavridis: "Instead of building walls to create security, we need to build bridges." The book also provides a glimpse into the future of Plan X and Generation Z, along with an ominous prediction from security advisor Marc Goodman at TEDGlobal 2012: "If you control the code, you control the world." Counterterrorism and Cybersecurity: Total Information Awareness will keep you up at night but at the same time give you some peace of mind knowing that "our problems are manmade - therefore they can be solved by man [or woman]," as President John F. Kennedy said at the American University commencement in June 1963.
The main benefit of the book is that it explores available methodologies for both conducting in-situ measurements and adequately exploring the results, based on a case study that illustrates the benefits and difficulties of concurrent methodologies. The case study corresponds to a set of 25 social housing dwellings where an extensive in situ measurement campaign was conducted. The dwellings are located in the same quarter of a city. Measurements included indoor temperature and relative humidity, with continuous log in different rooms of each dwelling, blower-door tests and complete outdoor conditions provided by a nearby weather station. The book includes a variety of scientific and engineering disciplines, such as building physics, probability and statistics and civil engineering. It presents a synthesis of the current state of knowledge for benefit of professional engineers and scientists.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This, the 14th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains four revised selected regular papers. Topics covered include data stream systems, top-k query processing, semantic web service (SWS) discovery, and XML functional dependencies.
Data mining has emerged as one of the most active areas in information and communication technologies (ICT). With the booming of the global economy, and ubiquitous computing and networking across every sector and business, data and its deep analysis become a particularly important issue for enhancing the soft power of an organization, its production systems, decision-making and performance. The last ten years have seen ever-increasing applications of data mining in business, government, social networks and the like. However, a crucial problem that prevents data mining from playing a strategic decision-support role in ICT is its usually limited decision-support power in the real world. Typical concerns include its actionability, workability, transferability, and the trustworthy, dependable, repeatable, operable and explainable capabilities of data mining algorithms, tools and outputs. This monograph, Domain Driven Data Mining, is motivated by the real-world challenges to and complexities of the current KDD methodologies and techniques, which are critical issues faced by data mining, as well as the findings, thoughts and lessons learned in conducting several large-scale real-world data mining business applications. The aim and objective of domain driven data mining is to study effective and efficient methodologies, techniques, tools, and applications that can discover and deliver actionable knowledge that can be passed on to business people for direct decision-making and action-taking.