This edited volume collects the best chapters presented at the International Conference on Computer and Applications (ICCA'17), held in Dubai, United Arab Emirates, in September 2017. Selected chapters present new advances in digital information, communications and multimedia. Authors from different countries show and discuss their findings, propose new approaches, compare them with existing ones and include recommendations. They address all applications of computing including (but not limited to) connected health, information security, assistive technology, edutainment and serious games, education, grid computing, transportation, social computing, natural language processing, knowledge extraction and reasoning, Arabic apps, image and pattern processing, virtual reality, cloud computing, haptics, information security, robotics, network algorithms, web engineering, big data analytics, ontology, constraint satisfaction, cryptography and steganography, fuzzy logic, soft computing, neural networks, artificial intelligence, biometry and bioinformatics, embedded systems, computer graphics, algorithms and optimization, the Internet of Things and smart cities. The book can be used by researchers and practitioners to discover recent trends in computer applications, and it opens new horizons for research and discovery, both locally and internationally.
The book reports on the latest advances and challenges of soft computing. It gathers original scientific contributions written by top scientists in the field, covering theories, methods and applications in a number of research areas related to soft computing, such as decision-making, probabilistic reasoning, image processing, control, neural networks and data analysis.
This book offers a timely report on key theories and applications of soft-computing. Written in honour of Professor Gaspar Mayor on his 70th birthday, it primarily focuses on areas related to his research, including fuzzy binary operators, aggregation functions, multi-distances, and fuzzy consensus/decision models. It also discusses a number of interesting applications such as the implementation of fuzzy mathematical morphology based on Mayor-Torrens t-norms. Importantly, the different chapters, authored by leading experts, present novel results and offer new perspectives on different aspects of Mayor's research. The book also includes an overview of evolutionary fuzzy systems, a topic that is not one of Mayor's main areas of interest, and a final chapter written by the Spanish pioneer in fuzzy logic, Professor E. Trillas. Computer and decision scientists, knowledge engineers and mathematicians alike will find here an authoritative overview of key soft-computing concepts and techniques.
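As a concrete aside on the fuzzy binary operators mentioned above, the following minimal Python sketch implements a few standard t-norms together with the parametric Mayor-Torrens family; the formula used here is the commonly cited one and is only an illustration, not an excerpt from the book.

    # Minimal sketch of common t-norms (fuzzy conjunctions) on [0, 1].
    # The Mayor-Torrens family below follows the commonly cited definition:
    # T_lam(x, y) = max(0, x + y - lam) when x, y are both in [0, lam],
    # and min(x, y) otherwise.

    def t_min(x, y):
        return min(x, y)

    def t_product(x, y):
        return x * y

    def t_lukasiewicz(x, y):
        return max(0.0, x + y - 1.0)

    def mayor_torrens(x, y, lam):
        """Parametric family: lam = 0 gives the minimum t-norm,
        lam = 1 gives the Lukasiewicz t-norm."""
        if 0.0 < lam and x <= lam and y <= lam:
            return max(0.0, x + y - lam)
        return min(x, y)

    for lam in (0.0, 0.5, 1.0):
        print(lam, mayor_torrens(0.4, 0.3, lam))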
Data mining is the process of extracting hidden patterns from data, and it's commonly used in business, bioinformatics, counter-terrorism, and, increasingly, in professional sports. First popularized in Michael Lewis' best-selling Moneyball: The Art of Winning an Unfair Game, it has become an intrinsic part of all professional sports the world over, from baseball to cricket to soccer. While an industry has developed based on statistical analysis services for any given sport, or even for betting behavior analysis on these sports, no research-level book has considered the subject in any detail until now. Sports Data Mining brings together in one place the state of the art as it concerns an international array of sports: baseball, football, basketball, soccer, and greyhound racing are all covered, and the authors (including Hsinchun Chen, one of the most esteemed and well-known experts in data mining in the world) present the latest research, developments, software available, and applications for each sport. They even examine the hidden patterns in gaming and wagering, along with the most common systems for wager analysis.
The importance of having efficient and effective methods for data mining and knowledge discovery (DM&KD), to which the present book is devoted, grows every day, and numerous such methods have been developed in recent decades. There exists a great variety of different settings for the main problem studied by data mining and knowledge discovery, and it seems that a very popular one is formulated in terms of binary attributes. In this setting, states of nature of the application area under consideration are described by Boolean vectors defined on some attributes, that is, by data points defined in the Boolean space of the attributes. It is postulated that there exists a partition of this space into two classes, which should be inferred as patterns on the attributes when only several data points are known, the so-called positive and negative training examples. The main problem in DM&KD is defined as finding rules for recognizing (classifying) new data points of unknown class, i.e., deciding which of them are positive and which are negative. In other words, the task is to infer the binary value of one more attribute, called the goal or class attribute. To solve this problem, some methods have been suggested which construct a Boolean function separating the two given sets of positive and negative training data points.
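To make the binary-attribute setting described above concrete, here is a small illustrative sketch (not the book's own method): the positive and negative training examples are Boolean vectors, and the most specific conjunction of attribute values covering all positives is used to classify new points.

    # Illustrative sketch of the binary-attribute setting (not the book's
    # method): infer the most specific conjunction of attribute values that
    # covers all positive examples, then check it against the negatives.

    def learn_conjunction(positives):
        """Return {attribute index: required value}, kept only where all
        positive examples agree."""
        rule = dict(enumerate(positives[0]))
        for x in positives[1:]:
            rule = {i: v for i, v in rule.items() if x[i] == v}
        return rule

    def matches(rule, x):
        return all(x[i] == v for i, v in rule.items())

    positives = [(1, 0, 1, 1), (1, 1, 1, 1)]   # known positive data points
    negatives = [(0, 0, 1, 0), (1, 0, 0, 0)]   # known negative data points

    rule = learn_conjunction(positives)
    print("learned pattern:", rule)                       # {0: 1, 2: 1, 3: 1}
    print("excludes all negatives:",
          all(not matches(rule, x) for x in negatives))   # True here
    print("new point (1, 1, 1, 0) classified positive?",
          matches(rule, (1, 1, 1, 0)))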
Recently, the pressure for fast processing and efficient storage of large data with complex relations increased beyond the capability of traditional databases. Typical examples include iPhone applications, computer aided design - both electrical and mechanical, biochemistry applications, and incremental compilers. Serialization, which is sometimes used in such situations, is notoriously tedious and error prone. In this book, Jiri Soukup and Petr Macháček show in detail how to write programs which store their internal data automatically and transparently to disk. Together with special data structure libraries which treat relations among objects as first-class entities, and with a UML class-diagram generator, the core application code is much simplified. The benchmark chapter shows a typical example where persistent data is faster by an order of magnitude than with a traditional database, in both traversing and accessing the data. The authors explore and exploit advanced features of object-oriented languages in a depth hardly seen in print before. Yet, you as a reader need only a basic knowledge of C++, Java, C#, or Objective-C. These languages are quite similar with respect to persistency, and the authors explain their differences where necessary. The book targets professional programmers working on any industry applications; it teaches you how to design your own persistent data or how to use the existing packages efficiently. Researchers in areas like language design, compiler construction, performance evaluation, and no-SQL applications will find a wealth of novel ideas and valuable implementation tips. Under http://www.codefarms.com/bk, you will find a blog and other information, including a downloadable zip file with the sources of all the listings that are longer than just a few lines - ready to compile and run.
Web usage mining is defined as the application of data mining technologies to online usage patterns as a way to better understand and serve the needs of web-based applications. Because the internet has become a central component in information sharing and commerce, having the ability to analyze user behavior on the web has become a critical component to a variety of industries. Web Usage Mining Techniques and Applications Across Industries addresses the systems and methodologies that enable organizations to predict web user behavior as a way to support website design and personalization of web-based services and commerce. Featuring perspectives from a variety of sectors, this publication is designed for use by IT specialists, business professionals, researchers, and graduate-level students interested in learning more about the latest concepts related to web-based information retrieval and mining.
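As a toy illustration of the kind of usage pattern such systems start from (a hypothetical example, not drawn from the publication), the snippet below counts the most frequent consecutive page transitions in per-user clickstreams.

    # Minimal, hypothetical web usage mining example: count the most frequent
    # consecutive page pairs across per-user clickstream sessions. Real systems
    # use far richer models, but the basic step of turning raw usage logs into
    # patterns that inform site design looks like this.

    from collections import Counter

    sessions = {                       # hypothetical per-user page sequences
        "u1": ["home", "search", "product", "cart", "checkout"],
        "u2": ["home", "product", "cart"],
        "u3": ["home", "search", "product"],
    }

    pair_counts = Counter()
    for pages in sessions.values():
        pair_counts.update(zip(pages, pages[1:]))

    for (a, b), n in pair_counts.most_common(3):
        print(f"{a} -> {b}: seen in {n} sessions")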
This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and to recent techniques and environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.
This book introduces readers to the field of conformance checking as a whole and outlines the fundamental relation between modelled and recorded behaviour. Conformance checking interrelates the modelled and recorded behaviour of a given process and provides techniques and methods for comparing and analysing observed instances of a process in the presence of a model, independent of the model's origin. Its goal is to provide an overview of the essential techniques and methods in this field at an intuitive level, together with precise formalisations of its underlying principles. The book is divided into three parts that cover different perspectives on the field of conformance checking. Part I presents a comprehensive yet accessible overview of the essential concepts used to interrelate modelled and recorded behaviour. It also serves as a reference for assessing how conformance checking efforts could be applied in specific domains. Next, Part II provides readers with detailed insights into algorithms for conformance checking, including the most commonly used formal notions and their instantiation for specific analysis questions. Lastly, Part III highlights applications that help to make sense of conformance checking results, thereby providing a necessary next step to increase the value of a given process model. They help to interpret the outcomes of conformance checking and incorporate them by means of enhancement and repair techniques. Providing the core building blocks of conformance checking and describing its main applications, this book mainly addresses students specializing in business process management, researchers entering process mining and conformance checking for the first time, and advanced professionals whose work involves process evaluation, modelling and optimization.
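As a drastically simplified illustration of the core idea, the sketch below compares recorded traces against a toy process model given as a set of allowed activity transitions and reports deviating moves; real conformance-checking techniques such as token replay and alignments are far more elaborate.

    # Drastically simplified conformance-checking sketch: a process model
    # (here just a map of allowed activity transitions) is replayed against
    # recorded traces from an event log, and each deviating move is reported.

    model = {                                   # hypothetical order-handling model
        "start": {"register"},
        "register": {"check", "cancel"},
        "check": {"ship"},
        "ship": {"end"},
        "cancel": {"end"},
    }

    def check_trace(trace):
        deviations = []
        current = "start"
        for activity in trace + ["end"]:
            if activity not in model.get(current, set()):
                deviations.append((current, activity))
            current = activity
        return deviations

    log = [
        ["register", "check", "ship"],          # conforming trace
        ["register", "ship"],                   # skips the "check" activity
    ]
    for trace in log:
        print(trace, "->", check_trace(trace) or "conforms")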
This book provides a technical approach to a Business Resilience System with its Risk Atom and Processing Data Point based on fuzzy logic and cloud computation in real time. Its purpose and objectives define a clear set of expectations for organizations and enterprises so that their network systems and supply chains are fully resilient and protected against cyber-attacks, man-made threats, and natural disasters. These enterprises include financial, organizational, homeland security, and supply chain operations with multi-point manufacturing across the world. Market shares and marketing advantages are expected to result from the implementation of the system. The collected information and defined objectives form the basis for monitoring and analyzing the data through cloud computation, and will help guarantee survivability against unexpected threats. This book will be useful for advanced undergraduate and graduate students in the field of computer engineering, engineers who work for manufacturing companies, business analysts in retail and e-commerce, and those working in the defense industry, information security, and information technology.
Tools and methods from complex systems science can have a considerable impact on the way in which the quantitative assessment of economic and financial issues is approached, as discussed in this thesis. First, it is shown that the self-organization of financial markets is a crucial factor in the understanding of their dynamics. In fact, using an agent-based approach, it is argued that financial markets' stylized facts appear only in the self-organized state. Second, the thesis points out the potential of so-called big data science for financial market modeling, investigating how web-driven data can yield a picture of market activities: it has been found that web query volumes anticipate trade volumes. As a third achievement, the metrics developed here for country competitiveness and product complexity are groundbreaking in comparison to mainstream theories of economic growth and technological development. A key element in assessing the intangible variables determining the success of countries in the present globalized economy is the diversification of the productive basket of countries. The comparison between the level of complexity of a country's productive system and economic indicators such as the GDP per capita discloses its hidden growth potential.
Solving nonsmooth optimization (NSO) problems is critical in many practical applications and real-world modeling systems. The aim of this book is to survey various numerical methods for solving NSO problems and to provide an overview of the latest developments in the field. Experts from around the world share their perspectives on specific aspects of numerical NSO. The book is divided into four parts, the first of which considers general methods including subgradient, bundle and gradient sampling methods. In turn, the second focuses on methods that exploit the problem's special structure, e.g. algorithms for nonsmooth DC programming, VU decomposition techniques, and algorithms for minimax and piecewise differentiable problems. The third part considers methods for special problems like multiobjective and mixed integer NSO, and problems involving inexact data, while the last part highlights the latest advancements in derivative-free NSO. Given its scope, the book is ideal for students attending courses on numerical nonsmooth optimization, for lecturers who teach optimization courses, and for practitioners who apply nonsmooth optimization methods in engineering, artificial intelligence, machine learning, and business. Furthermore, it can serve as a reference text for experts dealing with nonsmooth optimization.
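As a toy illustration of the first group of methods mentioned above, the following sketch applies a plain subgradient method with diminishing step sizes to a simple nonsmooth convex function; it is only an assumption-laden example, not material from the book.

    # Toy subgradient method, the simplest class of NSO algorithms surveyed in
    # the book's first part: minimize the nonsmooth convex function
    # f(x) = |x - 3| + 0.5 * |x| using diminishing step sizes.

    def f(x):
        return abs(x - 3) + 0.5 * abs(x)

    def subgradient(x):
        # A valid subgradient of each |.| term (0 is chosen at the kink).
        g1 = 1.0 if x > 3 else (-1.0 if x < 3 else 0.0)
        g2 = 0.5 if x > 0 else (-0.5 if x < 0 else 0.0)
        return g1 + g2

    x, best_x = 10.0, 10.0
    for k in range(1, 201):
        x = x - (1.0 / k) * subgradient(x)     # diminishing step size 1/k
        if f(x) < f(best_x):
            best_x = x                         # subgradient steps need not descend

    print(round(best_x, 3), round(f(best_x), 3))   # approaches x = 3, f = 1.5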
This book presents innovative work in Climate Informatics, a new field that reflects the application of data mining methods to climate science, and shows where this new and fast growing field is headed. Given its interdisciplinary nature, Climate Informatics offers insights, tools and methods that are increasingly needed in order to understand the climate system, an aspect which in turn has become crucial because of the threat of climate change. There has been a veritable explosion in the amount of data produced by satellites, environmental sensors and climate models that monitor, measure and forecast the earth system. In order to meaningfully pursue knowledge discovery on the basis of such voluminous and diverse datasets, it is necessary to apply machine learning methods, and Climate Informatics lies at the intersection of machine learning and climate science. This book grew out of the fourth workshop on Climate Informatics held in Boulder, Colorado in Sep. 2014.
Web-based Support Systems (WSS) are an emerging multidisciplinary research area in which one studies the support of human activities with the Web as the common platform, medium and interface. The Internet affects every aspect of our modern life. Moving support systems online is an increasing trend in many research domains. One of the goals of WSS research is to extend the human physical limitation of information processing in the information age. Research on WSS is motivated by the challenges and opportunities arising from the Internet. The availability, accessibility and flexibility of information, as well as the tools to access this information, lead to a vast number of opportunities. However, there are also many challenges we face. For instance, we have to deal with more complex tasks, as there are increasing demands for quality and productivity. WSS research is a natural evolution of the studies on various computerized support systems such as Decision Support Systems (DSS), Computer Aided Design (CAD), and Computer Aided Software Engineering (CASE). Recent advancements in computer and Web technologies make the implementation of WSS more feasible. Nowadays, it is rare to see a system without some type of Web interaction. Research on WSS is classified into four groups, one of which is WSS for specific domains.
This carefully edited and reviewed volume addresses the increasingly popular demand for seeking more clarity in the data that we are immersed in. It offers excellent examples of the intelligent ubiquitous computation, as well as recent advances in systems engineering and informatics. The content represents state-of-the-art foundations for researchers in the domain of modern computation, computer science, system engineering and networking, with many examples that are set in industrial application context. The book includes the carefully selected best contributions to APCASE 2014, the 2nd Asia-Pacific Conference on Computer Aided System Engineering, held February 10-12, 2014 in South Kuta, Bali, Indonesia. The book consists of four main parts that cover data-oriented engineering science research in a wide range of applications: computational models and knowledge discovery; communications networks and cloud computing; computer-based systems; and data-oriented and software-intensive systems.
Describing novel mathematical concepts for recommendation engines, Realtime Data Mining: Self-Learning Techniques for Recommendation Engines features a sound mathematical framework unifying approaches based on control and learning theories, tensor factorization, and hierarchical methods. Furthermore, it presents promising results of numerous experiments on real-world data. The area of realtime data mining is currently developing at an exceptionally dynamic pace, and realtime data mining systems are the counterpart of today's "classic" data mining systems. Whereas the latter learn from historical data and then use it to deduce necessary actions, realtime analytics systems learn and act continuously and autonomously. In the vanguard of these new analytics systems are recommendation engines. They are principally found on the Internet, where all information is available in realtime and an immediate feedback is guaranteed. This monograph appeals to computer scientists and specialists in machine learning, especially from the area of recommender systems, because it conveys a new way of realtime thinking by considering recommendation tasks as control-theoretic problems. Realtime Data Mining: Self-Learning Techniques for Recommendation Engines will also interest application-oriented mathematicians because it consistently combines some of the most promising mathematical areas, namely control theory, multilevel approximation, and tensor factorization.
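The following hedged sketch (not the book's framework) illustrates the "learn and act continuously" idea with an incremental matrix-factorization recommender that updates its user and item factors immediately after every observed rating event, rather than retraining on historical data.

    # Hedged sketch of realtime recommendation: online SGD updates of a
    # matrix-factorization model, applied one feedback event at a time.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 5, 4, 2
    U = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    lr, reg = 0.05, 0.02

    def feedback(user, item, rating):
        """Single online SGD step on one (user, item, rating) event."""
        err = rating - U[user] @ V[item]
        u_old = U[user].copy()
        U[user] += lr * (err * V[item] - reg * U[user])
        V[item] += lr * (err * u_old - reg * V[item])

    def recommend(user, top_n=2):
        return np.argsort(U[user] @ V.T)[::-1][:top_n]

    # events arrive one by one, as in a realtime system
    for user, item, rating in [(0, 1, 5), (0, 2, 1), (1, 1, 4), (0, 1, 5)]:
        feedback(user, item, rating)
    print("items for user 0:", recommend(0))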
This book constitutes the refereed proceedings of the Third International Conference on Intelligence Science, ICIS 2018, held in Beijing, China, in November 2018. The 44 full papers and 5 short papers presented were carefully reviewed and selected from 85 submissions. They deal with key issues in intelligence science and have been organized in the following topical sections: brain cognition; machine learning; data intelligence; language cognition; perceptual intelligence; intelligent robots; fault diagnosis; and ethics of artificial intelligence.
This volume directly addresses the complexities involved in data mining and the development of new algorithms, built on an underlying theory consisting of linear and non-linear dynamics, data selection, filtering, and analysis, while including analytical projection and prediction. The results derived from the analysis are then further manipulated such that a visual representation is derived with an accompanying analysis. The book brings very current methods of analysis to the forefront of the discipline, provides researchers and practitioners with the mathematical underpinnings of the algorithms, and gives the non-specialist a visual representation through which a valid understanding of the adaptive system can be attained. The book presents, as a collection of documents, sophisticated and meaningful methods that can be immediately understood and applied to various other disciplines of research. The content is composed of chapters addressing: An application of adaptive systems methodology in the field of post-radiation treatment involving brain volume differences in children; A new adaptive system for computer-aided diagnosis of the characterization of lung nodules; A new method of multi-dimensional scaling with minimal loss of information; A description of the semantics of point spaces with an application on the analysis of terrorist attacks in Afghanistan; The description of a new family of meta-classifiers; A new method of optimal informational sorting; A general method for the unsupervised adaptive classification for learning; and the presentation of two new theories, one in target diffusion and the other in twisting theory.
Data warehousing is an important topic that is of interest to both the industry and the knowledge engineering research communities. Both data mining and data warehousing technologies have similar objectives and can potentially benefit from each other's methods to facilitate knowledge discovery. Improving Knowledge Discovery through the Integration of Data Mining Techniques provides insight concerning the integration of data mining and data warehousing for enhancing the knowledge discovery process. Decision makers, academicians, researchers, advanced-level students, technology developers, and business intelligence professionals will find this book useful in furthering their research exposure to relevant topics in knowledge discovery.
The topic of preferences is a new branch of machine learning and data mining, and it has attracted considerable attention in artificial intelligence research in recent years. It involves learning from observations that reveal information about the preferences of an individual or a class of individuals. Representing and processing knowledge in terms of preferences is appealing as it allows one to specify desires in a declarative way, to combine qualitative and quantitative modes of reasoning, and to deal with inconsistencies and exceptions in a flexible manner. And, generalizing beyond training data, models thus learned may be used for preference prediction. This is the first book dedicated to this topic, and the treatment is comprehensive. The editors first offer a thorough introduction, including a systematic categorization according to learning task and learning technique, along with a unified notation. The first half of the book is organized into parts on label ranking, instance ranking, and object ranking; while the second half is organized into parts on applications of preference learning in multiattribute domains, information retrieval, and recommender systems. The book will be of interest to researchers and practitioners in artificial intelligence, in particular machine learning and data mining, and in fields such as multicriteria decision-making and operations research.
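As a minimal illustration of the object-ranking task described above (not a method from the book), the sketch below fits a linear utility to pairwise preference observations with a perceptron-style update and then ranks the objects.

    # Minimal object-ranking sketch: each training observation says "object a
    # is preferred to object b", and a linear utility w.x is adjusted until it
    # respects the observed pairs. Purely illustrative.

    import numpy as np

    objects = {                                # hypothetical feature vectors
        "a": np.array([1.0, 0.0]),
        "b": np.array([0.5, 0.5]),
        "c": np.array([0.0, 1.0]),
    }
    preferences = [("a", "b"), ("b", "c"), ("a", "c")]   # a > b, b > c, a > c

    w = np.zeros(2)
    for _ in range(20):
        for better, worse in preferences:
            if w @ objects[better] <= w @ objects[worse]:
                w += objects[better] - objects[worse]     # nudge the utility

    ranking = sorted(objects, key=lambda o: w @ objects[o], reverse=True)
    print("learned utility:", w, "ranking:", ranking)     # expect a, b, c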
Community structure is a salient structural characteristic of many real-world networks. Communities are generally hierarchical, overlapping, multi-scale and coexist with other types of structural regularities of networks. This poses major challenges for conventional methods of community detection. This book will comprehensively introduce the latest advances in community detection, especially the detection of overlapping and hierarchical community structures, the detection of multi-scale communities in heterogeneous networks, and the exploration of multiple types of structural regularities. These advances have been successfully applied to analyze large-scale online social networks, such as Facebook and Twitter. This book provides readers a convenient way to grasp the cutting edge of community detection in complex networks.
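For readers who want to experiment, the short example below runs plain modularity-based community detection on a classic benchmark graph, assuming the third-party networkx library is available; it does not implement the overlapping or hierarchical methods the book focuses on.

    # Quick community-detection example on Zachary's karate club network,
    # using modularity maximization from networkx (a conventional method,
    # not one of the book's algorithms).

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity

    graph = nx.karate_club_graph()                 # classic benchmark graph
    communities = greedy_modularity_communities(graph)

    for i, members in enumerate(communities):
        print(f"community {i}: {sorted(members)}")
    print("modularity:", modularity(graph, communities))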
Organizations rely on data mining and warehousing technologies to store, integrate, query, and analyze essential data. Strategic Advancements in Utilizing Data Mining and Warehousing Technologies: New Concepts and Developments discusses developments in data mining and warehousing as well as techniques for successful implementation. Contributions investigate theoretical queries along with real-world applications, providing a useful foundation for academicians and practitioners to research new techniques and methodologies.
This book introduces the concepts, applications and development of data science in the telecommunications industry by focusing on advanced machine learning and data mining methodologies in the wireless networks domain. Mining Over Air describes the problems and their solutions for wireless network performance and quality, device quality readiness and returns analytics, wireless resource usage profiling, network traffic anomaly detection, intelligence-based self-organizing networks, telecom marketing, social influence, and other important applications in the telecom industry. Written by authors who study big data analytics in wireless networks and telecommunication markets from both industrial and academic perspectives, the book targets the pain points in telecommunication networks and markets through big data. Designed for both practitioners and researchers, the book explores the intersection between the development of new engineering technology and uses data from the industry to understand consumer behavior. It combines engineering savvy with insights about human behavior. Engineers will understand how the data generated from the technology can be used to understand consumer behavior, and social scientists will get a better understanding of the data generation process.
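As a hedged illustration of the network traffic anomaly detection use case mentioned above (not the book's method), the snippet below flags hours whose traffic volume deviates from a rolling mean by more than three standard deviations.

    # Simple rolling z-score anomaly detector over hourly traffic volumes
    # (hypothetical data); real telecom systems use far richer models.

    import statistics

    hourly_mb = [110, 120, 115, 118, 122, 119, 117, 121, 640, 116, 118, 123]
    window = 6

    for t in range(window, len(hourly_mb)):
        history = hourly_mb[t - window:t]
        mean = statistics.mean(history)
        std = statistics.pstdev(history) or 1.0     # avoid division by zero
        z = (hourly_mb[t] - mean) / std
        if abs(z) > 3:
            print(f"hour {t}: {hourly_mb[t]} MB looks anomalous (z = {z:.1f})")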