The proceedings from the eighth KMO conference present the findings of this international meeting, which brought together researchers and developers from industry and academia to report on the latest scientific and technical advances in knowledge management in organizations. The conference provided an international forum for authors to present and discuss research on the role of knowledge management for innovative services in industry, to shed light on recent advances in social and big data computing for KM, and to identify future directions for research on knowledge management in service innovation and on how cloud computing can be used to address many of the issues currently facing KM in academia and industry.
In this third edition of Vehicle Accident Analysis & Reconstruction Methods, Raymond M. Brach and R. Matthew Brach have expanded and updated their essential work for professionals in the field of accident reconstruction. Most accidents can be reconstructed effectively using calculations together with investigative and experimental data: the authors present the latest scientific, engineering, and mathematical reconstruction methods, providing a firm scientific foundation for practitioners. Accidents that cannot be reconstructed using the methods in this book are rare. In recent decades, the field of crash reconstruction has been transformed by technology. The advent of event data recorders (EDRs) on vehicles signaled the era of modern crash reconstruction, which uses the same physical evidence that was previously available as well as electronic data measured and captured before, during, and after the collision. Demand for more professional and accurate reconstruction has grown as more crash data become available from vehicle sensors. The third edition of this essential work includes a new chapter on the use of EDRs as well as examples using EDR data in accident reconstruction. Early chapters present foundational material necessary for understanding vehicle collisions and vehicle motion; later chapters present applications of the methods and include example reconstructions. As a result, Vehicle Accident Analysis & Reconstruction Methods remains the definitive resource in accident reconstruction.
This book presents an overview of a variety of contemporary statistical, mathematical and computer science techniques used to further knowledge in the medical domain. The authors focus on applying data mining to medicine, including mining the sets of clinical data typically found in patients' medical records, image mining, medical mining, and data mining and machine learning applied to generic genomic data. This work also introduces modeling of the behavior of cancer cells, multi-scale computational models, and simulations of blood flow through vessels using patient-specific models. The authors cover the different imaging techniques used to generate patient-specific models, which are then used in computational fluid dynamics software to analyze fluid flow. Case studies are provided at the end of each chapter. Professionals and researchers with quantitative backgrounds will find Computational Medicine in Data Mining and Modeling useful as a reference. Advanced-level students studying computer science, mathematics, statistics and biomedicine will also find this book valuable as a reference or secondary textbook.
This book focuses on recent technical advancements and state-of-the-art technologies for analyzing characteristic features and probabilistic modelling of complex social networks and decentralized online network architectures. Such research has applications in surveillance and privacy, fraud analysis, cyber forensics, propaganda campaigns, and online social networks such as Facebook. The text illustrates the benefits of using advanced social network analysis methods through application case studies based on practical test results from synthetic and real-world data. This book will appeal to researchers and students working in these areas.
Imagine yourself as a military officer in a conflict zone trying to identify locations of weapons caches supporting road-side bomb attacks on your country's troops. Or imagine yourself as a public health expert trying to identify the location of contaminated water that is causing diarrheal diseases in a local population. Geospatial abduction is a new technique introduced by the authors that allows such problems to be solved. Geospatial Abduction provides the mathematics underlying geospatial abduction and the algorithms to solve such problems in practice; it has wide applicability and can be used by practitioners and researchers in many different fields. Real-world applications of geospatial abduction to military problems are included, and compelling examples drawn from domains as diverse as criminology, epidemiology and archaeology are covered as well. The book also includes access to a dedicated website on geospatial abduction hosted by the University of Maryland. Geospatial Abduction targets practitioners working in general AI, game theory, linear programming, data mining, machine learning, and more. Those working in computer science, mathematics, geoinformation, and the geological and biological sciences will also find this book valuable.
This book not only discusses the important topics in the areas of machine learning and combinatorial optimization, it also combines them; this was decisive for the choice of material included in the book and its order of presentation. Decision trees are a popular method of classification as well as of knowledge representation. At the same time, they are easy to implement as the building blocks of an ensemble of classifiers. Admittedly, however, constructing a near-optimal decision tree is a very complex process. The good results typically achieved by ant colony optimization algorithms on combinatorial optimization problems suggest that the approach can also be used to construct decision trees effectively. The underlying rationale is that both problem classes can be represented as graphs, which makes it possible to consider a larger spectrum of solutions than those produced by a greedy heuristic. Moreover, ant colony optimization algorithms can be used to advantage when building ensembles of classifiers. This book is a combination of a research monograph and a textbook: it can be used in graduate courses, but is also of interest to researchers, both specialists in machine learning and those applying machine learning methods to problems from any field of R&D. A minimal sketch of the pheromone-guided search idea follows below.
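The sketch below is illustrative only and is not the book's algorithm: it uses an ant-colony-style loop (desirability equal to pheromone times information gain, evaporation, and reinforcement of the iteration-best choice) to pick a splitting attribute for a single decision-tree node. All function names, parameters and the toy data are assumptions.

# Minimal sketch (not the book's method): ant-colony-style selection of a
# splitting attribute for one decision-tree node.
import math
import random

def entropy(labels):
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attr):
    base = entropy(labels)
    splits = {}
    for row, y in zip(rows, labels):
        splits.setdefault(row[attr], []).append(y)
    remainder = sum(len(part) / len(labels) * entropy(part) for part in splits.values())
    return base - remainder

def aco_select_attribute(rows, labels, n_ants=20, n_iters=30,
                         evaporation=0.1, alpha=1.0, beta=2.0):
    n_attrs = len(rows[0])
    pheromone = [1.0] * n_attrs
    gain = [information_gain(rows, labels, a) for a in range(n_attrs)]
    best_attr, best_score = None, -1.0
    for _ in range(n_iters):
        # Each ant samples an attribute; desirability = pheromone^alpha * gain^beta.
        weights = [pheromone[a] ** alpha * (gain[a] + 1e-9) ** beta for a in range(n_attrs)]
        iteration_best, iteration_score = None, -1.0
        for _ in range(n_ants):
            attr = random.choices(range(n_attrs), weights=weights)[0]
            score = gain[attr]  # proxy for the quality of the resulting split
            if score > iteration_score:
                iteration_best, iteration_score = attr, score
        # Evaporate, then deposit pheromone on the iteration-best attribute.
        pheromone = [(1 - evaporation) * p for p in pheromone]
        pheromone[iteration_best] += iteration_score
        if iteration_score > best_score:
            best_attr, best_score = iteration_best, iteration_score
    return best_attr

# Toy data: attribute 1 perfectly predicts the label, attribute 0 is noise.
rows = [(random.randint(0, 1), i % 2) for i in range(40)]
labels = [r[1] for r in rows]
print("chosen attribute:", aco_select_attribute(rows, labels))

Running the sketch should almost always report attribute 1, since pheromone accumulates on the attribute with the highest information gain.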
This book contains the combined proceedings of the 4th International Conference on Ubiquitous Computing Application and Wireless Sensor Network (UCAWSN-15) and the 16th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT-15). The combined proceedings present peer-reviewed contributions from academic and industrial researchers in fields including ubiquitous and context-aware computing, context-awareness reasoning and representation, location-awareness services, architectures, protocols and algorithms, and energy management and control of wireless sensor networks. The book includes the latest research results, practical developments and applications in parallel/distributed architectures, wireless networks and mobile computing, formal methods and programming languages, network routing and communication algorithms, database applications and data mining, access control and authorization, and privacy-preserving computation.
This book covers the latest advances in Big Data technologies and provides the readers with a comprehensive review of the state-of-the-art in Big Data processing, analysis, analytics, and other related topics. It presents new models, algorithms, software solutions and methodologies, covering the full data cycle from data gathering to data visualization and interaction, and includes a set of case studies and best practices. New research issues, challenges and opportunities shaping the future agenda in the field of Big Data are also identified and presented throughout the book, which is intended for researchers, scholars, advanced students, software developers and practitioners working at the forefront in their field.
This book covers diverse aspects of advanced computer and communication engineering, focusing specifically on industrial and manufacturing theory and applications of electronics, communications, computing and information technology. Experts in research, industry, and academia present the latest developments in technology, describe applications involving cutting-edge communication and computer systems and explore likely future directions. In addition, access is offered to numerous new algorithms that assist in solving computer and communication engineering problems. The book is based on presentations delivered at ICOCOE 2014, the 1st International Conference on Communication and Computer Engineering. It will appeal to a wide range of professionals in the field, including telecommunication engineers, computer engineers and scientists, researchers, academics and students.
The development of business intelligence has enhanced the visualization of data to inform and facilitate business management and strategizing. Implementing effective data-driven techniques allows advanced reporting tools to be tailored to company-specific issues and challenges. The Handbook of Research on Advanced Data Mining Techniques and Applications for Business Intelligence is a key resource on the latest advancements in business applications and the use of mining software solutions to achieve optimal decision-making and risk management results. Highlighting innovative studies on data warehousing, business activity monitoring, and text mining, this publication is an ideal reference source for research scholars, management faculty, and practitioners.
This book presents the latest research advances in complex network structure analytics based on computational intelligence (CI) approaches, particularly evolutionary optimization. Most if not all network issues are actually optimization problems, which are mostly NP-hard and challenge conventional optimization techniques. To effectively and efficiently solve these hard optimization problems, CI-based network structure analytics offers significant advantages over conventional network analytics techniques. Meanwhile, using CI techniques may facilitate smart decision making by providing multiple options to choose from, whereas conventional methods can offer a decision maker only a single suggestion. In addition, CI-based network structure analytics can greatly facilitate network modeling and analysis, and employing CI techniques to resolve network issues is likely to inspire other fields of study, such as recommender systems and systems biology, which will in turn expand CI's scope and applications. As a comprehensive text, the book covers a range of key topics, including network community discovery, evolutionary optimization, network structure balance analytics, network robustness analytics, community-based personalized recommendation, influence maximization, and biological network alignment; a small evolutionary-optimization sketch follows below. Offering a rich blend of theory and practice, the book is suitable for students, researchers and practitioners interested in network analytics and computational intelligence, both as a textbook and as a reference work.
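As a concrete illustration of the evolutionary-optimization angle described above, the following sketch runs a tiny genetic algorithm that searches for a two-community split of a toy graph by maximizing Newman's modularity. It is not taken from the book; the graph, operators and parameter values are all assumptions.

# Minimal sketch: a genetic algorithm for two-community detection via modularity.
import random

# Adjacency of a toy undirected graph with two obvious communities {0,1,2} and {3,4,5}.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
edge_set = set(edges) | {(v, u) for u, v in edges}
n_nodes = 6
degree = [0] * n_nodes
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
m = len(edges)

def modularity(assignment):
    # Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)
    q = 0.0
    for i in range(n_nodes):
        for j in range(n_nodes):
            if assignment[i] != assignment[j]:
                continue
            a_ij = 1.0 if (i, j) in edge_set else 0.0
            q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=modularity, reverse=True)
        parents = ranked[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, n_nodes - 1)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        population = parents + children
    best = max(population, key=modularity)
    return best, modularity(best)

best, q = evolve()
print("community labels:", best, "modularity:", round(q, 3))

On this toy graph the search converges to the split {0,1,2} versus {3,4,5} with a modularity of roughly 0.36; the same loop would also return several near-optimal candidates, which is the "multiple options" benefit mentioned above.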
Vast amounts of data are nowadays collected, stored and processed in an effort to assist in making a variety of administrative and governmental decisions. These innovative steps considerably improve the speed, effectiveness and quality of decisions. Analyses are increasingly performed by data mining and profiling technologies that statistically and automatically determine patterns and trends. However, when such practices lead to unwanted or unjustified selections, they may result in unacceptable forms of discrimination. Processing vast amounts of data may lead to situations in which data controllers know many of the characteristics, behaviors and whereabouts of people; in some cases, analysts might know more about individuals than those individuals know about themselves. Judging people by their digital identities sheds a different light on our views of privacy and data protection. This book discusses discrimination and privacy issues related to data mining and profiling practices and provides technological and regulatory solutions to the problems that arise in these innovative contexts. It explains that common measures for mitigating privacy and discrimination, such as access controls and anonymity, fail to properly resolve privacy and discrimination concerns. Therefore, new solutions focusing on technology design, transparency and accountability are called for and set forth.
This book addresses the impact of the various types of services, such as infrastructure, platforms, software, and business processes, that cloud computing and Big Data have introduced into business. Featuring chapters that discuss effective and efficient approaches to dealing with the inherent complexity and increasing demands of data science, it covers a variety of application domains, with case studies presented by data management and analysis experts. Covered applications include banking, social networks, bioinformatics, healthcare, transportation and criminology. Highlighting the importance of Big Data management and analysis for various applications, the book provides the reader with an understanding of how data management and analysis are adapted to these applications. It will appeal to researchers and professionals in the field.
This book presents modeling methods and algorithms for data-driven prediction and forecasting of practical industrial processes, employing machine learning and statistical methodologies. Related case studies, especially on energy systems in the steel industry, are also addressed and analyzed. The case studies in this volume are rooted both in classical data-driven prediction problems and in industrial practice requirements. Detailed figures and tables demonstrate the effectiveness and generalization of the methods addressed, and the classification of the prediction problems comes from practical industrial demands rather than from academic categories. As such, readers will learn the corresponding approaches for resolving their own industrial technical problems. Although the contents of this book and its case studies come from the steel industry, the techniques can also be used in other process industries. This book appeals to students, researchers, and professionals within the machine learning and data analysis and mining communities.
This book presents a collection of representative and novel work in the field of data mining, knowledge discovery, clustering and classification, based on expanded and reworked versions of a selection of the best papers originally presented in French at the EGC 2014 and EGC 2015 conferences held in Rennes (France) in January 2014 and Luxembourg in January 2015. The book is in three parts: The first four chapters discuss optimization considerations in data mining. The second part explores specific quality measures, dissimilarities and ultrametrics. The final chapters focus on semantics, ontologies and social networks. Written for PhD and MSc students, as well as researchers working in the field, it addresses both theoretical and practical aspects of knowledge discovery and management.
This book is intended to spark a discourse on, and contribute to finding a clear consensus in, the debate between conceptualizing a knowledge strategy and planning a knowledge strategy. It explores the complex relationship between the notions of knowledge and strategy in the business context, one that is of practical importance to companies. After reviewing the extant literature, the book shows how the concept of knowledge strategies can be seen as a new perspective for exploring business strategies. It proposes a new approach that clarifies how planned and emergent knowledge strategies allow companies to make projections into the uncertain and unpredictable future that dominates today's economy.
The rapid increase in computing power and communication speed, coupled with the availability of computer storage facilities, has led to a new age of multimedia applications. Multimedia is practically everywhere and all around us; we can feel its presence in almost all applications, ranging from online video databases and IPTV to interactive multimedia and, more recently, multimedia-based social interaction. These new, growing applications require high-quality data storage, easy access to multimedia content and reliable delivery. Moving ever closer to commercial deployment has also raised awareness of security and intellectual property management issues. All the aforementioned requirements have resulted in higher demands on various areas of research (signal processing, image/video processing and analysis, communication protocols, content search, watermarking, etc.). This book covers the most prominent research issues in multimedia and is divided into four main sections: i) content-based retrieval, ii) storage and remote access, iii) watermarking and copyright protection, and iv) multimedia applications. Chapter 1 of the first section presents an analysis of how color is used and why it is crucial in today's multimedia applications. In Chapter 2 the authors give an overview of advances in video abstraction for fast content browsing, transmission, retrieval and skimming in large video databases, and Chapter 3 extends the discussion of video summarization even further. The content retrieval problem is tackled in Chapter 4, which describes a novel method for producing meaningful segments suitable for MPEG-7 description based on binary partition trees (BPTs).
This book discusses the psychological traits associated with drug consumption through the statistical analysis of a new database with information on 1,885 respondents and their use of 18 drugs. After reviewing published work on the psychological profiles of drug users and describing the data mining and machine learning methods used, it demonstrates that personality traits (the five factor model, impulsivity, and sensation seeking), together with simple demographic data, make it possible to predict the risk of consumption of individual drugs with a sensitivity and specificity above 70% for most drugs. It also analyzes the correlations between use of different substances and describes the groups of drugs with correlated use, identifying significant differences in personality profiles for users of different drugs. The book is intended for advanced undergraduates and first-year PhD students, as well as researchers and practitioners. Although no previous knowledge of machine learning, advanced data mining concepts or the modern psychology of personality is assumed, familiarity with basic statistics and some experience with probabilities will be helpful. For a more detailed introduction to statistical methods, the book provides recommendations for undergraduate textbooks.
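To make the reported evaluation criteria concrete, the sketch below trains a simple classifier on synthetic stand-ins for personality and demographic features and reports sensitivity and specificity, the two measures quoted above. The data, feature layout and model choice are assumptions, not the book's actual pipeline.

# Minimal sketch (illustrative, not the book's pipeline): predict use of one drug
# from personality-style features and report sensitivity and specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for five-factor scores, impulsivity, sensation seeking, age.
X = rng.normal(size=(n, 8))
# Synthetic label: higher sensation seeking / impulsivity -> higher risk of use.
logits = 1.5 * X[:, 6] + 1.0 * X[:, 5] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)   # true positive rate: users correctly flagged
specificity = tn / (tn + fp)   # true negative rate: non-users correctly cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

Reporting both measures, rather than raw accuracy, is what makes the "above 70% for most drugs" claim meaningful when the classes are imbalanced.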
This book explores how PPPM (predictive, preventive and personalized medicine), clinical practice, and basic research could be best served by information technology (IT), using hepatocellular carcinoma (HCC) as a use case. The subject is approached through four interrelated tasks: (1) review clinical practices relating to HCC; (2) propose an IT system for HCC, including clinical decision support and research needs; (3) determine how a clinical liver cancer center can contribute; and (4) examine the enhancements and impact that the first three tasks will have on the management of HCC. An IT System for Personalized Medicine (ITS-PM) for HCC will provide the means to identify and determine the relative value of a wide number of variables, including clinical assessment of the patient (functional status, liver function, degree of cirrhosis, and comorbidities); tumor biology at the molecular, genetic and anatomic levels; tumor burden and individual patient response; and medical and operative treatments and their outcomes.
This book is the proceedings of the 3rd World Conference on Soft Computing (WCSC), which was held in San Antonio, TX, USA, on December 16-18, 2013. It presents state-of-the-art theory and applications of soft computing together with an in-depth discussion of current and future challenges in the field, providing readers with a 360-degree view of soft computing. Topics range from fuzzy sets and fuzzy logic to fuzzy mathematics, neuro-fuzzy systems, fuzzy control, decision making in fuzzy environments, image processing and many more. The book is dedicated to Lotfi A. Zadeh, a renowned specialist in signal analysis and control systems research who proposed the idea of fuzzy sets, in which an element may have partial membership, in the early 1960s, followed by the idea of fuzzy logic, in which a statement can be true only to a certain degree, with degrees described by numbers in the interval [0,1]. The performance of fuzzy systems can often be improved with the help of optimization techniques, e.g. evolutionary computation, and by endowing the corresponding system with the ability to learn, e.g. by combining fuzzy systems with neural networks. The resulting "consortium" of fuzzy, evolutionary, and neural techniques is known as soft computing and is the main focus of this book.
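For readers new to the concepts mentioned in the dedication, here is a minimal sketch of partial membership and graded truth: a triangular membership function plus Zadeh's min/max/complement operators on degrees in [0,1]. The linguistic terms and breakpoints are illustrative assumptions, not taken from the proceedings.

# Minimal sketch: fuzzy membership and basic fuzzy-logic operators.
def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling to c; 0 outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_and(u, v):   # Zadeh's t-norm: minimum
    return min(u, v)

def fuzzy_or(u, v):    # Zadeh's t-conorm: maximum
    return max(u, v)

def fuzzy_not(u):      # complement
    return 1.0 - u

# Example: degree to which a temperature of 23 degrees is "warm" and not "hot".
warm = triangular(23.0, 15.0, 22.0, 30.0)
hot = triangular(23.0, 25.0, 35.0, 45.0)
print(round(fuzzy_and(warm, fuzzy_not(hot)), 3))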
In a world increasingly awash in information, the field of information science has become an umbrella stretched so broadly as to threaten its own integrity. While traditional information science seeks to make sense of information systems against a social, cultural, and political backdrop, there is a lack of current literature exploring how such transactions can exert force in the other direction, that is, how information systems mold the individuals who use them and society as a whole. The Handbook of Research on Innovations in Information Retrieval, Analysis, and Management explores new developments in information and communication technologies and examines how complex information systems interact with and affect one another, woven into the fabric of an information-rich world. Touching on such topics as machine learning, research methodologies, and mobile data aggregation, this book targets an audience of researchers, developers, managers, strategic planners, and advanced-level students. The handbook contains chapters on topics including, but not limited to, customer experience management, information systems planning, cellular networking, public policy development, and knowledge governance.
This book offers an introduction to artificial adaptive systems and a general model of the relationships between data and the algorithms used to analyze them. It then describes artificial neural networks as a subclass of artificial adaptive systems and reports on the backpropagation algorithm, while also identifying an important connection between supervised and unsupervised artificial neural networks. The book's primary focus is the auto contractive map, an unsupervised artificial neural network that employs a fixed-point method rather than traditional energy minimization. This is a powerful tool for understanding, associating and transforming data, as demonstrated in the numerous examples presented here. A supervised version of the auto contractive map is also introduced as an outstanding method for recognizing digits and defects. In closing, the book walks readers through the theory and examples of how the auto contractive map can be used in conjunction with another artificial neural network, the "spin-net," as a dynamic form of auto-associative memory.
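The auto contractive map itself is not reproduced here, but the fixed-point idea it relies on can be illustrated in a few lines: iterate a contraction mapping until the state stops changing, rather than explicitly minimizing an energy function. The mapping, tolerance and function names below are assumptions chosen only for illustration.

# Minimal sketch of a fixed-point method: iterate a contraction until convergence.
import math

def iterate_to_fixed_point(f, x0, tol=1e-10, max_iters=1000):
    x = x0
    for _ in range(max_iters):
        x_next = f(x)
        if abs(x_next - x) < tol:   # converged: f(x) is (numerically) equal to x
            return x_next
        x = x_next
    return x

# Example contraction: x -> cos(x) has a unique fixed point near 0.739.
print(iterate_to_fixed_point(math.cos, 1.0))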
This book focuses on the development of a theory of info-dynamics to support the theory of info-statics within the general theory of information. It establishes the rational foundations of information dynamics and shows how these foundations relate to general socio-natural dynamics from the primary to the derived categories in universal existence, and from the potential to the actual in the ontological space. It also shows how these foundations relate to socio-natural dynamics from the potential to the possible, giving rise to the possibility space with possibilistic thinking; from the possible to the probable, giving rise to the probability space with probabilistic thinking; and from the probable to the actual, giving rise to the space of knowledge with paradigms of thought in the epistemological space. The theory is developed to explain general dynamics through various transformations in the quality-quantity space in relation to the nature of information flows at each variety transformation. It explains the past-present-future connectivity of the evolving information structure in a manner that illuminates the transformation problem and its solution in the never-ending information production within the matter-energy space under socio-natural technologies, connecting it to the theory of info-statics. The theoretical framework is developed with analytical tools based on the principle of opposites, systems of actual-potential polarities, and negative-positive dualities under different time-structures, using category theory, the fuzzy paradigm of thought and game theory in the fuzzy-stochastic cost-benefit space. The rational foundations are enhanced with categorial analytics. The value of the theory of info-dynamics is demonstrated in the explanatory and prescriptive structures of the transformations of varieties and categorial varieties at each point in time and over time, from parent-offspring sequences. It constitutes a general explanation of the dynamics of information-knowledge production through info-processes and info-processors induced by a socio-natural infinite set of technologies in the construction-destruction space.
This book presents a contemporary view of the role of information quality in information fusion and decision making, and provides a formal foundation and the implementation strategies required for dealing with insufficient information quality when building fusion systems for decision making. Information fusion is the process of gathering, processing, and combining large amounts of information from multiple and diverse sources, ranging from physical sensors to human intelligence reports and social media. That data and information may be unreliable, of low fidelity or insufficient resolution, contradictory, fake and/or redundant. Sources may provide unverified reports obtained from other sources, resulting in correlations and biases. The success of fusion processing depends on how well the knowledge produced by the processing chain represents reality, which in turn depends on how adequate the data are, how good and adequate the models used are, and how accurate, appropriate or applicable prior and contextual knowledge is. With contributions by leading experts, this book provides an unparalleled understanding of the problem of information quality in information fusion and decision making for researchers and professionals in the field.
You may like...
Big Data and Smart Service Systems, by Xiwei Liu, Rangachari Anand, … (Hardcover)
Opinion Mining and Text Analytics on…, by Pantea Keikhosrokiani, Moussa Pourya Asl (Hardcover), R9,276
Implementation of Machine Learning…, by Veljko Milutinović, Nenad Mitić, … (Hardcover), R6,648
Transforming Businesses With Bitcoin…, by Dharmendra Singh Rajput, Ramjeevan Singh Thakur, … (Hardcover), R5,938