This book presents a unified framework, based on specialized evolutionary algorithms, for the global induction of various types of classification and regression trees from data. The resulting univariate or oblique trees are significantly smaller than those produced by standard top-down methods, an aspect that is critical for the interpretation of mined patterns by domain analysts. The approach presented here is extremely flexible and can easily be adapted to specific data mining applications, e.g. cost-sensitive model trees for financial data or multi-test trees for gene expression data. The global induction can be efficiently applied to large-scale data without the need for extraordinary resources. With a simple GPU-based acceleration, datasets composed of millions of instances can be mined in minutes. In the event that the size of the datasets makes even the fastest in-memory computing impossible, the Spark-based implementation on computer clusters, which offers impressive fault tolerance and scalability potential, can be applied.
This book focuses on recent technical advancements and state-of-the-art technologies for analyzing characteristic features and probabilistic modelling of complex social networks and decentralized online network architectures. Such research results in applications related to surveillance and privacy, fraud analysis, cyber forensics, and propaganda campaigns, as well as to online social networks such as Facebook. The text illustrates the benefits of using advanced social network analysis methods through application case studies based on practical test results from synthetic and real-world data. This book will appeal to researchers and students working in these areas.
The Semantic Web is characterized by the existence of a very large number of distributed semantic resources, which together define a network of ontologies. These ontologies in turn are interlinked through a variety of different meta-relationships such as versioning, inclusion, and many more. This scenario is radically different from the relatively narrow contexts in which ontologies have been traditionally developed and applied, and thus calls for new methods and tools to effectively support the development of novel network-oriented semantic applications. This book by Suarez-Figueroa et al. provides the necessary methodological and technological support for the development and use of ontology networks, which ontology developers need in this distributed environment. After an introduction, in its second part the authors describe the NeOn Methodology framework. The book's third part details the key activities relevant to the ontology engineering life cycle. For each activity, a general introduction, methodological guidelines, and practical examples are provided. The fourth part then presents a detailed overview of the NeOn Toolkit and its plug-ins. Lastly, case studies from the pharmaceutical and the fishery domain round out the work. The book primarily addresses two main audiences: students (and their lecturers) who need a textbook for advanced undergraduate or graduate courses on ontology engineering, and practitioners who need to develop ontologies in particular or Semantic Web-based applications in general. Its educational value is maximized by its structured approach to explaining guidelines and combining them with case studies and numerous examples. The description of the open source NeOn Toolkit provides an additional asset, as it allows readers to easily evaluate and apply the ideas presented.
This book provides a critical examination of how the choice of what to believe is represented in the standard model of belief change. In particular the use of possible worlds and infinite remainders as objects of choice is critically examined. Descriptors are introduced as a versatile tool for expressing the success conditions of belief change, addressing both local and global descriptor revision. The book presents dynamic descriptors such as Ramsey descriptors that convey how an agent's beliefs tend to be changed in response to different inputs. It also explores sentential revision and demonstrates how local and global operations of revision by a sentence can be derived as a special case of descriptor revision. Lastly, the book examines revocation, a generalization of contraction in which a specified sentence is removed in a process that may possibly also involve the addition of some new information to the belief set.
This book highlights technical advances in knowledge management and their applications across a diverse range of domains. It explores the applications of knowledge computing methodologies in image processing, pattern recognition, health care and industrial contexts. The chapters also examine the knowledge engineering process involved in information management. Given its interdisciplinary nature, the book covers methods for identifying and acquiring valid, potentially useful knowledge sources. The ideas presented in the respective chapters illustrate how to effectively apply the perspectives of knowledge computing in specialized domains.
This book is an authoritative handbook of current topics, technologies and methodological approaches that may be used for the study of scholarly impact. The included methods cover a range of fields such as statistical sciences, scientific visualization, network analysis, text mining, and information retrieval. The techniques and tools enable researchers to investigate metric phenomena and to assess scholarly impact in new ways. Each chapter offers an introduction to the selected topic and outlines how the topic, technology or methodological approach may be applied to metrics-related research. Comprehensive and up-to-date, Measuring Scholarly Impact: Methods and Practice is designed for researchers and scholars interested in informetrics, scientometrics, and text mining. The hands-on perspective is also beneficial to advanced-level students in fields from computer science and statistics to information science.
This in-depth book addresses a key void in the literature surrounding the Internet of Things (IoT) and health. By systematically evaluating the benefits of mobile, wireless, and sensor-based IoT technologies when used in health and wellness contexts, the book sheds light on the next frontier for healthcare delivery. These technologies generate data with significant potential to enable superior care delivery, self-empowerment, and wellness management. Collecting valuable insights and recommendations in one accessible volume, chapter authors identify key areas in health and wellness where IoT can be used, highlighting the benefits, barriers, and facilitators of these technologies as well as suggesting areas for improvement in current policy and regulations. Four overarching themes provide a suitable setting to examine the critical insights presented in the 31 chapters: mobile- and sensor-based solutions; opportunities to incorporate critical aspects of analytics to provide superior insights and thus support better decision-making; critical issues around aspects of IoT in healthcare contexts; and applications of portals in healthcare contexts. A comprehensive overview that introduces the critical issues regarding the role of IoT technologies for health, Delivering Superior Health and Wellness Management with IoT and Analytics paves the way for scholars, practitioners, students, and other stakeholders to understand how to substantially improve health and wellness management on a global scale.
In this third edition of Vehicle Accident Analysis & Reconstruction Methods, Raymond M. Brach and R. Matthew Brach have expanded and updated their essential work for professionals in the field of accident reconstruction. Most accidents can be reconstructed effectively using calculations and investigative and experimental data: the authors present the latest scientific, engineering, and mathematical reconstruction methods, providing a firm scientific foundation for practitioners. Accidents that cannot be reconstructed using the methods in this book are rare. In recent decades, the field of crash reconstruction has been transformed through the use of technology. The advent of event data recorders (EDRs) on vehicles signaled the era of modern crash reconstruction, which utilizes the same physical evidence that was previously available as well as electronic data that are measured/captured before, during, and after the collision. There is increased demand for more professional and accurate reconstruction as more crash data becomes available from vehicle sensors. The third edition of this essential work includes a new chapter on the use of EDRs as well as examples using EDR data in accident reconstruction. Early chapters feature foundational material that is necessary for the understanding of vehicle collisions and vehicle motion; later chapters present applications of the methods and include example reconstructions. As a result, Vehicle Accident Analysis & Reconstruction Methods remains the definitive resource in accident reconstruction.
This book covers the latest advances in Big Data technologies and provides the readers with a comprehensive review of the state-of-the-art in Big Data processing, analysis, analytics, and other related topics. It presents new models, algorithms, software solutions and methodologies, covering the full data cycle, from data gathering to visualization and interaction, and includes a set of case studies and best practices. New research issues, challenges and opportunities shaping the future agenda in the field of Big Data are also identified and presented throughout the book, which is intended for researchers, scholars, advanced students, software developers and practitioners working at the forefront in their field.
Knowledge Discovery and Data Mining (KDD) is dedicated to exploring meaningful information from a large volume of data. "Knowledge Discovery and Data Mining: Challenges and Realities" is the most comprehensive reference publication for researchers and real-world data mining practitioners to advance knowledge discovery from low-quality data. This premier reference source presents in-depth experiences and methodologies, providing theoretical and empirical guidance to users who have suffered from underlying, low-quality data. International experts in the field of data mining have contributed all-inclusive chapters focusing on interdisciplinary collaborations among data quality, data processing, data mining, data privacy, and data sharing.
This book presents the latest research advances in complex network structure analytics based on computational intelligence (CI) approaches, particularly evolutionary optimization. Most if not all network issues are actually optimization problems, which are mostly NP-hard and challenge conventional optimization techniques. To effectively and efficiently solve these hard optimization problems, CI-based network structure analytics offer significant advantages over conventional network analytics techniques. Meanwhile, using CI techniques may facilitate smart decision making by providing multiple options to choose from, while conventional methods can only offer a decision maker a single suggestion. In addition, CI-based network structure analytics can greatly facilitate network modeling and analysis. And employing CI techniques to resolve network issues is likely to inspire other fields of study such as recommender systems and systems biology, which will in turn expand CI's scope and applications. As a comprehensive text, the book covers a range of key topics, including network community discovery, evolutionary optimization, network structure balance analytics, network robustness analytics, community-based personalized recommendation, influence maximization, and biological network alignment. Offering a rich blend of theory and practice, the book is suitable for students, researchers and practitioners interested in network analytics and computational intelligence, both as a textbook and as a reference work.
Information and communication technologies of the 20th century have had a significant impact on our daily lives. They have brought new opportunities as well as new challenges for human development. The philosopher Luciano Floridi claims that these new technologies have led to a revolutionary shift in our understanding of humanity's nature and its role in the universe. Floridi's philosophical analysis of new technologies leads to a novel metaphysical framework in which our understanding of the ultimate nature of reality shifts from a materialist one to an informational one. In this world, all entities, be they natural or artificial, are analyzed as informational entities. This book provides critical reflection on this idea in four different areas: information ethics and the method of levels of abstraction; the information revolution and alternative categorizations of technological advancements; applications in education, the Internet, and information science; and epistemic and ontic aspects of the philosophy of information.
This book focuses on new methods, architectures, and applications for the management of Cyber Physical Objects (CPOs) in the context of the Internet of Things (IoT). It covers a wide range of topics related to CPOs, such as resource management, hardware platforms, communication and control, and control and estimation over networks. It also discusses decentralized, distributed, and cooperative optimization as well as effective discovery, management, and querying of CPOs. Other chapters outline the applications of control, real-time aspects, and software for CPOs and introduce readers to agent-oriented CPOs, communication support for CPOs, real-world deployment of CPOs, and CPOs in Complex Systems. There is a focus on the importance of application of IoT technologies for Smart Cities.
In many decision support fields, the data that is exploited is becoming more and more complex. To take this phenomenon into account, classical architectures of data warehouses or data mining algorithms must be completely re-evaluated. "Processing and Managing Complex Data for Decision Support" provides readers with an overview of the emerging field of complex data processing by bringing together various research studies and surveys in different subfields, and by highlighting the similarities between the different data, issues, and approaches. This book deals with important topics, such as: complex data warehousing, including spatial, XML, and text warehousing; and complex data mining, including distance metrics and similarity measures, pattern management, multimedia, and gene sequence mining.
This book brings together scientists, researchers, practitioners, and students from academia and industry to present recent and ongoing research activities concerning the latest advances, techniques, and applications of natural language processing systems, and to promote the exchange of new ideas and lessons learned. Taken together, the chapters of this book provide a collection of high-quality research works that address broad challenges in both theoretical and applied aspects of intelligent natural language processing. The book presents the state-of-the-art in research on natural language processing, computational linguistics, applied Arabic linguistics and related areas. New trends in natural language processing systems are rapidly emerging - and finding application in various domains including education, travel and tourism, and healthcare, among others. Many issues encountered during the development of these applications can be resolved by incorporating language technology solutions. The topics covered by the book include: Character and Speech Recognition; Morphological, Syntactic, and Semantic Processing; Information Extraction; Information Retrieval and Question Answering; Text Classification and Text Mining; Text Summarization; Sentiment Analysis; Machine Translation; Building and Evaluating Linguistic Resources; and Intelligent Language Tutoring Systems.
This book combines the analytic principles of digital business and data science with business practice and big data. The interdisciplinary, contributed volume provides an interface between the main disciplines of engineering and technology and business administration. Written for managers, engineers and researchers who want to understand big data and develop new skills that are necessary in the digital business, it not only discusses the latest research, but also presents case studies demonstrating the successful application of data in the digital business.
This book presents modeling methods and algorithms for data-driven prediction and forecasting of practical industrial processes by employing machine learning and statistical methodologies. Related case studies, especially on energy systems in the steel industry, are also addressed and analyzed. The case studies in this volume are entirely rooted in both classical data-driven prediction problems and industrial practice requirements. Detailed figures and tables demonstrate the effectiveness and generalization of the methods addressed, and the classifications of the addressed prediction problems come from practical industrial demands, rather than from academic categories. As such, readers will learn the corresponding approaches for resolving their industrial technical problems. Although the contents of this book and its case studies come from the steel industry, these techniques can also be used for other process industries. This book appeals to students, researchers, and professionals within the machine learning and data analysis and mining communities.
When digitized entities, connected devices and microservices interact purposefully, we end up with a massive amount of multi-structured streaming (real-time) data that is continuously generated by different sources at high speed. Streaming analytics allows the management, monitoring, and real-time analysis of live streaming data. The topic has grown in importance due to the emergence of online analytics and edge and IoT platforms. A real digital transformation is being achieved across industry verticals through meticulous data collection, cleansing and crunching in real time. Capturing those value-adding events and subjecting them to analysis is considered the prime task for achieving trustworthy and timely insights. The authors articulate and accentuate the challenges widely associated with streaming data and analytics, describe data analytics algorithms and approaches, present edge and fog computing concepts and technologies, and show how streaming analytics can be accomplished in edge device clouds. They also delineate several industry use cases across cloud system operations in transportation, cyber security and other business domains. The book will be of interest to ICT industry and academic researchers, scientists and engineers as well as lecturers and advanced students in the fields of data science, cloud/fog/edge architecture, internet of things and artificial intelligence and related fields of applications. It will also be useful to cloud/edge/fog and IoT architects, analytics professionals, IT operations teams and site reliability engineers (SREs).
This book presents a mathematical treatment of the radio resource allocation of modern cellular communications systems in contested environments. It focuses on fulfilling the quality of service requirements of the applications running on user devices that leverage the cellular system, with attention to elevating the users' quality of experience. The authors also address the congestion of the spectrum by allowing sharing with the band incumbents while providing a quality-of-service-minded resource allocation in the network. The content is of particular interest to telecommunications scheduler experts in industry, communications applications academia, and graduate students whose primary research deals with resource allocation and quality of service.
The "EPCglobal Architecture Framework" is currently the most accepted technical approach to the Internet of Things and provides a solid foundation for building Business-to-Business information networks based on unique identifications of 'things'. Lately, the vision of the Internet of Things has been extended to a more holistic approach that integrates sensors as well as actuators and includes non-business stakeholders. A detailed look at the current state of the art in
In the mid-1990s, Tim Berners-Lee had the idea of developing the World Wide Web into a "Semantic Web", a web of information that could be interpreted by machines in order to allow the automatic exploitation of data, which until then had to be done manually by humans. One of the first people to research topics related to the Semantic Web was Professor Rudi Studer. From the beginning, Rudi drove projects like ONTOBROKER and On-to-Knowledge, which later resulted in W3C standards such as RDF and OWL. By the late 1990s, Rudi had established a research group at the University of Karlsruhe, which later became the nucleus and breeding ground for Semantic Web research, and many of today's well-known research groups were either founded by his disciples or benefited from close cooperation with this think tank. In this book, published in celebration of Rudi's 60th birthday, many of his colleagues look back on the main research results achieved during the last 20 years. Under the editorship of Dieter Fensel, once one of Rudi's early PhD students, an impressive list of contributors and contributions has been collected, covering areas like Knowledge Management, Ontology Engineering, Service Management, and Semantic Search. Overall, this book provides an excellent overview of the state of the art in Semantic Web research, by combining historical roots with the latest results, which may finally make the dream of a "Web of knowledge, software and services" come true.
This book contains the combined proceedings of the 4th International Conference on Ubiquitous Computing Application and Wireless Sensor Network (UCAWSN-15) and the 16th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT-15). The combined proceedings present peer-reviewed contributions from academic and industrial researchers in fields including ubiquitous and context-aware computing, context-awareness reasoning and representation, location awareness services, and architectures, protocols, algorithms, and energy management and control of wireless sensor networks. The book includes the latest research results, practical developments and applications in parallel/distributed architectures, wireless networks and mobile computing, formal methods and programming languages, network routing and communication algorithms, database applications and data mining, access control and authorization, and privacy-preserving computation.
Vast amounts of data are nowadays collected, stored and processed, in an effort to assist in making a variety of administrative and governmental decisions. These innovative steps considerably improve the speed, effectiveness and quality of decisions. Analyses are increasingly performed by data mining and profiling technologies that statistically and automatically determine patterns and trends. However, when such practices lead to unwanted or unjustified selections, they may result in unacceptable forms of discrimination. Processing vast amounts of data may lead to situations in which data controllers know many of the characteristics, behaviors and whereabouts of people. In some cases, analysts might know more about individuals than these individuals know about themselves. Judging people by their digital identities sheds a different light on our views of privacy and data protection. This book discusses discrimination and privacy issues related to data mining and profiling practices. It provides technological and regulatory solutions to problems which arise in these innovative contexts. The book explains that common measures for mitigating privacy and discrimination, such as access controls and anonymity, fail to properly resolve privacy and discrimination concerns. Therefore, new solutions focusing on technology design, transparency and accountability are called for and set forth.
As economies continue to evolve, knowledge is being recognized as a business asset and considered a crucial component of business strategy. The ability to manage knowledge is increasingly important for securing and maintaining organizational success and surviving in the knowledge economy. "Knowledge Management Strategies for Business Development" addresses the relevance of knowledge management strategies for the advancement of organizations worldwide. This reference book supplies business practitioners, academicians, and researchers with comprehensive tools to systematically guide them through a process that focuses on data gathering, analysis, and decision making.