Blockchain technology (BT) is quietly transforming the world, from financial infrastructure, to the Internet of Things, to healthcare applications. With the increasing penetration of BT into various areas of our daily lives, the need arises for better awareness and greater knowledge about the capabilities, benefits, risks, and alternatives to distributed ledger applications. It is hoped that this book will be one of the pioneering collections focusing on blockchain implementations in the area of healthcare, with the specific aim of presenting content in an easy-to-understand and readily accessible way for typical end-users of blockchain-based applications. There are important areas within the fabric of modern healthcare that stand to benefit from implementations of BT. These areas include electronic medical records, quality control, patient safety, finance, device tracking, biostamping/biocertification, redundant storage of critical data, health and liability insurance, medication utilization tracking (including opioid and antibiotic misuse), financial transactions, academics/education, asset tokenization, public health and pandemics, healthcare provider credentialing, and many other potential applications. The ultimate goal of the book is to provide an integrative, easy-to-understand, and comprehensive picture of the current state of blockchain use in healthcare while actively engaging the reader in a forward-looking, exploratory approach toward future developments in this space. To accomplish this goal, an expert panel of contributors has been assembled, featuring scholars from top global universities and think-tanks.
This book provides a comprehensive treatment of the field of Business Process Management (BPM) with a focus on Business Process Automation. It achieves this by covering a wide range of topics, both introductory and advanced, illustrated through and grounded in the YAWL (Yet Another Workflow Language) language and its corresponding open-source support environment. In doing so it provides the reader with a deep, timeless, and vendor-independent understanding of the essential ingredients of business process automation. Topics covered include: the fundamentals of business process modeling, including workflow patterns; an in-depth treatment of process flexibility, including approaches to dealing with on-the-fly changes, unexpected exceptions, and constraint-based processes; technological aspects of a modern BPM environment, including its architecture, process design environment, process engine, resource handler, and other support services; a comparative insight into current approaches to business process modeling and execution, such as BPMN, EPCs, BPEL, jBPM, OpenWFE, and Enhydra Shark; process mining, verification, integration, and configuration; and case studies in health care and the screen business. The BPM field is in a continual state of flux, subject both to the ongoing proposal of new standards and to the introduction of new tools and technology. Its fundamentals, however, are relatively stable, and this book aims to equip the reader with both a thorough understanding of them and the ability to apply them to better understand, assess, and utilize new developments in the BPM field. As a consequence of its topic-based format and the inclusion of a broad range of exercises, the book is eminently suitable for use in tertiary education, at both the undergraduate and the postgraduate level, for students of computer science and information systems. BPM researchers and practitioners will also find it a valuable resource.
The book serves as a unique reference to a varied and comprehensive collection of topics that are relevant to the business process life-cycle.
Soft Computing Applications for Database Technologies: Techniques and Issues treats the new, emerging discipline of soft computing, which exploits this data through tolerance for imprecision and uncertainty to achieve solutions for complex problems. Soft computing methodologies include fuzzy sets, neural networks, genetic algorithms, Bayesian belief networks and rough sets, which are explored in detail through case studies and in-depth research. The advent of soft computing marks a significant paradigm shift in computing, with a wide range of applications and techniques which are presented and discussed in the chapters of this book.
The emerging field of Data Science has had a large impact on science and society. This book explores how one distinguishing feature of Data Science - its focus on data collected from social and environmental contexts within which learners often find themselves deeply embedded - suggests serious implications for learning and education. Drawing from theories of learning and identity development in the learning sciences, this volume investigates the impacts of these complex relationships on how learners think about, use, and share data, including their understandings of data in light of history, race, geography, and politics. More than just using 'real world examples' to motivate students to work with data, this book demonstrates how learners' relationships to data shape how they approach those data with agency, as part of their social and cultural lives. Together, the contributions offer a vision of how the learning sciences can contribute to a more expansive, socially aware, and transformative Data Science Education. The chapters in this book were originally published as a special issue of the Journal of the Learning Sciences.
Volume I is the first of two volumes that document the three components of the CHILDES Project. It is divided into two parts which provide an introduction to the use of computational tools for studying language learning. The first part is the CHAT manual, which describes the conventions and principles of CHAT transcription and recommends specific methods for data collection and digitization. The second part is the CLAN manual, which describes the uses of the editor, sonic CHAT, and the various analytic commands. The book will be useful for both novice and experienced users of the CHILDES tools, as well as instructors and students working with transcripts of child language.
This book aims to change the perception of modern-day telecommunications. Communication systems, usually perceived as "dumb pipes" carrying information and data from one point to another, are evolving into intelligently communicating smart systems. The book introduces the new field of cloud communications, discussing its concept, theory, and architecture. It lays down nine design postulates that form the basis of the development of a first-of-its-kind cloud communication paradigm entitled Green Symbiotic Cloud Communications, or GSCC. The proposed design postulates are formulated in a generic way to form the backbone for the development of systems and technologies of the future. The book can be used to develop courses that serve as an essential part of a graduate curriculum in computer science and electrical engineering; such courses can be independent or part of higher-level research courses. The book will also be of interest to a wide range of readers in both scientific and non-scientific domains, as it discusses innovations from a simple, explanatory viewpoint.
The development of a methodology for using logic databases is essential if new users are to be able to use these systems effectively to solve their problems, and this remains a largely unrealized goal. A workshop was organized in conjunction with the ILPS '93 Conference in Vancouver in October 1993 to provide a forum for users and implementors of deductive systems to share their experience. The emphasis was on the use of deductive systems. In addition to paper presentations, a number of systems were demonstrated. The papers of this book were drawn largely from the papers presented at the workshop, which have been extended and revised for inclusion here, and also include some papers describing interesting applications that were not discussed at the workshop. The applications described here should be seen as a starting point: a number of promising application domains are identified, and several interesting application packages are described, which provide the inspiration for further development. Declarative rule-based database systems hold a lot of promise in a wide range of application domains, and we need a continued stream of application development to better understand this potential and how to use it effectively. This book contains the broadest collection to date of papers describing implemented, significant applications of logic databases, and database systems as well as potential database users in such areas as scientific data management and complex decision support.
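The declarative rule-based systems described above are typified by Datalog-style logic programs. As a minimal sketch (not drawn from any system described in the book), the classic transitive-closure program can be evaluated bottom-up to a fixpoint:

```python
def transitive_closure(edges):
    """Naive bottom-up (fixpoint) evaluation of the classic Datalog program:
         path(X, Y) :- edge(X, Y).
         path(X, Y) :- edge(X, Z), path(Z, Y).
    `edges` is a set of (from, to) pairs; returns the derived `path` facts."""
    path = set(edges)                       # first rule: every edge is a path
    changed = True
    while changed:                          # iterate until no new facts derive
        changed = False
        new = {(x, y) for (x, z) in edges for (z2, y) in path if z == z2}
        if not new <= path:
            path |= new
            changed = True
    return path
```

Naive evaluation like this recomputes known facts on every pass; real deductive systems use semi-naive evaluation, which only joins against facts derived in the previous iteration.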
Written by leading industry experts, the Data Management Handbook is a comprehensive, single-volume guide to the most innovative ideas on how to plan, develop, and run a powerful data management function - as well as handle day-to-day operations. The book provides practical, hands-on guidance on the strategic, tactical, and technical aspects of data management, offering an inside look at how leading companies in various industries meet the challenges of moving to a data-sharing environment.
Based on interdisciplinary research into "Directional Change", a new data-driven approach to financial data analysis, Detecting Regime Change in Computational Finance: Data Science, Machine Learning and Algorithmic Trading applies machine learning to financial market monitoring and algorithmic trading. Directional Change is a new way of summarising price changes in the market. Instead of sampling prices at fixed intervals (such as daily closing in time series), it samples prices when the market changes direction ("zigzags"). By sampling data in a different way, this book lays out concepts which enable the extraction of information that other market participants may not be able to see. The book includes a Foreword by Richard Olsen and explores the following topics: data science, where, as an alternative to time series, price movements in a market can be summarised as directional changes; machine learning for regime change detection, where historical regime changes in a market can be discovered by a Hidden Markov Model; regime characterisation, where normal and abnormal regimes in historical data can be characterised using indicators defined under Directional Change; market monitoring, where the historical characteristics of normal and abnormal regimes are used to detect whether the market regime has changed; and algorithmic trading, where regime-tracking information can help in designing trading algorithms. It will be of great interest to researchers in computational finance, machine learning and data science. About the authors: Jun Chen received his PhD in computational finance from the Centre for Computational Finance and Economic Agents, University of Essex, in 2019. Edward P K Tsang is an Emeritus Professor at the University of Essex, where he co-founded the Centre for Computational Finance and Economic Agents in 2002.
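To make the "zigzag" sampling idea concrete, here is a minimal sketch of extracting directional-change events from a price series. The function name, the 2% threshold, and the assumption of an initial uptrend are illustrative choices, not taken from the book:

```python
def directional_changes(prices, theta=0.02):
    """Record a directional-change event each time the price reverses by at
    least a fraction `theta` from the running extreme since the last event."""
    events = []
    ext = prices[0]        # running extreme (high or low) since last event
    mode = "up"            # assumed initial trend; the first event corrects it
    for p in prices[1:]:
        if mode == "up":
            if p > ext:
                ext = p                          # new high extends the uptrend
            elif (ext - p) / ext >= theta:       # drop >= theta from last high
                events.append(("downturn", p))
                mode, ext = "down", p
        else:
            if p < ext:
                ext = p                          # new low extends the downtrend
            elif (p - ext) / ext >= theta:       # rise >= theta from last low
                events.append(("upturn", p))
                mode, ext = "up", p
    return events
```

Note that, unlike fixed-interval sampling, the number of events recorded here depends on market volatility rather than on elapsed time.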
The history of the computer, and of the industry it spawned, is the latest entrant into the field of historical studies. Scholars beginning to turn their attention to the subject of computing need James Cortada's "Archives of Data Processing History" as a brief introduction to sources immediately available for investigation. Each essay provides an overview of a major government, academic, or industrial archival collection dealing with the history of computing, the industry, and its leaders, and is written by the archivist/historian who has worked with or is responsible for the collection. The essays give practical information on hours, organization, contacts, and telephone numbers, along with surveys of contents and assessments of the historical significance of the collections and their institutions. Reference and business librarians will definitely want to add this volume to their collections. Those interested in the history of technology, the business history of the industry, and the history of major institutions will want to consult it.
Grid computing promises to transform the way organizations and individuals compute, communicate, and collaborate. Computational and Data Grids: Principles, Applications and Design offers critical perspectives on theoretical frameworks, methodologies, implementations, and cutting edge research in grid computing, bridging the gap between academia and the latest achievements of the computer industry. Useful for professionals and students involved or interested in the study, use, design, and development of grid computing, this book highlights both the basics of the field and in depth analyses of grid networks.
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems. As such, it concentrates on the main notions of the quantum mechanical framework and describes an innovative range of concepts and tools for modeling information representation and retrieval processes. The book is divided into four chapters. Chapter 1 illustrates the main modeling concepts for information retrieval (including Boolean logic, vector spaces, probabilistic models, and machine-learning based approaches), which will be examined further in subsequent chapters. Next, chapter 2 briefly explains the main concepts of the quantum mechanical framework, focusing on approaches linked to information retrieval such as interference, superposition and entanglement. Chapter 3 then reviews the research conducted at the intersection between information retrieval and the quantum mechanical framework. The chapter is subdivided into a number of topics, and each description ends with a section suggesting the most important reference resources. Lastly, chapter 4 offers suggestions for future research, briefly outlining the most essential and promising research directions to fully leverage the quantum mechanical framework for effective and efficient information retrieval systems. This book is especially intended for researchers working in information retrieval, database systems and machine learning who want to acquire a clear picture of the potential offered by the quantum mechanical framework in their own research area. Above all, the book offers clear guidance on whether, why and when to effectively use the mathematical formalism and the concepts of the quantum mechanical framework to address various foundational issues in information retrieval.
This book shows healthcare professionals how to turn data points into meaningful knowledge upon which they can take effective action. Actionable intelligence can take many forms, from informing health policymakers on effective strategies for the population to providing direct and predictive insights on patients to healthcare providers so they can achieve positive outcomes. It can assist those performing clinical research where relevant statistical methods are applied to both identify the efficacy of treatments and improve clinical trial design. It also benefits healthcare data standards groups through which pertinent data governance policies are implemented to ensure quality data are obtained, measured, and evaluated for the benefit of all involved. Although the obvious constant thread among all of these important healthcare use cases of actionable intelligence is the data at hand, such data in and of itself merely represents one element of the full structure of healthcare data analytics. This book examines the structure for turning data into actionable knowledge and discusses: The importance of establishing research questions Data collection policies and data governance Principle-centered data analytics to transform data into information Understanding the "why" of classified causes and effects Narratives and visualizations to inform all interested parties Actionable Intelligence in Healthcare is an important examination of how proper healthcare-related questions should be formulated, how relevant data must be transformed to associated information, and how the processing of information relates to knowledge. It indicates to clinicians and researchers why this relative knowledge is meaningful and how best to apply such newfound understanding for the betterment of all.
Advanced Signature Indexing for Multimedia and Web Applications presents the latest research developments in signature-based indexing and query processing, specifically in multimedia and Web domains. These domains now demand a different designation of hashing information in bit-strings (i.e., signatures), and new indexes and query processing methods. The book provides solutions to these issues and addresses the resulting requirements, which are not adequately handled by existing approaches. Examples of these applications include: searching for similar images, representing multi-theme layers in maps, recommending products to Web-clients, and indexing large Web-log files. Special emphasis is given to structure description, implementation techniques and clear evaluation of operations performed (from a performance perspective). Advanced Signature Indexing for Multimedia and Web Applications is an excellent reference for professionals involved in the development of applications in multimedia databases or the Web and may also serve as a textbook for advanced level courses in database and information retrieval systems.
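As a rough sketch of the superimposed-coding idea behind signature files (the signature width, bits per word, and hash choice below are illustrative assumptions, not the book's design): each word sets a few bits in a fixed-width bit-string, a document's signature is the bitwise OR of its words' signatures, and a query word is tested with a bitwise AND:

```python
import hashlib

SIG_BITS = 64        # signature width (illustrative)
BITS_PER_WORD = 3    # bits set per word (illustrative)

def word_signature(word):
    """Set BITS_PER_WORD pseudo-random bits for `word` in a SIG_BITS string."""
    sig = 0
    for i in range(BITS_PER_WORD):
        h = int(hashlib.md5(f"{word}:{i}".encode()).hexdigest(), 16)
        sig |= 1 << (h % SIG_BITS)
    return sig

def doc_signature(words):
    """Superimpose (OR) the word signatures into one document signature."""
    sig = 0
    for w in words:
        sig |= word_signature(w)
    return sig

def may_contain(doc_sig, word):
    """True if every bit of the word's signature is set in the document's.
    Guaranteed true for actual members; false positives are possible."""
    ws = word_signature(word)
    return doc_sig & ws == ws
```

The filter never misses a true member, so a signature index can quickly discard most non-matching documents; the false-positive rate is tuned by the signature width and the number of bits set per word.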
A timely survey of the field from the point of view of some of the subject's most active researchers. Divided into several parts organized by theme, the book first covers the underlying methodology regarding active rules, followed by formal specification, rule analysis, performance analysis, and support tools. It then moves on to the implementation of active rules in a number of commercial systems, before concluding with applications and future directions for research. All researchers in databases will find this a valuable overview of the topic.
As design complexity in chips and devices continues to rise, so, too, does the demand for functional verification. Principles of Functional Verification is a hands-on, practical text that will help train professionals in the field of engineering on the methodology and approaches to verification.
Time is ubiquitous in information systems. Almost every enterprise faces the problem of its data becoming out of date. However, such data is often valuable, so it should be archived and some means to access it should be provided. Also, some data may be inherently historical, e.g., medical, cadastral, or judicial records. Temporal databases provide a uniform and systematic way of dealing with historical data. Many languages have been proposed for temporal databases, among others temporal logic. Temporal logic combines abstract, formal semantics with amenability to efficient implementation. This chapter shows how temporal logic can be used in temporal database applications. Rather than presenting new results, we report on recent developments and survey the field in a systematic way using a unified formal framework [GHR94; Cho94]. The handbook [GHR94] is a comprehensive reference on the mathematical foundations of temporal logic. In this chapter we study how temporal logic is used as a query and integrity constraint language. Consequently, model-theoretic notions, particularly formula satisfaction, are of primary interest. Axiomatic systems and proof methods for temporal logic [GHR94] have so far found relatively few applications in the context of information systems. Moreover, one needs to bear in mind that for the standard linearly-ordered time domains temporal logic is not recursively axiomatizable [GHR94], so recursive axiomatizations are by necessity incomplete.
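For instance, a temporal integrity constraint such as "an employee's salary never decreases" can be evaluated directly over the stored history. The following toy checker (the relation and the constraint are invented for illustration, not taken from the chapter) evaluates such a past-time constraint over a sequence of database states:

```python
def salary_never_decreases(history):
    """Check the temporal integrity constraint 'salaries never decrease'
    over `history`, a list of {employee: salary} snapshots, oldest first.
    This evaluates a formula of the shape
    'always (salary in previous state <= salary in current state)'."""
    for prev, curr in zip(history, history[1:]):
        for emp, sal in curr.items():
            if emp in prev and sal < prev[emp]:
                return False    # constraint violated at this state transition
    return True
```

A temporal DBMS would evaluate such constraints incrementally on each update rather than rescanning the whole history, but the model-theoretic content (satisfaction of the formula by the state sequence) is the same.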
"Handbook of Open Source Tools" introduces a comprehensive collection of advanced open source tools useful in developing software applications. The book contains information on more than 200 open-source tools, including software construction utilities for compilers, virtual machines, databases, graphics, high-performance computing, OpenGL, geometry, algebra, graph theory, GUIs and more. Special highlights for software construction utilities and application libraries are included. Each tool is covered in the context of a real-life application development setting. This unique handbook presents a comprehensive discussion of advanced tools, a valuable asset used by most application developers and programmers; includes a special focus on mathematical open source software not available in most Open Source Software books; and introduces several tools (e.g., ACL2, CLIPS, CUDA, and COIN) which are not well known outside of select groups but are very powerful. "Handbook of Open Source Tools" is designed for application developers and programmers working with open source tools. Advanced-level students concentrating on engineering, mathematics and computer science will find this reference a valuable asset as well.
CHARME '97 is the ninth in a series of working conferences devoted to the development and use of formal techniques in digital hardware design and verification. The series is held in collaboration with IFIP WG 10.5; previous meetings were held in Europe every other year.
Probability, Statistics, and Random Signals offers a comprehensive treatment of probability, giving equal treatment to discrete and continuous probability. The topic of statistics is presented as the application of probability to data analysis, not as a cookbook of statistical recipes. This student-friendly text features accessible descriptions and highly engaging exercises on topics like gambling, the birthday paradox, and financial decision-making.
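The birthday paradox mentioned among the exercise topics is easy to verify numerically: with 365 equally likely birthdays, the probability that at least two of n people share one crosses 1/2 at n = 23. A short check (the function name is mine; leap years are ignored):

```python
def p_shared_birthday(n):
    """Probability that among n people at least two share a birthday,
    assuming 365 equally likely days: 1 - (365/365)(364/365)...((365-n+1)/365)."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365   # k-th person avoids the first k birthdays
    return 1 - p_distinct
```

For n = 23 this gives roughly 0.507, which is why the result feels paradoxical: 23 people is far fewer than intuition suggests.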
The need to electronically store, manipulate and analyze large-scale, high-dimensional data sets requires new computational methods. This book presents new intelligent data management methods and tools, including new results from the field of inference. Leading experts also map out future directions of intelligent data analysis. This book will be a valuable reference for researchers exploring the interdisciplinary area between statistics and computer science as well as for professionals applying advanced data analysis methods in industry.
This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that provide more background on some of the more abstract concepts.