The emerging field of Data Science has had a large impact on science and society. This book explores how one distinguishing feature of Data Science - its focus on data collected from social and environmental contexts within which learners often find themselves deeply embedded - suggests serious implications for learning and education. Drawing from theories of learning and identity development in the learning sciences, this volume investigates the impacts of these complex relationships on how learners think about, use, and share data, including their understandings of data in light of history, race, geography, and politics. More than just using 'real world examples' to motivate students to work with data, this book demonstrates how learners' relationships to data shape how they approach those data with agency, as part of their social and cultural lives. Together, the contributions offer a vision of how the learning sciences can contribute to a more expansive, socially aware, and transformative Data Science Education. The chapters in this book were originally published as a special issue of the Journal of the Learning Sciences.
Volume I is the first of two volumes that document the three components of the CHILDES Project. It is divided into two parts which provide an introduction to the use of computational tools for studying language learning. The first part is the CHAT manual, which describes the conventions and principles of CHAT transcription and recommends specific methods for data collection and digitization. The second part is the CLAN manual, which describes the uses of the editor, sonic CHAT, and the various analytic commands. The book will be useful for both novice and experienced users of the CHILDES tools, as well as instructors and students working with transcripts of child language.
This book intends to change the perception of modern-day telecommunications. Communication systems, usually perceived as "dumb pipes" carrying information and data from one point to another, are evolving into intelligently communicating smart systems. The book introduces a new field of cloud communications. The concept, theory, and architecture of this new field are discussed. The book lays down nine design postulates that form the basis for the development of a first-of-its-kind cloud communication paradigm entitled Green Symbiotic Cloud Communications, or GSCC. The proposed design postulates are formulated in a generic way to form the backbone for the development of systems and technologies of the future. The book can be used to develop courses that serve as an essential part of a graduate curriculum in computer science and electrical engineering; such courses can be independent or part of high-level research courses. The book will also be of interest to a wide range of readers in both scientific and non-scientific domains, as it discusses innovations from a simple, explanatory viewpoint.
The development of a methodology for using logic databases is essential if new users are to be able to use these systems effectively to solve their problems, and this remains a largely unrealized goal. A workshop was organized in conjunction with the ILPS '93 Conference in Vancouver in October 1993 to provide a forum for users and implementors of deductive systems to share their experience. The emphasis was on the use of deductive systems. In addition to paper presentations, a number of systems were demonstrated. The papers in this book are drawn largely from those presented at the workshop, extended and revised for inclusion here, together with some papers describing interesting applications that were not discussed at the workshop. The applications described here should be seen as a starting point: a number of promising application domains are identified, and several interesting application packages are described, which provide the inspiration for further development. Declarative rule-based database systems hold a lot of promise in a wide range of application domains, and a continued stream of application development is needed to better understand this potential and how to use it effectively. This book contains the broadest collection to date of papers describing implemented, significant applications of logic databases; it will be of interest to developers of database systems as well as potential database users in such areas as scientific data management and complex decision support.
Written by leading industry experts, the Data Management Handbook is a comprehensive, single-volume guide to the most innovative ideas on how to plan, develop, and run a powerful data management function - as well as handle day-to-day operations. The book provides practical, hands-on guidance on the strategic, tactical, and technical aspects of data management, offering an inside look at how leading companies in various industries meet the challenges of moving to a data-sharing environment.
The history of the computer, and of the industry it spawned, is the latest entrant into the field of historical studies. Scholars beginning to turn their attention to the subject of computing need James Cortada's "Archives of Data Processing History" as a brief introduction to sources immediately available for investigation. Each essay provides an overview of a major government, academic, or industrial archival collection dealing with the history of computing, the industry, and its leaders, and is written by the archivist/historian who has worked with or is responsible for the collection. The essays give practical information on hours, organization, contacts, and telephone numbers, along with surveys of contents and assessments of the historical significance of the collections and their institutions. Reference and business librarians will definitely want to add this volume to their collections. Those interested in the history of technology, the business history of the industry, and the history of major institutions will want to consult it.
Grid computing promises to transform the way organizations and individuals compute, communicate, and collaborate. Computational and Data Grids: Principles, Applications and Design offers critical perspectives on theoretical frameworks, methodologies, implementations, and cutting-edge research in grid computing, bridging the gap between academia and the latest achievements of the computer industry. Useful for professionals and students involved or interested in the study, use, design, and development of grid computing, this book highlights both the basics of the field and in-depth analyses of grid networks.
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems. As such, it concentrates on the main notions of the quantum mechanical framework and describes an innovative range of concepts and tools for modeling information representation and retrieval processes. The book is divided into four chapters. Chapter 1 illustrates the main modeling concepts for information retrieval (including Boolean logic, vector spaces, probabilistic models, and machine-learning based approaches), which will be examined further in subsequent chapters. Next, chapter 2 briefly explains the main concepts of the quantum mechanical framework, focusing on approaches linked to information retrieval such as interference, superposition and entanglement. Chapter 3 then reviews the research conducted at the intersection between information retrieval and the quantum mechanical framework. The chapter is subdivided into a number of topics, and each description ends with a section suggesting the most important reference resources. Lastly, chapter 4 offers suggestions for future research, briefly outlining the most essential and promising research directions to fully leverage the quantum mechanical framework for effective and efficient information retrieval systems. This book is especially intended for researchers working in information retrieval, database systems and machine learning who want to acquire a clear picture of the potential offered by the quantum mechanical framework in their own research area. Above all, the book offers clear guidance on whether, why and when to effectively use the mathematical formalism and the concepts of the quantum mechanical framework to address various foundational issues in information retrieval.
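As a rough, hedged illustration of the vector-space and probabilistic notions the book builds on (the term weights, vectors, and function names below are invented for this sketch, not taken from the book), the quantum-probability reading of the vector space model treats a document as a unit vector and scores the probability that it answers a query as the squared projection onto the query vector:

```python
import numpy as np

def unit(v):
    """Normalize a term-weight vector to a unit vector (a pure state)."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Illustrative weights over three terms t1, t2, t3 (assumed, not from the book)
query = unit([1.0, 1.0, 0.0])
doc_a = unit([0.9, 0.8, 0.1])
doc_b = unit([0.0, 0.2, 1.0])

def born_probability(q, d):
    """Probability of relevance as |<q|d>|^2, i.e. squared cosine similarity."""
    return float(np.dot(q, d)) ** 2

print(born_probability(query, doc_a))  # close to 1: strongly overlapping states
print(born_probability(query, doc_b))  # close to 0: nearly orthogonal states
```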
This book shows healthcare professionals how to turn data points into meaningful knowledge upon which they can take effective action. Actionable intelligence can take many forms, from informing health policymakers on effective strategies for the population to providing direct and predictive insights on patients to healthcare providers so they can achieve positive outcomes. It can assist those performing clinical research, where relevant statistical methods are applied both to identify the efficacy of treatments and to improve clinical trial design. It also benefits healthcare data standards groups through which pertinent data governance policies are implemented to ensure quality data are obtained, measured, and evaluated for the benefit of all involved. Although the obvious constant thread among all of these important healthcare use cases of actionable intelligence is the data at hand, such data in and of itself merely represents one element of the full structure of healthcare data analytics. This book examines the structure for turning data into actionable knowledge and discusses: the importance of establishing research questions; data collection policies and data governance; principle-centered data analytics to transform data into information; understanding the "why" of classified causes and effects; and narratives and visualizations to inform all interested parties. Actionable Intelligence in Healthcare is an important examination of how proper healthcare-related questions should be formulated, how relevant data must be transformed into associated information, and how the processing of information relates to knowledge. It shows clinicians and researchers why this relative knowledge is meaningful and how best to apply such newfound understanding for the betterment of all.
Advanced Signature Indexing for Multimedia and Web Applications presents the latest research developments in signature-based indexing and query processing, specifically in multimedia and Web domains. These domains now demand a different designation of hashing information in bit-strings (i.e., signatures), and new indexes and query processing methods. The book provides solutions to these issues and addresses the resulting requirements, which are not adequately handled by existing approaches. Examples of these applications include: searching for similar images, representing multi-theme layers in maps, recommending products to Web-clients, and indexing large Web-log files. Special emphasis is given to structure description, implementation techniques and clear evaluation of operations performed (from a performance perspective). Advanced Signature Indexing for Multimedia and Web Applications is an excellent reference for professionals involved in the development of applications in multimedia databases or the Web and may also serve as a textbook for advanced level courses in database and information retrieval systems.
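As a minimal, hedged sketch of the superimposed-coding signature idea referred to above (the signature width, bits per term, and helper names are assumptions for illustration, not the book's design), each term hashes to a few bit positions, an object's signature is the OR of its term signatures, and a query whose bits are not all contained in an object's signature safely rules that object out:

```python
import hashlib

SIG_BITS = 64          # signature width (illustrative)
BITS_PER_TERM = 3      # bits set per term (illustrative)

def term_signature(term: str) -> int:
    """Map a term to a small set of bit positions via hashing."""
    sig = 0
    for i in range(BITS_PER_TERM):
        digest = hashlib.sha1(f"{term}:{i}".encode()).digest()
        pos = int.from_bytes(digest[:4], "big") % SIG_BITS
        sig |= 1 << pos
    return sig

def object_signature(terms) -> int:
    """Superimpose (OR) the term signatures of one object."""
    sig = 0
    for t in terms:
        sig |= term_signature(t)
    return sig

def may_match(query_terms, obj_sig: int) -> bool:
    """Signature test: all query bits must be set; false positives are possible."""
    q = object_signature(query_terms)
    return (q & obj_sig) == q

# Tiny usage example: find candidate objects for a one-term query
docs = {"d1": ["image", "beach", "sunset"], "d2": ["map", "theme", "layer"]}
sigs = {k: object_signature(v) for k, v in docs.items()}
print([k for k, s in sigs.items() if may_match(["beach"], s)])  # candidates only
```

The test is conservative: it never misses a true match, but the surviving candidates still have to be checked against the actual data because superimposed signatures can collide.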
A timely survey of the field from the point of view of some of the subject's most active researchers. Divided into several parts organized by theme, the book first covers the underlying methodology regarding active rules, followed by formal specification, rule analysis, performance analysis, and support tools. It then moves on to the implementation of active rules in a number of commercial systems, before concluding with applications and future directions for research. All researchers in databases will find this a valuable overview of the topic.
As design complexity in chips and devices continues to rise, so, too, does the demand for functional verification. Principles of Functional Verification is a hands-on, practical text that will help train professionals in the field of engineering on the methodology and approaches to verification.
Time is ubiquitous in information systems. Almost every enterprise faces the problem of its data becoming out of date. However, such data is often valuable, so it should be archived and some means to access it should be provided. Also, some data may be inherently historical, e.g., medical, cadastral, or judicial records. Temporal databases provide a uniform and systematic way of dealing with historical data. Many languages have been proposed for temporal databases, among others temporal logic. Temporal logic combines abstract, formal semantics with the amenability to efficient implementation. This chapter shows how temporal logic can be used in temporal database applications. Rather than presenting new results, we report on recent developments and survey the field in a systematic way using a unified formal framework [GHR94; Cho94]. The handbook [GHR94] is a comprehensive reference on mathematical foundations of temporal logic. In this chapter we study how temporal logic is used as a query and integrity constraint language. Consequently, model-theoretic notions, particularly formula satisfaction, are of primary interest. Axiomatic systems and proof methods for temporal logic [GHR94] have found so far relatively few applications in the context of information systems. Moreover, one needs to bear in mind that for the standard linearly-ordered time domains temporal logic is not recursively axiomatizable [GHR94], so recursive axiomatizations are by necessity incomplete.
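As a hedged illustration in the spirit of such work (not an example quoted from the chapter), a past temporal logic formula can express the integrity constraint that no one appears as fired in the database without having previously been hired:

$$\Box\,\forall x\,\big(\mathit{fired}(x) \rightarrow \blacklozenge\,\mathit{hired}(x)\big)$$

where $\Box$ reads "at all times" and $\blacklozenge$ reads "sometime in the past". The same connectives serve as a query language, for instance selecting every $x$ for which $\mathit{hired}(x)$ held at some earlier database state.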
"Handbook of Open Source Tools" introduces a comprehensive collection of advanced open source tools useful in developing software applications. The book contains information on more than 200 open-source tools which include software construction utilities for compilers, virtual-machines, database, graphics, high-performance computing, OpenGL, geometry, algebra, graph theory, GUIs and more. Special highlights for software construction utilities and application libraries are included. Each tool is covered in the context of a real like application development setting. This unique handbook presents a comprehensive discussion of advanced tools, a valuable asset used by most application developers and programmers; includes a special focus on Mathematical Open Source Software not available in most Open Source Software books, and introduces several tools (eg ACL2, CLIPS, CUDA, and COIN) which are not known outside of select groups, but are very powerful. "Handbook of Open Source Tools "is designed for application developers and programmers working with Open Source Tools. Advanced-level students concentrating on Engineering, Mathematics and Computer Science will find this reference a valuable asset as well.
CHARME '97 is the ninth in a series of working conferences devoted to the development and use of formal techniques in digital hardware design and verification. The series is held in collaboration with IFIP WG 10.5. Previous meetings were held in Europe every other year.
The need to electronically store, manipulate and analyze large-scale, high-dimensional data sets requires new computational methods. This book presents new intelligent data management methods and tools, including new results from the field of inference. Leading experts also map out future directions of intelligent data analysis. This book will be a valuable reference for researchers exploring the interdisciplinary area between statistics and computer science as well as for professionals applying advanced data analysis methods in industry.
This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that provide more background on some of the more abstract concepts.
Requiring heterogeneous information systems to cooperate and communicate has now become crucial, especially in application areas like e-business, Web-based mash-ups and the life sciences. Such cooperating systems have to automatically and efficiently match, exchange, transform and integrate large data sets from different sources and of different structure in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati and Rahm provides an overview of the ways in which the schema and ontology matching and mapping tools have addressed the above requirements and points to the open technical challenges. The contributions from leading experts are structured into three parts: large-scale and knowledge-driven schema matching, quality-driven schema mapping and evolution, and evaluation and tuning of matching tasks. The authors describe the state of the art by discussing the latest achievements such as more effective methods for matching data, mapping transformation verification, adaptation to the context and size of the matching and mapping tasks, mapping-driven schema evolution and merging, and mapping evaluation and tuning. The overall result is a coherent, comprehensive picture of the field. With this book, the editors introduce graduate students and advanced professionals to this exciting field. For researchers, they provide an up-to-date source of reference about schema and ontology matching, schema and ontology evolution, and schema merging.
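As a minimal, hedged sketch of element-level schema matching (the example schemas, the name-similarity measure, and the 0.6 acceptance threshold are illustrative assumptions, not a method from the book), attribute pairs can be scored by string similarity to propose candidate correspondences that later mapping steps would verify and refine:

```python
from difflib import SequenceMatcher

# Hypothetical source and target schemas (attribute names only)
source = ["custName", "custAddress", "orderDate"]
target = ["customer_name", "customer_address", "date_of_order"]

def name_similarity(a: str, b: str) -> float:
    """Score two attribute names by case-insensitive string similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

matches = []
for s in source:
    best = max(target, key=lambda t: name_similarity(s, t))
    score = name_similarity(s, best)
    if score >= 0.6:                       # illustrative acceptance threshold
        matches.append((s, best, round(score, 2)))

print(matches)  # candidate correspondences; real matchers combine many such cues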
This textbook integrates important mathematical foundations, efficient computational algorithms, applied statistical inference techniques, and cutting-edge machine learning approaches to address a wide range of crucial biomedical informatics, health analytics, and decision science challenges. Each concept in the book includes a rigorous symbolic formulation coupled with computational algorithms and complete end-to-end pipeline protocols implemented as functional R electronic markdown notebooks. These workflows support active learning and demonstrate comprehensive data manipulations, interactive visualizations, and sophisticated analytics. The content includes open problems, state-of-the-art scientific knowledge, ethical integration of heterogeneous scientific tools, and procedures for systematic validation and dissemination of reproducible research findings. Complementary to the enormous challenges related to handling, interrogating, and understanding massive amounts of complex structured and unstructured data, there are unique opportunities that come with access to a wealth of feature-rich, high-dimensional, and time-varying information. The topics covered in Data Science and Predictive Analytics address specific knowledge gaps, resolve educational barriers, and mitigate workforce information-readiness and data science deficiencies. Specifically, it provides a transdisciplinary curriculum integrating core mathematical principles, modern computational methods, advanced data science techniques, model-based machine learning, model-free artificial intelligence, and innovative biomedical applications. The book's fourteen chapters start with an introduction and progressively build foundational skills from visualization to linear modeling, dimensionality reduction, supervised classification, black-box machine learning techniques, qualitative learning methods, unsupervised clustering, model performance assessment, feature selection strategies, longitudinal data analytics, optimization, neural networks, and deep learning. The second edition of the book includes additional learning-based strategies utilizing generative adversarial networks, transfer learning, and synthetic data generation, as well as eight complementary electronic appendices. This textbook is suitable for formal didactic instructor-guided course education, as well as for individual or team-supported self-learning. The material is presented at the level of upper-division and graduate college courses and covers applied and interdisciplinary mathematics, contemporary learning-based data science techniques, computational algorithm development, optimization theory, statistical computing, and biomedical sciences. The analytical techniques and predictive scientific methods described in the book may be useful to a wide range of readers, formal and informal learners, college instructors, researchers, and engineers throughout the academy, industry, government, regulatory, funding, and policy agencies. The supporting book website provides many examples, datasets, functional scripts, complete electronic notebooks, extensive appendices, and additional materials.
The subject of error-control coding bridges several disciplines, in particular mathematics, electrical engineering and computer science. The theory of error-control codes is often described abstractly in mathematical terms only, for the benefit of other coding specialists. Such a theoretical approach to coding makes it difficult for engineers to understand the underlying concepts of error correction, the design of digital error-control systems, and the quantitative behavior of such systems. In this book only a minimal amount of mathematics is introduced in order to describe the many, sometimes mathematical, aspects of error-control coding. The concepts of error correction and detection are in many cases sufficiently straightforward to avoid highly theoretical algebraic constructions. The reader will find that the primary emphasis of the book is on practical matters, not on theoretical problems. In fact, much of the material covered is summarized by examples of real developments, and almost all of the error-correction and detection codes introduced are attached to related practical applications. Error-Control Coding for Data Networks takes a structured approach to channel coding, starting with the basic coding concepts and working gradually towards the most sophisticated coding systems. The most popular applications are described throughout the book. These applications include the channel-coding techniques used in mobile communication systems, such as the Global System for Mobile Communications (GSM) and the code-division multiple-access (CDMA) system, coding schemes for the High-Definition Television (HDTV) system, the Compact Disk (CD) and Digital Video Disk (DVD), as well as the error-control protocols for the data-link layers of networks, and much more. The book is compiled carefully to bring engineers, coding specialists, and students up to date in the important modern coding technologies. Both electrical engineering students and communication engineers will benefit from the information in this largely self-contained text on error-control system engineering.
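As a small, hedged example of the basic error-correction idea discussed above (a textbook Hamming(7,4) code, not a scheme taken from this book), three parity bits protect four data bits so that any single flipped bit can be located and corrected:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Codeword positions (1-based): p1 p2 d1 p3 d2 d3 d4.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based index of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Usage: flip one bit in transit and recover the original data
data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                      # simulated single-bit channel error
assert hamming74_decode(codeword) == data
```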
Intrusion detection systems (IDSs) are usually deployed along with other preventive security mechanisms, such as access control and authentication, as a second line of defense that protects information systems. Intrusion detection complements the protective mechanisms to improve system security. Moreover, even if the preventive security mechanisms can protect information systems successfully, it is still desirable to know what intrusions have happened or are happening, so that the users can understand the security threats and risks and thus be better prepared for future attacks. Intrusion detection techniques are traditionally categorized into two classes: anomaly detection and misuse detection. Anomaly detection is based on the normal behavior of a subject (a user or a system); any action that significantly deviates from the normal behavior is considered intrusive. Misuse detection catches intrusions in terms of characteristics of known attacks or system vulnerabilities; any action that conforms to the pattern of a known attack or vulnerability is considered intrusive. IDSs are also classified into host-based, distributed, and network-based IDSs according to the source of the audit information used by each IDS. Host-based IDSs get audit data from host audit trails and usually aim at detecting attacks against a single host; distributed IDSs gather audit data from multiple hosts, and possibly the network that connects the hosts, aiming at detecting attacks involving multiple hosts; network-based IDSs use network traffic as the audit data source, relieving the burden on the hosts that usually provide normal computing services. Intrusion Detection in Distributed Systems: An Abstraction-Based Approach presents research contributions in three areas with respect to intrusion detection in distributed systems. The first contribution is an abstraction-based approach to addressing heterogeneity and autonomy of distributed environments. The second contribution is a formal framework for modelling requests among co-operative IDSs and its application to the Common Intrusion Detection Framework (CIDF). The third contribution is a novel approach to coordinating different IDSs for distributed event correlation.
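As a minimal, hedged sketch of the two detection classes described above (the profile data, threshold, and attack patterns are invented for illustration and are not from the book), anomaly detection flags behavior that deviates far from a learned normal profile, while misuse detection matches the event stream against signatures of known attacks:

```python
import statistics

# Hypothetical per-user profile: failed logins per hour observed during
# normal operation (illustrative data only)
normal_profile = [1, 0, 2, 1, 0, 1, 2, 1]
mu = statistics.mean(normal_profile)
sigma = statistics.pstdev(normal_profile) or 1.0

def anomaly_detect(observed: float, threshold: float = 3.0) -> bool:
    """Anomaly detection: flag behavior far outside the normal profile."""
    return abs(observed - mu) / sigma > threshold

# Misuse detection: compare the event stream against patterns of known attacks
KNOWN_ATTACK_PATTERNS = {("port_scan", "buffer_overflow"), ("su_attempt", "passwd_edit")}

def misuse_detect(event_sequence) -> bool:
    """Misuse detection: report any consecutive event pair that matches a signature."""
    pairs = set(zip(event_sequence, event_sequence[1:]))
    return bool(pairs & KNOWN_ATTACK_PATTERNS)

print(anomaly_detect(25))                                        # True: far outside normal
print(misuse_detect(["login", "port_scan", "buffer_overflow"]))  # True: known signature
```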