Written by leading industry experts, the Data Management Handbook is a comprehensive, single-volume guide to the most innovative ideas on how to plan, develop, and run a powerful data management function - as well as handle day-to-day operations. The book provides practical, hands-on guidance on the strategic, tactical, and technical aspects of data management, offering an inside look at how leading companies in various industries meet the challenges of moving to a data-sharing environment.
Based on interdisciplinary research into "Directional Change", a new data-driven approach to financial data analysis, Detecting Regime Change in Computational Finance: Data Science, Machine Learning and Algorithmic Trading applies machine learning to financial market monitoring and algorithmic trading. Directional Change is a new way of summarising price changes in the market. Instead of sampling prices at fixed intervals (such as daily closing in time series), it samples prices when the market changes direction ("zigzags"). By sampling data in a different way, this book lays out concepts which enable the extraction of information that other market participants may not be able to see. The book includes a Foreword by Richard Olsen and explores the following topics:
- Data science: as an alternative to time series, price movements in a market can be summarised as directional changes
- Machine learning for regime change detection: historical regime changes in a market can be discovered by a Hidden Markov Model
- Regime characterisation: normal and abnormal regimes in historical data can be characterised using indicators defined under Directional Change
- Market monitoring: by using historical characteristics of normal and abnormal regimes, one can monitor the market to detect whether the market regime has changed
- Algorithmic trading: regime tracking information can help us to design trading algorithms
It will be of great interest to researchers in computational finance, machine learning and data science. About the authors: Jun Chen received his PhD in computational finance from the Centre for Computational Finance and Economic Agents, University of Essex, in 2019. Edward P. K. Tsang is an Emeritus Professor at the University of Essex, where he co-founded the Centre for Computational Finance and Economic Agents in 2002.
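The "zigzag" sampling idea can be sketched in a few lines. The sketch below is illustrative only - the threshold value, names, and event format are assumptions, not taken from the book: a directional-change event is recorded whenever the price reverses by more than a threshold theta from the last extreme.

```python
def directional_changes(prices, theta=0.02):
    """Summarise a price series as directional-change ("zigzag") events.

    Instead of sampling at fixed time intervals, record an event whenever
    the price reverses by more than the fraction `theta` from the last
    extreme. Returns a list of (index, price, 'up' or 'down') tuples.
    """
    events = []
    extreme = prices[0]   # last local extreme seen in the current trend
    uptrend = True        # initial trend is arbitrary; the first event corrects it
    for i, p in enumerate(prices):
        if uptrend:
            if p > extreme:
                extreme = p                       # new local maximum
            elif p <= extreme * (1 - theta):      # downturn confirmed
                events.append((i, p, 'down'))
                uptrend, extreme = False, p
        else:
            if p < extreme:
                extreme = p                       # new local minimum
            elif p >= extreme * (1 + theta):      # upturn confirmed
                events.append((i, p, 'up'))
                uptrend, extreme = True, p
    return events
```

With a 2% threshold, a series that rises, falls back, and recovers yields one 'down' and one 'up' event, each stamped at the point where the reversal exceeded the threshold.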
The history of the computer, and of the industry it spawned, is the latest entrant into the field of historical studies. Scholars beginning to turn their attention to the subject of computing need James Cortada's "Archives of Data Processing History" as a brief introduction to sources immediately available for investigation. Each essay provides an overview of a major government, academic, or industrial archival collection dealing with the history of computing, the industry, and its leaders and is written by the archivist/historian who has worked with or is responsible for the collection. The archives give practical information on hours, organization, contacts, telephone numbers, survey of contents, and assessments of the historical significance of the collections and their institutions. Reference and business librarians will definitely want to add this volume to their collections. Those interested in the history of technology, the business history of the industry, and the history of major institutions will want to consult it.
Grid computing promises to transform the way organizations and individuals compute, communicate, and collaborate. Computational and Data Grids: Principles, Applications and Design offers critical perspectives on theoretical frameworks, methodologies, implementations, and cutting edge research in grid computing, bridging the gap between academia and the latest achievements of the computer industry. Useful for professionals and students involved or interested in the study, use, design, and development of grid computing, this book highlights both the basics of the field and in depth analyses of grid networks.
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems. As such, it concentrates on the main notions of the quantum mechanical framework and describes an innovative range of concepts and tools for modeling information representation and retrieval processes. The book is divided into four chapters. Chapter 1 illustrates the main modeling concepts for information retrieval (including Boolean logic, vector spaces, probabilistic models, and machine-learning based approaches), which will be examined further in subsequent chapters. Next, chapter 2 briefly explains the main concepts of the quantum mechanical framework, focusing on approaches linked to information retrieval such as interference, superposition and entanglement. Chapter 3 then reviews the research conducted at the intersection between information retrieval and the quantum mechanical framework. The chapter is subdivided into a number of topics, and each description ends with a section suggesting the most important reference resources. Lastly, chapter 4 offers suggestions for future research, briefly outlining the most essential and promising research directions to fully leverage the quantum mechanical framework for effective and efficient information retrieval systems. This book is especially intended for researchers working in information retrieval, database systems and machine learning who want to acquire a clear picture of the potential offered by the quantum mechanical framework in their own research area. Above all, the book offers clear guidance on whether, why and when to effectively use the mathematical formalism and the concepts of the quantum mechanical framework to address various foundational issues in information retrieval.
This book shows healthcare professionals how to turn data points into meaningful knowledge upon which they can take effective action. Actionable intelligence can take many forms, from informing health policymakers on effective strategies for the population to providing direct and predictive insights on patients to healthcare providers so they can achieve positive outcomes. It can assist those performing clinical research where relevant statistical methods are applied to both identify the efficacy of treatments and improve clinical trial design. It also benefits healthcare data standards groups through which pertinent data governance policies are implemented to ensure quality data are obtained, measured, and evaluated for the benefit of all involved. Although the obvious constant thread among all of these important healthcare use cases of actionable intelligence is the data at hand, such data in and of itself merely represents one element of the full structure of healthcare data analytics. This book examines the structure for turning data into actionable knowledge and discusses:
- The importance of establishing research questions
- Data collection policies and data governance
- Principle-centered data analytics to transform data into information
- Understanding the "why" of classified causes and effects
- Narratives and visualizations to inform all interested parties
Actionable Intelligence in Healthcare is an important examination of how proper healthcare-related questions should be formulated, how relevant data must be transformed to associated information, and how the processing of information relates to knowledge. It indicates to clinicians and researchers why this relative knowledge is meaningful and how best to apply such newfound understanding for the betterment of all.
Advanced Signature Indexing for Multimedia and Web Applications presents the latest research developments in signature-based indexing and query processing, specifically in multimedia and Web domains. These domains now demand a different designation of hashing information in bit-strings (i.e., signatures), and new indexes and query processing methods. The book provides solutions to these issues and addresses the resulting requirements, which are not adequately handled by existing approaches. Examples of these applications include: searching for similar images, representing multi-theme layers in maps, recommending products to Web-clients, and indexing large Web-log files. Special emphasis is given to structure description, implementation techniques and clear evaluation of operations performed (from a performance perspective). Advanced Signature Indexing for Multimedia and Web Applications is an excellent reference for professionals involved in the development of applications in multimedia databases or the Web and may also serve as a textbook for advanced level courses in database and information retrieval systems.
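The core idea of signature files can be illustrated with superimposed coding: each term of a record is hashed to a few bit positions, which are OR-ed into one bit-string, and a query whose signature is a bit-subset of a record's signature may match (false positives are possible, false negatives are not). The sketch below is a minimal illustration; the signature width, hash scheme, and names are assumptions, not the book's design.

```python
import hashlib

SIG_BITS = 64       # width of each signature bit-string (illustrative)
BITS_PER_TERM = 3   # bits set per term (illustrative)

def signature(terms):
    """Superimpose the hashed bit positions of each term into one bit-string."""
    sig = 0
    for term in terms:
        for k in range(BITS_PER_TERM):
            h = hashlib.sha256(f"{term}:{k}".encode()).digest()
            sig |= 1 << (int.from_bytes(h[:4], "big") % SIG_BITS)
    return sig

def maybe_contains(record_sig, query_terms):
    """True if the record *may* contain all query terms.

    Signature filtering admits false positives (a record may pass the test
    yet not match), but never false negatives: a true match always passes.
    """
    q = signature(query_terms)
    return record_sig & q == q
```

In a signature index, this cheap bitwise test filters the candidate set before the (expensive) exact check against the actual records.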
A timely survey of the field from the point of view of some of the subject's most active researchers. Divided into several parts organized by theme, the book first covers the underlying methodology regarding active rules, followed by formal specification, rule analysis, performance analysis, and support tools. It then moves on to the implementation of active rules in a number of commercial systems, before concluding with applications and future directions for research. All researchers in databases will find this a valuable overview of the topic.
As design complexity in chips and devices continues to rise, so, too, does the demand for functional verification. Principles of Functional Verification is a hands-on, practical text that will help train professionals in the field of engineering on the methodology and approaches to verification.
Time is ubiquitous in information systems. Almost every enterprise faces the problem of its data becoming out of date. However, such data is often valuable, so it should be archived and some means to access it should be provided. Also, some data may be inherently historical, e.g., medical, cadastral, or judicial records. Temporal databases provide a uniform and systematic way of dealing with historical data. Many languages have been proposed for temporal databases, among others temporal logic. Temporal logic combines abstract, formal semantics with the amenability to efficient implementation. This chapter shows how temporal logic can be used in temporal database applications. Rather than presenting new results, we report on recent developments and survey the field in a systematic way using a unified formal framework [GHR94; Cho94]. The handbook [GHR94] is a comprehensive reference on mathematical foundations of temporal logic. In this chapter we study how temporal logic is used as a query and integrity constraint language. Consequently, model-theoretic notions, particularly formula satisfaction, are of primary interest. Axiomatic systems and proof methods for temporal logic [GHR94] have so far found relatively few applications in the context of information systems. Moreover, one needs to bear in mind that for the standard linearly-ordered time domains temporal logic is not recursively axiomatizable [GHR94], so recursive axiomatizations are by necessity incomplete.
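Formula satisfaction over a finite history - the model-theoretic notion the chapter centres on - can be sketched with a tiny past-time temporal language. The evaluator below is illustrative only; the operator set and tuple encoding are assumptions for the sketch, not the chapter's formalism.

```python
def holds(formula, history, t):
    """Evaluate a past-time temporal formula at time t over a finite
    database history (a list of sets of facts, one set per state).

    Formulas are nested tuples:
      ("fact", f)       - f is in the snapshot at time t
      ("prev", p)       - p held at the previous state
      ("once", p)       - p held at some state s <= t
      ("since", p, q)   - q held at some s <= t and p held at every
                          state strictly after s up to t
    """
    op = formula[0]
    if op == "fact":
        return formula[1] in history[t]
    if op == "prev":
        return t > 0 and holds(formula[1], history, t - 1)
    if op == "once":
        return any(holds(formula[1], history, s) for s in range(t + 1))
    if op == "since":
        return any(holds(formula[2], history, s)
                   and all(holds(formula[1], history, u)
                           for u in range(s + 1, t + 1))
                   for s in range(t + 1))
    raise ValueError(f"unknown operator {op!r}")
```

For example, over the history [{"hired"}, {"employed"}, {"employed"}], the constraint "employed since hired" is satisfied at the last state, which is how such a formula can serve as a temporal integrity constraint over archived data.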
"Handbook of Open Source Tools" introduces a comprehensive collection of advanced open source tools useful in developing software applications. The book contains information on more than 200 open-source tools which include software construction utilities for compilers, virtual machines, databases, graphics, high-performance computing, OpenGL, geometry, algebra, graph theory, GUIs and more. Special highlights for software construction utilities and application libraries are included. Each tool is covered in the context of a real-life application development setting. This unique handbook presents a comprehensive discussion of advanced tools, a valuable asset used by most application developers and programmers; includes a special focus on Mathematical Open Source Software not available in most Open Source Software books; and introduces several tools (e.g., ACL2, CLIPS, CUDA, and COIN) which are not known outside of select groups, but are very powerful. "Handbook of Open Source Tools" is designed for application developers and programmers working with Open Source Tools. Advanced-level students concentrating on Engineering, Mathematics and Computer Science will find this reference a valuable asset as well.
CHARME '97 is the ninth in a series of working conferences devoted to the development and use of formal techniques in digital hardware design and verification. This series is held in collaboration with IFIP WG 10.5. Previous meetings were held in Europe every other year.
The need to electronically store, manipulate and analyze large-scale, high-dimensional data sets requires new computational methods. This book presents new intelligent data management methods and tools, including new results from the field of inference. Leading experts also map out future directions of intelligent data analysis. This book will be a valuable reference for researchers exploring the interdisciplinary area between statistics and computer science as well as for professionals applying advanced data analysis methods in industry.
This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that provide more background on some of the more abstract concepts.
Requiring heterogeneous information systems to cooperate and communicate has now become crucial, especially in application areas like e-business, Web-based mash-ups and the life sciences. Such cooperating systems have to automatically and efficiently match, exchange, transform and integrate large data sets from different sources and of different structure in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati and Rahm provides an overview of the ways in which the schema and ontology matching and mapping tools have addressed the above requirements and points to the open technical challenges. The contributions from leading experts are structured into three parts: large-scale and knowledge-driven schema matching, quality-driven schema mapping and evolution, and evaluation and tuning of matching tasks. The authors describe the state of the art by discussing the latest achievements such as more effective methods for matching data, mapping transformation verification, adaptation to the context and size of the matching and mapping tasks, mapping-driven schema evolution and merging, and mapping evaluation and tuning. The overall result is a coherent, comprehensive picture of the field. With this book, the editors introduce graduate students and advanced professionals to this exciting field. For researchers, they provide an up-to-date source of reference about schema and ontology matching, schema and ontology evolution, and schema merging.
The subject of error-control coding bridges several disciplines, in particular mathematics, electrical engineering and computer science. The theory of error-control codes is often described abstractly in mathematical terms only, for the benefit of other coding specialists. Such a theoretical approach to coding makes it difficult for engineers to understand the underlying concepts of error correction, the design of digital error-control systems, and the quantitative behavior of such systems. In this book only a minimal amount of mathematics is introduced in order to describe the many, sometimes mathematical, aspects of error-control coding. The concepts of error correction and detection are in many cases sufficiently straightforward to avoid highly theoretical algebraic constructions. The reader will find that the primary emphasis of the book is on practical matters, not on theoretical problems. In fact, much of the material covered is summarized by examples of real developments, and almost all of the error-correction and detection codes introduced are attached to related practical applications. Error-Control Coding for Data Networks takes a structured approach to channel coding, starting with the basic coding concepts and working gradually towards the most sophisticated coding systems. The most popular applications are described throughout the book. These applications include the channel-coding techniques used in mobile communication systems, such as the global system for mobile communications (GSM) and the code-division multiple-access (CDMA) system, coding schemes for the High-Definition TeleVision (HDTV) system, the Compact Disk (CD), and Digital Video Disk (DVD), as well as the error-control protocols for the data-link layers of networks, and much more. The book is compiled carefully to bring engineers, coding specialists, and students up to date in the important modern coding technologies.
Both electrical engineering students and communication engineers will benefit from the information in this largely self-contained text on error-control system engineering.
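A minimal example of the straightforward end of error correction described above is the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and corrects any single-bit error. The sketch below is a generic textbook construction, not code from the book.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword
    (positions 1..7: p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single-bit error (if any) and return the 4 data bits.

    The three recomputed parity checks form a syndrome whose value is
    exactly the 1-based position of the flipped bit (0 means no error).
    """
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # check over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]
```

Flipping any one of the seven transmitted bits still yields the original data after decoding; with two or more flips the code can only detect, not correct, which is why stronger codes are needed for noisier channels.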
It is good to mark the new Millennium by looking back as well as forward. Whatever Shines Should Be Observed looks to the nineteenth century to celebrate the achievements of five distinguished women, four of whom were born in Ireland while the fifth married into an Irish family, who made pioneering contributions to photography, microscopy, astronomy and astrophysics. The women featured came from either aristocratic or professional families. Thus, at first sight, they had many material advantages among their peers. In the ranks of the aristocracy there was often a great passion for learning, and the mansions in which these families lived contained libraries, technical equipment (microscopes and telescopes) and collections from the world of nature. More modest professional households of the time were rich in books, while activities such as observing the stars, collecting plants etc. typically formed an integral part of the children's education. To balance this it was the prevailing philosophy that boys could learn, in addition to basic subjects, mathematics, mechanics, physics, chemistry and classical languages, while girls were channelled into 'polite' subjects like music and needlework. This arrangement allowed boys to progress to University should they so wish, where a range of interesting career choices (including science and engineering) was open to them. Girls, on the other hand, usually received their education at home, often under the tutelage of a governess who would not herself have had any serious contact with scientific or technical subjects. In particular, progress to University was not during most of the nineteenth century an option for women, and access to scientific libraries and institutions was also prohibited.
Although those women with aristocratic and professional backgrounds were in a materially privileged position and had an opportunity to 'see' through the activities of their male friends and relatives how professional scientific life was lived, to progress from their places in society to the professions required very special determination. Firstly, they had to individually acquire scientific and technical knowledge, as well as necessary laboratory methodology, without the advantage of formal training. Then, it was necessary to carve out a niche in a particular field, despite the special difficulties attending the publication of scientific books or articles by a woman. There was no easy road to science, or even any well worn track. To achieve recognition was a pioneering activity without discernible ground rules. With the hindsight of history, we recognise that the heroic efforts which the women featured in this volume made to overcome the social constraints that held them back from learning about, and participating in, scientific and technical subjects, had a consequence on a much broader canvas. In addition to what they each achieved professionally they contributed within society to a gradual erosion of those barriers raised against the participation of women in academic life, thereby assisting in allowing University places and professional opportunities to gradually become generally available. It is a privilege to salute and thank the wonderful women of the nineteenth century herein described for what they have contributed to the women of today. William Herschel's famous motto quicquid nitet notandum (whatever shines should be observed) applies in a particular way to the luminous quality of their individual lives, and those of us who presently observe their shining, as well as those who now wait in the wings of the coming centuries to emerge upon the scene, can each see a little further by their light.
This book surveys recent advances in Conversational Information Retrieval (CIR), focusing on neural approaches that have been developed in the last few years. Progress in deep learning has brought tremendous improvements in natural language processing (NLP) and conversational AI, leading to a plethora of commercial conversational services that allow naturally spoken and typed interaction, increasing the need for more human-centric interactions in IR. The book contains nine chapters. Chapter 1 motivates the research of CIR by reviewing the studies on how people search and subsequently defines a CIR system and a reference architecture which is described in detail in the rest of the book. Chapter 2 provides a detailed discussion of techniques for evaluating a CIR system – a goal-oriented conversational AI system with a human in the loop. Then Chapters 3 to 7 describe the algorithms and methods for developing the main CIR modules (or sub-systems). In Chapter 3, conversational document search is discussed, which can be viewed as a sub-system of the CIR system. Chapter 4 is about algorithms and methods for query-focused multi-document summarization. Chapter 5 describes various neural models for conversational machine comprehension, which generate a direct answer to a user query based on retrieved query-relevant documents, while Chapter 6 details neural approaches to conversational question answering over knowledge bases, which is fundamental to the knowledge base search module of a CIR system. Chapter 7 elaborates various techniques and models that aim to equip a CIR system with the capability of proactively leading a human-machine conversation. Chapter 8 reviews a variety of commercial systems for CIR and related tasks. It first presents an overview of research platforms and toolkits which enable scientists and practitioners to build conversational experiences, and continues with historical highlights and recent trends in a range of application areas. 
Chapter 9 eventually concludes the book with a brief discussion of research trends and areas for future work. The primary target audience of the book are the IR and NLP research communities. However, audiences with another background, such as machine learning or human-computer interaction, will also find it an accessible introduction to CIR.
News headlines about privacy invasions, discrimination, and biases discovered in the platforms of big technology companies are commonplace today, and big tech's reluctance to disclose how they operate counteracts ideals of transparency, openness, and accountability. This book is for computer science students and researchers who want to study big tech's corporate surveillance from an experimental, empirical, or quantitative point of view and thereby contribute to holding big tech accountable. As a comprehensive technical resource, it guides readers through the corporate surveillance landscape and describes in detail how corporate surveillance works, how it can be studied experimentally, and what existing studies have found. It provides a thorough foundation in the necessary research methods and tools, and introduces the current research landscape along with a wide range of open issues and challenges. The book also explains how to consider ethical issues and how to turn research results into real-world change.
This book collects ECM research from the academic discipline of Information Systems and related fields to support academics and practitioners who are interested in understanding the design, use and impact of ECM systems. It also provides a valuable resource for students and lecturers in the field. Enterprise Content Management in Information Systems Research: Foundations, Methods and Cases consolidates our current knowledge on how today's organizations can manage their digital information assets. The business challenges related to organizational information management include reducing search times, maintaining information quality, and complying with reporting obligations and standards. Many of these challenges are well-known in information management, but because of the vast quantities of information being generated today, they are more difficult to deal with than ever. Many companies use the term enterprise content management (ECM) to refer to the management of all forms of information, especially unstructured information. While ECM systems promise to increase and maintain information quality, to streamline content-related business processes, and to track the lifecycle of information, their implementation poses several questions and challenges: Which content objects should be put under the control of the ECM system? Which processes are affected by the implementation? How should outdated technology be replaced? Research is challenged to support practitioners in answering these questions.
This book captures and communicates the wealth of architecture experience Capgemini has gathered as a member of The Open Group (a vendor- and technology-neutral consortium formed by major industry players) in developing, deploying, and using its "Integrated Architecture Framework" (IAF) since its origination in 1993. Today, many elements of IAF have been incorporated into the new version 9 of TOGAF, the related Open Group standard. The authors, all working on and with IAF for many years, here provide a full reference to IAF and a guide on how to apply it. In addition, they describe in detail the relations between IAF and the architecture standards TOGAF and ArchiMate and other development or process frameworks like ITIL, CMMI, and RUP. Their presentation is targeted at architects, project managers, and process analysts who have either considered or are already working with IAF; they will find many roadmaps, case studies, checklists, and tips and advice for their daily work.
The domains of Pattern Recognition and Machine Learning have experienced exceptional interest and growth; however, the overwhelming number of methods and applications can make the fields seem bewildering. This text offers an accessible and conceptually rich introduction: a solid mathematical development emphasizing simplicity and intuition. Students beginning to explore pattern recognition do not need a suite of mathematically advanced methods or complicated computational libraries to understand and appreciate pattern recognition; rather, the fundamental concepts and insights, eminently teachable at the undergraduate level, motivate this text. This book provides methods of analysis that the reader can realistically undertake on their own, supported by real-world examples, case studies, and worked numerical / computational studies.