
Enterprise Information Systems II (Hardcover, 2001 ed.)
B. Sharp, Joaquim Filipe, Jose Cordeiro
R2,774 Discovery Miles 27 740 Ships in 18 - 22 working days

This book comprises the refereed papers, together with the invited keynote papers, presented at the Second International Conference on Enterprise Information Systems. The conference was organised by the School of Computing at Staffordshire University, UK, and the Escola Superior de Tecnologia of Setubal, Portugal, in cooperation with the British Computer Society and the International Federation for Information Processing, Working Group 8.1. The purpose of this 2nd International Conference was to bring together researchers, engineers and practitioners interested in the advances in and business applications of information systems. The papers demonstrate the vitality and vibrancy of the field of Enterprise Information Systems. The research papers included here were selected from among 143 submissions from 32 countries in the following four areas: Enterprise Database Applications, Artificial Intelligence Applications and Decision Support Systems, Systems Analysis and Specification, and Internet and Electronic Commerce. Every paper had at least two reviewers drawn from 10 countries. The papers included in this book were recommended by the reviewers. On behalf of the conference organising committee, we would like to thank all the members of the Programme Committee for their work in reviewing and selecting the papers that appear in this volume. We would also like to thank all the authors who have submitted their papers to this conference, and would like to apologise to the authors whose papers we were unable to include, and wish them success next year.

Indexing Techniques for Advanced Database Systems (Hardcover, 1997 ed.)
Elisa Bertino, Beng Chin Ooi, Ron Sacks-Davis, Kian-Lee Tan, Justin Zobel, …
R4,150 Discovery Miles 41 500 Ships in 18 - 22 working days

Recent years have seen an explosive growth in the use of new database applications such as CAD/CAM systems, spatial information systems, and multimedia information systems. The needs of these applications are far more complex than traditional business applications. They call for support of objects with complex data types, such as images and spatial objects, and for support of objects with wildly varying numbers of index terms, such as documents. Traditional indexing techniques such as the B-tree and its variants do not efficiently support these applications, and so new indexing mechanisms have been developed. As a result of the demand for database support for new applications, there has been a proliferation of new indexing techniques. The need for a book addressing indexing problems in advanced applications is evident. For practitioners and database and application developers, this book explains best practice, guiding the selection of appropriate indexes for each application. For researchers, this book provides a foundation for the development of new and more robust indexes. For newcomers, this book is an overview of the wide range of advanced indexing techniques. Indexing Techniques for Advanced Database Systems is suitable as a secondary text for a graduate level course on indexing techniques, and as a reference for researchers and practitioners in industry.
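
The blurb's point about objects with widely varying numbers of index terms is easiest to see with an inverted index, the kind of structure used for documents; the minimal Python sketch below (documents and query terms are invented for illustration) shows how one record contributes many index entries, which a one-key-per-record B-tree index does not model directly:

```python
from collections import defaultdict

# Minimal inverted index: maps each term to the set of documents containing
# it, so a document may contribute any number of index terms. The documents
# below are invented for illustration.
docs = {
    1: "spatial index structures for multimedia databases",
    2: "b-tree variants for business data",
    3: "indexing spatial objects and images",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# Documents containing both query terms: a set intersection over the index.
print(index["spatial"] & index["indexing"])  # {3}
```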

Semistructured Database Design (Hardcover, 2005 ed.)
Tok Wang Ling, Gillian Dobbie
R2,752 Discovery Miles 27 520 Ships in 18 - 22 working days

Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases.

This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others.

Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.

Parallel Computing on Distributed Memory Multiprocessors (Hardcover, 1993 ed.)
Fusun Oezguner, Fikret Ercal
R2,837 Discovery Miles 28 370 Ships in 18 - 22 working days

Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.

Ontology-Based Query Processing for Global Information Systems (Hardcover, 2001 ed.)
Eduardo Mena, Arantza Illarramendi
R4,132 Discovery Miles 41 320 Ships in 18 - 22 working days

Today we are witnessing an exponential growth of information accumulated within universities, corporations, and government organizations. Autonomous repositories that store different types of digital data in multiple formats are becoming available for use on the fast-evolving global information systems infrastructure. More concretely, with the World Wide Web and related internetworking technologies, there has been an explosion in the types, availability, and volume of data accessible to a global information system. However, this information overload makes it nearly impossible for users to be aware of the locations, organization or structures, query languages, and semantics of the information in various repositories. Available browsing and navigation tools assist users in locating information resources on the Internet. However, there is a real need to complement current browsing and keyword-based techniques with concept-based approaches. An important next step should be to support queries that do not contain information describing location or manipulation of relevant resources. Ontology-Based Query Processing for Global Information Systems describes an initiative for enhancing query processing in a global information system. The following are some of the relevant features: Providing semantic descriptions of data repositories using ontologies; Dealing with different vocabularies so that users are not forced to use a common one; Defining a strategy that permits the incremental enrichment of answers by visiting new ontologies; Managing imprecise answers and estimations of the incurred loss of information. In summary, technologies such as information brokerage, domain ontologies, and estimation of imprecision in answers based on vocabulary heterogeneity have been synthesized with Internet computing, representing an advance in developing semantics-based information access on the Web. Theoretical results are complemented by the presentation of a prototype that implements the main ideas presented in this book. Ontology-Based Query Processing for Global Information Systems is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.

Real-Time Systems Engineering and Applications (Hardcover, 1992 ed.)
Michael Schiebe, Saskia Pferrer
R5,391 Discovery Miles 53 910 Ships in 18 - 22 working days

Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.

Data-Driven Optimization and Knowledge Discovery for an Enterprise Information System (Hardcover, 2015 ed.)
Qing Duan, Krishnendu Chakrabarty, Jun Zeng
R2,658 Discovery Miles 26 580 Ships in 18 - 22 working days

This book provides a comprehensive set of optimization and prediction techniques for an enterprise information system. Readers with a background in operations research, system engineering, statistics, or data analytics can use this book as a reference to derive insight from data and use this knowledge as guidance for production management. The authors identify the key challenges in enterprise information management and present results that have emerged from leading-edge research in this domain. Coverage includes topics ranging from task scheduling and resource allocation, to workflow optimization, process time and status prediction, order admission policies optimization, and enterprise service-level performance analysis and prediction. With its emphasis on the above topics, this book provides an in-depth look at enterprise information management solutions that are needed for greater automation and reconfigurability-based fault tolerance, as well as to obtain data-driven recommendations for effective decision-making.

Brain Art - Brain-Computer Interfaces for Artistic Expression (Hardcover, 1st ed. 2019)
Anton Nijholt
R4,661 Discovery Miles 46 610 Ships in 10 - 15 working days

This is the first book on brain-computer interfaces (BCI) that aims to explain how these BCI interfaces can be used for artistic goals. Devices that measure changes in brain activity in various regions of our brain are available and they make it possible to investigate how brain activity is related to experiencing and creating art. Brain activity can also be monitored in order to find out about the affective state of a performer or bystander and use this knowledge to create or adapt an interactive multi-sensorial (audio, visual, tactile) piece of art. Making use of the measured affective state is just one of the possible ways to use BCI for artistic expression. We can also stimulate brain activity. It can be evoked externally by exposing our brain to external events, whether they are visual, auditory, or tactile. Knowing about the stimuli and the effect on the brain makes it possible to translate such external stimuli to decisions and commands that help to design, implement, or adapt an artistic performance, or interactive installation. Stimulating brain activity can also be done internally. Brain activity can be voluntarily manipulated and changes can be translated into computer commands to realize an artistic vision. The chapters in this book have been written by researchers in human-computer interaction, brain-computer interaction, neuroscience, psychology and social sciences, often in cooperation with artists using BCI in their work. It is the perfect book for those seeking to learn about brain-computer interfaces used for artistic applications.

Writing for the Computer Screen (Hardcover)
Hilary Goodall, Susan Smith Reilly
R2,028 Discovery Miles 20 280 Ships in 18 - 22 working days

As the use of computerized information continues to proliferate, so does the need for a writing method suited to this new medium. In "Writing for the Computer Screen," Hilary Goodall and Susan Smith Reilly call attention to new forms of information display unique to computers. The authors draw upon years of professional experience in business and education to present practical computer display techniques. This book examines the shortfalls of using established forms of writing for the computer, where information needed in a hurry can be buried in a cluttered screen. Such problems can be minimized if screen design is guided by the characteristics of the medium.

The Modern Algebra of Information Retrieval (Hardcover, 2008 ed.)
Sandor Dominich
R3,286 Discovery Miles 32 860 Ships in 18 - 22 working days

This book takes a unique approach to information retrieval by laying down the foundations for a modern algebra of information retrieval based on lattice theory. All major retrieval methods developed so far are described in detail (Boolean, Vector Space and probabilistic methods, but also Web retrieval algorithms like PageRank, HITS, and SALSA), and the author shows that they can all be treated elegantly in a unified formal way, using lattice theory as the one basic concept. Further, he also demonstrates that the lattice-based approach to information retrieval allows us to formulate new retrieval methods.

Sandor Dominich's presentation is characterized by an engineering-like approach, describing all methods and technologies with as much mathematics as needed for clarity and exactness. His readers in both computer science and mathematics will learn how one single concept can be used to understand the most important retrieval methods, to propose new ones, and also to gain new insights into retrieval modeling in general. Thus, his book is appropriate for researchers and graduate students, who will additionally benefit from the many exercises at the end of each chapter.
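
Of the Web retrieval algorithms named above, PageRank is the simplest to illustrate. Below is a minimal power-iteration sketch in Python over an invented three-page link graph; this is the standard textbook formulation, not the lattice-based treatment the book develops:

```python
# Standard PageRank via power iteration; the three-page link graph is
# invented for illustration (not an example from the book).
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
d = 0.85                                    # damping factor
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                         # iterate to (near) convergence
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outlinks in links.items():
        for q in outlinks:                  # p shares its rank among its outlinks
            new[q] += d * rank[p] / len(outlinks)
    rank = new

print({p: round(r, 3) for p, r in rank.items()})
# C ranks highest: it receives links from both A and B.
```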

Materials Challenges and Testing for Manufacturing, Mobility, Biomedical Applications and Climate (Hardcover, 2014 ed.)
Werasak Udomkichdecha, Thomas Boellinghaus, Anchalee Manonukul, Jurgen Lexow
R3,354 Discovery Miles 33 540 Ships in 10 - 15 working days

In two parts, the book focuses on materials science developments in the areas of:
1) Materials Data and Informatics: materials data quality and infrastructure; materials databases; materials data mining, image analysis, data-driven materials discovery, and data visualization.
2) Materials for Tomorrow's Energy Infrastructure: pipeline, transport and storage materials for future fuels (biofuels, hydrogen, natural gas, ethanol, etc.); materials for renewable energy technologies.
This book presents selected contributions of exceptional young postdoctoral scientists to the 4th WMRIF Workshop for Young Scientists, hosted by the National Institute of Standards and Technology at the NIST site in Boulder, Colorado, USA, September 8 to 10, 2014.

Geographic Information Metadata for Spatial Data Infrastructures - Resources, Interoperability and Information Retrieval (Hardcover, 2005 ed.)
Javier Nogueras-Iso, Francisco Javier Zarazaga-Soria, Pedro R Muro-Medrano
R4,161 Discovery Miles 41 610 Ships in 18 - 22 working days

Metadata play a fundamental role in both digital libraries (DLs) and spatial data infrastructures (SDIs). Commonly defined as "structured data about data," "data which describe attributes of a resource," or, more simply, "information about data," metadata are an essential requirement for locating and evaluating available data. This book therefore focuses on the study of different metadata aspects that contribute to a more efficient use of DLs and SDIs. The three main issues addressed are: the management of nested collections of resources, the interoperability between metadata schemas, and the integration of information retrieval techniques into the discovery services of geographic data catalogs (thereby helping to avoid metadata content heterogeneity).
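
To make "structured data about data" concrete, here is a hypothetical discovery-metadata record for a geographic dataset, sketched in Python; the field names and values are invented and only loosely follow the kind of catalog metadata discussed here:

```python
# Hypothetical discovery-metadata record for a geographic dataset.
# Field names and values are invented for illustration only.
record = {
    "title": "Land cover map of the Ebro basin",
    "abstract": "Raster land-cover classification derived from satellite imagery.",
    "keywords": ["land cover", "raster", "Ebro"],
    "spatial_extent": {"west": -2.2, "east": 0.8, "south": 40.5, "north": 43.0},
    "format": "GeoTIFF",
    "date": "2004-06-01",
}

# A catalog's discovery service answers queries against such records without
# touching the data itself, e.g. "datasets whose bounding box covers a point":
def covers(rec, lon, lat):
    e = rec["spatial_extent"]
    return e["west"] <= lon <= e["east"] and e["south"] <= lat <= e["north"]

print(covers(record, 0.0, 41.6))  # True
```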

Advanced Transaction Models and Architectures (Hardcover, 1997 ed.)
Sushil Jajodia, Larry Kerschberg
R4,226 Discovery Miles 42 260 Ships in 18 - 22 working days

Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.

The Definitive Guide to Apache mod_rewrite (Hardcover, 1st ed.)
Rich Bowen
R1,417 Discovery Miles 14 170 Ships in 18 - 22 working days

The organization of websites is highly dynamic and often chaotic. It is therefore crucial that web servers be able to manipulate URLs, in order to cope with temporarily or permanently relocated resources, prevent attacks by automated worms, and control resource access.

The Apache mod_rewrite module has long inspired fits of joy because it offers an unparalleled toolset for manipulating URLs. "The Definitive Guide to Apache mod_rewrite" guides you through configuration and use of the module for a variety of purposes, including basic and conditional rewrites, access control, virtual host maintenance, and proxies.
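
The Apache directives themselves are the book's subject, so the sketch below is only a rough Python analogue of the idea (not Apache syntax): an ordered list of pattern/target rules applied to an incoming URL path, covering a relocated resource and a simple access-control denial. All paths and rules are invented:

```python
import re

# Rough Python analogue (NOT Apache syntax) of rule-based URL rewriting:
# ordered pattern/target rules applied to an incoming path. The paths and
# rules below are invented for illustration.
RULES = [
    (re.compile(r"^/old-docs/(.*)$"), r"/docs/\1"),  # relocated resource
    (re.compile(r"^/private/.*$"), None),            # access control: deny
]

def rewrite(path):
    for pattern, target in RULES:
        if pattern.match(path):
            if target is None:
                return "403 Forbidden"
            return pattern.sub(target, path)
    return path  # no rule matched; serve the path unchanged

print(rewrite("/old-docs/setup.html"))  # -> /docs/setup.html
print(rewrite("/private/key.txt"))      # -> 403 Forbidden
```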

This book was authored by Rich Bowen, noted Apache expert and Apache Software Foundation member, and draws on his years of experience administering the Apache server, and regularly speaking and writing about it.

Nearest Neighbor Search - A Database Perspective (Hardcover, 2005 ed.)
Apostolos N. Papadopoulos, Yannis Manolopoulos
R2,755 Discovery Miles 27 550 Ships in 18 - 22 working days

Modern applications are both data- and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time-series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency.

This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
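
To make the notion of a nearest neighbor query concrete, here is a brute-force version in Python over invented 2-D points; a spatial database would answer the same query through an index structure (such as an R-tree) rather than this linear scan:

```python
import math

# Brute-force nearest-neighbor query over 2-D points (invented data).
# A spatial DBMS would use an index (e.g. an R-tree) instead of scanning.
points = {"hotel": (2.0, 3.0), "museum": (5.0, 1.0), "park": (1.0, 1.0)}

def nearest(query, pts):
    # Return the name of the point with minimum Euclidean distance.
    return min(pts, key=lambda name: math.dist(query, pts[name]))

print(nearest((1.2, 1.5), points))  # 'park' is closest to the query point
```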

Mining Spatio-Temporal Information Systems (Hardcover, 2002 ed.)
Roy Ladner, Kevin Shaw, Mahdi Abdelguerfi
R2,746 Discovery Miles 27 460 Ships in 18 - 22 working days

Mining Spatio-Temporal Information Systems, an edited volume, is composed of chapters from leading experts in the field of spatio-temporal information systems and addresses the many issues involved in their modeling, creation, querying, visualizing and mining. The volume is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design and implementation, and to the role of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.
Mining Spatio-Temporal Information Systems is structured to meet the needs of practitioners and researchers in industry and graduate-level students in Computer Science.

Foundations of Dependable Computing - System Implementation (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,190 Discovery Miles 41 900 Ships in 18 - 22 working days

Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer), subtitled Models and Frameworks for Dependable Systems, presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in this volume, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.

Facilitating Technology Transfer through Partnership (Hardcover, 1997 ed.)
Tom McMaster, E. Mumford, E. B. Swanson, B Warboys, David Wastell
R4,230 Discovery Miles 42 300 Ships in 18 - 22 working days

The primary aim of this book is to gather and collate articles that represent the best and latest thinking in the domain of technology transfer, from research, academia and practice around the world. We envisage that the book will, as a result, represent an important source of knowledge in this domain for students (undergraduate and postgraduate), researchers, practitioners and consultants, chiefly in the software engineering and IT industries, but also in management and other organisational and social disciplines. An important aspect of the book is the role that reflective practitioners (and not just academics) play: they are involved in the production and evaluation of contributions, as well as in the design and delivery of the conference events on which the book is based.

Text Retrieval and Filtering - Analytic Models of Performance (Hardcover, 1998 ed.)
Robert M. Losee
R5,279 Discovery Miles 52 790 Ships in 18 - 22 working days

Text Retrieval and Filtering: Analytic Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case with multiple terms, both with term independence and term dependence, and for the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. The book focuses on the performance of systems that retrieve natural language text, considering full sentences as well as phrases and individual words. The last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Those interested in grammatical information should be cautioned to carefully examine earlier chapters, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. The book is suitable as a secondary text for a graduate-level course on information retrieval or linguistics, and as a reference for researchers and practitioners in industry.
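
For readers unfamiliar with the term-independence assumption mentioned above, the classical binary independence model illustrates it: under independence, a document's retrieval score decomposes into a sum of per-term log-odds weights, which is what lets single-term analysis extend to multi-term queries. A minimal Python sketch with invented probabilities (this is the standard textbook model, not necessarily the book's own formulation):

```python
import math

# Binary independence model sketch: under term independence, a document's
# score is the sum of per-term log-odds weights for the query terms it
# contains. For each term, p = P(term present | relevant) and
# q = P(term present | not relevant). All probabilities are invented.
probs = {
    "retrieval": (0.8, 0.2),
    "filtering": (0.6, 0.3),
    "banana":    (0.1, 0.1),   # equally likely either way: weight is 0
}

def weight(p, q):
    return math.log((p * (1 - q)) / (q * (1 - p)))

def score(doc_terms, query):
    return sum(weight(*probs[t]) for t in query if t in doc_terms)

print(score({"retrieval", "filtering"}, ["retrieval", "filtering", "banana"]))
```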

Foundations of Dependable Computing - Models and Frameworks for Dependable Systems (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,159 Discovery Miles 41 590 Ships in 18 - 22 working days

Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer) subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Compression and Coding Algorithms (Hardcover, 2002 ed.)
Alistair Moffat, Andrew Turpin
R1,561 Discovery Miles 15 610 Ships in 18 - 22 working days

Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one mechanism, but there have been many others developed over the past few decades, and this book describes, explains and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on computer, make this book a definitive reference in an area currently without one.
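
As a taste of the subject matter, here is the well-known greedy Huffman construction sketched in Python; this is the standard textbook algorithm, not code from the book:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Standard greedy Huffman construction (not the book's pseudo-code).

    Repeatedly merge the two least frequent subtrees; each merge prepends
    '0' to codewords in one subtree and '1' to codewords in the other.
    """
    # Each heap entry: [subtree frequency, unique tiebreaker, symbol->code map].
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, i, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, i, merged])
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)  # the most frequent symbol 'a' gets the shortest codeword
print(sum(len(codes[ch]) for ch in "abracadabra"))  # total encoded bits: 23
```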

Foundations of Dependable Computing - Paradigms for Dependable Applications (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,132 Discovery Miles 41 320 Ships in 18 - 22 working days

Foundations of Dependable Computing: Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer) subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Intelligent Multimedia Databases and Information Retrieval - Advancing Applications and Technologies (Hardcover, New)
Li Yan, Zongmin Ma
R4,933 Discovery Miles 49 330 Ships in 18 - 22 working days

As consumer costs for multimedia devices such as digital cameras and Web phones have decreased and diversity in the market has skyrocketed, the amount of digital information has grown considerably. Intelligent Multimedia Databases and Information Retrieval: Advancing Applications and Technologies details the latest information retrieval technologies and applications, the research surrounding the field, and the methodologies and design related to multimedia databases. Together with academic researchers and developers from both information retrieval and artificial intelligence fields, this book details issues and semantics of data retrieval with contributions from around the globe. As the information and data from multimedia databases continues to expand, the research and documentation surrounding it should keep pace as best as possible, and this book provides an excellent resource for the latest developments.

The Testability of Distributed Real-Time Systems (Hardcover, 1993 ed.)
Werner Schutz
R2,735 Discovery Miles 27 350 Ships in 18 - 22 working days

The Testability of Distributed Real-Time Systems starts by collecting and analyzing the principal problems, as well as their interrelations, that one has to keep in mind when testing a distributed real-time system. The book discusses them in some detail from the viewpoints of software engineering, distributed systems principles, and real-time system development. These problems are organization, observability, reproducibility, the host/target approach, environment simulation, and (test) representativity. Based on this framework, the book summarizes and evaluates the current work done in this area before going on to argue that the particular system architecture (hardware plus operating system) has a much greater influence on testing than is the case for 'ordinary', non-real-time software. The notions of event-triggered and time-triggered system architectures are introduced, and it is shown that time-triggered systems 'automatically' (i.e., by the nature of their system architecture) solve or greatly ease the solving of some of the problems introduced earlier, namely observability, reproducibility, and (partly) representativity. A test methodology is derived for the time-triggered, distributed real-time system MARS. The book describes in detail how the author has taken advantage of its architecture, and shows how the remaining problems can be solved for this particular system architecture. Some experiments conducted to evaluate this test methodology are reported, including the experience gained from them, leading to a description of a number of prototype support tools. The Testability of Distributed Real-Time Systems can be used by both academic and industrial researchers interested in distributed and/or real-time systems, or in software engineering for such systems. This book can also be used as a text in advanced courses on distributed or real-time systems.

Text Mining - Predictive Methods for Analyzing Unstructured Information (Hardcover)
Sholom M. Weiss, Nitin Indurkhya, Tong Zhang, Fred Damerau
R4,146 Discovery Miles 41 460 Ships in 18 - 22 working days

Data mining is a mature technology. The prediction problem, looking for predictive patterns in data, has been widely studied. Strong methods are available to the practitioner. These methods process structured numerical information, where uniform measurements are taken over a sample of data. Text is often described as unstructured information. So, it would seem, text and numerical data are different, requiring different methods. Or are they? In our view, a prediction problem can be solved by the same methods, whether the data are structured numerical measurements or unstructured text. Text and documents can be transformed into measured values, such as the presence or absence of words, and the same methods that have proven successful for predictive data mining can be applied to text. Yet, there are key differences. Evaluation techniques must be adapted to the chronological order of publication and to alternative measures of error. Because the data are documents, more specialized analytical methods may be preferred for text. Moreover, the methods must be modified to accommodate very high dimensions: tens of thousands of words and documents. Still, the central themes are similar.
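
The transformation described above, from documents to presence-or-absence word measurements, is mechanically simple; the minimal Python sketch below (documents and labels invented for illustration) shows how text becomes the kind of uniform numerical sample that standard predictive methods expect:

```python
# Turn documents into presence/absence word vectors, the transformation
# the excerpt describes. Documents and labels are invented for illustration.
docs = [
    ("the market rallied on strong earnings", "finance"),
    ("the team won the championship game", "sports"),
    ("earnings guidance lifted the stock", "finance"),
]

vocab = sorted({w for text, _ in docs for w in text.split()})
X = [[1 if w in text.split() else 0 for w in vocab] for text, _ in docs]
y = [label for _, label in docs]

# Each row of X is now a uniform numerical measurement vector, so any
# standard predictive-data-mining method (decision trees, nearest neighbor,
# naive Bayes, ...) can be trained on (X, y).
print(len(vocab), X[0])
```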
