Books > Computing & IT > Applications of computing > Databases

Semistructured Database Design (Hardcover, 2005 ed.)
Tok Wang Ling, Gillian Dobbie
R2,752 Discovery Miles 27 520 Ships in 18 - 22 working days

Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases.

This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others.

Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.

Information Filtering and Retrieval - DART 2014: Revised and Invited Papers (Hardcover, 1st ed. 2017)
Cristian Lai, Alessandro Giuliani, Giovanni Semeraro
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

This book focuses on new research challenges in intelligent information filtering and retrieval. It collects invited chapters and extended research contributions from DART 2014 (the 8th International Workshop on Information Filtering and Retrieval), held in Pisa (Italy), on December 10, 2014, and co-hosted with the XIII AI*IA Symposium on Artificial Intelligence. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world contexts. The chapters of this book present a comprehensive review of related works and the current state of the art. The contributions from both practitioners and researchers have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the book.

Real-Time Systems - Engineering and Applications (Hardcover, 1992 ed.)
Michael Schiebe, Saskia Pferrer
R5,391 Discovery Miles 53 910 Ships in 18 - 22 working days

Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.

Data-Driven Optimization and Knowledge Discovery for an Enterprise Information System (Hardcover, 2015 ed.)
Qing Duan, Krishnendu Chakrabarty, Jun Zeng
R2,658 Discovery Miles 26 580 Ships in 18 - 22 working days

This book provides a comprehensive set of optimization and prediction techniques for an enterprise information system. Readers with a background in operations research, system engineering, statistics, or data analytics can use this book as a reference to derive insight from data and use this knowledge as guidance for production management. The authors identify the key challenges in enterprise information management and present results that have emerged from leading-edge research in this domain. Coverage includes topics ranging from task scheduling and resource allocation, to workflow optimization, process time and status prediction, order admission policies optimization, and enterprise service-level performance analysis and prediction. With its emphasis on the above topics, this book provides an in-depth look at enterprise information management solutions that are needed for greater automation and reconfigurability-based fault tolerance, as well as to obtain data-driven recommendations for effective decision-making.

Journeys to Data Mining - Experiences from 15 Renowned Researchers (Hardcover, 2012 ed.)
Mohamed Medhat Gaber
R1,424 Discovery Miles 14 240 Ships in 18 - 22 working days

Data mining, an interdisciplinary field combining methods from artificial intelligence, machine learning, statistics and database systems, has grown tremendously over the last 20 years and produced core results for applications like business intelligence, spatio-temporal data analysis, bioinformatics, and stream data processing. The fifteen contributors to this volume are successful and well-known data mining scientists and professionals. Although by no means an exhaustive list, all of them have helped the field to gain the reputation and importance it enjoys today through the many valuable contributions they have made. Mohamed Medhat Gaber has asked them (and many others) to write down their journeys through the data mining field, trying to answer the following questions:

1. What are your motives for conducting research in the data mining field?
2. Describe the milestones of your research in this field.
3. What are your notable success stories?
4. How did you learn from your failures?
5. Have you encountered unexpected results?
6. What are the current research issues and challenges in your area?
7. Describe your research tools and techniques.
8. How would you advise a young researcher to make an impact?
9. What do you predict for the next two years in your area?
10. What are your expectations in the long term?

In order to maintain the informal character of their contributions, they were given complete freedom as to how to organize their answers. This narrative presentation style provides PhD students and novices who are eager to find their way to successful research in data mining with valuable insights into career planning. In addition, everyone else interested in the history of computer science may be surprised by the stunning successes and possible failures that computer science careers (still) have to offer.

Writing for the Computer Screen (Hardcover)
Hilary Goodall, Susan Smith Reilly
R2,028 Discovery Miles 20 280 Ships in 18 - 22 working days

As the use of computerized information continues to proliferate, so does the need for a writing method suited to this new medium. In "Writing for the Computer Screen," Hilary Goodall and Susan Smith Reilly call attention to new forms of information display unique to computers. The authors draw upon years of professional experience in business and education to present practical computer display techniques. This book examines the shortfalls of using established forms of writing for the computer, where information needed in a hurry can be buried in a cluttered screen. Such problems can be minimized if screen design is guided by the characteristics of the medium.

Trusted Recovery and Defensive Information Warfare (Hardcover, 2002 ed.)
Peng Liu, Sushil Jajodia
R2,728 Discovery Miles 27 280 Ships in 18 - 22 working days

Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery and Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, trusted database recovery is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery be done on the fly without blocking the execution of new user transactions. Trusted Recovery and Defensive Information Warfare proposes a new model and a set of innovative algorithms for database trusted recovery. Both read-write dependency based and semantics based trusted recovery algorithms are proposed, in both static and dynamic variants. These algorithms can typically save a lot of work by innocent users and can satisfy a variety of attack recovery requirements of real-world database applications. Trusted Recovery and Defensive Information Warfare is suitable as a secondary text for a graduate-level course in computer science, and as a reference for researchers and practitioners in information security.
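
The dependency-based recovery idea described above lends itself to a small illustration. The sketch below (a hypothetical example, not taken from the book) starts from a set of known malicious transactions and follows read-write dependencies transitively to find every benign transaction whose effects must also be undone, while leaving all unaffected work intact; the transaction IDs and dependency structure are invented for the example.

from collections import deque

def transactions_to_undo(malicious, depends_on):
    """Return the transactions to undo: the malicious ones plus every
    transaction that depends on them, directly or indirectly.
    depends_on maps a transaction ID to the set of IDs it read from."""
    # Invert the dependency map: who depends on whom.
    dependents = {}
    for txn, deps in depends_on.items():
        for d in deps:
            dependents.setdefault(d, set()).add(txn)
    affected = set(malicious)
    queue = deque(malicious)
    while queue:
        txn = queue.popleft()
        for child in dependents.get(txn, ()):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

# Hypothetical history: T2 read data written by T1, T3 read from T2, T4 is independent.
deps = {"T2": {"T1"}, "T3": {"T2"}, "T4": set()}
print(transactions_to_undo({"T1"}, deps))  # {'T1', 'T2', 'T3'}; T4's work is preserved

A static algorithm could compute such a closure over a finished log; the on-the-fly variants the book discusses must do the equivalent while new transactions keep arriving.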

Materials Challenges and Testing for Manufacturing, Mobility, Biomedical Applications and Climate (Hardcover, 2014 ed.)
Werasak Udomkichdecha, Thomas Boellinghaus, Anchalee Manonukul, Jurgen Lexow
R3,354 Discovery Miles 33 540 Ships in 10 - 15 working days

In two parts, the book focuses on materials science developments in the areas of:

1) Materials Data and Informatics: materials data quality and infrastructure; materials databases; materials data mining, image analysis, data-driven materials discovery, and data visualization.

2) Materials for Tomorrow's Energy Infrastructure: pipeline, transport and storage materials for future fuels (biofuels, hydrogen, natural gas, ethanol, etc.); materials for renewable energy technologies.

This book presents selected contributions of exceptional young postdoctoral scientists to the 4th WMRIF Workshop for Young Scientists, hosted by the National Institute of Standards and Technology at the NIST site in Boulder, Colorado, USA, September 8-10, 2014.

Social and Political Implications of Data Mining - Knowledge Management in E-government (Hardcover)
Hakikur Rahman
R4,951 Discovery Miles 49 510 Ships in 18 - 22 working days

In recent years, data mining has become a powerful tool for helping society, across its various layers and individual elements, obtain intelligent information for making knowledgeable decisions. In the realm of knowledge discovery, data mining is becoming one of the most popular topics in information technology. "Social and Political Implications of Data Mining: Knowledge Management in E-Government" focuses on the data mining and knowledge management implications that lie within online government. This significant reference book contains cases on improving governance systems, enhancing security techniques, upgrading social service sectors, and, foremost, empowering citizens and societies - a valuable asset to academicians, researchers, and practitioners.

Intelligent Multimedia Databases and Information Retrieval - Advancing Applications and Technologies (Hardcover, New)
Li Yan, Zongmin Ma
R4,933 Discovery Miles 49 330 Ships in 18 - 22 working days

As consumer costs for multimedia devices such as digital cameras and Web phones have decreased and diversity in the market has skyrocketed, the amount of digital information has grown considerably. Intelligent Multimedia Databases and Information Retrieval: Advancing Applications and Technologies details the latest information retrieval technologies and applications, the research surrounding the field, and the methodologies and design related to multimedia databases. With contributions from around the globe by academic researchers and developers in both the information retrieval and artificial intelligence fields, this book details the issues and semantics of data retrieval. As the information and data in multimedia databases continue to expand, the research and documentation surrounding them should keep pace as best as possible, and this book provides an excellent resource for the latest developments.

Geographic Information Metadata for Spatial Data Infrastructures - Resources, Interoperability and Information Retrieval (Hardcover, 2005 ed.)
Javier Nogueras-Iso, Francisco Javier Zarazaga-Soria, Pedro R Muro-Medrano
R4,161 Discovery Miles 41 610 Ships in 18 - 22 working days

Metadata play a fundamental role in both digital libraries (DLs) and spatial data infrastructures (SDIs). Commonly defined as "structured data about data", "data which describe attributes of a resource" or, more simply, "information about data", metadata are an essential requirement for locating and evaluating available data. Therefore, this book focuses on the study of different metadata aspects, which contribute to a more efficient use of DLs and SDIs. The three main issues addressed are: the management of nested collections of resources, the interoperability between metadata schemas, and the integration of information retrieval techniques into the discovery services of geographic data catalogs (which helps to avoid metadata content heterogeneity).

The Definitive Guide to Apache mod_rewrite (Hardcover, 1st ed.)
Rich Bowen
R1,417 Discovery Miles 14 170 Ships in 18 - 22 working days

The organization of websites is highly dynamic and often chaotic. It is therefore crucial that web servers be able to manipulate URLs in order to cope with temporarily or permanently relocated resources, prevent attacks by automated worms, and control resource access.

The Apache mod_rewrite module has long inspired fits of joy because it offers an unparalleled toolset for manipulating URLs. "The Definitive Guide to Apache mod_rewrite" guides you through configuration and use of the module for a variety of purposes, including basic and conditional rewrites, access control, virtual host maintenance, and proxies.

This book was authored by Rich Bowen, noted Apache expert and Apache Software Foundation member, and draws on his years of experience administering, and regularly speaking and writing about, the Apache server.

Nearest Neighbor Search - A Database Perspective (Hardcover, 2005 ed.)
Apostolos N. Papadopoulos, Yannis Manolopoulos
R2,755 Discovery Miles 27 550 Ships in 18 - 22 working days

Modern applications are both data and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency.

This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
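
For readers new to the topic, the following minimal sketch (not drawn from the monograph) shows the naive form of the query it studies: a brute-force nearest neighbor search over 2D points under Euclidean distance. Spatial access methods exist precisely to avoid this full scan on voluminous data sets; the sample points here are hypothetical.

import math

def nearest_neighbor(query, points):
    """Brute-force nearest neighbor: scan all points and keep the closest one."""
    best, best_dist = None, math.inf
    for p in points:
        d = math.dist(query, p)  # Euclidean distance
        if d < best_dist:
            best, best_dist = p, d
    return best, best_dist

# Hypothetical 2D data set, e.g. point locations in a GIS layer.
data = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0)]
print(nearest_neighbor((6.0, 5.0), data))  # ((5.0, 4.0), 1.41...)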

Data Streams - Models and Algorithms (Hardcover)
Charu C. Aggarwal
R4,387 Discovery Miles 43 870 Ships in 10 - 15 working days

Data Streams: Models and Algorithms primarily discusses issues related to the mining aspects of streams. Recent progress in hardware technology makes it possible for organizations to store and record large streams of transactional data. For example, even simple daily transactions, such as using the credit card or phone, result in automated data storage, which brings us to a fairly new topic called data streams. This volume covers mining aspects of data streams in a comprehensive style, in which each contributed chapter contains a survey on the topic, the key ideas in the field from that particular topic, and future research directions. Data Streams: Models and Algorithms is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for graduate-level students in computer science.

Mining Spatio-Temporal Information Systems (Hardcover, 2002 ed.)
Roy Ladner, Kevin Shaw, Mahdi Abdelguerfi
R2,746 Discovery Miles 27 460 Ships in 18 - 22 working days

Mining Spatio-Temporal Information Systems, an edited volume, is composed of chapters from leading experts in the field of spatio-temporal information systems and addresses the many issues involved in their modeling, creation, querying, visualizing and mining. Mining Spatio-Temporal Information Systems is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design and implementation, and to the use of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.
Mining Spatio-Temporal Information Systems is structured to meet the needs of practitioners and researchers in industry and graduate-level students in Computer Science.

Advanced Transaction Models and Architectures (Hardcover, 1997 ed.)
Sushil Jajodia, Larry Kerschberg
R4,226 Discovery Miles 42 260 Ships in 18 - 22 working days

Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, the database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.

Foundations of Dependable Computing - System Implementation (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,190 Discovery Miles 41 900 Ships in 18 - 22 working days

Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer), subtitled Models and Frameworks for Dependable Systems, presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in the other volumes of the series. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in this volume, those approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.

Information Security in Research and Business - Proceedings of the IFIP TC11 13th International Conference on Information Security (SEC '97): 14-16 May 1997, Copenhagen, Denmark (Hardcover, 1997 ed.)
Louise Yngstroem, Jan Carlsen
R5,403 Discovery Miles 54 030 Ships in 18 - 22 working days

Recently, IT has entered all important areas of society. Enterprises, individuals and civilisations all depend on functioning, safe and secure IT. Focus on IT security has previously been fractionalised, detailed and often linked to non-business applications. The aim of this book is to address the current and future prospects of modern IT security and its functionality in business, trade, industry, health care and government. The main topic areas covered include existing IT security tools and methodology for modern IT environments; laws, regulations and ethics in IT security environments; current and future prospects in technology, infrastructures, techniques and methodology; and IT security in retrospect.

Facilitating Technology Transfer through Partnership (Hardcover, 1997 ed.)
Tom McMaster, E. Mumford, E. B. Swanson, B Warboys, David Wastell
R4,230 Discovery Miles 42 300 Ships in 18 - 22 working days

The primary aim of this book is to gather and collate articles that represent the best and latest thinking in the domain of technology transfer, from research, academia and practice around the world. We envisage that the book will, as a result, represent an important source of knowledge in this domain for students (undergraduate and postgraduate), researchers, practitioners and consultants, chiefly in the software engineering and IT industries, but also in management and other organisational and social disciplines. An important aspect of the book is the role that reflective practitioners (and not just academics) play: they will be involved in the production and evaluation of contributions, as well as in the design and delivery of the conference events upon which, of course, the book will be based.

Text Retrieval and Filtering - Analytic Models of Performance (Hardcover, 1998 ed.)
Robert M. Losee
R5,279 Discovery Miles 52 790 Ships in 18 - 22 working days

Text Retrieval and Filtering: Analytic Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case with multiple terms, both with term independence and term dependence, and for the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. Text Retrieval and Filtering: Analytic Models of Performance focuses on the performance of systems that retrieve natural language text, considering full sentences as well as phrases and individual words. The last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Those interested in grammatical information should be cautioned to carefully examine earlier chapters, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. Text Retrieval and Filtering: Analytic Models of Performance is suitable as a secondary text for a graduate-level course on Information Retrieval or Linguistics, and as a reference for researchers and practitioners in industry.

Fundamentals of Cryptology - A Professional Reference and Interactive Tutorial (Mixed media product, 2000 ed.)
Henk C.A. van Tilborg
R1,540 Discovery Miles 15 400 Ships in 18 - 22 working days

The protection of sensitive information against unauthorized access or fraudulent changes has been of prime concern throughout the centuries. Modern communication techniques, using computers connected through networks, make all data even more vulnerable to these threats. In addition, new issues have surfaced that did not exist previously, e.g. adding a signature to an electronic document. Cryptology addresses the above issues - it is at the foundation of all information security. The techniques employed to this end have become increasingly mathematical in nature.

Fundamentals of Cryptology serves as an introduction to modern cryptographic methods. After a brief survey of classical cryptosystems, it concentrates on three main areas. First, stream ciphers and block ciphers are discussed. These systems have extremely fast implementations, but sender and receiver must share a secret key. Second, the book presents public key cryptosystems, which make it possible to protect data without a prearranged key. Their security is based on intractable mathematical problems, such as the factorization of large numbers. The remaining chapters cover a variety of topics, including zero-knowledge proofs, secret sharing schemes and authentication codes. Two appendices explain all mathematical prerequisites in detail: one presents elementary number theory (Euclid's Algorithm, the Chinese Remainder Theorem, quadratic residues, inversion formulas, and continued fractions) and the other introduces finite fields and their algebraic structure.

Fundamentals of Cryptology is an updated and improved version of An Introduction to Cryptology, originally published in 1988. Apart from a revision of the existing material, there are many new sections, and two new chapters on elliptic curves and authentication codes, respectively. In addition, the book is accompanied by a full text electronic version on CD-ROM as an interactive Mathematica manuscript. Fundamentals of Cryptology will be of interest to computer scientists, mathematicians, and researchers, students, and practitioners in the area of cryptography.
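
As a small taste of the number-theoretic prerequisites covered in the first appendix (Euclid's Algorithm and its use for computing modular inverses, an operation at the heart of public key systems), here is a minimal sketch; it is illustrative only and not drawn from the book.

def extended_gcd(a, b):
    """Extended Euclidean algorithm: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m, which exists when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % m

print(mod_inverse(3, 26))  # 9, since 3 * 9 = 27 ≡ 1 (mod 26)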

Foundations of Dependable Computing - Paradigms for Dependable Applications (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,132 Discovery Miles 41 320 Ships in 18 - 22 working days

Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Data Mining and Computational Intelligence (Hardcover, 2001 ed.)
Abraham Kandel, Mark Last, Horst Bunke
R4,208 Discovery Miles 42 080 Ships in 18 - 22 working days

Many business decisions are made in the absence of complete information about the decision consequences. Credit lines are approved without knowing the future behavior of the customers; stocks are bought and sold without knowing their future prices; parts are manufactured without knowing all the factors affecting their final quality; etc. All these cases can be categorized as decision making under uncertainty. Decision makers (human or automated) can handle uncertainty in different ways. Deferring the decision due to the lack of sufficient information may not be an option, especially in real-time systems. Sometimes expert rules, based on experience and intuition, are used. A decision tree is a popular form of representing a set of mutually exclusive rules. An example of a two-branch tree is: if a credit applicant is a student, approve; otherwise, decline. Expert rules are usually based on some hidden assumptions, which attempt to predict the decision consequences. A hidden assumption of the last rule set is: a student will be a profitable customer. Since direct predictions of the future may not be accurate, a decision maker can consider using some information from the past. The idea is to utilize the potential similarity between the patterns of the past (e.g., "most students used to be profitable") and the patterns of the future (e.g., "students will be profitable").
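
The two-branch credit rule quoted in the description can be written down directly. The sketch below is a hypothetical illustration of that rule (not code from the book), with the hidden assumption it encodes noted in a comment.

def approve_credit(applicant):
    """Two-branch decision rule from the example above:
    if the applicant is a student, approve; otherwise, decline.
    Hidden assumption encoded here: students will be profitable customers."""
    return "approve" if applicant.get("is_student") else "decline"

print(approve_credit({"name": "A. Example", "is_student": True}))   # approve
print(approve_credit({"name": "B. Example", "is_student": False}))  # decline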

Multimedia Processing, Communication and Computing Applications - Proceedings of the First International Conference, ICMCCA, 13-15 December 2012 (Hardcover, 2013 ed.)
Punitha P. Swamy, Devanur S Guru
R6,325 Discovery Miles 63 250 Ships in 18 - 22 working days

ICMCCA 2012 is the First International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in the various fields of multimedia processing, which is widely used in many disciplines such as medical diagnosis, digital forensics, object recognition, image and video analysis, robotics, military and automotive industries, surveillance and security, and quality inspection. The book will help the research community gain insight into the overlapping work being carried out across the globe at medical hospitals and institutions, defense labs, forensic labs, academic institutions, IT companies and security and surveillance domains. It also discusses the latest state-of-the-art research problems and techniques and helps to encourage, motivate and introduce budding researchers to the larger domain of multimedia.

Advanced Topics in Information Retrieval (Hardcover, 2011 ed.)
Massimo Melucci, Ricardo Baeza-Yates
R2,702 Discovery Miles 27 020 Ships in 18 - 22 working days

Information retrieval is the science concerned with the effective and efficient retrieval of documents based on their semantic content. It is employed to fulfill some information need from a large collection of digital documents. Given the ever-growing number of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications.

In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on e.g. aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index.

With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.
