

Mining Spatio-Temporal Information Systems (Hardcover, 2002 ed.)
Roy Ladner, Kevin Shaw, Mahdi Abdelguerfi
R2,746 Discovery Miles 27 460 Ships in 18 - 22 working days

Mining Spatio-Temporal Information Systems, an edited volume, is composed of chapters from leading experts in the field of spatio-temporal information systems and addresses the many issues involved in modeling, creating, querying, visualizing and mining such systems. The book brings together a coherent body of recent knowledge on STIS data modeling, design and implementation, and on the role of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.
Mining Spatio-Temporal Information Systems is structured to meet the needs of practitioners and researchers in industry and graduate-level students in computer science.

Foundations of Dependable Computing - System Implementation (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,190 Discovery Miles 41 900 Ships in 18 - 22 working days

Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion volume (published by Kluwer), subtitled Models and Frameworks for Dependable Systems, presents two comprehensive frameworks for reasoning about system dependability, establishing a context for understanding the roles played by the specific approaches presented in this book and its companions; it then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion volume, subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in this volume, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform.
Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.

Facilitating Technology Transfer through Partnership (Hardcover, 1997 ed.)
Tom McMaster, E. Mumford, E. B. Swanson, B Warboys, David Wastell
R4,230 Discovery Miles 42 300 Ships in 18 - 22 working days

The primary aim of this book is to gather and collate articles representing the best and latest thinking in the domain of technology transfer, drawn from research, academia and practice around the world. The book should therefore serve as an important source of knowledge in this domain for students (undergraduate and postgraduate), researchers, practitioners and consultants, chiefly in the software engineering and IT industries, but also in management and other organisational and social disciplines. An important aspect of the book is the role played by reflective practitioners (and not just academics): they are involved in the production and evaluation of contributions, as well as in the design and delivery of the conference events upon which the book is based.

Text Retrieval and Filtering - Analytic Models of Performance (Hardcover, 1998 ed.)
Robert M. Losee
R5,279 Discovery Miles 52 790 Ships in 18 - 22 working days

Text Retrieval and Filtering: Analytic Models of Performance is the first book to address the problem of analytically computing the performance of retrieval and filtering systems. It describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case of multiple terms (both with term independence and with term dependence), and the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. The book considers full sentences as well as phrases and individual words, and its last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists; readers interested in grammatical information should carefully examine the earlier chapters, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. The book is suitable as a secondary text for a graduate-level course on information retrieval or linguistics, and as a reference for researchers and practitioners in industry.
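One classical way to make such analytic, term-independence statements is the binary independence model of probabilistic retrieval. The sketch below is a standard textbook formulation, not necessarily the book's exact model, and the probability estimates are invented for illustration: under term independence, each query term found in a document contributes an additive log-odds weight to the relevance score.

```python
import math

def bim_score(doc_terms, query_terms, p, q):
    """Relevance score: sum of log-odds weights for matching query terms.
    p[t] = P(t present | relevant), q[t] = P(t present | non-relevant);
    independence between terms is assumed."""
    score = 0.0
    for t in query_terms:
        if t in doc_terms:
            score += math.log((p[t] * (1 - q[t])) / (q[t] * (1 - p[t])))
    return score

p = {"mining": 0.8, "text": 0.6}   # hypothetical estimates from relevant docs
q = {"mining": 0.2, "text": 0.3}   # hypothetical estimates from non-relevant docs
one_match = bim_score({"mining", "data"}, ["mining", "text"], p, q)
two_match = bim_score({"mining", "text"}, ["mining", "text"], p, q)
```

Because each term with p[t] > q[t] carries a positive weight, a document matching more such query terms always scores higher, which is exactly the kind of unambiguous comparative statement an analytic model supports.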

Foundations of Dependable Computing - Models and Frameworks for Dependable Systems (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,159 Discovery Miles 41 590 Ships in 18 - 22 working days

Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion volume (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion volume subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The other companion volume, subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations.
The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Compression and Coding Algorithms (Hardcover, 2002 ed.)
Alistair Moffat, Andrew Turpin
R1,561 Discovery Miles 15 610 Ships in 18 - 22 working days

Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one mechanism, but many others have been developed over the past few decades, and this book describes, explains and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on a computer, make this book a definitive reference in an area currently without one.
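The Huffman technique the blurb mentions can be sketched in a few lines of Python. This is a generic illustration, not code from the book: repeatedly merge the two lowest-weight subtrees, prefixing '0' and '1' to the codes on each side.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a prefix-free {symbol: bitstring} code built from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {s: "0" for s in freq}
    # Heap entries: (weight, tiebreak, {symbol: partial code}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        tiebreak += 1
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
    return heap[0][2]

codes = huffman_codes("abracadabra")   # 'a' (5 occurrences) gets the shortest code
```

More frequent symbols receive shorter codewords, and no codeword is a prefix of another, so an encoded bitstream can be decoded unambiguously.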

Foundations of Dependable Computing - Paradigms for Dependable Applications (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,132 Discovery Miles 41 320 Ships in 18 - 22 working days

Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion volume subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. The other companion volume (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations.
The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Intelligent Multimedia Databases and Information Retrieval - Advancing Applications and Technologies (Hardcover, New)
Li Yan, Zongmin Ma
R4,933 Discovery Miles 49 330 Ships in 18 - 22 working days

As consumer costs for multimedia devices such as digital cameras and Web phones have decreased and diversity in the market has skyrocketed, the amount of digital information has grown considerably. Intelligent Multimedia Databases and Information Retrieval: Advancing Applications and Technologies details the latest information retrieval technologies and applications, the research surrounding the field, and the methodologies and design related to multimedia databases. With contributions from academic researchers and developers around the globe, in both the information retrieval and artificial intelligence fields, this book details the issues and semantics of data retrieval. As the information and data in multimedia databases continue to expand, the research and documentation surrounding them should keep pace as best as possible, and this book provides an excellent resource on the latest developments.

The Testability of Distributed Real-Time Systems (Hardcover, 1993 ed.)
Werner Schutz
R2,735 Discovery Miles 27 350 Ships in 18 - 22 working days

The Testability of Distributed Real-Time Systems starts by collecting and analyzing the principal problems, as well as their interrelations, that one has to keep in mind when testing a distributed real-time system. The book discusses them in some detail from the viewpoints of software engineering, distributed systems principles, and real-time system development. These problems are organization, observability, reproducibility, the host/target approach, environment simulation, and (test) representativity. Based on this framework, the book summarizes and evaluates the current work done in this area before going on to argue that the particular system architecture (hardware plus operating system) has a much greater influence on testing than is the case for 'ordinary', non-real-time software. The notions of event-triggered and time-triggered system architectures are introduced, and it is shown that time-triggered systems 'automatically' (i.e., by the nature of their system architecture) solve, or greatly ease solving, some of the problems introduced earlier, namely observability, reproducibility, and (partly) representativity. A test methodology is derived for the time-triggered, distributed real-time system MARS. The book describes in detail how the author has taken advantage of its architecture, and shows how the remaining problems can be solved for this particular system architecture. Some experiments conducted to evaluate this test methodology are reported, including the experience gained from them, leading to a description of a number of prototype support tools. The Testability of Distributed Real-Time Systems can be used by both academic and industrial researchers interested in distributed and/or real-time systems, or in software engineering for such systems. This book can also be used as a text in advanced courses on distributed or real-time systems.

Text Mining - Predictive Methods for Analyzing Unstructured Information (Hardcover)
Sholom M. Weiss, Nitin Indurkhya, Tong Zhang, Fred Damerau
R4,146 Discovery Miles 41 460 Ships in 18 - 22 working days

Data mining is a mature technology. The prediction problem, looking for predictive patterns in data, has been widely studied. Strong methods are available to the practitioner. These methods process structured numerical information, where uniform measurements are taken over a sample of data. Text is often described as unstructured information. So, it would seem, text and numerical data are different, requiring different methods. Or are they? In our view, a prediction problem can be solved by the same methods, whether the data are structured numerical measurements or unstructured text. Text and documents can be transformed into measured values, such as the presence or absence of words, and the same methods that have proven successful for predictive data mining can be applied to text. Yet, there are key differences. Evaluation techniques must be adapted to the chronological order of publication and to alternative measures of error. Because the data are documents, more specialized analytical methods may be preferred for text. Moreover, the methods must be modified to accommodate very high dimensions: tens of thousands of words and documents. Still, the central themes are similar.
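The transformation the blurb describes, turning text into measured values such as the presence or absence of words, can be sketched directly. The corpus below is an invented example: each document becomes a row of binary features, after which any standard predictive method applies.

```python
docs = ["the market rallied today",
        "the market fell sharply",
        "rain expected today"]

# Build the vocabulary from the corpus itself, one feature per distinct word.
vocab = sorted({w for d in docs for w in d.split()})

def to_binary_vector(doc, vocab):
    """Presence/absence encoding: 1 if the word occurs in the document, else 0."""
    words = set(doc.split())
    return [1 if w in words else 0 for w in vocab]

matrix = [to_binary_vector(d, vocab) for d in docs]   # one row per document
```

The resulting matrix has one column per vocabulary word, which is where the "tens of thousands of dimensions" the authors mention comes from on realistic corpora.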

Data Mining and Computational Intelligence (Hardcover, 2001 ed.)
Abraham Kandel, Mark Last, Horst Bunke
R4,208 Discovery Miles 42 080 Ships in 18 - 22 working days

Many business decisions are made in the absence of complete information about the decision consequences. Credit lines are approved without knowing the future behavior of the customers; stocks are bought and sold without knowing their future prices; parts are manufactured without knowing all the factors affecting their final quality; etc. All these cases can be categorized as decision making under uncertainty. Decision makers (human or automated) can handle uncertainty in different ways. Deferring the decision due to the lack of sufficient information may not be an option, especially in real-time systems. Sometimes expert rules, based on experience and intuition, are used. A decision tree is a popular form of representing a set of mutually exclusive rules. An example of a two-branch tree is: if a credit applicant is a student, approve; otherwise, decline. Expert rules are usually based on some hidden assumptions, which try to predict the decision consequences. A hidden assumption of the last rule set is: a student will be a profitable customer. Since direct predictions of the future may not be accurate, a decision maker can consider using information from the past. The idea is to utilize the potential similarity between the patterns of the past (e.g., "most students used to be profitable") and the patterns of the future (e.g., "students will be profitable").
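The blurb's two-branch tree, together with its "use the past" idea, can be sketched as follows. The historical records below are hypothetical; the point is that the expert rule's hidden assumption ("students are profitable") can be checked against past data.

```python
past = [           # hypothetical records: (is_student, was_profitable)
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

def approve(is_student):
    """The expert rule: if the applicant is a student, approve; otherwise, decline."""
    return is_student

# Support for the hidden assumption: fraction of past students who were profitable.
student_outcomes = [profitable for student, profitable in past if student]
support = sum(student_outcomes) / len(student_outcomes)   # 2/3 on this data
```

A data mining approach would go one step further and learn the rule itself from such records, rather than taking the expert's assumption on faith.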

Organizational Data Mining - Leveraging Enterprise Data Resources for Optimal Performance (Hardcover, New)
R2,153 Discovery Miles 21 530 Ships in 18 - 22 working days

Successfully competing in the new global economy requires immediate decision capability. This immediate decision capability requires quick analysis of both timely and relevant data. To support this analysis, organizations are piling up mountains of business data in their databases every day. Terabyte-sized (1,000 gigabytes) databases are commonplace in organizations today, and this enormous growth will make petabyte-sized databases (1,000 terabytes) a reality within the next few years (Whiting, 2002). Those organizations making swift, fact-based decisions by optimally leveraging their data resources will outperform those organizations that do not. A technology that facilitates this process of optimal decision-making is known as Organizational Data Mining (ODM). Organizational Data Mining: Leveraging Enterprise Data Resources for Optimal Performance demonstrates how organizations can leverage ODM for enhanced competitiveness and optimal performance.

Oracle 10g Developing Media Rich Applications (Paperback, New)
Lynne Dunckley, Larry Guros
R1,994 Discovery Miles 19 940 Ships in 10 - 15 working days

Oracle 10g Developing Media Rich Applications is aimed squarely at the database administrators and programmers who build the foundation of multimedia database applications. With the release of Oracle8 Database in 1997, Oracle became the first commercial database with integrated multimedia technology for application developers. Since that time, Oracle has enhanced and extended these features to include native support for image, audio, video and streaming media storage; indexing, retrieval and processing in the Oracle Database and Application Server; and development tools. Databases are not only words and numbers for accountants; they should also support a full range of media to satisfy customer needs, from race-car engineering to manufacturing processes to security.
The full range of audio, video and media integration into databases is mission-critical to these applications. This book details the most recent features in Oracle's multimedia technology, including those of the Oracle 10gR2 Database and the Oracle9i Application Server. The technology covered includes object-relational media storage and services within the database, middle-tier application development interfaces, wireless delivery mechanisms, and Java-based tools.
* Gives broad coverage to the integration of multimedia features such as audio and instrumentation video (from race cars, to analyze performance), voice and picture recognition for security databases, and full multimedia for presentations
* Includes field-tested examples from enterprise environments
* Provides thorough and clear coverage, developed in a London University professional course

Advanced Topics in Information Retrieval (Hardcover, 2011 ed.)
Massimo Melucci, Ricardo Baeza-Yates
R2,702 Discovery Miles 27 020 Ships in 18 - 22 working days

Information retrieval is the science concerned with the effective and efficient retrieval of documents starting from their semantic content. It is employed to fulfill some information need from a large number of digital documents. Given the ever-growing amount of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications.

In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on e.g. aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index.

With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.

Advances in Digital Government - Technology, Human Factors, and Policy (Hardcover, 2002 ed.)
William J. McIver Jr., Ahmed K. Elmagarmid
R5,329 Discovery Miles 53 290 Ships in 18 - 22 working days

Advances in Digital Government presents a collection of in-depth articles that addresses a representative cross-section of the matrix of issues involved in implementing digital government systems. These articles constitute a survey of both the technical and policy dimensions related to the design, planning and deployment of digital government systems. The research and development projects within the technical dimension represent a wide range of governmental functions, including the provisioning of health and human services, management of energy information, multi-agency integration, and criminal justice applications. The technical issues dealt with in these projects include database and ontology integration, distributed architectures, scalability, and security and privacy. The human factors research emphasizes compliance with access standards for the disabled, and the policy articles contain both conceptual models for developing digital government systems and real management experiences and results in deploying them. Advances in Digital Government presents digital government issues from the perspectives of different communities and societies. This geographic and social diversity illuminates a unique array of policy and social perspectives, exposing practitioners to new and useful ways of thinking about digital government.

MARS Applications in Geotechnical Engineering Systems - Multi-Dimension with Big Data (Hardcover, 1st ed. 2020)
Wengang Zhang
R2,673 Discovery Miles 26 730 Ships in 18 - 22 working days

This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
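The MARS idea the blurb describes can be sketched in a few lines: the model is a weighted sum of hinge ("spline") basis functions max(0, x - t) and max(0, t - x), which lets a simple linear fit bend at data-driven knots. The knot and coefficients below are illustrative, not fitted; real MARS selects them by forward and backward passes over the data.

```python
def hinge(x, knot, direction=+1):
    """Hinge basis: max(0, x - knot) for direction=+1, max(0, knot - x) otherwise."""
    return max(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a MARS-style model; terms = [(coefficient, knot, direction), ...]."""
    return intercept + sum(c * hinge(x, k, d) for c, k, d in terms)

# A piecewise-linear response: flat below the knot at 2.0, rising above it.
model = [(1.5, 2.0, +1)]
y = mars_predict(3.0, 0.5, model)   # 0.5 + 1.5 * max(0, 3 - 2) = 2.0
```

Because each basis function is zero on one side of its knot, the fitted surface stays easy to read off term by term, which is the "express that relationship mathematically" appeal the blurb points to.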

Web Service Composition (Hardcover, 1st ed. 2016)
Charles J. Petrie
R1,408 Discovery Miles 14 080 Ships in 18 - 22 working days

This book carefully defines the technologies involved in web service composition, provides a formal basis for the various composition approaches, and shows the trade-offs among them. Treating web services as a deep formal topic yields some surprising results, such as the possibility of eliminating workflows. The book examines the immense potential of web service composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies, with topics ranging from model checking and Golog to WSDL and AI planning. This book is based upon lectures given to economics students and is suitable for business technologists with some computer science background; the reader can delve as deeply into the technologies as desired.

Mining the Sky - Proceedings of the MPA/ESO/MPE Workshop Held at Garching, Germany, July 31 - August 4, 2000 (Hardcover, 2001 ed.)
A.J. Banday, S. Zaroubi, M. Bartelmann
R4,402 Discovery Miles 44 020 Ships in 18 - 22 working days

The book reviews methods for the numerical and statistical analysis of astronomical datasets with particular emphasis on the very large databases that arise from both existing and forthcoming projects, as well as current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining. Case studies demonstrate the interplay between these techniques and interesting astronomical problems. The book demonstrates specific new methods for storing, accessing, reducing, analysing, describing and visualising astronomical data which are necessary to fully exploit its potential.

Fully Integrated Data Environments - Persistent Programming Languages, Object Stores, and Programming Environments (Hardcover, 2000 ed.)
Malcolm P. Atkinson, Ray Welland
R2,774 Discovery Miles 27 740 Ships in 18 - 22 working days

Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.

Sustainable Interdependent Networks II - From Smart Power Grids to Intelligent Transportation Networks (Hardcover, 1st ed. 2019)
M. Hadi Amini, Kianoosh G. Boroojeni, S.S. Iyengar, Panos M. Pardalos, Frede Blaabjerg, …
R2,690 Discovery Miles 26 900 Ships in 18 - 22 working days

This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructure. It gives readers a comprehensive, in-depth picture of smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. This second volume of Sustainable Interdependent Networks focuses on the interdependencies of these two networks, on optimization methods for dealing with their computational complexity, and on their role in future smart cities. It further investigates other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need for novel means of dealing with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a large variety of computational-complexity and optimization challenges. Although independent optimization of these networks leads to locally optimal operation points, there is an exigent need to move towards obtaining the globally optimal operation point of such networks while properly satisfying the constraints of each network.
The book is suitable for senior undergraduate students, graduate students interested in multidisciplinary research related to future sustainable networks, and researchers working in the related areas. It also covers applications of interdependent networks, which makes it a useful source for an audience outside academia seeking a general insight into interdependent networks.

Multi-dimensional Optical Storage (Hardcover, 1st ed. 2016): Duanyi Xu Multi-dimensional Optical Storage (Hardcover, 1st ed. 2016)
Duanyi Xu
R2,848 Discovery Miles 28 480 Ships in 18 - 22 working days

This book presents principles and applications for expanding storage space from 2-D to 3-D and even multi-D, including gray scale, color (light of different wavelengths), polarization, and coherence of light. These advances improve the density, capacity, and data transfer rate of optical data storage. Moreover, the implementation technologies used to build mass data storage devices are described systematically. Some new media, which exhibit linear absorption characteristics for light of different wavelengths and intensities with high sensitivity, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, and graduate and undergraduate students in material science, information science, and optics.

Fuzzy Database Modeling (Hardcover, 1999 ed.): Adnan Yazici, Roy George Fuzzy Database Modeling (Hardcover, 1999 ed.)
Adnan Yazici, Roy George
R2,782 Discovery Miles 27 820 Ships in 18 - 22 working days

This book introduces some recent advances in fuzzy database modeling for non-traditional applications. The focus is on database models for representing complex information and uncertainty at the conceptual, logical, and physical design levels, as well as on integrity constraints defined on fuzzy relations.
The database models addressed here are: conceptual data models, including the ExIFO and ExIFO2 data models; and logical database models, including the extended NF2 database model, the fuzzy object-oriented database model, and the fuzzy deductive object-oriented database model. Integrity constraints defined on fuzzy relations are also addressed. A continuing reason for the limited adoption of fuzzy database systems has been performance, and there have been few efforts to define physical structures that accommodate fuzzy information. A new access structure and data organization for fuzzy information are introduced in this book.

Imprecise and Approximate Computation (Hardcover, 1995 ed.): Swaminathan Natarajan Imprecise and Approximate Computation (Hardcover, 1995 ed.)
Swaminathan Natarajan
R2,754 Discovery Miles 27 540 Ships in 18 - 22 working days

Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can lead to situations where computations cannot be carried through to completion; for those building such systems, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.

Next Generation Data Technologies for Collective Computational Intelligence (Hardcover, 2011 Ed.): Nik Bessis, Fatos Xhafa Next Generation Data Technologies for Collective Computational Intelligence (Hardcover, 2011 Ed.)
Nik Bessis, Fatos Xhafa
R5,498 Discovery Miles 54 980 Ships in 18 - 22 working days

This book focuses on next-generation data technologies in support of collective and computational intelligence. It brings together various next-generation data technologies to capture, integrate, analyze, mine, annotate, and visualize distributed data, made available from various community users, in a manner that is meaningful and collaborative for the organization. A unique perspective on collective computational intelligence is offered by embracing both theoretical and strategic fundamentals such as data clustering, graph partitioning, collaborative decision making, self-adaptive ant colonies, and swarm and evolutionary agents. The book also covers emerging and next-generation technologies in support of collective computational intelligence, such as Web 2.0 social networks, the semantic web for data annotation, knowledge representation and inference, data privacy and security, and enabling distributed and collaborative paradigms such as P2P, Grid, and Cloud computing, which suit the geographically dispersed and distributed nature of the data. The book aims to cover comprehensively the combined effort of utilizing and integrating various next-generation collaborative and distributed data technologies for computational intelligence in various scenarios. It also distinguishes itself by assessing whether the utilization and integration of next-generation data technologies can assist in identifying new opportunities that may also be strategically fit for purpose.

Trust Management IV - 4th IFIP WG 11.11 International Conference, IFIPTM 2010, Morioka, Japan, June 16-18, 2010, Proceedings... Trust Management IV - 4th IFIP WG 11.11 International Conference, IFIPTM 2010, Morioka, Japan, June 16-18, 2010, Proceedings (Hardcover, Edition.)
Masakatsu Nishigaki, Audun Josang, Yuko Murayama, Stephen Marsh
R1,435 Discovery Miles 14 350 Ships in 18 - 22 working days

This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, and Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy, and security, namely Toshio Yamagishi and Pamela Briggs.
