
TRON Project 1988 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,567 Discovery Miles 15 670 Ships in 10 - 15 working days

It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding its operations to globalize its activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is a truly remarkable historical event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind; it has a very wide scope and great diversity. Because it includes the open-architecture concept, and because the TRON machine should be able to work with various foreign languages, TRON is targeted for international use. In order for us to create a complete TRON world, although there are several TRON products already on the market, continuous and active participation from all members, together with concentration on further development, is indispensable. We, the TRON promoters, are much encouraged by such a driving force.

Methodologies for Control of Jump Time-Delay Systems (Paperback, Softcover reprint of the original 1st ed. 2003)
Magdi S. Mahmoud, Peng Shi
R4,049 Discovery Miles 40 490 Ships in 10 - 15 working days

This book is about time-domain modelling, stability, stabilization, control design and filtering for jump time-delay systems (JTDS). It gives readers a thorough understanding of the basic mathematical analysis and fundamentals, offers a straightforward treatment of the different topics and provides broad coverage of the recent methodologies.

Intelligent CAD Systems I - Theoretical and Methodological Aspects (Paperback, Softcover reprint of the original 1st ed. 1987)
Paul J.W. Ten Hagen, Tetsuo Tomiyama
R1,559 Discovery Miles 15 590 Ships in 10 - 15 working days

CAD (Computer Aided Design) technology is now crucial for every division of modern industry, from the viewpoint of higher productivity and better products. As technologies advance, the amount of information and knowledge that engineers have to deal with is constantly increasing. This drives the search for more advanced computer technology to achieve higher functionality, flexibility, and more efficient performance in CAD systems. Knowledge engineering, or more broadly artificial intelligence, is considered a primary candidate technology for building a new generation of CAD systems. Since design is a very intellectual human activity, this approach seems to make sense. The ideas of intelligent CAD systems (ICAD) are now increasingly discussed everywhere, and many conferences and workshops report a number of research efforts on this particular subject. Researchers come from computer science, artificial intelligence, mechanical engineering, electronic engineering, civil engineering, architectural science, control engineering, etc. Still, the direction of this concept remains unclear; at the very least, there is no widely accepted concept of ICAD. What can designers expect from these future-generation CAD systems? In which direction must developers proceed? The situation is somewhat confusing.

VLSI Technology - Fundamentals and Applications (Paperback, Softcover reprint of the original 1st ed. 1986)
Yasuo Tarui
R2,922 Discovery Miles 29 220 Ships in 10 - 15 working days

The origin of the development of integrated circuits up to VLSI is found in the invention of the transistor, which made it possible to achieve the action of a vacuum tube in a semiconducting solid. The structure of the transistor can be constructed by a manufacturing technique such as the introduction of a small amount of an impurity into a semiconductor and, in addition, most transistor characteristics can be improved by a reduction of dimensions. These are all important factors in the development. Actually, the microfabrication of the integrated circuit can be used for two purposes, namely to increase the integration density and to obtain an improved performance, e.g. a high speed. When one of these two aims is pursued, the result generally satisfies both. We use the English translation "very large scale integration (VLSI)" for "Cho LSI" in Japanese. In the United States of America, however, similar technology is being developed under the name "very high speed integrated circuits (VHSI)". This also originated from the nature of the integrated circuit, which satisfies both purposes. Fortunately, the Japanese word "Cho LSI" has a wider meaning than VLSI, so it can be used in a broader area. However, VLSI has a larger industrial effect than VHSI.

Predictably Dependable Computing Systems (Paperback, Softcover reprint of the original 1st ed. 1995)
Brian Randell, Jean-Claude Laprie, Hermann Kopetz, Bev Littlewood
R2,964 Discovery Miles 29 640 Ships in 10 - 15 working days

The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems and especially those where the dependability requirements centre on issues of safety and/or security.

Verification and Validation of Real-Time Software (Paperback, Softcover reprint of the original 1st ed. 1985)
William J. Quirk
R1,522 Discovery Miles 15 220 Ships in 10 - 15 working days

W.J. Quirk

1.1 Real-time software and the real world

Real-time software and the real world are inseparably related. Real time cannot be turned back, and the real world will not always forget its history. The consequences of previous influences may last for a long time, and the undesired effects may range from the inconvenient to the disastrous in both economic and human terms. As a result, there is much pressure to develop and apply techniques to improve the reliability of real-time software so that the frequency and consequences of failure are reduced to a level that is as low as reasonably achievable. This report is about such techniques. After a detailed description of the software life cycle, a chapter is devoted to each of the four principal categories of technique available at present. These cover all stages of the software development process, and each chapter identifies relevant techniques, the stages to which they are applicable, and their effectiveness in improving real-time software reliability.

1.2 The characteristics of real-time software

As well as the enhanced reliability requirement discussed above, real-time software has a number of other distinguishing characteristics. First, the sequencing and timing of inputs are determined by the real world and not by the programmer. Thus the program needs to be prepared for the unexpected, and the demands made on the system may be conflicting. Second, the demands on the system may occur in parallel rather than in sequence.

Evaluating AAL Systems Through Competitive Benchmarking - Indoor Localization and Tracking - International Competition, EvAAL 2011, Competition in Valencia, Spain, July 25-29, 2011, and Final Workshop in Lecce, Italy, September 26, 2011. Revised Selected Papers (Paperback, 2012 ed.)
Stefano Chessa, Stefan Knauth
R1,471 Discovery Miles 14 710 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized in two major events: the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the lessons learned by the teams during the competition.

Artificial Intelligence & Expert Systems Sourcebook (Paperback, Softcover reprint of the original 1st ed. 1986)
V. Daniel Hunt
R1,532 Discovery Miles 15 320 Ships in 10 - 15 working days

Artificial intelligence and expert systems research, development, and demonstration have rapidly expanded over the past several years; as a result, new terminology is appearing at a phenomenal rate. This sourcebook provides an introduction to artificial intelligence and expert systems, offers brief definitions, includes brief descriptions of software products and vendors, and notes leaders in the field. Extensive support material is provided by delineating points of contact for receiving additional information, acronyms, a detailed bibliography, and other reference data. The terminology covers artificial intelligence and expert system elements for: * Artificial Intelligence * Expert Systems * Natural Language Processing * Smart Robots * Machine Vision * Speech Synthesis. The Artificial Intelligence and Expert Systems Sourcebook is compiled from information acquired from numerous books, journals, and authorities in the field of artificial intelligence and expert systems. I hope this compilation of information will help clarify the terminology for artificial intelligence and expert systems activities. Your comments, revisions, or questions are welcome. V. Daniel Hunt, Springfield, Virginia, May 1986. Acknowledgments: The information in the Artificial Intelligence and Expert Systems Sourcebook has been compiled from a wide variety of authorities who are specialists in their respective fields. The following publications were used as the basic technical resources for this book, and portions of these publications may have been used in it. Those definitions or artwork used have been reproduced with the permission of the respective publisher.

Transactions on Large-Scale Data- and Knowledge-Centered Systems IV - Special Issue on Database Systems for Biomedical Applications (Paperback, 2011)
Abdelkader Hameurlain, Josef Küng, Roland Wagner; Edited by Christian Boehm, Johann Eder, …
R1,500 Discovery Miles 15 000 Ships in 10 - 15 working days

The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between Grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This special issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems highlights some of the major challenges emerging from the biomedical applications that are currently inspiring and promoting database research. These include the management, organization, and integration of massive amounts of heterogeneous data; the semantic gap between high-level research questions and low-level data; and privacy and efficiency. The contributions cover a large variety of biological and medical applications, including genome-wide association studies, epidemic research, and neuroscience.

Software Performability: From Concepts to Applications (Paperback, Softcover reprint of the original 1st ed. 1996)
Ann T. Tai, John F. Meyer, Algirdas Avizienis
R4,326 Discovery Miles 43 260 Ships in 10 - 15 working days

Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the areas of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.
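The combined performance-dependability criteria the blurb mentions can be illustrated with a minimal sketch: weight the reward rate (performance) of each system state by the steady-state probability of occupying that state. The state names, probabilities, and reward values below are purely illustrative assumptions, not taken from the book.

```python
def performability(states):
    """Expected reward rate: sum of probability * reward over system states.

    states: list of (probability, reward_rate) pairs covering all states.
    """
    total_prob = sum(p for p, _ in states)
    assert abs(total_prob - 1.0) < 1e-9, "state probabilities must sum to 1"
    return sum(p * r for p, r in states)

# Hypothetical fault-tolerant server: nominal, degraded (one replica
# failed), and outage states with made-up throughput rewards.
states = [
    (0.95, 100.0),  # fully operational
    (0.04, 60.0),   # degraded service
    (0.01, 0.0),    # down
]
print(performability(states))  # 0.95*100 + 0.04*60 + 0 = 97.4
```

An objective function of this form lets a designer trade raw performance against dependability in a single number, which is the kind of combined criterion the book formalizes.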

Recent Advances in Interval Type-2 Fuzzy Systems (Paperback, 2012 ed.)
Oscar Castillo, Patricia Melin
R1,465 Discovery Miles 14 650 Ships in 10 - 15 working days

This book reviews current state-of-the-art methods for building intelligent systems using type-2 fuzzy logic and bio-inspired optimization techniques. By combining type-2 fuzzy logic with optimization algorithms, powerful hybrid intelligent systems have been built that use the advantages each technique offers. This book is intended as a reference for scientists and engineers interested in applying type-2 fuzzy logic to solve problems in pattern recognition, intelligent control, intelligent manufacturing, robotics and automation. It can also be used as a reference for graduate courses such as soft computing, intelligent pattern recognition, computer vision, applied artificial intelligence, and similar ones. We consider that this book can also be used to find novel ideas for new lines of research, or to continue the lines of research proposed by the authors.

Soft Computing and Fractal Theory for Intelligent Manufacturing (Paperback, Softcover reprint of the original 1st ed. 2003)
Oscar Castillo, Patricia Melin
R4,350 Discovery Miles 43 500 Ships in 10 - 15 working days

We describe in this book new methods for intelligent manufacturing using soft computing techniques and fractal theory. Soft Computing (SC) consists of several computing paradigms, including fuzzy logic, neural networks, and genetic algorithms, which can be used to produce powerful hybrid intelligent systems. Fractal theory provides the mathematical tools to understand the geometrical complexity of natural objects and can be used for identification and modeling purposes. Combining SC techniques with fractal theory, we can take advantage of the "intelligence" provided by the computer methods and also of the descriptive power of the fractal mathematical tools. Industrial manufacturing systems can be considered non-linear dynamical systems, and as a consequence can have highly complex dynamic behaviors. For this reason, the need for computational intelligence in these manufacturing systems has now been well recognized. We consider in this book the concept of "intelligent manufacturing" as the application of soft computing techniques and fractal theory for achieving the goals of manufacturing, which are production planning and control, monitoring and diagnosis of faults, and automated quality control. As a prelude, we provide a brief overview of the existing methodologies in Soft Computing. We then describe our own approach to the problems of achieving intelligent manufacturing. Our particular point of view is that to really achieve intelligent manufacturing in real-world applications we need to use SC techniques and fractal theory.

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,834 Discovery Miles 28 340 Ships in 10 - 15 working days

Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
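The idea of preferring, among legal schedules, one that meets the temporal constraints can be sketched with a deliberately simple model (this is an illustration, not the authors' formalism): take serial execution as trivially legal, and order transactions by earliest deadline first. The transaction names, durations, and deadlines are invented for the example.

```python
def edf_schedule(transactions):
    """Order transactions earliest-deadline-first and report deadline misses.

    transactions: list of (name, duration, deadline) tuples.
    Returns (serial execution order, names of transactions that miss).
    """
    order = sorted(transactions, key=lambda t: t[2])  # sort by deadline
    clock, missed = 0, []
    for name, duration, deadline in order:
        clock += duration                # serial execution: no overlap
        if clock > deadline:
            missed.append(name)
    return [t[0] for t in order], missed

order, missed = edf_schedule([("T1", 4, 10), ("T2", 2, 3), ("T3", 3, 9)])
print(order, missed)  # ['T2', 'T3', 'T1'] []
```

Both serial orders here are equally legal; only the deadline-aware one is guaranteed to satisfy the temporal constraints, which is precisely the distinction the blurb says conventional schedulers ignore.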

Architecture-Independent Loop Parallelisation (Paperback, Softcover reprint of the original 1st ed. 2000)
Radu C. Calinescu
R2,844 Discovery Miles 28 440 Ships in 10 - 15 working days

Architecture-independent programming and automatic parallelisation have long been regarded as two different means of alleviating the prohibitive costs of parallel software development. Building on recent advances in both areas, Architecture-Independent Loop Parallelisation proposes a unified approach to the parallelisation of scientific computing code. This novel approach is based on the bulk-synchronous parallel model of computation, and succeeds in automatically generating parallel code that is architecture-independent, scalable, and of analytically predictable performance.
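The "analytically predictable performance" of the bulk-synchronous parallel (BSP) model comes from its per-superstep cost formula, w + h·g + l: maximum local work w, maximum messages h times the per-message gap g, plus the barrier latency l. The sketch below uses made-up parameter values purely for illustration.

```python
def bsp_cost(supersteps, g, l):
    """Predicted BSP cost of a program.

    supersteps: list of (w, h) pairs, where w is the maximum local work
    and h the maximum messages sent or received by any processor in that
    superstep; g is the per-message gap and l the barrier latency.
    """
    return sum(w + h * g + l for w, h in supersteps)

# Two supersteps on a hypothetical machine with g=4 and l=100
# (arbitrary time units): (1000 + 50*4 + 100) + (600 + 20*4 + 100).
print(bsp_cost([(1000, 50), (600, 20)], g=4, l=100))  # 2080
```

Because g and l are machine parameters measured once per target, the same cost expression predicts performance on any BSP computer, which is what makes the generated parallel code architecture-independent in the sense described above.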

Real World Speech Processing (Paperback, Softcover reprint of the original 1st ed. 2004)
Jhing-Fa Wang, Sadaoki Furui, Biing-Hwang Juang
R2,581 Discovery Miles 25 810 Ships in 10 - 15 working days

Real World Speech Processing brings together in one place important contributions and up-to-date research results in this fast-moving area. The contributors to this work were selected from the leading researchers and practitioners in this field.
The work, originally published as Volume 36, Numbers 2-3 of the Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, will be valuable to anyone working or researching in the field of speech processing. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

Formal Methods for Industrial Critical Systems - 16th International Workshop, FMICS 2011, Trento, Italy, August 29-30, 2011, Proceedings (Paperback, 2011)
Gwen Salaun, Bernhard Schatz
R1,514 Discovery Miles 15 140 Ships in 10 - 15 working days

This book constitutes the proceedings of the 16th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2011, held in Trento, Italy, in August 2011. The 16 papers presented together with 2 invited talks were carefully reviewed and selected from 39 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications.

Computer Safety, Reliability, and Security - 30th International Conference, SAFECOMP 2011, Naples, Italy, September 19-22, 2011, Proceedings (Paperback, 2011 ed.)
Francesco Flammini, Sandro Bologna, Valeria Vittorini
R1,578 Discovery Miles 15 780 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 30th International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2011, held in Naples, Italy, in September 2011. The 34 full papers presented were carefully reviewed and selected from 100 submissions. The papers are organized in topical sections on RAM evaluation, complex systems dependability, formal verification, risk and hazard analysis, cybersecurity, and optimization methods.

Middleware'98 - IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing (Paperback, Softcover reprint of the original 1st ed. 1998)
Nigel Davies, Kerry Raymond, Jochen Seitz
R2,929 Discovery Miles 29 290 Ships in 10 - 15 working days

Welcome to Middleware'98 and to one of England's most beautiful regions. In recent years the distributed systems community has witnessed a growth in the number of conferences, leading to difficulties in tracking the literature and a consequent loss of awareness of work done by others in this important field. The aim of Middleware'98 is to synthesise many of the smaller workshops and conferences in this area, bringing together research communities which were becoming fragmented. The conference has been designed to maximise the experience for attendees. This is reflected in the choice of a resort venue (rather than a big city) to ensure a strong focus on interaction with other distributed systems researchers. The programme format incorporates a question-and-answer panel in each session, enabling significant issues to be discussed in the context of related papers and presentations. The invited speakers and tutorials are intended to not only inform the attendees, but also to stimulate discussion and debate.

Information Security - 14th International Conference, ISC 2011, Xi'an, China, October 26-29, 2011, Proceedings (Paperback, 2011 ed.)
Xuejia Lai, Jianying Zhou, Hui Li
R1,552 Discovery Miles 15 520 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 14th International Conference on Information Security, ISC 2011, held in Xi'an, China, in October 2011. The 25 revised full papers were carefully reviewed and selected from 95 submissions. The papers are organized in topical sections on attacks; protocols; public-key cryptosystems; network security; software security; system security; database security; privacy; digital signatures.

Multicore Software Engineering, Performance and Tools - International Conference, MSEPT 2012, Prague, Czech Republic, May 31-June 1, 2012, Proceedings (Paperback, 2012)
Victor Pankratius, Michael Philippsen
R1,835 Discovery Miles 18 350 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MSEPT 2012, held in Prague in May/June 2012. The 9 revised papers, 4 of which are short papers, were carefully reviewed and selected from 24 submissions. The papers address new work on optimization of multicore software, program analysis, and automatic parallelization. They also provide new perspectives on programming models as well as on applications of multicore systems.

Design of Reservation Protocols for Multimedia Communication (Paperback, Softcover reprint of the original 1st ed. 1996)
Luca Delgrossi
R4,353 Discovery Miles 43 530 Ships in 10 - 15 working days

The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous-media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitter, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols, or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System) and its contributions to integrated-services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, and congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.
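The core mechanism a reservation protocol such as ST-II adds over best-effort packet switching can be sketched as an admission test: a new stream is accepted only if the link can still honour every previously accepted bandwidth reservation. The class, numbers, and method names below are illustrative assumptions, not the HeiTS or ST-II specification.

```python
class Link:
    """Toy model of a link that tracks guaranteed bandwidth reservations."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def admit(self, stream_kbps):
        """Admit a stream only if its reservation cannot oversubscribe
        the link; this is what preserves the quality guarantees of the
        streams already accepted."""
        if self.reserved + stream_kbps > self.capacity:
            return False
        self.reserved += stream_kbps
        return True

link = Link(10_000)        # hypothetical 10 Mbit/s link
print(link.admit(6_000))   # True: video stream fits
print(link.admit(5_000))   # False: would exceed capacity, so rejected
print(link.admit(3_000))   # True: audio stream still fits
```

Rejecting the middle request is the essential difference from best-effort networking: without admission control, all three streams would be carried and all three would suffer unbounded delay and jitter under overload.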

Adaptive Signal Processing - Theory and Applications (Paperback, Softcover reprint of the original 1st ed. 1986)
Thomas S. Alexander
R2,597 Discovery Miles 25 970 Ships in 10 - 15 working days

The creation of the text really began in 1976, with the author being involved with a group of researchers at Stanford University and the Naval Ocean Systems Center, San Diego. At that time, adaptive techniques were more laboratory (and mental) curiosities than the accepted and pervasive categories of signal processing that they have become. Over the last 10 years, adaptive filters have become standard components in telephony, data communications, and signal detection and tracking systems. Their use and consumer acceptance will undoubtedly only increase in the future. The mathematical principles underlying adaptive signal processing were initially fascinating and were my first experience in seeing applied mathematics work for a paycheck. Since that time, the application of even more advanced mathematical techniques has kept the area of adaptive signal processing as exciting as those initial days. The text seeks to be a bridge between the open literature in the professional journals, which is usually quite concentrated, concise, and advanced, and the graduate classroom and research environment, where underlying principles are often more important.

Euro-Par 2011 Parallel Processing - 17th International Euro-Par Conference, Bordeaux, France, August 29 - September 2, 2011,... Euro-Par 2011 Parallel Processing - 17th International Euro-Par Conference, Bordeaux, France, August 29 - September 2, 2011, Proceedings, Part II (Paperback, 2011 ed.)
Emmanuel Jeannot, Raymond Namyst, Jean Roman
R1,579 Discovery Miles 15 790 Ships in 10 - 15 working days

The two-volume set LNCS 6852/6853 constitutes the refereed proceedings of the 17th International Euro-Par Conference held in Bordeaux, France, in August/September 2011. The 81 revised full papers presented were carefully reviewed and selected from 271 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster, and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; and high-performance networks and mobile ubiquitous computing.

Stabilization, Safety, and Security of Distributed Systems - 13th International Symposium, SSS 2011, Grenoble, France, October... Stabilization, Safety, and Security of Distributed Systems - 13th International Symposium, SSS 2011, Grenoble, France, October 10-12, 2011, Proceedings (Paperback)
Xavier Defago, Franck Petit, Vincent Villain
R1,570 Discovery Miles 15 700 Ships in 10 - 15 working days

This book constitutes the proceedings of the 13th International Symposium on Stabilization, Safety, and Security of Distributed Systems, SSS 2011, held in Grenoble, France, in October 2011. The 29 papers presented were carefully reviewed and selected from 79 submissions. They cover the following areas: ad-hoc, sensor, and peer-to-peer networks; safety and verification; security; self-organizing and autonomic systems; and self-stabilization.

Sensing and Systems in Pervasive Computing - Engineering Context Aware Systems (Paperback, Edition.): Dan Chalmers Sensing and Systems in Pervasive Computing - Engineering Context Aware Systems (Paperback, Edition.)
Dan Chalmers
R1,152 Discovery Miles 11 520 Ships in 10 - 15 working days

Focuses on issues and principles in context awareness, sensor processing, and software design (rather than sensor networks, HCI, or particular commercial systems).

Designed as a textbook, with readings and lab problems in most chapters.

Focuses on concepts, algorithms, and ideas rather than particular technologies.
