Computer architecture & logic design

Application-Driven Architecture Synthesis (Paperback, Softcover reprint of the original 1st ed. 1993)
Francky Catthoor, Lars-Gunnar Svensson
R4,221. Ships in 10 - 15 working days.

Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, as illustrated by the many realistic demonstrations used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multi-media, radar, and sonar. Application-Driven Architecture Synthesis is of interest to academics as well as to senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.

Scheduling in Parallel Computing Systems - Fuzzy and Annealing Techniques (Paperback, Softcover reprint of the original 1st ed. 1999)
Shaharuddin Salleh, Albert Y. Zomaya
R4,197. Ships in 10 - 15 working days.

Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments; moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well on general forms of the problem. Techniques inspired by natural phenomena, on the other hand, have been successfully applied to a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning), and this success motivated their use in this book on scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
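
To give a taste of the annealing side of the book's subject, the sketch below applies simulated annealing to static task-to-processor assignment. This is a generic textbook formulation, not the authors' algorithm: the makespan cost function, the single-task-move neighbourhood, and the geometric cooling schedule are all illustrative assumptions.

    import math
    import random

    def makespan(assignment, task_times, n_procs):
        """Load of the most heavily loaded processor."""
        loads = [0.0] * n_procs
        for task, proc in enumerate(assignment):
            loads[proc] += task_times[task]
        return max(loads)

    def anneal_schedule(task_times, n_procs, t0=10.0, cooling=0.995, steps=20000):
        """Simulated annealing over task-to-processor assignments."""
        n = len(task_times)
        current = [random.randrange(n_procs) for _ in range(n)]
        cost = makespan(current, task_times, n_procs)
        best, best_cost = current[:], cost
        t = t0
        for _ in range(steps):
            # Neighbour move: reassign one random task to a random processor.
            task, proc = random.randrange(n), random.randrange(n_procs)
            old = current[task]
            current[task] = proc
            new_cost = makespan(current, task_times, n_procs)
            # Metropolis criterion: always accept improvements, accept
            # uphill moves with probability exp(-delta / T).
            delta = new_cost - cost
            if delta <= 0 or random.random() < math.exp(-delta / t):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = current[:], cost
            else:
                current[task] = old   # undo the rejected move
            t *= cooling              # geometric cooling schedule
        return best, best_cost

    tasks = [random.uniform(1, 10) for _ in range(40)]
    schedule, cost = anneal_schedule(tasks, n_procs=4)
    print("makespan:", round(cost, 2))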

Distributed Sensor Networks - A Multiagent Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
Victor Lesser, Charles L. Ortiz Jr, Milind Tambe
R4,255. Ships in 10 - 15 working days.

Distributed Sensor Networks is the first book of its kind to examine the problem of distributed resource allocation in sensor networks using ideas taken from the field of multiagent systems. The field of multiagent systems has itself seen exponential growth in the past decade, and has developed a variety of techniques for distributed resource allocation. Distributed Sensor Networks contains contributions from leading international researchers describing a variety of approaches to this problem, based on examples of implemented systems taken from a common distributed sensor network application; each approach is motivated, demonstrated and tested by way of a common challenge problem. The book focuses on both practical systems and their theoretical analysis, and is divided into three parts: the first part describes the common sensor network challenge problem; the second part explains the different technical approaches to the common challenge problem; and the third part provides results on the formal analysis of a number of approaches taken to address the challenge problem.

Quality by Design for Electronics (Paperback, Softcover reprint of the original 1st ed. 1996)
W. Fleischammer
R4,939. Ships in 10 - 15 working days.

This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. In nearly all expanding branches of industry, electronics, especially digital electronics, is involved, and the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without an increase in cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers, and is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered the most important components because the degree of integration is still rising.

Computational Aerosciences in the 21st Century - Proceedings of the ICASE/LaRC/NSF/ARO Workshop, conducted by the Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, The National Science Foundation and the Army Research Office, April 22-24, 1998 (Paperback, Softcover reprint of the original 1st ed. 2000)
Manuel D. Salas, W. Kyle Anderson
R2,796. Ships in 10 - 15 working days.

Over the last decade, the role of computational simulations in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. It specifically looks at trends in algorithm research, human-computer interfaces, network-based computing, surface modeling and grid generation, and computer hardware and architecture. The book provides a good overview of the current state of the art and offers guidelines for future research directions. It is intended for computational scientists active in the field and program managers making strategic research decisions.

Modeling Microprocessor Performance (Paperback, Softcover reprint of the original 1st ed. 1998)
Bibiche Geuskens, Kenneth Rose
R2,766. Ships in 10 - 15 working days.

Modeling Microprocessor Performance focuses on the development of a design and evaluation tool named RIPE (Rensselaer Interconnect Performance Estimator). This tool analyzes the impact of interconnect, device, circuit, design and architectural parameters on the wireability, clock frequency, power dissipation, and reliability of single-chip CMOS microprocessors. It can accurately predict the overall performance of existing microprocessor systems: for the three major microprocessor architectures, DEC, PowerPC and Intel, the results have shown agreement within 10% on key parameters. The models cover a broad range of issues that relate to the implementation and performance of single-chip CMOS microprocessors. The book contains a detailed discussion of the various models and the underlying assumptions based on actual design practices. As such, RIPE and its models provide an insightful tool into single-chip microprocessor design and its performance aspects. At the same time, they give design and process engineers the capability to model, evaluate, compare and optimize single-chip microprocessor systems using advanced technology and design techniques at an early design stage, without costly and time-consuming implementation. RIPE and its models demonstrate the factors which must be considered when estimating the impact of trade-offs in device and interconnect technology and architecture design on microprocessor performance.

Distributed Systems for System Architects (Paperback, Softcover reprint of the original 1st ed. 2001)
Paulo Verissimo, Luis Rodrigues
R2,892. Ships in 10 - 15 working days.

The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large: it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging, but the problems have increased too. What was previously mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What system architects need to know, then, includes but goes well beyond the functional properties of distributed systems.

Handbook of Electronics Manufacturing Engineering (Paperback, Softcover reprint of the original 3rd ed. 1997)
Bernie Matisoff
R5,545. Ships in 10 - 15 working days.

This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight, and it will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.

Matrix Computations on Systolic-Type Arrays (Paperback, Softcover reprint of the original 1st ed. 1992)
Jaime Moreno, Tomas Lang
R4,233. Ships in 10 - 15 working days.

Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and the impact on overall performance and cost. A system which allows for the analysis of methods for the design/mapping of matrix algorithms is also presented; this method identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options, as well as by a priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (it suits a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms, either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.
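
To make the notion of a processor array concrete, here is a minimal cycle-by-cycle simulation of the classic two-dimensional systolic array for matrix multiplication. It is the standard textbook mapping, offered only for orientation, not the design/mapping method developed in the book; numpy is assumed for convenience, and the timing follows the usual skewed-input formulation in which operand pair (A[i,k], B[k,j]) meets in PE(i,j) at cycle i + j + k.

    import numpy as np

    def systolic_matmul(A, B):
        """Wavefront view of an N x N systolic array.

        PE(i, j) accumulates C[i, j]. Row i of A is fed from the left
        with a skew of i cycles, column j of B from the top with a skew
        of j cycles, so A[i, k] and B[k, j] meet in PE(i, j) at cycle
        t = i + j + k; the last PE finishes at t = 3(N - 1).
        """
        n = A.shape[0]
        C = np.zeros((n, n))
        for t in range(3 * n - 2):          # total pipeline latency
            for i in range(n):
                for j in range(n):
                    k = t - i - j
                    if 0 <= k < n:          # an operand pair arrives now
                        C[i, j] += A[i, k] * B[k, j]
        return C

    A = np.arange(9.0).reshape(3, 3)
    B = np.ones((3, 3))
    assert np.allclose(systolic_matmul(A, B), A @ B)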

Scalable Shared Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1992)
Michel Dubois, Shreekant S. Thakkar
R4,243. Ships in 10 - 15 working days.

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories: 1. Access Order and Synchronization; 2. Performance; 3. Cache Protocols and Architectures; 4. Distributed Shared Memory. Particular topics on which new ideas and results are presented in these proceedings include: efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.

Synchronization Design for Digital Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Teresa H. Meng
R2,762. Ships in 10 - 15 working days.

Synchronization is one of the important issues in digital system design. While other approaches have always been intriguing, up until now synchronous operation using a common clock has been the dominant design philosophy. However, we have reached the point, with advances in technology, where other options should be given serious consideration. This is because the clock periods are getting much smaller in relation to the interconnect propagation delays, even within a single chip and certainly at the board and backplane level. To a large extent, this problem can be overcome with careful clock distribution in synchronous design, and tools for computer-aided design of clock distribution. However, this places global constraints on the design, making it necessary, for example, to redesign the clock distribution each time any part of the system is changed. In this book, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital system design and in digital communications, the latter field being relevant because large propagation delays have always been a dominant consideration in design. While synchronous design is discussed and contrasted to the other techniques in Chapter 6, the dominant theme of this book is alternative approaches.
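
The scale of the problem is easy to see with back-of-the-envelope numbers; every figure below is assumed for illustration, not taken from the book. Once the propagation delay across a chip or board exceeds the clock period, a globally distributed clock stops being a free abstraction.

    # Illustrative numbers only: a fast clock against a long interconnect.
    clock_freq_hz = 2e9
    clock_period_ps = 1e12 / clock_freq_hz                 # 500 ps
    wire_delay_ps_per_mm = 60                              # assumed RC wire delay
    die_width_mm = 15
    cross_chip_delay_ps = wire_delay_ps_per_mm * die_width_mm   # 900 ps
    # A signal cannot cross the die within one clock period, so global
    # synchrony requires careful clock distribution or an alternative
    # synchronization discipline.
    print(cross_chip_delay_ps > clock_period_ps)           # True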

Arrays, Functional Languages, and Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Lenore M. Restifo Mullin; Contributions by Michael Jenkins, Gaetan Hains, Robert Bernecky, Guang R. Gao
R4,240. Ships in 10 - 15 working days.

During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's Array Theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and λ-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed, since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.

Workload Characterization for Computer System Design (Paperback, Softcover reprint of the original 1st ed. 2000)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R2,769. Ships in 10 - 15 working days.

The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge for research groups, due to the difficulty of accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.
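
Memory-referencing behavior of the kind analyzed in Chapter 2 is often summarized by reuse (LRU stack) distances. The sketch below computes them naively from a toy address trace; it is a generic illustration of the metric, not the book's instrumentation, and a real tool would replace the O(N*M) list scan with a tree.

    def reuse_distances(trace):
        """LRU stack (reuse) distance of each reference in a trace.

        Distance = number of distinct addresses touched since the last
        access to the same address; float('inf') marks first-time
        references. Small distances indicate strong temporal locality.
        """
        stack = []            # most recently used address at the end
        dists = []
        for addr in trace:
            if addr in stack:
                pos = stack.index(addr)
                dists.append(len(stack) - 1 - pos)
                stack.pop(pos)
            else:
                dists.append(float('inf'))
            stack.append(addr)
        return dists

    trace = [0x10, 0x20, 0x10, 0x30, 0x20, 0x10]
    print(reuse_distances(trace))   # [inf, inf, 1, inf, 2, 2]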

Image and Text Compression (Paperback, Softcover reprint of the original 1st ed. 1992)
James A. Storer
R4,249. Ships in 10 - 15 working days.

This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of interest:
• Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, in digital communications, databases, and supercomputing.
• The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding.
• Contributors are all accomplished researchers.
• An extensive bibliography aids researchers in the field.
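
Among the dictionary (LZ) methods covered in Part 2, LZW is the easiest to show end to end, so a minimal compressor/decompressor pair follows. It is a generic textbook implementation for illustration only; the book's chapters go far beyond this, e.g. into massively parallel hardware.

    def lzw_compress(data: bytes) -> list[int]:
        """Classic LZW: grow a dictionary of byte strings, emit codes."""
        table = {bytes([i]): i for i in range(256)}
        w, out = b"", []
        for byte in data:
            wc = w + bytes([byte])
            if wc in table:
                w = wc
            else:
                out.append(table[w])
                table[wc] = len(table)    # new dictionary entry
                w = bytes([byte])
        if w:
            out.append(table[w])
        return out

    def lzw_decompress(codes: list[int]) -> bytes:
        table = {i: bytes([i]) for i in range(256)}
        w = table[codes[0]]
        out = [w]
        for code in codes[1:]:
            # Tricky case: the code may refer to the entry being built.
            entry = table[code] if code in table else w + w[:1]
            out.append(entry)
            table[len(table)] = w + entry[:1]
            w = entry
        return b"".join(out)

    msg = b"TOBEORNOTTOBEORTOBEORNOT"
    assert lzw_decompress(lzw_compress(msg)) == msg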

Robust Model-Based Fault Diagnosis for Dynamic Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Jie Chen, R.J. Patton
R8,092. Ships in 10 - 15 working days.

There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems, where system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. Fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications, which will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for researchers in academia and professional engineers in industry alike, because both theory and applications are discussed. Although it is a research monograph, it will be an important text for postgraduate research students worldwide; the largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
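
The residual-generation idea at the heart of the book can be sketched in a few lines: run a model (here a Luenberger observer for an assumed discrete-time linear plant) in parallel with the process and flag a fault when the output error exceeds a threshold. All matrices, the observer gain, and the threshold below are invented for illustration; robust residual generation, the book's actual focus, additionally decouples the residual from disturbances and model errors.

    import numpy as np

    # Hypothetical plant: x(k+1) = A x(k) + B u(k), y(k) = C x(k).
    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.2]])      # observer gain, chosen ad hoc

    def residual_flags(u_seq, y_seq, threshold=0.3):
        """Flag samples where the residual r = y - C x_hat exceeds
        the threshold."""
        x_hat = np.zeros((2, 1))
        flags = []
        for u, y in zip(u_seq, y_seq):
            r = y - float(C @ x_hat)            # output residual
            flags.append(abs(r) > threshold)
            x_hat = A @ x_hat + B * u + L * r   # observer update
        return flags

    # Simulate the fault-free plant, injecting a sensor bias at k = 50.
    x = np.zeros((2, 1))
    u_seq, y_seq = [], []
    for k in range(100):
        u = 1.0
        y = float(C @ x) + (0.5 if k >= 50 else 0.0)   # bias fault
        u_seq.append(u)
        y_seq.append(y)
        x = A @ x + B * u
    # The flag fires at fault onset (k = 50); the observer then partly
    # absorbs the bias, which is one reason robust schemes need more
    # than a fixed threshold.
    print(residual_flags(u_seq, y_seq)[45:55])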

Hierarchical Scheduling in Parallel and Cluster Systems (Paperback, Softcover reprint of the original 1st ed. 2003)
Sivarama Dandamudi
R4,224. Ships in 10 - 15 working days.

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory, and are becoming commonplace, for example, in high-performance graphics workstations. They are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors, and they provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems; however, they introduce local and remote memories, which lead to the non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors, and as a result they do not provide a single address space.
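
The architectural distinction drawn here, a single shared address space versus explicit message passing, can be mimicked in miniature with Python's multiprocessing module. This is only an analogy for illustration, not how UMA/NUMA hardware is actually programmed:

    from multiprocessing import Process, Queue, Value

    def shared_worker(counter, n):
        # Shared-memory style: all workers update one address space.
        for _ in range(n):
            with counter.get_lock():
                counter.value += 1

    def message_worker(inbox, outbox):
        # Distributed-memory style: no shared state, only messages.
        total = sum(inbox.get() for _ in range(3))
        outbox.put(total)

    if __name__ == "__main__":
        counter = Value("i", 0)
        procs = [Process(target=shared_worker, args=(counter, 1000))
                 for _ in range(4)]
        for p in procs: p.start()
        for p in procs: p.join()
        print("shared counter:", counter.value)        # 4000

        inbox, outbox = Queue(), Queue()
        p = Process(target=message_worker, args=(inbox, outbox))
        p.start()
        for v in (1, 2, 3):
            inbox.put(v)                               # explicit messages
        print("message-passing sum:", outbox.get())    # 6
        p.join()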

Data Management for Mobile Computing (Paperback, Softcover reprint of the original 1st ed. 1998)
Evaggelia Pitoura, George Samaras
R5,392. Ships in 10 - 15 working days.

Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision. The real global network can be achieved only via the ability to compute and access information from anywhere and anytime. This is the fundamental wish that motivates mobile computing. This evolution is the cumulative result of both hardware and software advances at various levels motivated by tangible application needs. Infrastructure research on communications and networking is essential for realizing wireless systems. Equally important is the design and implementation of data management applications for these systems, a task directly affected by the characteristics of the wireless medium and the resulting mobility of data resources and computation. Although a relatively new area, mobile data management has provoked a proliferation of research efforts motivated both by a great market potential and by many challenging research problems. The focus of Data Management for Mobile Computing is on the impact of mobile computing on data management beyond the networking level. The purpose is to provide a thorough and cohesive overview of recent advances in wireless and mobile data management. The book is written with a critical attitude. This volume probes the new issues introduced by wireless and mobile access to data and their conceptual and practical consequences. Data Management for Mobile Computing provides a single source for researchers and practitioners who want to keep abreast of the latest innovations in the field. It can also serve as a textbook for an advanced course on mobile computing or as a companion text for a variety of courses including courses on distributed systems, database management, transaction management, operating or file systems, information retrieval or dissemination, and web computing.

Conductor: Distributed Adaptation for Heterogeneous Networks (Paperback, Softcover reprint of the original 1st ed. 2002)
Mark D. Yarvis, Peter Reiher, Gerald J. Popek
R2,777. Ships in 10 - 15 working days.

Internet heterogeneity is driving a new challenge in application development: adaptive software. Alongside increased Internet capacity and new access technologies, network congestion, the use of older technologies, wireless access, and peer-to-peer networking are increasing the heterogeneity of the Internet. Applications should provide gracefully degraded levels of service when network conditions are poor, and enhanced services when network conditions exceed expectations. Existing adaptive technologies, which are primarily end-to-end or proxy-based and often focus on a single deficient link, can perform poorly in heterogeneous networks. Instead, heterogeneous networks frequently require multiple, coordinated, and distributed remedial actions. Conductor: Distributed Adaptation for Heterogeneous Networks describes a new approach to graceful degradation in the face of network heterogeneity - distributed adaptation - in which adaptive code is deployed at multiple points within a network. The feasibility of this approach is demonstrated by Conductor, a middleware framework that enables distributed adaptation of connection-oriented, application-level protocols. By adapting protocols, Conductor provides application-transparent adaptation, supporting both existing applications and applications designed with adaptation in mind. The book introduces new techniques that enable distributed adaptation, making it automatic, reliable, and secure. In particular, it introduces the notion of semantic segmentation, which maintains exactly-once delivery of the semantic elements of a data stream while allowing the stream to be arbitrarily adapted in transit, and a secure architecture for automatic adaptor selection, protecting user data from unauthorized adaptation. These techniques are described both in the context of Conductor and in the broader context of distributed systems. Finally, the book presents empirical evidence from several case studies indicating that distributed adaptation can allow applications to degrade gracefully in heterogeneous networks, providing a higher quality of service to users than other adaptive techniques. Further, experimental results indicate that the proposed techniques can be employed without excessive cost. Thus, distributed adaptation is both practical and beneficial. Conductor: Distributed Adaptation for Heterogeneous Networks is designed to meet the needs of a professional audience of researchers and practitioners in industry and graduate-level students in computer science.
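
The flavor of semantic segmentation can be conveyed with a toy sketch: semantic elements carry an identity that survives arbitrary payload adaptation, and the receiver suppresses duplicates to preserve exactly-once delivery. Every name below is a hypothetical illustration, not Conductor's actual API.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        seq: int        # identity of the semantic element, not of its bytes
        payload: bytes  # may be transformed (compressed, transcoded) in transit

    def adapt(segment: Segment) -> Segment:
        # An adaptor may rewrite the payload arbitrarily, but must
        # preserve the segment's identity (seq) so delivery semantics hold.
        return Segment(segment.seq, segment.payload.upper())

    class Receiver:
        def __init__(self):
            self.delivered = set()

        def deliver(self, segment: Segment):
            # Exactly-once at the semantic level: retransmitted or
            # re-adapted copies of a segment are recognized and dropped.
            if segment.seq in self.delivered:
                return None
            self.delivered.add(segment.seq)
            return segment.payload

    rx = Receiver()
    s = adapt(Segment(seq=1, payload=b"hello"))
    print(rx.deliver(s))   # b'HELLO'
    print(rx.deliver(s))   # None -- duplicate suppressed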

Multiscalar Processors (Paperback, Softcover reprint of the original 1st ed. 2003)
Manoj Franklin
R2,778. Ships in 10 - 15 working days.

Multiscalar Processors presents a comprehensive treatment of the basic principles of Multiscalar execution, and advanced techniques for implementing the Multiscalar concepts. Special emphasis is placed on highlighting the major challenges involved in Multiscalar processing. This book is organized into nine chapters, and provides an excellent synopsis of a large body of research carried out on multiscalar processors in the last decade. It starts with technology trends that provide an impetus to the development of multiscalar processors and shape the development of future processors. The work ends with a review of the recent developments related to multiscalar processors.

Ontology Learning for the Semantic Web (Paperback, Softcover reprint of the original 1st ed. 2002)
Alexander Maedche
R2,782. Ships in 10 - 15 working days.

Ontology Learning for the Semantic Web explores ways of applying knowledge discovery techniques to a variety of web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web combines a number of complementary disciplines that feed in different types of unstructured and semi-structured data; this data is necessary to support a semi-automatic ontology engineering process. Ontology Learning for the Semantic Web is designed for researchers and developers of Semantic Web applications. It also serves as an excellent supplemental reference for advanced-level courses on ontologies and the Semantic Web.

Content-Based Access to Multimedia Information - From Technology Trends to State of the Art (Paperback, Softcover reprint of the original 1st ed. 1999)
Brad Perry, Shi-Kuo Chang, J. Dinsmore, David Doermann, Azriel Rosenfeld, …
R2,749. Ships in 10 - 15 working days.

In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.

Parallel-Vector Equation Solvers for Finite Element Engineering Applications (Paperback, Softcover reprint of the original 1st ed. 2002)
Duc Thai Nguyen
R4,278. Ships in 10 - 15 working days.

Despite the ample number of articles on parallel-vector computational algorithms published over the last 20 years, there is a lack of texts in the field customized for senior undergraduate and graduate engineering research. Parallel-Vector Equation Solvers for Finite Element Engineering Applications aims to fill this gap, detailing both the theoretical development and important implementations of equation-solution algorithms. The mathematical background necessary to understand their inception balances well with descriptions of their practical uses. Illustrated with a number of state-of-the-art FORTRAN codes developed as examples for the book, Dr. Nguyen's text is a perfect choice for instructors and researchers alike.

Computation and Storage in the Cloud - Understanding the Trade-Offs (Paperback, New)
Dong Yuan, Yun Yang, Jinjun Chen
R831 (list price R1,028; save R197, 19%). Ships in 12 - 17 working days.

Computation and Storage in the Cloud is the first comprehensive and systematic work investigating the issue of the computation and storage trade-off in the cloud in order to reduce the overall application cost. Scientific applications are usually computation and data intensive, where complex computation tasks take a long time to execute and the generated datasets are often terabytes or petabytes in size. Storing valuable generated application datasets can save the cost of regenerating them when they are reused, not to mention the waiting time caused by regeneration; however, the large size of scientific datasets is a big challenge for their storage. By proposing innovative concepts, theorems and algorithms, this book helps bring the cost down dramatically for both cloud users and service providers running computation and data intensive scientific applications in the cloud (a toy version of the underlying cost comparison is sketched after the list below). The book:
• Covers cost models and benchmarking that explain the necessary trade-offs for both cloud providers and users
• Describes several novel strategies for storing application datasets in the cloud
• Includes real-world case studies of scientific research applications
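
The trade-off the book investigates can be made concrete with a toy monthly cost comparison. Every rate below is invented for illustration; the book's actual strategies also account for regeneration of upstream datasets and tolerable waiting time.

    def cheaper_to_store(size_gb, storage_price_gb_month,
                         regen_compute_cost, uses_per_month):
        """Is keeping a derived dataset cheaper than regenerating it
        on every reuse? Purely illustrative cost model."""
        keep = size_gb * storage_price_gb_month
        regenerate = regen_compute_cost * uses_per_month
        return keep <= regenerate

    # A 2 TB intermediate dataset at $0.02/GB-month costs $40/month to
    # keep; regenerating it costs $15 of compute and happens 5 times a
    # month ($75), so storing wins here.
    print(cheaper_to_store(2000, 0.02, 15.0, 5))   # True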

Compositional Verification of Concurrent and Real-Time Systems (Paperback, Softcover reprint of the original 1st ed. 2002)
Eric Y.T. Juan, Jeffrey J.P. Tsai
R2,767. Ships in 10 - 15 working days.

With the rapid growth of networking and high-computing power, the demand for large-scale and complex software systems has increased dramatically. Many of the software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high assurance properties. In order to comply with high assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.

Neural Circuits and Networks - Proceedings of the NATO Advanced Study Institute on Neuronal Circuits and Networks, held at the Ettore Majorana Center, Erice, Italy, June 15-27, 1997 (Paperback, Softcover reprint of the original 1st ed. 1998)
Vincent Torre, John Nicholls
R2,778. Ships in 10 - 15 working days.

The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at a cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed, the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks" held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach to experimental and theoretical aspects of brain function, combining different techniques and methodologies.
