Books > Computing & IT > Computer hardware & operating systems > Computer architecture & logic design > General

Formal Description Techniques and Protocol Specification, Testing and Verification - FORTE XI/PSTV XVIII'98 IFIP TC6 WG6.1 Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols (FORTE XI) and Protocol Specification, Testing and Verification (PSTV XVIII) 3-6 November 1998, Paris, France (Paperback, Softcover reprint of the original 1st ed. 1998)
Stan Budkowski, Ana Cavalli, Elie Najm
R5,194 Discovery Miles 51 940 Ships in 18 - 22 working days

Formal Description Techniques and Protocol Specification, Testing and Verification addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; practical experience and case studies. Formal Description Techniques and Protocol Specification, Testing and Verification comprises the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification, sponsored by the International Federation for Information Processing and held in Paris, France, in November 1998. Formal Description Techniques and Protocol Specification, Testing and Verification is suitable as a secondary text for a graduate-level course on Distributed Systems or Communications, and as a reference for researchers and practitioners in industry.

High Performance Computing Systems and Applications (Paperback, Softcover reprint of the original 1st ed. 2002)
Andrew Pollard, Douglas J.K. Mewhort, Donald F. Weaver
R4,093 Discovery Miles 40 930 Ships in 18 - 22 working days

High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.

Supercomputer Architecture (Paperback, Softcover reprint of the original 1st ed. 1987)
Paul B. Schneck
R1,387 Discovery Miles 13 870 Ships in 18 - 22 working days

Supercomputers are the largest and fastest computers available at any point in time. The term was used for the first time in the New York World, March 1920, to describe "new statistical machines with the mental power of 100 skilled mathematicians in solving even highly complex algebraic problems." Invented by Mendenhall and Warren, these machines were used at Columbia University's Statistical Bureau. Recently, supercomputers have been used primarily to solve large-scale problems in science and engineering. Solutions of systems of partial differential equations, such as those found in nuclear physics, meteorology, and computational fluid dynamics, account for the majority of supercomputer use today. The early computers, such as EDVAC, SSEC, 701, and UNIVAC, demonstrated the feasibility of building fast electronic computing machines which could become commercial products. The next generation of computers focused on attaining the highest possible computational speeds. This book discusses the architectural approaches used to yield significantly higher computing speeds while preserving the conventional, von Neumann, machine organization (Chapters 2-4). Subsequent improvements depended on developing a new generation of computers employing a new model of computation: single-instruction multiple-data (SIMD) processors (Chapters 5-7). Later machines refined SIMD architecture and technology (Chapters 8-9). Supercomputers -- the largest and fastest computers available at any point in time -- have been the products of a complex interplay among technological, architectural, and algorithmic developments.

Data Communications and their Performance - Proceedings of the Sixth IFIP WG6.3 Conference on Performance of Computer Networks, Istanbul, Turkey, 1995 (Paperback, Softcover reprint of the original 1st ed. 1996)
Serge Fdida, Raif O. Onvural
R5,180 Discovery Miles 51 800 Ships in 18 - 22 working days

This is the sixth conference in the series, which started in 1981 in Paris and was followed by conferences held in Zurich (1984), Rio de Janeiro (1987), Barcelona (1991), and Raleigh (1993). The main objective of this IFIP conference series is to provide a platform for the exchange of recent and original contributions in communications systems in the areas of performance analysis, architectures, and applications. There are many exciting trends and developments in the communications industry, several of which are related to advances in Asynchronous Transfer Mode (ATM), multimedia services, and high-speed protocols. It is commonly believed in the communications industry that ATM represents the next generation of networking. Yet there are a number of issues that have been worked on in various standards bodies, government and industry research and development labs, and universities towards enabling high-speed networks in general and ATM networks in particular. Reflecting these trends, the technical program of the Sixth IFIP WG 6.3 Conference on Performance of Computer Networks consists of papers addressing a wide range of technical challenges and proposing various state-of-the-art solutions to a subset of them. The program includes 25 papers selected by the program committee out of 57 papers submitted.

Content-Based Access to Multimedia Information - From Technology Trends to State of the Art (Paperback, Softcover reprint of the original 1st ed. 1999)
Brad Perry, Shi-Kuo Chang, J. Dinsmore, David Doermann, Azriel Rosenfeld, …
R2,617 Discovery Miles 26 170 Ships in 18 - 22 working days

In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.

Distributed Systems for System Architects (Paperback, Softcover reprint of the original 1st ed. 2001)
Paulo Verissimo, Luis Rodrigues
R2,748 Discovery Miles 27 480 Ships in 18 - 22 working days

The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, added-value system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. The insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.

Advances in High Performance Computing (Paperback, Softcover reprint of the original 1st ed. 1997)
Lucio Grandinetti, J.S. Kowalik, Marian Vajtersic
R1,433 Discovery Miles 14 330 Ships in 18 - 22 working days

Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and on the Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor of the Workshop. We are also pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research.

The gm/ID Methodology, a sizing tool for low-voltage analog CMOS Circuits - The semi-empirical and compact model approaches (Paperback, Previously published in hardcover)
Paul Jespers
R3,303 Discovery Miles 33 030 Ships in 18 - 22 working days

IC designers currently appraise MOS transistor geometries and currents in order to trade off objectives like gain-bandwidth, slew rate, dynamic range, noise, and non-linear distortion. Making optimal choices is a difficult task. How, for instance, can the power consumption of an operational amplifier be minimized without too much area penalty, while keeping the gain-bandwidth unaffected at the same time? Moderate inversion yields high gains, but the concomitant area increase adds parasitics that restrict bandwidth. Which methodology should be used to arrive at the best compromise(s)? Is synthesis a matter of design experience combined with cut-and-try, a constrained multivariate optimization problem, or a mixture of both? Optimization algorithms are attractive from a system perspective, of course, but what about low-voltage low-power circuits, which require a more physical approach? The connections between transistor physics and circuits are intricate, and their interactions are not always easy to describe in terms of existing software packages. The gm/ID synthesis methodology is well adapted to CMOS analog circuits because the transconductance over drain current ratio combines most of the ingredients needed to determine transistor sizes and DC currents.
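
As a rough sketch of the sizing flow such a methodology enables (an illustration with assumed numbers, not an example taken from the book), one can start from a gain-bandwidth target, choose a gm/ID operating point, and derive the required drain current and aspect ratio; in a real design the normalized current ID/(W/L) at the chosen gm/ID would be read from measured or simulated device characteristics, so the constant below is only a placeholder.

# Illustrative gm/ID sizing step for a single transistor (assumed values).
import math

gbw = 10e6           # target gain-bandwidth product in Hz (assumption)
c_load = 2e-12       # load capacitance in F (assumption)
gm_over_id = 15.0    # chosen transconductance efficiency in S/A (assumption)

gm = 2 * math.pi * gbw * c_load   # required transconductance
i_d = gm / gm_over_id             # drain current implied by the gm/ID choice

# Placeholder: real designs read ID/(W/L) at this gm/ID from device curves.
id_per_wl = 0.5e-6                # normalized drain current in A (placeholder)
w_over_l = i_d / id_per_wl        # resulting aspect ratio

print(f"gm = {gm:.3e} S, ID = {i_d:.3e} A, W/L ~ {w_over_l:.1f}")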

Instruction-Level Parallelism - A Special Issue of The Journal of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1993)
B.R. Rau, J.A. Fisher
R5,141 Discovery Miles 51 410 Ships in 18 - 22 working days

Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.

The Design and Implementation of a Log-structured file system (Paperback, Softcover reprint of the original 1st ed. 1995)
Mendel Rosenblum
R2,614 Discovery Miles 26 140 Ships in 18 - 22 working days

Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.

Reversible Computation - 4th International Workshop, RC 2012, Copenhagen, Denmark, July 2-3, 2012, Revised Papers (Paperback, 2013 ed.)
Robert Gluck, Tetsuo Yokoyama
R1,294 Discovery Miles 12 940 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 4th International Workshop on Reversible Computation, RC 2012, held in Copenhagen, Denmark, in July 2012. The 19 contributions presented in this volume were carefully reviewed and selected from 46 submissions. The papers cover theoretical considerations, reversible software and reversible hardware, and physical realizations and applications in quantum computing.

Software Architectures and Component Technology (Paperback, Softcover reprint of the original 1st ed. 2002)
Mehmed Aksit
R5,173 Discovery Miles 51 730 Ships in 18 - 22 working days

Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enables software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are that it:
* evaluates the current architecture design methods and component composition techniques and explains their shortcomings;
* presents three practical architecture design methods in detail;
* gives four industrial architecture design examples;
* presents conceptual models for distributed message-based architectures;
* explains techniques for refining architectures into components;
* presents the recent developments in component and aspect-oriented techniques;
* explains the status of research on Piccola, Hyper/J(R), Pluggable Composite Adapters and Composition Filters.
Software Architectures and Component Technology is a suitable text for graduate level students in computer science and engineering, and as a reference for researchers and practitioners in industry.

Computer Communication Networks (Paperback, Softcover reprint of the original 1st ed. 1975)
R. L. Grimsdale, F. F. Kuo
R1,464 Discovery Miles 14 640 Ships in 18 - 22 working days

In 1968 the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense began implementation of a computer communication network which permits the interconnection of heterogeneous computers at geographically distributed centres throughout the United States. This network has come to be known as the ARPANET and has grown from the initial four-node configuration in 1969 to almost forty nodes (including satellite nodes in Hawaii, Norway, and London) in late 1973. The major goal of ARPANET is to achieve resource sharing among the network users. The resources to be shared include not only programs, but also unique facilities such as the powerful ILLIAC IV computer and large global weather databases that are economically feasible when widely shared. The ARPANET employs a distributed store-and-forward packet switching approach that is much better suited for computer communications networks than the more conventional circuit-switching approach. Reasons favouring packet switching include lower cost, higher capacity, greater reliability and minimal delay. All of these factors are discussed in these Proceedings.

Multicore Processors and Systems (Paperback, 2009 ed.)
Stephen W. Keckler, Kunle Olukotun, H. Peter Hofstee
R4,018 Discovery Miles 40 180 Ships in 18 - 22 working days

Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenges associated with scaling up multicore systems to hundreds of cores.

The book provides an overview of significant developments in the architectures for multicore processors and systems. It includes chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies on commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing.

Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on the unique technology implications, architectures, and implementations. Its contributing authors are drawn from both the academic and industrial communities.

Trust Networks for Recommender Systems (Paperback, 2011 ed.)
Patricia Victor, Chris Cornelis, Martine de Cock
R1,387 Discovery Miles 13 870 Ships in 18 - 22 working days

This book describes research performed in the context of trust/distrust propagation and aggregation, and their use in recommender systems. This is a hot research topic with important implications for various application areas. The main innovative contributions of the work are:
- a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency;
- proposals for various propagation and aggregation operators, including an analysis of their mathematical properties;
- an evaluation of these operators on real data, including a discussion of the data sets and their characteristics;
- a novel approach for identifying controversial items in a recommender system;
- an analysis of the utility of including distrust in recommender systems;
- various approaches for trust-based recommendations (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach;
- an analysis of various user types in recommender systems to optimize the bootstrapping of cold-start users.

Logic of Domains (Paperback, Softcover reprint of the original 1st ed. 1991)
G. Zhang
R2,648 Discovery Miles 26 480 Ships in 18 - 22 working days

This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. The introductory chapter provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented in this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter.

Assignment Problems in Parallel and Distributed Computing (Paperback, Softcover reprint of the original 1st ed. 1987)
Shahid H. Bokhari
R2,623 Discovery Miles 26 230 Ships in 18 - 22 working days

This book has been written for practitioners, researchers and students in the fields of parallel and distributed computing. Its objective is to provide detailed coverage of the applications of graph theoretic techniques to the problems of matching resources and requirements in multiple computer systems. There has been considerable research in this area over the last decade and intense work continues even as this is being written. For the practitioner, this book serves as a rich source of solution techniques for problems that are routinely encountered in the real world. Algorithms are presented in sufficient detail to permit easy implementation; background material and fundamental concepts are covered in full. The researcher will find a clear exposition of graph theoretic techniques applied to parallel and distributed computing. Research results spanning the last decade are covered, and many hitherto unpublished results by the author are included. There are many unsolved problems in this field; it is hoped that this book will stimulate further research.

Compact Models and Measurement Techniques for High-Speed Interconnects (Paperback, 2012)
Rohit Sharma, Tapas Chakravarty
R1,352 Discovery Miles 13 520 Ships in 18 - 22 working days

Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is placed on the unified approach (the variational method combined with the transverse transmission line technique) used to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches, and offers researchers and graduate students deeper insight into interconnect models in particular and interconnects in general. Time-domain and frequency-domain measurement techniques and simulation methodology are also explained in this book.

Supercomputational Science (Paperback, Softcover reprint of the original 1st ed. 1990)
R.G. Evans
R1,449 Discovery Miles 14 490 Ships in 18 - 22 working days

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School held at The Coseners House in Abingdon, which was an attempt to disseminate research methods in the different areas in which supercomputers are used. It is hoped that the publication of the lectures in this form will enable the experiences and achievements of supercomputer users to be shared with a larger audience. We thank all the lecturers and participants for making the Summer School an enjoyable and profitable experience. Finally, we thank the Science and Engineering Research Council and The Computer Board for supporting the Summer School.

Fundamentals of Computer Organization and Design (Paperback, Softcover reprint of the original 1st ed. 2003)
Sivarama P Dandamudi
R3,160 Discovery Miles 31 600 Ships in 18 - 22 working days

A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates compressed presentation of material. Emphasis is also placed on relating concepts to practical designs and chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.

Foundations of Real-Time Computing: Scheduling and Resource Management (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M.Van Tilborg, Gary M. Koob
R4,024 Discovery Miles 40 240 Ships in 18 - 22 working days

This volume contains a selection of papers that focus on the state of the art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Formal Specifications and Methods, complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant unit volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Compilers and Operating Systems for Low Power (Paperback, Softcover reprint of the original 1st ed. 2003)
Luca Benini, Mahmut Kandemir, J. Ramanujam
R2,640 Discovery Miles 26 400 Ships in 18 - 22 working days

Compilers and Operating Systems for Low Power focuses on both application-level, compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low-energy operating systems, or, more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates the state of the art and shows that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationships between energy-aware middleware and wireless microsensors, mobile computing and other wireless applications are covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.

Underwater Acoustic Data Processing (Paperback, Softcover reprint of the original 1st ed. 1989)
Y.T. Chan
R7,741 Discovery Miles 77 410 Ships in 18 - 22 working days

This book contains the papers that were accepted for presentation at the 1988 NATO Advanced Study Institute on Underwater Acoustic Data Processing, held at the Royal Military College of Canada from 18 to 29 July, 1988. Approximately 110 participants from various NATO countries were in attendance during this two week period. Their research interests range from underwater acoustics to signal processing and computer science; some are renowned scientists and some are recent Ph.D. graduates. The purpose of the ASI was to provide an authoritative summing up of the various research activities related to sonar technology. The exposition on each subject began with one or two tutorials prepared by invited lecturers, followed by research papers which provided indications of the state of development in that specific area. I have broadly classified the papers into three sections under the titles of I. Propagation and Noise, II. Signal Processing and III. Post Processing. The reader will find in Section I papers on low frequency acoustic sources and effects of the medium on underwater acoustic propagation. Problems such as coherence loss due to boundary interaction, wavefront distortion and multipath transmission were addressed. Besides the medium, corrupting noise sources also have a strong influence on the performance of a sonar system and several researchers described methods of modeling these sources.

Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems (Paperback, 2011 ed.)
Paul Lokuciejewski, Peter Marwedel
R4,011 Discovery Miles 40 110 Ships in 18 - 22 working days

For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.

Analysis of Cache Performance for Operating Systems and Multiprogramming (Paperback, Softcover reprint of the original 1st ed. 1989)
Agarwal
R2,633 Discovery Miles 26 330 Ships in 18 - 22 working days

As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
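
For orientation (a standard textbook identity, not a result taken from this book), the "average cost of a reference" can be written as the average memory access time of the hierarchy, e.g. for a two-level cache:

\text{AMAT} = t_{\text{hit},L1} + m_{L1}\,\bigl(t_{\text{hit},L2} + m_{L2}\,t_{\text{mem}}\bigr)

where m_{L1} and m_{L2} are the miss rates of the two cache levels and t_{\text{mem}} is the main-memory access time. Because t_{\text{mem}} amounts to many processor cycles, even small changes in the miss rates move sustained performance noticeably, which is why the effects of operating systems and multiprogramming on cache behaviour matter.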
