Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer) subtitled System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
The last decade has seen tremendous growth in usage of the World Wide Web. Web caching is a technology aimed at reducing the transmission of redundant network traffic and improving access to the Web. The key idea in Web caching is to cache frequently-accessed content so that it may be used profitably later. This leads to cost savings, reduction in network traffic, improved access and better content availability. Web Caching and Its Applications gives the reader an understanding of the latest developments in Web caching research. Topics covered include architectural aspects, aspects requiring coordination among caches, aspects related to network traffic, techniques that complement caching, practical aspects, and aspects related to performance. While Web Caching and Its Applications is designed for a professional audience, students will appreciate the exercises for applying the knowledge to solving practical problems related to Web caching and Internet performance. The book includes an exhaustive list of references for further study.
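To make the key idea concrete, here is a minimal sketch (not code from the book) of one classic replacement policy in this setting, least-recently-used caching; the origin fetch and the capacity value are hypothetical placeholders:

```python
from collections import OrderedDict

def fetch_from_origin(url):
    # Hypothetical stand-in for an HTTP request to the origin server.
    return f"<content of {url}>"

class LRUWebCache:
    """Keep recently used responses; evict the least recently used one."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()          # url -> cached response body

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)     # mark as most recently used
            return self.store[url]          # hit: no redundant traffic
        body = fetch_from_origin(url)       # miss: fetch and remember
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return body

cache = LRUWebCache()
cache.get("http://example.com/a")
cache.get("http://example.com/a")           # second request is a cache hit
```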
Many real-time systems rely on static scheduling algorithms. This includes cyclic scheduling, rate monotonic scheduling and fixed schedules created by off-line scheduling techniques such as dynamic programming, heuristic search, and simulated annealing. However, for many real-time systems, static scheduling algorithms are quite restrictive and inflexible. For example, highly automated agile manufacturing, command, control and communications, and distributed real-time multimedia applications all operate over long lifetimes and in highly non-deterministic environments. Dynamic real-time scheduling algorithms are more appropriate for, and are used in, such systems. Many of these algorithms are based on earliest deadline first (EDF) policies. There exists a wealth of literature on EDF-based scheduling with many extensions to deal with sophisticated issues such as precedence constraints, resource requirements, system overload, multi-processors, and distributed systems. Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms aims at collecting a significant body of knowledge on EDF scheduling for real-time systems, but it does not try to be all-inclusive (the literature is too extensive). The book primarily presents the algorithms and associated analysis, but guidelines, rules, and implementation considerations are also discussed, especially for the more complicated situations where mathematical analysis is difficult. In general, it is very difficult to codify and taxonomize scheduling knowledge because there are many performance metrics, task characteristics, and system configurations. Also adding to the complexity is the fact that a variety of algorithms have been designed for different combinations of these considerations. In spite of the recent advances there are still gaps in the solution space, and there is a need to integrate the available solutions. For example, a list of issues to consider includes:
* preemptive versus non-preemptive tasks,
* uni-processors versus multi-processors,
* using EDF at dispatch time versus EDF-based planning,
* precedence constraints among tasks,
* resource constraints,
* periodic versus aperiodic versus sporadic tasks,
* scheduling during overload,
* fault tolerance requirements, and
* providing guarantees and levels of guarantees (meeting quality of service requirements).
Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms should be of interest to researchers, real-time system designers, and instructors and students, either in a focused course on deadline-based scheduling for real-time systems or, more likely, as part of a more general course on real-time computing. The book serves as an invaluable reference in this fast-moving field.
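To make the EDF policy itself concrete, here is a minimal sketch under deliberately simple assumptions (one processor, preemptive, independent jobs); it illustrates the basic policy and is not an algorithm reproduced from the book:

```python
import heapq

def edf_simulate(jobs):
    """Simulate preemptive EDF on one processor.

    jobs: list of (release_time, exec_time, deadline) tuples.
    Returns the deadlines that were missed.
    """
    pending = sorted(jobs)             # not yet released, by release time
    ready = []                         # min-heap of (deadline, remaining work)
    t, i, missed = 0, 0, []
    while i < len(pending) or ready:
        if not ready:                  # idle: advance to the next release
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            _, c, d = pending[i]       # admit newly released jobs
            heapq.heappush(ready, (d, c))
            i += 1
        d, c = heapq.heappop(ready)    # earliest deadline runs first
        nxt = pending[i][0] if i < len(pending) else float("inf")
        run = min(c, nxt - t)          # run until done or next arrival
        t += run
        if run < c:                    # preempted: requeue remaining work
            heapq.heappush(ready, (d, c - run))
        elif t > d:
            missed.append(d)
    return missed

print(edf_simulate([(0, 3, 7), (1, 2, 4), (2, 1, 8)]))   # -> [] (all met)
```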
Anyone who can interpret decision diagrams using the spectral approach can advance both the utility and understanding of classical DD techniques. This approach also provides a framework for developing advanced solutions for digital design and a host of other applications. Scientists, computer science and engineering professionals, and researchers with an interest in the spectral methods of representing discrete functions, as well as the foundations of logic design, will find the book a clearly explained, well-organized, and essential resource.
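As a small taste of the spectral view of discrete functions (an illustrative sketch, not material from the book): the Walsh spectrum of a Boolean function can be computed from its truth table with a fast butterfly transform, and such spectral coefficients are what spectrally interpreted decision diagram techniques work with.

```python
def walsh_spectrum(truth_table):
    """Walsh spectrum of a Boolean function given by its truth table
    (length must be a power of two), using the fast transform and the
    common {0,1} -> {+1,-1} encoding."""
    s = [1 - 2 * v for v in truth_table]
    h = 1
    while h < len(s):                  # butterfly stages, O(n log n)
        for i in range(0, len(s), 2 * h):
            for j in range(i, i + h):
                s[j], s[j + h] = s[j] + s[j + h], s[j] - s[j + h]
        h *= 2
    return s

# XOR of two variables, truth table in input order 00, 01, 10, 11:
print(walsh_spectrum([0, 1, 1, 0]))    # -> [0, 0, 0, 4]
# All the energy sits in a single coefficient: XOR is a pure spectral line.
```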
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature because a global balanced state is reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in controlling the balancing quality, effective at preserving communication locality, and easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
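As a flavor of how such nearest-neighbor methods work (a toy sketch on a ring topology with an assumed diffusion parameter, not an algorithm taken from the book): each processor repeatedly exchanges a fixed fraction of its load difference with its direct neighbors only, and the iteration converges to the global average.

```python
def diffusion_step(load, alpha=0.25):
    """One nearest-neighbor (diffusion) balancing step on a ring.
    Each processor adjusts its load using only its two direct
    neighbors' loads; alpha is an assumed diffusion parameter."""
    n = len(load)
    return [load[p]
            + alpha * (load[(p - 1) % n] - load[p])
            + alpha * (load[(p + 1) % n] - load[p])
            for p in range(n)]

load = [16.0, 0.0, 4.0, 12.0]          # initial imbalance, total = 32
for _ in range(40):                    # successive local operations...
    load = diffusion_step(load)
print([round(x, 2) for x in load])     # ...reach the balanced state: 8.0 each
```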
Transformational programming and parallel computation are two emerging fields that may ultimately depend on each other for success. Perhaps because ad hoc programming on sequential machines is so straightforward, sequential programming methodology has had little impact outside the academic community, and transformational methodology has had little impact at all. However, because ad hoc programming for parallel machines is so hard, and because progress in software construction has lagged behind architectural advances for such machines, there is a much greater need to develop parallel programming and transformational methodologies. Parallel Algorithm Derivation and Program Transformation stimulates the investigation of formal ways to overcome problems of parallel computation, with respect to both software development and algorithm design. It represents perspectives from two different communities: transformational programming and parallel algorithm design, to discuss programming, transformational, and compiler methodologies for parallel architectures, and algorithmic paradigms, techniques, and tools for parallel machine models.Parallel Algorithm Derivation and Program Transformation is an excellent reference for graduate students and researchers in parallel programming and transformational methodology. Each chapter contains a few initial sections in the style of a first-year, graduate textbook with many illustrative examples. The book may also be used as the text for a graduate seminar course or as a reference book for courses in software engineering, parallel programming or formal methods in program development.
Formal Methods for Open Object-Based Distributed Systems V brings together research in three important and related fields:
* formal methods;
* distributed systems;
* object-based technology.
Such a convergence is representative of recent advances in the field of distributed systems, and provides links between several scientific and technological communities. The topics covered in this volume range widely, in subject from UML to object-based languages, calculi, and security, and in approach from specification to case studies and verification. This volume comprises the proceedings of the Fifth International Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2002), which was sponsored by the International Federation for Information Processing (IFIP) and held in Enschede, The Netherlands, in March 2002.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises great practical rewards. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), but the demand for increasing speed is constant. The job of a restructuring compiler is to discover the dependence structure of a given program and transform the program in a way that is consistent with both that dependence structure and the characteristics of the given machine. Much attention in this field of research has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. Loop Transformations for Restructuring Compilers: The Foundations provides a rigorous theory of loop transformations. The transformations are developed in a consistent mathematical framework using objects like directed graphs, matrices and linear equations. The algorithms that implement the transformations can then be precisely described in terms of certain abstract mathematical algorithms. The book provides the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discusses data dependence, and introduces the major transformations. The next volume will build a detailed theory of loop transformations based on the material developed here. Loop Transformations for Restructuring Compilers: The Foundations presents a theory of loop transformations that is rigorous and yet reader-friendly.
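As a tiny example of the dependence questions such a compiler must answer (the classical GCD test, a standard textbook technique rather than an excerpt from this book): for a write to A(a*i + b) and a read of A(c*i + d) in a do loop, a dependence requires integers i1, i2 with a*i1 + b = c*i2 + d, which is possible only if gcd(a, c) divides d - b.

```python
from math import gcd

def gcd_test(a, b, c, d):
    """GCD dependence test for A[a*i + b] (write) vs. A[c*i + d] (read):
    returns False when a cross-iteration dependence is impossible.
    The test is conservative: True means 'dependence not ruled out'."""
    return (d - b) % gcd(a, c) == 0

# A[2*i] written, A[2*i + 1] read: gcd(2, 2) = 2 does not divide 1,
# so there is no dependence and the loop can be parallelized.
print(gcd_test(2, 0, 2, 1))            # -> False
# A[2*i] written, A[2*i + 4] read: 2 divides 4, dependence is possible.
print(gcd_test(2, 0, 2, 4))            # -> True
```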
Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest. Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make a significant impact are identified, and the reader is pointed to related ongoing research efforts in the area. The book also provides a deeper insight into the active networking model for students and researchers who seek challenging topics that define or extend the frontiers of the technology. It describes the basic components of the model, explains some of the terms used by the active networking community, and provides the reader with a taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management. The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research. Management is a very important issue in active networks because of their open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a proactive management system; in other words, it provides the ability to solve a potential problem before it impacts the system, by modeling network devices within the network itself and running that model ahead of real time.
Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low-cost broadband communication media for wide area communication as well as for the realisation of local distributed systems are available. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants; information and communication devices like digital TV or mobile phones; and autonomous systems like service or edutainment robots. For the design of embedded systems the designer has to tackle three major aspects:
* the application itself, including the man-machine interface,
* the (target) architecture of the system, including all functional and non-functional constraints, and
* the design methodology, including modelling, specification, synthesis, test and validation.
The last two points are a major focus of this book. This book documents the high-quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP) and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3 several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters cover distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.
Study the past, if you would divine the future. -CONFUCIUS A well-written, organized, and concise survey is an important tool in any newly emerging field of study. This present text is the first of a new series that has been established to promote the publication of such survey books. A survey serves several needs. Virtually every new research area has its roots in several diverse areas, and many of the initial fundamental results are dispersed across a wide range of journals, books, and conferences in many different subfields. A good survey should bring together these results. But just a collection of articles is not enough. Since terminology and notation take many years to become standardized, it is often difficult to master the early papers. In addition, when a new research field has its foundations outside of computer science, all the papers may be difficult to read. Each field has its own view of elegance and its own method of presenting results. A good survey overcomes such difficulties by presenting results in a notation and terminology that is familiar to most computer scientists. A good survey can give a feel for the whole field. It helps identify trends, both successful and unsuccessful, and it should point new researchers in the right direction.
Dependence Analysis may be considered to be the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students, and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
High Performance Data Mining: Scaling Algorithms, Applications and Systems brings together in one place important contributions and up-to-date research results in this fast moving area. High Performance Data Mining: Scaling Algorithms, Applications and Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Analog Integrated Circuits deals with the design and analysis of modern analog circuits using integrated bipolar and field-effect transistor technologies. This book is suitable as a text for a one-semester course for senior-level or first-year graduate students as well as a reference work for practicing engineers. Advanced students will also find the text useful in that some of the material presented here is not covered in many first courses on analog circuits. Included in this is an extensive coverage of feedback amplifiers, current-mode circuits, and translinear circuits. Suitable background would be fundamental courses in electronic circuits and semiconductor devices. This book contains numerous examples, many of which include commercial analog circuits. End-of-chapter problems are given, many illustrating practical circuits. Chapter 1 discusses the models commonly used to represent devices used in modern analog integrated circuits. Presented are models for bipolar junction transistors, junction diodes, junction field-effect transistors, and metal-oxide semiconductor field-effect transistors. Both large-signal and small-signal models are developed, as well as their implementation in the SPICE circuit simulation program. The basic building blocks used in a large variety of analog circuits are analyzed in Chapter 2; these consist of current sources, dc level-shift stages, single-transistor gain stages, two-transistor gain stages, and output stages. Both bipolar and field-effect transistor implementations are presented. Chapter 3 deals with operational amplifier circuits. The four basic op-amp circuits are analyzed: (1) voltage-feedback amplifiers, (2) current-feedback amplifiers, (3) current-differencing amplifiers, and (4) transconductance amplifiers. Selected applications are also presented.
Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high-performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates compressed presentation of material. Emphasis is also placed on relating concepts to practical designs/chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.
In this volume, designed for computational scientists and engineers working on applications requiring the memories and processing rates of large-scale parallelism, leading algorithmicists survey their own field-defining contributions, together with enough historical and bibliographical perspective to permit working one's way to the frontiers. This book is distinguished from earlier surveys in parallel numerical algorithms by its extension of coverage beyond core linear algebraic methods into tools more directly associated with partial differential and integral equations - though still with an appealing generality - and by its focus on practical medium-granularity parallelism, approachable through traditional programming languages. Several of the authors used their invitation to participate as a chance to stand back and create a unified overview, which nonspecialists will appreciate.
Welcome to the third International Conference on Management of Multimedia Networks and Services (MMNS'2000) in Fortaleza (Brazil)! The first MMNS was held in Montreal (Canada) in July 1997, and the second MMNS was held in Versailles (France) in November 1998. The MMNS conference takes place every year and a half and aims to be a truly international event by bringing together researchers and practitioners from all around the world and by organising the conference each time in a different continent/country. Over the past several years, there has been a considerable amount of research within the fields of multimedia networking and network management. Much of that work has taken place within the context of managing Quality-of-Service in broadband integrated services digital networks such as ATM, and more recently in IP-based networks, to respond to the requirements of emerging multimedia applications. ATM networks were designed to support multimedia traffic with diverse characteristics and can be used as the transfer mode for both wired and wireless networks. A new set of Internet protocols is being developed to provide better quality of service, which is a prerequisite for supporting multimedia applications. Multimedia applications have a different set of requirements, which impacts the design of the underlying communication network as well as its management. Several QoS management mechanisms intervening at different layers of the communication network are required, including QoS routing, QoS-based transport, QoS negotiation, QoS adaptation, FCAPS management, and mobility management.
Testing of Communicating Systems XIV presents the latest international results in both the theory and industrial practice of the testing of communicating systems, ranging from tools and techniques for testing to test standards, frameworks, notations, algorithms, fundamentals of testing, and industrial experiences and issues. The tools and techniques discussed apply to conformance testing, interoperability testing, performance testing, Internet protocols and applications, and multimedia and distributed systems in general.
High Performance Networking is a state-of-the-art book that deals with issues relating to the fast-paced evolution of public, corporate and residential networks. It focuses on the practical and experimental aspects of high performance networks and introduces novel approaches and concepts aimed at improving the performance, usability, interoperability and scalability of such systems. Among others, the topics covered include:
* Java applets and applications;
* distributed virtual environments;
* new Internet streaming protocols;
* web telecollaboration tools;
* Internet, intranet;
* real-time services like multimedia;
* quality of service;
* mobility.
High Performance Networking comprises the proceedings of the Eighth International Conference on High Performance Networking, sponsored by the International Federation for Information Processing (IFIP) and held at Vienna University of Technology, Vienna, Austria, in September 1998. High Performance Networking is suitable as a secondary text for a graduate-level course on high performance networking, and as a reference for researchers and practitioners in industry.
Computing systems are of growing importance because of their wide use in many areas, including safety-critical systems. This book describes the basic models and approaches to the reliability analysis of such systems. An extensive review is provided, and models are categorized into different types. Some Markov models are extended to the analysis of specific computing systems, such as combined software and hardware, imperfect debugging processes, failure correlation, multi-state systems, and heterogeneous subsystems. One aim of the presentation is that, thanks to the sound analysis and the simplicity of the approaches, Markov models can be more readily applied in computing system reliability work.
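As a toy illustration of the Markov approach (a sketch with made-up failure and repair rates, not a model from the book): a two-state availability model with failure rate lambda and repair rate mu has steady-state availability mu/(lambda + mu), which can also be recovered numerically from the generator matrix.

```python
import numpy as np

# Two-state Markov model: state 0 = working, state 1 = failed.
lam, mu = 0.001, 0.1                   # assumed failure/repair rates per hour
Q = np.array([[-lam,  lam],            # generator matrix: rows sum to zero
              [  mu,  -mu]])

# The steady state pi satisfies pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"numerical availability:  {pi[0]:.6f}")          # ~0.990099
print(f"closed form mu/(lam+mu): {mu / (lam + mu):.6f}")
```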
Following an exchange of correspondence, I met Ross in Adelaide in June 1988. I was approached by the University of Adelaide about being an external examiner for this dissertation and willingly agreed. Upon receiving a copy of this work, what struck me most was the scholarship with which Ross approaches and advances this relatively new field of adaptive data compression. This scholarship, coupled with the ability to express himself clearly using figures, tables, and incisive prose, demanded that Ross's dissertation be given a wider audience. And so this thesis was brought to the attention of Kluwer. The modern data compression paradigm furthered by this work is based upon the separation of adaptive context modelling, adaptive statistics, and arithmetic coding. This work offers the most complete bibliography on this subject I am aware of. It provides an excellent and lucid review of the field, and should be equally beneficial to newcomers and to those of us already in the field.
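To illustrate the separation the paradigm rests on (a minimal sketch, not code from the thesis): the model supplies a probability for each symbol and then updates its statistics, while an arithmetic coder would spend about -log2(p) bits on a symbol of probability p; summing those costs gives the achievable code length without implementing the coder itself.

```python
from collections import Counter
from math import log2

def ideal_code_length(message):
    """Adaptive order-0 model over a byte alphabet with add-one
    smoothing.  The statistics adapt after every symbol; the returned
    value is the bit count an ideal arithmetic coder would achieve."""
    counts, total, bits = Counter(), 0, 0.0
    for sym in message:
        p = (counts[sym] + 1) / (total + 256)   # model's current estimate
        bits += -log2(p)                        # coder's cost for this symbol
        counts[sym] += 1                        # adapt the statistics
        total += 1
    return bits

text = b"abracadabra abracadabra"
print(f"{ideal_code_length(text):.1f} bits vs {len(text) * 8} raw bits")
```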
The area of virtual organizations, and industrial virtual enterprises in particular, is attracting large and growing interest, both in terms of research and development and in the implementation of new business practices. An ever-increasing number of international projects and national initiatives have been launched recently. Most of the earlier efforts focused on the development of supporting infrastructures, although more and more initiatives now pursue the exploitation of this concept in business terms. Being a recent research and development area, and in spite of the mentioned interest, there is a lack of a structured and comprehensive text that can be used as a reference source. Most available literature is dispersed across conference proceedings, journals, and book chapters. This book represents an attempt towards such a structured text. Although the book was prepared in the framework of PRO-VE'99, a working conference on infrastructures for virtual enterprises organized by the Esprit project PRODNET II and IFIP, it has the goal of covering more generic VE requirements and addressing several other approaches and important aspects of this paradigm.
JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. The JR implementation executes on UNIX-based systems (Linux, Mac OS X, and Solaris) and Windows-based systems. It is available free from the JR webpage. This book describes the JR programming language and illustrates how it can be used to write concurrent programs for a variety of applications. This text presents numerous small and large example programs. The source code for all programming examples and the given parts of all programming exercises are available on the JR webpage. Dr. Ronald A. Olsson and Dr. Aaron W. Keen, the authors of this text, are the designers and implementors of JR.
This year marks the 10th anniversary of the IFIP International Workshop on Protocols for High-Speed Networks (PfHSN). It began in May 1989, on a hillside overlooking Lake Zurich in Switzerland, and arrives now in Salem, Massachusetts, 6,000 kilometers away and 10 years later, in its sixth incarnation, but still with a waterfront view (the Atlantic Ocean). In between, it has visited some picturesque views of other lakes and bays of the world: Palo Alto (1990 - San Francisco Bay), Stockholm (1993 - the Baltic Sea), Vancouver (1994 - the Strait of Georgia and the Pacific Ocean), and Sophia Antipolis / Nice (1996 - the Mediterranean Sea). PfHSN is a workshop providing an international forum for the exchange of information on high-speed networks. It is a relatively small workshop, limited to 80 participants or fewer, to encourage lively discussion and the active participation of all attendees. A significant component of the workshop is interactive in nature, with a long history of significant time reserved for discussions. This was enhanced in 1996 by Christophe Diot and Walid Dabbous with the institution of Working Sessions chaired by an "animator," a distinguished researcher focusing on topical issues of the day. These sessions are an audience participation event, and are one of the things that makes PfHSN a true "working conference."