Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results for the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
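The architecture-independent, parameterized performance analysis described above can be sketched in a few lines of code. The sketch below is illustrative only and is not taken from the book: it assumes a toy model in which execution time is the sum of a computation term (work divided across processors at a per-operation time t_comp) and a communication term that grows with the number of partitions (boundary words at a per-word time t_comm), and it simply picks the processor count that minimizes the predicted time.

    # Illustrative sketch (not the book's model): a parameterized performance
    # estimate in which t_comp and t_comm capture the architecture-specific rates.
    def execution_time(work, boundary_words, p, t_comp, t_comm):
        """Predicted time on p processors for a data-parallel computation."""
        compute = (work / p) * t_comp                  # computation divided across p processors
        communicate = boundary_words * p * t_comm      # communication grows with the number of partitions
        return compute + communicate

    def best_processor_count(work, boundary_words, t_comp, t_comm, max_p=64):
        """Pick the processor count with the smallest predicted execution time."""
        return min(range(1, max_p + 1),
                   key=lambda p: execution_time(work, boundary_words, p, t_comp, t_comm))

    if __name__ == "__main__":
        # Hypothetical rates: 10 ns per operation, 1 microsecond per word communicated.
        p = best_processor_count(work=1_000_000, boundary_words=100,
                                 t_comp=10e-9, t_comm=1e-6)
        print("predicted best processor count:", p)

Such a parameterized estimate is the kind of quantity that can be evaluated at compile time or at run time to choose a data distribution, once the rates have been measured for a particular machine.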
There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students world-wide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
The State of Memory Technology. Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate by 2016. This translates to bit densities increasing two times every two years until the introduction of 8 gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application-specific integrated circuits (ASICs)). In the future, molecular devices, 64 gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the Memory Wall or Memory Gap.
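The growth rates quoted above already imply a rapidly widening processor-memory gap. As a purely illustrative back-of-the-envelope check (using only the doubling periods mentioned in the passage, not data from the book), compounding the two rates over a decade shows roughly a hundredfold CPU speedup against a mere doubling of memory speed:

    # Illustrative arithmetic only: compound the growth rates quoted above
    # (CPU speed doubles every 18 months, memory speed doubles every 10 years).
    def growth_factor(years, doubling_period_years):
        """Speed multiplier accumulated after `years`, given a doubling period."""
        return 2 ** (years / doubling_period_years)

    years = 10
    cpu = growth_factor(years, 1.5)     # 18 months = 1.5 years
    mem = growth_factor(years, 10.0)    # 10 years
    print(f"CPU speedup over {years} years: about {cpu:.0f}x")
    print(f"Memory speedup over {years} years: about {mem:.0f}x")
    print(f"The processor-memory gap widens by roughly {cpu / mem:.0f}x")

This widening ratio is the quantitative content of the Memory Wall: each processor generation waits a larger number of cycles for the same memory access.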
With the rapid growth of networking and high-computing power, the demand for large-scale and complex software systems has increased dramatically. Many of these software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high assurance properties. In order to comply with high assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process. Ontology Learning for the Semantic Web is designed for researchers and developers of semantic web applications. It also serves as an excellent supplemental reference for advanced-level courses in ontologies and the semantic web.
This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.
The papers in this volume were presented at the Second Annual Workshop on Active Middleware Services and were selected for inclusion here by the Editors. The AMS workshop was organized with support from both the National Science Foundation and the CAT center at the University of Arizona, and was held in Pittsburgh, Pennsylvania, on August 1, 2000, in conjunction with the 9th IEEE International Symposium on High Performance Distributed Computing (HPDC-9). The explosive growth of Internet-based applications and the proliferation of networking technologies have been transforming most areas of computer science and engineering as well as computational science and commercial application areas. This opens an outstanding opportunity to explore new, Internet-oriented software technologies that will open new research and application opportunities not only for the multimedia and commercial world, but also for the scientific and high-performance computing applications community. Two emerging technologies - agents and active networks - allow increased programmability to enable bringing new services to Internet-based applications. The AMS workshop presented research results and working papers in the areas of active networks, mobile and intelligent agents, software tools for high performance distributed computing, network operating systems, and application programming models and environments. The success of an endeavor such as this depends on the contributions of many individuals. We would like to thank Dr. Frederica Darema and the NSF for sponsoring the workshop.
Language, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses these problems. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers for individual nodes to pull missed security updates. This book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. Disseminating Security Updates at Internet Scale presents experimental measurements of a prototype implementation of "Revere" gathered using a large-scale oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. Disseminating Security Updates at Internet Scale will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by these designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.
Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision. The real global network can be achieved only via the ability to compute and access information from anywhere and at any time. This is the fundamental wish that motivates mobile computing. This evolution is the cumulative result of both hardware and software advances at various levels motivated by tangible application needs. Infrastructure research on communications and networking is essential for realizing wireless systems. Equally important is the design and implementation of data management applications for these systems, a task directly affected by the characteristics of the wireless medium and the resulting mobility of data resources and computation. Although a relatively new area, mobile data management has provoked a proliferation of research efforts motivated both by a great market potential and by many challenging research problems. The focus of Data Management for Mobile Computing is on the impact of mobile computing on data management beyond the networking level. The purpose is to provide a thorough and cohesive overview of recent advances in wireless and mobile data management. The book is written with a critical attitude. This volume probes the new issues introduced by wireless and mobile access to data and their conceptual and practical consequences. Data Management for Mobile Computing provides a single source for researchers and practitioners who want to keep abreast of the latest innovations in the field. It can also serve as a textbook for an advanced course on mobile computing or as a companion text for a variety of courses including courses on distributed systems, database management, transaction management, operating or file systems, information retrieval or dissemination, and web computing.
Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems which have an impact on safety, security and performance are discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages.
Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
This volume contains a selection of papers that focus on the state of the art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume by the title Foundations of Real-Time Computing: Scheduling and Resource Management complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
This book covers the entire spectrum of multicasting on the Internet from link- to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, group membership and total ordering in multicast groups. In-depth consideration is given to describing IP multicast routing protocols such as DVMRP, MOSPF, PIM and CBT, quality of service issues at the network layer using RSVP and ST-2, as well as the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of the MBone are described, real-time multicast transport issues are addressed and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.
In brief summary, the following results were presented in this work: * We developed a linear time approach to find register requirements for any specified CS schedule or filled MRT. * We developed an algorithm for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates. * We presented an efficient method of estimating register requirements as a function of pipeline depth. * We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth. * We presented experimental data to verify these new techniques. * We discussed some interesting design points for register file size on a number of different architectures.
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
New to the Second Edition: the latest developments in standards activities (JPEG-LS, MPEG-4, MPEG-7, and H.263), and a comprehensive review of recent activities on multimedia-enhanced processors, multimedia coprocessors, and dedicated processors, including examples from industry. Image and Video Compression Standards: Algorithms and Architectures, Second Edition presents an introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG (compression of still images), H.261 and H.263 (video teleconferencing), and MPEG-1 and MPEG-2 (video storage and broadcasting). The next generation of audiovisual coding standards, such as MPEG-4 and MPEG-7, are also briefly described. In addition, the book covers the MPEG and Dolby AC-3 audio coding standards and emerging techniques for image and video compression, such as those based on wavelets and vector quantization. Image and Video Compression Standards: Algorithms and Architectures, Second Edition emphasizes the foundations of these standards; namely, techniques such as predictive coding, transform-based coding such as the discrete cosine transform (DCT), motion estimation, motion compensation, and entropy coding, as well as how they are applied in the standards. The implementation details of each standard are avoided; however, the book provides all the material necessary to understand the workings of each of the compression standards, including information that can be used by the reader to evaluate the efficiency of various software and hardware implementations conforming to these standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Image and Video Compression Standards: Algorithms and Architectures, Second Edition uniquely covers all major standards (JPEG, MPEG-1, MPEG-2, MPEG-4, H.261, H.263) in a simple and tutorial manner, while fully addressing the architectural considerations involved when implementing these standards. As such, it serves as a valuable reference for the graduate student, researcher or engineer. The book is also used frequently as a text for courses on the subject, in both academic and professional settings.
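As a small illustration of the transform-based coding that underlies these standards (this sketch is not drawn from the book and uses SciPy's generic DCT routines rather than any standard's reference implementation), the 2-D type-II DCT of an 8x8 block concentrates most of the block's energy in a few low-frequency coefficients, which is what makes coarse quantization of the remaining coefficients effective:

    # Illustrative only: 2-D DCT of an 8x8 block with crude uniform quantization,
    # showing energy compaction; this is not JPEG/MPEG-conformant code.
    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.arange(64, dtype=float).reshape(8, 8)   # a smooth synthetic 8x8 block
    coeffs = dctn(block, norm="ortho")                 # forward 2-D DCT
    quantized = np.round(coeffs / 16) * 16             # coarse uniform quantization (step 16)
    reconstructed = idctn(quantized, norm="ortho")     # inverse 2-D DCT

    print("fraction of coefficients quantized to zero:",
          float(np.mean(np.abs(quantized) < 1e-9)))
    print("maximum reconstruction error:",
          float(np.abs(block - reconstructed).max()))

The actual standards add a perceptually tuned quantization matrix, zig-zag scanning and entropy coding on top of this basic transform step.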
Designing VLSI systems represents a challenging task. It is a transformation among different specifications corresponding to different levels of design abstraction: behavioral, structural and physical. The behavioral level describes the functionality of the design. It consists of two components: static and dynamic. The static component describes operations, whereas the dynamic component describes sequencing and timing. The structural level contains information about components, control and connectivity. The physical level describes the constraints that should be imposed on the floor plan, the placement of components, and the geometry of the design. Constraints of area, speed and power are also applied at this level. To implement such a multilevel transformation, a design methodology should be devised, taking into consideration the constraints, limitations and properties of each level. The mapping process between any of these domains is non-isomorphic. A single behavioral component may be transformed into more than one structural component. Design methodologies are the most recent evolution in the design automation era, which started off with the introduction and subsequent usage of module generation, especially for regular structures such as PLAs and memories. A design methodology should offer an integrated design system rather than a set of separate unrelated routines and tools. A general outline of a desired integrated design system is as follows: * Decide on a certain unified framework for all design levels. * Derive a design method based on this framework. * Create a design environment to implement this design method.
Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues: * Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute; * Authoring/learning as the central topic; * Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment.Within this framework, the book covers four major topics which denote the most important technical domains, namely: * The system kernel, based on object orientation and hypermedia; * Distributed multimedia support; * Cooperation support, and * Reusable instructional design support. Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.
Over the past few years, the demand for high speed Digital Signal Processing (DSP) has increased dramatically. New applications in real-time image processing, satellite communications, radar signal processing, pattern recognition, and real-time signal detection and estimation require major improvements at several levels: algorithmic, architectural, and implementation. These performance requirements can be achieved by employing parallel processing at all levels. Very Large Scale Integration (VLSI) technology supports and provides a good avenue for parallelism. Parallelism offers efficient solutions to several problems which can arise in VLSI DSP architectures, such as: 1. Intermediate data communication and routing: several DSP algorithms, such as the FFT, involve excessive data routing and reordering. Parallelism is an efficient mechanism to minimize the silicon cost and speed up the processing time of the intermediate middle stages. 2. Complex DSP applications: the required computation is almost doubled. Parallelism will allow two similar channels to process at the same time. The communication between the two channels has to be minimized. 3. Application specific systems: this emerging approach should achieve real-time performance in a cost-effective way. 4. Testability and fault tolerance: reliability has become a required feature in most DSP systems. To achieve such a property, the time overhead involved is significant. Parallelism may be the solution to maintain acceptable speed performance.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by many realistic demonstrations, used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multi-media, radar, sonar. Application-Driven Architecture Synthesis is of interest to both academics and senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.
The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory overhead in multiprocessors with a logically shared address space and physically distributed memory. This book presents techniques for reducing such communication overhead; these techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
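A minimal sketch of cyclic partitioning (illustrative only; this is not the book's ADP or CPR algorithm) helps make the load-balance versus communication tradeoff concrete: dealing the iterations of a data-parallel loop to processors in round-robin fashion keeps a linearly varying per-iteration workload far better balanced than a contiguous block partition, at the cost of spreading neighboring iterations across processors.

    # Illustrative sketch only (not ADP/CPR): cyclic partitioning deals loop
    # iterations to processors round-robin, balancing a linearly varying workload.
    def cyclic_partition(num_iterations, num_procs):
        """Map each processor to the list of iteration indices it executes."""
        return {p: list(range(p, num_iterations, num_procs))
                for p in range(num_procs)}

    def per_processor_work(partition, cost):
        """Total work assigned to each processor under a per-iteration cost function."""
        return {p: sum(cost(i) for i in iters) for p, iters in partition.items()}

    if __name__ == "__main__":
        # Linearly varying workload: iteration i costs i + 1 units (e.g., a triangular loop).
        part = cyclic_partition(num_iterations=16, num_procs=4)
        print(per_processor_work(part, cost=lambda i: i + 1))
        # Cyclic loads: {0: 28, 1: 32, 2: 36, 3: 40}; a block partition would give 10 vs. 58.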
Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package provides a variety of probabilistic, discrete-state models used to assess the reliability and performance of computer and communication systems. The models included are combinatorial reliability models (reliability block diagrams, fault trees and reliability graphs), directed, acyclic task precedence graphs, Markov and semi-Markov models (including Markov reward models), product-form queueing networks and generalized stochastic Petri nets. A practical approach to system modeling is followed; all of the examples described are solved and analyzed using the SHARPE tool. In structuring the book, the authors have been careful to provide the reader with a methodological approach to analytical modeling techniques. These techniques are not seen as alternatives but rather as an integral part of a single process of assessment which, by hierarchically combining results from different kinds of models, makes it possible to use state-space methods for those parts of a system that require them and non-state-space methods for the more well-behaved parts of the system. The SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) package is the 'toolchest' that allows the authors to specify stochastic models easily and solve them quickly, adopting model hierarchies and very efficient solution techniques. All the models described in the book are specified and solved using the SHARPE language; its syntax is described and the source code of almost all the examples discussed is provided. Audience: Suitable for use in advanced level courses covering reliability and performance of computer and communications systems and by researchers and practicing engineers whose work involves modeling of system performance and reliability.
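As a small illustration of the combinatorial reliability models mentioned above (this sketch is plain Python rather than the SHARPE specification language, and the component reliabilities are invented for the example), a series-parallel reliability block diagram can be evaluated by combining component reliabilities: a series arrangement works only if every component works, while a parallel arrangement fails only if all of its branches fail.

    # Illustrative only: evaluating a series-parallel reliability block diagram
    # in plain Python (not SHARPE syntax); component reliabilities are invented.
    def series(*reliabilities):
        """A series arrangement works only if every component works."""
        r = 1.0
        for x in reliabilities:
            r *= x
        return r

    def parallel(*reliabilities):
        """A parallel arrangement fails only if every branch fails."""
        q = 1.0
        for x in reliabilities:
            q *= (1.0 - x)
        return 1.0 - q

    # Hypothetical system: a CPU in series with a mirrored pair of disks
    # and a duplexed network interface.
    cpu, disk, nic = 0.999, 0.98, 0.995
    print(f"system reliability: {series(cpu, parallel(disk, disk), parallel(nic, nic)):.6f}")

Tools such as SHARPE go well beyond this combinatorial case by supporting Markov, semi-Markov, queueing and Petri net models and by allowing results from one model to feed hierarchically into another.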