Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high level WAM instruction set in hardware, resulting in a CISC-style architecture.
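The operation that WAM instructions ultimately orchestrate is unification of Prolog terms. As a rough illustration only, here is a minimal Python sketch of first-order unification, with nested tuples standing in for compound terms and `?`-prefixed strings for logic variables; this representation is an assumption for illustration, not the PLM's actual term encoding:

```python
def walk(term, subst):
    """Follow variable bindings until an unbound variable or non-variable."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution unifying terms a and b, or None on failure.

    Like the WAM, this omits the occurs check for speed."""
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        x, y = walk(x, subst), walk(y, subst)
        if x == y:
            continue
        if isinstance(x, str) and x.startswith("?"):
            subst[x] = y                          # bind variable x to y
        elif isinstance(y, str) and y.startswith("?"):
            subst[y] = x
        elif isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            stack.extend(zip(x, y))               # unify functor/args pairwise
        else:
            return None                           # symbol clash: failure
    return subst

# unify point(X, 2) with point(1, Y): binds ?X -> 1 and ?Y -> 2
print(unify(("point", "?X", 2), ("point", 1, "?Y")))
```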
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates a compressed presentation of the material. Emphasis is also placed on relating concepts to practical designs and chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.
In this volume, designed for computational scientists and engineers working on applications requiring the memories and processing rates of large-scale parallelism, leading algorithmicists survey their own field-defining contributions, together with enough historical and bibliographical perspective to permit working one's way to the frontiers. This book is distinguished from earlier surveys in parallel numerical algorithms by its extension of coverage beyond core linear algebraic methods into tools more directly associated with partial differential and integral equations - though still with an appealing generality - and by its focus on practical medium-granularity parallelism, approachable through traditional programming languages. Several of the authors used their invitation to participate as a chance to stand back and create a unified overview, which nonspecialists will appreciate.
Welcome to the third International Conference on Management of Multimedia Networks and Services (MMNS'2000) in Fortaleza (Brazil)! The first MMNS was held in Montreal (Canada) in July 1997 and the second in Versailles (France) in November 1998. The MMNS conference takes place every year and a half and aims to be a truly international event by bringing together researchers and practitioners from all around the world and by organising the conference each time in a different continent/country. Over the past several years, there has been a considerable amount of research within the fields of multimedia networking and network management. Much of that work has taken place within the context of managing Quality-of-Service in broadband integrated services digital networks such as ATM, and more recently in IP-based networks, to respond to the requirements of emerging multimedia applications. ATM networks were designed to support multimedia traffic with diverse characteristics and can be used as the transfer mode for both wired and wireless networks. A new set of Internet protocols is being developed to provide better quality of service, which is a prerequisite for supporting multimedia applications. Multimedia applications have a different set of requirements, which impacts the design of the underlying communication network as well as its management. Several QoS management mechanisms intervening at different layers of the communication network are required, including QoS routing, QoS-based transport, QoS negotiation, QoS adaptation, FCAPS management, and mobility management.
Testing of Communicating Systems XIV presents the latest international results in both the theory and industrial practice of the testing of communicating systems, ranging from tools and techniques for testing to test standards, frameworks, notations, algorithms, fundamentals of testing, and industrial experiences and issues. The tools and techniques discussed apply to conformance testing, interoperability testing, performance testing, Internet protocols and applications, and multimedia and distributed systems in general.
This is the sixth conference in the series which started in 1981 in Paris, followed by conferences held in Zurich (1984), Rio de Janeiro (1987), Barcelona (1991), and Raleigh (1993). The main objective of this IFIP conference series is to provide a platform for the exchange of recent and original contributions in communications systems in the areas of performance analysis, architectures, and applications. There are many exciting trends and developments in the communications industry, several of which are related to advances in Asynchronous Transfer Mode (ATM), multimedia services, and high-speed protocols. It is commonly believed in the communications industry that ATM represents the next generation of networking. Yet, there are a number of issues that have been worked on in various standards bodies, government and industry research and development labs, and universities towards enabling high-speed networks in general and ATM networks in particular. Reflecting these trends, the technical program of the Sixth IFIP W.G. 6.3 Conference on Performance of Computer Networks consists of papers addressing a wide range of technical challenges and proposing various state-of-the-art solutions to a subset of them. The program includes 25 papers selected by the program committee out of 57 papers submitted.
Foundations of Dependable Computing: System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer) subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
This book introduces the fundamental concepts and practical simulation techniques for modeling different aspects of operating systems to study their general behavior and their performance. The approaches applied are object-oriented modeling and the process-interaction approach to discrete-event simulation. The book depends on basic modeling concepts and is more specialized than my previous book: Practical Process Simulation with Object-Oriented Techniques and C++, published by Artech House, Boston 1999. For a more detailed description see the Web location: http://science.kennesaw.edu/jgarrido/mybook.html. Most other books on performance modeling use only analytical approaches, and very few apply these concepts to the study of operating systems. Thus, the unique feature of the book is that it concentrates on design aspects of operating systems using practical simulation techniques. In addition, the book illustrates the dynamic behavior of different aspects of operating systems using the various simulation models, with a general hands-on approach.
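To give a flavour of the discrete-event approach (the book's own models are written in C++ with the author's object-oriented framework; the Python sketch below is only an illustrative assumption), here is a minimal event-driven model of a single CPU serving a FIFO queue of jobs:

```python
import heapq
import random

def simulate_cpu(n_jobs=5, mean_interarrival=4.0, mean_service=3.0, seed=1):
    """Event-driven model of a single CPU with a FIFO ready queue."""
    random.seed(seed)
    busy_until = 0.0
    events = []                    # min-heap of (arrival_time, job_id)
    t = 0.0
    for job in range(n_jobs):
        t += random.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, job))
    while events:
        arrive, job = heapq.heappop(events)
        start = max(arrive, busy_until)        # wait if the CPU is busy
        service = random.expovariate(1.0 / mean_service)
        busy_until = start + service
        print(f"job {job}: arrived {arrive:6.2f}, started {start:6.2f}, "
              f"finished {busy_until:6.2f}, waited {start - arrive:5.2f}")

simulate_cpu()
```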
High Performance Networking is a state-of-the-art book that deals with issues relating to the fast-paced evolution of public, corporate and residential networks. It focuses on the practical and experimental aspects of high performance networks and introduces novel approaches and concepts aimed at improving the performance, usability, interoperability and scalability of such systems. Among others, the topics covered include: * Java applets and applications; * distributed virtual environments; * new internet streaming protocols; * web telecollaboration tools; * Internet, Intranet; * real-time services like multimedia; * quality of service; * mobility. High Performance Networking comprises the proceedings of the Eighth International Conference on High Performance Networking, sponsored by the International Federation for Information Processing (IFIP) and held at Vienna University of Technology, Vienna, Austria, in September 1998. High Performance Networking is suitable as a secondary text for a graduate level course on high performance networking, and as a reference for researchers and practitioners in industry.
Computing systems are of growing importance because of their wide use in many areas, including safety-critical systems. This book describes the basic models and approaches to the reliability analysis of such systems. An extensive review is provided and models are categorized into different types. Some Markov models are extended to the analysis of specific computing systems, such as combined software and hardware, imperfect debugging processes, failure correlation, multi-state systems, and heterogeneous subsystems. One aim of the presentation is that, given the sound analysis and the simplicity of the approaches, Markov models can be applied more effectively to computing system reliability.
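As a small worked example of the kind of Markov model covered, the sketch below computes the steady-state availability of a repairable system from constant failure and repair rates; the two-state model and the rates are illustrative assumptions, not figures from the book:

```python
import numpy as np

# Continuous-time Markov chain: state 0 = up, state 1 = down.
lam = 1 / 1000.0   # failure rate (per hour): MTTF = 1000 h
mu  = 1 / 10.0     # repair rate  (per hour): MTTR = 10 h

# Generator matrix Q; the steady state pi solves pi Q = 0 with sum(pi) = 1.
Q = np.array([[-lam, lam],
              [  mu, -mu]])
A = np.vstack([Q.T, np.ones(2)])        # append the normalisation constraint
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"availability = {pi[0]:.6f}")    # fraction of time in the up state
print(f"closed form  = {mu / (lam + mu):.6f}")   # MTTF / (MTTF + MTTR)
```

Larger multi-state models, e.g. for heterogeneous subsystems, are solved the same way from their generator matrix.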
The fast progress in computer networks and their wide availability is complemented on one hand by the explosion of mobile computing and on the other by trends in the direction of ubiquitous computing. The merger of these technologies acts as a powerful enabler for new forms of highly dynamic collaborative organizations and the emergence of new business practices. Early efforts in the area of virtual enterprises (VE) were strongly constrained by the need to design and develop horizontal infrastructures aimed at supporting the basic collaboration needs of consortia of enterprises. Current trends, however, are more and more directed to the development of new vertical business models and corresponding support tools. In parallel to these efforts, after the first euphoria of the E-commerce wave and the disappointments caused by some simplistic approaches then adopted, there is a shift towards Business-to-Business solutions as a way to effectively enable E-commerce. This is therefore a time of convergence of the virtual enterprise and e-business developments. This book contains selected articles from PRO-VE 2000, the second working conference on Infrastructures for Virtual Enterprises, which was sponsored by the International Federation for Information Processing (IFIP) and held in Florianopolis, Brazil in December 2000. The included articles represent relevant examples of the current state of the art in virtual enterprises and support for electronic business. Together with a diversity of application domains, the emphasis is mostly on: the new forms of virtual organizations, support for agility, modeling and execution of distributed business processes, management of enterprise clusters, distributed/federated information management, knowledge management, logistics for electronic commerce, and safe communication. In other words, the book is mainly focused on the management of business-to-business cooperation in virtual and smart organizations. Establishing electronic business and virtual enterprises is not only a technological problem; aspects such as socio-organizational transformations, training needs, legal and ethical issues, and intellectual property rights are therefore also addressed in the book. E-Business and Virtual Enterprises is essential reading for researchers, engineers, practitioners, and engineering students in production engineering, computer science, electrical engineering, mechanical engineering, organizational science, and industrial sociology.
Following an exchange of correspondence, I met Ross in Adelaide in June 1988. I was approached by the University of Adelaide about being an external examiner for this dissertation and willingly agreed. Upon receiving a copy of this work, what struck me most was the scholarship with which Ross approaches and advances this relatively new field of adaptive data compression. This scholarship, coupled with the ability to express himself clearly using figures, tables, and incisive prose, demanded that Ross's dissertation be given a wider audience. And so this thesis was brought to the attention of Kluwer. The modern data compression paradigm furthered by this work is based upon the separation of adaptive context modelling, adaptive statistics, and arithmetic coding. This work offers the most complete bibliography on this subject I am aware of. It provides an excellent and lucid review of the field, and should be equally beneficial to newcomers and to those of us already in the field.
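The separation that paradigm rests on can be shown in miniature: an adaptive order-0 model supplies symbol intervals, and the arithmetic coder only narrows a range. The Python sketch below is an illustrative reconstruction, not code from the dissertation; it uses exact fractions in place of the renormalised integer arithmetic a practical coder needs:

```python
from fractions import Fraction

class AdaptiveModel:
    """Order-0 adaptive frequency model: every symbol starts with count 1."""
    def __init__(self, symbols):
        self.counts = {s: 1 for s in symbols}

    def interval(self, symbol):
        """Cumulative interval [lo, hi) of `symbol` within [0, 1)."""
        total, lo = sum(self.counts.values()), 0
        for s, c in self.counts.items():
            if s == symbol:
                return Fraction(lo, total), Fraction(lo + c, total)
            lo += c

    def symbol_at(self, point):
        """Inverse lookup: which symbol's interval contains `point`."""
        total, lo = sum(self.counts.values()), 0
        for s, c in self.counts.items():
            if Fraction(lo, total) <= point < Fraction(lo + c, total):
                return s
            lo += c

    def update(self, symbol):
        self.counts[symbol] += 1

def encode(message, symbols):
    model, low, high = AdaptiveModel(symbols), Fraction(0), Fraction(1)
    for sym in message:
        s_lo, s_hi = model.interval(sym)
        span = high - low
        low, high = low + span * s_lo, low + span * s_hi
        model.update(sym)            # adapt after coding, as the decoder will
    return (low + high) / 2          # any point inside the final interval

def decode(code, length, symbols):
    model, low, high = AdaptiveModel(symbols), Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        sym = model.symbol_at((code - low) / (high - low))
        s_lo, s_hi = model.interval(sym)
        span = high - low
        low, high = low + span * s_lo, low + span * s_hi
        model.update(sym)
        out.append(sym)
    return "".join(out)

msg = "abracadabra"
assert decode(encode(msg, "abcdr"), len(msg), "abcdr") == msg
```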
The area of virtual organizations, and industrial virtual enterprises in particular, is attracting a large and growing interest both in terms of the research and development and the implementation of new business practices. An ever-increasing number of international projects and national initiatives have been launched recently. Most of the earlier efforts are focused on the development of supporting infrastructures, although more and more initiatives now pursue the exploitation of this concept in business terms. Being a recent research and development area, and in spite of the mentioned interest, there is a lack of a structured and comprehensive text that can be used as a reference source. Most available literature is dispersed in several conference proceedings, journals, and book chapters. This book represents an attempt towards such a structured text. Although the book was prepared in the framework of PRO-VE'99, a working conference on infrastructures for virtual enterprises organized by the Esprit project PRODNET II and IFIP, it has the goal of covering more generic VE requirements and addressing several other approaches and important aspects in this paradigm.
Replication Techniques in Distributed Systems organizes and surveys the spectrum of replication protocols and systems that achieve high availability by replicating entities in failure-prone distributed computing environments. The entities discussed in this book vary from passive untyped data objects, to typed and complex objects, to processes and messages. Replication Techniques in Distributed Systems contains definitions and introductory material suitable for a beginner, theoretical foundations and algorithms, an annotated bibliography of commercial and experimental prototype systems, as well as short guides to recommended further readings in specialized subtopics. This book can be used as recommended or required reading in graduate courses in academia, as well as a handbook for designers and implementors of systems that must deal with replication issues in distributed systems.
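One of the simplest protocols in that spectrum is read/write majority quorums with version numbers. The sketch below is a hypothetical illustration under a crash-stop failure assumption, not a protocol taken verbatim from the book:

```python
import random

class Replica:
    def __init__(self):
        self.version, self.value = 0, None

def write(replicas, value, quorum):
    """Install value at a majority; versions totally order the writes."""
    group = random.sample(replicas, quorum)
    new_version = max(r.version for r in group) + 1
    for r in group:
        r.version, r.value = new_version, value

def read(replicas, quorum):
    """Read a majority; the highest version seen is the latest write."""
    group = random.sample(replicas, quorum)
    return max(group, key=lambda r: r.version).value

replicas = [Replica() for _ in range(5)]
quorum = len(replicas) // 2 + 1          # any two quorums intersect
write(replicas, "v1", quorum)
write(replicas, "v2", quorum)
print(read(replicas, quorum))            # always "v2"
```

Because any two majorities of five replicas intersect, a read always observes the most recent write even when up to two replicas are unavailable.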
JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. The JR implementation executes on UNIX-based systems (Linux, Mac OS X, and Solaris) and Windows-based systems. It is available free from the JR webpage. This book describes the JR programming language and illustrates how it can be used to write concurrent programs for a variety of applications. This text presents numerous small and large example programs. The source code for all programming examples and the given parts of all programming exercises are available on the JR webpage. Dr. Ronald A. Olsson and Dr. Aaron W. Keen, the authors of this text, are the designers and implementors of JR.
Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer) subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This year marks the 10th anniversary of the IFIP International Workshop on Protocols for High-Speed Networks (PfHSN). It began in May 1989, on a hillside overlooking Lake Zurich in Switzerland, and arrives now in Salem, Massachusetts, 6,000 kilometers away and 10 years later, in its sixth incarnation, but still with a waterfront view (the Atlantic Ocean). In between, it has visited some picturesque views of other lakes and bays of the world: Palo Alto (1990 - San Francisco Bay), Stockholm (1993 - Baltic Sea), Vancouver (1994 - the Strait of Georgia and the Pacific Ocean), and Sophia Antipolis / Nice (1996 - the Mediterranean Sea). PfHSN is a workshop providing an international forum for the exchange of information on high-speed networks. It is a relatively small workshop, limited to 80 participants or fewer, to encourage lively discussion and the active participation of all attendees. A significant component of the workshop is interactive in nature, with a long history of significant time reserved for discussions. This was enhanced in 1996 by Christophe Diot and Walid Dabbous with the institution of Working Sessions chaired by an "animator," a distinguished researcher focusing on topical issues of the day. These sessions are an audience-participation event, and are one of the things that makes PfHSN a true "working conference."
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact. Consequently, optical interconnects have many desirable characteristics, e.g. high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the physical limitations of electrical VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. This is the first book to provide current and comprehensive coverage of the field, reflect the state of the art from high-level architecture design and algorithmic points of view, and point out directions for further research and development.
Welcome to IM 2003, the eighth in a series of the premier international technical conferences in this field. As IT management has become mission-critical to the economies of the developed world, our technical program has grown in relevance, strength and quality. Over the next few years, leading IT organizations will gradually move from identifying infrastructure problems to providing business services via automated, intelligent management systems. To be successful, these future management systems must provide global scalability, for instance, to support Grid computing and large numbers of pervasive devices. In Grid environments, organizations can pool desktops and servers, dynamically creating a virtual environment with huge processing power, and new management challenges. As the number, type, and criticality of devices connected to the Internet grows, new innovative solutions are required to address this unprecedented scale and management complexity. The growing penetration of technologies, such as WLANs, introduces new management challenges, particularly for performance and security. Management systems must also support the management of business processes and their supporting technology infrastructure as integrated entities. They will need to significantly reduce the amount of extraneous data thrown at consoles, delivering instead a cogent view of the system state, while leaving the handling of lower-level events to self-managed systems and devices. There is a new emphasis on "autonomic" computing, building systems that can perform routine tasks without administrator intervention and take proactive actions to rapidly recover from potential software or hardware failures.
Image and Video Compression Standards: Algorithms and Architectures presents an introduction to the algorithms and architectures that underpin the image and video compression standards, including JPEG (compression of still images), H.261 (video teleconferencing), MPEG-1 and MPEG-2 (video storage and broadcasting). In addition, the book covers the MPEG and Dolby AC-3 audio encoding standards, as well as emerging techniques for image and video compression, such as those based on wavelets and vector quantization. The book emphasizes the foundations of these standards, i.e. techniques such as predictive coding, transform-based coding, motion compensation, and entropy coding, as well as how they are applied in the standards. How each standard is implemented is not dealt with, but the book does provide all the material necessary to understand the workings of each of the compression standards, including information that can be used to evaluate the efficiency of various software and hardware implementations conforming to the standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Audience: A valuable reference for the graduate student, researcher or engineer. May also be used as a text for a course on the subject.
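As an illustration of the transform-coding technique common to these standards, the sketch below transforms an 8x8 block with the 2-D DCT, quantizes the coefficients (the lossy step), and inverts; the quantization matrix here is illustrative, not the JPEG standard table:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling, as used in JPEG."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels

# Coarser quantization for higher spatial frequencies (illustrative values)
i, j = np.indices((8, 8))
Q = 8.0 + 4.0 * (i + j)

coeffs = dct2(block)
quantized = np.round(coeffs / Q)        # lossy step: many entries become zero
restored = idct2(quantized * Q)

print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")
print("max pixel error:", np.abs(restored - block).max())
```

The zeroed high-frequency coefficients are what entropy coding then compresses so effectively.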
Design and Analysis of Distributed Embedded Systems is organized similarly to the conference. Chapters 1 and 2 deal with specification methods and their analysis, while Chapter 6 concentrates on timing and performance analysis. Chapter 3 describes approaches to system verification at different levels of abstraction. Chapter 4 deals with fault tolerance and detection. Middleware and software reuse aspects are treated in Chapter 5. Chapters 7 and 8 concentrate on distribution-related topics such as partitioning, scheduling and communication. The book closes with a chapter on design methods and frameworks.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a kind of natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are a topic of discussion in the remainder of this work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
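That identical inputs do not guarantee consistent outputs is easy to demonstrate. In the hypothetical sketch below, two replicas apply the same deadline test to the same message, but each observes time through its own slightly drifted clock, so their decisions diverge near the threshold:

```python
def accept(message_time, deadline, clock_offset):
    """A replica accepts a message iff it appears to arrive by the deadline."""
    observed = message_time + clock_offset   # each replica's clock drifts
    return observed <= deadline

message_time, deadline = 9.99995, 10.0
replica_a = accept(message_time, deadline, clock_offset=-0.0001)
replica_b = accept(message_time, deadline, clock_offset=+0.0001)

# True, False: the replicas disagree on the identical input, and a
# replicated system must now reconcile their divergent states.
print(replica_a, replica_b)
```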
Multithreaded architectures now appear across the entire range of computing devices, from the highest-performing general-purpose devices to low-end embedded processors. Multithreading enables a processor core to more effectively utilize its computational resources, as a stall in one thread need not cause execution resources to be idle. This enables the computer architect to maximize performance within area constraints, power constraints, or energy constraints. However, the architectural options for the processor designer or architect looking to implement multithreading are quite extensive and varied, as evidenced not only by the research literature but also by the variety of commercial implementations. This book introduces the basic concepts of multithreading, describes a number of models of multithreading, and then develops the three classic models (coarse-grain, fine-grain, and simultaneous multithreading) in greater detail; a toy interleaving model of the first two appears below. It describes a wide variety of architectural and software design tradeoffs, as well as opportunities specific to multithreading architectures. Finally, it details a number of important commercial and academic hardware implementations of multithreading. Table of Contents: Introduction / Multithreaded Execution Models / Coarse-Grain Multithreading / Fine-Grain Multithreading / Simultaneous Multithreading / Managing Contention / New Opportunities for Multithreaded Processors / Experimentation and Metrics / Implementations of Multithreaded Processors / Conclusion

Formal Description Techniques and Protocol Specification, Testing and Verification addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; and practical experience and case studies. Formal Description Techniques and Protocol Specification, Testing and Verification comprises the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification, sponsored by the International Federation for Information Processing and held in Paris, France, in November 1998. Formal Description Techniques and Protocol Specification, Testing and Verification is suitable as a secondary text for a graduate-level course on distributed systems or communications, and as a reference for researchers and practitioners in industry.
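To make the coarse-grain versus fine-grain distinction concrete, the toy single-issue pipeline model below (an illustrative assumption, not material from either book) runs two threads whose instruction streams contain multi-cycle load stalls; the coarse-grain policy switches threads only on a stall, while the fine-grain policy rotates every cycle:

```python
def simulate(threads, policy, max_cycles=100):
    """Count issued instructions for a single-issue multithreaded core.

    Each thread is a list of ops: 'c' = 1-cycle compute, int n = a load
    that makes the thread unready for n cycles after issue.
    """
    n = len(threads)
    pc = [0] * n                 # next op per thread
    ready_at = [0] * n           # cycle at which each thread can issue again
    current, issued, cycles = 0, 0, max_cycles
    for cycle in range(max_cycles):
        if all(pc[t] >= len(threads[t]) for t in range(n)):
            cycles = cycle
            break
        if policy == "coarse":   # stay on the current thread until it stalls
            order = [current] + [t for t in range(n) if t != current]
        else:                    # fine-grain: rotate threads every cycle
            order = [(current + k) % n for k in range(1, n + 1)]
        pick = next((t for t in order
                     if pc[t] < len(threads[t]) and ready_at[t] <= cycle), None)
        if pick is not None:     # if None, no thread is ready: a wasted cycle
            op = threads[pick][pc[pick]]
            pc[pick] += 1
            issued += 1
            if op != "c":        # a load: thread stalls for `op` cycles
                ready_at[pick] = cycle + 1 + op
            current = pick
    return issued, cycles

t0 = ["c", "c", 3, "c", "c", 3, "c"]
t1 = ["c", 3, "c", "c", 3, "c", "c"]
for policy in ("coarse", "fine"):
    done, cycles = simulate([t0, t1], policy)
    print(f"{policy:6}: {done} instructions in {cycles} cycles, "
          f"utilization {done / cycles:.2f}")
```

On this workload, fine-grain interleaving hides slightly more of the load latency than coarse-grain switching; simultaneous multithreading goes further by issuing from multiple threads in the same cycle, which this single-issue toy cannot express.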
You may like...
Edsger Wybe Dijkstra - His Life, Work…
Krzysztof R. Apt, Tony Hoare
Hardcover
R3,225
Discovery Miles 32 250
The System Designer's Guide to VHDL-AMS…
Peter J Ashenden, Gregory D. Peterson, …
Paperback
R2,355
Discovery Miles 23 550
Grammatical and Syntactical Approaches…
Juhyun Lee, Michael J. Ostwald
Hardcover
R5,885
Discovery Miles 58 850