The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECJVM98 benchmarks is characterized by Rychlik et al. in Chapter 2. SPECJVM98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC2000 CPU benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both spatial and temporal locality of programs. In Chapter 6, Thornock et al.
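As a rough, hedged illustration of the kind of measurement underlying such locality characterization (the locality surface of Chapter 5 is a richer, two-dimensional construct), the following C sketch computes a naive one-dimensional reuse-distance histogram over a made-up address trace; the trace and bucket sizes are invented for the example and are not taken from the book.

```c
/* Naive reuse-distance histogram over a toy address trace; illustrative only. */
#include <stdio.h>

#define TRACE 12
#define MAXD  8

int main(void) {
    int trace[TRACE] = {1, 2, 3, 1, 2, 3, 4, 4, 1, 5, 2, 1};
    int hist[MAXD + 1] = {0};            /* hist[MAXD] counts cold misses */

    for (int i = 0; i < TRACE; i++) {
        int d = MAXD;                    /* distinct addresses since last use */
        int seen[TRACE], nseen = 0;
        for (int j = i - 1; j >= 0; j--) {
            if (trace[j] == trace[i]) { d = nseen; break; }
            int dup = 0;
            for (int k = 0; k < nseen; k++)
                if (seen[k] == trace[j]) dup = 1;
            if (!dup) seen[nseen++] = trace[j];
        }
        hist[d < MAXD ? d : MAXD]++;
    }
    for (int d = 0; d <= MAXD; d++)
        printf("reuse distance %d: %d refs\n", d, hist[d]);
    return 0;
}
```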
Soft computing is a consortium of computing methodologies that provides a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.
The Second International Workshop on Cooperative Internet Computing (CIC2002) brought together researchers, academics, and industry practitioners who are involved and interested in the development of advanced and emerging cooperative computing technologies. Cooperative computing is an important computing paradigm that enables different parties to work together towards a predefined, non-trivial goal. It encompasses important technological areas such as computer-supported cooperative work, workflow, computer-assisted design, and concurrent programming. As technologies continue to advance and evolve, there is an increasing need to research and develop new classes of middleware and applications that leverage the combined benefits of the Internet and the Web to provide users and programmers with a highly interactive and robust cooperative computing environment. It is the aim of this forum to promote close interactions and exchange of ideas among researchers, academics, and practitioners on state-of-the-art research in all of these exciting areas. We have partnered with Kluwer Academic Press this year to bring you a book compilation of the papers that were presented at the CIC2002 workshop. The importance of the research area is reflected in both the quality and quantity of the submitted papers, where each paper was reviewed by at least three PC members. As a result, we were able to accept only 14 papers for full presentation at the workshop, while having to reject several excellent papers due to the limitations of the program schedule.
High Performance Computing Systems and Applications contains fully refereed papers from the 15th Annual Symposium on High Performance Computing. These papers cover both fundamental and applied topics in HPC: parallel algorithms, distributed systems and architectures, distributed memory and performance, high-level applications, tools and solvers, numerical methods and simulation, advanced computing systems, and the emerging area of computational grids. High Performance Computing Systems and Applications is suitable as a secondary text for graduate-level courses, and as a reference for researchers and practitioners in industry.
Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, on land, and underwater have also been relying increasingly upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multimedia applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio, and video signals. Many of these systems control, monitor, or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that guarantees these timing requirements are met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
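As a worked illustration of the scheduling problem described above (and not the book's own methods, which address tasks that share resources), here is a C sketch of the classical Liu-Layland utilization test for rate-monotonic scheduling of independent periodic tasks; the task parameters are invented for the example.

```c
/* Liu-Layland schedulability test for rate-monotonic scheduling:
   a task set is schedulable if U = sum(C_i/T_i) <= n(2^(1/n) - 1).
   This simple test ignores resource sharing. Compile with -lm. */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* (execution time C, period T) in ms; example numbers, not from the book */
    double C[] = {1.0, 2.0, 3.0}, T[] = {4.0, 10.0, 20.0};
    int n = 3;
    double U = 0;
    for (int i = 0; i < n; i++) U += C[i] / T[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* about 0.780 for n = 3 */
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable under RM" : "test inconclusive");
    return 0;
}
```

Here U = 0.600 is below the bound, so the task set is schedulable under rate-monotonic priorities.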
Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will not only use individual parallel systems but a large set of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis, techniques, and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality-of-service (QoS), heterogeneity, and middleware systems, to mention only a few.
The evolution of modern computers began more than 50 years ago and has been driven to a large extent by rapid advances in electronic technology during that period. The first computers ran one application (user) at a time. Without the benefit of operating systems or compilers, the application programmers were responsible for managing all aspects of the hardware. The introduction of compilers allowed programmers to express algorithms in abstract terms without being concerned with the bit-level details of their implementation. Time-sharing operating systems took computing systems one step further and allowed several users and/or applications to time-share the computing services of computers. With the advances of networks and software tools, users and applications were able to time-share the logical and physical services that are geographically dispersed across one or more networks. The Virtual Computing (VC) concept aims at providing ubiquitous open computing services in a way analogous to the services offered by telephone and electrical (utility) companies. The VC environment should be dynamically set up to meet the requirements of a single user and/or application. The design and development of dynamically programmable virtual computing environments is a challenging research problem. However, recent advances in processing and network technology and software tools have successfully solved many of the obstacles facing the wide deployment of virtual computing environments, as will be outlined next.
This book constitutes the refereed proceedings of the 25th International Conference on Parallel Computational Fluid Dynamics, ParCFD 2013, held in Changsha, China, in May 2013. The 35 revised full papers presented were carefully reviewed and selected from more than 240 submissions. The papers address issues such as parallel algorithms, developments in software tools and environments, unstructured adaptive mesh applications, industrial applications, atmospheric and oceanic global simulation, interdisciplinary applications and evaluation of computer architectures and software environments.
Memory Issues in Embedded Systems-On-Chip: Optimizations and Explorations is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional microcontroller-centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since Memory Issues in Embedded Systems-On-Chip: Optimizations and Explorations illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.
Cooperating Heterogeneous Systems provides an in-depth introduction to the issues and techniques surrounding the integration and control of diverse and independent software components. Organizations increasingly rely upon diverse computer systems to perform a variety of knowledge-based tasks. This presents technical issues of interoperability and integration, as well as philosophical issues of how cooperation and interaction between computational entities is to be realized. Cooperating systems are systems that work together towards a common end. The concepts of cooperation must be realized in technically sound system architectures, having a uniform meta-layer between knowledge sources and the rest of the system. The layer consists of a family of interpreters, one for each knowledge source, and meta-knowledge. A system architecture to integrate and control diverse knowledge sources is presented. The architecture is based on the meta-level properties of the logic programming language Prolog. An implementation of the architecture is described, a Framework for Logic Programming Systems with Distributed Execution (FLiPSiDE). Knowledge-based systems play an important role in any up-to-date arsenal of decision support tools. The tremendous growth of computer communications infrastructure has made distributed computing a viable option, and often a necessity in geographically distributed organizations. It has become clear that to take knowledge-based systems to their next useful level, it is necessary to get independent knowledge-based systems to work together, much as we put together ad hoc work groups in our organizations to tackle complex problems. The book is for scientists and software engineers who have experience in knowledge-based systems and/or logic programming and seek a hands-on introduction to cooperating systems. Researchers investigating autonomous agents, distributed computation, and cooperating systems will find fresh ideas and new perspectives on well-established approaches to control, organization, and cooperation.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and then, by composition, all other necessary instructions are synthesized. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold, new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. Features include: - Provides a comprehensive study of computer architecture using computability theory as a base. - Provides a fresh perspective on computer architecture not found in any other text. - Covers history, theory, and practice of computer architecture from a minimalist perspective. - Includes a complete implementation of a one instruction computer. - Includes exercises and programming assignments. Computer Architecture: A Minimalist Perspective is designed to meet the needs of a professional audience composed of researchers, computer hardware engineers, software engineers, computational theorists, and systems engineers. The book is also intended for upper-division undergraduate students and early graduate students studying computer architecture or embedded systems. It is an excellent text for use as a supplement or alternative in traditional computer architecture courses, or in courses entitled Special Topics in Computer Architecture.
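A minimal sketch of the OISC idea, assuming the common subleq (subtract and branch if less than or equal to zero) instruction; this is illustrative only and not the book's implementation. It shows addition synthesized purely by composing subleq instructions.

```c
/* Minimal subleq interpreter. Each instruction is three words A, B, C:
   mem[B] -= mem[A]; if (mem[B] <= 0) jump to C; else fall through. */
#include <stdio.h>

#define MEM 256

int run(int *mem, int pc) {
    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
    return 0;
}

int main(void) {
    /* ADD x,y synthesized from two subleqs via a scratch cell t:
       t -= x  (t == -x);  y -= t  (y += x). */
    int mem[MEM] = {0};
    int X = 20, Y = 21, T = 22, Z = 23;   /* data cell addresses */
    mem[X] = 7; mem[Y] = 5;
    int prog[] = {
        X, T, 3,    /* 0: t -= x; jump target 3 is the next instruction */
        T, Y, 6,    /* 3: y -= t  -> y = 5 + 7 = 12 */
        Z, Z, -1,   /* 6: z -= z is always <= 0 -> jump to -1 = halt */
    };
    for (int i = 0; i < 9; i++) mem[i] = prog[i];
    run(mem, 0);
    printf("y = %d\n", mem[Y]);           /* prints y = 12 */
    return 0;
}
```

Running it prints y = 12, i.e., 5 + 7 computed with nothing but subleq.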
Modern multimedia systems are becoming increasingly multiprocessor and heterogeneous to match the high-performance and low-power demands placed on them by the large number of applications. The concurrent execution of these applications causes interference and unpredictability in the performance of these systems. In Multimedia Multiprocessor Systems, an analysis mechanism is presented to accurately predict the performance of multiple applications executing concurrently. With high consumer demand, time-to-market has become significantly shorter. To cope with the complexity in designing such systems, an automated design flow is needed that can generate systems from a high-level architectural description, making the process less error-prone and less time-consuming. Such a design methodology is presented for multiple use-cases -- combinations of active applications. A resource manager is also presented to manage the various resources in the system, and to achieve the goals of performance prediction, admission control, and budget enforcement.
Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes in the object-oriented programming style. ARCH is written in C++. The book describes the built-in classes and illustrates their use through template application cases in several fields of interest: distributed algorithms (global completion detection, distributed process serialization), parallel combinatorial optimization (the A* procedure), and parallel image processing (segmentation by region growing). It shows how new application-level distributed data types - such as a distributed tree and a distributed graph - can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration purposes are available via the Internet. The material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH can be run on Unix/Linux as well as Windows NT-based platforms. Current installations include the IBM-SP2, the CRAY-T3E, the Intel Paragon, and PC networks under Linux or Windows NT. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has been using ARCH for several years as a medium to teach parallel and network programming. Teachers can employ the library for the same purpose, while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. The book is suitable as a secondary text for a graduate-level course on data communications and networks, programming languages, algorithms and computational theory, or distributed computing, and as a reference for researchers and practitioners in industry.
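ARCH's own class library is not reproduced here; as a hedged sketch of the plain-MPI substrate on which such a library is built, the following C ping-pong exchanges a message between two ranks (it requires at least two processes, e.g. mpirun -np 2).

```c
/* Plain MPI ping-pong in C: rank 0 sends a value to rank 1, which
   increments it and sends it back. Illustrative substrate only;
   not the ARCH C++ classes described in the book. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int msg = 0;
    if (rank == 0) {
        msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got back %d\n", msg);   /* prints 43 */
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        msg += 1;                              /* echo the value plus one */
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```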
The emphasis of this text is on data networking, internetworking and distributed computing issues. The material surveys recent work in the area of satellite networks, introduces certain state-of-the-art technologies, and presents recent research results in these areas.
Under Quality of Service (QoS) routing, paths for flows are selected based upon knowledge of resource availability at network nodes and the QoS requirements of flows. Proposed QoS routing schemes differ in the way they gather information about the network state and select paths based on this information. We broadly categorize these schemes into best-path routing and proportional routing. Best-path routing schemes gather global network state information and always select the best path for an incoming flow based on this global view. On the other hand, proportional routing schemes distribute incoming flows proportionally among a set of candidate paths. We have shown that it is possible to compute near-optimal proportions using only locally collected information. Furthermore, a few good candidate paths can be selected using infrequently exchanged global information, and thus with minimal communication overhead. Localized Quality Of Service Routing For The Internet describes these schemes in detail, demonstrating that proportional routing schemes can achieve higher throughput with lower overhead than best-path routing schemes. It first addresses the issue of finding near-optimal proportions for a given set of candidate paths based on locally collected flow statistics. The book then looks into the selection of a few good candidate paths based on infrequently exchanged global information. The final part of the book describes extensions to the proportional routing approach to provide hierarchical routing across multiple areas in a large network. Localized Quality Of Service Routing For The Internet is designed for researchers and practitioners in industry, and is suitable for graduate-level students in computer science as a secondary text.
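As a hedged sketch of the proportional-routing idea (not the book's actual procedure for computing near-optimal proportions), the following C fragment assigns each incoming flow to a candidate path with probability proportional to a locally observed acceptance ratio; the path statistics are invented for the example.

```c
/* Proportional flow assignment: pick a path with probability proportional
   to its locally observed acceptance ratio (roulette-wheel selection). */
#include <stdio.h>
#include <stdlib.h>

#define PATHS 3

typedef struct { double offered, accepted; } PathStats;

/* weight = acceptance ratio observed locally at the source node */
static double weight(const PathStats *p) {
    return (p->offered > 0) ? p->accepted / p->offered : 1.0;
}

static int pick_path(const PathStats stats[PATHS]) {
    double w[PATHS], total = 0;
    for (int i = 0; i < PATHS; i++) { w[i] = weight(&stats[i]); total += w[i]; }
    double r = total * rand() / ((double)RAND_MAX + 1);
    for (int i = 0; i < PATHS; i++) {
        if (r < w[i]) return i;
        r -= w[i];
    }
    return PATHS - 1;
}

int main(void) {
    PathStats stats[PATHS] = {{100, 90}, {100, 60}, {100, 30}};
    int count[PATHS] = {0};
    for (int f = 0; f < 10000; f++) count[pick_path(stats)]++;
    for (int i = 0; i < PATHS; i++)       /* roughly 1/2 : 1/3 : 1/6 split */
        printf("path %d: %d flows\n", i, count[i]);
    return 0;
}
```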
Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enable software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are: * evaluates the current architecture design methods and component composition techniques and explains their shortcomings; * presents three practical architecture design methods in detail; * gives four industrial architecture design examples; * presents conceptual models for distributed message-based architectures; * explains techniques for refining architectures into components; * presents the recent developments in component and aspect-oriented techniques; * explains the status of research on Piccola, Hyper/J(R), Pluggable Composite Adapters, and Composition Filters. Software Architectures and Component Technology is a suitable text for graduate-level students in computer science and engineering, and a reference for researchers and practitioners in industry.
Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems describes coding approaches for designing fault-tolerant systems, i.e., systems that exhibit structured redundancy that enables them to distinguish between correct and incorrect results or between valid and invalid states. Since redundancy is expensive and counter-intuitive to the traditional notion of system design, the book focuses on resource-efficient methodologies that avoid excessive use of redundancy by exploiting the algorithmic/dynamic structure of a particular combinational or dynamic system. The first part of Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems focuses on fault-tolerant combinational systems providing a review of von Neumann's classical work on Probabilistic Logics (including some more recent work on noisy gates) and describing the use of arithmetic coding and algorithm-based fault-tolerant schemes in algebraic settings. The second part of the book focuses on fault tolerance in dynamic systems. Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems also discusses how, in a dynamic system setting, one can relax the traditional assumption that the error-correcting mechanism is fault-free by using distributed error correcting mechanisms. The final chapter presents a methodology for fault diagnosis in discrete event systems that are described by Petri net models; coding techniques are used to quickly detect and identify failures. From the Foreword: "Hadjicostis has significantly expanded the setting to processes occurring in more general algebraic and dynamic systems... The book responds to the growing need to handle faults in complex digital chips and complex networked systems, and to consider the effects of faults at the design stage rather than afterwards." George Verghese, Massachusetts Institute of Technology Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems will be of interest to both researchers and practitioners in the area of fault tolerance, systems design and control.
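As a small, hedged illustration of the checksum-based, algorithm-based fault tolerance (ABFT) schemes the book discusses, the following C sketch checks a matrix-vector product against a precomputed column-checksum row and detects an injected fault; the matrix, vector, and fault are invented for the example.

```c
/* ABFT-style check for y = A*x: the column-checksum row chk satisfies
   sum(y) == chk . x when the computation is fault-free. */
#include <stdio.h>
#include <math.h>

#define N 3

int main(void) {
    double A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 10}};
    double x[N] = {1, 1, 1}, y[N] = {0}, chk[N] = {0};

    for (int j = 0; j < N; j++)          /* checksum row: column sums of A */
        for (int i = 0; i < N; i++)
            chk[j] += A[i][j];

    for (int i = 0; i < N; i++)          /* the actual product y = A*x */
        for (int j = 0; j < N; j++)
            y[i] += A[i][j] * x[j];

    y[1] += 0.5;                         /* inject a fault into one element */

    double s = 0, t = 0;
    for (int i = 0; i < N; i++) s += y[i];          /* observed sum        */
    for (int j = 0; j < N; j++) t += chk[j] * x[j]; /* predicted sum       */

    printf(fabs(s - t) > 1e-9 ? "fault detected\n" : "result consistent\n");
    return 0;
}
```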
Computers that 'program themselves' have long been an aim of computer scientists. Recently genetic programming (GP) has started to show its promise by automatically evolving programs. Indeed, in a small number of problems GP has evolved programs whose performance is similar to or even slightly better than that of programs written by people. The main thrust of GP has been to automatically create functions. While these can be of great use, they contain no memory, and relatively little work has addressed the automatic creation of program code including stored data. This issue is the main focus of Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming!. This book is motivated by the observation from software engineering that data abstraction (e.g., via abstract data types) is essential in programs created by human programmers. This book shows that abstract data types can be similarly beneficial to the automatic production of programs using GP. Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming! shows how abstract data types (stacks, queues, and lists) can be evolved using genetic programming, and demonstrates how GP can evolve general programs that solve the nested-brackets problem, recognise a Dyck context-free language, and implement a simple four-function calculator. In these cases, an appropriate data structure is beneficial compared to simple indexed memory. This book also includes a survey of GP, with a critical review of experiments with evolving memory, and reports investigations of real-world electrical network maintenance scheduling problems that demonstrate that genetic algorithms can find low-cost, viable solutions to such problems. Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming! should be of direct interest to computer scientists doing research on genetic programming, genetic algorithms, data structures, and artificial intelligence. In addition, this book will be of interest to practitioners working in all of these areas and to those interested in automatic programming.
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula; for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
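For readers unfamiliar with the mechanism, here is a hedged sketch in C of the classic virtual-function-table dispatch the text refers to, written out by hand; it is illustrative only and not code from the book.

```c
/* Hand-rolled vtable dispatch: each object carries a pointer to a table of
   function pointers, and a polymorphic call is one indirection through it. */
#include <stdio.h>

struct Shape;                                    /* forward declaration */
typedef struct { double (*area)(const struct Shape *); } VTable;
typedef struct Shape { const VTable *vtbl; double a, b; } Shape;

static double rect_area(const Shape *s) { return s->a * s->b; }
static double tri_area(const Shape *s)  { return 0.5 * s->a * s->b; }

static const VTable RECT_VT = { rect_area };
static const VTable TRI_VT  = { tri_area };

int main(void) {
    Shape shapes[] = { { &RECT_VT, 3, 4 }, { &TRI_VT, 3, 4 } };
    for (int i = 0; i < 2; i++)
        /* the polymorphic call: behavior depends on the object's vtable */
        printf("area = %g\n", shapes[i].vtbl->area(&shapes[i]));
    return 0;
}
```

This is essentially what a C++ compiler generates for virtual calls; as the text notes, mechanisms like Java interface dispatch need more than this single indirection.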
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote by Eric T. Bell above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Multiprocessor platforms play important roles in modern computing systems, and appear in various applications, ranging from energy-limited hand-held devices to large data centers. As performance requirements increase, energy consumption in these systems also increases significantly. Dynamic Voltage and Frequency Scaling (DVFS), which allows processors to dynamically adjust the supply voltage and the clock frequency to operate at different power/energy levels, is considered an effective way to achieve the goal of energy saving. This book surveys existing work on energy-aware task scheduling on DVFS multiprocessor platforms. Energy-aware scheduling problems are intrinsically optimization problems, whose formulations greatly depend on the platform and task models under consideration. Thus, Energy-aware Scheduling on Multiprocessor Platforms covers current research on this topic and classifies existing work according to two key criteria, namely the homogeneity/heterogeneity of multiprocessor platforms and the task types considered. Under this classification, other sub-issues are also included, such as slack reclamation, fixed/dynamic-priority scheduling, partition-based/global scheduling, and application-specific power consumption.
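A hedged numeric sketch of why DVFS saves energy, assuming the common convention that dynamic power scales roughly as f^3 (voltage scaled with frequency), so a task of C cycles run at frequency f takes C/f seconds and costs E = k f^3 (C/f) = k C f^2; all constants below are invented, not values from the book.

```c
/* Energy of one task under DVFS with the assumed model P = k*f^3:
   running just fast enough to meet the deadline minimizes E = k*C*f^2. */
#include <stdio.h>

int main(void) {
    const double cycles   = 1e9;   /* task work in CPU cycles (assumed)     */
    const double deadline = 2.0;   /* seconds                               */
    const double k        = 1e-27; /* power coefficient (assumed)           */
    double fmax = 1e9;             /* 1 GHz maximum frequency               */
    double fmin = cycles / deadline; /* slowest frequency meeting deadline  */

    double e_fast = k * cycles * fmax * fmax;   /* E = k*C*f^2 at full speed */
    double e_slow = k * cycles * fmin * fmin;   /* E at deadline pace        */
    printf("E at fmax: %.3f J, E at deadline pace: %.3f J (%.0f%% saved)\n",
           e_fast, e_slow, 100 * (1 - e_slow / e_fast));
    return 0;
}
```

With these numbers the slower run uses 0.25 J instead of 1 J, a 75% saving, which is the basic slack-reclamation intuition behind the scheduling problems the book surveys.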
Parallel Numerical Computations with Applications contains selected edited papers presented at the 1998 Frontiers of Parallel Numerical Computations and Applications Workshop, along with invited papers from leading researchers around the world. These papers cover a broad spectrum of topics on parallel numerical computation with applications, such as advanced parallel numerical and computational optimization methods, novel parallel computing techniques, numerical fluid mechanics, and other applications related to material sciences, signal and image processing, semiconductor technology, and electronic circuits and systems design. This state-of-the-art volume will be an up-to-date resource for researchers in the areas of parallel and distributed computing.
This monograph develops techniques for equational reasoning in higher-order logic. Due to its expressiveness, higher-order logic is used for the specification and verification of hardware, software, and mathematics. In these applications, higher-order logic provides the necessary level of abstraction for concise and natural formulations. The main assets of higher-order logic are quantification over functions or predicates and its abstraction mechanism. These allow one to represent quantification in formulas and other variable-binding constructs. In this book, we focus on equational logic as a fundamental and natural concept in computer science and mathematics. We present calculi for equational reasoning modulo higher-order equations presented as rewrite rules. This is followed by a systematic development from general equational reasoning towards effective calculi for declarative programming in higher-order logic and λ-calculus. This aims at integrating and generalizing declarative programming models such as functional and logic programming. In these two prominent declarative computation models we can view a program as a logical theory and a computation as a deduction.
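As a toy, first-order illustration of equations used as rewrite rules (the book's setting is higher-order, which this sketch does not capture), the following C program applies the rule x + 0 -> x bottom-up over an expression tree until no redex remains.

```c
/* Toy term rewriting: apply the rule (e + 0) -> e bottom-up over a tree.
   A deliberately tiny sketch; memory is not freed. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Term { char op; int val; struct Term *l, *r; } Term; /* 'n' leaf, '+' add */

static Term *num(int v) {
    Term *t = malloc(sizeof *t);
    t->op = 'n'; t->val = v; t->l = t->r = NULL;
    return t;
}
static Term *add(Term *l, Term *r) {
    Term *t = malloc(sizeof *t);
    t->op = '+'; t->l = l; t->r = r;
    return t;
}

static Term *simplify(Term *t) {         /* bottom-up rewriting pass */
    if (t->op == '+') {
        t->l = simplify(t->l);
        t->r = simplify(t->r);
        if (t->r->op == 'n' && t->r->val == 0) return t->l; /* e + 0 -> e */
    }
    return t;
}

static void show(const Term *t) {
    if (t->op == 'n') printf("%d", t->val);
    else { printf("("); show(t->l); printf(" + "); show(t->r); printf(")"); }
}

int main(void) {
    Term *t = add(add(num(1), num(0)), add(num(2), num(0)));
    t = simplify(t);
    show(t);                             /* prints (1 + 2) */
    printf("\n");
    return 0;
}
```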
Challenges in Design and Implementation of Middlewares for Real-Time Systems brings together in one place important contributions and up-to-date research results in this fast-moving area. Challenges in Design and Implementation of Middlewares for Real-Time Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Distributed Infrastructure Support For E-Commerce And Distributed Applications is organized into three parts. The first part constitutes an overview, a more detailed motivation of the problem context, and a tutorial-like introduction to middleware systems. The second part comprises a set of chapters that study solutions to leverage the trade-off between a transparent programming model and application-level resource control. The third part of this book presents three detailed distributed application case studies and demonstrates how standard middleware platforms fail to adequately cope with the resource control needs of the application designer in these three cases: - an electronic commerce framework for software leasing over the World Wide Web; - a remote building energy management system that has been experimentally deployed on several building sites; - a wireless computing infrastructure for efficient data transfer to non-stationary mobile clients that has been experimentally validated.