Challenges in Design and Implementation of Middlewares for Real-Time Systems brings together in one place important contributions and up-to-date research results in this fast-moving area. Challenges in Design and Implementation of Middlewares for Real-Time Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Distributed Infrastructure Support For E-Commerce And Distributed Applications is organized in three parts. The first part constitutes an overview, a more detailed motivation of the problem context, and a tutorial-like introduction to middleware systems. The second part comprises a set of chapters that study solutions to leverage the trade-off between a transparent programming model and application-level resource control. The third part of this book presents three detailed distributed application case studies and demonstrates how standard middleware platforms fail to adequately cope with the resource control needs of the application designer in these three cases:
- An electronic commerce framework for software leasing over the World Wide Web;
- A remote building energy management system that has been experimentally deployed on several building sites;
- A wireless computing infrastructure, experimentally validated, for efficient data transfer to non-stationary mobile clients.
Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machinery, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics:
* Detailed review of the mathematical foundations, including convex polyhedra and cones;
* Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability;
* Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation;
* A complete suite of techniques for generating SPMD code for a tiled loop nest;
* Up-to-date results on tile size and shape selection for reducing communication and improving parallelism;
* End-of-chapter references for further reading.
Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
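To make the technique concrete, here is a minimal loop-tiling sketch (an illustration of the general idea, not code from the book); the matrix-multiplication kernel, the sizes N and B, and the variable names are assumptions chosen for the example.

```c
/* Minimal loop-tiling sketch (illustrative, not from the book):
 * a matrix multiplication whose i/j/k nest is restructured into
 * tiles of size B so each tile's working set fits in cache, or
 * maps onto one node's local memory on a distributed machine. */
#include <stddef.h>

#define N 1024
#define B 64            /* tile size; tuned per machine in practice */

void matmul_tiled(const double A[N][N], const double Bm[N][N],
                  double C[N][N]) {
    /* Outer loops step over tiles (B divides N here for simplicity). */
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t kk = 0; kk < N; kk += B)
                /* Inner loops reuse the B x B blocks while they
                 * are resident in fast memory. */
                for (size_t i = ii; i < ii + B; i++)
                    for (size_t j = jj; j < jj + B; j++)
                        for (size_t k = kk; k < kk + B; k++)
                            C[i][j] += A[i][k] * Bm[k][j];
}
```

On a distributed-memory machine, the three outer tile loops become the natural unit of work to assign to a processor; choosing the tile size and shape so that this assignment minimizes communication while preserving parallelism is precisely the selection problem the book analyzes.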
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
Quality of Communication-Based Systems presents the research results of students of the Graduiertenkolleg 'Communication-Based Systems' to an international community. To stimulate the scientific discussion, renowned experts have been invited to give their views on the research areas:
* Formal specification and mathematical foundations of distributed systems using process algebra, graph transformations, process calculi and temporal logics;
* Performance evaluation, dependability modelling and analysis of real-time systems with different kinds of timed Petri nets;
* Specification and analysis of communication protocols;
* Reliability, security and dependability in distributed systems;
* Object orientation in distributed systems architecture;
* Software development and concepts for distributed applications;
* Computer network architecture and management;
* Language concepts for distributed systems.
A Flash memory is a Non Volatile Memory (NVM) whose "unit cells" are fabricated in CMOS technology and programmed and erased electrically. In 1971, Frohman-Bentchkowsky developed a floating polysilicon gate transistor [1, 2], in which hot electrons were injected in the floating gate and removed by either Ultra-Violet (UV) internal photoemission or by Fowler-Nordheim tunneling. This is the "unit cell" of EPROM (Electrically Programmable Read Only Memory), which, consisting of a single transistor, can be very densely integrated. EPROM memories are electrically programmed and erased by UV exposure for 20-30 minutes. In the late 1970s, there were many efforts to develop an electrically erasable EPROM, which resulted in EEPROMs (Electrically Erasable Programmable ROMs). EEPROMs use hot electron tunneling for program and Fowler-Nordheim tunneling for erase. The EEPROM cell consists of two transistors and a tunnel oxide, thus it is two or three times the size of an EPROM. Subsequently, the combination of hot carrier programming and tunnel erase was rediscovered to achieve a single transistor EEPROM, called Flash EEPROM. The first cell based on this concept was presented in 1979 [3]; the first commercial product, a 256K memory chip, was presented by Toshiba in 1984 [4]. The market did not take off until this technology was proven to be reliable and manufacturable [5].
Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by many realistic demonstrations used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multimedia, radar, and sonar. Application-Driven Architecture Synthesis is of interest to both academics and senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.
Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
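To make the annealing paradigm concrete, here is a minimal simulated-annealing sketch for static task-to-processor assignment; it illustrates the general technique, not the book's algorithm, and the cost model (makespan of the most loaded processor), the task and processor counts, and the cooling schedule are all illustrative assumptions.

```c
/* Generic simulated-annealing sketch for static scheduling (an
 * illustration of the paradigm, not the book's method): assign
 * tasks with known costs to processors so that the makespan,
 * the load of the busiest processor, is minimized. */
#include <math.h>
#include <stdlib.h>

#define NTASKS 40
#define NPROCS 4

/* Makespan of an assignment: the heaviest per-processor load. */
static double makespan(const double cost[], const int asg[]) {
    double load[NPROCS] = {0};
    for (int t = 0; t < NTASKS; t++) load[asg[t]] += cost[t];
    double m = 0;
    for (int p = 0; p < NPROCS; p++) if (load[p] > m) m = load[p];
    return m;
}

void anneal(const double cost[], int asg[]) {
    double cur = makespan(cost, asg);
    for (double T = 10.0; T > 1e-3; T *= 0.995) {   /* cooling schedule */
        int t = rand() % NTASKS;                    /* perturb: move one task */
        int old = asg[t];
        asg[t] = rand() % NPROCS;
        double next = makespan(cost, asg);
        /* Accept improvements always; accept uphill moves with
         * probability exp(-delta/T), which shrinks as T cools. */
        double delta = next - cur;
        if (delta <= 0 || (double)rand() / RAND_MAX < exp(-delta / T))
            cur = next;
        else
            asg[t] = old;                           /* revert the move */
    }
}
```

The characteristic move is accepting an occasional cost-increasing reassignment with probability exp(-delta/T), which lets the search escape the local minima that trap purely greedy heuristics.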
Database Recovery presents an in-depth discussion of all aspects of database recovery. It first introduces the topic informally to build an intuitive understanding, and then presents a formal treatment of the recovery mechanism. In the past, recovery has been treated merely as a mechanism implemented on an ad-hoc basis. This book elevates recovery from a mechanism to a concept, and presents its essential properties. A book on recovery is incomplete if it does not present how recovery is practiced in commercial systems. This book, therefore, presents a detailed description of recovery mechanisms as implemented in the Informix, OpenIngres, Oracle, and Sybase commercial database systems. Database Recovery is suitable as a textbook for a graduate-level course on database recovery, as a secondary text for a graduate-level course on database systems, and as a reference for researchers and practitioners in industry.
Compiler technology is fundamental to computer science since it provides the means to implement many other tools. It is interesting that, in fact, many tools have a compiler framework - they accept input in a particular format, perform some processing and present output in another format. Such tools support the abstraction process and are crucial to productive systems development. The focus of Compiler Technology: Tools, Translators and Language Implementation is to enable quick development of analysis tools. Both lexical scanner and parser generator tools are provided as supplements to this book, since a hands-on approach to experimentation with a toy implementation aids in understanding abstract topics such as parse-trees and parse conflicts. Furthermore, it is through hands-on exercises that one discovers the particular intricacies of language implementation. Compiler Technology: Tools, Translators and Language Implementation is suitable as a textbook for an undergraduate or graduate level course on compiler technology, and as a reference for researchers and practitioners interested in compilers and language implementation.
The objective of this book is to bring together contributions by eminent researchers from industry and academia who specialize in the currently separate study and application of the key aspects of integration. The state of knowledge on integration and collaboration models and methods is reviewed, followed by an agenda for needed research that has been generated by the participants. The book is the result of a NATO Advanced Research Workshop on "Integration: Information and Collaboration Models" that took place at Il Ciocco, Italy, during June 1993. Significant developments and research projects have been occurring internationally in a major effort to integrate increasingly complex systems. On the one hand, advancements in computer technology and computing theories provide better, more timely, information. On the other hand, the geographic and organizational distribution of users and clients, and the proliferation of computers and communication, lead to an explosion of information and to the demand for integration. Two important examples of interest are computer integrated manufacturing and enterprises (CIM/E) and concurrent engineering (CE). CIM/E is the collection of computer technologies such as CNC, CAD, CAM, robotics and computer integrated engineering that integrate all the enterprise activities for competitiveness and timely response to changes. Concurrent engineering is the complete life-cycle approach to engineering of products, systems, and processes including customer requirements, design, planning, costing, service and recycling. In CIM/E and in CE, computer based information is the key to integration.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to 'traditional' approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which has in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
The VHSIC Hardware Description Language (VHDL) provides a standard machine-processable notation for describing hardware. VHDL is the result of a collaborative effort between IBM, Intermetrics, and Texas Instruments, sponsored by the Very High Speed Integrated Circuits (VHSIC) program office of the Department of Defense, beginning in 1981. Today it is an IEEE standard (1076-1987), and several simulators and other automated support tools for it are available commercially. By providing a standard notation for describing hardware, especially in the early stages of the hardware design process, VHDL is expected to reduce both the time lag and the cost involved in building new systems and upgrading existing ones. VHDL is the result of an evolutionary approach to language development, starting with high level hardware description languages existing in 1981. It has a decidedly programming language flavor, resulting both from the orientation of hardware languages of that time, and from a major requirement that VHDL use Ada constructs wherever appropriate. During the 1980s there has been an increasing current of research into high level specification languages for systems, particularly in the software area, and new methods of utilizing specifications in systems development. This activity is worldwide and includes, for example, object-oriented design, various rigorous development methods, mathematical verification, and synthesis from high level specifications. VAL (VHDL Annotation Language) is a simple further step in the evolution of hardware description languages in the direction of applying new methods that have been developed since VHDL was designed.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.
For the first time in book form, this comprehensive and systematic monograph presents the methods for the reversible synthesis of logic functions and circuits. This methodology offers designers the capability to solve major problems in system design now and in the future, such as the high rate of power consumption, and the emergence of quantum effects for highly dense ICs. The challenge addressed here is to design reliable systems that consume as little power as possible and in which the signals are processed and transmitted at very high speeds with very high signal integrity. Researchers in academia or industry and graduate students, who work in logic synthesis, computer design, computer-aided design tools, and low power VLSI circuit design, will find this book a valuable resource.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and the impact on overall performance and cost. A system which allows for the analysis of methods for the design/mapping of matrix algorithms is also presented. This method identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options as well as by a priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (suitable for a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.
The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories:
1. Access Order and Synchronization
2. Performance
3. Cache Protocols and Architectures
4. Distributed Shared Memory
Particular topics on which new ideas and results are presented in these proceedings include efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.
During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's Array Theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and λ-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed, since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.
This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of interest:
* Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, digital communications, databases, and supercomputing.
* The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding.
* Contributors are all accomplished researchers.
* Extensive bibliography to aid researchers in the field.
This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. In nearly all expanding branches of industry, electronics, especially digital electronics, is involved. And the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increase in cost, or to reduce cost for quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers. It is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.
Modeling Microprocessor Performance focuses on the development of a design and evaluation tool, named RIPE (Rensselaer Interconnect Performance Estimator). This tool analyzes the impact on wireability, clock frequency, power dissipation, and the reliability of single chip CMOS microprocessors as a function of interconnect, device, circuit, design and architectural parameters. It can accurately predict the overall performance of existing microprocessor systems. For the three major microprocessor architectures, DEC, PowerPC and Intel, the results have shown agreement within 10% on key parameters. The models cover a broad range of issues that relate to the implementation and performance of single chip CMOS microprocessors. The book contains a detailed discussion of the various models and the underlying assumptions based on actual design practices. As such, RIPE and its models provide an insightful tool into single chip microprocessor design and its performance aspects. At the same time, it provides design and process engineers with the capability to model, evaluate, compare and optimize single chip microprocessor systems using advanced technology and design techniques at an early design stage without costly and time consuming implementation. RIPE and its models demonstrate the factors which must be considered when estimating tradeoffs in device and interconnect technology and architecture design on microprocessor performance.
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
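To make the contrast concrete, here is a minimal message-passing sketch using the standard MPI interface (my illustration, not material from the book): on a distributed-memory machine there is no common address space, so a value computed on one processor becomes visible to another only through an explicit send and matching receive.

```c
/* Minimal MPI sketch (illustrative, not from the book) of the
 * distributed-memory model described above: each process owns a
 * private address space, so rank 0 must explicitly send any data
 * that rank 1 wants to read. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 0) {
        value = 42;                              /* local to rank 0 only */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Without this receive, rank 1 has no way to see rank 0's
         * value: there is no shared memory to load it from. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```

Run under an MPI launcher with two processes (e.g. mpirun -np 2), rank 1 prints the value only because rank 0 explicitly sent it; on a UMA or NUMA shared-memory machine the same communication would be an ordinary load from a shared location.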