Grid Middleware and Services: Challenges and Solutions is the eighth volume of the CoreGRID series; the CoreGRID proceedings document the premier European event on Grid computing. This book aims to strengthen and advance scientific and technological excellence in the area of Grid computing. The main focus of this volume is on Grid middleware and service level agreements. Grid middleware and Grid services are two pillars of Grid computing systems and applications. This book includes high-level contributions by leading researchers in both areas and presents current solutions together with future challenges. The volume includes sections on knowledge and data management on Grids; Grid resource management and scheduling; Grid information, resource and workflow monitoring services; and service level agreements. Grid Middleware and Services: Challenges and Solutions is designed for a professional audience of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
A one-of-a-kind survey of the field of reconfigurable computing. Gives a comprehensive introduction to a discipline that offers a 10x-100x acceleration of algorithms over microprocessors. Discusses the impact of reconfigurable hardware on a wide range of applications: signal and image processing, network security, bioinformatics, and supercomputing. Includes the history of the field as well as recent advances, together with an extensive bibliography of primary sources.
Functional Design Errors in Digital Circuits: Diagnosis, Correction and Repair covers a wide spectrum of innovative methods to automate the debugging process throughout the design flow: from Register-Transfer Level (RTL) all the way to the silicon die. In particular, this book describes: (1) techniques for bug trace minimization that simplify debugging; (2) an RTL error diagnosis method that identifies the root cause of errors directly; (3) a counterexample-guided error-repair framework to automatically fix errors in gate-level and RTL designs; (4) a symmetry-based rewiring technology for fixing electrical errors; (5) an incremental verification system for physical synthesis; and (6) an integrated framework for post-silicon debugging and layout repair. The solutions provided in this book can greatly reduce debugging effort, enhance design quality, and ultimately enable the design and manufacture of more reliable electronic devices.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful design of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with IEEE HPCA-7 in Monterrey, Mexico, in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
This book is intended to serve as a textbook for a second course in the implementation (i.e. microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
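For orientation, the standard textbook figure of merit for pipelining (a well-known result, not a formula quoted from this book) is worth keeping in mind: an ideal k-stage pipeline completes n instructions in k + (n - 1) cycles rather than the nk cycles of fully sequential execution, so the speedup approaches k as n grows:

$$
S(n) \;=\; \frac{nk}{k + (n - 1)} \;\longrightarrow\; k \qquad (n \to \infty)
$$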
Developing a NoC-based interconnect tailored to a particular application domain, one that satisfies the application's performance constraints with minimum power-area overhead, is a major challenge. With technology scaling, as the geometries of on-chip devices reach the physical limits of operation, another important design challenge for NoCs will be to provide dynamic (run-time) support against the permanent and intermittent faults that can occur in the system. The purpose of Designing Reliable and Efficient Networks on Chips is to provide state-of-the-art methods for solving some of the most important and time-intensive problems encountered during NoC design.
The authors of this Festschrift prepared these papers to honour and express their friendship to Klaus Ritter on the occasion of his sixtieth birthday. Because of Ritter's many friends and his international reputation among mathematicians, finding contributors was easy. In fact, constraints on the size of the book required us to limit the number of papers. Klaus Ritter has done important work in a variety of areas, especially in various applications of linear and nonlinear optimization and also in connection with statistics and parallel computing. For the latter we have to mention Ritter's development of transputer workstation hardware. The wide scope of his research is reflected by the breadth of the contributions in this Festschrift. After several years of scientific research in the U.S., Klaus Ritter was appointed as full professor at the University of Stuttgart. Since then, his name has become inextricably connected with the regularly scheduled conferences on optimization in Oberwolfach. In 1981 he became full professor of Applied Mathematics and Mathematical Statistics at the Technical University of Munich. In addition to his university teaching duties, he has made the application of mathematical methods to problems of industry a centrally important activity.
One of the very important parts of any digital system is the control unit, coordinating the interplay of the other system blocks. As a rule, control units have an irregular structure, which makes the process of designing their logic circuits very sophisticated. In the case of complex logic controllers, the problem of system design reduces practically to the design of control units. We are currently observing a real technical boom connected with achievements in semiconductor technology. One of these is the development of the class of integrated circuits known as "systems-on-a-programmable-chip" (SoPC), where the number of elements approaches one billion. Because of the extreme complexity of such microchips, it is very important to develop effective design methods oriented toward the particular properties of their logical elements. Solving this problem permits improving the functional capabilities of the target digital system inside a single SoPC chip. As the majority of researchers point out, the design methods used in industrial packages are, in the case of complex digital system design, far from optimal. Similar problems concern the design of control units with standard field-programmable logic devices (FPLD), such as PLA, PAL, GAL, CPLD, and FPGA. Let us point out that modern SoPCs are based on CPLD or FPGA technology. Thus, the development of effective design methods oriented toward FPLD implementation of the logic circuits used in control units remains a problem of great importance.
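As a rough illustration of the kind of behaviour a control unit encodes before it is mapped onto an FPLD (a generic sketch of my own, not an example from the book; the states and signals are invented), a simple Moore-style finite state machine can be written as a next-state function driven by a simulated clock:

```c
#include <stdio.h>

/* Hypothetical three-state controller: IDLE -> LOAD -> RUN -> IDLE.
 * Inputs: start and done signals sampled each cycle. */
typedef enum { IDLE, LOAD, RUN } state_t;

static state_t next_state(state_t s, int start, int done) {
    switch (s) {
    case IDLE: return start ? LOAD : IDLE;  /* wait for start */
    case LOAD: return RUN;                  /* one load cycle */
    case RUN:  return done ? IDLE : RUN;    /* run until done */
    }
    return IDLE;
}

int main(void) {
    state_t s = IDLE;
    int start[] = {0, 1, 0, 0, 0, 0};       /* invented input traces */
    int done[]  = {0, 0, 0, 0, 1, 0};
    for (int t = 0; t < 6; t++) {
        printf("cycle %d: state %d\n", t, s);
        s = next_state(s, start[t], done[t]);
    }
    return 0;
}
```

In a hardware implementation, the transition table in next_state would become the combinational next-state logic feeding the state register, which is the irregular structure the blurb refers to.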
Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty associated with combining fault-tolerance and efficiency is that the two have conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how in certain models of parallel computation it is possible to combine efficiency and fault-tolerance, and shows how it is possible to develop efficient algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented in this monograph make a contribution towards bridging the gap between the abstract models of parallel computation and realizable parallel architectures. Fault-Tolerant Parallel Computation presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms. The monograph synthesizes work that was presented in recent symposia and published in refereed journals by the authors and other leading researchers. This is the first text that takes the reader on a grand tour of this new field, summarizing major results and identifying hard open problems. This monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms and parallel computation, and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
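One way to picture the fail-stop setting (a minimal sketch of my own using POSIX threads and C11 atomics, not the authors' algorithm; the crash is simulated): give every worker the whole task list rather than a fixed partition, so tasks abandoned by a crashed worker are picked up by the survivors.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N_TASKS   16
#define N_WORKERS 4

static atomic_int done[N_TASKS];            /* 0 = pending, 1 = done */

/* Every worker scans the entire task array: this redundancy, rather
 * than a fixed partitioning of work, is what tolerates fail-stop
 * workers. The CAS ensures each task is performed exactly once. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int i = 0; i < N_TASKS; i++) {
        if (id == 0 && i == N_TASKS / 2)
            return NULL;                    /* simulate a fail-stop crash */
        int expected = 0;
        if (atomic_compare_exchange_strong(&done[i], &expected, 1))
            printf("worker %d completed task %d\n", id, i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N_WORKERS];
    int ids[N_WORKERS];
    for (int w = 0; w < N_WORKERS; w++) {
        ids[w] = w;
        pthread_create(&t[w], NULL, worker, &ids[w]);
    }
    for (int w = 0; w < N_WORKERS; w++)
        pthread_join(t[w], NULL);
    for (int i = 0; i < N_TASKS; i++)
        if (atomic_load(&done[i]) == 0) {
            puts("unfinished work!");
            return 1;
        }
    puts("all tasks completed despite the crash");
    return 0;
}
```

The monograph's contribution lies in doing this kind of recovery with provably low overhead; the naive rescanning above trades a large amount of redundant scanning for its simplicity.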
A genuinely useful text that gives an overview of the state of the art in system-level design trade-off exploration for concurrent tasks running on embedded heterogeneous multiple processors. The targeted application domain covers complex embedded real-time multimedia and communication applications. The material is mainly based on research at IMEC and its international university network partners in this area over the last decade. In all, readers in the digital signal processing industry will find the material here fully up to date.
The memory system is increasingly turning into a bottleneck in the design of embedded systems. The speed improvements of memory systems are lower than the speed improvements of processors, eventually leading to embedded systems whose performance is limited by the memory. This problem is known as the "memory wall" problem. Furthermore, memory systems may consume the largest share of the system's energy budget and may be the source of unpredictable timing behaviour. Hence, the design of the memory system deserves an increasing amount of attention. Fast, Efficient and Predictable Memory Accesses presents techniques for designing fast, energy-efficient and timing-predictable memory systems. By using a careful combination of compiler optimizations and architectural improvements, we can achieve more than what would be feasible at either of the levels in isolation. The described optimization algorithms achieve the goals of high performance and low energy consumption. In addition to these benefits, the use of scratchpad memories significantly improves the timing predictability of the entire system, leading to tighter worst-case execution time (WCET) bounds. The WCET is a relevant design parameter for all timing-critical systems. In addition, the book covers algorithms to exploit the power-down modes of main memories in SDRAM technology, as well as the execute-in-place feature of Flash memories. The final chapter considers the impact of the register file, which is also part of the memory hierarchy.
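To make the scratchpad idea concrete (a minimal sketch of my own, not code from the book; the explicit staging copy stands in for what the compiler optimizations described above would insert automatically), a small, heavily reused lookup table is copied into the on-chip scratchpad before the hot loop runs, so every access inside the loop has a fixed, short latency:

```c
#include <string.h>

#define SPM_SIZE 256                  /* assumed scratchpad capacity, bytes */

/* Stand-in for the on-chip scratchpad; on a real embedded target this
 * buffer would be placed in the scratchpad's address range via a linker
 * section rather than in ordinary RAM. */
static unsigned char scratchpad[SPM_SIZE];

/* A hot loop over a small, heavily reused table. Staging the table into
 * the scratchpad turns every table access into a fixed-latency on-chip
 * access, which is what tightens the WCET bound. */
long table_sum(const unsigned char *table, const unsigned char *in, int n)
{
    memcpy(scratchpad, table, SPM_SIZE);  /* one-time staging copy */
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += scratchpad[in[i]];         /* predictable on-chip access */
    return acc;
}
```

Unlike a cache, the scratchpad's contents are under explicit software control, so a WCET analyzer never has to guess whether an access hits or misses.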
This book contains papers presented at the fifth and sixth Teraflop Workshops. It presents the state of the art in high performance computing and simulation on modern supercomputer architectures, covering trends in hardware and software development in general, and specifically the future of vector-based systems and heterogeneous architectures. The application areas covered include computational fluid dynamics, fluid-structure interaction, physics, chemistry, astrophysics, and climate research.
This is the first book dedicated to direct continuous-time model identification in 15 years. It cuts down on time spent hunting through journals by providing an overview of much recent research in an increasingly busy field. The CONTSID toolbox discussed in the final chapter gives an overview of developments and practical examples in which MATLAB® can be used for direct time-domain identification of continuous-time systems. This is a valuable reference for a broad audience.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran). The demand for higher speedups increases. The job of a restructuring compiler is to discover the dependence structure and the characteristics of the given machine. Much attention has been focused on the Fortran do loop: this is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. We want to develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations. Then, the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
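To make the dependence question concrete (my own minimal example, in C with OpenMP rather than the Fortran of the books): the first loop below carries a dependence from iteration i-1 to iteration i and cannot be parallelized as written, while the second has fully independent iterations and parallelizes directly.

```c
#include <omp.h>

void example(double *a, double *d, const double *b, const double *c, int n)
{
    /* Loop-carried dependence: a[i] reads a[i-1], which is written by the
     * previous iteration, so the iterations must execute in order. */
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1] + b[i];

    /* No loop-carried dependence: each iteration touches only its own
     * elements, so the iterations may run in any order, or in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        d[i] = b[i] + c[i];
}
```

Deciding which of these two cases holds for arbitrary subscript expressions is exactly the dependence-analysis problem the series formalizes with directed graphs, matrices, and linear equations.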
"Introduction to Embedded System Design Using Field Programmable Gate Arrays" provides a starting point for the use of field programmable gate arrays in the design of embedded systems. The text considers a hypothetical robot controller as an embedded application and weaves around it related concepts of FPGA-based digital design. The book details: use of FPGA vis-a-vis general purpose processor and microcontroller; design using Verilog hardware description language; digital design synthesis using Verilog and Xilinx(r) SpartanTM 3 FPGA; FPGA-based embedded processors and peripherals; overview of serial data communications and signal conditioning using FPGA; FPGA-based motor drive controllers; and prototyping digital systems using FPGA. The book is a good introductory text for FPGA-based design for both students and digital systems designers. Its end-of-chapter exercises and frequent use of example can be used for teaching or for self-study."
In the world of information technology, it is no longer the computer in the classical sense where the majority of IT applications are executed; computing is everywhere. More than 20 billion processors have already been fabricated, and the majority of them can be assumed to still be operational. At the same time, virtually every PC worldwide is connected via the Internet. This combination of traditional and embedded computing creates an artifact of a complexity, heterogeneity, and volatility unmanageable by classical means. Each of our technical artifacts with a built-in processor can be seen as a "Thing that Thinks," a term introduced by MIT's Thinglab. It can be expected that in the near future these billions of Things that Think will become an "Internet of Things," a term originating from ETH Zurich. This means that we will be constantly surrounded by a virtual "organism" of Things that Think. This organism needs novel, adequate design, evolution, and management means, which is also one of the core challenges addressed by the recent German priority research program on Organic Computing.
Details RISC design principles and explains the differences between RISC and other designs. Helps readers acquire hands-on assembly language programming experience.
Grids are a crucial enabling technology for scientific and industrial development. Peer-to-peer computing, grid, distributed storage technologies, emerging web service technologies, and other types of networked distributed computing have provided new paradigms exploiting distributed resources. Grids are revolutionizing computing as profoundly as e-mail and the Web. From Grids to Service and Pervasive Computing, the 10th edited volume of the CoreGRID series, is based on the 2008 CoreGRID Symposium, held August 25-26 in the Canary Islands, Spain. The CoreGRID Symposium is organized jointly with the Euro-Par 2008 conference. The aim of this symposium is to strengthen and advance scientific and technological excellence in the area of grid and peer-to-peer computing. This volume is designed for a professional audience composed of researchers and practitioners within the grid and peer-to-peer computing industry. This volume is also suitable for advanced-level students in computer science.
Here is an extremely useful book that provides insight into a number of different flavors of processor architectures and their design, software tool generation, implementation, and verification. After a brief introduction to processor architectures and how processor designers have sometimes failed to deliver what was expected, the authors introduce a generic flow for embedded on-chip processor design and start to explore the vast design space of on-chip processing. The authors cover a number of different types of processor core.
DAPSYS (International Conference on Distributed and Parallel Systems) is an international biennial conference series dedicated to all aspects of distributed and parallel computing. DAPSYS 2008, the 7th International Conference on Distributed and Parallel Systems, was held in September 2008 in Hungary. Distributed and Parallel Systems: Desktop Grid Computing, based on DAPSYS 2008, presents original research, novel concepts and methods, and outstanding results. Contributors investigate parallel and distributed techniques, algorithms, models and applications; present innovative software tools, environments and middleware; focus on various aspects of grid computing; and introduce novel methods for development, deployment, testing and evaluation. This volume features a special focus on desktop grid computing as well. Designed for a professional audience composed of practitioners and researchers in industry, this book is also suitable for advanced-level students in computer science.
A set of original results in the field of high-level design of logical control devices and systems is presented in this book. These concern different aspects of such important and long-term design problems, including the following, which seem to be the main ones. First, the behavior of a device under design must be described properly, and some adequate formal language should be chosen for that. Second, effective algorithms should be used for checking the prepared description for correctness, for its syntactic and semantic verification at the initial behavior level. Third, the problem of logic circuit implementation must be solved using some concrete technological base; efficient methods of logic synthesis, test, and verification should be developed for that. Fourth, the task of communication between the control device and controlled objects (and maybe between different control devices) waits for its solution. All these problems are hard enough and cannot be successfully solved without efficient methods and algorithms oriented toward computer implementation. Some of these are described in this book. The languages used for behavior description have usually descended from two well-known abstract models which became classic: Petri nets and finite state machines (FSMs). Anyhow, more detailed versions are developed and described in the book, which make it possible to give more complete information concerning specific qualities of the regarded systems. For example, the model of the parallel automaton is presented, which, unlike the conventional finite automaton, can be placed simultaneously into several places, called partial states. As a base for circuit implementation of control algorithms, FPGA is accepted in the majority of cases.
This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphics, Synopsys and Cadence.
How do you design personalized user experiences that delight and provide value to the customers of an eCommerce site? Personalization does not guarantee high quality user experience: a personalized user experience has the best chance of success if it is developed using a set of best practices in HCI. In this book 35 experts from academia, industry and government focus on issues in the design of personalized web sites. The topics range from the design and evaluation of user interfaces and tools to information architecture and computer programming related to commercial web sites. The book covers four main areas:
This book presents the most recent concerns and research results in industrial fault diagnosis using intelligent techniques. It focuses on computational intelligence applications to fault diagnosis, with real-world applications used in different chapters to validate the different diagnosis methods. The book includes one chapter dealing with a novel distributed methodology for coherent fault diagnosis of complex systems.
Introduction to Reconfigurable Computing provides a comprehensive study of the field of reconfigurable computing. It provides an entry point for novices moving into the research fields of reconfigurable computing, FPGAs, and system-on-programmable-chip design. The book can also be used as a teaching reference for a graduate course in computer engineering, or as a reference for advanced electrical and computer engineers. It provides a very strong theoretical and practical background to the field of reconfigurable computing, from Estrin's early machine to very modern architectures such as coarse-grained reconfigurable devices and embedded logic devices. Apart from the introduction and the conclusion, the main chapters of the book are the following: