The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biennially under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine plenary lectures and over 70 contributed talks were given. These figures indicate broad participation, partly due to the attraction of the organizing country, Hungary; the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. This funding made it possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries, and, notably, to enable the first conference participation of six young researchers. The number of East-European participants was relatively high. These results are especially valuable since, in contrast to the usual two-year period, the present meeting was organized just one year after the previous SCAN conference.
Floating Gate Devices: Operation and Compact Modeling focuses on standard operations and compact modeling of memory devices based on the floating-gate architecture. Floating-gate devices are the building blocks of Flash, EPROM, and EEPROM memories. Flash memories, which are the most versatile nonvolatile memories, are widely used to store code (BIOS, communication protocols, identification codes, …) and data (solid-state hard disks, Flash cards for digital cameras, …).
Event-Triggered and Time-Triggered Control Paradigms presents a valuable survey of existing architectures for safety-critical applications and discusses the issues that must be considered when moving from a federated to an integrated architecture. The book focuses on one key topic - the amalgamation of the event-triggered and time-triggered control paradigms into a coherent integrated architecture. The architecture provides for the integration of independent distributed application subsystems by introducing multi-criticality nodes and virtual networks of known temporal properties. The feasibility and the tangible advantages of this new architecture are demonstrated with practical examples taken from the automotive industry. Event-Triggered and Time-Triggered Control Paradigms offers significant insights into the architecture and design of integrated embedded systems, both at the conceptual and at the practical level.
What exactly is "safety"? A safety system should be defined as a system that will not endanger human life or the environment. A safety-critical system requires utmost care in its specification and design in order to avoid possible errors in its implementation that could result in unexpected system behavior during its operating "life." An inappropriate method could lead to loss of life, and will almost certainly result in financial penalties in the long run, whether because of loss of business or because of the imposition of fines. Risks of this kind are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10^9) hours of operation. Nowadays, computers are used at least an order of magnitude more in safety-critical applications than two decades ago. Increasingly, electronic devices are being used in applications where their correct operation is vital to ensure the safety of human life and the environment. These applications range from anti-lock braking systems (ABS) in automobiles, to fly-by-wire aircraft, to biomedical supports for human care. Therefore, it is vital that electronic designers be aware of the safety implications of the systems they develop. State-of-the-art electronic systems are increasingly adopting programmable devices. In particular, Field Programmable Gate Array (FPGA) devices are becoming very interesting due to their characteristics in terms of performance, dimensions, and cost.
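To make the 10^9 figure concrete, here is a quick back-of-the-envelope check; the fleet size and service life below are invented for illustration, not taken from the book:

$$
\lambda \le 10^{-9}\ \text{h}^{-1},\qquad
10{,}000\ \text{units} \times 3\times10^{4}\ \text{h each} = 3\times10^{8}\ \text{h}
\;\Rightarrow\;
\mathbb{E}[\text{losses}] \le 3\times10^{8}\cdot 10^{-9} = 0.3 .
$$

That is, a failure rate at or below one per billion operating hours keeps the expected number of losses over an entire fleet's lifetime well under one.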
Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a quick digression into distributed databases and multicomputers, the emphasis being on performance. The main goals of Database Concurrency Control: Methods, Performance and Analysis are to succinctly specify various concurrency control methods; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models to evaluate the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models which are intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems as limited by concurrency control; and to point out areas for further investigation. This monograph should be of direct interest to computer scientists doing research on concurrency control methods for high performance transaction processing systems, designers of such systems, and professionals concerned with improving (tuning) the performance of transaction processing systems.
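As a hedged illustration of one classic concurrency control method of the kind the book specifies, here is a minimal C sketch of strict two-phase locking with deadlock avoidance by global lock ordering. The Account type and transfer() function are invented for this example and are not from the book:

```c
#include <pthread.h>
#include <stdio.h>

/* Minimal sketch of strict two-phase locking (2PL). */
typedef struct { pthread_mutex_t lock; long balance; int id; } Account;

static Account x = { PTHREAD_MUTEX_INITIALIZER, 100, 1 };
static Account y = { PTHREAD_MUTEX_INITIALIZER, 100, 2 };

/* Growing phase: acquire every lock the transaction needs, in a global
   order (by id) so two concurrent transfers can never deadlock.
   Shrinking phase: release only after all work is done (strictness),
   so no other transaction ever observes a partial update. */
void transfer(Account *a, Account *b, long amount) {
    Account *first  = a->id < b->id ? a : b;
    Account *second = a->id < b->id ? b : a;
    pthread_mutex_lock(&first->lock);     /* growing phase */
    pthread_mutex_lock(&second->lock);
    a->balance -= amount;                 /* the transaction itself */
    b->balance += amount;
    pthread_mutex_unlock(&second->lock);  /* shrinking phase */
    pthread_mutex_unlock(&first->lock);
}

int main(void) {
    transfer(&x, &y, 25);
    printf("x=%ld y=%ld\n", x.balance, y.balance);
    return 0;
}
```

Lock contention under 2PL is exactly the performance effect the book's queuing models quantify: the longer locks are held, the more transactions queue behind them.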
Various approaches for finding optimal values for the parameters of analog cells have found their way into commercial applications. However, a larger impact on performance is expected if tools are developed which operate on a higher abstraction level and consider multiple architectural choices to realize a particular functionality. This book examines the opportunities, conditions, problems, solutions, and systematic methodologies for this new generation of analog CAD tools.
Realizing maximum performance from high bit-rate and RF circuits requires close attention to IC technology, circuit-to-circuit interconnections (i.e., the interconnect), and circuit design. Circuit and Interconnect Design for RF and High Bit-rate Applications covers each of these topics from theory to practice, with sufficient detail to help you produce circuits that are "first-time right." A thorough analysis of the interplay between on-chip circuits and interconnects is presented, including practical examples in high bit-rate and RF applications. Optimum interconnect geometries for the distribution of RF signals are described, together with simple models for standard interconnect geometries that capture characteristic impedance and propagation delay across a broad frequency range. The analysis also covers single-ended and differential geometries, so that the designer can incorporate the effects of interconnections as soon as estimated interconnect lengths are available. Application of interconnect design is illustrated using a 12.5 Gb/s crosspoint switch example taken from a volume-production part.
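As a rough illustration of what such "simple models" can look like, here is a C sketch using two textbook closed-form approximations: the IPC/Wheeler formula for microstrip characteristic impedance and the Hammerstad formula for effective permittivity. These are generic approximations, not the models from this book, and the trace dimensions in main() are invented:

```c
#include <math.h>
#include <stdio.h>

/* Characteristic impedance (ohms) of a microstrip trace.
   Valid roughly for 0.1 < w/h < 2 and er < 15 (IPC/Wheeler form).
   w = trace width, h = dielectric height, t = trace thickness (same units). */
double microstrip_z0(double w, double h, double t, double er) {
    return 87.0 / sqrt(er + 1.41) * log(5.98 * h / (0.8 * w + t));
}

/* Propagation delay in ps/mm: tpd = sqrt(eeff)/c, with the Hammerstad
   approximation for the effective permittivity of a microstrip. */
double microstrip_delay_ps_per_mm(double w, double h, double er) {
    double eeff = (er + 1.0) / 2.0
                + (er - 1.0) / 2.0 / sqrt(1.0 + 12.0 * h / w);
    return sqrt(eeff) / 0.299792458;   /* c = 0.299792458 mm/ps */
}

int main(void) {
    /* Example: 0.2 mm trace over 0.1 mm FR-4 (er ~ 4.3), 35 um copper. */
    printf("Z0  = %.1f ohm\n", microstrip_z0(0.2, 0.1, 0.035, 4.3));
    printf("tpd = %.2f ps/mm\n", microstrip_delay_ps_per_mm(0.2, 0.1, 4.3));
    return 0;
}
```

Models of this kind are exactly what lets a designer budget delay and impedance as soon as estimated interconnect lengths are available, before full electromagnetic simulation.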
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design rather than making incremental changes to an existing design and then ignoring problem areas. The general-purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control-flow, along with data-flow-like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data transmission latencies. This makes multiprocessing far more efficient, because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs to resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
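A minimal toy model of the latency-hiding idea described above, sketched in C: on each cycle the issue logic skips threads blocked on an outstanding memory miss and issues from the next ready thread, so miss latency overlaps with useful work. The thread count, instruction count, and miss pattern are all invented for illustration:

```c
#include <stdio.h>

/* Toy cycle-level model: reschedule on a miss instead of stalling. */
enum { NTHREADS = 4, NINSTRS = 9, MISS_LATENCY = 3 };

typedef struct { int pc, wait; } Ctx;   /* per-thread hardware context */

int main(void) {
    Ctx ctx[NTHREADS] = {0};
    int finished = 0, rr = 0;

    for (int cycle = 0; finished < NTHREADS; cycle++) {
        for (int t = 0; t < NTHREADS; t++)      /* memory system ticks */
            if (ctx[t].wait > 0) ctx[t].wait--;

        for (int i = 0; i < NTHREADS; i++) {    /* find next ready thread */
            int t = (rr + i) % NTHREADS;
            if (ctx[t].pc >= NINSTRS || ctx[t].wait > 0) continue;
            printf("cycle %2d: thread %d issues instr %d\n",
                   cycle, t, ctx[t].pc);
            if (ctx[t].pc % 3 == 2)             /* pretend this load misses */
                ctx[t].wait = MISS_LATENCY;
            if (++ctx[t].pc == NINSTRS) finished++;
            rr = (t + 1) % NTHREADS;            /* round-robin fairness */
            break;                              /* one issue slot per cycle */
        }
    }
    return 0;
}
```

Running it shows the issue slot staying busy while individual threads wait out their misses, which is the whole argument for hardware multithreading.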
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
The second half of the 1970s was marked by impressive advances in array/vector architectures and vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential. Nevertheless, it is very difficult to realize this performance potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming." Parallel programming in particular is a rapidly evolving art and, at present, highly empirical. In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. In the first part (Chapters 1 and 2) we set the stage and focus on program transformations and parallelizing compilers. The second part of this book (Chapters 3 and 4) discusses scheduling for parallel machines from the practical point of view (macro- and microtasking and supporting environments). Finally, the last part…
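As a hedged example of the kind of program transformation a parallelizing compiler applies (the book's own examples may differ), here is loop interchange in C: the rewritten loop nest walks the arrays in unit stride, improving locality and making the inner loop vectorizable. The array names and sizes are invented:

```c
#include <stdio.h>

enum { N = 1024 };
static double a[N][N], b[N][N];

/* Before: the inner loop strides down a column, touching a new cache
   line on every iteration. */
void scale_before(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 2.0 * b[i][j];
}

/* After interchange: unit-stride inner loop, cache-friendly and
   SIMD-friendly. Interchange is legal here because every iteration
   is independent of every other (no loop-carried dependence). */
void scale_after(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0 * b[i][j];
}

int main(void) {
    b[1][2] = 3.0;
    scale_before();
    scale_after();
    printf("a[1][2] = %.1f\n", a[1][2]);
    return 0;
}
```

Dependence analysis of exactly this kind (proving the interchange legal) is the core machinery behind the transformations discussed in the first part of the book.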
Service computing is a cutting-edge area, popular in both industry and academia. New challenges have been introduced to develop service-oriented systems with high assurance requirements. High Assurance Services Computing captures and makes accessible the most recent practical developments in service-oriented high-assurance systems. An edited volume contributed by well-established researchers in this field worldwide, this book reports the best current practices and emerging methods in the areas of service-oriented techniques for high assurance systems. Available results from industry and government, R&D laboratories and academia are included, along with unreported results from the "hands-on" experiences of software professionals in the respective domains. Designed for practitioners and researchers working for industrial organizations and government agencies, High Assurance Services Computing is also suitable for advanced-level students in computer science and engineering.
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience composed of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
Analog Circuit Design contains the contributions of 18 tutorials from the 17th workshop on Advances in Analog Circuit Design. Each part discusses a specific, up-to-date topic on new and valuable design ideas in the area of analog circuit design. Each part is presented by six experts in that field, and state-of-the-art information is shared and overviewed. This book is number 17 in this successful series of Analog Circuit Design.
The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages, such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the role of hardware designers and software programmers and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.
Grid Middleware and Services: Challenges and Solutions is the eighth volume of the CoreGRID series, the premier European proceedings series on Grid computing. This book aims to strengthen and advance scientific and technological excellence in the area of Grid computing. The main focus in this volume is on Grid middleware and service level agreements. Grid middleware and Grid services are two pillars of Grid computing systems and applications. This book includes high-level contributions by leading researchers in both areas and presents current solutions together with future challenges. This volume includes sections on knowledge and data management on Grids; Grid resource management and scheduling; Grid information, resource and workflow monitoring services; and service level agreements. Grid Middleware and Services: Challenges and Solutions is designed for a professional audience composed of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
- A one-of-a-kind survey of the field of reconfigurable computing
- Gives a comprehensive introduction to a discipline that offers a 10X-100X acceleration of algorithms over microprocessors
- Discusses the impact of reconfigurable hardware on a wide range of applications: signal and image processing, network security, bioinformatics, and supercomputing
- Includes the history of the field as well as recent advances
- Includes an extensive bibliography of primary sources
Functional Design Errors in Digital Circuits Diagnosis covers a wide spectrum of innovative methods to automate the debugging process throughout the design flow: from Register-Transfer Level (RTL) all the way to the silicon die. In particular, this book describes: (1) techniques for bug trace minimization that simplify debugging; (2) an RTL error diagnosis method that identifies the root cause of errors directly; (3) a counterexample-guided error-repair framework to automatically fix errors in gate-level and RTL designs; (4) a symmetry-based rewiring technology for fixing electrical errors; (5) an incremental verification system for physical synthesis; and (6) an integrated framework for post-silicon debugging and layout repair. The solutions provided in this book can greatly reduce debugging effort, enhance design quality, and ultimately enable the design and manufacture of more reliable electronic devices.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
This book is intended to serve as a textbook for a second course in the implementation (i.e., microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
Developing NoC-based interconnect tailored to a particular application domain, satisfying the application's performance constraints with minimum power-area overhead, is a major challenge. With technology scaling, as the geometries of on-chip devices reach the physical limits of operation, another important design challenge for NoCs will be to provide dynamic (run-time) support against permanent and intermittent faults that can occur in the system. The purpose of Designing Reliable and Efficient Networks on Chips is to provide state-of-the-art methods to solve some of the most important and time-intensive problems encountered during NoC design.
The authors of this Festschrift prepared these papers to honour and express their friendship to Klaus Ritter on the occasion of his sixtieth birthday. Because of Ritter's many friends and his international reputation among mathematicians, finding contributors was easy. In fact, constraints on the size of the book required us to limit the number of papers. Klaus Ritter has done important work in a variety of areas, especially in various applications of linear and nonlinear optimization and also in connection with statistics and parallel computing. For the latter we have to mention Ritter's development of transputer workstation hardware. The wide scope of his research is reflected by the breadth of the contributions in this Festschrift. After several years of scientific research in the U.S., Klaus Ritter was appointed as full professor at the University of Stuttgart. Since then, his name has become inextricably connected with the regularly scheduled conferences on optimization in Oberwolfach. In 1981 he became full professor of Applied Mathematics and Mathematical Statistics at the Technical University of Munich. In addition to his university teaching duties, he has made the application of mathematical methods to problems of industry centrally important.
One of the very important parts of any digital system is the control unit, which coordinates the interplay of the other system blocks. As a rule, control units have an irregular structure, which makes the process of designing their logic circuits very sophisticated. In the case of complex logic controllers, the problem of system design reduces practically to the design of the control unit. We are currently observing a real technical boom connected with achievements in semiconductor technology. One of these is the development of the integrated circuits known as "systems-on-a-programmable-chip" (SoPC), where the number of elements approaches one billion. Because of the extreme complexity of such microchips, it is very important to develop effective design methods oriented to the particular properties of the logical elements. Solving this problem permits improving the functional capabilities of the target digital system inside a single SoPC chip. As the majority of researchers point out, the design methods used in industrial packages are far from optimal for complex digital system design. Similar problems concern the design of control units with standard field-programmable logic devices (FPLD), such as PLA, PAL, GAL, CPLD, and FPGA. Let us point out that modern SoPC are based on CPLD or FPGA technology. Thus, the development of effective design methods oriented to FPLD implementation of the logic circuits used in control units remains a problem of great importance.
Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty associated with combining fault-tolerance and efficiency is that the two have conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how in certain models of parallel computation it is possible to combine efficiency and fault-tolerance and shows how it is possible to develop efficient algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented in this monograph make a contribution towards bridging the gap between the abstract models of parallel computation and realizable parallel architectures. Fault-Tolerant Parallel Computation presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms. The monograph synthesizes work that was presented in recent symposia and published in refereed journals by the authors and other leading researchers. This is the first text that takes the reader on the grand tour of this new field summarizing major results and identifying hard open problems. This monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms and parallel computation and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
A genuinely useful text that gives an overview of the state of the art in system-level design trade-off exploration for concurrent tasks running on embedded heterogeneous multiple processors. The targeted application domain covers complex embedded real-time multimedia and communication applications. The material is mainly based on research at IMEC and its international university network partners in this area over the last decade. In all, those in the digital signal processing industry will find the material here bang up to date.
The memory system is increasingly turning into a bottleneck in the design of embedded systems. The speed improvements of memory systems are lower than the speed improvements of processors, eventually leading to embedded systems whose performance is limited by the memory. This problem is known as the "memory wall" problem. Furthermore, memory systems may consume the largest share of the system's energy budget and may be the source of unpredictable timing behaviour. Hence, the design of the memory system deserves an increasing amount of attention. Fast, Efficient and Predictable Memory Accesses presents techniques for designing fast, energy-efficient and timing-predictable memory systems. By using a careful combination of compiler optimizations and architectural improvements, we can achieve more than what would be feasible at either level in isolation. The described optimization algorithms achieve the goals of high performance and low energy consumption. In addition to these benefits, the use of scratchpad memories significantly improves the timing predictability of the entire system, leading to tighter worst-case execution time (WCET) bounds. The WCET is a relevant design parameter for all timing-critical systems. In addition, the book covers algorithms to exploit the power-down modes of main memories in SDRAM technology, as well as the execute-in-place feature of Flash memories. The final chapter considers the impact of the register file, which is also part of the memory hierarchy.
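To give a flavour of the compiler side of this idea, here is a minimal C sketch of greedy scratchpad allocation: data objects are ranked by accesses per byte and placed into the scratchpad while space remains. The object names, sizes, and access counts are invented, and the book's actual algorithms (and their WCET-aware variants) are more sophisticated than this knapsack heuristic:

```c
#include <stdio.h>

/* A candidate data object, with its size and profiled access count. */
typedef struct { const char *name; int size; long accesses; } Obj;

int main(void) {
    Obj obj[] = {
        {"coeffs",  256, 90000}, {"in_buf", 2048, 40000},
        {"state",   128, 70000}, {"table",  1024,  5000},
    };
    int n = (int)(sizeof obj / sizeof obj[0]);
    int spm_free = 2048;                 /* scratchpad capacity in bytes */

    /* Sort by access density (accesses per byte), highest first. */
    for (int k = 0; k < n; k++) {
        int best = k;
        for (int j = k + 1; j < n; j++)
            if ((double)obj[j].accesses / obj[j].size >
                (double)obj[best].accesses / obj[best].size)
                best = j;
        Obj tmp = obj[k]; obj[k] = obj[best]; obj[best] = tmp;
    }

    /* Greedy fill: every access moved to the scratchpad saves energy
       and, unlike a cache hit, is statically guaranteed, which is what
       tightens the WCET bound. */
    for (int k = 0; k < n; k++)
        if (obj[k].size <= spm_free) {
            printf("place %-7s (%4d B) in scratchpad\n",
                   obj[k].name, obj[k].size);
            spm_free -= obj[k].size;
        }
    return 0;
}
```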