Multimedia Information Systems brings together in one place important contributions and up-to-date research results in this fast-moving area. It serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Optical media are now widely used in telecommunication networks, and the evolution of optical and optoelectronic technologies suggests that their wide range of techniques could be successfully introduced in shorter-distance interconnection systems. This book bridges the existing gap between research in optical interconnects and research in high-performance computing and communication systems, of which parallel processing is just one example. It also provides a more comprehensive understanding of the advantages and limitations of optics as applied to high-speed communications. Audience: The book will be a vital resource for researchers and graduate students of optical interconnects, computer architectures and high-performance computing and communication systems who wish to understand the trends in the newest technologies, models and communication issues in the field.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they remain among the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, and applications of this problem class are presented in the areas of multiple-target tracking for military surveillance systems, experimental high-energy physics, and parallel processing. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
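The blurb does not define NAPs formally; the best-known member of the class is the quadratic assignment problem (QAP), which assigns n facilities to n locations so that the total flow-times-distance cost is minimized. The sketch below is a hypothetical illustration (made-up instance data, brute-force search), not code or data from the book; it shows why exact solution is hard, since the search space grows as n!.

```python
# Brute-force solver for a tiny quadratic assignment problem (QAP),
# the best-known nonlinear assignment problem. Illustrative only:
# the instance data below is made up, and brute force is hopeless
# beyond n ~ 10 because there are n! candidate assignments.
from itertools import permutations

# flow[i][j]: flow between facilities i and j
# dist[a][b]: distance between locations a and b
flow = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 5, 6],
        [5, 0, 4],
        [6, 4, 0]]

def qap_cost(perm):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

best = min(permutations(range(len(flow))), key=qap_cost)
print("best assignment:", best, "cost:", qap_cost(best))
```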
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: the flow starts with a given application and incrementally optimizes the ASIP hardware, the ASIP coprocessors and the ASIP software, using a top-down approach and applying application-specific modifications at all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Efficient parallel solutions have been found for many problems. Some can be obtained automatically from sequential programs using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with deriving efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering. Computational science and engineering is an emerging and promising discipline that is shaping future research and development activities in academia and industry, in fields ranging from engineering, science, finance and economics to the arts and humanitarian fields. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem-solving environments. The papers presented in this volume were specially selected to address the most up-to-date ideas, results, work in progress and research experience in the area of high-performance computational techniques for science and engineering applications. This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computational science and engineering experts and researchers but also for all teachers and administrators interested in high-performance computational techniques.
Quality of Protection: Security Measurements and Metrics is an edited volume based on the Quality of Protection Workshop in Milano, Italy (September 2005). This volume discusses how security research can progress towards a quality of protection in security comparable to quality of service in networking and to the software measurements and metrics of empirical software engineering. Information security in the business setting has matured in the last few decades. Standards such as ISO 17799, the Common Criteria (ISO 15408), and a number of industry certifications and risk analysis methodologies have raised the bar for good security solutions from a business perspective. Designed for a professional audience composed of researchers and practitioners in industry, Quality of Protection: Security Measurements and Metrics is also suitable for advanced-level students in computer science.
It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain, in a roundabout way, how Bob and I arrived at this question.
We are about to enter a period of radical change in computer architecture. It is made necessary by advances in processing technology that will make it possible to build devices exceeding in performance and complexity anything conceived in the past. These advances, the logical extension of large- to very-large-scale integration (VLSI), are all but inevitable. With the large number of switching elements available on a single chip as promised by VLSI technology, the question that arises naturally is: what can we do with this technology and how can we best utilize it? The final answer, whatever it may be, will be based on architectural concepts that will probably depart, in several cases, from past and present practices. Furthermore, as we continue to build increasingly powerful microprocessors permitted by VLSI process advances, the method of efficiently interconnecting them will become more and more important. In fact, one serious drawback of VLSI technology is the limited number of pins on each chip. While VLSI chips provide an exponentially growing number of gates, the number of pins they provide remains almost constant. As a result, communication becomes a very difficult design problem in the interconnection of VLSI chips. Due to the insufficient communication power and the high design cost of VLSI chips, computer systems employing VLSI technology will thus need to employ many architectural concepts that depart sharply from past and present practices.
Application Specific Processors is written for use by engineers who are developing specialized systems (application-specific systems). Traditionally, most high-performance signal processors have been realized with application-specific processors. The explanation is that application-specific processors can be tailored to match exactly the (usually very demanding) application requirements, so that no processing power is wasted on unnecessary capabilities and maximum performance is achieved. A disadvantage is that such processors have been expensive to design, since each is a unique design customized to the specific application. In the last decade, computer-aided design systems have been developed to facilitate the development of application-specific integrated circuits. The success of such ASIC CAD systems suggests that it should be possible to streamline the process of application-specific processor design. Application Specific Processors consists of eight chapters which provide a mixture of techniques and examples that relate to application-specific processing. The techniques are expected to suggest additional research and to assist those who are faced with the requirement to implement efficient application-specific processors. The examples illustrate the application of the concepts and demonstrate the efficiency that can be achieved with application-specific processors. The chapters were written by members and former members of the application-specific processing group at the University of Texas at Austin. The first five chapters relate to specialized arithmetic, which is often the key to achieving high performance in application-specific processors. The next two chapters focus on signal processing systems, and the final chapter examines the interconnection of possibly disparate elements to create systems.
Fault tolerance in integrated circuits is not an exclusive concern of space designers or highly reliable application engineers. Rather, designers of next-generation products must cope with reduced noise margins due to technological advances. The continuous evolution of the fabrication technology of semiconductor components, in terms of transistor geometry shrinking, power supply, speed, and logic density, has significantly reduced the reliability of very deep submicron integrated circuits in the face of the various internal and external sources of noise. The very popular Field Programmable Gate Arrays, customizable by SRAM cells, are a consequence of this evolution, with millions of memory cells implementing the logic, embedded memories, routing, and, more recently, embedded microprocessor cores. These re-programmable system-on-chip platforms must be fault-tolerant to cope with present-day requirements. This book discusses fault-tolerance techniques for SRAM-based Field Programmable Gate Arrays (FPGAs). It starts by presenting the model of the problem and the effects of upsets on the programmable architecture. It then surveys the main fault-tolerance techniques used today to protect integrated circuits against errors, and describes a large set of methods for designing fault-tolerant systems in SRAM-based FPGAs. Some of the presented techniques are based on developing a new fault-tolerant architecture with new, more robust FPGA elements; others are based on protecting the high-level hardware description before synthesis onto the FPGA. Readers have the flexibility of choosing the most suitable fault-tolerance technique for their projects and of comparing a set of fault-tolerant techniques for programmable logic applications.
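The blurb does not name individual techniques, but the classic idea in this space is triple modular redundancy (TMR): the logic is replicated three times and a majority voter masks a single upset. The sketch below is a hypothetical software illustration of the voting principle only, not code from the book.

```python
# Illustrative sketch of triple modular redundancy (TMR), a classic
# technique for hardening SRAM-based FPGA designs against single-event
# upsets: the logic is replicated three times and a majority voter masks
# a single faulty copy. Hypothetical illustration, not code from the book.
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority, as a TMR voter computes."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1010
copies = [correct, correct ^ 0b0100, correct]   # one copy hit by an upset
voted = majority_vote(*copies)
assert voted == correct                          # the single upset is masked
print(bin(voted))
```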
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate broad participation, owed partly to the attraction of the organizing country, Hungary, and partly to an effective support system. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and by the József Attila University. Thanks to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries; importantly, the support also made possible what was probably the first conference participation for six young researchers. The number of East-European participants was relatively high. These results are especially valuable since, in contrast to the usual two-year period, the present meeting was organized just one year after the previous SCAN conference.
Floating Gate Devices: Operation and Compact Modeling focuses on standard operations and compact modeling of memory devices based on the Floating Gate architecture. Floating Gate devices are the building blocks of Flash, EPROM and EEPROM memories. Flash memories, which are the most versatile nonvolatile memories, are widely used to store code (BIOS, communication protocols, identification codes, etc.) and data (solid-state hard disks, Flash cards for digital cameras, etc.).
Event-Triggered and Time-Triggered Control Paradigms presents a valuable survey of existing architectures for safety-critical applications and discusses the issues that must be considered when moving from a federated to an integrated architecture. The book focuses on one key topic: the amalgamation of the event-triggered and the time-triggered control paradigms into a coherent integrated architecture. The architecture provides for the integration of independent distributed application subsystems by introducing multi-criticality nodes and virtual networks of known temporal properties. The feasibility and the tangible advantages of this new architecture are demonstrated with practical examples taken from the automotive industry. Event-Triggered and Time-Triggered Control Paradigms offers significant insights into the architecture and design of integrated embedded systems, both at the conceptual and at the practical level.
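For readers unfamiliar with the two paradigms, the sketch below contrasts them in plain software terms: a time-triggered task runs at statically planned instants derived from a time base, while an event-triggered task runs only when an event arrives. This is a generic, hypothetical illustration, not the integrated architecture the book describes.

```python
# Generic contrast between the two control paradigms discussed in the
# book (illustrative only, not the book's integrated architecture).
import time, queue, threading

def time_triggered(task, period_s, cycles):
    """Run 'task' at fixed, pre-planned instants (time-triggered)."""
    next_t = time.monotonic()
    for _ in range(cycles):
        task()
        next_t += period_s
        time.sleep(max(0.0, next_t - time.monotonic()))

def event_triggered(task, events: queue.Queue):
    """Run 'task' only when an event arrives (event-triggered)."""
    while True:
        ev = events.get()
        if ev is None:          # sentinel to stop
            break
        task()

events = queue.Queue()
worker = threading.Thread(target=event_triggered,
                          args=(lambda: print("event task"), events))
worker.start()
time_triggered(lambda: print("periodic task"), period_s=0.1, cycles=3)
events.put("sensor reading")
events.put(None)
worker.join()
```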
What exactly is "safety"? A safety system should be defined as a system that will not endanger human life or the environment. A safety-critical system requires the utmost care in its specification and design in order to avoid possible errors in its implementation that could result in unexpected system behavior during its operating "life". An inappropriate method could lead to loss of life, and will almost certainly result in financial penalties in the long run, whether because of loss of business or because of the imposition of fines. Risks of this kind are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10^9) hours of operation. Nowadays, computers are used at least an order of magnitude more in safety-critical applications than two decades ago. Increasingly, electronic devices are being used in applications where their correct operation is vital to ensure the safety of human life and the environment, ranging from anti-lock braking systems (ABS) in automobiles, to fly-by-wire aircraft, to biomedical support systems for human care. It is therefore vital that electronic designers be aware of the safety implications of the systems they develop. State-of-the-art electronic systems are increasingly adopting programmable devices for electronic applications in terrestrial systems. In particular, Field Programmable Gate Array (FPGA) devices are becoming very attractive due to their characteristics in terms of performance, size and cost.
Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a brief digression into distributed databases and multicomputers; the emphasis is on performance. Its main goals are to specify various concurrency control methods succinctly; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models for evaluating the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems limited by concurrency control; and to point out areas for further investigation. This monograph is of direct interest to computer scientists doing research on concurrency control methods for high-performance transaction processing systems, to designers of such systems, and to professionals concerned with improving (tuning) the performance of transaction processing systems.
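Locking is the classic concurrency control family such a survey covers, and lock contention is the performance effect the blurb mentions. The sketch below is a minimal, hypothetical illustration of strict two-phase locking (all locks held until commit), not code or a model from the monograph.

```python
# Minimal illustration of strict two-phase locking (2PL), a classic
# concurrency control method: a transaction acquires locks as it accesses
# items (growing phase) and releases them only at commit (shrinking phase).
# Hypothetical sketch, not code from the book.
import threading

class LockManager:
    def __init__(self):
        self._locks = {}                       # item -> threading.Lock
        self._guard = threading.Lock()

    def _lock_for(self, item):
        with self._guard:
            return self._locks.setdefault(item, threading.Lock())

    def acquire(self, txn, item):
        lock = self._lock_for(item)
        lock.acquire()                         # blocking here is lock contention
        txn.held.append(lock)

class Transaction:
    def __init__(self, lm):
        self.lm, self.held = lm, []

    def read_write(self, item):
        self.lm.acquire(self, item)            # growing phase
        # ... access 'item' here ...

    def commit(self):
        for lock in reversed(self.held):       # shrinking phase at commit
            lock.release()
        self.held.clear()

lm = LockManager()
t1 = Transaction(lm)
t1.read_write("x")
t1.read_write("y")
t1.commit()
print("t1 committed; all locks released")
```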
Various approaches for finding optimal values for the parameters of analog cells have made their entrance in commercial applications. However, a larger impact on the performance is expected if tools are developed which operate on a higher abstraction level and consider multiple architectural choices to realize a particular functionality. This book examines the opportunities, conditions, problems, solutions and systematic methodologies for this new generation of analog CAD tools.
Realizing maximum performance from high bit-rate and RF circuits requires close attention to IC technology, circuit-to-circuit interconnections (i.e., the "interconnect") and circuit design. Circuit and Interconnect Design for RF and High Bit-rate Applications covers each of these topics from theory to practice, with sufficient detail to help you produce circuits that are "first-time right". A thorough analysis of the interplay between on-chip circuits and interconnects is presented, including practical examples in high bit-rate and RF applications. Optimum interconnect geometries for the distribution of RF signals are described, together with simple models for standard interconnect geometries that capture characteristic impedance and propagation delay across a broad frequency range. The analysis also covers single-ended and differential geometries, so that the designer can incorporate the effects of interconnections as soon as estimated interconnect lengths are available. The application of interconnect design is illustrated using a 12.5 Gb/s crosspoint switch example taken from a volume-production part.
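The book's own interconnect models are not reproduced here. As a flavour of the kind of closed-form estimate designers use for characteristic impedance, the sketch below evaluates the widely quoted IPC-2141 approximation for a surface microstrip trace; the choice of formula and the example dimensions are assumptions for illustration, not material from the book.

```python
# Characteristic impedance of a surface microstrip trace via the widely
# used IPC-2141 closed-form approximation. Generic illustration only:
# the formula and example dimensions are assumptions, not models or
# values taken from the book.
import math

def microstrip_z0(w_mm, t_mm, h_mm, er):
    """Z0 in ohms; valid roughly for 0.1 < w/h < 2 and 1 < er < 15."""
    return (87.0 / math.sqrt(er + 1.41)) * \
           math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Example: 0.25 mm wide, 35 um thick copper, 0.2 mm above the plane, FR-4.
print(f"Z0 ~ {microstrip_z0(0.25, 0.035, 0.2, 4.3):.1f} ohm")
```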
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design, rather than making incremental changes to an existing design and then ignoring problem areas. The general-purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming an amalgam of the commonplace control-flow (von Neumann) processor model and the more exotic data-flow approach. This new processor model offers many exciting possibilities and there is much research to be performed to make this technology widespread. Multithreaded processors combine the simple and efficient sequential execution technique of control flow with data-flow-like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data-transmission latencies. This makes multiprocessing far more efficient, because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity; consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs that resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
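The latency-tolerance argument is easy to see in miniature: if execution can switch to other ready work instead of stalling on a long memory access, the access latency overlaps with useful activity. The sketch below imitates that effect with ordinary software threads; it is an illustrative analogy only, not the per-thread hardware mechanism the book designs.

```python
# Software analogy for latency tolerance by rescheduling: instead of
# blocking on one long "remote memory access", other ready work runs so
# the latencies overlap. Illustrative only; the book is about doing this
# in hardware with per-thread contexts, not with OS threads.
import time
from concurrent.futures import ThreadPoolExecutor

def remote_load(i):
    time.sleep(0.2)            # stand-in for a long memory latency
    return i * i

start = time.monotonic()
serial = [remote_load(i) for i in range(4)]              # stalls four times
t_serial = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    overlapped = list(pool.map(remote_load, range(4)))   # latencies overlap
t_overlap = time.monotonic() - start

print(f"serial: {t_serial:.2f}s  overlapped: {t_overlap:.2f}s")
```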
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, by the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, such as multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate-level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
The second half of the 1970s was marked by impressive advances in array/vector architectures and in vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential. Nevertheless, it is very difficult to realize this performance potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming." Parallel programming in particular is a rapidly evolving art and, at present, highly empirical. In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. In the first part (Chapters 1 and 2) we set the stage and focus on program transformations and parallelizing compilers. The second part (Chapters 3 and 4) discusses scheduling for parallel machines from a practical point of view (macro- and microtasking and supporting environments). Finally, the last part...
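As a small, concrete illustration of the parallelizing compiler's central question (a generic example, not one from the book), the sketch below contrasts a loop with a loop-carried dependence, which must stay sequential or be restructured, with a loop whose iterations are independent and can be distributed across processors.

```python
# Tiny illustration of the dependence question a parallelizing compiler
# asks about every loop. Hypothetical example, not from the book.
from concurrent.futures import ProcessPoolExecutor

def f(x):
    return 2 * x + 1

a = list(range(8))

# Loop-carried dependence: iteration i needs the result of iteration i-1,
# so this loop must stay sequential (or be restructured, e.g. as a scan).
prefix = [0] * len(a)
for i in range(len(a)):
    prefix[i] = (prefix[i - 1] if i else 0) + a[i]

# No loop-carried dependence: each iteration touches only its own element,
# so the iterations can be distributed across processors.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        b = list(pool.map(f, a))
    print(prefix, b)
```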
Service computing is a cutting-edge area, popular in both industry and academia. New challenges have been introduced to develop service-oriented systems with high assurance requirements. High Assurance Services Computing captures and makes accessible the most recent practical developments in service-oriented high-assurance systems. An edited volume contributed by well-established researchers in this field worldwide, this book reports the best current practices and emerging methods in the areas of service-oriented techniques for high assurance systems. Available results from industry and government, R&D laboratories and academia are included, along with unreported results from the "hands-on" experiences of software professionals in the respective domains. Designed for practitioners and researchers working for industrial organizations and government agencies, High Assurance Services Computing is also suitable for advanced-level students in computer science and engineering.
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
Analog Circuit Design contains the contributions of 18 tutorials from the 17th workshop on Advances in Analog Circuit Design. Each part discusses a specific up-to-date topic on new and valuable design ideas in the area of analog circuit design. Each part is presented by six experts in the field, who share and survey state-of-the-art information. This book is number 17 in this successful Analog Circuit Design series.
The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the roles of hardware designers and software programmers and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.
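One of the simplest transformations in such a mapping flow is loop unrolling, which exposes several independent operations per iteration so that a hardware back end can schedule them on parallel units. The before/after sketch below is a generic, language-level illustration (written in Python for readability), not an example taken from the book.

```python
# Generic before/after view of loop unrolling, a classic code
# transformation applied to expose parallelism before hardware mapping.
# Illustrative only, not an example from the book.

def saxpy(alpha, x, y):
    """Original loop: one multiply-add per iteration."""
    for i in range(len(x)):
        y[i] += alpha * x[i]

def saxpy_unrolled4(alpha, x, y):
    """Unrolled by 4: four independent multiply-adds per iteration,
    which a hardware back end can schedule on parallel units."""
    n = len(x)
    i = 0
    while i + 4 <= n:
        y[i]     += alpha * x[i]
        y[i + 1] += alpha * x[i + 1]
        y[i + 2] += alpha * x[i + 2]
        y[i + 3] += alpha * x[i + 3]
        i += 4
    while i < n:              # remainder loop for leftover iterations
        y[i] += alpha * x[i]
        i += 1

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0] * 5
saxpy_unrolled4(2.0, x, y)
print(y)   # [2.0, 4.0, 6.0, 8.0, 10.0]
```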