
High Performance Computational Science and Engineering - IFIP TC5 Workshop on High Performance Computational Science and Engineering (HPCSE), World Computer Congress, August 22-27, 2004, Toulouse, France (Paperback, Softcover reprint of hardcover 1st ed. 2005)
Michael K. Ng, Andrei Doncescu, Laurence T. Yang, Tau Leng
R2,640 Discovery Miles 26 400 Ships in 18 - 22 working days

Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering.

Computational Science and Engineering is an emerging and promising discipline in shaping future research and development activities in academia and industry, spanning engineering, science, finance, economics, the arts and humanitarian fields. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem-solving environments. The papers presented in this volume are specially selected to address the most up-to-date ideas, results, work-in-progress and research experience in the area of high performance computational techniques for science and engineering applications.

This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress, August 2004, in Toulouse, France.

The collection will be important not only for computational science and engineering experts and researchers but for all teachers and administrators interested in high performance computational techniques.

Self-Timed Control of Concurrent Processes - The Design of Aperiodic Logical Circuits in Computers and Discrete Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Victor I. Varshavsky
R2,689 Discovery Miles 26 890 Ships in 18 - 22 working days

"Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé." (Jules Verne) "One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'." (Eric T. Bell) "The series is divergent; therefore we may be able to do something with it." (O. Heaviside) Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: "One service topology has rendered mathematical physics ..."; "One service logic has rendered computer science ..."; "One service category theory has rendered mathematics ...". All arguably true. And all statements obtainable this way form part of the raison d'être of this series.

Power Aware Computing (Paperback, Softcover reprint of hardcover 1st ed. 2002)
Robert Graybill, Rami Melhem
R5,200 Discovery Miles 52 000 Ships in 18 - 22 working days

With the advent of portable and autonomous computing systems, power consumption has emerged as a focal point in many research projects, commercial systems and DoD platforms. One current research initiative which drew much attention to this area is the Power Aware Computing and Communications (PAC/C) program sponsored by DARPA. Many of the chapters in this book include results from work that has been supported by the PAC/C program. The performance of computer systems has been improving tremendously while the size and weight of such systems has been constantly shrinking. The capacities of batteries relative to their sizes and weights have also been improving, but at a rate which is much slower than the rate of improvement in computer performance and the rate of shrinking in computer sizes. The relation between the power consumption of a computer system and its performance and size is a complex one which is very much dependent on the specific system and the technology used to build that system. We do not need a complex argument, however, to be convinced that energy and power, which is the rate of energy consumption, are becoming critical components in computer systems in general, and portable and autonomous systems in particular. Most of the early research on power consumption in computer systems addressed the issue of minimizing power in a given platform, which usually translates into minimizing energy consumption, and thus longer battery life.
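
The blurb's definition, power as the rate of energy consumption, lends itself to a back-of-the-envelope battery-life calculation (the sketch below uses illustrative numbers and a hypothetical helper, not material from the book):

```python
def battery_life_hours(capacity_wh, avg_power_w):
    """Hours a battery of the given capacity sustains a given average power."""
    return capacity_wh / avg_power_w   # energy = power x time

# A 10 Wh battery driving a 2 W portable system lasts 5 hours;
# halving average power consumption doubles battery life.
print(battery_life_hours(10.0, 2.0))   # 5.0
print(battery_life_hours(10.0, 1.0))   # 10.0
```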

Digital Image Compression - Algorithms and Standards (Paperback, Softcover reprint of hardcover 1st ed. 1995)
Weidong Kou
R3,995 Discovery Miles 39 950 Ships in 18 - 22 working days

Digital image business applications are expanding rapidly, driven by recent advances in the technology and breakthroughs in the price and performance of hardware and firmware. This ever-increasing need for the storage and transmission of images has in turn driven the technology of image compression: image data rate reduction to save storage space and reduce transmission rate requirements. Digital image compression offers a solution to a variety of imaging applications that require a vast amount of data to represent the images, such as document imaging management systems, facsimile transmission, image archiving, remote sensing, medical imaging, entertainment, HDTV, broadcasting, education and video teleconferencing. Digital Image Compression: Algorithms and Standards introduces the reader to compression algorithms, including the CCITT facsimile standards T.4 and T.6, JBIG, CCITT H.261 and the MPEG standards. The book provides comprehensive explanations of the principles and concepts of the algorithms, helping readers' understanding and allowing them to use the standards in business, product development and R&D. Audience: A valuable reference for the graduate student, researcher and engineer. May also be used as a text for a course on the subject.
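
The facsimile standards mentioned above (CCITT T.4/T.6) compress documents by coding runs of identical pixels. A bare run-length encoder over a binary scanline, sketched below, shows the underlying idea; it is a simplification, not the standards' actual modified Huffman run-length codes:

```python
def rle_encode(scanline):
    """Collapse a binary scanline into (pixel_value, run_length) pairs."""
    runs = []
    for pixel in scanline:
        if runs and runs[-1][0] == pixel:
            runs[-1] = (pixel, runs[-1][1] + 1)
        else:
            runs.append((pixel, 1))
    return runs

def rle_decode(runs):
    """Expand (pixel_value, run_length) pairs back into a scanline."""
    return [value for value, length in runs for _ in range(length)]

line = [0] * 10 + [1] * 3 + [0] * 7           # mostly-white scanline
encoded = rle_encode(line)
assert rle_decode(encoded) == line            # lossless round trip
print(encoded)  # [(0, 10), (1, 3), (0, 7)]: 20 pixels in 3 runs
```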

Quality Of Protection - Security Measurements and Metrics (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Dieter Gollmann, Fabio Massacci, Artsiom Yautsiukhin
R5,143 Discovery Miles 51 430 Ships in 18 - 22 working days

Quality of Protection: Security Measurements and Metrics is an edited volume based on the Quality of Protection Workshop in Milano, Italy (September 2005). This volume discusses how security research can progress towards a quality of protection in security comparable to quality of service in networking, and towards software measurements and metrics as in empirical software engineering. Information security in the business setting has matured in the last few decades. Standards such as ISO 17799, the Common Criteria (ISO 15408), and a number of industry certifications and risk analysis methodologies have raised the bar for good security solutions from a business perspective.

Designed for a professional audience composed of researchers and practitioners in industry, Quality of Protection: Security Measurements and Metrics is also suitable for advanced-level students in computer science.

Floating Gate Devices: Operation and Compact Modeling (Paperback, Softcover reprint of the original 1st ed. 2004)
Paolo Pavan, Luca Larcher, Andrea Marmiroli
R2,624 Discovery Miles 26 240 Ships in 18 - 22 working days

Floating Gate Devices: Operation and Compact Modeling focuses on standard operations and compact modeling of memory devices based on the Floating Gate architecture. Floating Gate devices are the building blocks of Flash, EPROM and EEPROM memories. Flash memories, which are the most versatile nonvolatile memories, are widely used to store code (BIOS, communication protocols, identification codes, ...) and data (solid-state hard disks, Flash cards for digital cameras, ...).
The reader, who deals with Floating Gate memory devices at different levels - from test-structures to complex circuit design - will find an essential explanation on device physics and technology, and also circuit issues which must be fully understood while developing a new device. Device engineers will use this book to find simplified models to design new process steps or new architectures. Circuit designers will find the basic theory to understand the use of compact models to validate circuits against process variations and to evaluate the impact of parameter variations on circuit performances.
Floating Gate Devices: Operation and Compact Modeling is meant to be a basic tool for designing the next generation of memory devices based on FG technologies.
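
As a taste of what a compact model looks like, the first-order relation between stored floating-gate charge and threshold-voltage shift, ΔVT = -ΔQ_FG / C_CG, can be computed directly (symbols and numbers below are illustrative assumptions, not taken from the book):

```python
def threshold_shift(delta_q_fg, c_cg):
    """First-order FG model: Vt shift from stored charge (Q in C, C in F)."""
    return -delta_q_fg / c_cg

# Storing -1 fC of electrons on a cell with 1 fF of control-gate coupling
# capacitance raises the threshold voltage by 1 V, i.e. programs the cell.
print(threshold_shift(-1e-15, 1e-15))  # 1.0
```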

Microarchitecture of VLSI Computers (Paperback, Softcover reprint of the original 1st ed. 1985)
P. Antognetti, F. Anceau, J. Vuillemin
R1,409 Discovery Miles 14 090 Ships in 18 - 22 working days

We are about to enter a period of radical change in computer architecture. It is made necessary by advances in processing technology that will make it possible to build devices exceeding in performance and complexity anything conceived in the past. These advances, the logical extension of large- to very-large-scale integration (VLSI), are all but inevitable. With the large number of switching elements available in a single chip as promised by VLSI technology, the question that arises naturally is: what can we do with this technology and how can we best utilize it? The final answer, whatever it may be, will be based on architectural concepts that probably will depart, in several cases, from past and present practices. Furthermore, as we continue to build increasingly powerful microprocessors permitted by VLSI process advances, the method of efficiently interconnecting them will become more and more important. In fact, one serious drawback of VLSI technology is the limited number of pins on each chip. While VLSI chips provide an exponentially growing number of gates, the number of pins they provide remains almost constant. As a result, communication becomes a very difficult design problem in the interconnection of VLSI chips. Due to the insufficient communication power and the high design cost of VLSI chips, computer systems employing VLSI technology will thus need to employ many architectural concepts that depart sharply from past and present practices.
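
The pin-limitation argument above can be made quantitative with a toy scaling model (the growth rates are illustrative, not measured data): gates grow exponentially per process generation while pins stay nearly flat, so off-chip communication per gate collapses.

```python
# Toy scaling model: gates per chip grow exponentially across process
# generations while the pin count stays nearly flat, so the available
# off-chip communication per gate shrinks generation after generation.
gates, pins = 10_000, 100
for generation in range(5):
    print(f"gen {generation}: {gates:>7} gates, {pins} pins, "
          f"{pins / gates:.6f} pins/gate")
    gates *= 2   # logic density roughly doubles each generation
    pins += 4    # pin count creeps up only slightly
```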

Fault-Tolerance Techniques for SRAM-Based FPGAs (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Fernanda Lima Kastensmidt, Ricardo Reis
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

Fault-tolerance in integrated circuits is not an exclusive concern of space designers or highly-reliable application engineers. Rather, designers of next-generation products must cope with reduced noise margins due to technological advances. The continuous evolution of the fabrication technology process of semiconductor components, in terms of transistor geometry shrinking, power supply, speed, and logic density, has significantly reduced the reliability of very deep submicron integrated circuits, in the face of the various internal and external sources of noise. The very popular Field Programmable Gate Arrays, customizable by SRAM cells, are a consequence of the integrated circuit evolution, with millions of memory cells to implement the logic, embedded memories, routing, and more recently embedded microprocessor cores. These re-programmable system-on-chip platforms must be fault-tolerant to cope with present-day requirements. This book discusses fault-tolerance techniques for SRAM-based Field Programmable Gate Arrays (FPGAs). It starts by showing the model of the problem and the upset effects in the programmable architecture. It then presents the main fault-tolerance techniques used nowadays to protect integrated circuits against errors. A large set of methods for designing fault-tolerant systems in SRAM-based FPGAs is described. Some of the presented techniques are based on developing a new fault-tolerant architecture with new robust FPGA elements. Other techniques are based on protecting the high-level hardware description before synthesis in the FPGA. Readers have the flexibility of choosing the most suitable fault-tolerance technique for their projects, and of comparing a set of fault-tolerant techniques for programmable logic applications.
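
One classic mitigation in this area is triple modular redundancy (TMR) with a majority voter, which masks any single upset. The sketch below caricatures it in software; TMR is a standard technique for SRAM-based FPGAs, but the book's specific methods may differ:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies: masks any single upset."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1011
upset = correct ^ 0b0100      # a single-event upset flips one bit in one copy
assert majority_vote(correct, correct, upset) == correct
print(bin(majority_vote(correct, upset, correct)))  # 0b1011 either way
```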

Fault-Tolerant Parallel Computation (Paperback, Softcover reprint of hardcover 1st ed. 1997)
Paris Christos Kanellakis, Alex Allister Shvartsman
R2,632 Discovery Miles 26 320 Ships in 18 - 22 working days

Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty associated with combining fault-tolerance and efficiency is that the two have conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how in certain models of parallel computation it is possible to combine efficiency and fault-tolerance and shows how it is possible to develop efficient algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented in this monograph make a contribution towards bridging the gap between the abstract models of parallel computation and realizable parallel architectures. Fault-Tolerant Parallel Computation presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms. The monograph synthesizes work that was presented in recent symposia and published in refereed journals by the authors and other leading researchers. This is the first text that takes the reader on the grand tour of this new field summarizing major results and identifying hard open problems. This monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms and parallel computation and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
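
The core problem, completing every task on processors subject to fail-stop errors, can be caricatured in a few lines: live processors redundantly sweep the unfinished tasks, so the work survives failures. This is a toy sequential simulation under assumed fail-stop behavior, not one of the authors' algorithms:

```python
import random

def write_all(n_tasks, n_procs, fail_prob, seed=0):
    """Toy fail-stop run: live processors repeatedly sweep unfinished tasks."""
    rng = random.Random(seed)
    done = [False] * n_tasks
    alive = list(range(n_procs))
    while not all(done) and alive:
        for p in list(alive):
            if rng.random() < fail_prob:
                alive.remove(p)           # fail-stop: the processor halts forever
                continue
            for t in range(n_tasks):      # redundant sweep: any live processor
                if not done[t]:           # can finish any remaining task
                    done[t] = True
                    break
    return done

# With this seed no processor happens to fail, and all eight tasks complete.
assert all(write_all(n_tasks=8, n_procs=4, fail_prob=0.1))
```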

Event-Triggered and Time-Triggered Control Paradigms (Paperback, Softcover reprint of hardcover 1st ed. 2005)
Roman Obermaisser
R2,618 Discovery Miles 26 180 Ships in 18 - 22 working days

Event-Triggered and Time-Triggered Control Paradigms presents a valuable survey about existing architectures for safety-critical applications and discusses the issues that must be considered when moving from a federated to an integrated architecture. The book focuses on one key topic - the amalgamation of the event-triggered and the time-triggered control paradigm into a coherent integrated architecture. The architecture provides for the integration of independent distributed application subsystems by introducing multi-criticality nodes and virtual networks of known temporal properties. The feasibility and the tangible advantages of this new architecture are demonstrated with practical examples taken from the automotive industry.

Event-Triggered and Time-Triggered Control Paradigms offers significant insights into the architecture and design of integrated embedded systems, both at the conceptual and at the practical level.

Electronics System Design Techniques for Safety Critical Applications (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Luca Sterpone
R2,623 Discovery Miles 26 230 Ships in 18 - 22 working days

What exactly is "safety"? A safety system should be defined as a system that will not endanger human life or the environment. A safety-critical system requires utmost care in its specification and design, in order to avoid possible errors in its implementation that could result in unexpected system behavior during its operating life. An inappropriate method could lead to loss of life, and will almost certainly result in financial penalties in the long run, whether because of loss of business or because of the imposition of fines. Risks of this kind are usually managed with the methods and tools of safety engineering. A life-critical system is designed to lose less than one life per billion (10^9) hours of operation. Nowadays, computers are used at least an order of magnitude more in safety-critical applications than two decades ago. Increasingly, electronic devices are being used in applications where their correct operation is vital to ensure the safety of human life and the environment. These applications range from anti-lock braking systems (ABS) in automobiles, to fly-by-wire aircraft, to biomedical supports for human care. Therefore, it is vital that electronic designers be aware of the safety implications of the systems they develop. State-of-the-art electronic systems are increasingly adopting programmable devices. In particular, Field Programmable Gate Array (FPGA) devices are becoming very attractive due to their characteristics in terms of performance, dimensions and cost.
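
The one-life-per-billion-hours figure can be unpacked with simple arithmetic (the service-life numbers below are illustrative assumptions, following the 10^9-hour interpretation above):

```python
# A life-critical system: fewer than one life lost per 10**9 operating hours.
max_failure_rate = 1 / 1e9           # tolerable failures per operating hour

# Expected failures over one unit's service life: 30 years, 10 hours/day.
service_hours = 30 * 365 * 10        # = 109,500 hours
expected_failures = max_failure_rate * service_hours
print(expected_failures)             # about 1.1e-4 per unit lifetime
```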

Circuit and Interconnect Design for RF and High Bit-rate Applications (Paperback, 1st ed. Softcover of orig. ed. 2008)
Hugo Veenstra, John R. Long
R4,002 Discovery Miles 40 020 Ships in 18 - 22 working days

Realizing maximum performance from high bit-rate and RF circuits requires close attention to IC technology, circuit-to-circuit interconnections (i.e., the interconnect) and circuit design. Circuit and Interconnect Design for RF and High Bit-rate Applications covers each of these topics from theory to practice, with sufficient detail to help you produce circuits that are first-time right. A thorough analysis of the interplay between on-chip circuits and interconnects is presented, including practical examples in high bit-rate and RF applications. Optimum interconnect geometries for the distribution of RF signals are described, together with simple models for standard interconnect geometries that capture characteristic impedance and propagation delay across a broad frequency range. The analysis also covers single-ended and differential geometries, so that the designer can incorporate the effects of interconnections as soon as estimated interconnect lengths are available. Application of interconnect design is illustrated using a 12.5 Gb/s crosspoint switch example taken from a volume-production part.

High Performance Computing in Fluid Dynamics - Proceedings of the Summerschool on High Performance Computing in Fluid Dynamics held at Delft University of Technology, The Netherlands, June 24-28 1996 (Paperback, Softcover reprint of the original 1st ed. 1996)
P. Wesseling
R1,411 Discovery Miles 14 110 Ships in 18 - 22 working days

This book contains the course notes of the Summerschool on High Performance Computing in Fluid Dynamics, held at the Delft University of Technology, June 24-28, 1996. The lectures presented deal to a large extent with algorithmic, programming and implementation issues, as well as experiences gained so far on parallel platforms. Attention is also given to mathematical aspects, notably domain decomposition and scalable algorithms. Topics considered are: basic concepts of parallel computers, parallelization strategies, programming aspects, parallel algorithms, applications in computational fluid dynamics, the present hardware situation and developments to be expected. The book is addressed to students at a graduate level and researchers in industry engaged in scientific computing who have little or no experience with high performance computing but who want to learn more, and/or want to port their code to parallel platforms. It is a good starting point for those who want to enter the field of high performance computing, especially if applications in fluid dynamics are envisaged.

Database Concurrency Control - Methods, Performance, and Analysis (Paperback, Softcover reprint of hardcover 1st ed. 1996)
Alexander Thomasian
R4,011 Discovery Miles 40 110 Ships in 18 - 22 working days

Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a quick digression into distributed databases and multicomputers, the emphasis being on performance. Its main goals are: to succinctly specify various concurrency control methods; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models to evaluate the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models which are intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems limited by concurrency control; and to point out areas for further investigation. This monograph should be of direct interest to computer scientists doing research on concurrency control methods for high performance transaction processing systems, designers of such systems, and professionals concerned with improving (tuning) the performance of transaction processing systems.
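
Lock contention, the central performance phenomenon the book analyzes, arises when a transaction requests an item another transaction holds. A minimal exclusive-lock table (a toy sketch with hypothetical names, not one of the book's analytic models) shows the blocking behavior:

```python
class LockTable:
    """Toy exclusive-lock table: a request is either granted or blocked."""
    def __init__(self):
        self.owner = {}                   # item -> transaction holding its lock

    def request(self, txn, item):
        holder = self.owner.get(item)
        if holder is None or holder == txn:
            self.owner[item] = txn
            return "granted"
        return "blocked"                  # lock conflict: txn must wait

    def release_all(self, txn):
        """Release every lock held by txn, e.g. at commit time."""
        self.owner = {i: t for i, t in self.owner.items() if t != txn}

locks = LockTable()
assert locks.request("T1", "x") == "granted"
assert locks.request("T2", "x") == "blocked"   # contention: T2 waits on T1
locks.release_all("T1")                         # T1 commits, releasing its locks
assert locks.request("T2", "x") == "granted"
```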

Input/Output in Parallel and Distributed Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Ravi Jain, John Werth, James C. Browne
R5,175 Discovery Miles 51 750 Ships in 18 - 22 working days

Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.

Multithreaded Processor Design (Paperback, Softcover reprint of the original 1st ed. 1996)
Simon W. Moore
R3,976 Discovery Miles 39 760 Ships in 18 - 22 working days

Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design rather than making incremental changes to an existing design and then ignoring problem areas. The general purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control-flow, and also data-flow like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data transmission latencies. This makes multiprocessing far more efficient because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs to resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
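
The key mechanism, rescheduling another thread instead of stalling on a long-latency load, can be mimicked with Python generators (a software caricature of the hardware idea, with hypothetical names; not the book's actual design):

```python
def thread(name, loads):
    """A 'thread' that hits a long-latency memory load for each item."""
    for _ in loads:
        yield f"{name} waiting on load"   # a blocking design would stall here
    yield f"{name} done"

def scheduler(ready):
    """Round-robin: on each simulated load, switch to another ready thread."""
    trace = []
    while ready:
        t = ready.pop(0)
        try:
            trace.append(next(t))   # run until the next load (or completion)
            ready.append(t)         # requeue instead of blocking
        except StopIteration:
            pass                    # thread finished; drop it
    return trace

trace = scheduler([thread("A", [1, 2]), thread("B", [3])])
print(trace)  # A and B interleave: no cycles are lost to memory stalls
```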

Parallel Programming and Compilers (Paperback, Softcover reprint of the original 1st ed. 1988)
Constantine D. Polychronopoulos
R1,395 Discovery Miles 13 950 Ships in 18 - 22 working days

The second half of the 1970s was marked by impressive advances in array/vector architectures and vectorization techniques and compilers. This progress continued with a particular focus on vector machines until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential. Nevertheless, it is very difficult to realize this performance potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming." Parallel programming in particular is a rapidly evolving art and, at present, highly empirical. In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. In the first part (Chapters 1 and 2) we set the stage and focus on program transformations and parallelizing compilers. The second part of this book (Chapters 3 and 4) discusses scheduling for parallel machines from the practical point of view (macro- and microtasking and supporting environments).

High Assurance Services Computing (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Jing Dong, Raymond Paul, Liang-Jie Zhang
R4,023 Discovery Miles 40 230 Ships in 18 - 22 working days

Service computing is a cutting-edge area, popular in both industry and academia. New challenges have been introduced to develop service-oriented systems with high assurance requirements. High Assurance Services Computing captures and makes accessible the most recent practical developments in service-oriented high-assurance systems.

An edited volume contributed by well-established researchers in this field worldwide, this book reports the best current practices and emerging methods in the areas of service-oriented techniques for high assurance systems. Available results from industry and government, R&D laboratories and academia are included, along with unreported results from the "hands-on" experiences of software professionals in the respective domains.

Designed for practitioners and researchers working for industrial organizations and government agencies, High Assurance Services Computing is also suitable for advanced-level students in computer science and engineering.

Grid and Services Evolution (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Norbert Meyer, Domenico Talia, Ramin Yahyapour
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008.

Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments.

Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.

Analog Circuit Design - High-speed Clock and Data Recovery, High-performance Amplifiers, Power Management (Paperback, Softcover... Analog Circuit Design - High-speed Clock and Data Recovery, High-performance Amplifiers, Power Management (Paperback, Softcover reprint of hardcover 1st ed. 2008)
Michiel Steyaert, Arthur H. M. van Roermund, Herman Casier
R5,164 Discovery Miles 51 640 Ships in 18 - 22 working days

Analog Circuit Design contains 18 tutorial contributions from the 17th workshop on Advances in Analog Circuit Design. Each part discusses a specific, up-to-date topic on new and valuable design ideas in the area of analog circuit design. In each part, six experts in the field present and survey state-of-the-art information. This book is number 17 in this successful series of Analog Circuit Design.

Component Models and Systems for Grid Applications - Proceedings of the Workshop on Component Models and Systems for Grid... Component Models and Systems for Grid Applications - Proceedings of the Workshop on Component Models and Systems for Grid Applications held June 26, 2004 in Saint Malo, France. (Paperback, Softcover reprint of hardcover 1st ed. 2005)
Vladimir Getov, Thilo Kielmann
R3,988 Discovery Miles 39 880 Ships in 18 - 22 working days

Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike.

Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components.

With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.

Compilation Techniques for Reconfigurable Architectures (Paperback, Softcover reprint of hardcover 1st ed. 2009): Joao M.P.... Compilation Techniques for Reconfigurable Architectures (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Joao M.P. Cardoso, Pedro C. Diniz
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages, such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the role of hardware designers and software programmers and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.

Finite Element Methods: - Parallel-Sparse Statics and Eigen-Solutions (Paperback, Softcover reprint of hardcover 1st ed. 2006):... Finite Element Methods: - Parallel-Sparse Statics and Eigen-Solutions (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Duc Thai Nguyen
R3,402 Discovery Miles 34 020 Ships in 18 - 22 working days

Finite element methods (FEM) and their associated computer software have been widely accepted as among the most effective general tools for solving large-scale, practical engineering and science applications. For implicit finite element codes, it is a well-known fact that efficient equation and eigen-solvers play critical roles in solving large-scale, practical engineering/science problems. Sparse matrix technologies have evolved and matured to the point that all popular, commercial FEM codes have incorporated sparse solvers into their software. However, few FEM books offer detailed discussions of Lanczos eigen-solvers or explain domain decomposition (DD) finite element formulation (including detailed hand-calculator numerical examples) for parallel computing purposes. The material in this book has evolved over the past several years through the author's research work and graduate courses.

Architecture and Protocols for High-Speed Networks (Paperback, Softcover reprint of hardcover 1st ed. 1994): Otto Spaniol,... Architecture and Protocols for High-Speed Networks (Paperback, Softcover reprint of hardcover 1st ed. 1994)
Otto Spaniol, Andre Danthine, Wolfgang Effelsberg
R5,147 Discovery Miles 51 470 Ships in 18 - 22 working days

Multimedia data streams will form a major part of the new generation of applications in high-speed networks. Continuous media streams, however, require transmission with guaranteed performance. In addition, many multimedia applications will require peer-to-multipeer communication. Guaranteed performance can only be provided with resource reservation in the network, and efficient multipeer communication must be based on multicast support in the lower layers of the network. Architecture and Protocols for High-Speed Networks focuses on techniques for building the networks that will meet the needs of these multimedia applications. In particular, two areas of current research interest in such communication systems are covered in depth: protocol-related aspects, such as switched networks, ATM, the MAC layer, and the network and transport layers; and services and applications. Architecture and Protocols for High-Speed Networks contains contributions from leading world experts, giving the most up-to-date research available. It is an essential reference for all professionals, engineers and researchers working in the area of high-speed networks.

Design of Energy-Efficient Application-Specific Instruction Set Processors (Paperback, Softcover reprint of the original 1st... Design of Energy-Efficient Application-Specific Instruction Set Processors (Paperback, Softcover reprint of the original 1st ed. 2004)
Tilman Gloekler, Heinrich Meyr
R2,660 Discovery Miles 26 600 Ships in 18 - 22 working days

After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software, following a top-down approach and applying application-specific modifications at all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and the efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.

You may like...
Advancements in Instrumentation and…
Srijan Bhattacharya Hardcover R6,138 Discovery Miles 61 380
Advances in Delay-Tolerant Networks…
Joel J. P. C. Rodrigues Paperback R4,669 Discovery Miles 46 690
Quantum Computing, Second Edition - A…
Hafiz Md. Hasan Babu Hardcover R3,271 Discovery Miles 32 710
Grammatical and Syntactical Approaches…
Juhyun Lee, Michael J. Ostwald Hardcover R5,315 Discovery Miles 53 150
Modern Computer Architecture
Stephanie Collins Hardcover R3,285 R2,973 Discovery Miles 29 730
Shared-Memory Parallelism Can Be Simple…
Julian Shun Hardcover R2,946 Discovery Miles 29 460
Artificial Intelligence - Concepts…
Information Reso Management Association Hardcover R9,036 Discovery Miles 90 360
The Practice of Enterprise Architecture…
Svyatoslav Kotusev Hardcover R1,571 Discovery Miles 15 710
CSS and HTML for beginners - A Beginners…
Ethan Hall Hardcover R1,027 R881 Discovery Miles 8 810
The System Designer's Guide to VHDL-AMS…
Peter J Ashenden, Gregory D. Peterson, … Paperback R2,281 Discovery Miles 22 810