
Matrix Computations on Systolic-Type Arrays (Paperback, Softcover reprint of the original 1st ed. 1992)
Jaime Moreno, Tomas Lang
R4,016 Discovery Miles 40 160 Ships in 18 - 22 working days

Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and their impact on overall performance and cost. A method which allows for the analysis of design/mapping approaches for matrix algorithms is also presented; it identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options, as well as by a priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (suitable for a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms, either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.
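
For readers new to the systolic model, a minimal Python sketch (our illustration, not the book's framework) can make the idea concrete: it simulates a linear systolic-type array computing a matrix-vector product, where each cell performs only a local multiply-accumulate as operands stream past, one wavefront per clock tick.

```python
import numpy as np

def systolic_matvec(A, x):
    """Simulate a linear systolic array computing y = A @ x.

    Cell i holds the accumulator y[i]; the x values stream past the
    cells one position per tick, so cell i sees x[j] at tick i + j.
    """
    n, m = A.shape
    y = np.zeros(n)
    for t in range(n + m - 1):          # one iteration per wavefront
        for i in range(n):              # all cells fire in parallel in hardware
            j = t - i
            if 0 <= j < m:
                y[i] += A[i, j] * x[j]  # purely local multiply-accumulate
    return y

A = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(systolic_matvec(A, x), A @ x)
```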

Scalable Shared Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1992)
Michel Dubois, Shreekant S. Thakkar
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and participants certainly did not refrain from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question; we were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticism received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories: 1. Access Order and Synchronization; 2. Performance; 3. Cache Protocols and Architectures; 4. Distributed Shared Memory. Particular topics on which new ideas and results are presented in these proceedings include: efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.

Modeling Microprocessor Performance (Paperback, Softcover reprint of the original 1st ed. 1998)
Bibiche Geuskens, Kenneth Rose
R2,632 Discovery Miles 26 320 Ships in 18 - 22 working days

Modeling Microprocessor Performance focuses on the development of a design and evaluation tool named RIPE (Rensselaer Interconnect Performance Estimator). This tool analyzes the wireability, clock frequency, power dissipation, and reliability of single-chip CMOS microprocessors as a function of interconnect, device, circuit, design and architectural parameters. It can accurately predict the overall performance of existing microprocessor systems: for the three major microprocessor architectures, DEC, PowerPC and Intel, the results have shown agreement within 10% on key parameters. The models cover a broad range of issues that relate to the implementation and performance of single-chip CMOS microprocessors. The book contains a detailed discussion of the various models and the underlying assumptions based on actual design practices. As such, RIPE and its models provide insight into single-chip microprocessor design and its performance aspects. At the same time, they give design and process engineers the capability to model, evaluate, compare and optimize single-chip microprocessor systems using advanced technology and design techniques at an early design stage, without costly and time-consuming implementation. RIPE and its models demonstrate the factors which must be considered when estimating the tradeoffs of device and interconnect technology and architecture design on microprocessor performance.

Synchronization Design for Digital Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Teresa H. Meng
R2,628 Discovery Miles 26 280 Ships in 18 - 22 working days

Synchronization is one of the important issues in digital system design. While other approaches have always been intriguing, up until now synchronous operation using a common clock has been the dominant design philosophy. However, with advances in technology, we have reached the point where other options should be given serious consideration, because clock periods are getting much smaller in relation to interconnect propagation delays, even within a single chip and certainly at the board and backplane level. To a large extent, this problem can be overcome with careful clock distribution in synchronous design, and tools for computer-aided design of clock distribution. However, this places global constraints on the design, making it necessary, for example, to redesign the clock distribution each time any part of the system is changed. In this book, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital system design and in digital communications, the latter field being relevant because large propagation delays have always been a dominant consideration in its design. While synchronous design is discussed and contrasted to the other techniques in Chapter 6, the dominant theme of this book is alternative approaches.

Arrays, Functional Languages, and Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Lenore M. Restifo Mullin; Contributions by Michael Jenkins, Gaetan Hains, Robert Bernecky, Guang R. Gao
R4,023 Discovery Miles 40 230 Ships in 18 - 22 working days

During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's array theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and λ-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed, since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.

Image and Text Compression (Paperback, Softcover reprint of the original 1st ed. 1992)
James A. Storer
R4,031 Discovery Miles 40 310 Ships in 18 - 22 working days

This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of interest: data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, as well as in digital communications, databases, and supercomputing; the book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding; contributors are all accomplished researchers; an extensive bibliography aids researchers in the field.
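
As a toy illustration of the dictionary (LZ-family) methods covered in Part 2, here is a minimal LZW-style encoder in Python; the function name and the flat list of output codes are our simplifications, not the book's notation.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Toy LZW encoder: grow a dictionary of byte strings, emit codes."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    w = b""
    codes = []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc                             # extend the current match
        else:
            codes.append(dictionary[w])        # emit code for longest match
            dictionary[wc] = len(dictionary)   # learn the new string
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

print(lzw_compress(b"abababababab"))  # repeated pairs collapse into few codes
```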

Quality by Design for Electronics (Paperback, Softcover reprint of the original 1st ed. 1996)
W. Fleischammer
R4,682 Discovery Miles 46 820 Ships in 18 - 22 working days

This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. Electronics, especially digital electronics, is involved in nearly all expanding branches of industry, and the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues, along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increasing cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers, and is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.

Designing TSVs for 3D Integrated Circuits (Paperback, 2013)
Nauman Khan, Soha Hassoun
R1,622 Discovery Miles 16 220 Ships in 18 - 22 working days

This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block-level behaviors and a floorplan are available. Finally, the authors explore the use of carbon nanotubes for power grid design as a futuristic alternative to copper.

Formal Techniques in Real-Time and Fault-Tolerant Systems (Paperback, Softcover reprint of the original 1st ed. 1993)
Jan Vytopil
R3,993 Discovery Miles 39 930 Ships in 18 - 22 working days

Formal Techniques in Real-Time and Fault-Tolerant Systems focuses on the state of the art in formal specification, development and verification of fault-tolerant computing systems. The term `fault-tolerance' refers to a system having properties which enable it to deliver its specified function despite (certain) faults in its subsystems. Fault-tolerance is achieved by adding extra hardware and/or software which corrects the effects of faults; in this sense, a system can be called fault-tolerant if it can be proved that the resulting (extended) system meets the reliability requirements under some model of reliability. The main theme of Formal Techniques in Real-Time and Fault-Tolerant Systems can be formulated as follows: how do the specification, development and verification of conventional and fault-tolerant systems differ? How do the notations, methodology and tools used in the design and development of fault-tolerant and conventional systems differ? Formal Techniques in Real-Time and Fault-Tolerant Systems is divided into two parts. The chapters in Part One set the stage for what follows by defining the basic notions and practices of the field of design and specification of fault-tolerant systems. The chapters in Part Two represent the `how-to' section, containing examples of the use of formal methods in specification and development of fault-tolerant systems. The book serves as an excellent reference for researchers in both academia and industry, and may be used as a text for advanced courses on the subject.

Hierarchical Scheduling in Parallel and Cluster Systems (Paperback, Softcover reprint of the original 1st ed. 2003)
Sivarama Dandamudi
R4,007 Discovery Miles 40 070 Ships in 18 - 22 working days

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory; such systems, for example, are becoming commonplace in high-performance graphics workstations. They are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors, and they provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems; however, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors, and as a result they do not provide a single address space.
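
The shared-memory versus distributed-memory distinction drawn above can be sketched in a few lines of Python (our illustration, not the book's; the squaring task and worker names are invented): the first group of workers writes directly into one shared array, while the second group has no shared state and communicates only through explicit messages.

```python
import multiprocessing as mp

def shared_worker(shared, i):
    shared[i] = i * i          # shared-address-space style: write to shared memory

def msg_worker(conn, i):
    conn.send((i, i * i))      # distributed-memory style: send an explicit message
    conn.close()

if __name__ == "__main__":
    # Single address space: one array visible to every worker.
    shared = mp.Array("i", 4)
    procs = [mp.Process(target=shared_worker, args=(shared, i)) for i in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(list(shared))                       # [0, 1, 4, 9]

    # Message passing: results arrive only as messages over pipes.
    pipes = [mp.Pipe() for _ in range(4)]
    procs = [mp.Process(target=msg_worker, args=(child, i))
             for i, (parent, child) in enumerate(pipes)]
    for p in procs: p.start()
    results = dict(parent.recv() for parent, _ in pipes)
    for p in procs: p.join()
    print([results[i] for i in range(4)])     # [0, 1, 4, 9]
```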

Languages, Compilers and Run-Time Systems for Scalable Computers (Paperback, Softcover reprint of the original 1st ed. 1996)
Boleslaw K. Szymanski, Balaram Sinharoy
R4,028 Discovery Miles 40 280 Ships in 18 - 22 working days

Languages, Compilers and Run-Time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues at the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are covered in the extended abstracts. Each chapter provides a bibliography of relevant papers, and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.

Active Middleware Services - From the Proceedings of the 2nd Annual Workshop on Active Middleware Services (Paperback, 2000 ed.)
Salim Hariri, Craig A. Lee, Cauligi S. Raghavendra
R2,639 Discovery Miles 26 390 Ships in 18 - 22 working days

The papers in this volume were presented at the Second Annual Workshop on Active Middleware Services and were selected for inclusion here by the editors. The AMS workshop was organized with support from both the National Science Foundation and the CAT Center at the University of Arizona, and was held in Pittsburgh, Pennsylvania, on August 1, 2000, in conjunction with the 9th IEEE International Symposium on High Performance Distributed Computing (HPDC-9). The explosive growth of Internet-based applications and the proliferation of networking technologies have been transforming most areas of computer science and engineering as well as computational science and commercial application areas. This opens an outstanding opportunity to explore new, Internet-oriented software technologies that will open new research and application opportunities not only for the multimedia and commercial world, but also for the scientific and high-performance computing applications community. Two emerging technologies, agents and active networks, allow increased programmability to enable bringing new services to Internet-based applications. The AMS workshop presented research results and working papers in the areas of active networks, mobile and intelligent agents, software tools for high performance distributed computing, network operating systems, and application programming models and environments. The success of an endeavor such as this depends on the contributions of many individuals. We would like to thank Dr. Frederica Darema and the NSF for sponsoring the workshop.

VLSI Design Methodologies for Digital Signal Processing Architectures (Paperback, Softcover reprint of the original 1st ed. 1994)
Magdy A. Bayoumi
R5,176 Discovery Miles 51 760 Ships in 18 - 22 working days

Designing VLSI systems represents a challenging task. It is a transformation among different specifications corresponding to different levels of design: abstraction, behavioral, structural and physical. The behavioral level describes the functionality of the design. It consists of two components, static and dynamic: the static component describes operations, whereas the dynamic component describes sequencing and timing. The structural level contains information about components, control and connectivity. The physical level describes the constraints that should be imposed on the floor plan, the placement of components, and the geometry of the design; constraints of area, speed and power are also applied at this level. To implement such a multilevel transformation, a design methodology should be devised, taking into consideration the constraints, limitations and properties of each level. The mapping process between any of these domains is non-isomorphic: a single behavioral component may be transformed into more than one structural component. Design methodologies are the most recent evolution in the design automation era, which started off with the introduction and subsequent usage of module generation, especially for regular structures such as PLAs and memories. A design methodology should offer an integrated design system rather than a set of separate unrelated routines and tools. A general outline of a desired integrated design system is as follows: decide on a certain unified framework for all design levels; derive a design method based on this framework; create a design environment to implement this design method.

Distributed and Parallel Systems - From Instruction Parallelism to Cluster Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Peter Kacsuk, Gabriele Kotsis
R5,131 Discovery Miles 51 310 Ships in 18 - 22 working days

Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.

A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 2003)
Ian N. Dunn, Gerard G.L. Meyer
R2,607 Discovery Miles 26 070 Ships in 18 - 22 working days

Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
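
The synthesis procedure itself is not reproduced in this listing, but its hallmark, introducing explicit parameters that control the partitioning and scheduling of computation, can be suggested with a small hedged Python sketch (the function names and the block-size knob are our own illustration, not the book's procedure).

```python
from concurrent.futures import ProcessPoolExecutor

def dot_block(args):
    xs, ys = args
    return sum(a * b for a, b in zip(xs, ys))

def parallel_dot(x, y, block=1024, workers=4):
    """Dot product with an explicit partitioning parameter `block`.

    Tuning `block` (and `workers`) trades scheduling overhead against
    load balance, the kind of knob such a procedure asks the designer
    to expose so one software module adapts to different machines.
    """
    chunks = [(x[i:i + block], y[i:i + block]) for i in range(0, len(x), block)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(dot_block, chunks))

if __name__ == "__main__":
    n = 10_000
    print(parallel_dot(list(range(n)), [1.0] * n))  # equals sum(range(n))
```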

The Interaction of Compilation Technology and Computer Architecture (Paperback, Softcover reprint of the original 1st ed. 1994)
David J. Lilja, Peter L. Bird
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

In brief summary, the following results were presented in this work: a linear time approach was developed to find register requirements for any specified CS schedule or filled MRT; an algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates; an efficient method of estimating register requirements as a function of pipeline depth was presented; a technique was developed for efficiently finding bounds on register requirements as a function of pipeline depth; experimental data were presented to verify these new techniques; and some interesting design points for register file size on a number of different architectures were discussed.

Image and Video Compression Standards - Algorithms and Architectures (Paperback, Softcover reprint of the original 2nd ed. 1997)
Vasudev Bhaskaran, Konstantinos Konstantinides
R7,683 Discovery Miles 76 830 Ships in 18 - 22 working days

New to the Second Edition: offers the latest developments in standards activities (JPEG-LS, MPEG-4, MPEG-7, and H.263); provides a comprehensive review of recent activities on multimedia-enhanced processors, multimedia coprocessors, and dedicated processors, including examples from industry. Image and Video Compression Standards: Algorithms and Architectures, Second Edition presents an introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG (compression of still images), H.261 and H.263 (video teleconferencing), and MPEG-1 and MPEG-2 (video storage and broadcasting). The next generation of audiovisual coding standards, such as MPEG-4 and MPEG-7, are also briefly described. In addition, the book covers the MPEG and Dolby AC-3 audio coding standards and emerging techniques for image and video compression, such as those based on wavelets and vector quantization. The book emphasizes the foundations of these standards, namely techniques such as predictive coding, transform-based coding such as the discrete cosine transform (DCT), motion estimation, motion compensation, and entropy coding, as well as how they are applied in the standards. The implementation details of each standard are avoided; however, the book provides all the material necessary to understand the workings of each of the compression standards, including information that can be used by the reader to evaluate the efficiency of various software and hardware implementations conforming to these standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. The book uniquely covers all major standards (JPEG, MPEG-1, MPEG-2, MPEG-4, H.261, H.263) in a simple and tutorial manner, while fully addressing the architectural considerations involved when implementing these standards. As such, it serves as a valuable reference for the graduate student, researcher or engineer, and is also used frequently as a text for courses on the subject, in both academic and professional settings.
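
As a rough illustration of the transform-coding foundation emphasized above, the following Python sketch (ours, not the book's) applies a 2-D DCT to one 8x8 block and quantizes it with a flat step; real JPEG-style codecs use perceptual quantization tables and entropy-code the surviving coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block (a diagonal ramp), level-shifted as in JPEG.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8.0 - 128.0

coeffs = dctn(block, type=2, norm="ortho")      # 2-D DCT-II
q = 16                                          # flat toy quantizer step
quantized = np.round(coeffs / q)                # high-frequency terms drop to 0
restored = idctn(quantized * q, type=2, norm="ortho")

print(int((quantized == 0).sum()), "of 64 coefficients quantized to zero")
print("max reconstruction error:", float(np.abs(restored - block).max()))
```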

Application-Driven Architecture Synthesis (Paperback, Softcover reprint of the original 1st ed. 1993)
Francky Catthoor, Lars-Gunnar Svensson
R4,005 Discovery Miles 40 050 Ships in 18 - 22 working days

Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by many realistic demonstrations, used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multi-media, radar, sonar. Application-Driven Architecture Synthesis is of interest to both academics and senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.

Parallel Computing on Distributed Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1993)
Fusun Oezguner, Fikret Ercal
R2,679 Discovery Miles 26 790 Ships in 18 - 22 working days

Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.

Formal Techniques for Networked and Distributed Systems - FORTE 2001 (Paperback, Softcover reprint of the original 1st ed. 2002)
Myungchul Kim, Byoungmoon Chin, Sungwon Kang, Danhyung Lee
R5,192 Discovery Miles 51 920 Ships in 18 - 22 working days

FORTE 2001, formerly the FORTE/PSTV conference, is a combination of the FORTE (Formal Description Techniques for Distributed Systems and Communication Protocols) and PSTV (Protocol Specification, Testing and Verification) conferences. This year the conference has a new name, FORTE (Formal Techniques for Networked and Distributed Systems). The original FORTE began in 1989 and the PSTV conference in 1981, so the new FORTE conference actually has a long history of 21 years. The purpose of this conference is to introduce theories and formal techniques applicable to various engineering stages of networked and distributed systems and to share applications and experiences of them. The FORTE 2001 proceedings contain 24 refereed papers and 4 invited papers on these subjects. We regret that many good papers submitted could not be published in this volume due to the lack of space. FORTE 2001 was organized under the auspices of IFIP WG 6.1 by the Information and Communications University of Korea, and was financially supported by the Ministry of Information and Communication of Korea. We would like to thank every author who submitted a paper to FORTE 2001 and thank the reviewers who generously spent their time on reviewing. Special thanks are due to the reviewers who kindly conducted additional reviews for a rigorous review process within a very short time frame. We would like to thank Prof. Guy Leduc, the chairman of IFIP WG 6.1, who made valuable suggestions and shared his experiences regarding conference organization.

Computer Engineering and Technology - 17th National Conference, NCCET 2013, Xining, China, July 20-22, 2013. Revised Selected Papers (Paperback, 2013 ed.)
Weixia Xu, Liquan Xiao, Chengyi Zhang, Jinwen Li, Liyan Yu
R1,401 Discovery Miles 14 010 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 17th National Conference on Computer Engineering and Technology, NCCET 2013, held in Xining, China, in July 2013. The 26 papers presented were carefully reviewed and selected from 234 submissions. They are organized in topical sections named: Application Specific Processors; Communication Architecture; Computer Application and Software Optimization; IC Design and Test; Processor Architecture; Technology on the Horizon.

Interlinking of Computer Networks - Proceedings of the NATO Advanced Study Institute held at Bonas, France, August 28 - September 8, 1978 (Paperback, Softcover reprint of the original 1st ed. 1979)
K.G. Beauchamp
R5,195 Discovery Miles 51 950 Ships in 18 - 22 working days

This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th, 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established, albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs, and those communities where the computer resources are complementary. Consequently there is now considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost-effective operation. Existing computer networks were examined in depth and case histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to communications specialists and managers as well as those concerned with computer operations and development.

Adiabatic Logic - Future Trend and System Level Perspective (Paperback, 2012 ed.)
Philip Teichmann
R2,623 Discovery Miles 26 230 Ships in 18 - 22 working days

Adiabatic logic is a potential successor to static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments, like the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts, will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of these technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as well as degradation-related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS in productivity as well. On a 130nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic, utilizing a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values gained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to its compatibility not only with CMOS technology, but also with the electronic design automation (EDA) tools developed for static CMOS system design.

Object Management in Distributed Database Systems for Stationary and Mobile Computing Environments - A Competitive Approach (Paperback, Softcover reprint of the original 1st ed. 2003)
Wujuan Lin, Bharadwaj Veeravalli
R2,624 Discovery Miles 26 240 Ships in 18 - 22 working days

The network-based computing domain unifies the best research efforts, from single computer systems to networked systems, to render overwhelming computational power for several modern-day applications. Although this power is expected to grow with time due to technological advancements, application requirements impose a continuous thrust on network utilization and on the resources to deliver supreme quality of service. Strictly speaking, the network-based computing domain has no confined scope, and each element offers considerable challenges. Any modern-day networked application strongly thrives on an efficient data storage and management system, which is essentially a database system. There have been a number of books to date in this domain that discuss fundamental principles of designing a database system. Research in this domain is now far matured, and many researchers are venturing into this domain continuously due to the wide variety of challenges posed. In this book, our interest lies in exposing the underlying key challenges in designing algorithms to handle unpredictable requests that arrive at a Distributed Database System (DDBS), and in evaluating their performance. These requests are otherwise called on-line requests arriving at a system to process. Transactions in an on-line banking service, an airline reservation system, a video-on-demand system, etc., are a few examples of on-line requests.

Supercomputing - 28th International Supercomputing Conference, ISC 2013, Leipzig, Germany, June 16-20, 2013. Proceedings (Paperback, 2013 ed.)
Julian M. Kunkel, Thomas Ludwig, Hans Meuer
R1,461 Discovery Miles 14 610 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 28th International Supercomputing Conference, ISC 2013, held in Leipzig, Germany, in June 2013. The 35 revised full papers presented together were carefully reviewed and selected from 89 submissions. The papers cover the following topics: scalable applications with 50K+ cores; performance improvements in algorithms; accelerators; performance analysis and optimization; library development; administration and management of supercomputers; energy efficiency; parallel I/O; grid and cloud.
