Loop Tiling for Parallelism (Paperback, Softcover reprint of the original 1st ed. 2000)
Jingling Xue
R4,007 Discovery Miles 40 070 Ships in 18 - 22 working days

Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machinery required, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics:
* Detailed review of the mathematical foundations, including convex polyhedra and cones;
* Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability;
* Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation;
* A complete suite of techniques for generating SPMD code for a tiled loop nest;
* Up-to-date results on tile size and shape selection for reducing communication and improving parallelism;
* End-of-chapter references for further reading.
Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
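
For readers new to the idea, a minimal sketch of what tiling does to a loop nest is shown below (illustrative C, not taken from the book; the array size N and tile edge TILE are assumed parameters):

```c
/* Minimal loop-tiling sketch: a tiled array copy with transpose.
 * N and TILE are illustrative; TILE is assumed to divide N. */
#include <stddef.h>

#define N    1024
#define TILE 32              /* tile edge sized to fit the cache */

void transpose_tiled(double dst[N][N], const double src[N][N])
{
    /* The outer loops walk tile-by-tile; the inner loops sweep one
     * TILE x TILE tile, improving locality on a uniprocessor and
     * yielding independent tiles that can be farmed out to
     * processors on a parallel machine. */
    for (size_t ii = 0; ii < N; ii += TILE)
        for (size_t jj = 0; jj < N; jj += TILE)
            for (size_t i = ii; i < ii + TILE; i++)
                for (size_t j = jj; j < jj + TILE; j++)
                    dst[j][i] = src[i][j];
}
```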

New Horizons of Computational Science - Proceedings of the International Symposium on Supercomputing held in Tokyo, Japan, September 1-3, 1997 (Paperback, Softcover reprint of the original 1st ed. 2001)
Toshikazu Ebisuzaki, Junichiro Makino
R5,136 Discovery Miles 51 360 Ships in 18 - 22 working days

The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60th birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day there were three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on recent developments in the three-dimensional simulation of supernovae, in particular on the quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review of the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) briefly described the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by the year 2001 with a peak speed of around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the results of the simulation of the formation of the Moon. The first talk of the second day was given by F.-H. Hsu of the IBM T.J. Watson Research Center, on "Deep Blue," the special-purpose computer for chess which, for the first time in history, won a match against the best human player, Mr. Gary Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Inst. of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and the simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy; they built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 199

Reversible Logic Synthesis - From Fundamentals to Quantum Computing (Paperback, Softcover reprint of the original 1st ed. 2004)
Anas N. Al-Rabadi
R2,696 Discovery Miles 26 960 Ships in 18 - 22 working days

For the first time in book form, this comprehensive and systematic monograph presents the methods for the reversible synthesis of logic functions and circuits. This methodology offers designers the capability to solve major problems in system design now and in the future, such as the high rate of power consumption, and the emergence of quantum effects for highly dense ICs. The challenge addressed here is to design reliable systems that consume as little power as possible and in which the signals are processed and transmitted at very high speeds with very high signal integrity. Researchers in academia or industry and graduate students, who work in logic synthesis, computer design, computer-aided design tools, and low power VLSI circuit design, will find this book a valuable resource.
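
To give a concrete feel for reversibility, consider the Toffoli (controlled-controlled-NOT) gate, a standard universal reversible gate: it computes a bijection on three bits and is its own inverse. The C sketch below is illustrative only and not drawn from the book:

```c
/* Toffoli (CCNOT) gate on a 3-bit state encoded as bits c b a:
 * the target bit c is flipped iff both controls a and b are 1.
 * The mapping is a bijection on {0,...,7} and self-inverse. */
#include <assert.h>
#include <stdio.h>

static unsigned toffoli(unsigned s)
{
    unsigned a = s & 1u, b = (s >> 1) & 1u;
    return s ^ ((a & b) << 2);          /* c ^= a AND b */
}

int main(void)
{
    for (unsigned x = 0; x < 8; x++) {
        unsigned y = toffoli(x);
        assert(toffoli(y) == x);        /* applying it twice restores x */
        printf("%u -> %u\n", x, y);
    }
    return 0;
}
```

Because such a gate erases no information, circuits built from it sidestep the energy cost that irreversible bit erasure necessarily incurs (Landauer's principle), which is one motivation for reversible synthesis.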

Database Recovery (Paperback, Softcover reprint of the original 1st ed. 1998)
Vijay Kumar, Sang Hyuk Son
R3,962 Discovery Miles 39 620 Ships in 18 - 22 working days

Database Recovery presents an in-depth discussion of all aspects of database recovery. It first introduces the topic informally to build an intuitive understanding, and then presents a formal treatment of the recovery mechanism. In the past, recovery has been treated merely as a mechanism implemented on an ad-hoc basis. This book elevates recovery from a mechanism to a concept, and presents its essential properties. A book on recovery is incomplete if it does not present how recovery is practiced in commercial systems. This book, therefore, presents a detailed description of recovery mechanisms as implemented in the Informix, OpenIngres, Oracle, and Sybase commercial database systems. Database Recovery is suitable as a textbook for a graduate-level course on database recovery, as a secondary text for a graduate-level course on database systems, and as a reference for researchers and practitioners in industry.

Constructing Predictable Real Time Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Alexander D. Stoyenko
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

Foreword: In nature, real-time systems have been evolving for some hundred million years. Animal nervous systems have the task of issuing control commands to the active organs in response to messages from the environment. Conditioned reflexes, for example, play an important role in this. Perhaps the emergence of man can be dated roughly to the time when his gradually developing brain produced thoughts whose significance, in a forward-planning way, went beyond the immediate situation at hand. Among other things, this ultimately led to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first they served only to execute "rigid" programs, such as the first program-controlled computing device, the Z3, which the undersigned was able to demonstrate in 1941. There followed, among other things, a special-purpose device for wing measurement, which can be regarded as the first process-control computer. Roughly forty dial gauges, working as analog-to-digital converters, were read by the automatic computer and processed as variables within a program. But even this still took place in a rigid sequence. True process control - today also known as real-time systems - requires, however, reacting to constantly changing situations.

Scheduling in Parallel Computing Systems - Fuzzy and Annealing Techniques (Paperback, Softcover reprint of the original 1st ed. 1999)
Shaharuddin Salleh, Albert Y. Zomaya
R3,983 Discovery Miles 39 830 Ships in 18 - 22 working days

Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
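
To give a flavor of the annealing paradigm (a generic sketch, not the book's specific algorithms), the following C program maps independent tasks onto processors with simulated annealing, minimizing the makespan; the task costs and cooling schedule are invented for illustration:

```c
/* Simulated-annealing sketch for static task scheduling:
 * assign NTASKS independent tasks to NPROCS processors so the
 * most-loaded processor (the makespan) finishes early.
 * Compile with -lm. All parameters are illustrative. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NTASKS 20
#define NPROCS 4

static double makespan(const double cost[], const int assign[])
{
    double load[NPROCS] = {0}, worst = 0;
    for (int t = 0; t < NTASKS; t++) load[assign[t]] += cost[t];
    for (int p = 0; p < NPROCS; p++) if (load[p] > worst) worst = load[p];
    return worst;
}

int main(void)
{
    double cost[NTASKS];
    int assign[NTASKS];
    for (int t = 0; t < NTASKS; t++) {
        cost[t] = 1 + rand() % 10;   /* synthetic task costs */
        assign[t] = t % NPROCS;      /* naive initial schedule */
    }
    double cur = makespan(cost, assign);
    for (double temp = 10.0; temp > 0.01; temp *= 0.95) {
        int t = rand() % NTASKS, old = assign[t];
        assign[t] = rand() % NPROCS;              /* random neighbor move */
        double delta = makespan(cost, assign) - cur;
        /* Always accept improvements; accept uphill moves with
         * probability exp(-delta/temp) to escape local minima. */
        if (delta <= 0 || exp(-delta / temp) > (double)rand() / RAND_MAX)
            cur += delta;
        else
            assign[t] = old;                      /* reject: undo move */
    }
    printf("final makespan: %.1f\n", cur);
    return 0;
}
```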

Compiler Technology - Tools, Translators and Language Implementation (Paperback, Softcover reprint of the original 1st ed. 1997)
Derek Beng Kee Kiong
R3,995 Discovery Miles 39 950 Ships in 18 - 22 working days

Compiler technology is fundamental to computer science since it provides the means to implement many other tools. It is interesting that, in fact, many tools have a compiler framework - they accept input in a particular format, perform some processing and present output in another format. Such tools support the abstraction process and are crucial to productive systems development. The focus of Compiler Technology: Tools, Translators and Language Implementation is to enable quick development of analysis tools. Both lexical scanner and parser generator tools are provided as supplements to this book, since a hands-on approach to experimentation with a toy implementation aids in understanding abstract topics such as parse-trees and parse conflicts. Furthermore, it is through hands-on exercises that one discovers the particular intricacies of language implementation. Compiler Technology: Tools, Translators and Language Implementation is suitable as a textbook for an undergraduate or graduate level course on compiler technology, and as a reference for researchers and practitioners interested in compilers and language implementation.
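
In that hands-on spirit, here is a toy hand-written scanner in C for identifiers, numbers, and punctuation (illustrative only; the book's supplementary tools generate such scanners from specifications rather than coding them by hand):

```c
/* Toy lexical scanner: splits an input string into identifier,
 * number, and punctuation tokens. The token set is illustrative. */
#include <ctype.h>
#include <stdio.h>

typedef enum { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_EOF } TokKind;

static TokKind next_token(const char **p, char buf[], int cap)
{
    int n = 0;
    while (isspace((unsigned char)**p)) (*p)++;     /* skip blanks */
    if (**p == '\0') return TOK_EOF;
    if (isalpha((unsigned char)**p)) {              /* identifier */
        while (isalnum((unsigned char)**p) && n < cap - 1) buf[n++] = *(*p)++;
        buf[n] = '\0';
        return TOK_IDENT;
    }
    if (isdigit((unsigned char)**p)) {              /* number */
        while (isdigit((unsigned char)**p) && n < cap - 1) buf[n++] = *(*p)++;
        buf[n] = '\0';
        return TOK_NUMBER;
    }
    buf[0] = *(*p)++;                               /* single punctuation */
    buf[1] = '\0';
    return TOK_PUNCT;
}

int main(void)
{
    const char *src = "x1 = 42 + y;";
    char buf[64];
    for (TokKind k; (k = next_token(&src, buf, sizeof buf)) != TOK_EOF; )
        printf("kind %d: %s\n", (int)k, buf);
    return 0;
}
```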

Multiprocessor Execution of Logic Programs (Paperback, Softcover reprint of the original 1st ed. 1994)
Gopal Gupta
R4,003 Discovery Miles 40 030 Ships in 18 - 22 working days

Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.

Multithreaded Computer Architecture: A Summary of the State of the Art (Paperback, Softcover reprint of the original 1st ed. 1994)
Robert A. Iannucci, Guang R. Gao, Robert H. Halstead Jr, Burton Smith
R5,177 Discovery Miles 51 770 Ships in 18 - 22 working days

Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to `traditional' approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which have in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.

Real-Time Database Systems - Issues and Applications (Paperback, Softcover reprint of the original 1st ed. 1997)
Azer Bestavros, Kwei-Jay Lin, Sang Hyuk Son
R5,168 Discovery Miles 51 680 Ships in 18 - 22 working days

Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.

Matrix Computations on Systolic-Type Arrays (Paperback, Softcover reprint of the original 1st ed. 1992)
Jaime Moreno, Tomas Lang
R4,016 Discovery Miles 40 160 Ships in 18 - 22 working days

Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and the impact on overall performance and cost. A system which allows for the analysis of methods for the design/mapping of matrix algorithms is also presented. This method identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options as well as by a-priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (suitable for a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.

Scalable Shared Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1992)
Michel Dubois, Shreekant S. Thakkar
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories: 1. Access Order and Synchronization; 2. Performance; 3. Cache Protocols and Architectures; 4. Distributed Shared Memory. Particular topics on which new ideas and results are presented in these proceedings include: efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.

Modeling Microprocessor Performance (Paperback, Softcover reprint of the original 1st ed. 1998)
Bibiche Geuskens, Kenneth Rose
R2,632 Discovery Miles 26 320 Ships in 18 - 22 working days

Modeling Microprocessor Performance focuses on the development of a design and evaluation tool, named RIPE (Rensselaer Interconnect Performance Estimator). This tool analyzes the impact on wireability, clock frequency, power dissipation, and the reliability of single chip CMOS microprocessors as a function of interconnect, device, circuit, design and architectural parameters. It can accurately predict the overall performance of existing microprocessor systems. For the three major microprocessor architectures, DEC, PowerPC and Intel, the results have shown agreement within 10% on key parameters. The models cover a broad range of issues that relate to the implementation and performance of single chip CMOS microprocessors. The book contains a detailed discussion of the various models and the underlying assumptions based on actual design practices. As such, RIPE and its models provide an insightful tool into single chip microprocessor design and its performance aspects. At the same time, it provides design and process engineers with the capability to model, evaluate, compare and optimize single chip microprocessor systems using advanced technology and design techniques at an early design stage without costly and time consuming implementation. RIPE and its models demonstrate the factors which must be considered when estimating tradeoffs in device and interconnect technology and architecture design on microprocessor performance.

Synchronization Design for Digital Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Teresa H. Meng
R2,628 Discovery Miles 26 280 Ships in 18 - 22 working days

Synchronization is one of the important issues in digital system design. While other approaches have always been intriguing, up until now synchronous operation using a common clock has been the dominant design philosophy. However, we have reached the point, with advances in technology, where other options should be given serious consideration. This is because the clock periods are getting much smaller in relation to the interconnect propagation delays, even within a single chip and certainly at the board and backplane level. To a large extent, this problem can be overcome with careful clock distribution in synchronous design, and tools for computer-aided design of clock distribution. However, this places global constraints on the design, making it necessary, for example, to redesign the clock distribution each time any part of the system is changed. In this book, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital system design and in digital communications, the latter field being relevant because large propagation delays have always been a dominant consideration in design. While synchronous design is discussed and contrasted to the other techniques in Chapter 6, the dominant theme of this book is alternative approaches.

Arrays, Functional Languages, and Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Lenore M. Restifo Mullin; Contributions by Michael Jenkins, Gaetan Hains, Robert Bernecky, Guang R. Gao
R4,023 Discovery Miles 40 230 Ships in 18 - 22 working days

During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's Array Theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and λ-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed, since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.

Image and Text Compression (Paperback, Softcover reprint of the original 1st ed. 1992)
James A. Storer
R4,031 Discovery Miles 40 310 Ships in 18 - 22 working days

This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of interest:
* Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, digital communications, databases, and supercomputing.
* The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding.
* Contributors are all accomplished researchers.
* Extensive bibliography to aid researchers in the field.
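
As a taste of the dictionary (LZ) methods covered in Part 2, the sketch below is a minimal LZ78-style encoder in C that emits (dictionary index, next character) pairs; the linear-search dictionary and the sample string are simplifications for illustration:

```c
/* Minimal LZ78-style encoder: grow the current phrase while it is
 * in the dictionary, then emit (index of phrase, next char) and
 * add the extended phrase. Index 0 denotes the empty phrase.
 * A real implementation would use a trie, not linear search. */
#include <stdio.h>
#include <string.h>

#define MAXDICT 256
#define MAXLEN  64

static int find(char dict[][MAXLEN], int n, const char *s)
{
    for (int i = 0; i < n; i++)
        if (strcmp(dict[i], s) == 0) return i + 1;   /* 1-based index */
    return 0;                                        /* not found */
}

int main(void)
{
    const char *src = "abababababa";
    char dict[MAXDICT][MAXLEN];
    char phrase[MAXLEN] = "", trial[MAXLEN];
    int ndict = 0;

    for (const char *p = src; *p; p++) {
        snprintf(trial, sizeof trial, "%s%c", phrase, *p);
        if (find(dict, ndict, trial) && *(p + 1)) {
            strcpy(phrase, trial);                   /* keep extending */
        } else {
            printf("(%d,%c)\n", find(dict, ndict, phrase), *p);
            if (ndict < MAXDICT) strcpy(dict[ndict++], trial);
            phrase[0] = '\0';                        /* restart phrase */
        }
    }
    return 0;
}
```

Decoding inverts each pair by copying the indexed phrase and appending the character, so the dictionary is rebuilt on the fly and never transmitted.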

Quality by Design for Electronics (Paperback, Softcover reprint of the original 1st ed. 1996)
W. Fleischammer
R4,682 Discovery Miles 46 820 Ships in 18 - 22 working days

This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. Electronics, especially digital electronics, is involved in nearly all expanding branches of industry, and the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increasing cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers, and is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.

Designing TSVs for 3D Integrated Circuits (Paperback, 2013)
Nauman Khan, Soha Hassoun
R1,622 Discovery Miles 16 220 Ships in 18 - 22 working days

This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real-estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block-level behaviors and a floorplan are available. Finally, the authors explore the use of carbon nanotubes for power grid design as a futuristic alternative to copper.

Formal Techniques in Real-Time and Fault-Tolerant Systems (Paperback, Softcover reprint of the original 1st ed. 1993)
Jan Vytopil
R3,993 Discovery Miles 39 930 Ships in 18 - 22 working days

Formal Techniques in Real-Time and Fault-Tolerant Systems focuses on the state of the art in formal specification, development and verification of fault-tolerant computing systems. The term `fault-tolerance' refers to a system having properties which enable it to deliver its specified function despite (certain) faults of its subsystem. Fault-tolerance is achieved by adding extra hardware and/or software which corrects the effects of faults. In this sense, a system can be called fault-tolerant if it can be proved that the resulting (extended) system under some model of reliability meets the reliability requirements. The main theme of Formal Techniques in Real-Time and Fault-Tolerant Systems can be formulated as follows: how do the specification, development and verification of conventional and fault-tolerant systems differ? How do the notations, methodology and tools used in design and development of fault-tolerant and conventional systems differ? Formal Techniques in Real-Time and Fault-Tolerant Systems is divided into two parts. The chapters in Part One set the stage for what follows by defining the basic notions and practices of the field of design and specification of fault-tolerant systems. The chapters in Part Two represent the `how-to' section, containing examples of the use of formal methods in specification and development of fault-tolerant systems. The book serves as an excellent reference for researchers in both academia and industry, and may be used as a text for advanced courses on the subject.

Hierarchical Scheduling in Parallel and Cluster Systems (Paperback, Softcover reprint of the original 1st ed. 2003)
Sivarama Dandamudi
R4,007 Discovery Miles 40 070 Ships in 18 - 22 working days

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access of memory to all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
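
The single-address-space model that UMA and NUMA machines provide can be illustrated with a short, generic pthreads sketch (not from the book): every thread reads slices of one global array, whereas on a distributed-memory machine the array would be partitioned and the slices exchanged by message passing:

```c
/* Shared-address-space sketch: four threads sum one global array.
 * Compile with -lpthread. Sizes are illustrative. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];
static double partial[NTHREADS];

static void *worker(void *arg)
{
    long id = (long)arg;
    double s = 0;
    /* Every thread addresses the SAME global array directly;
     * no explicit data movement is needed. */
    for (long i = id; i < N; i += NTHREADS) s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t th[NTHREADS];
    for (long i = 0; i < N; i++) data[i] = 1.0;
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&th[t], NULL, worker, (void *)t);
    double total = 0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(th[t], NULL);
        total += partial[t];
    }
    printf("sum = %.0f\n", total);   /* expect 1000000 */
    return 0;
}
```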

Languages, Compilers and Run-Time Systems for Scalable Computers (Paperback, Softcover reprint of the original 1st ed. 1996)
Boleslaw K. Szymanski, Balaram Sinharoy
R4,028 Discovery Miles 40 280 Ships in 18 - 22 working days

Languages, Compilers and Run-Time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and the exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.

Active Middleware Services - From the Proceedings of the 2nd Annual Workshop on Active Middleware Services (Paperback, 2000 ed.)
Salim Hariri, Craig A. Lee, Cauligi S. Raghavendra
R2,639 Discovery Miles 26 390 Ships in 18 - 22 working days

The papers in this volume were presented at the Second Annual Workshop on Active Middleware Services and were selected for inclusion here by the editors. The AMS workshop was organized with support from both the National Science Foundation and the CAT center at the University of Arizona, and was held in Pittsburgh, Pennsylvania, on August 1, 2000, in conjunction with the 9th IEEE International Symposium on High Performance Distributed Computing (HPDC-9). The explosive growth of Internet-based applications and the proliferation of networking technologies has been transforming most areas of computer science and engineering as well as computational science and commercial application areas. This opens an outstanding opportunity to explore new, Internet-oriented software technologies that will open new research and application opportunities not only for the multimedia and commercial world, but also for the scientific and high-performance computing applications community. Two emerging technologies - agents and active networks - allow increased programmability to enable bringing new services to Internet-based applications. The AMS workshop presented research results and working papers in the areas of active networks, mobile and intelligent agents, software tools for high performance distributed computing, network operating systems, and application programming models and environments. The success of an endeavor such as this depends on the contributions of many individuals. We would like to thank Dr. Frederica Darema and the NSF for sponsoring the workshop.

VLSI Design Methodologies for Digital Signal Processing Architectures (Paperback, Softcover reprint of the original 1st ed. 1994)
Magdy A. Bayoumi
R5,176 Discovery Miles 51 760 Ships in 18 - 22 working days

Designing VLSI systems represents a challenging task. It is a transformation among different specifications corresponding to different levels of design: abstraction, behavioral, structural and physical. The behavioral level describes the functionality of the design. It consists of two components: static and dynamic. The static component describes operations, whereas the dynamic component describes sequencing and timing. The structural level contains information about components, control and connectivity. The physical level describes the constraints that should be imposed on the floor plan, the placement of components, and the geometry of the design. Constraints of area, speed and power are also applied at this level. To implement such a multilevel transformation, a design methodology should be devised, taking into consideration the constraints, limitations and properties of each level. The mapping process between any of these domains is non-isomorphic. A single behavioral component may be transformed into more than one structural component. Design methodologies are the most recent evolution in the design automation era, which started off with the introduction and subsequent usage of module generation, especially for regular structures such as PLAs and memories. A design methodology should offer an integrated design system rather than a set of separate unrelated routines and tools. A general outline of a desired integrated design system is as follows:
* Decide on a certain unified framework for all design levels.
* Derive a design method based on this framework.
* Create a design environment to implement this design method.

Distributed and Parallel Systems - From Instruction Parallelism to Cluster Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Peter Kacsuk, Gabriele Kotsis
R5,131 Discovery Miles 51 310 Ships in 18 - 22 working days

Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.

A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 2003)
Ian N. Dunn, Gerard G.L. Meyer
R2,607 Discovery Miles 26 070 Ships in 18 - 22 working days

Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
