Computer architecture & logic design

The Origins of Digital Computers - Selected Papers (Paperback, 3rd ed. 1982. Softcover reprint of the original 3rd ed. 1982)
B. Randell
R5,853 Discovery Miles 58 530 Ships in 10 - 15 working days
A VLSI Architecture for Concurrent Data Structures (Paperback, Softcover reprint of the original 1st ed. 1987)
J W Dally
R4,477 Discovery Miles 44 770 Ships in 10 - 15 working days

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
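To make the Gray-code mapping concrete, here is a minimal Python sketch (our illustration, not code from the book; the names gray and gray_inverse are ours) of the reflected binary Gray code that assigns set partitions to subcubes, together with a check of the distance property that VW search relies on: consecutive indices map to cube nodes that differ in exactly one bit, i.e. adjacent partitions sit on adjacent cube nodes.

    def gray(i):
        # Reflected binary Gray code of index i.
        return i ^ (i >> 1)

    def gray_inverse(g):
        # Invert the Gray code by XOR-folding the shifted bits back in.
        i = 0
        while g:
            i ^= g
            g >>= 1
        return i

    # Distance property: neighbours in index order are neighbours on the cube,
    # i.e. their node labels differ in exactly one bit (one cube dimension).
    for i in range(15):
        assert bin(gray(i) ^ gray(i + 1)).count("1") == 1
        assert gray_inverse(gray(i)) == i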

Memory Performance of Prolog Architectures (Paperback, Softcover reprint of the original 1st ed. 1988)
Evan Tick
R4,473 Discovery Miles 44 730 Ships in 10 - 15 working days

One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

Adiabatic Logic - Future Trend and System Level Perspective (Hardcover, 2012)
Philip Teichmann
R2,956 Discovery Miles 29 560 Ships in 10 - 15 working days

Adiabatic logic is a potential successor to static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments, like the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts, will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of the technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as well as degradation-related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS also in productivity. On a 130nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic, utilizing a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors compliant to the values gained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to compatibility not only with CMOS technology, but also with electronic design automation (EDA) tools developed for static CMOS system design.
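For orientation, the first-order relation behind the energy saving potential and optimum operating frequency mentioned above (a standard textbook result for adiabatic charging, not a formula quoted from this book) is:

    E_adiabatic ≈ (RC / T) · C · V_dd²        versus        E_CMOS = ½ · C · V_dd²

where R is the resistance of the charging path, C the node capacitance, T the ramp time of the power-clock, and V_dd the supply swing. Dissipation falls below the static CMOS figure once T exceeds roughly 2RC, which is why the slowly oscillating power-clock and its frequency are central design parameters for adiabatic circuits.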

Multi-Microprocessor Systems for Real-Time Applications (Paperback, Softcover reprint of the original 1st ed. 1985)
Gianni Conte, Dante Del Corso
R4,490 Discovery Miles 44 900 Ships in 10 - 15 working days

The continuous development of computer technology supported by the VLSI revolution stimulated the research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility to obtain a significant processing power together with the improvement of price/performance, reliability and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related with the development of multiprocessor systems are addressed in this book. The covered range spans from the electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are out of the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R8,667 Discovery Miles 86 670 Ships in 10 - 15 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linköping Institute of Technology, Sweden in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world-leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures over the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which has been running during the last three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject, which would be available to both the participating students and to the students of the "next generation."

Computer Architecture - Proceedings of the NATO Advanced Study Institute held in St. Raphael, France, 12-24 September, 1976 (Paperback, Softcover reprint of the original 1st ed. 1977)
G. Boulaye, T. R. Lewin
R1,561 Discovery Miles 15 610 Ships in 10 - 15 working days

This book presents as formal papers nearly all of the lectures given at the NATO Advanced Summer Institute on Computer Architecture held at St. Raphael, France from September 12th-24th 1976. It was not possible to include an important paper by G. Amdahl on the 470V/6 system, nor papers by Mme. A. Recoque on distributed processing, Messrs. A. Maison and G. Debruyne on LSI technology, and K. Bowden. Computer architecture is a very diverse and expanding subject; consequently it was decided to limit the scope of the School to five main subject areas. These were: specific computer architectures, language orientated machines, associative processing, computer networks, and specification and design methods. In addition an overall emphasis was placed on distributed and parallel processing and the need for an integrated hardware-software approach to design. Though some introductory material is included, this book is primarily intended for workers in the field of computer science and engineering who wish to update themselves on current topics in computer architecture. The main work of the School is well reflected in the collected papers, but it is impossible to convey the benefits obtained from the discussion groups and the continuous dialogue that was maintained throughout the School. The Editors would like to acknowledge with thanks the support of the NATO Scientific Affairs Division, who financed the School, and the European Research Office of the U.S. Army and the National Science Foundation for providing travel grants.

Switching Machines - Volume 2: Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972)
J.P. Perrin, M. Denouette, E. Daclin
R4,531 Discovery Miles 45 310 Ships in 10 - 15 working days
Database Machines and Knowledge Base Machines (Paperback, Softcover reprint of the original 1st ed. 1988)
Masaru Kitsuregawa, Hidehiko Tanaka
R8,686 Discovery Miles 86 860 Ships in 10 - 15 working days

This volume contains the papers presented at the Fifth International Workshop on Database Machines. The papers cover a wide spectrum of topics on Database Machines and Knowledge Base Machines. Reports of major projects, ECRC, MCC, and ICOT are included. Topics on DBM cover new database machine architectures based on vector processing and hypercube parallel processing, VLSI-oriented architecture, filter processors, sorting machines, concurrency control mechanisms for DBM, main memory databases, interconnection networks for DBM, and performance evaluation. In this workshop much more attention was given to knowledge base management as compared to the previous four workshops. Many papers discuss deductive database processing. Architectures for semantic networks, Prolog, and production systems were also proposed. We would like to express our deep thanks to all those who contributed to the success of the workshop. We would also like to express our appreciation for the valuable suggestions given to us by Prof. D. K. Hsiao, Prof. D.

Advances in Randomized Parallel Computing (Paperback, Softcover reprint of the original 1st ed. 1999)
Panos M. Pardalos, Sanguthevar Rajasekaran
R4,496 Discovery Miles 44 960 Ships in 10 - 15 working days

The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as how valid the assumption made on the input space is. Randomized algorithms achieve superior performances without making any assumptions on the inputs by making coin flips within the algorithm. Any analysis done of randomized algorithms will be valid for all possible inputs.
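As an illustration of the coin-flip idea described above (a generic sketch, not code from the book), choosing the quicksort pivot at random makes the O(n log n) bound hold in expectation for every input, with no assumption that all permutations are equally likely:

    import random

    def randomized_quicksort(a):
        # The only randomness is the pivot choice; expected time is
        # O(n log n) for *every* input, not just for random inputs.
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        return (randomized_quicksort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + randomized_quicksort([x for x in a if x > pivot]))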

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,564 Discovery Miles 15 640 Ships in 10 - 15 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

Data Organization in Parallel Computers (Paperback, Softcover reprint)
Harry A.G. Wijshoff
R2,948 Discovery Miles 29 480 Ships in 10 - 15 working days

The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector-computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
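A minimal example of a skewing scheme in the sense described above (our illustration, not one of the book's schemes): rotating each row of an M x M matrix by its row index lets both rows and columns be fetched from M memory banks without conflicts.

    M = 4

    def bank(i, j):
        # Row-rotation skewing scheme: element (i, j) lives in bank (i + j) mod M.
        return (i + j) % M

    # Any single row or column touches each of the M banks exactly once,
    # so both access patterns are conflict-free.
    for k in range(M):
        assert {bank(k, j) for j in range(M)} == set(range(M))  # row k
        assert {bank(i, k) for i in range(M)} == set(range(M))  # column k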

Switching Machines - Volume 1: Combinational Systems, Introduction to Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972)
J.P. Perrin, M. Denouette, E. Daclin
R1,596 Discovery Miles 15 960 Ships in 10 - 15 working days

We shall begin this brief section with what we consider to be its objective. It will be followed by the main outline and then concluded by a few notes as to how this work should be used. Although logical systems have been manufactured for some time, the theory behind them is quite recent. Without going into historical digressions, we simply remark that the first comprehensive ideas on the application of Boolean algebra to logical systems appeared in the 1930's. These systems appeared in telephone exchanges and were realized with relays. It is only around 1955 that many articles and books trying to systematize the study of such automata appeared. Since then, the theory has advanced regularly, but not in a way which satisfies those concerned with practical applications. What is serious is that, aside from the books by Caldwell (which dates already from 1958), Marcus, and P. Naslin (in France), few works have been published which try to gather and unify results which can be used by the practising engineer; this is the objective of the present volumes.

VLSI for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1989)
Jose G. Delgado-Frias, Will Moore
R2,956 Discovery Miles 29 560 Ships in 10 - 15 working days

This book is an edited selection of the papers presented at the International Workshop on VLSI for Artificial Intelligence which was held at the University of Oxford in July 1988. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the Alvey Directorate, the IEE and the IEEE Computer Society for publicising the event and to Oxford University for their active support. We are particularly grateful to David Cawley and Paula Appleby for coping with the administrative problems. Jose Delgado-Frias, Will Moore, October 1988. Programme Committee: Igor Aleksander, Imperial College (UK); Yves Bekkers, IRISA/INRIA (France); Michael Brady, University of Oxford (UK); Jose Delgado-Frias, University of Oxford (UK); Steven Krueger, Texas Instruments Inc. (USA); Simon Lavington, University of Essex (UK); Will Moore, University of Oxford (UK); Philip Treleaven, University College London (UK); Benjamin Wah, University of Illinois (USA). Prologue: Research on architectures dedicated to artificial intelligence (AI) processing has been increasing in recent years, since conventional data- or numerically-oriented architectures are not able to provide the computational power and/or functionality required. For the time being these architectures have to be implemented in VLSI technology with its inherent constraints on speed, connectivity, fabrication yield and power. This in turn impacts on the effectiveness of the computer architecture.

Microarchitecture of VLSI Computers (Paperback, Softcover reprint of the original 1st ed. 1985)
P. Antognetti, F. Anceau, J. Vuillemin
R1,558 Discovery Miles 15 580 Ships in 10 - 15 working days

We are about to enter a period of radical change in computer architecture. It is made necessary by advances in processing technology that will make it possible to build devices exceeding in performance and complexity anything conceived in the past. These advances, the logical extension of large- to very-large-scale integration (VLSI), are all but inevitable. With the large number of switching elements available in a single chip as promised by VLSI technology, the question that arises naturally is: What can we do with this technology and how can we best utilize it? The final answer, whatever it may be, will be based on architectural concepts that probably will depart, in several cases, from past and present practices. Furthermore, as we continue to build increasingly powerful microprocessors permitted by VLSI process advances, the method of efficiently interconnecting them will become more and more important. In fact one serious drawback of VLSI technology is the limited number of pins on each chip. While VLSI chips provide an exponentially growing number of gates, the number of pins they provide remains almost constant. As a result communication becomes a very difficult design problem in the interconnection of VLSI chips. Due to the insufficient communication power and the high design cost of VLSI chips, computer systems employing VLSI technology will thus need to employ many architectural concepts that depart sharply from past and present practices.

A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989)
Monica S. Lam
R2,936 Discovery Miles 29 360 Ships in 10 - 15 working days

This book is a revision of my Ph.D. thesis dissertation submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to the present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism; interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.

The Art of Hardware Architecture - Design Methods and Techniques for Digital Circuits (Hardcover, 2012)
Mohit Arora
R4,246 Discovery Miles 42 460 Ships in 10 - 15 working days

This book highlights the complex issues, tasks and skills that must be mastered by an IP designer in order to design an optimized and robust digital circuit to solve a problem. The techniques and methodologies described can serve as a bridge between specifications that are known to the designer and the RTL code that is the final outcome, significantly reducing the time it takes to convert initial ideas and concepts into right-first-time silicon. Coverage focuses on real problems rather than theoretical concepts, with an emphasis on design techniques across various aspects of chip design.

A Systolic Array Parallelizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1990)
Ping-Sheng Tseng
R2,912 Discovery Miles 29 120 Ships in 10 - 15 working days

Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine grain and high bandwidth interprocessor communication capabilities in a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.

Self-Timed Control of Concurrent Processes - The Design of Aperiodic Logical Circuits in Computers and Discrete Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Victor I. Varshavsky
R2,999 Discovery Miles 29 990 Ships in 10 - 15 working days

'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.

Application Specific Processors (Paperback, Softcover reprint of the original 1st ed. 1997)
Earl E. Swartzlander Jr
R2,949 Discovery Miles 29 490 Ships in 10 - 15 working days

Application Specific Processors is written for use by engineers who are developing specialized systems (application specific systems). Traditionally, most high performance signal processors have been realized with application specific processors. The explanation is that application specific processors can be tailored to exactly match the (usually very demanding) application requirements. The result is that no processing power is wasted on unnecessary capabilities and maximum performance is achieved. A disadvantage is that such processors have been expensive to design, since each is a unique design that is customized to the specific application. In the last decade, computer-aided design systems have been developed to facilitate the development of application specific integrated circuits. The success of such ASIC CAD systems suggests that it should be possible to streamline the process of application specific processor design. Application Specific Processors consists of eight chapters which provide a mixture of techniques and examples that relate to application specific processing. The inclusion of techniques is expected to suggest additional research and to assist those who are faced with the requirement to implement efficient application specific processors. The examples illustrate the application of the concepts and demonstrate the efficiency that can be achieved via application specific processors. The chapters were written by members and former members of the application specific processing group at the University of Texas at Austin. The first five chapters relate to specific arithmetic which often is the key to achieving high performance in application specific processors. The next two chapters focus on signal processing systems, and the final chapter examines the interconnection of possibly disparate elements to create systems.

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,915 Discovery Miles 29 150 Ships in 10 - 15 working days

Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
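A toy illustration of the point that legal schedules are not interchangeable once completion-time constraints enter (a hypothetical sketch: the transaction fields and the earliest-deadline-first ordering are ours, not the authors' model). Both serial orders below are logically correct, but only one meets every deadline:

    # Two transactions, each with an execution time and a completion deadline.
    txns = [
        {"name": "T1", "exec_time": 5, "deadline": 12},
        {"name": "T2", "exec_time": 4, "deadline": 5},
    ]

    def meets_deadlines(order):
        # Run the transactions serially and check each completion time.
        clock = 0
        for t in order:
            clock += t["exec_time"]
            if clock > t["deadline"]:
                return False
        return True

    print(meets_deadlines(txns))  # T1 then T2: False (T2 finishes at 9 > 5)
    print(meets_deadlines(sorted(txns, key=lambda t: t["deadline"])))  # EDF order: True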

Design It!: Pragmatic Programmers (Paperback)
Michael Keeling
R1,167 R761 Discovery Miles 7 610 Save R406 (35%) Ships in 12 - 17 working days

Don't engineer by coincidence-design it like you mean it! Filled with practical techniques, Design It! is the perfect introduction to software architecture for programmers who are ready to grow their design skills. Lead your team as a software architect, ask the right stakeholders the right questions, explore design options, and help your team implement a system that promotes the right -ilities. Share your design decisions, facilitate collaborative design workshops that are fast, effective, and fun-and develop more awesome software! With dozens of design methods, examples, and practical know-how, Design It! shows you how to become a software architect. Walk through the core concepts every architect must know, discover how to apply them, and learn a variety of skills that will make you a better programmer, leader, and designer. Uncover the big ideas behind software architecture and gain confidence working on projects big and small. Plan, design, implement, and evaluate software architectures and collaborate with your team, stakeholders, and other architects. Identify the right stakeholders and understand their needs, dig for architecturally significant requirements, write amazing quality attribute scenarios, and make confident decisions. Choose technologies based on their architectural impact, facilitate architecture-centric design workshops, and evaluate architectures using lightweight, effective methods. Write lean architecture descriptions people love to read. Run an architecture design studio, implement the architecture you've designed, and grow your team's architectural knowledge. Good design requires good communication. Talk about your software architecture with stakeholders using whiteboards, documents, and code, and apply architecture-focused design methods in your day-to-day practice. Hands-on exercises, real-world scenarios, and practical team-based decision-making tools will get everyone on board and give you the experience you need to become a confident software architect.

Parallel Machines: Parallel Machine Languages - The Emergence of Hybrid Dataflow Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 1990)
Robert A. Iannucci
R4,465 Discovery Miles 44 650 Ships in 10 - 15 working days

It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.

Input/Output in Parallel and Distributed Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Ravi Jain, John Werth, James C. Browne
R5,795 Discovery Miles 57 950 Ships in 10 - 15 working days

Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.

Automatic Performance Prediction of Parallel Programs (Paperback, Softcover reprint of the original 1st ed. 1996)
Thomas Fahringer
R2,957 Discovery Miles 29 570 Ships in 10 - 15 working days

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
