Parallel Algorithms and Architectures for DSP Applications (Paperback, Softcover reprint of the original 1st ed. 1991)
Magdy A. Bayoumi
R2,654 Discovery Miles 26 540 Ships in 18 - 22 working days

Over the past few years, the demand for high speed Digital Signal Processing (DSP) has increased dramatically. New applications in real-time image processing, satellite communications, radar signal processing, pattern recognition, and real-time signal detection and estimation require major improvements at several levels: algorithmic, architectural, and implementation. These performance requirements can be achieved by employing parallel processing at all levels. Very Large Scale Integration (VLSI) technology supports and provides a good avenue for parallelism. Parallelism offers efficient solutions to several problems which can arise in VLSI DSP architectures, such as: 1. Intermediate data communication and routing: several DSP algorithms, such as the FFT, involve excessive data routing and reordering. Parallelism is an efficient mechanism to minimize the silicon cost and speed up the processing time of the intermediate middle stages. 2. Complex DSP applications: the required computation is almost doubled. Parallelism will allow two similar channels to process at the same time; the communication between the two channels has to be minimized. 3. Application-specific systems: this emerging approach should achieve real-time performance in a cost-effective way. 4. Testability and fault tolerance: reliability has become a required feature in most DSP systems. To achieve such a property, the involved time overhead is significant. Parallelism may be the solution to maintain acceptable speed performance.

A Formal Approach to Hardware Design (Paperback, Softcover reprint of the original 1st ed. 1994)
Jorgen Staunstrup
R4,003 Discovery Miles 40 030 Ships in 18 - 22 working days

A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
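
The central idea of the Synchronized Transitions style described above, a design expressed as a set of independent guarded transitions that fire atomically whenever their conditions hold, can be sketched informally. The snippet below is a minimal Python simulation of that guarded-command flavour, not the Synchronized Transitions language or its tools; the example state and transitions are made up.

```python
import random

# A miniature guarded-transition simulator (illustrative only; this mimics the
# guarded-command flavour of Synchronized Transitions, not its actual syntax).
# Each transition is a pair (guard, action): when the guard holds in the current
# state, the action may fire as one atomic update.

def run(state, transitions, steps=100):
    for _ in range(steps):
        enabled = [t for t in transitions if t[0](state)]
        if not enabled:
            break                                  # stable: nothing can fire
        guard, action = random.choice(enabled)     # arbitrary interleaving
        action(state)
    return state

# Example: two independent transitions keeping c = a AND b and d = NOT c.
state = {"a": 1, "b": 1, "c": 0, "d": 0}
transitions = [
    (lambda s: s["c"] != (s["a"] & s["b"]),
     lambda s: s.update(c=s["a"] & s["b"])),
    (lambda s: s["d"] != (1 - s["c"]),
     lambda s: s.update(d=1 - s["c"])),
]
print(run(state, transitions))   # settles at {'a': 1, 'b': 1, 'c': 1, 'd': 0}
```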

TRON Project 1987 Open-Architecture Computer Systems - Proceedings of the Third TRON Project Symposium (Paperback, Softcover reprint of the original 1st ed. 1987)
Ken Sakamura
R1,428 Discovery Miles 14 280 Ships in 18 - 22 working days

Almost 4 years have elapsed since Dr. Ken Sakamura of The University of Tokyo first proposed the TRON (the realtime operating system nucleus) concept and 18 months since the foundation of the TRON Association on 16 June 1986. Members of the Association from Japan and overseas currently exceed 80 corporations. The TRON concept, as advocated by Dr. Ken Sakamura, is concerned with the problem of interaction between man and the computer (the man-machine interface), which had not previously been given a great deal of attention. Dr. Sakamura has gone back to basics to create a new and complete cultural environment relative to computers and envisage a role for computers which will truly benefit mankind. This concept has indeed caused a stir in the computer field. The scope of the research work involved was initially regarded as being so extensive and diverse that the completion of activities was scheduled for the 1990s. However, I am happy to note that the enthusiasm expressed by individuals and organizations both within and outside Japan has permitted acceleration of the research and development activities. It is to be hoped that the presentations of the Third TRON Project Symposium will further the progress toward the creation of a computer environment that will be compatible with the aspirations of mankind.

Computer Systems and Software Engineering - State-of-the-art (Paperback, Softcover reprint of the original 1st ed. 1992)
Patrick de Wilde, Joos P.L. Vandewalle
R4,055 Discovery Miles 40 550 Ships in 18 - 22 working days

Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for both researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Paperback, Softcover reprint of the original 1st ed. 1993)
David E. Hudak, Santosh G. Abraham
R2,622 Discovery Miles 26 220 Ships in 18 - 22 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, exemplified by cache coherency traffic and global memory access in multiprocessors with a logically shared address space and physically distributed memory. This book presents techniques for reducing that overhead; they can be used by scientific application designers seeking to optimize code for a particular high-performance computer, and they are a necessary step toward developing software that supports efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and used in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimal cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantages of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
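
The core tradeoff described here, block-style partitions that minimize near-neighbor communication versus cyclic partitions that balance a linearly varying workload, can be illustrated with a small sketch. The cost model and parameters below are made up for illustration and are not the book's ADP or CPR algorithms.

```python
# Illustrative sketch (not the book's ADP/CPR algorithms): compare a block and a
# cyclic partition of a data-parallel loop whose per-iteration work grows
# linearly, i.e. work(i) = i, with near-neighbor communication between
# iterations owned by different processors. All parameters here are made up.

N, P = 1024, 8
work = lambda i: i                      # linearly varying computation

def cost(owner):
    load = [0.0] * P
    comm = 0
    for i in range(N):
        load[owner(i)] += work(i)
        if i and owner(i) != owner(i - 1):   # neighbor lives on another processor
            comm += 1
    return max(load), comm              # parallel time ~ max load, plus messages

block  = lambda i: i * P // N           # contiguous chunks: low comm, poor balance
cyclic = lambda i: i % P                # round-robin: good balance, high comm

print("block :", cost(block))           # -> (122816.0, 7)
print("cyclic:", cost(cyclic))          # -> (65920.0, 1023)
```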

Multiprocessing - Trade-Offs in Computation and Communication (Paperback, Softcover reprint of the original 1st ed. 1993)
Vijay K. Naik
R2,634 Discovery Miles 26 340 Ships in 18 - 22 working days

Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
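
As a rough illustration of the kind of parameterized, architecture-independent formulation described above, the sketch below estimates the run time of a pipelined computation over an n x n grid DAG from assumed per-node computation and per-message communication rates. The model and all parameter values are illustrative assumptions, not the book's analysis.

```python
# Illustrative only: a generic pipelined-wavefront cost model for an n x n grid
# ("diamond") DAG split into p horizontal strips, with the strips pipelined in
# blocks of b columns. Parameters t_comp (time per node), t_s (message startup)
# and t_w (time per transferred value) are made up; the book derives its own,
# more careful, formulations.

def t_parallel(n, p, b, t_comp=1e-8, t_s=1e-5, t_w=1e-7):
    block = (n // p) * b * t_comp + t_s + b * t_w   # compute a block + send boundary
    return (p - 1) * block + (n // b) * block       # pipeline fill + steady state

n, p = 4096, 16
best_b = min(range(1, n + 1),
             key=lambda b: t_parallel(n, p, b) if n % b == 0 else float("inf"))
print(best_b, t_parallel(n, p, best_b))   # block size trading startup cost against pipeline delay
```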

Real-Time UNIX (R) Systems - Design and Application Guide (Paperback, Softcover reprint of the original 1st ed. 1991)
Borko Furht, Dan Grostick, David Gluch, Guy Rabbat, John Parker, …
R4,003 Discovery Miles 40 030 Ships in 18 - 22 working days

A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.

High Performance Memory Systems (Paperback, Softcover reprint of the original 1st ed. 2004)
Haldun Hadimioglu, David Kaeli, Jeffrey Kuskin, Ashwini Nanda, Josep Torrellas
R1,412 Discovery Miles 14 120 Ships in 18 - 22 working days

The State of Memory Technology. Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate by 2016. This translates to bit densities increasing at two times every two years until the introduction of 8 gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application specific integrated circuits (ASICs)). In the future, molecular devices, 64 gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the Memory Wall or Memory Gap.
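
The growth rates quoted above make the processor-memory gap easy to quantify with back-of-the-envelope arithmetic; the short sketch below simply applies those doubling periods over a decade.

```python
# Quick arithmetic on the growth rates quoted above: CPU speed doubling roughly
# every 1.5 years versus memory speed doubling roughly every 10 years. The
# resulting ratio is the widening "Memory Wall" / "Memory Gap".

years = 10
cpu_speedup    = 2 ** (years / 1.5)    # ~102x over a decade
memory_speedup = 2 ** (years / 10)     # ~2x over a decade
print(f"gap after {years} years: {cpu_speedup / memory_speedup:.0f}x")   # about 51x
```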

Progress in VLSI Design and Test - 16th International Symposium on VLSI Design and Test, VDAT 2012, Shibpur, India, July 1-4, 2012, Proceedings (Paperback, 2012 ed.)
Hafizur Rahaman, Sanatan Chattopadhyay, Santanu Chattopadhyay
R1,445 Discovery Miles 14 450 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 16th International Symposium on VLSI Design and Test, VDAT 2012, held in Shibpur, India, in July 2012. The 30 revised regular papers presented together with 10 short papers and 13 poster sessions were carefully selected from 135 submissions. The papers are organized in topical sections on VLSI design; design and modeling of digital circuits and systems; testing and verification; design for testability; testing memories and regular logic arrays; embedded systems: hardware/software co-design and verification; and emerging technology: nanoscale computing and nanotechnology.

TRON Project 1990 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Ken Sakamura
R1,466 Discovery Miles 14 660 Ships in 18 - 22 working days

I wish to extend my warm greetings to you all on behalf of the TRON Association, on this occasion of the Seventh International TRON Project Symposium. The TRON Project was proposed by Dr. Ken Sakamura of the University of Tokyo, with the aim of designing a new, comprehensive computer architecture that is open to worldwide use. Already more than six years have passed since the project was put in motion. The TRON Association is now made up of over 140 companies and organizations, including 25 overseas firms or their affiliates. A basic goal of TRON Project activities is to offer the world a human-oriented computer culture that will lead to a richer and more fulfilling life for people throughout the world. It is our desire to bring to reality a new order in the world of computers, based on design concepts that consider the needs of human beings first of all, and to enable people to enjoy the full benefits of these computers in their daily life. Thanks to the efforts of Association members, in recent months a number of TRON-specification 32-bit microprocessors have been made available. ITRON-specification products are continuing to appear, and we are now seeing commercial implementations of BTRON specifications as well. The CTRON subproject, meanwhile, is promoting standardization through validation testing and a portability experiment, and products are being marketed by several firms. This is truly a year in which the TRON Project has reached the practical implementation stage.

OpenMP in a Heterogeneous World - 8th International Workshop on OpenMP, IWOMP 2012, Rome, Italy, June 11-13, 2012. Proceedings (Paperback, 2012)
Barbara Chapman, Federico Massaioli, Matthias S. Muller, Marco Rorro
R1,406 Discovery Miles 14 060 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, IWOMP 2012, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.

Software Performability: From Concepts to Applications (Paperback, Softcover reprint of the original 1st ed. 1996)
Ann T. Tai, John F. Meyer, Algirdas Avizienis
R3,991 Discovery Miles 39 910 Ships in 18 - 22 working days

Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the areas of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.
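
As a rough illustration of what a performability measure combines, the sketch below weights the throughput delivered in each dependability state by the long-run probability of being in that state; the states, probabilities and reward rates are invented for illustration and are not one of the book's models.

```python
# A generic (not from the book) performability-style calculation: weight the
# performance delivered in each dependability state by the long-run probability
# of being in that state. States, probabilities and reward rates are made up.

states = {                      # steady-state probability, throughput (reward rate, tasks/s)
    "both software variants healthy": (0.970, 100.0),
    "degraded (one variant failed)":  (0.025, 60.0),
    "system failed":                  (0.005, 0.0),
}
performability = sum(p * reward for p, reward in states.values())
print(f"expected delivered throughput: {performability:.1f} tasks/s")   # 98.5
```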

VLSI Placement and Routing: The PI Project (Paperback, Softcover reprint of the original 1st ed. 1989)
Alan T Sherman
R1,384 Discovery Miles 13 840 Ships in 18 - 22 working days

This book provides a superb introduction to and overview of the MIT PI System for custom VLSI placement and routing. Alan Sherman has done an excellent job of collecting and clearly presenting material that was previously available only in various theses, conference papers, and memoranda. He has provided here a balanced and comprehensive presentation of the key ideas and techniques used in PI, discussing part of his own Ph.D. work (primarily on the placement problem) in the context of the overall design of PI and the contributions of the many other PI team members. I began the PI Project in 1981 after learning first-hand how difficult it is to manually place modules and route interconnections in a custom VLSI chip. In 1980 Adi Shamir, Leonard Adleman, and I designed a custom VLSI chip for performing RSA encryption/decryption [226]. I became fascinated with the combinatorial and algorithmic questions arising in placement and routing, and began active research in these areas. The PI Project was started in the belief that many of the most interesting research issues would arise during an actual implementation effort, and secondarily in the hope that a practically useful tool might result. The belief was well-founded, but I had underestimated the difficulty of building a large easily-used software tool for a complex domain; the PI software should be considered as a prototype implementation validating the design choices made.

Relations and Graphs - Discrete Mathematics for Computer Scientists (Paperback, Softcover reprint of the original 1st ed. 1993)
Gunther Schmidt, Thomas Stroehlein
R2,659 Discovery Miles 26 590 Ships in 18 - 22 working days

Relational methods can be found at various places in computer science, notably in database theory, relational semantics of concurrency, relational type theory, analysis of rewriting systems, and modern programming language design. In addition, they appear in algorithms analysis and in the bulk of discrete mathematics taught to computer scientists. This book is devoted to the background of these methods. It explains how to use relational and graph-theoretic methods systematically in computer science. A powerful formal framework of relational algebra is developed with respect to applications to a diverse range of problem areas. Results are first motivated by practical examples, often visualized by both Boolean 0-1 matrices and graphs, and then derived algebraically.
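
The Boolean 0-1 matrix view of relations mentioned above can be shown in a few lines: composing two relations is Boolean matrix multiplication. This is a generic illustration, not an example from the book.

```python
# Small illustration of the relation-algebraic style mentioned above: a relation
# on a finite set as a Boolean 0-1 matrix, with composition as Boolean matrix
# multiplication (a generic example, not taken from the book).

def compose(R, S):
    n = len(R)
    return [[int(any(R[i][k] and S[k][j] for k in range(n))) for j in range(n)]
            for i in range(n)]

# "is parent of" on {0, 1, 2, 3}: 0->1, 1->2, 1->3
parent = [[0, 1, 0, 0],
          [0, 0, 1, 1],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]

grandparent = compose(parent, parent)   # relates 0 to 2 and 0 to 3
print(grandparent)
```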

Advances in Randomized Parallel Computing (Paperback, Softcover reprint of the original 1st ed. 1999)
Panos M. Pardalos, Sanguthevar Rajasekaran
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as how valid the assumption made on the input space is. Randomized algorithms achieve superior performances without making any assumptions on the inputs by making coin flips within the algorithm. Any analysis done of randomized algorithms will be valid for all possible inputs.
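
The quicksort example in this description is easy to make concrete: with a fixed pivot the worst case is O(n^2) (for example, on already sorted input), while choosing the pivot at random gives expected O(n log n) on every input, with no assumption about the input distribution. A minimal sketch:

```python
import random

# Sketch of the point made above: deterministic quicksort with a fixed first-
# element pivot degenerates to O(n^2) on sorted input, while picking the pivot
# by a "coin flip" (a random index) gives expected O(n log n) on *every* input,
# without assuming anything about the input distribution.

def quicksort(a, randomized=True):
    if len(a) <= 1:
        return a
    pivot = a[random.randrange(len(a))] if randomized else a[0]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less, randomized) + equal + quicksort(greater, randomized)

data = list(range(2000))          # sorted input: the worst case for a fixed pivot
print(quicksort(data, randomized=True) == sorted(data))   # True
```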

Proof and Computation (Paperback, Softcover reprint of the original 1st ed. 1995)
Helmut Schwichtenberg
R2,703 Discovery Miles 27 030 Ships in 18 - 22 working days

Logical concepts and methods are of growing importance in many areas of computer science. The proofs-as-programs paradigm and the wide acceptance of Prolog show this clearly. The logical notion of a formal proof in various constructive systems can be viewed as a very explicit way to describe a computation procedure. Also conversely, the development of logical systems has been influenced by accumulating knowledge on rewriting and unification techniques. This volume contains a series of lectures by leading researchers giving a presentation of new ideas on the impact of the concept of a formal proof on computation theory. The subjects covered are: specification and abstract data types, proving techniques, constructive methods, linear logic, and concurrency and logic.
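
The proofs-as-programs reading mentioned here can be illustrated with a standard Curry-Howard example (not drawn from the lectures in this volume): a constructive proof that "A and B implies B and A" is simply a program that swaps the components of a pair.

```python
# Minimal illustration of the proofs-as-programs reading mentioned above (a
# generic Curry-Howard example, not from this volume): a constructive proof of
# "A and B implies B and A" *is* a program that turns evidence for (A and B),
# i.e. a pair, into evidence for (B and A).

from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def and_commutes(evidence: Tuple[A, B]) -> Tuple[B, A]:
    a, b = evidence
    return (b, a)               # the proof term: it just swaps the pair

print(and_commutes((3, "ok")))  # ('ok', 3)
```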

Ad Hoc Mobile Wireless Networks - Principles, Protocols, and Applications, Second Edition (Hardcover, 2nd edition)
Subir Kumar Sarkar, T. G. Basavaraju, C. Puttamadappa
R4,232 Discovery Miles 42 320 Ships in 10 - 15 working days

The military, the research community, emergency services, and industrial environments all rely on ad hoc mobile wireless networks because of their simple infrastructure and minimal central administration. Now in its second edition, Ad Hoc Mobile Wireless Networks: Principles, Protocols, and Applications explains the concepts, mechanism, design, and performance of these highly valued systems.

Following an overview of wireless network fundamentals, the book explores MAC layer, routing, multicast, and transport layer protocols for ad hoc mobile wireless networks. Next, it examines quality of service and energy management systems. Additional chapters cover mobility models for multi-hop ad hoc wireless networks as well as cross-layer design issues.

Exploring Bluetooth, IrDA (Infrared Data Association), HomeRF, WiFi, WiMax, Wireless Internet, and Mobile IP, the book contains appropriate examples and problems at the end of each chapter to illustrate each concept. This second edition has been completely updated with the latest technology and includes a new chapter on recent developments in the field, including sensor networks, personal area networks (PANs), smart dress, and vehicular ad hoc networks.

Self-organized, self-configured, and self-controlled, ad hoc mobile wireless networks will continue to be valued for a range of applications, as they can be set up and deployed anywhere and anytime. This volume captures the current state of the field as well as upcoming challenges awaiting researchers.

Laser Spectroscopy (Paperback, Softcover reprint of the original 1st ed. 1974)
Richard Brewer
R2,782 Discovery Miles 27 820 Ships in 18 - 22 working days

The Laser Spectroscopy Conference held at Vail, Colorado, June 25-29, 1973 was in certain ways the first meeting of its kind. Various quantum electronics conferences in the past have covered nonlinear optics, coherence theory, lasers and masers, breakdown, light scattering and so on. However, at Vail only two major themes were developed - tunable laser sources and the use of lasers in spectroscopic measurements, especially those involving high precision. Even so, Laser Spectroscopy covers a broad range of topics, making possible entirely new investigations and, in older ones, orders of magnitude improvement in resolution. The conference was interdisciplinary and international in character, with scientists representing Japan, Italy, West Germany, Canada, Israel, France, England, and the United States. Of the 150 participants, the majority were physicists and electrical engineers in quantum electronics and the remainder physical chemists and astrophysicists. We regret that, because of space limitations, about 100 requests to attend had to be refused.

Petri Nets - An Introduction (Paperback, Softcover reprint of the original 1st ed. 1985)
Wolfgang Reisig
R1,384 Discovery Miles 13 840 Ships in 18 - 22 working days

Net theory is a theory of systems organization which had its origins, about 20 years ago, in the dissertation of C. A. Petri [1]. Since this seminal paper, nets have been applied in various areas, at the same time being modified and theoretically investigated. In recent times, computer scientists have been taking a broader interest in net theory. The main concern of this book is the presentation of those parts of net theory which can serve as a basis for practical application. It introduces the basic net-theoretical concepts and ways of thinking, motivates them by means of examples and derives relations between them. Some extended examples illustrate the method of application of nets. A major emphasis is devoted to those aspects which distinguish nets from other system models. These are, for instance, the role of concurrency, an awareness of the finiteness of resources, and the possibility of using the same representation technique at different levels of abstraction. On completing this book the reader should have achieved a systematic grounding in the subject allowing him access to the net literature [25]. These objectives determined the subjects treated here. The presentation of the material here is rather more axiomatic than inductive. We start with the basic notions of 'condition' and 'event' and the concept of the change of states by (concurrently) occurring events. By generalization of these notions a part of the theory of nets is presented.
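
The basic notions of conditions, events, and state change by occurring events can be made concrete with a tiny "token game". The sketch below is a generic illustration with an invented producer/consumer net, not the book's formal notation.

```python
# Tiny "token game" sketch of the basic net notions described above: places
# holding tokens, and transitions (events) consuming tokens from their preset
# and producing tokens in their postset. The producer/consumer net and its
# names are made up for illustration.

marking = {"ready_to_produce": 1, "buffer": 0, "ready_to_consume": 1}

transitions = {
    "produce": ({"ready_to_produce": 1}, {"ready_to_produce": 1, "buffer": 1}),
    "consume": ({"ready_to_consume": 1, "buffer": 1}, {"ready_to_consume": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, n in pre.items():
        marking[p] -= n          # consume tokens from the preset
    for p, n in post.items():
        marking[p] += n          # produce tokens in the postset
    return marking

fire("produce")                  # buffer: 0 -> 1
print(fire("consume"))           # {'ready_to_produce': 1, 'buffer': 0, 'ready_to_consume': 1}
```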

The Origins of Digital Computers - Selected Papers (Paperback, 3rd ed. 1982. Softcover reprint of the original 3rd ed. 1982)
B. Randell
R5,224 Discovery Miles 52 240 Ships in 18 - 22 working days

Multi-Microprocessor Systems for Real-Time Applications (Paperback, Softcover reprint of the original 1st ed. 1985)
Gianni Conte, Dante Del Corso
R4,016 Discovery Miles 40 160 Ships in 18 - 22 working days

The continuous development of computer technology supported by the VLSI revolution stimulated the research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility to obtain a significant processing power together with the improvement of price/performance, reliability and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition, the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related with the development of multiprocessor systems are addressed in this book. The covered range spans from the electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are out of the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.

Reconfigurable Computing: Architectures, Tools and Applications - 8th International Symposium, ARC 2012, Hong Kong, China, March 19-23, 2012, Proceedings (Paperback, 2012 ed.)
Oliver Choy, Ray Cheung, Peter Athanas, Kentaro Sano
R1,435 Discovery Miles 14 350 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 8th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2012, held in Hong Kong, China, in March 2012. The 35 revised papers presented, consisting of 25 full papers and 10 poster papers, were carefully reviewed and selected from 44 submissions. The topics covered are applied RC design methods and tools, applied RC architectures, applied RC applications and critical issues in applied RC.

Robust Computing with Nano-scale Devices - Progresses and Challenges (Paperback, 2010 ed.)
Chao Huang
R2,624 Discovery Miles 26 240 Ships in 18 - 22 working days

Robust Nano-Computing focuses on various issues of robust nano-computing and defect-tolerance design for nano-technology at different design abstraction levels. It addresses both redundancy- and configuration-based methods as well as fault-detecting techniques through the development of accurate computation models and tools. The contents present an insightful view of ongoing research on nano-electronic devices, circuits, architectures, and design methods, as well as provide promising directions for future research.

Pipelined ADC Design and Enhancement Techniques (Paperback, 2010 ed.)
Imran Ahmed
R3,997 Discovery Miles 39 970 Ships in 18 - 22 working days

Pipelined ADCs have seen phenomenal improvements in performance over the last few years. As such, when designing a pipelined ADC, a clear understanding of the design tradeoffs and state-of-the-art techniques is required to implement today's high performance, low power ADCs.

Nonlinear Optical Materials and Devices for Applications in Information Technology (Paperback, Softcover reprint of the original 1st ed. 1995)
A Miller, K.R. Welford, B. Daino
R5,181 Discovery Miles 51 810 Ships in 18 - 22 working days

Nonlinear Optical Materials and Devices for Applications in Information Technology takes the reader from fundamental interactions of laser light in materials to the latest developments of digital optical information processing. The book emphasises nonlinear optical interactions in bulk and low-dimensional semiconductors, liquid crystals and optical fibres. After establishing the basic laser-material interactions in these materials, it goes on to assess applications in soliton propagation, integrated optics, smart pixel arrays and digital optical computing.
