This book provides a superb introduction to and overview of the MIT PI System for custom VLSI placement and routing. Alan Sherman has done an excellent job of collecting and clearly presenting material that was previously available only in various theses, conference papers, and memoranda. He has provided here a balanced and comprehensive presentation of the key ideas and techniques used in PI, discussing part of his own Ph.D. work (primarily on the placement problem) in the context of the overall design of PI and the contributions of the many other PI team members. I began the PI Project in 1981 after learning first-hand how difficult it is to manually place modules and route interconnections in a custom VLSI chip. In 1980 Adi Shamir, Leonard Adleman, and I designed a custom VLSI chip for performing RSA encryption/decryption [226]. I became fascinated with the combinatorial and algorithmic questions arising in placement and routing, and began active research in these areas. The PI Project was started in the belief that many of the most interesting research issues would arise during an actual implementation effort, and secondarily in the hope that a practically useful tool might result. The belief was well-founded, but I had underestimated the difficulty of building a large, easily used software tool for a complex domain; the PI software should be considered a prototype implementation validating the design choices made.
This book describes the first comprehensive approach to the optimization of interconnect architectures in 3D systems on chips (SoCs), especially addressing the challenges and opportunities arising from heterogeneous integration. Readers learn about the physical implications of using heterogeneous 3D technologies for SoC integration, while also learning to maximize the 3D-technology gains through physical-effect-aware architecture design. The book provides a deep theoretical background covering all abstraction levels needed to research and architect tomorrow's 3D-integrated circuits, an extensive set of optimization methods (for power, performance, area, and yield), as well as an open-source optimization and simulation framework for fast exploration of novel designs.
It has been almost five years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding various operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is a truly historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open-architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted for international use. Although there are already several TRON products on the market, the creation of a complete TRON world requires the continuous and active participation of members, together with concentration on further development. We, the TRON promoters, are much encouraged by such a driving force.
This project had its beginnings in the Fall of 1980. At that time Robert Wagner suggested that I investigate compiler optimization of data organization, suitable for use in a parallel or vector machine environment. We developed a scheme in which the compiler, having knowledge of the machine's access patterns, does a global analysis of a program's operations and automatically determines the optimum organization for the data. For example, for certain architectures and certain operations, large improvements in performance can be attained by storing a matrix in row-major order. However, a subsequent operation may require the matrix in column-major order. A determination must be made whether it is globally best to store the matrix in row order, in column order, or even to keep two copies of it, each organized differently. We have developed two algorithms for making this determination. The technique shows promise in a vector machine environment, particularly if memory interleaving is used. Supercomputers such as the Cray, the CDC Cyber 205, and the IBM 3090, as well as superminis such as the Convex, are possible environments for implementation.
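The row-major versus column-major tradeoff described above is easy to demonstrate directly. Below is a minimal sketch (my illustration, not code from the book) using NumPy, which exposes both layouts through its order flag; the same row-at-a-time sweep is fast when rows are contiguous and slow when they are strided:

```python
# Minimal sketch: the same matrix stored two ways. A row-at-a-time
# sweep touches contiguous memory in row-major ('C') layout but
# strided memory in column-major ('F') layout; the compile-time
# analysis described above automates exactly this layout choice.
import time
import numpy as np

n = 4000
a_rows = np.ones((n, n), order='C')    # row-major storage
a_cols = np.asfortranarray(a_rows)     # same values, column-major storage

def row_sweep_seconds(m):
    t0 = time.perf_counter()
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()         # one row at a time
    return time.perf_counter() - t0

print("row-major sweep:   ", row_sweep_seconds(a_rows))
print("column-major sweep:", row_sweep_seconds(a_cols))
```

Keeping two copies, as the paragraph suggests, trades memory for the freedom to give each operation whichever layout it prefers.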
This book contains papers presented at the NATO Advanced Research Workshop on "Real-time Object and Environment Measurement and Classification" held in Maratea, Italy, August 31 - September 3, 1987. This workshop was organized within the activities of the NATO Special Programme on Sensory Systems for Robotic Control. Four major themes were discussed at this workshop: Real-time Requirements, Feature Measurement, Object Representation and Recognition, and Architecture for Measurement and Classification. A total of twenty-five technical presentations, contained in this book, cover a wide spectrum of topics, including hardware implementation of specific vision algorithms, a complete vision system for object tracking and inspection, the use of three cameras (trinocular stereo) for feature measurement, neural networks for object recognition, integration of CAD (Computer Aided Design) and vision systems, and the use of pyramid architectures for solving various computer vision problems. These papers are written by some of the best-known researchers in the computer vision and pattern recognition community, and represent both industrial and academic viewpoints. The authors come from thirteen different countries in Europe and North America. Readers will therefore get first-hand, current information about the status of computer vision research in various western countries. Further, this book will also be useful in understanding the current research issues in computer vision and the difficulties in designing real-time vision systems.
This book contains the proceedings of the NATO Advanced Research Workshop held in Maratea (Italy), May 5-9, 1986 on Pyramidal Systems for Image Processing and Computer Vision. We had 40 participants from 11 countries playing an active part in the workshop, and all the leaders of groups that have produced a prototype pyramid machine, or a design for such a machine, were present. Within the wide field of parallel architectures for image processing a new area was recently born and is growing healthily: the area of pyramidally structured multiprocessing systems. Essentially, the processors are arranged in planes (from a base to an apex), each of which is generally a reduced (usually by a power of two) version of the plane underneath; these processors are horizontally interconnected (within a plane) and vertically connected with "fathers" (on the plane above) and "children" (on the plane below). This arrangement has a number of interesting features, all of which were amply discussed in our Workshop, including the cellular array and hypercube versions of pyramids. A number of projects (in different parts of the world) are reported, as well as some interesting applications in computer vision, tactile systems and numerical calculations.
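The father/child wiring described above reduces to simple index arithmetic. The sketch below is an illustration of the general scheme, not of any particular machine from the workshop; it assumes a quad pyramid whose planes halve in side length:

```python
# Illustrative quad pyramid over a 2^k x 2^k base: each plane halves
# the side length, so a processor at (row, col) on one plane has its
# "father" at (row // 2, col // 2) on the plane above and four
# "children" at (2*row + dr, 2*col + dc) on the plane below.

def plane_sides(base_side):
    """Side lengths from base to apex, halving at each plane."""
    sides = []
    while base_side >= 1:
        sides.append(base_side)
        base_side //= 2
    return sides

def father(row, col):
    return (row // 2, col // 2)

def children(row, col):
    return [(2 * row + dr, 2 * col + dc) for dr in (0, 1) for dc in (0, 1)]

print(plane_sides(16))    # [16, 8, 4, 2, 1]: five planes, base to apex
print(father(5, 9))       # (2, 4)
print(children(2, 4))     # the four cells this processor reduces
```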
Logical concepts and methods are of growing importance in many areas of computer science. The proofs-as-programs paradigm and the wide acceptance of Prolog show this clearly. The logical notion of a formal proof in various constructive systems can be viewed as a very explicit way to describe a computation procedure. Conversely, the development of logical systems has been influenced by accumulating knowledge of rewriting and unification techniques. This volume contains a series of lectures by leading researchers presenting new ideas on the impact of the concept of a formal proof on computation theory. The subjects covered are: specification and abstract data types, proving techniques, constructive methods, linear logic, and concurrency and logic.
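The proofs-as-programs correspondence mentioned above can be made concrete in a few lines. This is my own minimal illustration in Lean, not material from the volume: the same term is both a proof of an implication and a program that computes with it.

```lean
-- A term of type A → C is simultaneously a proof that A implies C
-- and a program turning evidence for A into evidence for C.
def compose {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)   -- the proof *is* the computation procedure
```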
These are the proceedings of a NATO Advanced Study Institute (ASI) held in Cetraro, Italy during 6-17 June 1983. The title of the ASI was Computer Architectures for Spatially Distributed Data, and it brought together some 60 participants from Europe and America. Presented here are 21 of the lectures that were delivered. The articles cover a wide spectrum of topics related to computer architectures specially oriented toward the fast processing of spatial data, and represent an excellent review of the state of the art of this topic. For more than 20 years now researchers in pattern recognition, image processing, meteorology, remote sensing, and computer engineering have been looking toward new forms of computer architectures to speed the processing of data from two- and three-dimensional processes. The work can be said to have commenced with the landmark article by Steve Unger in 1958, and it received a strong forward push with the development of the ILLIAC III and IV computers at the University of Illinois during the 1960s. One clear obstacle faced by the computer designers in those days was the limitation of the state of the art of hardware, when the only switching devices available to them were discrete transistors. As a result, parallel processing was generally considered to be impractical, and relatively little progress was made.
This volume contains the collected papers of the NATO Advanced Research Workshop on neurocomputing held in Les Arcs in February 1989. Various fields in neurocomputing are covered, including: - new or improved neural network algorithms: product units, recurrent multilayer networks, Boltzmann machines, Kohonen maps, growth algorithms; - topics in neural architectures: VLSI circuits, VLSI neurons, dedicated processors; - applications in speech processing: coding, links with hidden Markov models (HMMs), recognition, compression; - applications in image processing: recognition of digits, wind maps, mammographic images, compression; - models in neurobiology: cortical columns, posture, movement coordination. The book can be read at various levels. Newcomers will find in it a useful introduction to the field through the major papers by leading scientists and the collected bibliography at the end of the volume. Specialists will learn about some of the more advanced applications, architectures, chip designs and new algorithms. The book will interest all neural network scientists who wish to know what is happening in Europe.
Almost 4 years have elapsed since Dr. Ken Sakamura of The University of Tokyo first proposed the TRON (the real-time operating system nucleus) concept, and 18 months since the foundation of the TRON Association on 16 June 1986. Members of the Association from Japan and overseas currently exceed 80 corporations. The TRON concept, as advocated by Dr. Ken Sakamura, is concerned with the problem of interaction between man and the computer (the man-machine interface), which had not previously been given a great deal of attention. Dr. Sakamura has gone back to basics to create a new and complete cultural environment relative to computers and envisage a role for computers which will truly benefit mankind. This concept has indeed caused a stir in the computer field. The scope of the research work involved was initially regarded as being so extensive and diverse that the completion of activities was scheduled for the 1990s. However, I am happy to note that the enthusiasm expressed by individuals and organizations both within and outside Japan has permitted acceleration of the research and development activities. It is to be hoped that the presentations of the Third TRON Project Symposium will further the progress toward the creation of a computer environment that will be compatible with the aspirations of mankind.
This book provides a comprehensive survey of recent progress in the design and implementation of Networks-on-Chip. It addresses a wide spectrum of on-chip communication problems, ranging from the physical and network layers up to the application layer. Specific topics that are explored in detail include packet routing, resource arbitration, error control/correction, application mapping, and communication scheduling. Additionally, a novel bi-directional communication channel NoC (BiNoC) architecture is described in detail. The book is written for practicing engineers in need of practical knowledge about the design and implementation of networks-on-chip; it includes tutorial-like details to introduce readers to a diverse range of NoC designs, as well as in-depth analysis for designers with NoC experience who wish to explore advanced issues; and it describes a variety of on-chip communication architectures, including the novel bi-directional communication channel NoC. From the Foreword: "Overall this book shows important advances over the state of the art that will affect future system design as well as R&D in tools and methods for NoC design. It represents an important reference point for both designers and electronic design automation researchers and developers." --Giovanni De Micheli
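To make the packet-routing layer mentioned above concrete, here is a minimal sketch of dimension-ordered (XY) routing on a 2D mesh, the standard textbook baseline; it is my generic illustration, not the BiNoC scheme the book develops:

```python
# Dimension-ordered (XY) routing on a 2D mesh: a packet first moves
# along X until its column matches the destination, then along Y.
# Deterministic and deadlock-free on a mesh; a common NoC baseline.

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                    # correct the X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                    # then the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```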
This is the official Open Group Pocket Guide for TOGAF Version 9.1 and is published in hard copy and electronic format by Van Haren Publishing on behalf of The Open Group. TOGAF(R), an Open Group Standard, is a proven enterprise architecture methodology and framework used by the world's leading organizations to improve business efficiency. It is the most prominent and reliable enterprise architecture standard, ensuring consistent standards, methods, and communication among enterprise architecture professionals. Enterprise architecture professionals fluent in TOGAF standards enjoy greater industry credibility, job effectiveness, and career opportunities. TOGAF helps practitioners avoid being locked into proprietary methods, utilize resources more efficiently and effectively, and realize a greater return on investment.
It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: - Over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.
This book gives an introduction to the mathematical theory of cooperative behavior in active systems of various origins, both natural and artificial. It is based on a lecture course in synergetics which I held for almost ten years at the University of Moscow. The first volume deals mainly with the problems of pattern formation and the properties of self-organized regular patterns in distributed active systems. It also contains a discussion of distributed analog information processing which is based on the cooperative dynamics of active systems. The second volume is devoted to the stochastic aspects of self-organization and the properties of self-established chaos. I have tried to avoid delving into particular applications. The primary intention is to present general mathematical models that describe the principal kinds of cooperative behavior in distributed active systems. Simple examples, ranging from chemical physics to economics, serve only as illustrations of the typical context in which a particular model can apply. The manner of exposition is more in the tradition of theoretical physics than of mathematics: elaborate formal proofs and rigorous estimates are often replaced in the text by arguments based on an intuitive understanding of the relevant models. Because of the interdisciplinary nature of this book, its readers might well come from very diverse fields of endeavor. It was therefore desirable to minimize the required preliminary knowledge. Generally, a standard university course in differential calculus and linear algebra is sufficient.
Supercomputing is an important science and technology that enables the scientist or the engineer to simulate numerically very complex physical phenomena related to large-scale scientific, industrial and military applications. It has made considerable progress since the first NATO Workshop on High-Speed Computation in 1983 (Vol. 7 of the same series). This book is a collection of papers presented at the NATO Advanced Research Workshop held in Trondheim, Norway, in June 1989. It presents key research issues related to: - hardware systems, architecture and performance; - compilers and programming tools; - user environments and visualization; - algorithms and applications. Contributions include critical evaluations of the state-of-the-art and many original research results.
This book brings together a selection of the best papers from the thirteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in Southampton, UK, in September 2010. FDL is a well-established international forum devoted to the dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modelling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions to enable PCM for main memories. Finally, the authors explore the impact of such byte-addressable non-volatile memories on future storage and system designs. Table of Contents: Next Generation Memory Technologies / Architecting PCM for Main Memories / Tolerating Slow Writes in PCM / Wear Leveling for Durability / Wear Leveling Under Adversarial Settings / Error Resilience in Phase Change Memories / Storage and System Design With Emerging Non-Volatile Memories
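One of the topics listed above, wear leveling, is easy to sketch. The code below is my simplified illustration of the published start-gap idea, in which a spare "gap" line rotates through physical memory so that hot logical lines slowly migrate across every physical line; it is not necessarily the exact algorithm the lecture develops:

```python
# Simplified start-gap wear leveling: N logical lines live in N+1
# physical lines. Every few writes the gap slides down one slot
# (circularly); after a full rotation the start offset advances, so
# a hot logical line is spread over all physical lines.

class StartGap:
    def __init__(self, n, interval=4):
        self.n = n                       # number of logical lines
        self.start = 0                   # rotation offset
        self.gap = n                     # physical index of the spare line
        self.interval = interval         # writes between gap movements
        self.writes = 0
        self.phys = [None] * (n + 1)     # physical memory, one spare slot

    def _pa(self, la):
        pa = (la + self.start) % self.n
        return pa + 1 if pa >= self.gap else pa   # skip over the gap

    def read(self, la):
        return self.phys[self._pa(la)]

    def write(self, la, value):
        self.phys[self._pa(la)] = value
        self.writes += 1
        if self.writes % self.interval == 0:
            self._move_gap()

    def _move_gap(self):
        src = (self.gap - 1) % (self.n + 1)   # circularly previous slot
        self.phys[self.gap] = self.phys[src]  # slide that line into the gap
        self.gap = src
        if src == self.n:                     # wrapped: one full rotation
            self.start = (self.start + 1) % self.n

sg = StartGap(8)
for i in range(8):
    sg.write(i, f"line{i}")
for _ in range(100):
    sg.write(3, "hot")                        # hammer one logical line
assert sg.read(3) == "hot" and sg.read(5) == "line5"
```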
This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Uberhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.
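The shift toward reliable, robust quadrature software that the paragraph describes proved lasting: QUADPACK's adaptive Gauss-Kronrod routines still underlie SciPy's quad today. A minimal sketch follows; the integrand is my own illustrative choice:

```python
# scipy.integrate.quad wraps QUADPACK's adaptive routines and, in the
# spirit of Gauss-Kronrod rules, returns both an estimate and an
# error bound rather than a bare number.
import numpy as np
from scipy.integrate import quad

value, abs_err = quad(lambda x: np.exp(-x * x), 0.0, np.inf)
print(value, abs_err)   # ~0.886227 (= sqrt(pi)/2) plus an error estimate
```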
As the technology of supercomputing progresses, methodologies for approaching problems have also been developed. The main objective of this symposium was the interdisciplinary participation of experts in related fields and spirited discussion working toward the solution of problems. An executive committee especially arranged for this symposium selected speakers and other participants, who submitted the papers included in this volume. Also included are selected extracts from the two panel discussion sessions, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which arose during a wide-ranging exchange of viewpoints.
Design technology to address the new and vast problem of heterogeneous embedded systems design while remaining compatible with standard "More Moore" flows, i.e. capable of simultaneously handling both silicon complexity and system complexity, represents one of the most important challenges facing the semiconductor industry today and will be for several years to come. While the micro-electronics industry, over the years and with its spectacular and unique evolution, has built its own specific design methods to focus mainly on the management of complexity through the establishment of abstraction levels, the emergence of device heterogeneity requires new approaches enabling the satisfactory design of physically heterogeneous embedded systems for the widespread deployment of such systems. Heterogeneous Embedded Systems, compiled largely from a set of contributions from participants of past editions of the Winter School on Heterogeneous Embedded Systems Design Technology (FETCH), proposes a necessarily broad and holistic overview of design techniques used to tackle the various facets of heterogeneity in terms of technology and opportunities at the physical level, signal representations and different abstraction levels, architectures and components based on hardware and software, in all the main phases of design (modeling, validation with multiple models of computation, synthesis and optimization). It concentrates on the specific issues at the interfaces, and is divided into two main parts. The first part examines mainly theoretical issues and focuses on the modeling, validation and design techniques themselves. The second part illustrates the use of these methods in various design contexts at the forefront of new technology and architectural developments.
This book is an updated version of my Ph.D. dissertation, The AND/OR Process Model for Parallel Interpretation of Logic Programs. The three years since that paper was finished (or so I thought then) have seen quite a bit of work in the area of parallel execution models and programming languages for logic programs. A quick glance at the bibliography here shows roughly 50 papers on these topics, 40 of which were published after 1983. The main difference between the book and the dissertation is the updated survey of related work. One of the appendices in the dissertation was an overview of a Prolog implementation of an interpreter based on the AND/OR Process Model, a simulator I used to get some preliminary measurements of parallelism in logic programs. In the last three years I have been involved with three other implementations. One was written in C and is now being installed on a small multiprocessor at the University of Oregon. Most of the programming of this interpreter was done by Nitin More under my direction for his M.S. project. The other two, one written in Multilisp and the other in Modula-2, are more limited, intended to test ideas about implementing specific aspects of the model. Instead of an appendix describing one interpreter, this book has more detail about implementation included in Chapters 5 through 7, based on a combination of ideas from the four interpreters.
Net theory is a theory of systems organization which had its origins, about 20 years ago, in the dissertation of C. A. Petri [1]. Since this seminal paper, nets have been applied in various areas, at the same time being modified and theoretically investigated. In recent times, computer scientists have been taking a broader interest in net theory. The main concern of this book is the presentation of those parts of net theory which can serve as a basis for practical application. It introduces the basic net-theoretical concepts and ways of thinking, motivates them by means of examples and derives relations between them. Some extended examples illustrate the method of application of nets. A major emphasis is devoted to those aspects which distinguish nets from other system models. These are, for instance, the role of concurrency, an awareness of the finiteness of resources, and the possibility of using the same representation technique at different levels of abstraction. On completing this book the reader should have achieved a systematic grounding in the subject, allowing him access to the net literature [25]. These objectives determined the subjects treated here. The presentation of the material is rather more axiomatic than inductive. We start with the basic notions of 'condition' and 'event' and the concept of the change of states by (concurrently) occurring events. By generalization of these notions a part of the theory of nets is presented.
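The condition/event view described in the last sentences can be captured in a few lines. This is my own minimal sketch of the standard firing discipline, not notation from the book: an event may occur when all its preconditions hold and none of its postconditions do, and its occurrence trades the former for the latter.

```python
# Minimal condition/event sketch: a state is the set of conditions
# that currently hold; an event has preconditions and postconditions.

def can_occur(holding, pre, post):
    """Event enabled: all preconditions hold, no postcondition does."""
    return pre <= holding and not (post & holding)

def occur(holding, pre, post):
    assert can_occur(holding, pre, post)
    return (holding - pre) | post        # conditions cease / begin to hold

# Two events with disjoint neighborhoods are concurrent: they can
# occur in either order (or together) from the same state.
state = {"a1", "b1"}
state = occur(state, {"a1"}, {"a2"})
state = occur(state, {"b1"}, {"b2"})
print(state)                             # {'a2', 'b2'}
```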
The Laser Spectroscopy Conference held at Vail, Colorado, June 25-29, 1973 was in certain ways the first meeting of its kind. Various quantum electronics conferences in the past have covered nonlinear optics, coherence theory, lasers and masers, breakdown, light scattering and so on. However, at Vail only two major themes were developed: tunable laser sources and the use of lasers in spectroscopic measurements, especially those involving high precision. Even so, laser spectroscopy covers a broad range of topics, making possible entirely new investigations and, in older ones, orders-of-magnitude improvements in resolution. The conference was interdisciplinary and international in character, with scientists representing Japan, Italy, West Germany, Canada, Israel, France, England, and the United States. Of the 150 participants, the majority were physicists and electrical engineers in quantum electronics and the remainder physical chemists and astrophysicists. We regret that, because of space limitations, about 100 requests to attend had to be refused.
Although asynchronous circuits date back to the early 1950s, most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
Kubernetes radically changes the way applications are built and deployed in the cloud. Since its introduction in 2014, this container orchestrator has become one of the largest and most popular open source projects in the world. The updated edition of this practical book shows developers and ops personnel how Kubernetes and container technology can help you achieve new levels of velocity, agility, reliability, and efficiency. Kelsey Hightower, Brendan Burns, and Joe Beda, who've worked on Kubernetes at Google and beyond, explain how this system fits into the lifecycle of a distributed application. You'll learn how to use tools and APIs to automate scalable distributed systems, whether it's for online services, machine learning applications, or a cluster of Raspberry Pi computers. Create a simple cluster to learn how Kubernetes works; dive into the details of deploying an application using Kubernetes; learn specialized objects in Kubernetes, such as DaemonSets, jobs, ConfigMaps, and secrets; explore deployments that tie together the lifecycle of a complete application; and get practical examples of how to develop and deploy real-world applications in Kubernetes.
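As a taste of the API-driven automation the book teaches, here is a hedged sketch using the official Kubernetes Python client; the names, image, and replica count are my illustrative assumptions, not an example from the book:

```python
# Creates a small Deployment, the object that ties a pod template to
# a managed application lifecycle. Assumes a reachable cluster and a
# local kubeconfig; all names here are illustrative.
from kubernetes import client, config

config.load_kube_config()                  # local credentials

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello",
                                               image="nginx:1.25")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```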
You may like...
Enterprise Level Security 1 & 2, Kevin Foltz, William R. Simpson (Paperback): R1,421 | Discovery Miles 14 210
Edge-AI in Healthcare - Trends and…, Sonali Vyas, Akanksha Upadhyaya, … (Hardcover): R2,644 | Discovery Miles 26 440
Discoverability in Digital Repositories…, Liz Woolcott, Ali Shiri (Paperback): R971 | Discovery Miles 9 710
Research Advances in Intelligent…, Anshul Verma, Pradeepika Verma, … (Hardcover): R3,034 | Discovery Miles 30 340