Regular Nanofabrics in Emerging Technologies gives deep insight into both the fabrication and the design aspects of emerging semiconductor technologies that are potential candidates for the post-CMOS era. Its approach is unique in cutting across different fields, offering a synergetic view to communities ranging from technologists to circuit designers and computer scientists. The book presents two technologies as potential candidates for future semiconductor devices and systems and shows how fabrication issues can be addressed at the design level and vice versa. Readers in academia and research will find novel material that is explained carefully for both experts and newcomers. Regular Nanofabrics in Emerging Technologies is a survey of post-CMOS technologies, explaining processing, circuit-level and system-level design for readers with various backgrounds.
Derived from industry-training classes that the author teaches at the Embedded Systems Institute in Eindhoven, the Netherlands, and at Buskerud University College in Kongsberg, Norway, Systems Architecting: A Business Perspective places the processes of systems architecting in a broader context by examining the relationship of the systems architect with enterprise and management. This practical, scenario-driven guide fills an important gap, providing systems architects with insight into business processes, especially those to which they actively contribute. The book uses a simple reference model to enable understanding of the inside of a system in relation to its context. It covers the impact of tool selection and brings balance to the application of intellectual versus computer-aided tools. Stressing the importance of a clear strategy, the authors discuss methods and techniques that facilitate the architect's contribution to the strategy process. They also give insight into the needs and complications of harvesting synergy, insight that helps establish an effective synergy-harvesting strategy. The book also explores the often difficult relationship between managers and systems architects. Written in an approachable style, it discusses the breadth of the human sciences and their relevance to systems architecting, highlighting the relevance of human aspects to systems architects and linking theory to practical experience when developing systems architecting competence.
Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore the systems-level challenges in achieving scalable, fault-tolerant quantum computation. In this lecture, we provide an engineering-oriented introduction to quantum computation with an overview of the theory behind key quantum algorithms. Next, we look at architectural case studies based upon experimental data and future projections for quantum computation implemented using trapped ions. While we focus here on architectures targeted for realization using trapped ions, the techniques for quantum computer architecture design, quantum fault-tolerance, and compilation described in this lecture are applicable to many other physical technologies that may be viable candidates for building a large-scale quantum computing system. We also discuss general issues involved in programming a quantum computer, as well as work on quantum architectures based on quantum teleportation. Finally, we consider some of the open issues remaining in the design of quantum computers. Table of Contents: Introduction / Basic Elements for Quantum Computation / Key Quantum Algorithms / Building Reliable and Scalable Quantum Architectures / Simulation of Quantum Computation / Architectural Elements / Case Study: The Quantum Logic Array Architecture / Programming the Quantum Architecture / Using the QLA for Quantum Simulation: The Transverse Ising Model / Teleportation-Based Quantum Architectures / Concluding Remarks
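To make the linear-algebra view of quantum computation that the lecture builds on concrete, here is a minimal, purely illustrative sketch (not taken from the book): applying a Hadamard gate to the |0> state and reading off the measurement probabilities.

```python
# Illustrative sketch only: a single qubit as a 2-element vector.
# A Hadamard gate maps |0> to an equal superposition, so a measurement
# yields 0 or 1 with probability 1/2 each.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = H @ ket0
probabilities = np.abs(state) ** 2             # Born rule
print(probabilities)                           # [0.5 0.5]
```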
This book constitutes the refereed proceedings of the 7th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2011, held in Belfast, UK, in March 2011. The 40 revised papers presented, consisting of 24 full papers, 14 poster papers, and the abstracts of 2 plenary talks, were carefully reviewed and selected from 88 submissions. The topics covered are reconfigurable accelerators, design tools, reconfigurable processors, applications, device architecture, methodology and simulation, and system architecture.
Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications to communicate and interoperate in an orchestrated and efficient way. This book describes the design and engineering tradeoffs of datacenter networks. It describes interconnection networks from topology and network architecture to routing algorithms, and presents opportunities for taking advantage of the emerging technology trends that are influencing router microarchitecture. With the emergence of "many-core" processor chips, it is evident that we will also need "many-port" routing chips to provide a bandwidth-rich network to avoid the performance limiting effects of Amdahl's Law. We provide an overview of conventional topologies and their routing algorithms and show how technology, signaling rates and cost-effective optics are motivating new network topologies that scale up to millions of hosts. The book also provides detailed case studies of two high performance parallel computer systems and their networks. Table of Contents: Introduction / Background / Topology Basics / High-Radix Topologies / Routing / Scalable Switch Microarchitecture / System Packaging / Case Studies / Closing Remarks
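Since the description invokes Amdahl's Law to motivate bandwidth-rich, many-port routers, a small hedged sketch of the law itself may help; the 95% parallel fraction below is an arbitrary example, not a figure from the book.

```python
# Amdahl's Law: speedup on n processors when a fraction p of the work is
# parallelizable and the remaining (1 - p) stays serial (for instance,
# communication that the network fails to hide).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (16, 256, 4096):
    # With p = 0.95 the speedup saturates near 20x no matter how many
    # processors are added, because the serial fraction dominates.
    print(n, round(amdahl_speedup(0.95, n), 1))   # 9.1, 18.6, 19.9
```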
This book sets out the principles of parallel computing in a way which will be useful to student and potential user alike. It includes coverage of both conventional and neural computers. The content of the book is arranged hierarchically. It explains why, where and how parallel computing is used; the fundamental paradigms employed in the field; how systems are programmed or trained; technical aspects including connectivity and processing element complexity; and how system performance is estimated (and why doing so is difficult). The penultimate chapter of the book comprises a set of case studies of archetypal parallel computers, each study written by an individual closely connected with the system in question. The final chapter correlates the various aspects of parallel computing into a taxonomy of systems.
"The Pentium Chronicles" describes the architecture and key decisions that shaped the P6, Intel's most successful chip to date. As author Robert Colwell recognizes, success is about learning from others, and "Chronicles" is filled with stories of ordinary, exceptional people as well as frank assessments of "oops" moments, leaving you with a better understanding of what it takes to create and grow a winning product.
It is a great pleasure to write a preface to this book. In my view, the content is unique in that it blends traditional teaching approaches with the use of mathematics and a mainstream Hardware Design Language (HDL) as formalisms to describe key concepts. The book keeps the "machine" separate from the "application" by strictly following a bottom-up approach: it starts with transistors and logic gates and only introduces assembly language programs once their execution by a processor is clearly defined. Using an HDL, Verilog in this case, rather than static circuit diagrams is a big deviation from traditional books on computer architecture. Static circuit diagrams cannot be explored in a hands-on way like the corresponding Verilog model can. In order to understand why I consider this shift so important, one must consider how computer architecture, a subject that has been studied for more than 50 years, has evolved. In the pioneering days computers were constructed by hand. An entire computer could (just about) be described by drawing a circuit diagram. Initially, such diagrams consisted mostly of analogue components before later moving toward digital logic gates. The advent of digital electronics led to more complex cells, such as half-adders, flip-flops, and decoders, being recognised as useful building blocks.
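As a purely illustrative stand-in for the book's hands-on philosophy (the book itself uses Verilog, not Python), here is a minimal half-adder composed from gate functions that can be explored interactively in a way a static circuit diagram cannot.

```python
# Illustrative sketch: compose a half-adder from logic gates, bottom-up.
def xor_gate(a: int, b: int) -> int:
    return a ^ b

def and_gate(a: int, b: int) -> int:
    return a & b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two 1-bit inputs."""
    return xor_gate(a, b), and_gate(a, b)

# Exhaustive truth table: the kind of hands-on exploration an executable model allows.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```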
This lecture presents a study of the microarchitecture of contemporary microprocessors. The focus is on implementation aspects, with discussions on their implications in terms of performance, power, and cost of state-of-the-art designs. The lecture starts with an overview of the different types of microprocessors and a review of the microarchitecture of cache memories. Then, it describes the implementation of the fetch unit, where special emphasis is made on the required support for branch prediction. The next section is devoted to instruction decode with special focus on the particular support to decoding x86 instructions. The next chapter presents the allocation stage and pays special attention to the implementation of register renaming. Afterward, the issue stage is studied. Here, the logic to implement out-of-order issue for both memory and non-memory instructions is thoroughly described. The following chapter focuses on the instruction execution and describes the different functional units that can be found in contemporary microprocessors, as well as the implementation of the bypass network, which has an important impact on the performance. Finally, the lecture concludes with the commit stage, where it describes how the architectural state is updated and recovered in case of exceptions or misspeculations. This lecture is intended for an advanced course on computer architecture, suitable for graduate students or senior undergrads who want to specialize in the area of computer architecture. It is also intended for practitioners in the industry in the area of microprocessor design. The book assumes that the reader is familiar with the main concepts regarding pipelining, out-of-order execution, cache memories, and virtual memory. Table of Contents: Introduction / Caches / The Instruction Fetch Unit / Decode / Allocation / The Issue Stage / Execute / The Commit Stage / References / Author Biographies
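As a hedged illustration of the register renaming covered in the allocation chapter (the register names and table sizes below are invented for the example, not taken from the lecture), a rename table can be sketched as a map from architectural to physical registers plus a free list.

```python
# Minimal sketch of register renaming: architectural registers r0..r3 are mapped
# to physical registers drawn from a free list, removing false (WAW/WAR) dependences.
from collections import deque

class RenameTable:
    def __init__(self, num_arch: int, num_phys: int):
        self.mapping = {f"r{i}": f"p{i}" for i in range(num_arch)}      # current map
        self.free = deque(f"p{i}" for i in range(num_arch, num_phys))   # free physical regs

    def rename(self, dest: str, srcs: list[str]) -> tuple[str, list[str]]:
        renamed_srcs = [self.mapping[s] for s in srcs]   # sources read the current mapping
        new_phys = self.free.popleft()                   # destination gets a fresh register
        self.mapping[dest] = new_phys
        return new_phys, renamed_srcs

rt = RenameTable(num_arch=4, num_phys=8)
print(rt.rename("r1", ["r2", "r3"]))   # ('p4', ['p2', 'p3'])
```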
Nonlinear Optical Materials and Devices for Applications in Information Technology takes the reader from fundamental interactions of laser light in materials to the latest developments in digital optical information processing. The book emphasises nonlinear optical interactions in bulk and low-dimensional semiconductors, liquid crystals and optical fibres. After establishing the basic laser-material interactions in these materials, it goes on to assess applications in soliton propagation, integrated optics, smart pixel arrays and digital optical computing.
This new approach to an established field introduces the concept of predicate logic in order to supersede propositional logic in switching theory. The author gives new insight into the theory of latches (memory circuits) for use in undergraduate and graduate courses.
Quality of Protection: Security Measurements and Metrics is an edited volume based on the Quality of Protection Workshop in Milano, Italy (September 2005). This volume discusses how security research can progress towards a notion of quality of protection in security comparable to quality of service in networking, and to software measurement and metrics in empirical software engineering. Information security in the business setting has matured in the last few decades. Standards such as ISO 17799 and the Common Criteria (ISO 15408), along with a number of industry certifications and risk analysis methodologies, have raised the bar for good security solutions from a business perspective. Designed for a professional audience composed of researchers and practitioners in industry, Quality of Protection: Security Measurements and Metrics is also suitable for advanced-level students in computer science.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced, with a special focus on features that are important for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: the flow starts with a given application and incrementally optimizes the ASIP hardware, the ASIP coprocessors and the ASIP software, using a top-down approach and applying application-specific modifications at all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Digital image business applications are expanding rapidly, driven by recent advances in the technology and breakthroughs in the price and performance of hardware and firmware. This ever increasing need for the storage and transmission of images has in turn driven the technology of image compression: image data rate reduction to save storage space and reduce transmission rate requirements. Digital image compression offers a solution to a variety of imaging applications that require a vast amount of data to represent the images, such as document imaging management systems, facsimile transmission, image archiving, remote sensing, medical imaging, entertainment, HDTV, broadcasting, education and video teleconferencing. Digital Image Compression: Algorithms and Standards introduces the reader to compression algorithms, including the CCITT facsimile standards T.4 and T.6, JBIG, CCITT H.261 and MPEG standards. The book provides comprehensive explanations of the principles and concepts of the algorithms, helping the readers' understanding and allowing them to use the standards in business, product development and R&D. Audience: A valuable reference for the graduate student, researcher and engineer. May also be used as a text for a course on the subject.
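As a toy illustration of the data-rate-reduction idea behind the facsimile coders listed above (T.4 and T.6 build on run lengths of black and white pixels), here is a minimal run-length encoder; this is a sketch only, and the real standards add Huffman and two-dimensional coding on top.

```python
# Run-length encode a row of binary pixels: store (value, run length) pairs
# instead of every pixel, the simplest form of redundancy removal.
def run_length_encode(row):
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((row[-1], count))
    return runs

print(run_length_encode([0, 0, 0, 1, 1, 0, 1, 1, 1, 1]))
# [(0, 3), (1, 2), (0, 1), (1, 4)]
```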
A new approach to explaining the existence of firms and markets, focusing on variability and coordination. It stands in contrast to the emphasis on transaction costs, and on monitoring and incentive structures, which are prominent in most of the modern literature in this field. This approach, called the variability approach, allows us to: show why both the need for communication and the coordination costs increase when the division of labor increases; explain why, while the firm relies on direction, the market does not; rigorously formulate the optimum divisionalization problem; better understand the relationship between technology and organization; show why the 'size' of the firm is limited; and refine the analysis of whether the existence of a sharable input, or the presence of an external effect, leads to the emergence of a firm. The book provides a wealth of insights for students and professionals in economics, business, law and organization.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), while the demand for higher speedups increases. The job of a restructuring compiler is to discover the dependence structure of a given program and the characteristics of the given machine. Much attention has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. We want to develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations. Then, the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
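To make the matrix view of iteration-level transformations concrete, here is a small hedged sketch (not an example from the book): a unimodular interchange matrix is applied to dependence distance vectors, and the transformation is legal only if every transformed distance stays lexicographically positive.

```python
# Model loop interchange on a 2-deep loop nest as a unimodular matrix acting on
# dependence distance vectors; legality requires the transformed distances to
# remain lexicographically positive. The distance vectors below are hypothetical.
import numpy as np

interchange = np.array([[0, 1],
                        [1, 0]])   # swaps the two loop indices

def lexicographically_positive(v) -> bool:
    for x in v:
        if x > 0:
            return True
        if x < 0:
            return False
    return False   # the zero vector is not lexicographically positive

distances = [np.array([1, -1]), np.array([0, 2])]
legal = all(lexicographically_positive(interchange @ d) for d in distances)
print(legal)   # False: the (1, -1) dependence forbids interchanging this nest
```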
Multimedia data streams will form a major part of the new generation of applications in high-speed networks. Continuous media streams, however, require transmission with guaranteed performance. In addition, many multimedia applications will require peer-to-multipeer communication. Guaranteed performance can only be provided with resource reservation in the network, and efficient multipeer communication must be based on multicast support in the lower layers of the network. Architecture and Protocols for High-Speed Networks focuses on techniques for building the networks that will meet the needs of these multimedia applications. In particular two areas of current research interest in such communication systems are covered in depth. These are the protocol related aspects, such as switched networks, ATM, MAC layer, network and transport layer; and the services and applications. Architecture and Protocols for High-Speed Networks contains contributions from leading world experts, giving the most up-to-date research available. It is an essential reference for all professionals, engineers and researchers working in the area of high-speed networks.
Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering. Computational science and engineering is increasingly becoming an emerging and promising discipline shaping future research and development activities in academia and industry, ranging from engineering and science to finance, economics, arts and humanitarian fields. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem-solving environments. The papers presented in this volume are specially selected to address the most up-to-date ideas, results, work-in-progress and research experience in the area of high performance computational techniques for science and engineering applications. This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computational science and engineering experts and researchers but for all teachers and administrators interested in high performance computational techniques.
Floating Gate Devices: Operation and Compact Modeling focuses on standard operations and compact modeling of memory devices based on the Floating Gate architecture. Floating Gate devices are the building blocks of Flash, EPROM and EEPROM memories. Flash memories, which are the most versatile nonvolatile memories, are widely used to store code (BIOS, communication protocol, identification code, ...) and data (solid-state hard disks, Flash cards for digital cameras, ...).
Web caching and content delivery technologies provide the infrastructure on which systems are built for the scalable distribution of information. This proceedings of the eighth annual workshop captures a cross-section of the latest issues and techniques of interest to network architects and researchers in large-scale content delivery. Topics covered include the distribution of streaming multimedia, edge caching and computation, multicast, delivery of dynamic content, enterprise content delivery, streaming proxies and servers, content transcoding, replication and caching strategies, peer-to-peer content delivery, and Web prefetching.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well-thought-out strategy of trade-offs between compilers and computer architectures is the key to designing highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with IEEE HPCA-7 in Monterrey, Mexico, in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
This book introduces the areas of image processing and data-parallel processing. It covers a number of standard algorithms in image processing and describes their parallel implementation. The programming language chosen for all examples is a structured parallel programming language which is ideal for educational purposes. It has a number of advantages over C, and since all image processing tasks are inherently parallel, using a parallel language for presentation actually simplifies the subject matter. This results in shorter source code and a better understanding. Sample programs and a free compiler are available on an accompanying Web site.
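The data-parallel style described above can be hinted at with a minimal sketch; the book uses its own structured parallel language, so the NumPy version below is only an illustrative stand-in, expressing a whole-image threshold as one parallel operation instead of explicit pixel loops.

```python
# Illustrative only: a whole-image threshold written as a single data-parallel
# operation applied to every pixel, with no explicit loops over rows and columns.
import numpy as np

image = np.random.randint(0, 256, size=(4, 6), dtype=np.uint8)   # toy grey-scale image
binary = (image > 128).astype(np.uint8)                          # 1 where bright, 0 where dark
print(binary)
```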
Optical media are now widely used in the telecommunication networks, and the evolution of optical and optoelectronic technologies tends to show that their wide range of techniques could be successfully introduced in shorter-distance interconnection systems. This book bridges the existing gap between research in optical interconnects and research in high-performance computing and communication systems, of which parallel processing is just an example. It also provides a more comprehensive understanding of the advantages and limitations of optics as applied to high-speed communications. Audience: The book will be a vital resource for researchers and graduate students of optical interconnects, computer architectures and high-performance computing and communication systems who wish to understand the trends in the newest technologies, models and communication issues in the field.
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. Importantly, the support obtained also made possible what was probably the first conference participation of 6 young researchers. The number of East-European participants was relatively high. These results are especially valuable since, in contrast to the usual two-year period, the present meeting was organized just one year after the last SCAN-xx conference.