An epic account of the decades-long battle to control what has emerged as the world's most critical resource—microchip technology—with the United States and China increasingly in conflict. You may be surprised to learn that microchips are the new oil—the scarce resource on which the modern world depends. Today, military, economic, and geopolitical power are built on a foundation of computer chips. Virtually everything—from missiles to microwaves, smartphones to the stock market—runs on chips. Until recently, America designed and built the fastest chips and maintained its lead as the #1 superpower. Now, America's edge is slipping, undermined by competitors in Taiwan, Korea, Europe, and, above all, China. Today, as Chip War reveals, China, which spends more money each year importing chips than it spends importing oil, is pouring billions into a chip-building initiative to catch up to the US. At stake are America's military superiority and economic prosperity. Economic historian Chris Miller explains how the semiconductor came to play a critical role in modern life and how the U.S. became dominant in chip design and manufacturing and applied this technology to military systems. America's victory in the Cold War and its global military dominance stem from its ability to harness computing power more effectively than any other power. But here, too, China is catching up, with its chip-building ambitions and military modernization going hand in hand. America has let key components of the chip-building process slip out of its grasp, contributing not only to a worldwide chip shortage but also to a new Cold War with a superpower adversary that is desperate to bridge the gap. Illuminating, timely, and fascinating, Chip War shows that, to make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips.
Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high-performance systems.
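To give a flavour of the kind of parameterized, architecture-independent performance formulation described above, here is a minimal sketch. The cost model (work divided evenly, a communication term growing with processor count) and all rates are illustrative assumptions, not the book's actual analysis of diamond-shaped DAGs.

```python
# Illustrative sketch only: a parameterized performance estimate in the spirit of
# architecture-independent formulations. The cost model below (work split evenly,
# communication growing with processor count) is an assumption for demonstration,
# not the book's analysis of diamond-shaped directed acyclic graphs.

def parallel_time(work, boundary_words, procs, comp_rate, comm_rate):
    """Estimated execution time: a computation term plus a communication term.

    work            -- total computation units in the task graph
    boundary_words  -- words exchanged per neighbour per processor (assumed)
    procs           -- number of processors
    comp_rate       -- computation units per second per processor
    comm_rate       -- words per second over the interconnect
    """
    t_comp = work / (procs * comp_rate)
    t_comm = boundary_words * (procs - 1) / comm_rate  # assumed neighbour exchange cost
    return t_comp + t_comm

def speedup(work, boundary_words, procs, comp_rate, comm_rate):
    t1 = parallel_time(work, boundary_words, 1, comp_rate, comm_rate)
    tp = parallel_time(work, boundary_words, procs, comp_rate, comm_rate)
    return t1 / tp

if __name__ == "__main__":
    # The same workload on two hypothetical machines that differ only in
    # communication rate, showing how the parameterization exposes the trade-off.
    for comm_rate in (1e6, 1e8):
        print(comm_rate, [round(speedup(1e9, 1e4, p, 1e8, comm_rate), 2)
                          for p in (2, 8, 32)])
```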
Are memory applications more critical than they have been in the past? Yes, but even more critical is the number of designs and the sheer number of bits on each design. Catastrophes that were avoided in the past, because memories were small, will certainly occur if design and test engineers do not do their jobs very carefully. High Performance Memory Testing: Design Principles, Fault Modeling and Self Test is written for the professional and the researcher, to help them understand the memories that are being tested.
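The self-test algorithms the title refers to are typically March tests. As a purely illustrative sketch (the memory model and the single stuck-at fault below are assumptions for demonstration, not the book's fault models), here is a software model of the classic March C- test detecting an injected fault.

```python
# Minimal sketch: a software model of the classic March C- test applied to a
# simulated memory array. The single stuck-at-0 fault injected below is an
# assumption for demonstration; real memory test and fault models (coupling
# faults, pattern sensitivity, BIST hardware) are far richer.

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero          # address of a stuck-at-0 cell, or None

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.stuck else value

    def read(self, addr):
        return self.cells[addr]

def march_c_minus(mem, size):
    """Return the addresses at which an element of March C- observed a mismatch."""
    failures = set()
    elements = [
        (range(size),             None, 0),     # up:   w0
        (range(size),             0,    1),     # up:   r0, w1
        (range(size),             1,    0),     # up:   r1, w0
        (range(size - 1, -1, -1), 0,    1),     # down: r0, w1
        (range(size - 1, -1, -1), 1,    0),     # down: r1, w0
        (range(size - 1, -1, -1), 0,    None),  # down: r0
    ]
    for addresses, expect, write_val in elements:
        for addr in addresses:
            if expect is not None and mem.read(addr) != expect:
                failures.add(addr)
            if write_val is not None:
                mem.write(addr, write_val)
    return sorted(failures)

if __name__ == "__main__":
    mem = FaultyMemory(size=16, stuck_at_zero=5)
    print("failing addresses:", march_c_minus(mem, 16))   # expect [5]
```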
In many organizations, information technology has become crucial to the support, sustainability and growth of the business. This pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT governance. Implementing Information Technology Governance: Models, Practices and Cases presents insights gained from literature research and case studies to provide practical guidance for organizations that want to start implementing IT governance or to improve an existing governance model. The book also provides a detailed set of IT governance structures, processes, and relational mechanisms that can be leveraged to implement IT governance in practice.
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
This book provides an overview of current Intellectual Property (IP) based System-on-Chip (SoC) design methodology and highlights how the security of IP can be compromised at various stages of the overall SoC design-fabrication-deployment cycle. Readers will gain a comprehensive understanding of the security vulnerabilities of different types of IP, and the book shows how these vulnerabilities can be overcome through an efficient combination of proactive countermeasures and design-for-security solutions, as well as a wide variety of IP security and trust assessment and validation techniques. It serves as a single source of reference for system designers and practitioners designing secure, reliable and trustworthy SoCs.
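As one concrete instance of a design-for-security solution, the hardware-security literature often discusses logic locking, in which extra key gates make an IP block function correctly only when a secret key is supplied. The toy behavioural sketch below is purely illustrative and is not taken from the book.

```python
# Toy sketch of logic locking, a commonly cited design-for-security technique for
# hardware IP protection. Locking a behavioural Python model with XOR "key gates"
# is only an illustration; the book's actual countermeasures may differ.

SECRET_KEY = 0b1011  # assumed 4-bit key known only to the rightful owner

def original_ip(a, b):
    """The unprotected IP block: here just a 4-bit adder, as an example."""
    return (a + b) & 0xF

def locked_ip(a, b, key):
    """Locked version: XOR key gates corrupt the result unless key == SECRET_KEY."""
    return original_ip(a, b) ^ (key ^ SECRET_KEY)

if __name__ == "__main__":
    a, b = 6, 7
    print("correct key :", locked_ip(a, b, SECRET_KEY))   # 13, matches original_ip
    print("wrong key   :", locked_ip(a, b, 0b0000))       # corrupted output
```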
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of the variables involved in an iteration.
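The producer-consumer, stream-transforming view of dataflow described here can be loosely sketched with Python generators; this conveys only the informal intuition, not the synchronous semantics of SIGNAL/POLYCHRONY or LUCID's temporal equations.

```python
# A rough, informal sketch of the producer/filter/consumer view of dataflow using
# Python generators. This only conveys the stream-of-values intuition; it does not
# model the synchronous semantics of SIGNAL/POLYCHRONY or LUCID's temporal logic.

def producer(n):
    """Source filter: the stream 0, 1, 2, ..., n-1."""
    for i in range(n):
        yield i

def running_sum(stream):
    """Transforming filter: at each instant, the sum of all values seen so far."""
    total = 0
    for x in stream:
        total += x
        yield total

def evens_only(stream):
    """Filtering station: pass through only even values."""
    for x in stream:
        if x % 2 == 0:
            yield x

if __name__ == "__main__":
    # Wire the stations into a small network: producer -> running_sum -> evens_only.
    pipeline = evens_only(running_sum(producer(10)))
    print(list(pipeline))   # [0, 6, 10, 28, 36]
```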
Cache and Interconnect Architectures in Multiprocessors. Eilat, Israel, May 25-26, 1989. Michel Dubois, University of Southern California; Shreekant S. Thakkar, Sequent Computer Systems. The aim of the workshop was to bring together researchers working on cache coherence protocols for shared-memory multiprocessors with various interconnect architectures. Shared-memory multiprocessors have become viable systems for many applications. Bus-based shared-memory systems (e.g. Sequent's Symmetry, Encore's Multimax) are currently limited to 32 processors. The first goal of the workshop was to learn about the performance of applications on current cache-based systems. The second goal was to learn about new network architectures and protocols for future scalable systems. These protocols and interconnects would allow shared-memory architectures to scale beyond current limitations. The workshop had 20 speakers who talked about their current research. The discussions were lively and cordial enough to keep the participants away from the wonderful sand and sun for two days. The participants got to know each other well and were able to share their thoughts in an informal manner. The workshop was organized into several sessions. The summary of each session is described below. This book presents revisions of some of the papers presented at the workshop.
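For readers new to the area, the flavour of a bus-based cache coherence protocol can be conveyed by the textbook MSI snooping protocol. The minimal single-line simulation below is a generic teaching example, not a protocol from the workshop papers.

```python
# Minimal sketch of the textbook MSI snooping protocol for a single cache line
# shared by two caches on a common bus. This is a generic teaching model (not a
# protocol from the workshop papers) and omits data transfer, write-back and timing.

MODIFIED, SHARED, INVALID = "M", "S", "I"

class Cache:
    def __init__(self, name):
        self.name = name
        self.state = INVALID

    # Processor-side events
    def read(self, bus):
        if self.state == INVALID:
            bus.broadcast("BusRd", self)
            self.state = SHARED

    def write(self, bus):
        if self.state != MODIFIED:
            bus.broadcast("BusRdX", self)   # read-for-ownership: invalidate other copies
            self.state = MODIFIED

    # Bus-side (snooping) events
    def snoop(self, op):
        if op == "BusRdX":
            self.state = INVALID
        elif op == "BusRd" and self.state == MODIFIED:
            self.state = SHARED             # supply the dirty data and downgrade

class Bus:
    def __init__(self, caches):
        self.caches = caches

    def broadcast(self, op, origin):
        for cache in self.caches:
            if cache is not origin:
                cache.snoop(op)

if __name__ == "__main__":
    c0, c1 = Cache("P0"), Cache("P1")
    bus = Bus([c0, c1])
    for action, expected in [(lambda: c0.read(bus), "S I"),
                             (lambda: c1.write(bus), "I M"),
                             (lambda: c0.read(bus), "S S")]:
        action()
        print(c0.state, c1.state, "-- expected", expected)
```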
This book provides students and practicing chip designers with an easy-to-follow yet thorough introductory treatment of the most promising emerging memories under development in the industry. Focusing on the chip designer rather than the end user, it offers expanded, up-to-date coverage of emerging-memory circuit design. After an introduction to the established solid-state memories and the fundamental limitations they will soon encounter, the working principle and main technology issues of each of the considered technologies (PCRAM, MRAM, FeRAM, ReRAM) are reviewed, and a range of design topics is explored: array organization, sensing and writing circuitry, programming algorithms and error-correction techniques are compared, highlighting the approach followed and the constraints for each of the technologies considered. Finally, the issue of radiation effects on memory devices is briefly treated, and some consideration is given to how emerging memories can find a place in the new memory paradigm required by future electronic systems. This book is an up-to-date and comprehensive introduction for students in courses on memory circuit design or advanced digital courses in VLSI or CMOS circuit design. It also serves as an essential, one-stop resource for academics, researchers and practicing engineers.
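Programming algorithms for such memories commonly take a program-and-verify form: apply a write pulse, read the cell back, and repeat with escalating pulses until the target level is reached. The sketch below uses an invented resistive-cell model purely for illustration.

```python
# Simplified sketch of a program-and-verify write loop of the kind used for many
# emerging memories (PCRAM/ReRAM-style resistance programming). The cell model,
# pulse step and thresholds below are invented for illustration only.

import random

class ResistiveCell:
    """Toy cell: each set pulse lowers resistance by a noisy, amplitude-dependent factor."""
    def __init__(self, resistance=100_000.0):
        self.resistance = resistance

    def apply_set_pulse(self, amplitude):
        self.resistance *= random.uniform(0.55, 0.75) / amplitude

def program_and_verify(cell, target=10_000.0, max_pulses=10):
    """Pulse, read back, and stop once the cell verifies below the target resistance."""
    amplitude = 1.0
    for pulse in range(1, max_pulses + 1):
        cell.apply_set_pulse(amplitude)
        if cell.resistance <= target:          # verify (read) step
            return pulse                       # number of pulses needed
        amplitude *= 1.2                       # escalate the pulse for hard-to-set cells
    raise RuntimeError("cell failed to program; mark for error correction/repair")

if __name__ == "__main__":
    random.seed(0)
    print("pulses used:", program_and_verify(ResistiveCell()))
```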
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
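For reference, the algorithm accelerated in the case study is the Cooley-Tukey FFT, shown here in its standard recursive radix-2 textbook form (not the DataFlow-machine implementation described in the book).

```python
# The standard recursive radix-2 Cooley-Tukey FFT, shown only to remind readers of
# the algorithm accelerated in the book's case study; this is the textbook CPU
# version, not the DataFlow-machine implementation described there.

import cmath

def fft(x):
    """Return the DFT of x; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    evens = fft(x[0::2])
    odds = fft(x[1::2])
    result = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        result[k] = evens[k] + twiddle
        result[k + n // 2] = evens[k] - twiddle
    return result

if __name__ == "__main__":
    data = [1, 2, 3, 4, 0, 0, 0, 0]
    print([round(abs(v), 3) for v in fft(data)])
```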
Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering. Computational science and engineering is an emerging and promising discipline shaping future research and development activities in academia and industry, in areas ranging from engineering, science, finance and economics to the arts and humanitarian fields. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem-solving environments. The papers presented in this volume were specially selected to address the most up-to-date ideas, results, work-in-progress and research experience in the area of high-performance computational techniques for science and engineering applications. This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress in August 2004 in Toulouse, France. The collection will be important not only for computational science and engineering experts and researchers but also for all teachers and administrators interested in high-performance computational techniques.
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.
This book is devoted to logic synthesis and design techniques for asynchronous circuits. It uses the mathematical theory of Petri Nets and asynchronous automata to develop practical algorithms implemented in a public domain CAD tool. Asynchronous circuits have so far been designed mostly by hand, and are thus much less common than their synchronous counterparts, which have enjoyed a high level of design automation since the mid-1970s. Asynchronous circuits, on the other hand, can be very useful to tackle clock distribution, modularity, power dissipation and electro-magnetic interference in digital integrated circuits. This book provides the foundation needed for CAD-assisted design of such circuits, and can also be used as the basis for a graduate course on logic design.
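As background for readers unfamiliar with the formalism, the token-game semantics of a Petri net can be simulated in a few lines; the tiny handshake-like net below is an invented example, not one of the book's Signal Transition Graphs.

```python
# Background sketch: the basic token-game semantics of a Petri net, the formalism
# underlying the specifications used in asynchronous circuit synthesis. The tiny
# two-place cycle below is invented for illustration.

class PetriNet:
    def __init__(self, marking, transitions):
        # marking: place -> token count; transitions: name -> (input places, output places)
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

if __name__ == "__main__":
    net = PetriNet(
        marking={"req_ready": 1, "ack_ready": 0},
        transitions={
            "req+": (["req_ready"], ["ack_ready"]),
            "ack+": (["ack_ready"], ["req_ready"]),
        },
    )
    for t in ["req+", "ack+", "req+"]:
        net.fire(t)
        print(t, net.marking)
```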
This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
Microcantilevers for Atomic Force Microscope Data Storage describes a research collaboration between IBM Almaden and Stanford University in which a new mass data storage technology was evaluated. This technology is based on the use of heated cantilevers to form submicron indentations on a polycarbonate surface, and piezoresistive cantilevers to read those indentations. Microcantilevers for Atomic Force Microscope Data Storage describes how silicon micromachined cantilevers can be used for high-density topographic data storage on a simple substrate such as polycarbonate. The cantilevers can be made to incorporate resistive heaters (for thermal writing) or piezoresistive deflection sensors (for data readback). The primary audience for Microcantilevers for Atomic Force Microscope Data Storage is industrial and academic workers in the microelectromechanical systems (MEMS) area. It will also be of interest to researchers in the data storage industry who are investigating future storage technologies.
The communication complexity of two-party protocols is a complexity measure only 15 years old, but it is already considered to be one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for the study of the complexity of concrete computing problems in parallel information processing. Especially, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute the given task. Besides the estimation of the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for this problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and to the understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
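A standard worked example in this area (a generic one, not drawn specifically from this book) is the EQUALITY problem: any deterministic protocol must exchange roughly n bits in the worst case, whereas a randomized fingerprinting protocol needs only O(log n) bits to be correct with high probability.

```python
# Illustrative sketch of the classic randomized fingerprinting protocol for the
# EQUALITY problem, a standard example in communication complexity (generic, not
# taken from this book). Alice transmits a small random prime and her input's
# residue modulo that prime -- O(log n) bits -- instead of the ~n bits any
# deterministic protocol must exchange in the worst case.

import random

def random_prime(limit):
    """Pick a random prime below `limit` by rejection sampling (fine for a demo)."""
    def is_prime(m):
        return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    while True:
        candidate = random.randrange(2, limit)
        if is_prime(candidate):
            return candidate

def alice_message(x_bits):
    """Alice: choose a prime p of O(log n) bits and send (p, x mod p)."""
    n = len(x_bits)
    p = random_prime(max(n * n, 8))   # primes up to ~n^2 keep the error probability small
    return p, int(x_bits, 2) % p

def bob_accepts(y_bits, message):
    """Bob: accept iff his input has the same fingerprint modulo Alice's prime."""
    p, fingerprint = message
    return int(y_bits, 2) % p == fingerprint

if __name__ == "__main__":
    x = "1011001110001111"
    msg = alice_message(x)
    print("equal inputs accepted    :", bob_accepts(x, msg))                   # always True
    print("different inputs accepted:", bob_accepts("1011001110001110", msg))  # rarely True
```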
This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. Electronics, especially digital electronics, is involved in nearly all expanding branches of industry, and the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without an increase in cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers. It is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.
As is true of most technological fields, the software industry is constantly advancing and becoming more accessible to a wider range of people. The advancement and accessibility of these systems creates a need for understanding and research into their development. Optimizing Contemporary Application and Processes in Open Source Software is a critical scholarly resource that examines the prevalence of open source software systems as well as the advancement and development of these systems. Featuring coverage on a wide range of topics such as machine learning, empirical software engineering and management, and open source, this book is geared toward academicians, practitioners, and researchers seeking current and relevant research on the advancement and prevalence of open source software systems.
Smart cards or IC cards offer a huge potential for information processing purposes. The portability and processing power of IC cards allow for highly secure conditional access and reliable distributed information processing. IC cards that can perform highly sophisticated cryptographic computations are already available. Their application in the financial services and telecom industries is well known. But the potential of IC cards goes well beyond that. Their applicability in mainstream Information Technology and the Networked Economy is limited mainly by our imagination; the information processing power that can be gained by using IC cards remains as yet mostly untapped and is not well understood. Here lies a vast uncovered research area which we are only beginning to assess, and which will have a great impact on the eventual success of the technology. The research challenges range from electrical engineering on the hardware side to tailor-made cryptographic applications on the software side, and their synergies. This volume comprises the proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications (CARDIS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held at the Hewlett-Packard Labs in the United Kingdom in September 2000. CARDIS conferences are unique in that they bring together researchers who are active in all aspects of design of IC cards and related devices and environments, thus stimulating synergy between different research communities from both academia and industry. This volume presents the latest advances in smart card research and applications, and will be essential reading for smart card developers, smart card application developers, and computer science researchers involved in computer architecture, computer security, and cryptography.
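A typical mechanism behind the conditional access mentioned above is challenge-response authentication, in which the card proves possession of a secret key without revealing it. The sketch below uses an assumed shared key and HMAC-SHA-256 purely as an illustration, not a protocol from the CARDIS proceedings.

```python
# Minimal sketch of challenge-response authentication, the pattern behind much of
# the "conditional access" that smart cards provide. The shared key, HMAC-SHA-256
# and 8-byte challenge are illustrative choices, not a protocol from CARDIS 2000.

import hashlib
import hmac
import os

SHARED_KEY = b"demo-key-personalised-into-the-card"   # assumed provisioned secret

def card_respond(challenge, key=SHARED_KEY):
    """What the card computes internally: a MAC over the terminal's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def terminal_authenticates(card):
    """Terminal: send a fresh random challenge and verify the card's response."""
    challenge = os.urandom(8)
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card(challenge), expected)

if __name__ == "__main__":
    print("genuine card accepted:", terminal_authenticates(card_respond))
    fake_card = lambda challenge: hmac.new(b"guess", challenge, hashlib.sha256).digest()
    print("cloned card accepted :", terminal_authenticates(fake_card))
```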
A tutorial approach to using the UML modeling language in system-on-chip design:
- Based on the DAC 2004 tutorial; applicable for students and professionals
- Contributions by top-level international researchers
- The best work from the first UML for SoC workshop
- A unique combination of both UML capabilities and SoC design issues
- Condenses research and development ideas that are otherwise found only in multiple conference proceedings and many other books into one place
- Will be the seminal reference work for this area for years to come
This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase in algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use.
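As a small illustration of the interleaving stage mentioned above, a generic row-column block interleaver spreads a burst of channel errors apart before decoding. The geometry and the whole example are illustrative assumptions; UMTS, LTE and DVB-S2 each define their own, more elaborate interleaver structures.

```python
# Generic row-column block interleaver, shown to illustrate the interleaving stage
# mentioned above. Real standards (UMTS, LTE, DVB-S2) define their own interleavers;
# the 3x4 geometry here is just an example.

def interleave(symbols, rows, cols):
    """Write row by row, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    for i, s in enumerate(symbols):
        c, r = divmod(i, rows)
        out[r * cols + c] = s
    return out

if __name__ == "__main__":
    data = list("ABCDEFGHIJKL")                 # 3 rows x 4 columns
    tx = interleave(data, 3, 4)
    print("".join(tx))                          # AEIBFJCGKDHL
    # A burst of channel errors hits adjacent transmitted symbols...
    rx = tx[:4] + ["?"] * 3 + tx[7:]
    # ...but after deinterleaving the erasures are spread apart for the decoder.
    print("".join(deinterleave(rx, 3, 4)))
```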