This book provides embedded software developers with techniques for programming heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) capable of executing multiple applications simultaneously. It describes a set of algorithms and methodologies to narrow the software productivity gap, as well as an in-depth description of the underlying problems and challenges of today's programming practices. The authors present four different tool flows: a parallelism extraction flow for applications written in the C programming language, a mapping and scheduling flow for parallel applications, a special mapping flow for baseband applications in the context of Software Defined Radio (SDR), and a final flow for analyzing multiple applications at design time. The tool flows are evaluated on Virtual Platforms (VPs), which mimic different characteristics of state-of-the-art heterogeneous MPSoCs.
This book is about security in embedded systems and provides an authoritative reference to all aspects of security in system-on-chip (SoC) designs. The authors discuss issues ranging from security requirements in SoC designs and the definition of architectures and design choices that enforce and validate security policies, to the trade-offs and conflicts involving security, functionality, and debug requirements. Coverage also includes case studies from the "trenches" of current industrial practice in the design, implementation, and validation of security-critical embedded systems. Provides an authoritative reference and summary of the current state of the art in security for embedded systems, hardware IPs and SoC designs; Takes a "cross-cutting" view of security that interacts with different design and validation components such as architecture, implementation, verification, and debug, each enforcing unique trade-offs; Includes a high-level overview, detailed analysis of implementation, and relevant case studies on design/verification/debug issues related to IP/SoC security.
FPGAs (Field-Programmable Gate Arrays) can be found in applications such as smart phones, MP3 players, and medical imaging devices, as well as in aerospace and defense technology. FPGAs consist of logic blocks and programmable interconnects. This allows an engineer to start with a blank slate and program the FPGA for a specific task, for instance digital signal processing, or a specific device, for example a software-defined radio. Due to the short time to market and the ability to reprogram an FPGA to fix bugs without a respin, FPGAs are in increasingly high demand.
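To make the "blank slate" idea concrete, here is a minimal Python sketch (not taken from any of the books listed here) of how a single FPGA logic element works: a lookup table (LUT) whose configuration bits determine which logic function the same piece of hardware computes. Reprogramming the device amounts to loading different configuration bits.

```python
# Toy model of one FPGA logic element: a 4-input lookup table (LUT).
# The "bitstream" here is just the LUT's 16-entry truth table.
# (Illustrative sketch only; real LUTs are configured by the FPGA tools.)

def make_lut(truth_table):
    """Return a 4-input logic function defined by 16 configuration bits."""
    assert len(truth_table) == 16
    def lut(a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return truth_table[index]
    return lut

# "Program" the element as a 4-input AND gate...
and4 = make_lut([0] * 15 + [1])
# ...then "reprogram" the same element as a 4-input XOR (parity) gate.
xor4 = make_lut([bin(i).count("1") % 2 for i in range(16)])

print(and4(1, 1, 1, 1))  # 1
print(xor4(1, 0, 1, 1))  # 1 (odd parity)
```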
This book provides a comprehensive overview of flow-based microfluidic VLSI. The authors describe and solve, in a holistic manner, practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips by facilitating the full exploitation of the design complexities that are possible with current fabrication techniques.
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. * Examines how to optimize the architecture of hardware design for error correcting codes; * Presents error correction codes from theory to optimized architecture for the current and the next generation of standards; * Provides coverage of industrial users' needs for advanced error correcting techniques. Advanced Hardware Design for Error Correcting Codes includes a foreword by Claude Berrou.
This book covers key concepts in the design of 2D and 3D Network-on-Chip (NoC) interconnects. It highlights design challenges and discusses the fundamentals of NoC technology, including architectures, algorithms and tools. Coverage focuses on topology exploration for both 2D and 3D NoCs, routing algorithms, NoC router design, NoC-based system integration, verification and testing, and NoC reliability. Case studies are used to illuminate new design methodologies.
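As a taste of the routing-algorithm material such a book covers, the sketch below implements dimension-ordered (XY) routing, a classic deadlock-free scheme for 2D mesh NoCs. It is a generic textbook algorithm, not a method taken from this particular book.

```python
# Dimension-ordered (XY) routing on a 2D mesh NoC: route fully in X
# first, then in Y. Because packets never turn from Y back to X, the
# scheme is deadlock-free on a mesh. (Illustrative sketch only.)

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst on a 2D mesh."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # correct the X coordinate first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then correct the Y coordinate
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```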
This book introduces new massively parallel multiprocessor system-on-chip (MPSoC) architectures called invasive tightly coupled processor arrays (TCPAs). It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees on non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses that GPUs rely on, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desired number of processing elements (PEs) or a region within a TCPA exclusively for an application according to its performance requirements. It not only presents models for implementing invasion strategies in hardware, but also proposes two distinct design flavors for dedicated hardware components to support invasion control on TCPAs.
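Invasive computing is usually described in terms of an invade/infect/retreat cycle: an application first claims resources, then runs on them, then releases them. The toy Python sketch below mimics that cycle for a claim of processing elements; the ToyTCPA class and its methods are hypothetical stand-ins for illustration, not the book's actual programming interface.

```python
# Toy sketch of the invade/infect/retreat resource-claiming cycle.
# All names here are hypothetical; local solves run sequentially.

class ToyTCPA:
    def __init__(self, rows, cols):
        self.free = {(r, c) for r in range(rows) for c in range(cols)}

    def invade(self, n_pes):
        """Claim n_pes free processing elements exclusively."""
        claim = set(list(self.free)[:n_pes])
        if len(claim) < n_pes:
            raise RuntimeError("invasion failed: not enough free PEs")
        self.free -= claim
        return claim

    def infect(self, claim, kernel, data):
        """Load and run a loop kernel on each claimed PE."""
        return [kernel(pe, d) for pe, d in zip(sorted(claim), data)]

    def retreat(self, claim):
        """Release the claimed PEs back to the free pool."""
        self.free |= claim

tcpa = ToyTCPA(4, 4)
claim = tcpa.invade(4)
results = tcpa.infect(claim, lambda pe, d: d * d, [1, 2, 3, 4])
tcpa.retreat(claim)
print(results)  # [1, 4, 9, 16]
```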
This book provides a comprehensive guide to the design of sustainable and green computing (GSC) systems. Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking.
These are the proceedings of the 20th international conference on domain decomposition methods in science and engineering. Domain decomposition methods are iterative methods for solving the often very large linear or nonlinear systems of algebraic equations that arise when various problems in continuum mechanics are discretized using finite elements. They are designed for massively parallel computers and take the memory hierarchy of such systems into account. This is essential for approaching peak floating point performance. There is an increasingly well-developed theory which is having a direct impact on the development and improvement of these algorithms.
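For readers unfamiliar with the technique, the sketch below shows a damped overlapping additive Schwarz iteration, one of the simplest domain decomposition methods, applied to a small 1D Poisson system. The grid size, overlap, and damping factor are arbitrary choices for illustration, and a finite-difference matrix stands in for a finite element discretization.

```python
# Overlapping additive Schwarz for the 1D Poisson problem -u'' = f.
# Each subdomain solve could run on a separate processor; here they
# are sequential. (Illustrative only; parameters are arbitrary.)
import numpy as np

n = 100                                  # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)                           # right-hand side f = 1

# Two overlapping subdomains (index sets) with an overlap of 10 points.
subdomains = [np.arange(0, 60), np.arange(50, n)]

u = np.zeros(n)
for it in range(50):
    r = f - A @ u                        # global residual
    if np.linalg.norm(r) < 1e-8:
        break
    du = np.zeros(n)
    for idx in subdomains:               # local solves, summed (additive)
        Ai = A[np.ix_(idx, idx)]
        du[idx] += np.linalg.solve(Ai, r[idx])
    u += 0.5 * du                        # damping for additive Schwarz

print(f"residual after {it} iterations: {np.linalg.norm(f - A @ u):.2e}")
```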
1 The economic importance of protective measures for EDP systems
2 Prerequisites and requirements for the security-appropriate design of a high-security area
3 Different threats to data centers
3.1 Break-in/theft, sabotage and vandalism
3.2 Fire and smoke
3.3 Air-conditioning malfunctions
3.4 Water ingress
3.5 Electrical supply
3.5.1 Maintaining the power supply
3.5.2 Overvoltage and lightning strikes
3.6 Data loss
3.7 Other hazards
4 Possible analysis methods
5 A scheme for concrete risk and protection-level assessment
5.1 Measures against break-in, theft, sabotage and vandalism
5.2 Measures against fire and smoke
5.3 Measures against air-conditioning malfunctions
5.4 Measures against damage from water or faulty supply
5.5 Measures for maintaining an uninterrupted power supply
5.6 Measures against data loss
5.7 Other security-relevant criteria
5.8 Summary rating of the analysed risks
6 Security management: organizing and implementing the technical security measures
7 Security-conscious EDP operation
8 Organizational steps for permanently maintaining the level of the originally designed security concept
8.1 Human aspects
8.2 Technical measures
9 Disaster preparedness
9.1 Disaster plan
9.2 Backup concepts
9.3 Insurance concepts for high-security areas
10 Closing remarks and outlook
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer museum and is dedicated to portraying the past, present and future of information technology. In the "Year of Informatics 2006" the HNF was particularly keen to examine the history of this still quite young discipline. The short-lived nature of information technologies means that individuals, inventions, devices, institutes and companies "age" more rapidly than in many other specialties. And in the nature of things the group of computer pioneers from the early days is growing smaller all the time. To supplement a planned new exhibit on "Software and Informatics" at the HNF, the idea arose of recording the history of informatics in an accompanying publication. My search for suitable sources and authors very quickly came up with the right answer, the very first name in Germany: Friedrich L. Bauer, Professor Emeritus of Mathematics at the TU in Munich, one of the fathers of informatics in Germany and for decades the indefatigable author of the "Historical Notes" column of the journal Informatik Spektrum. Friedrich L. Bauer was already the author of two works on the history of informatics, published in different decades and in different books. Both of them are notable for their knowledgeable, extremely comprehensive and yet compact style. My obvious course was to motivate this author to amalgamate, supplement and illustrate his previous work.
Given the widespread use of real-time multitasking systems, there are tremendous optimization opportunities if reconfigurable computing can be effectively incorporated while maintaining the performance and other design constraints of typical applications. The focus of this book is to describe the dynamic reconfiguration techniques that can be safely used in real-time systems. The book provides comprehensive approaches that consider the synergistic effects of computation, communication, and storage together to significantly improve overall performance, power, energy and temperature.
This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and Functional Coverage. Readers will benefit from the step-by-step approach to learning the language and methodology nuances of both SystemVerilog Assertions and Functional Coverage, which will enable them to uncover hidden and hard-to-find bugs, point directly to the source of a bug, model complex timing checks in a clean and easy way, and objectively answer the question 'have we functionally verified everything?'. Written by a professional end-user of ASIC/SoC/CPU and FPGA design and verification, this book explains each concept with easy-to-understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification and exhaustive coverage models for functional coverage, thereby drastically reducing their time to design, debug and cover. This updated third edition addresses the latest functional set released in the IEEE 1800-2012 LRM, including numerous additional operators and features. Additionally, many of the explanations of concurrent assertions and operators are enhanced with more examples and figures. * Covers in its entirety the latest IEEE 1800-2012 LRM syntax and semantics; * Covers both the SystemVerilog Assertions and SystemVerilog Functional Coverage languages and methodologies; * Provides practical applications of the what, how and why of Assertion Based Verification and Functional Coverage methodologies; * Explains each concept in a step-by-step fashion and applies it to a practical real-life example; * Includes 6 practical LABs that enable readers to put into practice the concepts explained in the book.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need not only to understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
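The central ratio is easy to state: efficiency = performance / power. The toy numbers below, made up purely for illustration, show that a performance gain and a power reduction both raise the same metric, which is the passage's point about the two levers being equally influential.

```python
# Power efficiency as performance per watt: a faster code path and a
# lower-power configuration improve the same ratio. Numbers are
# invented for illustration only.

def efficiency(perf_gflops, power_watts):
    return perf_gflops / power_watts

baseline = efficiency(100.0, 200.0)        # 0.500 GFLOP/s per watt
tuned_code = efficiency(130.0, 200.0)      # +30% performance -> 0.650
idle_power_off = efficiency(100.0, 160.0)  # -20% power       -> 0.625

print(baseline, tuned_code, idle_power_off)
```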
This book presents research in an interdisciplinary field, resulting from the vigorous and fruitful cross-pollination between traditional deontic logic and computer science. AI researchers have used deontic logic as one of the tools in modelling legal reasoning. Computer scientists have discovered that computer systems (including their interaction with other computer systems and with human agents) can often be productively modelled as norm-governed. So, for example, deontic logic has been applied by computer scientists for specifying bureaucratic systems, access and security policies, and soft design or integrity constraints, and for modelling fault tolerance. In turn, computer scientists and AI researchers have also discovered (and made it clear to the rest of us) that various formal tools (e.g. nonmonotonic, temporal and dynamic logics) developed in computer science and artificial intelligence have interesting applications to traditional issues in deontic logic. This volume presents some of the best work done in this area, with the selection at once reflecting the general interdisciplinary (and international) character that this area of research has taken on, as well as reflecting the more specific recent inter-disciplinary developments between traditional deontic logic and computer science.
Wafer-scale integration has long been the dream of system designers. Instead of chopping a wafer into a few hundred or a few thousand chips, one would just connect the circuits on the entire wafer. What an enormous capability wafer-scale integration would offer: all those millions of circuits connected by high-speed on-chip wires. Unfortunately, the best known optical systems can provide suitably fine resolution only over an area much smaller than a whole wafer. There is no known way to pattern a whole wafer with transistors and wires small enough for modern circuits. Statistical defects present a firmer barrier to wafer-scale integration. Flaws appear regularly in integrated circuits; the larger the circuit area, the more probable it is that there is a flaw. If such flaws were the result only of dust, one might reduce their numbers, but flaws are also the inevitable result of small scale. Each feature on a modern integrated circuit is carved out by only a small number of photons in the lithographic process. Each transistor gets its electrical properties from only a small number of impurity atoms in its tiny area. Inevitably, the quantized nature of light and the atomic nature of matter produce statistical variations in both the number of photons defining each tiny shape and the number of atoms providing the electrical behavior of tiny transistors. No known way exists to eliminate such statistical variation, nor may any be possible.
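The classic Poisson yield model makes this area argument quantitative: if defects land randomly with density D per unit area, the probability that a die of area A escapes all of them is exp(-A*D). The defect density and wafer area below are assumed values for illustration, not figures from the book.

```python
# Poisson yield model: probability that a die of area A (cm^2) is
# flaw-free given defect density D (defects per cm^2) is exp(-A*D).
# D and the wafer area are assumptions chosen for illustration.
import math

def yield_poisson(area_cm2, defects_per_cm2):
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.5  # defects per cm^2 (assumed)
print(f"1 cm^2 die:            {yield_poisson(1.0, D):.1%}")    # ~60.7%
print(f"whole 300mm wafer (~700 cm^2): {yield_poisson(700.0, D):.3g}")
# ~1e-152: effectively no flaw-free whole wafers, hence the barrier.
```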
An epic account of the decades-long battle to control what has emerged as the world's most critical resource—microchip technology—with the United States and China increasingly in conflict. You may be surprised to learn that microchips are the new oil—the scarce resource on which the modern world depends. Today, military, economic, and geopolitical power are built on a foundation of computer chips. Virtually everything—from missiles to microwaves, smartphones to the stock market—runs on chips. Until recently, America designed and built the fastest chips and maintained its lead as the #1 superpower. Now, America's edge is slipping, undermined by competitors in Taiwan, Korea, Europe, and, above all, China. Today, as Chip War reveals, China, which spends more money each year importing chips than it spends importing oil, is pouring billions into a chip-building initiative to catch up to the US. At stake is America's military superiority and economic prosperity. Economic historian Chris Miller explains how the semiconductor came to play a critical role in modern life, and how the U.S. became dominant in chip design and manufacturing and applied this technology to military systems. America's victory in the Cold War and its global military dominance stem from its ability to harness computing power more effectively than any other power. But here, too, China is catching up, with its chip-building ambitions and military modernization going hand in hand. America has let key components of the chip-building process slip out of its grasp, contributing not only to a worldwide chip shortage but also to a new Cold War with a superpower adversary that is desperate to bridge the gap. Illuminating, timely, and fascinating, Chip War shows that, to make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips.
Today's semiconductor memory market is divided between two types of memory: DRAM and Flash. Each has its own advantages and disadvantages. While DRAM is fast but volatile, Flash is non-volatile but slow. A memory system based on self-organized quantum dots (QDs) as storage nodes could combine the advantages of modern DRAM and Flash, merging the latter's non-volatility with very fast write times. This thesis investigates the electronic properties of, and carrier dynamics in, self-organized quantum dots by means of time-resolved capacitance spectroscopy and time-resolved current measurements. The first aim is to study the localization energy of various QD systems in order to assess the potential of increasing the storage time in QDs to non-volatility. Surprisingly, it is found that the carrier capture cross-sections of QDs have a major influence on carrier storage, at times counterbalancing the effect of the localization energy. The second aim is to study the coupling between a layer of self-organized QDs and a two-dimensional hole gas (2DHG), which is relevant for the read-out process in memory systems. The investigation yields the discovery of many-particle ground states in the QD ensemble. In addition to its technological relevance, the thesis also offers new insights into the fascinating field of nanostructure physics.
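The interplay the thesis describes can be seen in the standard thermally activated emission-time expression used in capacitance transient spectroscopy, where retention grows exponentially with the localization energy but shrinks with the capture cross-section. The parameter values in this sketch are generic assumptions, not the thesis's measurements.

```python
# Standard emission-time expression from capacitance transient
# spectroscopy: tau = exp(E_loc / kT) / (sigma * v_th * N_eff).
# A large capture cross-section sigma can eat into the retention
# gained from a deep localization energy E_loc.
# All parameter values below are generic assumptions.
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def storage_time(e_loc_ev, sigma_cm2, T=300.0, v_th=1e7, n_eff=1e19):
    """Retention time (s) for a carrier bound with energy e_loc_ev (eV)."""
    return math.exp(e_loc_ev / (k_B * T)) / (sigma_cm2 * v_th * n_eff)

print(storage_time(0.6, 1e-14))  # deep level, moderate cross-section
print(storage_time(0.6, 1e-11))  # same depth, larger sigma: ~1000x shorter
```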
This book addresses the topic of exploiting enterprise-linked data with a particular focus on knowledge construction and accessibility within enterprises. It identifies the gaps between the requirements of enterprise knowledge consumption and "standard" data consuming technologies by analysing real-world use cases, and proposes the enterprise knowledge graph to fill such gaps. It provides concrete guidelines for effectively deploying linked-data graphs within and across business organizations. It is divided into three parts, focusing on the key technologies for constructing, understanding and employing knowledge graphs. Part 1 introduces basic background information and technologies, and presents a simple architecture to elucidate the main phases and tasks required during the lifecycle of knowledge graphs. Part 2 focuses on technical aspects; it starts with state-of-the art knowledge-graph construction approaches, and then discusses exploration and exploitation techniques as well as advanced question-answering topics concerning knowledge graphs. Lastly, Part 3 demonstrates examples of successful knowledge graph applications in the media industry, healthcare and cultural heritage, and offers conclusions and future visions.
This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, and high-speed data acquisition (DAQ) and control systems. Coverage of software development includes the main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner that is highly accessible to practicing engineers and lead to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high-speed data acquisition system.
You may like...
Applications of Bat Algorithm and its… by Nilanjan Dey, V. Rajinikanth (Hardcover): R4,348 (Discovery Miles 43 480)
Computing Platforms for Software-Defined… by Waqar Hussain, Jari Nurmi, … (Hardcover)
XML in Data Management - Understanding… by Peter Aiken, M. David Allen (Paperback): R1,218 (Discovery Miles 12 180)
Java Foundations - Pearson New… by John Lewis, Peter DePasquale, … (Paperback): R2,777 (Discovery Miles 27 770)