Computer architecture & logic design
This book discusses analysis, design, and optimization techniques for streaming multiprocessor systems that must satisfy a given area, performance, and energy budget. The authors describe design flows for both application-specific and general-purpose streaming systems. Coverage also includes the use of machine learning for thermal optimization at run time, while an application is executing. The design flow described in this book extends to thermal and energy optimization with multiple applications running sequentially and concurrently.
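To picture what run-time, machine-learning-based thermal optimization can look like, here is a minimal illustrative sketch, not the book's method: a governor that fits a crude linear temperature model from observed (frequency, temperature) samples and picks the highest frequency predicted to stay under a thermal budget. The class name, frequency list, and temperature values are all assumptions invented for the example.

```python
# Illustrative sketch only: an online-learned linear thermal model used to
# choose an operating frequency. Numbers are placeholders, not book data.
class ThermalGovernor:
    def __init__(self, freqs_ghz, budget_c):
        self.freqs = freqs_ghz
        self.budget = budget_c
        self.samples = []                      # (frequency, observed temperature)

    def observe(self, freq, temp):
        self.samples.append((freq, temp))

    def _fit(self):
        """Least-squares fit of temp ~ a * freq + b from observed samples."""
        n = len(self.samples)
        sx = sum(f for f, _ in self.samples)
        sy = sum(t for _, t in self.samples)
        sxx = sum(f * f for f, _ in self.samples)
        sxy = sum(f * t for f, t in self.samples)
        denom = n * sxx - sx * sx
        if denom == 0:                         # all samples at one frequency
            return 0.0, max(t for _, t in self.samples)
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        return a, b

    def next_frequency(self):
        if len(self.samples) < 2:
            return min(self.freqs)             # be conservative until there is data
        a, b = self._fit()
        safe = [f for f in self.freqs if a * f + b <= self.budget]
        return max(safe) if safe else min(self.freqs)

gov = ThermalGovernor(freqs_ghz=[0.8, 1.2, 1.6, 2.0], budget_c=80.0)
gov.observe(0.8, 55.0)
gov.observe(1.6, 75.0)
print(gov.next_frequency())                    # -> 1.6 (2.0 GHz is predicted to exceed 80 C)
```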
For courses in engineering and technical management. System architecture is the study of early decision making in complex systems. This text teaches how to capture experience and analysis about early system decisions, and how to choose architectures that meet stakeholder needs, integrate easily, and evolve flexibly. With case studies written by leading practitioners, from hybrid cars to communications networks to aircraft, this text showcases the science and art of system architecture.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing." The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines that are less cumbersome to program, more convenient programming models, advanced programming languages and, especially, more sophisticated programming tools, as well as algorithms and applications.
This volume gives an overview of the state of the art in the development of all types of parallel computers and their application to a wide range of problem areas.
This book describes practical ASIC design scenarios, from simple to complex, using Verilog. It builds a story from the fundamentals of ASIC design to advanced RTL design concepts in Verilog. Looking at current trends of miniaturization, the contents provide practical information on the issues in ASIC design and synthesis using Synopsys DC and their solutions. The book explains how to write efficient RTL in Verilog and how to improve design performance. It also covers architecture design strategies, multiple clock domain designs, low-power design techniques, DFT, pre-layout STA, and the overall ASIC design flow with case studies. The contents of this book will be useful to practicing hardware engineers, students, and hobbyists looking to learn about ASIC design and synthesis.
This book describes a new design methodology that allows optimization-based synthesis of RF systems in a hierarchical multilevel approach, in which the system is designed in a bottom-up fashion, from the device level up to the (sub)system level. At each level of the design hierarchy, the authors discuss methods that increase the design robustness and increase the accuracy and efficiency of the simulations. The methodology described enables circuit sizing and layout in a complete and automated integrated manner, achieving optimized designs in significantly less time than with traditional approaches.
This book offers readers comprehensive coverage of security policy specification using new policy languages, implementation of security policies in Systems-on-Chip (SoC) designs - current industrial practice, as well as emerging approaches to architecting SoC security policies and security policy verification. The authors focus on a promising security architecture for implementing security policies, which satisfies the goals of flexibility, verification, and upgradability from the ground up, including a plug-and-play hardware block in which all policy implementations are enclosed. Using this architecture, they discuss the ramifications of designing SoC security policies, including effects on non-functional properties (power/performance), debug, validation, and upgrade. The authors also describe a systematic approach for "hardware patching", i.e., upgrading hardware implementations of security requirements safely, reliably, and securely in the field, meeting a critical need for diverse Internet of Things (IoT) devices. Provides comprehensive coverage of SoC security requirements, security policies, languages, and security architecture for current and emerging computing devices; Explodes myths and ambiguities in SoC security policy implementations, and provides a rigorous treatment of the subject; Demonstrates a rigorous, step-by-step approach to developing a diversity of SoC security policies; Introduces a rigorous, disciplined approach to "hardware patching", i.e., a secure technique for updating hardware functionality of computing devices in-field; Includes discussion of current and emerging approaches for security policy verification.
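As a toy illustration of what a security policy can look like when expressed as data and enforced by a central checker (this is not the book's policy language or architecture; the initiator, target, and operation names are made up), consider:

```python
# Toy illustration: a rule table screening transactions between SoC IP blocks,
# the way a centralized policy engine might. All names are hypothetical.
POLICIES = [
    {"initiator": "cpu",    "target": "crypto_key_ram", "op": "read",  "allow": False},
    {"initiator": "crypto", "target": "crypto_key_ram", "op": "read",  "allow": True},
    {"initiator": "dma",    "target": "*",              "op": "write", "allow": False},
]

def check(initiator, target, op):
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in POLICIES:
        if (rule["initiator"] == initiator
                and rule["target"] in (target, "*")
                and rule["op"] == op):
            return rule["allow"]
    return False

print(check("crypto", "crypto_key_ram", "read"))   # True
print(check("cpu", "crypto_key_ram", "read"))      # False (key RAM is protected)
```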
This book introduces a new level of abstraction that closes the gap between the textual specification of embedded systems and the executable model at the Electronic System Level (ESL). Readers will be enabled to operate at this new, Formal Specification Level (FSL), using models which not only allow significant verification tasks in this early stage of the design flow, but also can be extracted semi-automatically from the textual specification in an interactive manner. The authors explain how to use these verification tasks to check conceptual properties, e.g. whether requirements are in conflict, as well as dynamic behavior, in terms of execution traces.
This book presents techniques necessary to predict cardiac arrhythmias, long before they occur, based on minimal ECG data. The authors describe the key information needed for automated ECG signal processing, including ECG signal pre-processing, feature extraction and classification. The adaptive and novel ECG processing techniques introduced in this book are highly effective and suitable for real-time implementation on ASICs.
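The processing chain the blurb names, pre-processing, feature extraction, and classification, can be pictured with a deliberately simple sketch. This is an illustrative toy, not the adaptive techniques the book introduces: a moving-average filter, naive R-peak detection, RR-interval features, and a rule-based classifier with made-up thresholds and sampling rate.

```python
# Illustrative toy ECG pipeline; window sizes and thresholds are assumptions.
def preprocess(signal, window=5):
    """Moving-average smoothing as a stand-in for the pre-processing stage."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_r_peaks(signal, threshold=0.6):
    """Crude peak detection: local maxima above a fraction of the global maximum."""
    limit = threshold * max(signal)
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > limit
            and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]

def rr_features(peaks, fs):
    """Feature extraction: mean and variance of RR intervals (in seconds)."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    if not rr:
        return float("inf"), 0.0          # not enough peaks to form an interval
    mean_rr = sum(rr) / len(rr)
    return mean_rr, sum((x - mean_rr) ** 2 for x in rr) / len(rr)

def classify(mean_rr, var_rr):
    """Rule-based stand-in for the classifier stage (thresholds are invented)."""
    if mean_rr < 0.5:                     # heart rate above 120 bpm
        return "tachycardia-like"
    if var_rr > 0.04:                     # highly irregular rhythm
        return "irregular"
    return "normal"

# Synthetic signal: a flat baseline with a crude R spike every 0.4 s (150 bpm).
fs = 250
ecg = [0.05] * (4 * fs)
for i in range(0, len(ecg), int(0.4 * fs)):
    ecg[i] = 1.0
peaks = detect_r_peaks(preprocess(ecg))
print(classify(*rr_features(peaks, fs)))  # -> "tachycardia-like"
```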
This book addresses the question of whether molecular primitives can be real alternatives to contemporary semiconductor devices, or effective supplements that greatly extend the possibilities of information technologies. Molecular primitives and circuitry for information processing devices are also discussed. Investigations into molecular-based computing devices began in the early 1970s, in the hope of increasing integration levels and processing speed. Real progress remained out of reach into the 1980s. Recently, however, important and promising results have been achieved. The demonstration of an operational 160-kilobit molecular electronic memory, patterned at 10¹¹ bits per square centimetre, at the end of the 1990s was a first, tentative step in this direction. Subsequent advances beyond these developments are presented and discussed. This work provides useful knowledge to anyone working in molecular-based information processing.
Covering system architecture, implementation and testing, this work is written by authors who are widely experienced with cellular radio in general and with GSM in particular. It provides a structured overview to help make sense of the GSM specifications and surveys competing cellular systems such as NADC and CDMA. Practical testing applications are explored in depth and compared with similar techniques used with analogue cellular systems.
This book explores near-threshold computing (NTC), a design space in which digital chips (processors) are run near the lowest voltage at which they can still operate. Readers will learn specific techniques for designing chips that are extremely robust, tolerating variability and remaining resilient against errors. Variability-aware voltage and frequency allocation schemes are presented that provide performance guarantees when moving toward near-threshold manycore chips. * Provides an introduction to near-threshold computing, equipping the reader with a variety of tools to face the challenges of the power/utilization wall; * Demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point; * Investigates how performance guarantees can be ensured when moving towards NTC manycores through variability-aware voltage and frequency allocation schemes.
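A minimal sketch of variability-aware operating-point selection, under assumptions invented for the example (the voltage/frequency/power table, the derating factor, and the function name are not from the book): each core gets the cheapest point whose derated frequency still meets its performance target.

```python
# Illustrative sketch; all numbers below are placeholders, not measured data.
OPERATING_POINTS = [          # (voltage in V, nominal frequency in GHz, relative power)
    (0.50, 0.4, 1.0),         # near-threshold point
    (0.65, 0.9, 2.6),
    (0.80, 1.5, 5.0),
    (1.00, 2.0, 9.0),
]

def allocate(required_ghz, derating):
    """Return the cheapest (V, f) point whose derated frequency meets the target.

    `derating` models per-core process variability, e.g. 0.9 means the core
    only reaches 90% of the nominal frequency at that voltage.
    """
    feasible = [(p, v, f) for (v, f, p) in OPERATING_POINTS if f * derating >= required_ghz]
    if not feasible:
        raise ValueError("no operating point meets the performance target")
    power, volt, freq = min(feasible)      # lowest relative power wins
    return volt, freq

# Example: a slow core (derating 0.85) must still deliver 0.3 GHz.
print(allocate(0.3, 0.85))    # -> (0.5, 0.4): the near-threshold point suffices
```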
This book provides a comprehensive overview of both theoretical and pragmatic aspects of resource-allocation and scheduling in multiprocessor and multicore hard-real-time systems. The authors derive new, abstract models of real-time tasks that capture accurately the salient features of real application systems that are to be implemented on multiprocessor platforms, and identify rules for mapping application systems onto the most appropriate models. New run-time multiprocessor scheduling algorithms are presented, which are demonstrably better than those currently used, both in terms of run-time efficiency and tractability of off-line analysis. Readers will benefit from a new design and analysis framework for multiprocessor real-time systems, which will translate into a significantly enhanced ability to provide formally verified, safety-critical real-time systems at a significantly lower cost.
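For readers new to the area, one classical baseline in multiprocessor real-time scheduling (not the new algorithms this book presents) is partitioned EDF with first-fit bin packing, sketched below; the task set in the example is made up.

```python
# Classical baseline sketch: first-fit-decreasing partitioning of
# implicit-deadline periodic tasks onto identical processors, using the
# EDF schedulability condition U <= 1 per core.
def first_fit_partition(tasks, num_cpus):
    """tasks: list of (wcet, period); returns per-CPU task lists, or None if infeasible."""
    cpus = [[] for _ in range(num_cpus)]
    load = [0.0] * num_cpus
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for i in range(num_cpus):
            if load[i] + u <= 1.0:          # EDF keeps a single core schedulable
                cpus[i].append((wcet, period))
                load[i] += u
                break
        else:
            return None                     # this task did not fit on any core
    return cpus

# Example task set: (worst-case execution time, period) in milliseconds.
print(first_fit_partition([(2, 5), (3, 10), (4, 8), (1, 4)], num_cpus=2))
```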
This book addresses Software-Defined Radio (SDR) baseband processing from the computer architecture point of view, providing a detailed exploration of different computing platforms by classifying different approaches, highlighting the common features related to SDR requirements, and showing the pros and cons of the proposed solutions. It covers architectures exploiting parallelism by extending the single-processor environment (such as VLIW, SIMD, and TTA approaches), multi-core platforms distributing the computation to either a homogeneous array or a set of specialized heterogeneous processors, and architectures exploiting fine-grained, coarse-grained, or hybrid reconfigurability.
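To illustrate the kind of baseband kernel such platforms must parallelize (a toy under stated assumptions, not an example from the book), here is the same FIR filter written once as a scalar loop and once in a data-parallel, SIMD-like formulation; the filter coefficients are placeholders.

```python
# Illustrative SDR-style kernel: FIR filtering, scalar vs. data-parallel form.
import numpy as np

taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # placeholder filter coefficients

def fir_scalar(samples):
    """Scalar reference: one multiply-accumulate at a time."""
    out = []
    for i in range(len(samples) - len(taps) + 1):
        acc = 0.0
        for k, c in enumerate(taps):
            acc += c * samples[i + k]
        out.append(acc)
    return out

def fir_vectorized(samples):
    """Data-parallel form: each tap contributes a shifted, scaled vector."""
    x = np.asarray(samples, dtype=float)
    n = len(x) - len(taps) + 1
    return sum(c * x[k:k + n] for k, c in enumerate(taps))

sig = np.sin(np.linspace(0, 6.28, 32))
print(np.allclose(fir_scalar(sig), fir_vectorized(sig)))   # -> True
```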
This thesis takes an empirical approach to understanding the behavior of, and interactions between, the two main components of reinforcement learning: the learning algorithm and the functional representation of learned knowledge. The author studies these entities using design-of-experiments methods not commonly employed to study machine learning. The results outlined in this work provide insight into what enables, and what affects, successful reinforcement learning implementations, so that this learning method can be applied to more challenging problems.
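In their simplest textbook form, the two components the thesis examines can be shown together: Q-learning as the learning algorithm and a plain table as the functional representation. The environment, parameters, and names below are illustrative assumptions, not the author's experimental setup.

```python
# Tabular Q-learning sketch: the table q is the "functional representation",
# the update rule is the "learning algorithm". Toy environment only.
import random

def q_learning(n_states, n_actions, step, episodes=300, alpha=0.1, gamma=0.95, eps=0.1):
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.randrange(n_actions)            # explore
            else:
                best = max(q[s])
                a = random.choice([i for i, v in enumerate(q[s]) if v == best])
            s2, reward, done = step(s, a)
            # Move Q(s, a) toward the bootstrapped target reward + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def chain_step(s, a, n=5):
    """Toy chain environment: action 1 moves right; the rightmost state is the goal."""
    s2 = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n - 1 else 0.0), s2 == n - 1

q = q_learning(n_states=5, n_actions=2, step=chain_step)
print([row.index(max(row)) for row in q])     # learned policy; mostly 1 ("go right")
```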
This book describes the state of the art of industrial and academic research in the architectural design of heterogeneous, multi/many-core processors. The authors describe methods and tools to enable next-generation embedded and high-performance heterogeneous processors to confront cost-effectively the inevitable variations by providing Dependable-Performance: correct functionality and timing guarantees throughout the expected lifetime of a platform under thermal, power, and energy constraints. Various aspects of the reliability problem are discussed, at both the circuit and architecture level, the intelligent selection of knobs and monitors in multicore platforms, and systematic design methodologies. The authors demonstrate how new techniques have been applied in real case studies from different application domains and report on the results and conclusions of those experiments. Enables readers to develop performance-dependable heterogeneous multi/many-core architectures; Describes system software designs that support high-performance dependability requirements; Discusses and analyzes low-level methodologies to trade off conflicting metrics, i.e. power, performance, reliability, and thermal management; Includes new application design guidelines to improve performance dependability.
This book provides readers with insight into an alternative approach for enhancing the reliability, security, and low-power features of integrated circuit designs, related to transient faults, hardware Trojans, and power consumption. The authors explain how the addition of integrated sensors enables the detection of ionizing particles and how this information can be processed at a higher layer. The discussion also includes a variety of applications, such as the detection of hardware Trojans and fault attacks, and how sensors can operate to provide different body bias levels and reduce power costs. Readers can benefit from these sensor-based approaches through designs with fast response time, non-intrusive integration at gate level, and reasonable design costs.
This book provides a unified treatment of Flip-Flop design and selection in nanometer CMOS VLSI systems. The design aspects related to the energy-delay tradeoff in Flip-Flops are discussed, including their energy-optimal selection according to the targeted application, and the detailed circuit design in nanometer CMOS VLSI systems. Design strategies are derived in a coherent framework that explicitly includes nanometer effects, including leakage, layout parasitics, and process/voltage/temperature variations, as the main advances over the existing body of work in the field. The related design tradeoffs are explored in a wide range of applications and the related energy-performance targets. A wide range of existing and recently proposed Flip-Flop topologies are discussed. Theoretical foundations are provided to set the stage for the derivation of design guidelines, and emphasis is placed on practical aspects and consequences of the presented results. Analytical models and derivations are introduced when needed to gain insight into the inter-dependence of design parameters under practical constraints. This book serves as a valuable reference for practicing engineers working in the VLSI design area, and as a textbook for senior undergraduate, graduate, and postgraduate students (already familiar with digital circuits and timing).
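The energy-optimal selection idea can be sketched in a few lines; the topology names and the energy and delay figures below are invented placeholders, not data from the book: pick the lowest-energy flip-flop that meets a clock-to-output delay target, and use the energy-delay product as the usual figure of merit for ranking.

```python
# Illustrative sketch of energy-optimal flip-flop selection; numbers are made up.
FLIP_FLOPS = {                 # name: (energy per cycle in fJ, CK->Q delay in ps)
    "transmission-gate FF": (4.0, 110.0),
    "sense-amplifier FF":   (9.0,  65.0),
    "pulsed-latch":         (6.0,  80.0),
}

def select_ff(max_delay_ps):
    """Return the lowest-energy topology whose delay meets the target, or None."""
    feasible = {n: (e, d) for n, (e, d) in FLIP_FLOPS.items() if d <= max_delay_ps}
    if not feasible:
        return None
    return min(feasible, key=lambda n: feasible[n][0])

def edp_ranking():
    """Rank topologies by energy-delay product, the usual tradeoff metric."""
    return sorted(FLIP_FLOPS, key=lambda n: FLIP_FLOPS[n][0] * FLIP_FLOPS[n][1])

print(select_ff(90.0))   # high-speed path: "pulsed-latch" (lowest energy under 90 ps)
print(edp_ranking())
```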
This monograph condenses the relevant and pertinent literature on blanket and selective CVD of tungsten (W) into a single manageable volume. The book supplies the reader with the necessary background to bring up, fine tune, and successfully maintain a CVD-W process in a production set-up. Materials deposition chemistry, equipment, process technology, developments, and applications are described.
This book proposes a synergistic framework to help IP vendors protect hardware IP privacy and integrity from design, optimization, and evaluation perspectives. The proposed framework consists of five interacting components that directly target the primary IP violations. All five algorithms are developed based on rigorous mathematical modeling of the primary IP violations and focus on different stages of IC design; they can be combined to provide a formal security guarantee.
This book explains how 3D chip stacks promise to increase the level of on-chip integration and enable new heterogeneous semiconductor devices that combine chips of different integration technologies (incl. sensors) in a single package of the smallest possible size. The authors focus on heterogeneous 3D integration, addressing some of the most important challenges in this emerging technology, including contactless, optics-based, and carbon-nanotube-based 3D integration, as well as signal-integrity and thermal management issues in copper-based 3D integration. Coverage also includes the 3D heterogeneous integration of power sources, photonic devices, and non-volatile memories based on new materials systems.
This volume is the first ever collection devoted to the field of proof-theoretic semantics. Contributions address topics including the systematics of introduction and elimination rules and proofs of normalization, the categorial characterization of deductions, the relation between Heyting's and Gentzen's approaches to meaning, knowability paradoxes, proof-theoretic foundations of set theory, Dummett's justification of logical laws, Kreisel's theory of constructions, paradoxical reasoning, and the defence of model theory. The field of proof-theoretic semantics has existed for almost 50 years, but the term itself was proposed by Schroeder-Heister in the 1980s. Proof-theoretic semantics explains the meaning of linguistic expressions in general and of logical constants in particular in terms of the notion of proof. This volume emerges from presentations at the Second International Conference on Proof-Theoretic Semantics in Tübingen in 2013, where contributing authors were asked to provide a self-contained description and analysis of a significant research question in this area. The contributions are representative of the field and should be of interest to logicians, philosophers, and mathematicians alike.
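For readers unfamiliar with the terminology, the introduction and elimination rules mentioned above are natural-deduction rules of the following standard textbook shape (shown here for conjunction only; this is general background, not a result from the volume):

```latex
% Standard natural-deduction rules for conjunction (textbook material):
\[
\frac{A \qquad B}{A \wedge B}\;(\wedge\mathrm{I})
\qquad
\frac{A \wedge B}{A}\;(\wedge\mathrm{E}_1)
\qquad
\frac{A \wedge B}{B}\;(\wedge\mathrm{E}_2)
\]
```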
This book introduces readers to the most advanced research results on Design for Manufacturability (DFM) with multiple patterning lithography (MPL) and electron beam lithography (EBL). The authors describe in detail a set of algorithms/methodologies to resolve issues in modern design for manufacturability problems with advanced lithography. Unlike books that discuss DFM from the product level or physical manufacturing level, this book describes DFM solutions from a circuit design level, such that most of the critical problems can be formulated and solved through combinatorial algorithms.
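One well-known example of the combinatorial, circuit-level formulation the blurb refers to is double-patterning layout decomposition cast as graph two-coloring; the sketch below uses a deliberately crude distance-based conflict model and made-up coordinates, so it only illustrates the formulation, not any algorithm from the book.

```python
# Double-patterning decomposition as two-coloring: features closer than the
# minimum same-mask spacing must go on different masks; report a conflict when
# the conflict graph is not bipartite. The geometry model here is a toy.
from collections import deque

def decompose(features, min_spacing):
    """features: list of (x, y) feature centers; returns a mask index per feature or None."""
    n = len(features)

    def too_close(a, b):
        (x1, y1), (x2, y2) = features[a], features[b]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < min_spacing

    edges = [[j for j in range(n) if j != i and too_close(i, j)] for i in range(n)]
    mask = [None] * n
    for start in range(n):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]       # assign the opposite mask
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None                 # odd cycle: coloring conflict
    return mask

print(decompose([(0, 0), (1, 0), (2, 0), (10, 10)], min_spacing=1.5))
# -> [0, 1, 0, 0]: the isolated feature can go on either mask
```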
This book provides a single-source reference to the state of the art of high-level programming models and compilation tool-chains for embedded system platforms. The authors address challenges faced by programmers developing software to implement parallel applications in embedded systems, where very often they are forced to rewrite sequential programs into parallel software, taking into account all the low-level features and peculiarities of the underlying platforms. Readers will benefit from these authors' approach, which takes into account both the application requirements and the platform specificities of various embedded systems from different industries. Parallel programming tool-chains are described that take as input parameters both the application and the platform model, then determine relevant transformations and mapping decisions on the concrete platform, minimizing user intervention and hiding the difficulties related to the correct and efficient use of the memory hierarchy and low-level code generation.
You may like...
Edsger Wybe Dijkstra - His Life, Work… by Krzysztof R. Apt, Tony Hoare (Hardcover, R3,075)
Grammatical and Syntactical Approaches… by Juhyun Lee, Michael J. Ostwald (Hardcover, R5,608)
Cyber-Physical Systems for Social… by Maya Dimitrova, Hiroaki Wagatsuma (Hardcover, R6,896)
Architectural Wireless Networks… by Santosh Kumar Das, Sourav Samanta, … (Hardcover, R4,915)
Novel Approaches to Information Systems… by Naveen Prakash, Deepika Prakash (Hardcover, R6,253)