This book describes the integrated circuit supply-chain flow and discusses security issues across the flow that can undermine the trustworthiness of the final design. The author discusses and analyzes the complexity of the flow, along with the vulnerability of digital circuits to malicious modifications (i.e. hardware Trojans) at the register-transfer, gate, and layout levels. Various metrics are discussed to quantify circuit vulnerabilities to hardware Trojans at different levels. Readers are introduced to design techniques for preventing hardware Trojan insertion and for facilitating hardware Trojan detection. Trusted testing is also discussed, enabling design trustworthiness at different steps of the integrated circuit design flow. Coverage also includes hardware Trojans in mixed-signal circuits.
This book introduces the techniques, vocabulary, currently available hardware architectures, and programming languages that provide the basic concepts of parallel computing. In the future, we can expect to see massively parallel teraflop machines, supported by gigabit networks that allow grand-challenge problems to be solved by using several supercomputers and parallel machines concurrently.
Creativity in Computing and DataFlow Supercomputing, the latest release in the Advances in Computers series published since 1960, presents detailed coverage of innovations in computer hardware, software, theory, design, and applications. In addition, it provides contributors with a medium in which they can explore topics in greater depth and breadth than journal articles typically allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field.
This book is based on the 18 tutorials presented during the 26th workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, with specific contributions focusing on hybrid ADCs, smart sensors for the IoT, and sub-1V and advanced-node analog circuit design. The book serves as a valuable reference on the state of the art for anyone involved in analog circuit research and development.
Our society continues to depend upon systems that are built in ways that leave them inflexible and intolerant of change. There is therefore an urgent need to investigate innovations and approaches to the management of adaptive and dependable systems. These studies are usually carried out through the design, development, and evaluation of techniques and models for structuring computer systems as adaptive systems. Innovations and Approaches for Resilient and Adaptive Systems is a comprehensive collection of knowledge on advancing the notions and models of adaptive and dependable systems. The book aims to raise awareness of the role of adaptability and resilience in system environments among researchers, practitioners, educators, and professionals alike.
This book focuses on two of the most relevant problems in power management on multicore and manycore systems: one part of the book addresses maximizing or optimizing computational performance under power or thermal constraints, while the other addresses minimizing energy consumption under performance (or real-time) constraints.
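As a flavor of the trade-off this book studies, here is a minimal sketch (not taken from the book; the frequency/voltage pairs, task size, and deadline are made-up illustrative values) of picking the operating point that minimizes dynamic energy, modeled as E ~ C*V^2*(cycles of work), while still meeting a deadline:

```c
/* Minimal sketch (not from the book): pick the DVFS operating point that
 * minimizes dynamic energy E ~ C * V^2 * W (W cycles of work) while still
 * meeting a deadline D. Frequency/voltage pairs are made-up illustrative values. */
#include <stdio.h>

struct op_point { double freq_ghz; double volt; };

int main(void) {
    const struct op_point pts[] = {
        {0.8, 0.70}, {1.2, 0.85}, {1.6, 1.00}, {2.0, 1.15}
    };
    const double work_gcycles = 3.0;   /* task length: 3e9 cycles (assumed) */
    const double deadline_s   = 2.5;   /* deadline in seconds (assumed)     */
    const double cap          = 1.0;   /* effective capacitance (arbitrary units) */

    int best = -1;
    double best_energy = 0.0;
    for (int i = 0; i < 4; i++) {
        double exec_time = work_gcycles / pts[i].freq_ghz;   /* seconds */
        if (exec_time > deadline_s) continue;                /* misses deadline */
        double energy = cap * pts[i].volt * pts[i].volt * work_gcycles;
        if (best < 0 || energy < best_energy) { best = i; best_energy = energy; }
    }
    if (best >= 0)
        printf("run at %.1f GHz / %.2f V: E = %.2f (arbitrary units)\n",
               pts[best].freq_ghz, pts[best].volt, best_energy);
    else
        printf("no operating point meets the deadline\n");
    return 0;
}
```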
In the last few years, courses on parallel computation have been developed and offered in many institutions in the UK, Europe, and the US in recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the author's lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared-memory machines, and finally to distributed-memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, it covers subjects including linear algebra, the fast Fourier transform, and Monte-Carlo simulations, with examples in C and, in some cases, Fortran. The book is also ideal for practitioners and programmers.
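In the spirit of the shared-memory C examples this blurb describes (the following sketch is not taken from the book), a Monte-Carlo estimate of pi parallelized with OpenMP; compile with `cc -fopenmp`:

```c
/* Minimal sketch, in the spirit of the book's shared-memory C examples
 * (not taken from the book): Monte-Carlo estimate of pi with OpenMP. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const long n = 10000000;
    long hits = 0;

    #pragma omp parallel reduction(+:hits)
    {
        /* per-thread seed so rand_r() streams do not collide */
        unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();
        #pragma omp for
        for (long i = 0; i < n; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0) hits++;   /* point falls inside quarter circle */
        }
    }
    printf("pi ~ %.6f\n", 4.0 * (double)hits / (double)n);
    return 0;
}
```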
This book examines some of the underlying processes behind different forms of information management, including how we store information in our brains, the impact of new technologies such as computers and robots on our efficiency in storing information, and how information is stored in families and in society. The editors brought together experts from a variety of disciplines. While it is generally agreed that information reduces uncertainties and that the ability to store it safely is of vital importance, these authors are open to different meanings of "information": computer science considers the bit the basic block of information; neuroscience emphasizes information as sensory inputs that are processed and transformed in the brain; theories in psychology focus more on individual learning and the acquisition of knowledge; and sociology looks at how interpersonal processes within groups, or society itself, come to the fore. The book will be of value to researchers and students in the areas of information theory, artificial intelligence, and computational neuroscience.
This book provides a comprehensive analysis of the most important topics in parallel computation. It is written so that it may be used as a self-study guide to the field, and researchers in parallel computing will find it a useful reference for many years to come. The first half of the book consists of an introduction to many fundamental issues in parallel computing. The second half provides lists of P-complete and open problems. These lists will have lasting value to researchers in both industry and academia. The lists of problems, with their corresponding remarks, the thorough index, and the hundreds of references add to the exceptional value of this resource. While the exciting field of parallel computation continues to expand rapidly, this book serves as a guide to research done through 1994 and also describes the fundamental concepts that new workers will need to know in coming years. It is intended for anyone interested in parallel computing, including senior level undergraduate students, graduate students, faculty, and people in industry. As an essential reference, the book will be needed in all academic libraries.
This book provides readers with a comprehensive introduction to the formal verification of hardware and software. World-leading experts from the domain of formal proof techniques show the latest developments starting from electronic system level (ESL) descriptions down to the register transfer level (RTL). The authors demonstrate at different abstraction layers how formal methods can help to ensure functional correctness. Coverage includes the latest academic research results, as well as descriptions of industrial tools and case studies.
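To give a concrete baseline for what such tools automate, here is a minimal sketch (not from the book): a brute-force equivalence check between a behavioral "specification" adder and a gate-level "implementation", enumerating all inputs where industrial tools would instead use SAT/SMT on ESL or RTL descriptions:

```c
/* Minimal sketch (not from the book): the simplest possible equivalence
 * check, a brute-force miter over all input assignments, comparing a
 * behavioral 2-bit adder against a gate-level ripple-carry version. */
#include <stdio.h>

/* "specification": plain addition, masked to 3 result bits */
static unsigned spec(unsigned a, unsigned b) { return (a + b) & 7u; }

/* "implementation": ripple-carry adder built from gate primitives */
static unsigned impl(unsigned a, unsigned b) {
    unsigned a0 = a & 1, a1 = (a >> 1) & 1, b0 = b & 1, b1 = (b >> 1) & 1;
    unsigned s0 = a0 ^ b0, c0 = a0 & b0;
    unsigned s1 = a1 ^ b1 ^ c0;
    unsigned c1 = (a1 & b1) | (c0 & (a1 ^ b1));
    return s0 | (s1 << 1) | (c1 << 2);
}

int main(void) {
    for (unsigned a = 0; a < 4; a++)
        for (unsigned b = 0; b < 4; b++)
            if (spec(a, b) != impl(a, b)) {
                printf("counterexample: a=%u b=%u\n", a, b);
                return 1;
            }
    printf("equivalent on all inputs\n");
    return 0;
}
```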
This book describes state-of-the-art techniques for designing real-time computer systems. The author shows how to estimate precisely the effect of cache architecture on the execution time of a program, how to dispatch workloads on multicore processors so as to optimize resources while meeting deadline constraints, and how to use closed-form mathematical approaches to characterize highly variable workloads and their interaction in a networked environment. Readers will learn how to deal with unpredictable timing behaviors of computer systems at different levels of system granularity and abstraction.
This book discusses the design and performance analysis of SDRAM controllers that cater to both real-time and best-effort applications, i.e. mixed-time-criticality memory controllers. The authors describe the state of the art, and then focus on an architecture template for reconfigurable memory controllers that effectively addresses the quickly evolving set of SDRAM standards, in terms of worst-case timing and power analysis as well as implementation. A prototype implementation of the controller, in SystemC and in synthesizable VHDL for an FPGA development board, serves as a proof of concept of the architecture template.
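As a back-of-the-envelope illustration of the worst-case timing analysis this blurb mentions (this is not the book's analysis; the DDR3-1600-style timing values are assumed, and a real analysis must also cover refresh, bus turnaround, and arbitration among requestors):

```c
/* Minimal sketch (not the book's analysis): worst-case latency for a single
 * SDRAM read that targets a bank with the wrong row open, so it must pay
 * precharge (tRP) + activate (tRCD) + CAS latency (tCL).
 * Values are illustrative DDR3-1600-style numbers, not from a datasheet. */
#include <stdio.h>

int main(void) {
    const int tRP = 11, tRCD = 11, tCL = 11;   /* in memory-clock cycles (assumed) */
    const double clk_mhz = 800.0;              /* DDR3-1600 I/O clock (assumed)    */

    int cycles = tRP + tRCD + tCL;
    printf("worst-case single-read latency: %d cycles = %.2f ns\n",
           cycles, cycles * 1000.0 / clk_mhz);
    return 0;
}
```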
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. In industrial companies, however, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field, succeeding the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing." The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines that are less cumbersome to program, more convenient programming models, advanced programming languages, especially more sophisticated programming tools, and also algorithms and applications.
This book describes novel hardware security and microfluidic biochip design methodologies to protect against tampering attacks in cyberphysical microfluidic biochips (CPMBs). It also provides a general overview of this nascent area of research, making it a vital resource for practitioners in the field. The book shows how hardware-based countermeasures and design innovations can be a simple and effective last line of defense, demonstrating that it is no longer justifiable to ignore security and trust in the design phase of biochips.
This volume gives an overview of the state-of-the-art with respect to the development of all types of parallel computers and their application to a wide range of problem areas.
This book discusses analysis, design and optimization techniques for streaming multiprocessor systems, while satisfying a given area, performance, and energy budget. The authors describe design flows for both application-specific and general purpose streaming systems. Coverage also includes the use of machine learning for thermal optimization at run-time, when an application is being executed. The design flow described in this book extends to thermal and energy optimization with multiple applications running sequentially and concurrently.
This book offers readers an easy introduction to quantum computing as well as to the design of corresponding devices. The authors cover several design tasks that are important for quantum computing and introduce corresponding solutions. A special feature of the book is that these tasks and solutions are explicitly discussed from a design automation perspective, i.e., utilizing clever algorithms and data structures that have been developed by the design automation community for conventional logic (i.e., for electronic devices and systems) and are now applied to this new technology. In this way, relevant design tasks can be conducted much more efficiently than before, leading to improvements of several orders of magnitude (with respect to runtime and other design objectives). The book:
- Describes the current state of the art for designing quantum circuits, for simulating them, and for mapping them to real hardware;
- Provides a first comprehensive introduction to design automation for quantum computing that tackles practically relevant tasks;
- Targets the quantum computing community as well as the design automation community, showing both perspectives on quantum computing and what impressive improvements are possible when combining the knowledge of both communities.
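As a point of reference for the simulation task listed above, here is a minimal sketch (not from the book) of the brute-force state-vector baseline that design-automation methods improve upon: simulating H followed by CNOT to prepare a Bell state.

```c
/* Minimal sketch (not from the book): simulating a 2-qubit circuit by direct
 * state-vector manipulation. The circuit H(q0); CNOT(q0 -> q1) prepares the
 * Bell state (|00> + |11>)/sqrt(2). */
#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void) {
    double complex amp[4] = {1.0, 0.0, 0.0, 0.0};   /* start in |00> */
    const double s = 1.0 / sqrt(2.0);

    /* Hadamard on qubit 0 (least-significant bit): mix index pairs (i, i+1). */
    for (int i = 0; i < 4; i += 2) {
        double complex a = amp[i], b = amp[i + 1];
        amp[i]     = s * (a + b);
        amp[i + 1] = s * (a - b);
    }
    /* CNOT, control = qubit 0, target = qubit 1: where bit 0 is set, flip
     * bit 1, i.e. swap amplitudes of |01> (index 1) and |11> (index 3). */
    double complex t = amp[1]; amp[1] = amp[3]; amp[3] = t;

    for (int i = 0; i < 4; i++)
        printf("|%d%d>: %.3f%+.3fi\n", (i >> 1) & 1, i & 1,
               creal(amp[i]), cimag(amp[i]));
    return 0;
}
```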
For courses in engineering and technical management. System architecture is the study of early decision making in complex systems. This text teaches how to capture experience and analysis about early system decisions, and how to choose architectures that meet stakeholder needs, integrate easily, and evolve flexibly. With case studies written by leading practitioners, from hybrid cars to communications networks to aircraft, this text showcases the science and art of system architecture.
Covering system architecture, implementation and testing, this work is written by authors who are widely experienced with cellular radio in general and with GSM in particular. It provides a structured overview to help make sense of the GSM specifications and surveys competing cellular systems such as NADC and CDMA. Practical testing applications are explored in depth and compared with similar techniques used with analogue cellular systems.
This book introduces a new level of abstraction that closes the gap between the textual specification of embedded systems and the executable model at the Electronic System Level (ESL). Readers will learn to operate at this new Formal Specification Level (FSL), using models that not only allow significant verification tasks in this early stage of the design flow, but can also be extracted semi-automatically from the textual specification in an interactive manner. The authors explain how to use these verification tasks to check conceptual properties, e.g. whether requirements are in conflict, as well as dynamic behavior, in terms of execution traces.
This book addresses the question of whether molecular primitives can prove to be real alternatives to contemporary semiconductor technology, or effective supplements that greatly extend the possibilities of information technologies. Molecular primitives and circuitry for information processing devices are also discussed. Investigations into molecular-based computing devices were initiated in the early 1970s in the hope of increasing integration levels and processing speed, but real progress proved unfeasible into the 1980s. Recently, however, important and promising results have been achieved: the development at the end of the 1990s of an operational 160-kilobit molecular electronic memory, patterned at 10^11 bits per square centimeter, was a first, timid step in the further development of molecular information processing. Subsequent advances beyond these developments are presented and discussed. This work provides useful knowledge to anyone working in molecular-based information processing.
This book presents techniques necessary to predict cardiac arrhythmias, long before they occur, based on minimal ECG data. The authors describe the key information needed for automated ECG signal processing, including ECG signal pre-processing, feature extraction and classification. The adaptive and novel ECG processing techniques introduced in this book are highly effective and suitable for real-time implementation on ASICs.
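For readers unfamiliar with the pipeline this blurb names, the toy sketch below (not the book's adaptive techniques; all samples and thresholds are made up) shows its classic shape: smoothing as a pre-processing stage, then threshold detection with a refractory period as a crude classification stage:

```c
/* Minimal sketch (not the book's techniques): a toy ECG pipeline, with a
 * 3-tap moving average as pre-processing and threshold-based R-peak
 * detection with a refractory window. Samples and parameters are made up. */
#include <stdio.h>

#define N 16

int main(void) {
    /* toy "ECG" samples (arbitrary units; peaks near indices 4 and 11) */
    const double x[N] = {0.0, 0.1, 0.0, 0.2, 1.8, 0.3, 0.1, 0.0,
                         0.1, 0.0, 0.2, 1.9, 0.2, 0.1, 0.0, 0.1};
    double y[N];

    /* pre-processing: 3-tap moving average for noise smoothing */
    for (int i = 0; i < N; i++) {
        double acc = 0.0; int k = 0;
        for (int j = i - 1; j <= i + 1; j++)
            if (j >= 0 && j < N) { acc += x[j]; k++; }
        y[i] = acc / k;
    }

    /* detection: threshold plus a refractory window of 3 samples */
    const double thr = 0.5;
    int last = -100;
    for (int i = 0; i < N; i++)
        if (y[i] > thr && i - last > 3) {
            printf("R-peak near sample %d\n", i);
            last = i;
        }
    return 0;
}
```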
This book provides a comprehensive overview of both theoretical and pragmatic aspects of resource-allocation and scheduling in multiprocessor and multicore hard-real-time systems. The authors derive new, abstract models of real-time tasks that capture accurately the salient features of real application systems that are to be implemented on multiprocessor platforms, and identify rules for mapping application systems onto the most appropriate models. New run-time multiprocessor scheduling algorithms are presented, which are demonstrably better than those currently used, both in terms of run-time efficiency and tractability of off-line analysis. Readers will benefit from a new design and analysis framework for multiprocessor real-time systems, which will translate into a significantly enhanced ability to provide formally verified, safety-critical real-time systems at a significantly lower cost.
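By way of contrast with the book's new algorithms (which are not reproduced here), the sketch below shows a classic baseline for multiprocessor real-time scheduling: partitioned fixed-priority scheduling via first-fit bin packing, accepting a task on a core only if total utilization stays within the Liu-Layland rate-monotonic bound n*(2^(1/n) - 1). The task set is illustrative.

```c
/* Minimal sketch (classic baseline, not one of the book's algorithms):
 * partitioned rate-monotonic scheduling by first-fit bin packing with the
 * Liu-Layland per-core utilization bound. */
#include <stdio.h>
#include <math.h>

#define M 2          /* processors */
#define T 5          /* tasks      */

int main(void) {
    const double util[T] = {0.45, 0.30, 0.25, 0.40, 0.20}; /* C_i / T_i (assumed) */
    double load[M] = {0};
    int count[M] = {0};

    for (int i = 0; i < T; i++) {
        int placed = 0;
        for (int m = 0; m < M && !placed; m++) {
            int n = count[m] + 1;
            double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* Liu-Layland bound */
            if (load[m] + util[i] <= bound) {
                load[m] += util[i]; count[m] = n; placed = 1;
                printf("task %d (u=%.2f) -> core %d\n", i, util[i], m);
            }
        }
        if (!placed)
            printf("task %d (u=%.2f) unschedulable by this heuristic\n", i, util[i]);
    }
    return 0;
}
```

Running this with the task set above leaves one task unplaced, which is exactly the kind of pessimism that motivates the better run-time algorithms and analysis frameworks the book presents.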
This thesis takes an empirical approach to understanding the behavior of, and interactions between, the two main components of reinforcement learning: the learning algorithm and the functional representation of learned knowledge. The author studies these entities using design-of-experiments methods not commonly employed in the study of machine learning. The results outlined in this work provide insight into what enables, and what affects, successful reinforcement learning implementations, so that this learning method can be applied to more challenging problems.
This book explores near-threshold computing (NTC), a design space in which digital chips (processors) are run near the lowest possible supply voltage. Readers are given specific techniques to design chips that are extremely robust, tolerating variability and remaining resilient against errors. Variability-aware voltage and frequency allocation schemes are presented that provide performance guarantees when moving toward near-threshold manycore chips. The book:
- Provides an introduction to near-threshold computing, equipping the reader with a variety of tools to face the challenges of the power/utilization wall;
- Demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point;
- Investigates how performance guarantees can be ensured when moving towards NTC manycores through variability-aware voltage and frequency allocation schemes.
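To make the allocation problem concrete, here is a minimal sketch (not one of the book's schemes; the linear f(V) model, the per-core speed multipliers, and all numbers are assumed) that sweeps a shared near-threshold voltage and keeps the lowest-energy point whose aggregate throughput still meets a target under per-core variability:

```c
/* Minimal sketch (not the book's allocation schemes): sweep a shared
 * near-threshold supply voltage; per-core frequency multipliers model
 * process variability; energy/op is approximated as V^2. All values assumed. */
#include <stdio.h>

#define CORES 4

int main(void) {
    const double speed[CORES] = {1.00, 0.85, 0.95, 0.70}; /* variability (assumed) */
    const double target_gops = 6.0;     /* required aggregate throughput (assumed) */

    double best_v = -1.0, best_e = 0.0;
    for (double v = 0.40; v <= 1.00 + 1e-9; v += 0.01) {
        double f_nom = 4.0 * (v - 0.30);   /* toy f(V) in GHz, ~0 near threshold */
        double gops = 0.0;
        for (int c = 0; c < CORES; c++) gops += f_nom * speed[c];
        if (gops < target_gops) continue;  /* misses the throughput target */
        double e_per_op = v * v;           /* dynamic energy per op ~ V^2 */
        if (best_v < 0 || e_per_op < best_e) { best_v = v; best_e = e_per_op; }
    }
    if (best_v > 0)
        printf("lowest-energy feasible point: V = %.2f V (E/op ~ %.3f)\n",
               best_v, best_e);
    else
        printf("no feasible voltage in the swept range\n");
    return 0;
}
```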
You may like...
Handbook of Research on Natural… by Jyotsna Kumar Mandal, Somnath Mukhopadhyay, … (Hardcover) - R7,680 (Discovery Miles 76 800)
Do Wave Functions Jump? - Perspectives… by Valia Allori, Angelo Bassi, … (Hardcover) - R3,920 (Discovery Miles 39 200)
C++ How to Program: Horizon Edition by Harvey Deitel, Paul Deitel (Paperback) - R1,917 (Discovery Miles 19 170)
Validated Designs for Object-oriented… by John Fitzgerald, Peter Gorm Larsen, … (Hardcover) - R2,268 (Discovery Miles 22 680)