This comprehensive textbook on the field programmable gate array (FPGA) covers its history, fundamental knowledge, architectures, device technologies, computer-aided design technologies, design tools, examples of application, and future trends. Programmable logic devices represented by FPGAs have developed rapidly in recent years and have become key electronic devices used in most IT products. This book provides both complete introductions suitable for students and beginners, and high-level techniques useful for engineers and researchers in this field. Developed differently from conventional integrated circuits, the FPGA has unique structures, design methodologies, and application techniques. Because it can be programmed by users, the device can dramatically reduce the rising development cost of advanced semiconductor chips. The FPGA now drives the most advanced semiconductor processes and is an all-in-one platform combining memory, CPUs, and various peripheral interfaces. This book introduces the FPGA from various aspects for readers of different levels. Novice learners can acquire a fundamental knowledge of the FPGA, including its history, from Chapter 1, the first half of Chapter 2, and Chapter 4. Professionals who are already familiar with the device will gain a deeper understanding of its structures and design methodologies from Chapters 3 and 5. Chapters 6-8 also provide advanced techniques and cutting-edge applications and trends useful for professionals. Although the first parts are mainly suited to students, the advanced sections of the book will be valuable for professionals seeking an in-depth understanding of the FPGA to maximize the performance of the device.
This book discusses various aspects of cloud computing, including trust and fault-tolerance models within a multilayered cloud architecture. The authors present a variety of trust and fault models used in the cloud, comparing them based on their functionality and the layer in the cloud to which they respond. Various methods are discussed that can improve the performance of cloud architectures in terms of trust and fault-tolerance, while providing better performance and quality of service to users. The discussion also includes new algorithms that overcome drawbacks of existing methods, using a performance matrix for each functionality. This book provides readers with an overview of cloud computing and of how trust and faults in cloud datacenters affect the performance and quality of service assured to users. Discusses fundamental issues related to trust and fault-tolerance in cloud computing; Describes trust and fault management techniques in a multilayered cloud architecture to improve the security, reliability, and performance of the system; Includes methods to enhance power efficiency and network efficiency using trust- and fault-based resource allocation.
This book describes a circuit architecture for converting real analog signals into a digital format suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) Continuous-Time Sigma-Delta Modulation (CT-ΣΔM), has the potential to provide better digital data quality and achieve better data-rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
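For readers new to the topic, the noise-shaping idea behind sigma-delta conversion can be illustrated with a minimal first-order, discrete-time loop. This sketch is not taken from the book (whose subject is multi-stage continuous-time modulators); the input signal, sample count, and quantizer below are illustrative assumptions only.

```c
/* Minimal first-order discrete-time sigma-delta modulator sketch
 * (illustrative only; the book covers MASH continuous-time designs).
 * The integrator accumulates the error between the input and the
 * fed-back 1-bit output, pushing quantization noise to high
 * frequencies ("noise shaping"). */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    const int N = 64;        /* number of samples (arbitrary) */
    double v = 0.0;          /* integrator state */
    double y = 0.0;          /* previous 1-bit output, +1 or -1 */

    for (int n = 0; n < N; n++) {
        double x = 0.5 * sin(2.0 * PI * n / 32.0);  /* sampled "analog" input */
        v += x - y;                    /* integrate the error signal */
        y = (v >= 0.0) ? 1.0 : -1.0;   /* 1-bit quantizer */
        printf("%2d  x=% .3f  bit=% .0f\n", n, x, y);
    }
    return 0;
}
```

Averaging the 1-bit output stream over a window recovers an approximation of the input, which corresponds to the decimation step a real converter performs after the modulator.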
In Part I, the impact of an integro-differential operator on parity logic engines (PLEs) as a tool for scientific modeling from scratch is presented. Part II outlines the fuzzy structural modeling approach for building new linear and nonlinear dynamical causal forecasting systems in terms of fuzzy cognitive maps (FCMs). Part III introduces the new type of autogenetic algorithms (AGAs) to the field of evolutionary computing. Altogether, these PLEs, FCMs, and AGAs may serve as conceptual and computational power tools.
This state-of-the-art monograph presents a coherent survey of a variety of methods and systems for formal hardware verification. It emphasizes approaches that have matured into tools and systems usable for the actual verification of nontrivial circuits. All in all, the book is a representative and well-structured survey of the success and future potential of formal methods in proving the correctness of circuits. The various chapters describe the respective approaches, supplying theoretical foundations as well as taking into account the application viewpoint. By applying all methods and systems presented to the same set of IFIP WG10.5 hardware verification examples, a valuable and fair analysis of the strengths and weaknesses of the various approaches is given.
The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, exemplified by cache-coherency traffic and global memory access overhead in multiprocessors with a logically shared address space and physically distributed memory. This book presents partitioning and placement techniques for reducing that overhead. Such techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and used in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantages of ADP, data placement, cyclic partitioning, and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
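As a rough illustration of the kind of partitioning the book discusses (the ADP and CPR algorithms themselves are not reproduced here), the sketch below shows a hypothetical contiguous block partition of a one-dimensional iteration space; with a near-neighbor stencil, each processor then only exchanges the boundary elements it does not own.

```c
/* Hypothetical sketch: contiguous block partition of an iterative
 * data-parallel loop over N points among P processors. With a
 * near-neighbor (stencil) access pattern, each processor only needs
 * the single boundary element owned by each neighbor, which is the
 * kind of communication ADP-style partitioning tries to minimize. */
#include <stdio.h>

/* Compute the [lo, hi) index range owned by processor p. */
static void block_range(int N, int P, int p, int *lo, int *hi) {
    int base = N / P, rem = N % P;
    *lo = p * base + (p < rem ? p : rem);
    *hi = *lo + base + (p < rem ? 1 : 0);
}

int main(void) {
    const int N = 1000, P = 4;   /* problem size and processor count (arbitrary) */
    for (int p = 0; p < P; p++) {
        int lo, hi;
        block_range(N, P, p, &lo, &hi);
        /* Each processor updates a[lo..hi) and exchanges only a[lo-1]
         * and a[hi] with its neighbors between iterations. */
        printf("processor %d owns [%d, %d)\n", p, lo, hi);
    }
    return 0;
}
```

The machine-specific tradeoff the blurb mentions would enter when the per-point work varies: a purely balanced block split can then leave some processors idle, and a cyclic or skewed partition trades extra communication for better load balance.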
It gives me immense pleasure to introduce this timely handbook to the research and development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the IC in the 1950s by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting, and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer science and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospects of humanity. (It is hard to imagine life today without mobile phones or the Internet.) The success of SPS requires a well-concerted, integrated approach from multiple disciplines, such as device, design, and application.
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large: it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router, or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets, or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was previously mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came onto the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size, and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness, and insecurity, not to mention the management overhead. What does a system architect need to know? The insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
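To make that last distinction concrete, the sketch below contrasts the two programming models. It is not taken from the book: the MPI calls are standard, but the surrounding program and values are illustrative assumptions. In a shared-address-space (UMA/NUMA) machine any processor could simply read the variable; in a distributed-memory machine the owning process must send an explicit copy.

```c
/* Schematic contrast (not from the book): with no shared address
 * space, data must be moved explicitly between processors, e.g. via
 * message passing as sketched here with standard MPI calls. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double value = 0.0;
    if (rank == 0) {
        value = 42.0;  /* data owned by processor 0 */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* No globally accessible memory: rank 1 receives an explicit copy. */
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %.1f\n", value);
    }
    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper (e.g. mpicc) and launched with two processes (mpirun -np 2), rank 1 prints the value it received from rank 0.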
THE CONTEXT OF PARALLEL PROCESSING: The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements on the other.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally accepted parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively shape the current state of efforts to build parallel and distributed software development tools. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate-level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
Recently, a variety of results on the complexity status of the graph isomorphism problem have been obtained. These results belong to the so-called structural part of Complexity Theory. Our idea behind this book is to summarize such results, which might otherwise not be easily accessible in the literature, and also to give the reader an understanding of the aims and topics of Structural Complexity Theory in general. The text is basically self-contained; the only prerequisite for reading it is some elementary knowledge from Complexity Theory and Probability Theory. It can be used to teach a seminar or a monographic graduate course, but parts of it (especially Chapter 1) also provide a source of examples for a standard graduate course on Complexity Theory. Many people have helped us in different ways in the process of writing this book. Especially, we would like to thank V. Arvind, R.V. Book, E. Mayordomo, and the referee, who gave very constructive comments. This book project was made possible in particular by a DAAD grant in the "Acciones Integradas" program. The third author has been supported by the ESPRIT project ALCOM-II.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of European research in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial, and commercial success. The CEC has decided to continue to encourage manufacturers as well as research and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in Ispra (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects, and new hardware extensions based on the INMOS T9000 processor. In the second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, and particle physics simulation, as well as enhanced parallel and distributed numerical methods on the T.Node.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to traditional approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which has in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
This book provides a comprehensive survey of recent progress in the design and implementation of Networks-on-Chip. It addresses a wide spectrum of on-chip communication problems, ranging from the physical and network layers to the application layer. Specific topics that are explored in detail include packet routing, resource arbitration, error control/correction, application mapping, and communication scheduling. Additionally, a novel bi-directional communication channel NoC (BiNoC) architecture is described in detail. Written for practicing engineers in need of practical knowledge about the design and implementation of networks-on-chip; Includes tutorial-like details to introduce readers to a diverse range of NoC designs, as well as in-depth analysis for designers with NoC experience to explore advanced issues; Describes a variety of on-chip communication architectures, including a novel bi-directional communication channel NoC. From the Foreword: "Overall this book shows important advances over the state of the art that will affect future system design as well as R&D in tools and methods for NoC design. It represents an important reference point for both designers and electronic design automation researchers and developers." --Giovanni De Micheli
This book provides a comprehensive and consistent introduction to the Internet of Things. Hot topics, including the European privacy legislation GDPR and homomorphic encryption, are explained. For each topic, the reader gets a theoretical introduction and an overview, backed by programming examples. For demonstration, the authors use the IoT platform VICINITY, which is open-source, free, and offers leading standards for privacy. Presents readers with a coherent single-source introduction to the IoT; Introduces selected hot topics of the IoT, including GDPR (European legislation on data protection) and homomorphic encryption; Provides coding examples for most topics that allow readers to kick-start their own IoT applications, smart services, etc.
The most important use of computing in the future will be in the context of the global "digital convergence," where everything becomes digital and everything is inter-networked. Applications will be dominated by storage, search, retrieval, analysis, exchange, and updating of information in a wide variety of forms. Heavy demands will be placed on systems by many simultaneous requests. And, fundamentally, all this shall be delivered at much higher levels of dependability, integrity, and security. Increasingly, large parallel computing systems and networks are providing unique challenges to industry and academia in dependable computing, especially because of the higher failure rates intrinsic to these systems. The challenge in the last part of this decade is to build systems that are both inexpensive and highly available. A machine cluster built of commodity hardware parts, with each node running an OS instance and a set of applications extended to be fault-resilient, can satisfy the new stringent high-availability requirements. The focus of this book is to present recent techniques and methods for implementing fault-tolerant parallel and distributed computing systems. Section I, Fault-Tolerant Protocols, considers basic techniques for achieving fault tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication, static total causal ordering protocols, and a fail-aware datagram service that supports communications by time.
This book guides readers through the design of hardware architectures using VHDL for digital communication and image processing applications that require high-performance computing. It also describes all the related VHDL notions, such as the language itself, levels of abstraction, combinational vs. sequential logic, structural and behavioral description, digital circuit design, and finite state machines. Numerous examples make the concepts presented in the text easier to understand.
This textbook serves as an introduction to the subject of embedded systems design, with emphasis on integration of custom hardware components with software. The key problem addressed in the book is the following: how can an embedded systems designer strike a balance between flexibility and efficiency? The book describes how combining hardware design with software design leads to a solution to this important computer engineering problem. The book covers four topics in hardware/software codesign: fundamentals, the design space of custom architectures, the hardware/software interface, and application examples. The book comes with an associated design environment that helps the reader to perform experiments in hardware/software codesign. Each chapter also includes exercises and further reading suggestions.
This book presents high-/mixed-voltage analog and radio-frequency (RF) circuit techniques for developing low-cost multistandard wireless receivers in nanometer-scale CMOS processes. Key benefits of high-/mixed-voltage RF and analog CMOS circuits are explained, state-of-the-art examples are studied, and circuit solutions before and after voltage-conscious design are compared. Three real design examples are included, which demonstrate the feasibility of high-/mixed-voltage circuit techniques. Provides a valuable summary and real case studies of the state of the art in high-/mixed-voltage circuits and systems; Includes novel high-/mixed-voltage analog and RF circuit techniques, from concept to practice; Describes the first high-voltage-enabled mobile-TV RF front-end in 90nm CMOS and the first mixed-voltage full-band mobile-TV receiver in 65nm CMOS; Demonstrates the feasibility of high-/mixed-voltage circuit techniques with real design examples.
This book provides the most comprehensive and consistent survey of the field of IC design for Biological Sensing and Processing. The authors describe a multitude of applications that require custom CMOS IC design and highlight the techniques in analog and mixed-signal circuit design that potentially can cross boundaries and benefit the very wide community of bio-medical engineers.
This book is intended to serve as a textbook for a second course in the implementation (i.e., microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
This book describes the algorithms and computer architectures used to create and analyze photographs in modern digital cameras. It also puts the capabilities of digital cameras into context for applications in art, entertainment, and video analysis. The author discusses the entire range of topics relevant to digital camera design, including image processing, computer vision, image sensors, system-on-chip, and optics, while clearly describing the interactions between design decisions at these different levels of abstraction. Readers will benefit from this comprehensive view of digital camera design, describing the range of algorithms used to compose, enhance, and analyze images, as well as the characteristics of optics, image sensors, and computing platforms that determine the physical limits of image capture and computing. The content is designed to be used by algorithm designers and does not require an extensive background in optics or electronics.
This book provides an overview of current Intellectual Property (IP) based System-on-Chip (SoC) design methodology and highlights how the security of IP can be compromised at various stages in the overall SoC design-fabrication-deployment cycle. Readers will gain a comprehensive understanding of the security vulnerabilities of different types of IPs. The book enables readers to overcome these vulnerabilities through an efficient combination of proactive countermeasures and design-for-security solutions, as well as a wide variety of IP security and trust assessment and validation techniques. It serves as a single source of reference for system designers and practitioners designing secure, reliable, and trustworthy SoCs.