This book describes a new fuzzy-logic-based mathematical apparatus that enables readers to work with continuous variables while performing whole-circuit simulations at a speed similar to gate-level simulators and an accuracy similar to circuit-level simulators. The author demonstrates newly developed principles of digital integrated circuit simulation and optimization that take into consideration various external and internal destabilizing factors influencing the operation of digital ICs. The discussion covers factors such as radiation, ambient temperature, electromagnetic fields, and climatic conditions, as well as the non-ideality of interconnects and power rails.
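As a loose illustration of the general idea (a toy sketch, not the book's actual apparatus), logic gates can be modeled over continuous signal levels using standard fuzzy operators:

```python
# Toy sketch: modeling logic gates with continuous signal levels via
# standard fuzzy operators. Illustrates the general idea of fuzzy-logic
# circuit modeling only; it is not the book's specific apparatus.

def fuzzy_and(a: float, b: float) -> float:
    """Minimum t-norm: a common fuzzy generalization of logical AND."""
    return min(a, b)

def fuzzy_not(a: float) -> float:
    """Standard fuzzy complement."""
    return 1.0 - a

# Signals are degraded voltages normalized to [0, 1] rather than hard 0/1.
a, b = 0.9, 0.7                      # slightly degraded "high" levels
y = fuzzy_not(fuzzy_and(a, b))       # a continuous-valued NAND
print(f"NAND({a}, {b}) = {y:.2f}")   # 0.30: still recognizably "low"
```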
This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development, so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability of today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying problems and solutions. She then describes out-of-order parallel discrete event simulation (OoO PDES), a novel approach for efficient validation of system-level designs that aggressively exploits the parallel capabilities of today's multi-core PCs. This approach enables readers to design simulators that fully exploit the parallel processing capability of multi-core systems for fast simulation without loss of simulation or timing accuracy. Based on this parallel simulation infrastructure, the author further describes automatic approaches that help the designer quickly narrow down debugging targets in faulty ESL models with parallelism.
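For orientation, the sequential discrete event simulation kernel that PDES approaches parallelize is essentially a timestamp-ordered event loop. A minimal sketch, with invented names and no relation to the author's simulator:

```python
import heapq

# Minimal sequential discrete event simulation (DES) kernel: the baseline
# that PDES parallelizes. Events are processed in timestamp order; a
# parallel simulator must preserve (or, as in OoO PDES, safely relax)
# exactly this ordering across threads.

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # min-heap of (timestamp, seq, handler)
        self._seq = 0      # tie-breaker for equal timestamps

    def schedule(self, delay, handler):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

def ping(sim):
    print(f"[{sim.now:4.1f}] ping")
    if sim.now < 3:
        sim.schedule(1.0, ping)   # re-arm one simulated time unit later

sim = Simulator()
sim.schedule(0.0, ping)
sim.run()
```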
This book presents task-scheduling techniques for emerging complex parallel architectures, including heterogeneous multi-core architectures, warehouse-scale datacenters, and distributed big data processing systems. The demand for high computational capacity has led to the growing popularity of multicore processors, which have become mainstream in both research and real-world settings. Yet to date, no book has explored task-scheduling techniques for these emerging complex parallel architectures. Addressing this gap, the book discusses state-of-the-art task-scheduling techniques that are optimized for different architectures and can be directly applied in real parallel systems. Further, the book provides an overview of the latest advances in task-scheduling policies in parallel architectures, and will help readers understand and overcome current and emerging issues in this field.
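As a point of reference for the terminology (not a technique taken from the book), the classic longest-processing-time list-scheduling heuristic greedily assigns each task to the earliest-available core:

```python
import heapq

# Classic longest-processing-time (LPT) list scheduling on identical cores:
# a baseline heuristic shown only to ground the terminology. Task runtimes
# and the core count are made-up illustrative values.

def lpt_schedule(task_runtimes, num_cores):
    cores = [(0.0, c) for c in range(num_cores)]   # (finish_time, core_id)
    heapq.heapify(cores)
    assignment = {}
    for tid, runtime in sorted(enumerate(task_runtimes),
                               key=lambda t: -t[1]):   # longest task first
        finish, core = heapq.heappop(cores)            # earliest-free core
        assignment[tid] = core
        heapq.heappush(cores, (finish + runtime, core))
    makespan = max(f for f, _ in cores)
    return assignment, makespan

assignment, makespan = lpt_schedule([3.0, 1.0, 4.0, 2.0, 2.0], num_cores=2)
print(assignment, makespan)   # makespan 6.0 on this toy workload
```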
Computing systems are becoming highly complex, harder to understand, and therefore more prone to failure. Where such systems control aircraft, for example, system failure could have disastrous consequences. It is therefore important that we are able to employ mathematical techniques to specify the behavior of critical systems. This thesis uses the theory of Communicating Sequential Processes (CSP) to show how a real-time system (a system that maintains a continuous interaction with its environment) may be specified. Included is a case study in which a local area network protocol is described at two levels of abstraction, and a general method for structuring CSP descriptions of layered protocols is given. The research contained here represents the very latest work on the specification and verification of real-time systems.
Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.
The book discusses some key scientific and technological developments in high performance computing, identifies significant trends, and defines desirable research objectives. It covers general concepts and emerging systems, software technology, algorithms and applications. Coverage includes hardware, software tools, networks and numerical methods, new computer architectures, and a discussion of future trends. Beyond purely scientific/engineering computing, the book extends to coverage of enterprise-wide, commercial applications, including papers on the performance and scalability of database servers and Oracle DBMS. Audience: Most papers are research level, but some are suitable for computer-literate managers and technicians, making the book useful to users of commercial parallel computers.
For the first time, this up-to-date text combines the main issues of the hardware description language VHDL-AMS, aimed at model representation of mixed-signal circuits and systems; characterization methods and tools for the extraction of model parameters; and modelling methodologies for accurate high-level behavioural models.
This book covers state-of-the-art techniques for high-level modeling and validation of complex hardware/software systems, including those with multicore architectures. Its comprehensive coverage of system-level validation, including high-level modeling of designs and faults, automated generation of directed tests, and efficient validation methodology using directed tests and assertions, will help readers avoid time-consuming and error-prone validation. The methodologies described in this book will help designers improve the quality of their validation, performing as much validation as possible in the early stages of the design while reducing the overall validation effort and cost.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip steadily grows, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues from physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision about the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on packet-switched networks. The consequences may in fact be far-reaching, because many topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture and parallel programming, as well as traditional system-on-chip issues, will appear relevant, but within the constraints of a single-chip VLSI implementation. The book is organized in three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating systems, embedded software and applications. However, communication from the physical to the application level is a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
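To make "packet-switched on-chip network" concrete, a standard textbook baseline is deterministic XY routing on a 2-D mesh; the sketch below is a generic illustration, not tied to any chapter of the book:

```python
# Deterministic XY routing on a 2-D mesh NoC: route fully in the X
# dimension first, then in Y. A standard textbook baseline (deadlock-free
# on a mesh), shown as a generic illustration rather than any specific
# proposal from the book.

def xy_route(src, dst):
    """Yield the sequence of (x, y) router hops from src to dst."""
    x, y = src
    dx, dy = dst
    while x != dx:                  # X dimension first
        x += 1 if dx > x else -1
        yield (x, y)
    while y != dy:                  # then Y dimension
        y += 1 if dy > y else -1
        yield (x, y)

print(list(xy_route((0, 0), (2, 2))))
# [(1, 0), (2, 0), (2, 1), (2, 2)]
```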
This book describes state-of-the-art approaches to Fog Computing, including the background of innovations achieved in recent years. Coverage includes various aspects of fog computing architectures for Internet of Things, driving reasons, variations and case studies. The authors discuss in detail key topics, such as meeting low latency and real-time requirements of applications, interoperability, federation and heterogeneous computing, energy efficiency and mobility, fog and cloud interplay, geo-distribution and location awareness, and case studies in healthcare and smart space applications.
High Performance Computing Systems and Applications contains fully refereed papers from the 15th Annual Symposium on High Performance Computing. These papers cover both fundamental and applied topics in HPC: parallel algorithms, distributed systems and architectures, distributed memory and performance, high level applications, tools and solvers, numerical methods and simulation, advanced computing systems, and the emerging area of computational grids. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book discusses the opportunities offered by disruptive technologies to overcome the economic and physical limits currently faced by the electronics industry. It provides a new methodology for the fast evaluation of an emerging technology from an architectural perspective and discusses the implications from simple circuits to complex architectures. Several technologies are discussed, ranging from 3-D integration of devices (Phase Change Memories, Monolithic 3-D, Vertical NanoWire-based transistors) to dense 2-D arrangements (Double-Gate Carbon Nanotubes, Sublithographic Nanowires, Lithographic Crossbar arrangements). Novel architectural organizations, as well as the associated tools, are presented in order to explore this freshly opened design space.
The book provides a bottom-up approach to understanding how a computer works and how to use computing to solve real-world problems. It covers the basics of digital logic through the lens of computer organization and programming. By the end of the book, the reader should be able to design his or her own computer from the ground up. Logic simulation with Verilog is used throughout, assembly languages are introduced and discussed, and the fundamentals of computer architecture and embedded systems are touched upon, all in a cohesive design-driven framework suitable for class or self-study.
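The book itself uses Verilog for logic simulation; purely as a language-neutral taste of the same gate-level style, here is a half adder evaluated in Python:

```python
# A half adder expressed at the gate level, in the spirit of the book's
# Verilog-based logic simulation (Python stands in here; the book itself
# works in Verilog).

def half_adder(a: int, b: int) -> tuple[int, int]:
    s = a ^ b   # sum: XOR gate
    c = a & b   # carry: AND gate
    return s, c

# Exhaustive truth table, the gate-level equivalent of a testbench.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"a={a} b={b} -> sum={s} carry={c}")
```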
As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the post-fabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a computation but also about the software flow that supports the design process. The goal of this book is to help designers become comfortable with these issues, and thus be able to exploit the vast opportunities possible with reconfigurable logic.
This book describes a methodology for dynamic power estimation using Transaction Level Modeling (TLM). The methodology exploits existing tools for RTL simulation, design synthesis and SystemC prototyping to provide fast and accurate power estimation using Transaction Level Power Modeling (TLPM). Readers will benefit from this innovative way of evaluating power at a high level of abstraction, at an early stage of the product life cycle, decreasing the number of expensive design iterations.
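The bookkeeping behind transaction-level power modeling can be pictured as charging a pre-characterized energy cost per transaction; a minimal sketch with invented per-transaction energies (real values would come from RTL characterization):

```python
# Minimal illustration of transaction-level power modeling (TLPM): each
# transaction type is charged a pre-characterized energy cost, and average
# power is accumulated energy divided by simulated time. The energy table
# below is invented for illustration only.

ENERGY_PJ = {"read": 12.0, "write": 15.0, "idle_cycle": 0.5}  # picojoules

class PowerMonitor:
    def __init__(self):
        self.energy_pj = 0.0

    def record(self, transaction: str, count: int = 1):
        self.energy_pj += ENERGY_PJ[transaction] * count

    def average_power_mw(self, sim_time_ns: float) -> float:
        return self.energy_pj / sim_time_ns   # pJ/ns is numerically mW

mon = PowerMonitor()
mon.record("read", 1000)
mon.record("write", 400)
mon.record("idle_cycle", 5000)
print(f"{mon.average_power_mw(10_000):.3f} mW over 10 us")   # 2.050 mW
```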
As e-government applications come of age, security has gradually become a more demanding requirement for users, administrators, and service providers. The increasingly widespread use of Web services facilitates the exchange of data among various e-government applications and paves the way for enhanced service delivery. Secure E-Government Web Services addresses various aspects of building secure e-government architectures and services, and presents the views of experts from academia, policy, and industry, who conclude that secure e-government Web services can be deployed in an application-centric and interoperable way. The book sheds new light on this innovative area of applications, responding to the current and upcoming challenges of e-government security.
Operating system kernels are central to the functioning of computers. The security of the overall system, as well as its reliability and responsiveness, depends upon the correct functioning of the kernel. This unique approach - presenting a formal specification of a kernel - starts with basic constructs and develops a set of kernels; proofs are included as part of the text.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques, used in BDA.
In three main divisions the book covers combinational circuits, latches, and asynchronous sequential circuits. Combinational circuits have no memorising ability, while sequential circuits have such an ability to various degrees. Latches are the simplest sequential circuits, ones with the shortest memory. The presentation is decidedly non-standard. The design of combinational circuits is discussed in an orthodox manner using normal forms and in an unorthodox manner using set-theoretical evaluation formulas relying heavily on Karnaugh maps. The latter approach allows for a new design technique called composition. Latches are covered very extensively. Their memory functions are expressed mathematically in a time-independent manner allowing the use of (normal, non-temporal) Boolean logic in their calculation. The theory of latches is then used as the basis for calculating asynchronous circuits. Asynchronous circuits are specified in a tree-representation, each internal node of the tree representing an internal latch of the circuit, the latches specified by the tree itself. The tree specification allows solutions of formidable problems such as algorithmic state assignment, finding equivalent states non-recursively, and verifying asynchronous circuits.
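The time-independent treatment of latch memory can be tasted through the standard characteristic equation of the NOR SR latch, Q+ = S + R'Q; the check below uses this textbook form, not the book's set-theoretical notation:

```python
# The NOR SR latch's characteristic equation, Q+ = S OR (NOT R AND Q),
# expresses its memory function in ordinary (non-temporal) Boolean logic,
# in the spirit of the book's time-independent treatment. This is the
# standard textbook equation, not the book's own notation.

def sr_latch_next(S: int, R: int, Q: int) -> int:
    """Q+ = S + R'.Q for a NOR SR latch."""
    assert not (S and R), "S = R = 1 is the forbidden input"
    return S | ((1 - R) & Q)

for S, R, Q in [(0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 1)]:
    print(f"S={S} R={R} Q={Q} -> Q+={sr_latch_next(S, R, Q)}")
# Holds state when S=R=0, sets on S=1, resets on R=1.
```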
This book explores the design implications of emerging, non-volatile memory (NVM) technologies on future computer memory hierarchy architecture designs. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to improve significantly the performance/power/reliability of future, mainstream integrated circuits.
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
This book discusses the trade-offs involved in designing direct RF digitization receivers for the radio frequency and digital signal processing domains. A system-level framework is developed, quantifying the relevant impairments of the signal processing chain through a comprehensive system-level analysis. Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis, including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity, and digital channel selection. The system-level framework described is applied to the design of a cable multi-channel RF direct digitization receiver. An optimum RF signal conditioning and some algorithms (automatic gain control loop, RF front-end amplitude equalization control loop) are used to relax the requirements of a 2.7 GHz 11-bit ADC.
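For scale, the ideal quantization-limited SNR of an N-bit ADC digitizing a full-scale sine is approximately 6.02N + 1.76 dB, which puts the 11-bit converter mentioned above at roughly 68 dB:

```python
# Ideal quantization-limited SNR of an N-bit ADC for a full-scale sine:
# SNR = 6.02*N + 1.76 dB. A standard back-of-envelope figure, used here
# only to put the 11-bit converter mentioned above into perspective;
# real receivers also budget for thermal noise, clock jitter, etc.

def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"11-bit ideal SNR: {ideal_snr_db(11):.1f} dB")   # ~68.0 dB
```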
The development of nature-inspired computational techniques has enhanced problem solving in dynamic and uncertain environments, enabling adaptable, self-organizing, and decentralized behavior. Recent Developments in Intelligent Nature-Inspired Computing is an authoritative reference source for the latest scholarly material on natural computation methods and their applications in diverse fields. Highlighting multidisciplinary studies on swarm intelligence, global optimization, and group technology, this publication is an ideal reference source for professionals, researchers, scholars, and engineers interested in the latest developments in computer science methodologies.
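Swarm intelligence, one of the themes highlighted above, is typified by particle swarm optimization; the compact sketch below uses common textbook coefficients and is not drawn from any chapter:

```python
import random

# Bare-bones particle swarm optimization (PSO) minimizing a 2-D sphere
# function: a canonical example of the swarm-intelligence methods the
# volume surveys. Coefficients are common textbook defaults.

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]           # per-particle best positions
    gbest = min(pbest, key=f)[:]          # swarm-wide best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best, value = pso(sphere)
print(best, value)   # converges near the origin
```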
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. It provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact. Consequently, optical interconnects have many desirable characteristics, e.g. high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the physical limitations of electrical VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area. This is the first book to provide current and comprehensive coverage of the field, reflecting the state of the art from high-level architecture design and algorithmic points of view, and pointing out directions for further research and development.
You may like...
- Nigel J. Kalton Selecta - Volume 2, Fritz Gesztesy, Gilles Godefroy, … (Hardcover, R3,118)
- Game-Theoretic Learning and Distributed…, Tatiana Tatarenko (Hardcover)
- Graded Questions On Income Tax In South…, K. Stark, L. Mitchell (Paperback)
- General Galois Geometries, James Hirschfeld, Joseph A. Thas (Hardcover)
- The Complexity of Tax Simplification…, Simon James, Adrian Sawyer, … (Hardcover, R3,618)
- Nonlinear Combinatorial Optimization, Dingzhu Du, Panos M. Pardalos, … (Hardcover)
- Networks in the Global World V…, Artem Antonyuk, Nikita Basov (Hardcover, R4,404)