This book provides readers with a valuable reference on cyber weapons and, in particular, viruses, software and hardware Trojans. The authors discuss in detail the most dangerous computer viruses, software Trojans and spyware, models of the computer Trojans that infect computers, methods of implementation and mechanisms of their interaction with an attacker (a hacker, an intruder or an intelligence agent). Coverage includes Trojans in electronic equipment such as telecommunication systems, computers, mobile communication systems, cars and even consumer electronics. The evolutionary path of hardware Trojans, from "cabinets", "crates" and "boxes" to microcircuits (ICs), is also discussed. Readers will benefit from the detailed review of the major known types of hardware Trojans in chips, the principles of their design, the mechanisms of their functioning, the methods of their introduction, means of camouflaging and detecting them, as well as methods of protection and counteraction.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. Based on advanced research ideas and approaches, and written by eminent researchers in the field, seven chapters cover the whole range from computer-aided high-level design of VLSI circuits and systems to layout and testable design, including modeling and synthesis of behavior, of control, and of dataflow, cell-based logic optimization, machine-assisted verification, and virtual machine design. The chapters presuppose only basic familiarity with computer architecture. They are self-contained and lead the reader gently and informatively to the forefront of current research. A special feature of the book is the comprehensive range of architecture design and validation topics covered, giving the reader a clear view of the problems and of advanced techniques for their solution.
Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum- and nano-technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite this probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.
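To make the idea of probabilistic analysis concrete, here is a minimal sketch in C (not taken from the monograph) that treats a single NAND gate under a simple fault model: each input is 1 with a given probability, and the gate output flips with probability eps. The values pa, pb and eps are illustrative assumptions, not figures from the book.

    /* Sketch only: probability analysis of one NAND gate whose output
       flips with probability eps; inputs are independent and are 1 with
       probabilities pa and pb (all values are illustrative). */
    #include <stdio.h>

    /* Probability that an ideal NAND output is 1: it is 0 only when both inputs are 1. */
    static double nand_p1(double pa, double pb) {
        return 1.0 - pa * pb;
    }

    /* Probability that the faulty output is 1 when the output flips with probability eps. */
    static double noisy_p1(double p1, double eps) {
        return p1 * (1.0 - eps) + (1.0 - p1) * eps;
    }

    int main(void) {
        double pa = 0.9, pb = 0.8, eps = 0.05;
        double ideal = nand_p1(pa, pb);
        printf("ideal P(out=1) = %.4f, noisy P(out=1) = %.4f\n",
               ideal, noisy_p1(ideal, eps));
        return 0;
    }

Chaining such per-gate relations through a netlist is one elementary way to estimate circuit-level error probabilities, the kind of question the monograph treats far more rigorously.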
Motivation Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents and, most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
This book is the product of Research Study Group (RSG) 13 on "Human Engineering Evaluation on the Use of Colour in Electronic Displays," of Panel 8, "Defence Applications of Human and Biomedical Sciences," of the NATO Defence Research Group. RSG 13 was chaired by Heino Widdel (Germany) and consisted of Jeffrey Grossman (United States), Jean-Pierre Menu (France), Giampaolo Noja (Italy, point of contact), David Post (United States), and Jan Walraven (Netherlands). Initially, Christopher Gibson (United Kingdom) and Sharon McFaddon (Canada) also participated. Most of these representatives served previously on the NATO program committee that produced Proceedings of a Workshop on Colour Coded vs. Monochrome Displays (edited by Christopher Gibson and published by the Royal Aircraft Establishment, Farnborough, England) in 1984. RSG 13 can be regarded as a descendant of that program committee. RSG 13 was formed in 1987 for the purpose of developing and distributing guidance regarding the use of color on electronic displays. During our first meeting, we discussed the fact that, although there is a tremendous amount of information available concerning color vision, color perception, colorimetry, and color displays (much of it relevant to display design), it is scattered across numerous texts, journals, conference proceedings, and technical reports. We decided that we could best fulfill the RSG's purpose by producing a book that consolidates and summarizes this information, emphasizing those aspects that are most applicable to display design.
This book describes algorithmic methods and parallelization techniques for designing a parallel sparse direct solver specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance with simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximation, providing a better understanding of efficient algorithmic solutions to problems. Divided into three parts, it covers:
- Model Theory and Recursive Functions - introducing the basic model theory of propositional, first-order, inductive definitions and second-order logic. Recursive functions, Turing computability and decidability are also examined.
- Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity.
- Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form.
Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well-thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful design of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico, in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
This book discusses various aspects of cloud computing, in which trust and fault-tolerance models are included in a multilayered cloud architecture. The authors present a variety of trust and fault models used in the cloud, comparing them based on their functionality and the layer of the cloud to which they apply. Various methods are discussed that can improve the performance of cloud architectures, in terms of trust and fault-tolerance, while providing better performance and quality of service to users. The discussion also includes new algorithms that overcome drawbacks of existing methods, using a performance matrix for each functionality. This book provides readers with an overview of cloud computing and of how trust and faults in cloud datacenters affect the performance and quality of service assured to users. Discusses fundamental issues related to trust and fault-tolerance in cloud computing; describes trust and fault management techniques in a multilayered cloud architecture to improve the security, reliability and performance of the system; includes methods to enhance power efficiency and network efficiency using trust- and fault-based resource allocation.
This comprehensive textbook on the field programmable gate array (FPGA) covers its history, fundamental knowledge, architectures, device technologies, computer-aided design technologies, design tools, examples of application, and future trends. Programmable logic devices represented by FPGAs have developed rapidly in recent years and have become key electronic devices used in most IT products. This book provides both complete introductions suitable for students and beginners, and high-level techniques useful for engineers and researchers in this field. Developed differently from conventional integrated circuits, the FPGA has unique structures, design methodologies, and application techniques. By allowing programming by users, the device can dramatically reduce the rising cost of development in advanced semiconductor chips. The FPGA now drives the most advanced semiconductor processes and is an all-in-one platform combining memory, CPUs, and various peripheral interfaces. This book introduces the FPGA from various aspects for readers of different levels. Novice learners can acquire a fundamental knowledge of the FPGA, including its history, from Chapter 1, the first half of Chapter 2, and Chapter 4. Professionals who are already familiar with the device will gain a deeper understanding of the structures and design methodologies from Chapters 3 and 5. Chapters 6-8 also provide advanced techniques and cutting-edge applications and trends useful for professionals. Although the first parts are mainly suitable for students, the advanced sections of the book will be valuable for professionals in acquiring an in-depth understanding of the FPGA to maximize the performance of the device.
Adiabatic logic is a potential successor to static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments, such as the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts, will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of these technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as are degradation-related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects such as energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS also in productivity. On a 130 nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic using a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values obtained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design, thanks to its compatibility not only with CMOS technology but also with the electronic design automation (EDA) tools developed for static CMOS system design.
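As a rough orientation (a textbook first-order relation, not a result quoted from this book), the energy dissipated per switching event in a conventional static CMOS gate and in an adiabatically charged gate can be compared as follows, where C is the switched load capacitance, R the resistance of the charging path, V_dd the supply voltage and T the ramp time of the power-clock:

    E_{\mathrm{static}} \approx \tfrac{1}{2}\, C V_{dd}^{2},
    \qquad
    E_{\mathrm{adiabatic}} \approx \frac{RC}{T}\, C V_{dd}^{2}
    \quad (T \gg RC)

Slower power-clock ramps therefore reduce the adiabatic dissipation, while leakage accumulated over a longer cycle pushes in the opposite direction; the balance between the two is what gives rise to the optimum operating frequency mentioned above.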
Integrated Research in Grid Computing presents a selection of the best papers presented at the CoreGRID Integration Workshop (CGIW2005), which took place on November 28-30, 2005 in Pisa, Italy. The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers (including 145 permanent researchers and 171 PhD students) from a number of institutions which have all constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.
This book describes a circuit architecture for converting real analog signals into a digital format, suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) Continuous-Time Sigma-Delta Modulators (CT-ΣΔM), has the potential to provide better digital data quality and achieve better data rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
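For readers new to noise shaping, the standard discrete-time relations below (general textbook material with signal-path delays omitted for clarity, not the continuous-time derivations of this book) show why cascading stages in a MASH structure helps: a single first-order stage pushes the quantization error E(z) through a first-order high-pass, while a MASH 1-1 cascade, after digital recombination of the two stage outputs, shapes the remaining error by a second-order function:

    Y_{1}(z) \approx X(z) + \bigl(1 - z^{-1}\bigr) E_{1}(z)
    \qquad \text{(single first-order stage)}

    Y_{\mathrm{MASH}}(z) \approx X(z) + \bigl(1 - z^{-1}\bigr)^{2} E_{2}(z)
    \qquad \text{(two first-order stages, MASH 1-1)}

The steeper high-pass shaping of the cascade is what allows a MASH converter to reach a given resolution at a lower oversampling ratio or with coarser quantizers.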
In Part I, the impact of an integro-differential operator on parity logic engines (PLEs) as a tool for scientific modeling from scratch is presented. Part II outlines the fuzzy structural modeling approach for building new linear and nonlinear dynamical causal forecasting systems in terms of fuzzy cognitive maps (FCMs). Part III introduces the new type of autogenetic algorithms (AGAs) to the field of evolutionary computing. Altogether, these PLEs, FCMs, and AGAs may serve as conceptual and computational power tools.
This state-of-the-art monograph presents a coherent survey of a variety of methods and systems for formal hardware verification. It emphasizes the presentation of approaches that have matured into tools and systems usable for the actual verification of nontrivial circuits. All in all, the book is a representative and well-structured survey on the success and future potential of formal methods in proving the correctness of circuits. The various chapters describe the respective approaches, supplying theoretical foundations as well as taking into account the application viewpoint. By applying all methods and systems presented to the same set of IFIP WG10.5 hardware verification examples, a valuable and fair analysis of the strengths and weaknesses of the various approaches is given.
The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory overhead in multiprocessors with a logically shared address space and physically distributed memory. The techniques presented in this book address this overhead; they can be used by scientific application designers seeking to optimize code for a particular high-performance computer, and they can also be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
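As a toy illustration of the partitioning trade-off described above (a generic sketch in C; it is not the ADP or CPR algorithm from the book, and the sizes N and P are arbitrary), the code below assigns the iterations of a 1-D near-neighbor loop to processors cyclically and counts how many neighbor references land on a different processor:

    /* Sketch only: cyclic ownership of loop iterations and the remote
       near-neighbor references it induces (N and P are illustrative). */
    #include <stdio.h>

    #define N 16   /* loop iterations / array elements */
    #define P 4    /* processors */

    static int owner(int i) { return i % P; }   /* cyclic partition: iteration i -> processor i mod P */

    int main(void) {
        int remote = 0, total = 0;
        for (int i = 0; i < N; i++) {
            /* access pattern of a[i] = f(a[i-1], a[i+1]) */
            if (i > 0)     { total++; if (owner(i - 1) != owner(i)) remote++; }
            if (i < N - 1) { total++; if (owner(i + 1) != owner(i)) remote++; }
        }
        printf("cyclic partition: %d of %d neighbor references are remote\n",
               remote, total);
        return 0;
    }

A blocked assignment (i / (N / P)) would make most of these references local but can worsen load imbalance when the work per iteration varies, which is precisely the kind of trade-off an ADP-style partitioner has to evaluate.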
It gives me immense pleasure to introduce this timely handbook to the research and development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the integrated circuit in the 1950s by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer science and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospects of humanity. (It is hard to imagine life today without mobile phones or the Internet.) The success of SPS requires a well-concerted, integrated approach from multiple disciplines, such as device, design, and application.
This book provides a comprehensive and consistent introduction to the Internet of Things. Hot topics, including the European privacy legislation (GDPR) and homomorphic encryption, are explained. For each topic, the reader gets a theoretical introduction and an overview, backed by programming examples. For demonstration, the authors use the IoT platform VICINITY, which is open-source, free, and offers leading standards for privacy. Presents readers with a coherent single-source introduction to the IoT; introduces selected hot topics of IoT, including GDPR (European legislation on data protection) and homomorphic encryption; provides coding examples for most topics that allow readers to kick-start their own IoT applications, smart services, etc.
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What system architects need to know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
THE CONTEXT OF PARALLEL PROCESSING The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements, on the other.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
Recently, a variety of results on the complexity status of the graph isomorphism problem have been obtained. These results belong to the so-called structural part of Complexity Theory. Our idea behind this book is to summarize such results, which might otherwise not be easily accessible in the literature, and also to give the reader an understanding of the aims and topics of Structural Complexity Theory in general. The text is basically self-contained; the only prerequisite for reading it is some elementary knowledge from Complexity Theory and Probability Theory. It can be used to teach a seminar or a monographic graduate course, but parts of it (especially Chapter 1) also provide a source of examples for a standard graduate course on Complexity Theory. Many people have helped us in different ways in the process of writing this book. Especially, we would like to thank V. Arvind, R.V. Book, E. Mayordomo, and the referee, who gave very constructive comments. This book project was made possible especially by a DAAD grant in the "Acciones Integradas" program. The third author has been supported by the ESPRIT project ALCOM-II.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low cost, high performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers from the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on the T.Node.