In order to design and build computers that achieve and sustain high performance, it is essential that reliability issues be considered carefully. The problem has several aspects. Certainly, considering reliability implies that an engineer must be able to analyze how design decisions affect the incidence of failure. For instance, in order to design reliable integrated circuits, it is necessary to analyze how decisions regarding design rules affect the yield, i.e., the percentage of functional chips obtained by the manufacturing process.
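
To make the design-rule/yield connection concrete, here is a minimal sketch using the classic Poisson yield model (a standard textbook model, not one taken from this text; the numbers are illustrative):

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D).

    Aggressive design rules tend to raise the effective defect density D,
    and a larger die area A exposes more circuitry to defects; both lower
    the fraction of functional chips obtained from the process.
    """
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative numbers only: a 1 cm^2 die at two defect densities.
print(f"{poisson_yield(1.0, 0.5):.1%}")  # ~60.7%
print(f"{poisson_yield(1.0, 1.0):.1%}")  # ~36.8%
```
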
Of equal importance in producing reliable computers is the detection of failures in their Very Large Scale Integrated (VLSI) circuit components, caused by errors in the design specification, implementation, or manufacturing processes. Design verification involves checking the specification of a design for correctness prior to carrying out an implementation. Implementation verification ensures that the manual design or automatic synthesis process is correct, i.e., that the mask-level description correctly implements the specification. Manufacturing test involves checking the complex fabrication process for correctness, i.e., ensuring that there are no manufacturing defects in the integrated circuit. It should be noted that all the above verification mechanisms deal not only with verifying the functionality of the integrated circuit but also its performance.
For over three decades now, silicon capacity has steadily doubled every year and a half, with equally staggering improvements in operating speeds.
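
As back-of-the-envelope arithmetic (not a figure from the text), doubling every eighteen months compounds to roughly a million-fold increase over three decades:

```python
# Doubling every 1.5 years for 30 years: 2**(30 / 1.5) = 2**20.
years, doubling_period = 30, 1.5
growth_factor = 2 ** (years / doubling_period)
print(f"{growth_factor:,.0f}x")  # 1,048,576x
```
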
This increase in capacity has allowed more complex systems to be built on a single silicon chip. Coupled with this functionality increase, speed improvements have fueled tremendous advances in computing and have enabled new multimedia applications. Such trends, aimed at integrating higher levels of circuit functionality, are tightly related to an emphasis on compactness in consumer electronic products and to widespread growth and interest in wireless communications and products. These trends are expected to persist
for some time as technology and design methodologies continue to
evolve and the era of Systems on a Chip has definitely come of age.
While technology improvements and spiraling silicon capacity allow
designers to pack more functions onto a single piece of silicon,
they also highlight a pressing challenge for system designers to
keep up with such amazing complexity. To handle higher operating
speeds and the constraints of portability and connectivity, new
circuit techniques have appeared. Intensive research and progress
in EDA tools, design methodologies, and techniques are required to
empower designers with the ability to make efficient use of the
potential offered by this increasing silicon capacity and
complexity and to enable them to design, test, verify and build
such systems.
Rapid increases in chip complexity, ever-faster clocks, and
the proliferation of portable devices have combined to make power
dissipation an important design parameter. The power consumption of
a digital system determines its heat dissipation as well as battery
life. For some systems, power has become the most critical design
constraint. Computer-Aided Design Techniques for Low Power
Sequential Logic Circuits presents a methodology for low power
design. The authors first present a survey of techniques for
estimating the average power dissipation of a logic circuit. At the
logic level, power dissipation is directly related to average
switching activity. A symbolic simulation method that accurately
computes the average switching activity in logic circuits is then
described. This method is extended to handle sequential logic
circuits by modeling correlation in time and by calculating the
probabilities of present-state lines.
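
For context, the standard CMOS dynamic power relation that underlies such switching-activity-based estimates can be sketched as follows; this is a generic illustration, not the authors' estimator:

```python
def avg_dynamic_power(vdd: float, f_clk: float, nodes) -> float:
    """Average CMOS dynamic power: P = 0.5 * Vdd^2 * f_clk * sum(C_i * a_i),
    where C_i is the load capacitance at node i and a_i is its average
    switching activity (transitions per clock cycle)."""
    return 0.5 * vdd**2 * f_clk * sum(c * a for c, a in nodes)

# Illustrative numbers: three nodes, 1.2 V supply, 100 MHz clock.
nodes = [(20e-15, 0.50), (35e-15, 0.10), (50e-15, 0.25)]  # (farads, activity)
print(f"{avg_dynamic_power(1.2, 100e6, nodes) * 1e6:.2f} uW")  # ~1.87 uW
```
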
Computer-Aided Design Techniques for Low Power Sequential Logic Circuits then presents a survey of methods to optimize logic circuits for low power dissipation that target reduced switching activity. A method is also described that retimes a sequential logic circuit, repositioning registers so that the overall glitching in the circuit is minimized. The authors then detail a powerful optimization method based on selectively precomputing the output logic values of a circuit one clock cycle before they are required, and using the precomputed values to reduce internal switching activity in the succeeding clock cycle.
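
A classic instance of the precomputation idea (a generic sketch under assumed details, not the book's exact construction) is an n-bit comparator whose most significant bits alone decide the result about half the time, letting the low-order input registers sit idle for that cycle:

```python
import random

def compare_with_precompute(a: int, b: int, n: int = 8):
    """Return (a > b, low_bits_needed).  When the MSBs of a and b differ,
    the comparison is precomputed from them alone, and the registers
    feeding the low-order bits need not switch in the next cycle."""
    msb = 1 << (n - 1)
    a_msb, b_msb = bool(a & msb), bool(b & msb)
    if a_msb != b_msb:
        return a_msb, False   # decided by the precompute logic alone
    return a > b, True        # full comparison required

random.seed(0)
pairs = [(random.randrange(256), random.randrange(256)) for _ in range(10_000)]
full = sum(compare_with_precompute(a, b)[1] for a, b in pairs)
print(f"low-order logic active on {full / len(pairs):.0%} of cycles")  # ~50%
```
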
Presented next is a survey of methods that reduce switching activity in circuits described at the register-transfer and behavioral levels. Also described is a scheduling algorithm that reduces power dissipation by maximizing the inactivity period of the modules in a given circuit.
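
As a toy illustration of the scheduling idea (hypothetical numbers, not the book's algorithm), clustering a module's operations into consecutive control steps lengthens the idle stretch during which the module can be shut down:

```python
def longest_idle(busy_cycles, horizon=8):
    """Length of the longest run of consecutive idle cycles for a module."""
    best = run = 0
    for t in range(horizon):
        run = 0 if t in busy_cycles else run + 1
        best = max(best, run)
    return best

# The same three operations, scheduled two different ways.
scattered = {0, 3, 6}   # idle gaps of at most 2 cycles
clustered = {0, 1, 2}   # one 5-cycle idle stretch; module can be shut down
print(longest_idle(scattered), longest_idle(clustered))  # 2 5
```
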
Computer-Aided Design Techniques for Low Power Sequential Logic
Circuits concludes with a summary and directions for future
research.
The current trend towards the realization of complex and versatile
Systems on a Chip requires the combined efforts and attention of
experts in a wide range of areas including microsystems, embedded
hardware/software systems, dedicated ASIC and programmable logic
hardware, reconfigurable computing, wireless communications and RF
issues, video and image processing, memory systems, low power
design techniques, design, test and verification algorithms,
modeling and simulation, logic synthesis, and interconnect
analysis. Thus, the contributions presented herein address a wide
range of Systems on a Chip problems. VLSI: Systems on a Chip
comprises the selected proceedings of the Tenth International
Conference on Very Large Scale Integration (VLSI '99), which was
sponsored by the International Federation for Information
Processing (IFIP) and was held in Lisbon, Portugal, in December
1999.

The volume is organized around two themes, in which the following topics are addressed:

VLSI Systems Design and Applications
* Analog Systems Design
* Analog Modeling and Design
* Image Processing
* Reconfigurable Computing
* Memory and System Design
* Low Power Design

VLSI Design Methods and CAD
* Test and Verification
* Analog CAD and Interconnect
* Fundamental CAD Algorithms
* Verification and Simulation
* CAD for Physical Design
* High-Level Synthesis and Verification of Embedded Systems

VLSI:
Systems on a Chip is essential reading for researchers working on
system integration, design, and CAD.
This monograph is the second of a two-part survey and analysis of
the state of the art in secure processor systems, with a specific
focus on remote software attestation and software isolation. The
first part established the taxonomy and prerequisite concepts
relevant to an examination of the state of the art in trusted
remote computation: attested software isolation containers
(enclaves). This second part extends Part I's description of
Intel's Software Guard Extensions (SGX), an available and
documented enclave-capable system, with a rigorous security
analysis of SGX as a system for trusted remote computation. This
part documents the authors' concerns over the shortcomings of SGX
as a secure system and introduces the MIT Sanctum processor
developed by the authors: a system designed to offer stronger
security guarantees, lend itself better to analysis and formal
verification, and offer a more straightforward and complete threat
model than the Intel system, all with an equivalent programming
model. This two-part work advocates a principled, transparent, and
well-scrutinized approach to system design, and argues that
practical guarantees of privacy and integrity for remote
computation are achievable at a reasonable design cost and
performance overhead. See also: Secure Processors Part I:
Background, Taxonomy for Secure Enclaves and Intel SGX Architecture
(ISBN 978-1-68083-300-3). Part I of this survey establishes the
taxonomy and prerequisite concepts relevant to an examination of
the state of the art in trusted remote computation: attested
software isolation containers (enclaves).
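
To fix intuition for what "attested software isolation" buys, here is a deliberately simplified sketch of remote attestation; all names are hypothetical, an HMAC stands in for the asymmetric report signature that real systems such as SGX use, and none of this reflects the actual SGX API:

```python
import hashlib, hmac, os

DEVICE_KEY = os.urandom(32)  # stand-in for a hardware-held secret

def measure(enclave_code: bytes) -> bytes:
    """An enclave 'measurement' is a cryptographic hash of the loaded code."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes, nonce: bytes) -> bytes:
    """Hardware produces a report binding the measurement to a fresh nonce."""
    return hmac.new(DEVICE_KEY, measure(enclave_code) + nonce, hashlib.sha256).digest()

def verify(report: bytes, expected_code: bytes, nonce: bytes) -> bool:
    """The remote verifier checks that the expected code is really running.
    (In a real system this is an asymmetric-signature check, so the
    verifier does not hold the device secret as it does in this toy.)"""
    expected = hmac.new(DEVICE_KEY, measure(expected_code) + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)

nonce = os.urandom(16)
code = b"enclave binary image"
assert verify(attest(code, nonce), code, nonce)
```
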
This monograph is the first in a two-part survey and analysis of
the state of the art in secure processor systems, with a specific
focus on remote software attestation and software isolation. It
first examines the relevant concepts in computer architecture and
cryptography, and then surveys attack vectors and existing
processor systems claiming security for remote computation and/or
software isolation. It examines, in detail, the modern isolation
container (enclave) primitive as a means to minimize trusted
software given practical trusted hardware and reasonable
performance overhead. Specifically, this work examines the
programming model and software design considerations of Intel's
Software Guard Extensions (SGX), as it is an available and
documented enclave-capable system. This work advocates a
principled, transparent, and well-scrutinized approach to secure
system design, and argues that practical guarantees of privacy and
integrity for remote computation are achievable at a reasonable
design cost and performance overhead. See also: Secure Processors
Part II: Intel SGX Security Analysis and MIT Sanctum Architecture
(ISBN 978-1-68083-302-7). Part II of this survey is a deep
dive into the implementation and security evaluation of two modern
enclave-capable secure processor systems: SGX and MIT's Sanctum.
The complex but insufficient threat model employed by SGX motivates
Sanctum, which achieves stronger security guarantees under software
attacks with an equivalent programming model.