Written for an advanced-level course in digital systems design,
DIGITAL SYSTEMS DESIGN USING VHDL integrates the use of the
industry-standard hardware description language VHDL into the
digital design process. Following a review of basic concepts of
logic design in Chapter 1, the author introduces the basics of VHDL
in Chapter 2, and then incorporates more coverage of VHDL topics as
needed, with advanced topics covered in Chapter 8. Rather than
simply teach VHDL as a programming language, this book emphasizes
the practical use of VHDL in the digital design process. For
example, in Chapter 9, the author develops VHDL models for a RAM
memory and a microprocessor bus interface; he then uses a VHDL
simulation to verify that timing specifications for the interface
between the memory and microprocessor bus are satisfied. The book
also covers the use of CAD tools to synthesize digital logic from a
VHDL description (in Chapter 8), and stresses the use of
programmable logic devices, including programmable gate arrays.
Chapter 10 introduces methods for testing digital systems, including
boundary scan and built-in self-test.
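The kind of timing verification described above, checking that an interface between a memory and a microprocessor bus meets its specifications, can be illustrated with a toy back-of-the-envelope check. All numbers and names below are hypothetical, not taken from the book: the idea is simply that data read from the RAM must arrive and settle before the bus's sampling clock edge.

```python
# Hypothetical timing budget for a memory read over a bus,
# illustrating the kind of constraint a simulation must verify.
CLOCK_PERIOD_NS = 100
ADDRESS_VALID_NS = 15   # address valid after the driving clock edge
MEM_ACCESS_NS = 70      # RAM access time from valid address
SETUP_NS = 10           # data setup time before the sampling edge

def data_arrival_ns():
    # Data is valid this long after the driving edge.
    return ADDRESS_VALID_NS + MEM_ACCESS_NS

def timing_met():
    # The read succeeds if data arrives early enough to satisfy
    # the setup requirement at the next clock edge.
    return data_arrival_ns() + SETUP_NS <= CLOCK_PERIOD_NS

print(timing_met())  # True for these example values
```

A VHDL testbench performs the same kind of check automatically, by simulating the models and asserting that data is stable within the required window.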
The formal study of program behavior has become an essential
ingredient in guiding the design of new computer architectures.
Accurate characterization of applications leads to the efficient
design of high-performing architectures. Quantitative and analytical
characterization of workloads is important for understanding and
exploiting their interesting features. This book includes
ten chapters on various aspects of workload characterization. File
caching characteristics of the industry-standard web-serving
benchmark SPECweb99 are presented by Keller et al. in Chapter 1,
while the value locality of the SPECjvm98 benchmarks is characterized
by Rychlik et al. in Chapter 2. The SPECjvm98 benchmarks are visited again
in Chapter 3, where Tao et al. study the operating system activity
in Java programs. In Chapter 4, KleinOsowski et al. describe how
the SPEC2000 CPU benchmark suite may be adapted for computer
architecture research and present the small, representative input
data sets they created to reduce simulation time without
compromising accuracy. Their research has been recognized by the
Standard Performance Evaluation Corporation (SPEC) and is listed on
the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl.
The main contribution of Chapter 5
is the proposal of a new measure, the locality surface, for
characterizing locality of reference in programs. Sorenson et al.
describe how a single three-dimensional surface can be used to
represent both the temporal and spatial locality of programs. In
Chapter 6, Thornock et al. …
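The locality-surface idea from Chapter 5 can be approximated with a short sketch: for each memory reference, tabulate its reuse distance (a temporal-locality measure) against its stride from the previous reference (a spatial-locality measure), yielding a two-dimensional histogram whose heights form a surface. This is only a simplified stand-in for the authors' actual construction; the function name and parameters are invented for illustration.

```python
from collections import defaultdict

def locality_surface(trace, max_dist=8, max_stride=8):
    """Count (reuse distance, stride) pairs over an address trace.
    A simplified 2-D locality histogram in the spirit of a
    'locality surface'; the book's exact definition may differ."""
    last_use = {}            # address -> index of its last reference
    hist = defaultdict(int)
    prev_addr = None
    for i, addr in enumerate(trace):
        stride = 0 if prev_addr is None else addr - prev_addr
        if addr in last_use:
            dist = i - last_use[addr]   # references since last use
            if dist <= max_dist and abs(stride) <= max_stride:
                hist[(dist, stride)] += 1
        last_use[addr] = i
        prev_addr = addr
    return dict(hist)

# A small looping trace: every address is reused 3 references later.
trace = [0, 1, 2, 0, 1, 2, 0]
print(locality_surface(trace))
```

Plotting the counts against the two axes gives the surface; peaks near the origin indicate strong temporal and spatial locality.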
The advent of the World Wide Web and web-based applications has
dramatically changed the nature of computer applications. Computer
system design, in light of these changes, involves
understanding these modern workloads, identifying bottlenecks during
their execution, and appropriately tailoring microprocessors,
memory systems, and the overall system to minimize bottlenecks.
This book contains ten chapters dealing with several contemporary
workloads, including Java, web server, and database
workloads. The first two chapters concentrate on Java. While
Barisone et al.'s characterization in Chapter 1 deals with
instruction set usage of Java applications, Kim et al.'s analysis
in Chapter 2 focuses on memory referencing behavior of Java
workloads. Several applications, including the SPECjvm98 suite, are
studied under both interpreters and Just-In-Time (JIT) compilers.
Barisone et al.'s work includes an analytical model to compute the
utilization of various functional units. Kim et al. present
information on locality, live-range of objects, object lifetime
distribution, etc. Studying database workloads has been a challenge
to research groups, due to the difficulty in accessing standard
benchmarks. Configuring hardware and software for database
benchmarks such as those from the Transaction Processing Performance
Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson
present a simplified workload (microbenchmark) that approximates
the characteristics of complex standardized benchmarks.
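The microbenchmarking idea behind Keeton and Patterson's chapter, approximating a complex database benchmark with a much simpler workload, can be caricatured with two hypothetical access-pattern kernels (not the authors' actual microbenchmark): OLTP-style random point lookups versus DSS-style sequential scans.

```python
import random

def sequential_scan(table):
    """DSS-style access: touch every row in order, mimicking a
    scan-heavy decision-support query."""
    return sum(table)

def random_probes(table, n, seed=0):
    """OLTP-style access: n random point lookups, mimicking
    transaction-processing index probes."""
    rng = random.Random(seed)   # seeded for reproducibility
    return sum(table[rng.randrange(len(table))] for _ in range(n))

table = list(range(10_000))     # a stand-in for a database table
print(sequential_scan(table))   # 49995000
print(random_probes(table, 100))
```

Timing kernels like these under varying table sizes exposes the same cache and memory-system behavior that dominates full TPC runs, at a tiny fraction of the configuration effort.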
Computer and microprocessor architectures are advancing at an
astounding pace. However, increasing demands on performance coupled
with a wide variety of specialized operating environments act to
slow this pace by complicating the performance evaluation process.
Carefully balancing efficiency and accuracy is key to avoiding these
slowdowns, and that balance can be achieved with an in-depth
understanding of the available evaluation methodologies.
Performance Evaluation and Benchmarking outlines a variety of
evaluation methods and benchmark suites, considering their
strengths, weaknesses, and when each is appropriate to use.
Following a general overview of important performance analysis
techniques, the book surveys contemporary benchmark suites for
specific areas, such as Java, embedded systems, CPUs, and Web
servers. Subsequent chapters explain how to choose appropriate
averages for reporting metrics and provide a detailed treatment of
statistical methods, including a summary of statistics, how to
apply statistical sampling for simulation, how to apply SimPoint,
and a comprehensive overview of statistical simulation. The
discussion then turns to benchmark subsetting methodologies and the
fundamentals of analytical modeling, including queuing models and
Petri nets. Three chapters devoted to hardware performance counters
conclude the book. Supplying abundant illustrations, examples, and
case studies, Performance Evaluation and Benchmarking offers a firm
foundation in evaluation methods along with up-to-date techniques
that are necessary to develop next-generation architectures.
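One standard instance of the book's "appropriate averages" question: normalized speedup ratios are conventionally summarized with a geometric rather than an arithmetic mean, since the geometric mean is unaffected by which machine is chosen as the baseline. A minimal sketch with hypothetical per-benchmark speedups:

```python
import math

def geometric_mean(ratios):
    """Geometric mean, the conventional average for normalized
    ratios such as per-benchmark speedups (as used by SPEC)."""
    return math.prod(ratios) ** (1.0 / len(ratios))

# One benchmark doubles in speed, one halves, one is unchanged.
speedups = [2.0, 0.5, 1.0]
print(geometric_mean(speedups))        # 1.0: no net speedup
print(sum(speedups) / len(speedups))   # arithmetic mean: ~1.17
```

The arithmetic mean here suggests a 17% improvement even though the gains and losses exactly cancel, which is why it misleads for ratio-based metrics.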