In brief summary, the following results were presented in this work:

* A linear-time approach was developed to find the register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding the register requirements of any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
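The first result above, finding the register requirement of a fixed modulo schedule, can be illustrated with a small sketch. Assuming each loop value's lifetime is given as a (start cycle, length) pair and the kernel has initiation interval II, the steady-state register requirement (often called MaxLive) is the maximum number of simultaneously live value instances across the II kernel cycles. The `max_live` name and the O(n·II) counting loop here are illustrative only, not the linear-time method developed in the work itself:

```python
def max_live(lifetimes, ii):
    """Steady-state register requirement of a modulo schedule.

    lifetimes: list of (start_cycle, length) pairs, one per loop value,
               where length is the number of cycles the value stays live.
    ii:        initiation interval (kernel length) of the schedule.
    """
    pressure = [0] * ii  # live-value count at each kernel cycle
    for start, length in lifetimes:
        # A lifetime longer than II overlaps itself across iterations:
        # 'full' complete copies are live at every kernel cycle, and one
        # extra copy is live during the 'rem' residual cycles.
        full, rem = divmod(length, ii)
        for c in range(ii):
            live = full
            if (c - start) % ii < rem:
                live += 1
            pressure[c] += live
    return max(pressure)


# A value born at cycle 0 that lives 3 cycles in an II=2 kernel overlaps
# the next iteration's copy, so two registers are needed at its peak.
print(max_live([(0, 3)], 2))
```

Summing the per-cycle pressure this way makes the self-overlap of long lifetimes explicit, which is the main subtlety that distinguishes cyclic (modulo) register counting from the straight-line case.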
This book serves both as an introduction to computer architecture
and as a guide to using a hardware description language (HDL) to
design, model and simulate real digital systems. The book starts
with an introduction to Verilog - the HDL chosen for the book since
it is widely used in industry and straightforward to learn. Next,
the instruction set architecture (ISA) for the simple VeSPA (Very
Small Processor Architecture) processor is defined - this is a real
working device that has been built and tested at the University of
Minnesota by the authors. The VeSPA ISA is used throughout the
remainder of the book to demonstrate how behavioural and structural
models can be developed and intermingled in Verilog. Although
Verilog is used throughout, the lessons learned will be equally
applicable to other HDLs. Written for senior and graduate students,
this book is also an ideal introduction to Verilog for practising
engineers.
Measuring Computer Performance sets out the fundamental techniques
used in analyzing and understanding the performance of computer
systems. Throughout the book, the emphasis is on practical methods
of measurement, simulation, and analytical modeling. The author
discusses performance metrics and provides detailed coverage of the
strategies used in benchmark programs. He gives intuitive
explanations of the key statistical tools needed to interpret
measured performance data. He also describes the general 'design of
experiments' technique, and shows how the maximum amount of
information can be obtained for the minimum effort. The book closes
with a chapter on the technique of queueing analysis. Appendices
listing common probability distributions and statistical tables are
included, along with a glossary of important technical terms. This
practically-oriented book will be of great interest to anyone who
wants a detailed, yet intuitive, understanding of computer systems
performance analysis.