|
This book precisely formulates and simplifies the presentation of
Instruction Level Parallelism (ILP) compilation techniques. It
uniquely offers consistent and uniform descriptions of the code
transformations involved. Because ILP is ubiquitous in virtually
every processor built today, from general-purpose CPUs to
application-specific and embedded processors, this book is useful
to the student, the practitioner, and the researcher of advanced
compilation techniques. With an emphasis on fine-grain
instruction-level parallelism, this book will also prove
interesting to researchers and students of parallelism at large,
inasmuch as the techniques described yield insights that go beyond
compilation for superscalar and VLIW (Very Long Instruction Word)
machines and are more widely applicable to optimizing compilers in
general. ILP techniques have also found wide and crucial
application in design automation, where they have been used
extensively to optimize the performance and to minimize the area
and power of computer designs.
Automatic transformation of a sequential program into a parallel
form is a subject that presents a great intellectual challenge and
promises great practical rewards. There is a tremendous investment
in existing sequential programs, and scientists and engineers
continue to write their application programs in sequential
languages (primarily in Fortran), while the demand for higher
speedups keeps increasing. The job of a restructuring compiler is
to discover the dependence structure of a given program and
transform the program in a way that is consistent with both that
dependence structure and the characteristics of the given machine.
Much attention has been focused on the Fortran do loop, where one
expects to find major chunks of computation that need to be
performed repeatedly for different values of the index variable.
Many loop transformations have been designed over the years, and
several of them can be found in any parallelizing compiler
currently in use in industry or at a university research facility.
The book series Loop Transformations for Restructuring Compilers
provides a rigorous theory of loop transformations and dependence
analysis. The transformations are developed in a consistent
mathematical framework using objects like directed graphs,
matrices, and linear equations, so that the algorithms that
implement the transformations can be precisely described in terms
of certain abstract mathematical algorithms. The first volume, Loop
Transformations for Restructuring Compilers: The Foundations,
provided the general mathematical background needed for loop
transformations (including those basic mathematical algorithms),
discussed data dependence, and introduced the major
transformations. The current volume, Loop Parallelization, builds a
detailed theory of iteration-level loop transformations based on
the material developed in the previous book.
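The iteration-level view described above can be made concrete with a
toy dependence check. A loop whose statement writes a[i] and reads
a[i-1] carries a dependence of distance 1 and must run serially,
while a loop with no loop-carried dependence may run in parallel.
The following sketch is purely illustrative (the function name and
the affine-offset model are this example's own, not the book's
notation):

```python
# Illustrative sketch: classify a one-dimensional loop by its
# dependence distance.  The loop body is modeled by two affine
# subscripts: the statement writes a[i + write_offset] and reads
# a[i + read_offset].

def dependence_distances(write_offset, read_offset, n):
    """Return the set of distances d > 0 such that iteration i
    writes the element that iteration i + d reads (i.e. the loop
    carries a dependence of distance d)."""
    d = write_offset - read_offset
    return {d} if 0 < d < n else set()

# a[i] = a[i-1] + 1: write offset 0, read offset -1, so the
# dependence distance is 1 and the loop must stay sequential.
assert dependence_distances(0, -1, 100) == {1}

# a[i] = b[i] + 1 has no self-dependence between iterations, so
# the distance set is empty and the iterations are independent.
assert dependence_distances(0, 0, 100) == set()
```

A real parallelizer works with distance and direction vectors over
whole loop nests, but the single-loop case shows the basic idea.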
Automatic transformation of a sequential program into a parallel
form is a subject that presents a great intellectual challenge and
promises great practical rewards. There is a tremendous investment
in existing sequential programs, and scientists and engineers
continue to write their application programs in sequential
languages (primarily in Fortran), but the demand for increasing
speed is constant. The job of a restructuring compiler is to
discover the dependence structure of a given program and transform
the program in a way that is consistent with both that dependence
structure and the characteristics of the given machine. Much
attention in this field of research has been focused on the Fortran
do loop. This is where one expects to find major chunks of
computation that need to be performed repeatedly for different
values of the index variable. Many loop transformations have been
designed over the years, and several of them can be found in any
parallelizing compiler currently in use in industry or at a
university research facility. Loop Transformations for
Restructuring Compilers: The Foundations provides a rigorous theory
of loop transformations. The transformations are developed in a
consistent mathematical framework using objects like directed
graphs, matrices and linear equations. The algorithms that
implement the transformations can then be precisely described in
terms of certain abstract mathematical algorithms. The book
provides the general mathematical background needed for loop
transformations (including those basic mathematical algorithms),
discusses data dependence, and introduces the major
transformations. The next volume will build a detailed theory of
loop transformations based on the material developed here. Loop
Transformations for Restructuring Compilers: The Foundations
presents a theory of loop transformations that is rigorous and yet
reader-friendly.
Dependence Analysis may be considered to be the second edition of
the author's 1988 book, Dependence Analysis for Supercomputing. It
is, however, a completely new work that subsumes the material of
the 1988 publication. This book is the third volume in the series
Loop Transformations for Restructuring Compilers. This series has
been designed to provide a complete mathematical theory of
transformations that can be used to automatically change a
sequential program containing FORTRAN-like do loops into an
equivalent parallel form. In Dependence Analysis, the author
extends the model to a program consisting of do loops and
assignment statements, where the loops need not be sequentially
nested and are allowed to have arbitrary strides. In the context of
such a program, the author studies, in detail, dependence between
statements of the program caused by program variables that are
elements of arrays. Dependence Analysis is directed toward graduate
and undergraduate students, and professional writers of
restructuring compilers. The prerequisites for the book are some
knowledge of programming languages and familiarity with calculus
and graph theory. No knowledge of linear programming is required.
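The model sketched above — do loops with arbitrary strides whose
statements access array elements — lends itself to a brute-force
dependence check for small iteration spaces. The sketch below is an
illustration only (the function and its affine-subscript parameters
are hypothetical names, not the book's formalism):

```python
# Illustrative brute-force dependence check for a do loop with an
# arbitrary stride.  The statement writes x[a*i + b] and reads
# x[c*i + d]; a dependence exists when some written element is
# also read (all names here are this example's own).

def has_dependence(a, b, c, d, lo, hi, stride):
    """True if two iterations in lo..hi (step stride) touch the
    same array element, once as a write and once as a read."""
    idx = range(lo, hi + 1, stride)
    writes = {a * i + b for i in idx}
    reads = {c * j + d for j in idx}
    return bool(writes & reads)

# do i = 1, 9, 2:  x(2*i) = ... x(2*i + 1) ...
# Even subscripts are written, odd subscripts read: no dependence.
assert not has_dependence(2, 0, 2, 1, 1, 9, 2)

# do i = 1, 9, 2:  x(i + 2) = ... x(i) ...
# Iteration i writes the element read by iteration i + 2.
assert has_dependence(1, 2, 1, 0, 1, 9, 2)
```

Enumeration is exponential in nest depth, which is exactly why the
book develops compile-time tests that reason about the subscript
expressions symbolically instead.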
This book is on dependence concepts and general methods for
dependence testing. Here, dependence means data dependence, and the
tests are compile-time tests. We felt the time was ripe to create a
solid theory of the subject, to provide the research community with
a uniform conceptual framework in which things fit together nicely.
How successful we have been in meeting these goals, of course,
remains to be seen. We do not try to include all the minute details
that are known, nor do we deal with clever tricks that all good
programmers would want to use. We do try to convince the reader
that there is a mathematical basis consisting of theories of bounds
of linear functions and linear diophantine equations, that levels
and direction vectors are concepts that arise rather naturally,
that different dependence tests are really special cases of some
general tests, and so on. Some mathematical maturity is needed for
a good understanding of the book: mainly calculus and linear
algebra. We have covered diophantine equations rather thoroughly
and given a description of some matrix theory ideas that are not
very widely known. A reader familiar with linear programming would
quickly recognize several concepts. We have learned a great deal
from the works of M. Wolfe and of K. Kennedy and R. Allen. Wolfe's
Ph.D. thesis at the University of Illinois and Kennedy and Allen's
paper on vectorization of Fortran programs are still very useful
sources on this subject.
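The diophantine machinery mentioned above underlies the classical
GCD test: a dependence between a write to x(a*i + b) and a read of
x(c*j + d) requires an integer solution of a*i - c*j = d - b, and
such a solution exists iff gcd(a, c) divides d - b. A minimal
sketch (loop bounds are ignored, so the test can only *disprove*
dependence; the function name is this example's own):

```python
from math import gcd

def gcd_test_may_depend(a, b, c, d):
    """Classical GCD test for subscripts a*i + b (write) and
    c*j + d (read): the equation a*i - c*j = d - b has an integer
    solution iff gcd(a, c) divides d - b.  Returns False only when
    dependence is provably impossible (bounds are ignored)."""
    return (d - b) % gcd(a, c) == 0

# x(2*i) written, x(2*j + 1) read: 2i - 2j = 1 has no integer
# solution, so there is provably no dependence.
assert not gcd_test_may_depend(2, 0, 2, 1)

# x(4*i) written, x(2*j + 2) read: gcd(4, 2) = 2 divides 2, so a
# dependence cannot be ruled out by this test alone.
assert gcd_test_may_depend(4, 0, 2, 2)
```

Because the test ignores loop bounds, a True answer is conservative:
stronger tests (bounds of linear functions, direction vectors) are
needed to refine it, which is the program the book carries out.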
|
Languages and Compilers for Parallel Computing - 9th International Workshop, LCPC'96, San Jose, California, USA, August 8-10, 1996, Proceedings (Paperback, 1997 ed.)
David Sehr, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
|
R3,164
|
This book presents the thoroughly refereed post-workshop
proceedings of the 9th International Workshop on Languages and
Compilers for Parallel Computing, LCPC'96, held in San Jose,
California, in August 1996.
The book contains 35 carefully revised full papers together with
nine poster presentations. The papers are organized in topical
sections on automatic data distribution and locality enhancement,
program analysis, compiler algorithms for fine-grain parallelism,
instruction scheduling and register allocation, parallelizing
compilers, communication optimization, compiling HPF, and run-time
control of parallelism.
|
Languages and Compilers for Parallel Computing - 8th International Workshop, Columbus, Ohio, USA, August 10-12, 1995. Proceedings (Paperback, 1996 ed.)
Chua-Huang Huang, Ponnuswamy Sadayappan, Utpal Banerjee, David Gelernter, Alex Nicolau, …
|
R3,156
|
This book presents the refereed proceedings of the Eighth Annual
Workshop on Languages and Compilers for Parallel Computing, held in
Columbus, Ohio in August 1995.
The 38 full revised papers presented were carefully selected for
inclusion in the proceedings and reflect the state of the art of
research and advanced applications in parallel languages,
restructuring compilers, and runtime systems. The papers are
organized in sections on fine-grain parallelism, interprocedural
analysis, program analysis, Fortran 90 and HPF, loop
parallelization for HPF compilers, tools and libraries, loop-level
optimization, automatic data distribution, compiler models,
irregular computation, object-oriented and functional parallelism.
|
Languages and Compilers for Parallel Computing - 7th International Workshop, Ithaca, NY, USA, August 8 - 10, 1994. Proceedings (Paperback, 1995 ed.)
Keshav Pingali, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
|
R1,745
|
This volume presents revised versions of the 32 papers accepted for
the Seventh Annual Workshop on Languages and Compilers for Parallel
Computing, held in Ithaca, NY in August 1994.
The 32 papers presented report on the leading research activities
in languages and compilers for parallel computing and thus reflect
the state of the art in the field. The volume is organized in
sections on fine-grain parallelism, alignment and distribution,
postlinear loop transformation, parallel structures, program
analysis, computer communication, automatic parallelization,
languages for parallelism, scheduling and program optimization, and
program evaluation.
|
Languages and Compilers for Parallel Computing - 6th International Workshop, Portland, Oregon, USA, August 12 - 14, 1993. Proceedings (Paperback, 1994 ed.)
Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
|
R3,189
|
This book contains papers selected for presentation at the Sixth
Annual Workshop on Languages and Compilers for Parallel Computing.
The workshop was hosted by the Oregon Graduate Institute of Science
and Technology. All the major research efforts in parallel
languages and compilers are represented in this workshop series.
The 36 papers in the volume are grouped under nine headings: dynamic
data structures, parallel languages, High Performance Fortran, loop
transformation, logic and dataflow language implementations, fine
grain parallelism, scalar analysis, parallelizing compilers, and
analysis of parallel programs. The book represents a valuable
snapshot of the state of research in the field in 1993.
|
Languages and Compilers for Parallel Computing - 5th International Workshop, New Haven, Connecticut, USA, August 3-5, 1992. Proceedings (Paperback, 1993 ed.)
Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
|
R1,790
|
The articles in this volume are revised versions of the best papers
presented at the Fifth Workshop on Languages and Compilers for
Parallel Computing, held at Yale University, August 1992. The
previous workshops in this series were held in Santa Clara (1991),
Irvine (1990), Urbana (1989), and Ithaca (1988). As in previous
years, a reasonable cross-section of some of the best work in the
field is presented. The volume contains 35 papers, mostly by
authors working in the U.S. or Canada but also by authors from
Austria, Denmark, Israel, Italy, Japan and the U.K.
|
Languages and Compilers for Parallel Computing - Fourth International Workshop, Santa Clara, California, USA, August 7-9, 1991. Proceedings (Paperback, 1992 ed.)
Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
|
R1,698
|
This volume contains the proceedings of the Fourth Workshop on
Languages and Compilers for Parallel Computing, held in Santa
Clara, California, in August 1991. The purpose of the workshop,
held every year since 1988, is to bring together the leading
researchers on parallel programming language design and compilation
techniques for parallel computers. The papers in this book cover
several important topics including: (1) languages and structures to
represent programs internally in the compiler, (2) techniques to
analyze and manipulate sequential loops in order to generate a
parallel version, (3) techniques to detect and extract fine-grain
parallelism, (4) scheduling and memory-management issues in
automatically generated parallel programs, (5) parallel programming
language designs, and (6) compilation of explicitly parallel
programs. Together, the papers give a good overview of the research
projects underway in 1991 in this field.
|