Modern computer architectures designed with high-performance
microprocessors offer tremendous potential gains in performance
over previous designs. Yet their very complexity makes it
increasingly difficult to produce efficient code and to realize
their full potential. This landmark text from two leaders in the
field focuses on the pivotal role that compilers can play in
addressing this critical issue.
The basis for all the methods presented in this book is data
dependence, a fundamental compiler analysis tool for optimizing
programs on high-performance microprocessors and parallel
architectures. It enables compiler designers to write compilers
that automatically transform simple, sequential programs into forms
that can exploit special features of these modern
architectures.
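A minimal sketch of one such dependence analysis is the classic GCD test for affine array subscripts (the function name and example accesses here are illustrative, not code from the book):

```python
from math import gcd

def gcd_test(a_coef, a_const, b_coef, b_const):
    """Classic GCD dependence test for two affine accesses to the same
    array, A[a_coef*i + a_const] and A[b_coef*j + b_const]: a dependence
    between loop iterations is possible only if gcd(a_coef, b_coef)
    divides (b_const - a_const)."""
    g = gcd(a_coef, b_coef)
    return (b_const - a_const) % g == 0

# A[2*i] = ... ; ... = A[2*i + 1]: gcd(2, 2) = 2 does not divide 1,
# so the accesses never overlap and the loop may be parallelized.
print(gcd_test(2, 0, 2, 1))   # False
# A[i] = ... ; ... = A[i - 1]: gcd(1, 1) = 1 divides -1,
# so a loop-carried dependence is possible.
print(gcd_test(1, 0, 1, -1))  # True
```

A compiler uses a battery of such tests: when every test proves independence, iterations can safely run in parallel; when any test reports a possible dependence, the transformation is suppressed.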
The text provides a broad introduction to data dependence, to the
many transformation strategies it supports, and to its applications
to important optimization problems such as parallelization,
compiler memory hierarchy management, and instruction scheduling.
The authors demonstrate the importance and wide applicability of
dependence-based compiler optimizations and give the compiler
writer the basics needed to understand and implement them. They
also provide cookbook explanations for transforming applications by
hand, aimed at computational scientists and engineers who are driven
to obtain the best possible performance from their complex
applications.
The approaches presented are based on research conducted over the
past two decades, emphasizing the strategies implemented in
research prototypes at Rice University and in several associated
commercial systems. Randy Allen and Ken Kennedy have provided an
indispensable resource for researchers, practicing professionals,
and graduate students engaged in designing and optimizing compilers
for modern computer architectures.
* Offers a guide to the simple, practical algorithms and approaches
that are most effective in real-world, high-performance
microprocessor and parallel systems.
* Demonstrates each transformation in worked examples.
* Examines how two case study compilers implement the theories and
practices described in each chapter.
* Presents the most complete treatment of memory hierarchy issues
of any compiler text.
* Illustrates ordering relationships with dependence graphs
throughout the book.
* Applies the techniques to a variety of languages, including
Fortran 77, C, hardware definition languages, Fortran 90, and High
Performance Fortran.
* Provides extensive references to the most sophisticated
algorithms known in research.
The AVR RISC Microcontroller Handbook is a comprehensive guide to
designing with Atmel's AVR controller family, which offers high speed
and low power consumption at a lower cost. The
main text is divided into three sections: hardware, which covers
all internal peripherals; software, which covers programming and
the instruction set; and tools, which explains using Atmel's
Assembler and Simulator (available on the Web) as well as IAR's C
compiler.
* Practical guide for advanced hobbyists or design professionals
* Development tools and code available on the Web
This book introduces basic computing skills designed for industry
professionals without a strong computer science background. Written
in an easily accessible manner, and accompanied by a user-friendly
website, it serves as a self-study guide to survey data science and
data engineering for those who aspire to start a computing career,
or expand on their current roles, in areas such as applied
statistics, big data, machine learning, data mining, and
informatics. The authors draw from their combined experience
working at software and social network companies, on big data
products at several major online retailers, as well as their
experience building big data systems for an AI startup. Spanning
from the basic inner workings of a computer to advanced data
manipulation techniques, this book opens doors for readers to
quickly explore and enhance their computing knowledge. Computing
with Data comprises a wide range of computational topics essential
for data scientists, analysts, and engineers, providing them with
the necessary tools to be successful in any role that involves
computing with data. The introduction is self-contained, and
chapters progress from basic hardware concepts to operating
systems, programming languages, graphing and processing data,
testing and programming tools, big data frameworks, and cloud
computing. The book is fashioned with several audiences in mind.
Readers without a strong educational background in CS--or those who
need a refresher--will find the chapters on hardware, operating
systems, and programming languages particularly useful. Readers
with a strong educational background in CS, but without significant
industry background, will find the following chapters especially
beneficial: learning R, testing, programming, visualizing and
processing data in Python and R, system design for big data, data
stores, and software craftsmanship.
The book focuses on analyses that extract the flow of data, which imperative programming hides through its use and reuse of memory. It details program transformations that preserve this data flow and introduces a family of analyses, called reaching definition analyses, to carry out this task. In addition, it shows that the correctness of program transformations is guaranteed by the preservation of data flow. Professionals and researchers in software engineering, computer engineering, program analysis, and compiler design will benefit from its presentation of data-flow methods and memory optimization in compilers.
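The reaching-definitions analyses the description mentions can be sketched as a standard iterative dataflow computation (a minimal illustration with our own names and a tiny invented CFG, not the book's implementation):

```python
def reaching_definitions(blocks, preds, gen, kill):
    """Iterative reaching-definitions analysis: propagate the set of
    definitions that may reach each basic block until a fixed point.
    IN[b] = union of OUT[p] over predecessors p;
    OUT[b] = gen[b] | (IN[b] - kill[b])."""
    IN = {b: set() for b in blocks}
    OUT = {b: set(gen[b]) for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            new_in = set().union(*(OUT[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (new_in - kill[b])
            if new_in != IN[b] or new_out != OUT[b]:
                IN[b], OUT[b] = new_in, new_out
                changed = True
    return IN, OUT

# Tiny CFG: B1 -> B2 -> B3, with a back edge B3 -> B2.
# d1 defines x in B1; d2 redefines x in B3, so each kills the other.
blocks = ["B1", "B2", "B3"]
preds = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}
gen = {"B1": {"d1"}, "B2": set(), "B3": {"d2"}}
kill = {"B1": {"d2"}, "B2": set(), "B3": {"d1"}}
IN, OUT = reaching_definitions(blocks, preds, gen, kill)
print(sorted(IN["B2"]))  # ['d1', 'd2']: both definitions of x may reach B2
```

Because both d1 and d2 reach B2, a transformation that reorders or removes either definition would change the data flow observed at B2; this is the kind of conservation condition the book's correctness argument rests on.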
It is well known that embedded systems have to be implemented
efficiently. This requires that processors optimized for certain
application domains are used in embedded systems. Such an
optimization requires a careful exploration of the design space,
including a detailed study of cost/performance tradeoffs. To avoid
time-consuming assembly language programming during design space
exploration, compilers are needed. To analyze the effect of various
software or hardware configurations on performance, these compilers
must be retargetable, able to generate code for many different
potential hardware configurations. This
book provides a comprehensive and up-to-date overview of the fast
developing area of retargetable compilers for embedded systems. It
describes a large set of important tools as well as applications of
retargetable compilers at different levels in the design flow.
Retargetable Compiler Technology for Embedded Systems is mostly
self-contained and requires only fundamental knowledge in software
and compiler design. It is intended to be a key reference for
researchers and designers working on software, compilers, and
processor optimization for embedded systems.
While compilers for high-level programming languages are large,
complex software systems, they have particular characteristics that
differentiate them from other software systems. Their functionality
is almost completely well-defined: ideally, there exist complete,
precise descriptions of the source and target languages. Additional
descriptions of the interfaces to the operating system, programming
system and programming environment, and to other compilers and
libraries are often available. The book deals with the optimization
phase of compilers. In this phase, programs are transformed in
order to increase their efficiency. To preserve the semantics of
the programs in these transformations, the compiler has to meet the
associated applicability conditions. These are checked using static
analysis of the programs. In this book the authors systematically
describe the analysis and transformation of imperative and
functional programs. In addition to a detailed description of
important efficiency-improving transformations, the book offers a
concise introduction to the necessary concepts and methods, namely
to operational semantics, lattices, and fixed-point algorithms.
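The fixed-point algorithms mentioned above can be illustrated by a minimal Kleene iteration over a finite lattice (a sketch with illustrative names and an invented equation, not code from the book):

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: starting from the lattice's bottom element,
    repeatedly apply the monotone function f until the value stops
    changing. On a lattice of finite height this terminates at the
    least fixed point of f."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Example: a liveness-style equation over the powerset lattice of
# definitions, X = USE | (X - DEF), with USE = {"a"} and DEF = {"b"}.
USE, DEF = {"a"}, {"b"}
solution = least_fixed_point(lambda x: USE | (x - DEF), frozenset())
print(sorted(solution))  # ['a']
```

The same iteration scheme underlies the dataflow analyses a compiler uses to check the applicability conditions of its transformations; monotonicity of the transfer function and finite lattice height guarantee termination.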
This book is intended for students of computer science. The book is
supported throughout with examples, exercises and program
fragments.
Effective compilers allow for a more efficient execution of
application programs for a given computer architecture, while
well-conceived architectural features can support more effective
compiler optimization techniques. A well thought-out strategy of
trade-offs between compilers and computer architectures is the key
to the successful designing of highly efficient and effective
computer systems. From embedded micro-controllers to large-scale
multiprocessor systems, it is important to understand the
interaction between compilers and computer architectures. The goal
of the Annual Workshop on Interaction between Compilers and
Computer Architectures (INTERACT) is to promote new ideas and to
present recent developments in compiler techniques and computer
architectures that enhance each other's capabilities and
performance. Interaction Between Compilers and Computer
Architectures is an updated and revised volume consisting of seven
papers originally presented at the Fifth Workshop on Interaction
between Compilers and Computer Architectures (INTERACT-5), which
was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico
in 2001. This volume explores recent developments and ideas for
better integration of the interaction between compilers and
computer architectures in designing modern processors and computer
systems. Interaction Between Compilers and Computer Architectures
is suitable as a secondary text for a graduate level course, and as
a reference for researchers and practitioners in industry.
This book deals with the analysis phase of translators for
programming languages. It describes lexical, syntactic and semantic
analysis, specification mechanisms for these tasks from the theory
of formal languages, and methods for automatic generation based on
the theory of automata. The authors present a conceptual
translation structure, i.e., a division into a set of modules,
which transform an input program, in a sequence of steps, into a
machine program, and then describe the interfaces between the
modules. Finally, the structures of real translators are outlined.
The book contains the necessary theory and advice for
implementation.
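As a minimal illustration of the lexical-analysis stage described above (a sketch with an invented token specification, not the book's example), a scanner can be generated from regular expressions:

```python
import re

# Token specification: each kind is a regular expression; compiling
# the alternation of named groups yields a simple scanner.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs for the input, dropping whitespace."""
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 42 + y")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

This mirrors, in miniature, the automatic-generation idea the book develops: the scanner is derived mechanically from a declarative specification rather than written by hand.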
This book is intended for students of computer science. The book
is supported throughout with examples, exercises and program
fragments.
"
|
|