The AVR RISC Microcontroller Handbook is a comprehensive guide to
designing with Atmel's new controller family, built to offer high
speed and low power consumption at low cost. The main text is
divided into three sections: hardware, which covers all internal
peripherals; software, which covers programming and the instruction
set; and tools, which explains how to use Atmel's Assembler and
Simulator (available on the Web) as well as IAR's C compiler.
* Practical guide for advanced hobbyists or design professionals
* Development tools and code available on the Web
The book focuses on analyses that extract the flow of data that imperative programming hides through its use and reuse of memory. It details program transformations that preserve this data flow and introduces a family of analyses, called reaching definition analyses, for this task. In addition, it shows that the correctness of program transformations is guaranteed by the conservation of data flow. Professionals and researchers in software engineering, computer engineering, program design and analysis, and compiler design will benefit from its presentation of data-flow methods and memory optimization in compilers.
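To make the blurb's central notion concrete, here is a minimal sketch, in Python, of a reaching-definitions analysis over a small control-flow graph; the graph, the gen/kill sets, and the definition names (d1, d2, d3) are invented for illustration and are not taken from the book.

    # Minimal reaching-definitions sketch (hypothetical example, not code
    # from the book). Each control-flow node has a gen set (definitions it
    # creates) and a kill set (other definitions of the same variables it
    # overwrites); OUT[n] = gen[n] | (IN[n] - kill[n]), iterated to a
    # fixed point.
    def reaching_definitions(preds, gen, kill):
        in_ = {n: set() for n in preds}
        out = {n: set() for n in preds}
        changed = True
        while changed:
            changed = False
            for n in preds:
                in_[n] = set().union(*(out[p] for p in preds[n]))
                new_out = gen[n] | (in_[n] - kill[n])
                if new_out != out[n]:
                    out[n] = new_out
                    changed = True
        return in_

    # Straight-line program: d1: x = 1;  d2: x = 2;  d3: y = x
    preds = {1: [], 2: [1], 3: [2]}
    gen = {1: {"d1"}, 2: {"d2"}, 3: {"d3"}}
    kill = {1: {"d2"}, 2: {"d1"}, 3: set()}
    print(reaching_definitions(preds, gen, kill)[3])  # {'d2'}: only d2 reaches d3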
Modern computer architectures designed with high-performance
microprocessors offer tremendous potential gains in performance
over previous designs. Yet their very complexity makes it
increasingly difficult to produce efficient code and to realize
their full potential. This landmark text from two leaders in the
field focuses on the pivotal role that compilers can play in
addressing this critical issue.
The basis for all the methods presented in this book is data
dependence, a fundamental compiler analysis tool for optimizing
programs on high-performance microprocessors and parallel
architectures. It enables compiler designers to write compilers
that automatically transform simple, sequential programs into forms
that can exploit special features of these modern
architectures.
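As a toy illustration of data dependence (an invented example, not one of the book's worked examples), the following Python fragment contrasts a loop whose iterations must run in order with one whose iterations a compiler could safely reorder or parallelize:

    # Hypothetical illustration of loop-carried dependence.
    n = 8
    a = list(range(n))
    b = list(range(n))

    # Loop-carried flow dependence: iteration i reads a[i - 1], which
    # iteration i - 1 wrote, so the iterations cannot be reordered.
    for i in range(1, n):
        a[i] = a[i - 1] + 1

    # No loop-carried dependence: each iteration touches only b[i],
    # so the iterations may execute in any order, or in parallel.
    for i in range(n):
        b[i] = b[i] * 2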
The text provides a broad introduction to data dependence, to the
many transformation strategies it supports, and to its applications
to important optimization problems such as parallelization,
compiler memory hierarchy management, and instruction scheduling.
The authors demonstrate the importance and wide applicability of
dependence-based compiler optimizations and give the compiler
writer the basics needed to understand and implement them. They
also offer cookbook explanations for transforming applications by
hand, aimed at computational scientists and engineers who are
driven to obtain the best possible performance from their complex
applications.
The approaches presented are based on research conducted over the
past two decades, emphasizing the strategies implemented in
research prototypes at Rice University and in several associated
commercial systems. Randy Allen and Ken Kennedy have provided an
indispensable resource for researchers, practicing professionals,
and graduate students engaged in designing and optimizing compilers
for modern computer architectures.
* Offers a guide to the simple, practical algorithms and approaches
that are most effective in real-world, high-performance
microprocessor and parallel systems.
* Demonstrates each transformation in worked examples.
* Examines how two case study compilers implement the theories and
practices described in each chapter.
* Presents the most complete treatment of memory hierarchy issues
of any compiler text.
* Illustrates ordering relationships with dependence graphs
throughout the book.
* Applies the techniques to a variety of languages, including
Fortran 77, C, hardware definition languages, Fortran 90, and High
Performance Fortran.
* Provides extensive references to the most sophisticated
algorithms known in research.
It is well known that embedded systems have to be implemented
efficiently. This requires the use of processors optimized for
specific application domains. Such optimization requires a careful
exploration of the design space, including a detailed study of
cost/performance tradeoffs. To avoid time-consuming assembly
language programming during design space exploration, compilers are
needed; and to analyze the effect of various software or hardware
configurations on performance, those compilers must be
retargetable, capable of generating code for numerous different
potential hardware configurations. This book provides a
comprehensive and up-to-date overview of the fast-developing area
of retargetable compilers for embedded systems. It describes a
large set of important tools as well as applications of
retargetable compilers at different levels in the design flow.
Retargetable Compiler Technology for Embedded Systems is mostly
self-contained and requires only fundamental knowledge in software
and compiler design. It is intended to be a key reference for
researchers and designers working on software, compilers, and
processor optimization for embedded systems.
While compilers for high-level programming languages are large,
complex software systems, they have particular characteristics that
differentiate them from other software systems. Their functionality
is almost completely well-defined: ideally, there exist complete
and precise descriptions of the source and target languages. Additional
descriptions of the interfaces to the operating system, programming
system and programming environment, and to other compilers and
libraries are often available. The book deals with the optimization
phase of compilers. In this phase, programs are transformed in
order to increase their efficiency. To preserve the semantics of
the programs in these transformations, the compiler has to meet the
associated applicability conditions. These are checked using static
analysis of the programs. In this book the authors systematically
describe the analysis and transformation of imperative and
functional programs. In addition to a detailed description of
important efficiency-improving transformations, the book offers a
concise introduction to the necessary concepts and methods, namely
to operational semantics, lattices, and fixed-point algorithms.
The book is intended for students of computer science and is
supported throughout with examples, exercises, and program
fragments.
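For readers new to the fixed-point machinery the blurb mentions, here is a minimal, hypothetical sketch (not taken from the book) of Kleene iteration: start from the least element of a finite lattice and apply a monotone function until the result stabilizes.

    # Kleene iteration on the powerset lattice of a finite set, ordered
    # by inclusion (illustrative example only). On a finite lattice a
    # monotone f must eventually stabilize.
    def least_fixed_point(f, bottom):
        x = bottom
        while True:
            y = f(x)
            if y == x:
                return x
            x = y

    # f adds the successors of every node already in the set, so its
    # least fixed point above {0} is the set of nodes reachable from 0.
    succ = {0: {1}, 1: {2}, 2: {2}, 3: {0}}
    f = lambda s: s | {m for n in s for m in succ[n]}
    print(least_fixed_point(f, {0}))  # {0, 1, 2}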
Effective compilers allow for a more efficient execution of
application programs for a given computer architecture, while
well-conceived architectural features can support more effective
compiler optimization techniques. A well-thought-out strategy of
trade-offs between compilers and computer architectures is the key
to the successful design of highly efficient and effective
computer systems. From embedded micro-controllers to large-scale
multiprocessor systems, it is important to understand the
interaction between compilers and computer architectures. The goal
of the Annual Workshop on Interaction between Compilers and
Computer Architectures (INTERACT) is to promote new ideas and to
present recent developments in compiler techniques and computer
architectures that enhance each other's capabilities and
performance. Interaction Between Compilers and Computer
Architectures is an updated and revised volume consisting of seven
papers originally presented at the Fifth Workshop on Interaction
between Compilers and Computer Architectures (INTERACT-5), which
was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico
in 2001. This volume explores recent developments and ideas for
better integration of the interaction between compilers and
computer architectures in designing modern processors and computer
systems. Interaction Between Compilers and Computer Architectures
is suitable as a secondary text for a graduate level course, and as
a reference for researchers and practitioners in industry.
This monograph is concerned with the problem of getting computers
to transform formal language definitions into compilers. Its
purpose is to demonstrate how certain simple theoretical ideas can
be used to generate compilers and even compiler generators. As the
title suggests, it attempts a realistic assessment of the
relationship between the complexity of practical compilation and
the relative simplicity studied in theoretical work. The monograph contains an
overview of existing compiler generators. The CERES '83 compiler
generator, developed by Neil D. Jones and the author, is described
in detail. The CERES system is based on the idea of composing
language definitions and it serves as an example of a powerful
novel "bootstrapping" technique by which one can generate compiler
generators as well as compilers by considering a compiler generator
to be, in a sense which is made mathematically precise, a special
kind of compiler. The core of the CERES system is a two-page-long
machine-generated compiler generator. The approach uses ideas from
denotational semantics and many-sorted algebra and connects them
with novel ideas about how to treat programs and language
definitions as data. Considerable effort has been made to present
the necessary theory in a manner suitable for readers who have some
practical experience but not necessarily a theoretical background
in semantics.
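The claim that a compiler generator is itself a special kind of compiler can be sketched with type signatures. The following Python aliases are an informal illustration of that idea only, not the CERES formalization:

    # Informal sketch, not the CERES formalization.
    from typing import Callable

    Source, Target, LangDef = str, str, str
    Compiler = Callable[[Source], Target]              # source program -> target program
    CompilerGenerator = Callable[[LangDef], Compiler]  # language definition -> compiler

    # If language definitions are themselves programs in a "definition
    # language", a compiler generator is a compiler whose source language
    # is that definition language and whose outputs happen to be compilers.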
This book is the first comprehensive survey of the field of constraint databases, written by leading researchers. Constraint databases are a fairly new and active area of database research. The key idea is that constraints, such as linear or polynomial equations, are used to represent large, or even infinite, sets in a compact way. The ability to deal with infinite sets makes constraint databases particularly promising as a technology for integrating spatial and temporal data with standard relational databases. Constraint databases bring techniques from a variety of fields, such as logic and model theory, algebraic and computational geometry, and symbolic computation, to the design and analysis of data models and query languages.
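To show the key idea in miniature (a made-up example, not taken from the book): a constraint such as x^2 + y^2 <= 1 is a finite formula that represents an infinite set of points, and a query can be answered by combining formulas rather than enumerating tuples.

    # Hypothetical miniature of the constraint-database idea: infinite
    # relations represented by finite predicates.
    unit_disk = lambda x, y: x * x + y * y <= 1   # infinitely many points
    upper_half = lambda x, y: y >= 0              # likewise

    # "Query": the intersection of the two relations, still a finite formula.
    in_both = lambda x, y: unit_disk(x, y) and upper_half(x, y)

    print(in_both(0.5, 0.5))    # True
    print(in_both(0.5, -0.5))   # False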