This book introduces basic computing skills designed for industry
professionals without a strong computer science background. Written
in an easily accessible manner, and accompanied by a user-friendly
website, it serves as a self-study guide to survey data science and
data engineering for those who aspire to start a computing career,
or expand on their current roles, in areas such as applied
statistics, big data, machine learning, data mining, and
informatics. The authors draw on their combined experience working
at software and social network companies, building big data products
at several major online retailers, and building big data systems for
an AI startup. Spanning
from the basic inner workings of a computer to advanced data
manipulation techniques, this book opens doors for readers to
quickly explore and enhance their computing knowledge. Computing
with Data comprises a wide range of computational topics essential
for data scientists, analysts, and engineers, providing them with
the necessary tools to be successful in any role that involves
computing with data. The introduction is self-contained, and
chapters progress from basic hardware concepts to operating
systems, programming languages, graphing and processing data,
testing and programming tools, big data frameworks, and cloud
computing. The book is written with several audiences in mind.
Readers without a strong educational background in CS--or those who
need a refresher--will find the chapters on hardware, operating
systems, and programming languages particularly useful. Readers
with a strong educational background in CS, but without significant
industry background, will find the following chapters especially
beneficial: learning R, testing, programming, visualizing and
processing data in Python and R, system design for big data, data
stores, and software craftsmanship.
The book focuses on analyses that extract the flow of data, which imperative programming hides through its use and reuse of memory in computer systems and compilers. It details program transformations that conserve this data flow and introduces a family of analyses, called reaching definitions analyses, to carry out this task. In addition, it shows that the correctness of these program transformations is guaranteed by the conservation of data flow. Professionals and researchers in software engineering, computer engineering, program design and analysis, and compiler design will benefit from its presentation of data-flow methods and the memory optimization of compilers.
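As an illustration of the kind of analysis the blurb refers to, the following is a minimal sketch (not taken from the book) of a reaching definitions analysis over a toy control-flow graph; the node numbering, variable names, and CFG representation are invented for the example.

```python
# A minimal sketch of reaching definitions: for every node, compute which
# definitions (node, variable) may reach its entry, by iterating the
# data-flow equations to a fixed point.

# Hypothetical CFG: node id -> (variable defined at the node or None, successors)
cfg = {
    0: ("x", [1]),        # x = ...
    1: ("y", [2, 3]),     # y = ...
    2: ("x", [4]),        # x = ...   (kills the definition of x from node 0)
    3: (None, [4]),       # no definition on this path
    4: (None, []),        # join point
}

def reaching_definitions(cfg):
    preds = {n: [] for n in cfg}
    for n, (_, succs) in cfg.items():
        for s in succs:
            preds[s].append(n)

    in_sets = {n: set() for n in cfg}
    out_sets = {n: set() for n in cfg}
    changed = True
    while changed:                                   # iterate to a fixed point
        changed = False
        for n, (var, _) in cfg.items():
            # IN[n] = union of OUT[p] over all predecessors p of n
            in_sets[n] = set().union(*(out_sets[p] for p in preds[n]))
            # OUT[n] = gen[n] | (IN[n] - kill[n])
            gen = {(n, var)} if var else set()
            kill = {d for d in in_sets[n] if var and d[1] == var}
            new_out = gen | (in_sets[n] - kill)
            if new_out != out_sets[n]:
                out_sets[n] = new_out
                changed = True
    return in_sets

for node, defs in reaching_definitions(cfg).items():
    print(node, sorted(defs))
```

At the join point both definitions of x reach the entry, one along each path, which is exactly the flow of data that reuse of the memory cell for x obscures.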
It is well known that embedded systems have to be implemented
efficiently. This requires the use of processors optimized for
specific application domains. Such optimization in turn requires a
careful exploration of the design space, including a detailed study
of cost/performance tradeoffs. To avoid time-consuming assembly
language programming during design space exploration, compilers are
needed. To analyze the effect of various software or hardware
configurations on performance, retargetable compilers are needed
that can generate code for numerous potential hardware
configurations. This book provides a comprehensive and up-to-date
overview of the fast-developing area of retargetable compilers for
embedded systems. It describes a large set of important tools as
well as applications of retargetable compilers at different levels
in the design flow.
Retargetable Compiler Technology for Embedded Systems is mostly
self-contained and requires only fundamental knowledge in software
and compiler design. It is intended to be a key reference for
researchers and designers working on software, compilers, and
processor optimization for embedded systems.
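To make the notion of retargetability concrete, here is a minimal sketch (not taken from the book) of a code generator parameterized by a machine description, so that exploring a different hardware configuration only means supplying a different description; the instruction mnemonics, cycle counts, and expression representation are made up for illustration.

```python
# Hypothetical machine descriptions: operation -> (mnemonic, cycle cost)
dsp_target = {"add": ("ADD", 1), "mul": ("MAC", 1)}
risc_target = {"add": ("ADD", 1), "mul": ("MUL", 3)}

def generate(expr, target, temps=None):
    """Emit assembly-like text for a tiny expression tree and report its cost."""
    temps = temps if temps is not None else iter(range(1000))
    if isinstance(expr, str):                 # a leaf: an operand name
        return [], expr, 0
    op, lhs, rhs = expr
    code_l, reg_l, cost_l = generate(lhs, target, temps)
    code_r, reg_r, cost_r = generate(rhs, target, temps)
    mnemonic, cycles = target[op]
    dest = f"t{next(temps)}"                  # fresh temporary
    code = code_l + code_r + [f"{mnemonic} {dest}, {reg_l}, {reg_r}"]
    return code, dest, cost_l + cost_r + cycles

expr = ("add", ("mul", "a", "b"), "c")        # a * b + c
for name, target in [("dsp", dsp_target), ("risc", risc_target)]:
    code, _, cost = generate(expr, target)
    print(f"{name}: {cost} cycles: " + "; ".join(code))
```

Swapping the description changes both the emitted code and the cost estimate, which is the basic loop of design space exploration the book is about.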
While compilers for high-level programming languages are large,
complex software systems, they have particular characteristics that
differentiate them from other software systems. Their functionality
is almost completely well-defined: ideally, there exist complete,
precise descriptions of the source and target languages. Additional
descriptions of the interfaces to the operating system, programming
system and programming environment, and to other compilers and
libraries are often available. The book deals with the optimization
phase of compilers. In this phase, programs are transformed in
order to increase their efficiency. To preserve the semantics of
the programs under these transformations, the compiler has to meet the
associated applicability conditions. These are checked using static
analysis of the programs. In this book the authors systematically
describe the analysis and transformation of imperative and
functional programs. In addition to a detailed description of
important efficiency-improving transformations, the book offers a
concise introduction to the necessary concepts and methods, namely
to operational semantics, lattices, and fixed-point algorithms.
This book is intended for students of computer science. The book is
supported throughout with examples, exercises and program
fragments.
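As a small illustration of the interplay between analysis and transformation described above, the following sketch (not from the book) runs a backward live-variables analysis over a straight-line program and uses it as the applicability condition for removing dead assignments; the statement representation and variable names are invented for the example.

```python
# Each statement is a (target, used_variables) pair.
program = [
    ("a", {"x"}),        # a = f(x)
    ("b", {"a"}),        # b = g(a)
    ("a", {"y"}),        # a = h(y)   -- this value of a is never used
    ("r", {"b"}),        # r = k(b)
]

def live_after_each(stmts, live_at_exit=frozenset({"r"})):
    """Backward pass: the variables live immediately after each statement."""
    live = set(live_at_exit)
    after = [set()] * len(stmts)
    for i in range(len(stmts) - 1, -1, -1):
        after[i] = set(live)
        target, used = stmts[i]
        live = (live - {target}) | used       # kill the target, add the uses
    return after

def eliminate_dead(stmts):
    after = live_after_each(stmts)
    # Applicability condition: an assignment may be removed only if its
    # target is not live immediately after it (and, implicitly, the
    # right-hand side has no side effects).
    return [s for s, live in zip(stmts, after) if s[0] in live]

for stmt in eliminate_dead(program):
    print(stmt)
```

The third assignment is removed because the analysis shows its target is dead, while the others are kept; the semantics of the program is preserved precisely because the transformation is guarded by the analysis result.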
Effective compilers allow for a more efficient execution of
application programs for a given computer architecture, while
well-conceived architectural features can support more effective
compiler optimization techniques. A well-thought-out strategy of
trade-offs between compilers and computer architectures is the key
to designing highly efficient and effective
computer systems. From embedded microcontrollers to large-scale
multiprocessor systems, it is important to understand the
interaction between compilers and computer architectures. The goal
of the Annual Workshop on Interaction between Compilers and
Computer Architectures (INTERACT) is to promote new ideas and to
present recent developments in compiler techniques and computer
architectures that enhance each other's capabilities and
performance. Interaction Between Compilers and Computer
Architectures is an updated and revised volume consisting of seven
papers originally presented at the Fifth Workshop on Interaction
between Compilers and Computer Architectures (INTERACT-5), which
was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico
in 2001. This volume explores recent developments and ideas for
better integrating compilers and computer architectures in the
design of modern processors and computer systems. Interaction
Between Compilers and Computer Architectures is suitable as a
secondary text for a graduate-level course and as a reference for
researchers and practitioners in industry.
While compilers for high-level programming languages are large,
complex software systems, they have particular characteristics that
differentiate them from other software systems. Their functionality
is almost completely well-defined: ideally, there exist complete,
precise descriptions of the source and target languages. Additional
descriptions of the interfaces to the operating system, programming
system and programming environment, and to other compilers and
libraries are often available.
This book deals with the analysis phase of translators for
programming languages. It describes lexical, syntactic and semantic
analysis, specification mechanisms for these tasks from the theory
of formal languages, and methods for automatic generation based on
the theory of automata. The authors present a conceptual
translation structure, i.e., a division into a set of modules that
transform an input program, in a sequence of steps, into a machine
program, and they then describe the interfaces between the
modules. Finally, the structures of real translators are outlined.
The book contains the necessary theory and advice for
implementation.
This book is intended for students of computer science. The book
is supported throughout with examples, exercises and program
fragments.
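For readers who want a concrete picture of the first two analysis phases, here is a minimal sketch (not from the book) of a lexer and a recursive-descent parser for a tiny expression grammar; the token set and grammar are made up for illustration.

```python
# Grammar:  expr -> term ('+' term)*     term -> NUMBER | NAME
import re

TOKEN_RE = re.compile(r"\s*(?:(?P<NUMBER>\d+)|(?P<NAME>[A-Za-z_]\w*)|(?P<PLUS>\+))")

def lex(text):
    """Lexical analysis: turn a character string into a token stream."""
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"lexical error at position {pos}")
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens + [("EOF", "")]

def parse(tokens):
    """Syntactic analysis: build a tree from the token stream."""
    i = 0
    def term():
        nonlocal i
        kind, value = tokens[i]
        if kind not in ("NUMBER", "NAME"):
            raise SyntaxError(f"expected operand, got {kind}")
        i += 1
        return (kind, value)
    def expr():
        nonlocal i
        node = term()
        while tokens[i][0] == "PLUS":
            i += 1
            node = ("PLUS", node, term())
        return node
    tree = expr()
    if tokens[i][0] != "EOF":
        raise SyntaxError("trailing input")
    return tree

print(parse(lex("x + 42 + y")))
```

A semantic-analysis module would then walk the resulting tree, for example to check types, before later modules translate it further toward a machine program.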
"
This monograph is concerned with the problem of getting computers
to transform formal language definitions into compilers. Its
purpose is to demonstrate how certain simple theoretical ideas can
be used to generate compilers and even compiler generators. As the
title suggests, it attempts a realistic assessment of the
relationship between the complexity of practical compilation and the
relative simplicity studied in theoretical work. The monograph contains an
overview of existing compiler generators. The CERES '83 compiler
generator, developed by Neil D. Jones and the author, is described
in detail. The CERES system is based on the idea of composing
language definitions and it serves as an example of a powerful
novel "bootstrapping" technique by which one can generate compiler
generators as well as compilers by considering a compiler generator
to be, in a sense which is made mathematically precise, a special
kind of compiler. The core of the CERES system is a two-page-long,
machine-generated compiler generator. The approach uses ideas from
denotational semantics and many-sorted algebra and connects them
with novel ideas about how to treat programs and language
definitions as data. Considerable effort has been made to present
the necessary theory in a manner suitable for readers who have some
practical experience but not necessarily a theoretical background
in semantics.
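The bootstrapping idea can be read at the level of types. The following sketch (not from the monograph) only fixes some Python type aliases to show in what sense a compiler generator is itself a compiler; the concrete representations are placeholders.

```python
from typing import Callable

# Programs and language definitions are treated as data; for the sketch,
# represent them all as strings.
SourceProgram = str
TargetProgram = str
LanguageDefinition = str

# A compiler maps source programs to target programs.
Compiler = Callable[[SourceProgram], TargetProgram]

# A compiler generator maps language definitions to compilers.
CompilerGenerator = Callable[[LanguageDefinition], Compiler]

# Read as types, a CompilerGenerator is itself a Compiler whose source
# "programs" are language definitions and whose target "programs" are
# compilers. Feeding a compiler generator a definition of the
# language-definition language therefore yields a compiler generator
# again -- the bootstrapping step the monograph makes precise.
```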
This book is the first comprehensive survey of the field of constraint databases, written by leading researchers. Constraint databases are a fairly new and active area of database research. The key idea is that constraints, such as linear or polynomial equations, are used to represent large, or even infinite, sets in a compact way. The ability to deal with infinite sets makes constraint databases particularly promising as a technology for integrating spatial and temporal data with standard relational databases. Constraint databases bring techniques from a variety of fields, such as logic and model theory, algebraic and computational geometry, as well as symbolic computation, to the design and analysis of data models and query languages.
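To illustrate the key idea, here is a minimal sketch (not from the book) of a constraint relation: a finite list of linear inequalities denotes an infinite set of points, and a selection query is answered by conjoining constraints rather than enumerating tuples; the relation and the query are invented for the example.

```python
# Constraints are linear inequalities a*x + b*y <= c, stored as (a, b, c).
# The infinite relation Region(x, y): x >= 0, y >= 0, x + y <= 10
region = [(-1, 0, 0), (0, -1, 0), (1, 1, 10)]

def satisfies(constraints, x, y):
    """Membership test: does the point (x, y) belong to the denoted set?"""
    return all(a * x + b * y <= c for a, b, c in constraints)

def select(constraints, extra):
    """A selection query just conjoins constraints -- no enumeration needed."""
    return constraints + extra

# Query: the (still infinite) part of Region with y >= 5, i.e. -y <= -5.
answer = select(region, [(0, -1, -5)])

print(satisfies(region, 3, 4))    # True: inside the triangle
print(satisfies(answer, 3, 4))    # False: fails y >= 5
print(satisfies(answer, 2, 6))    # True
```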