Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed-memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machinery, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics: Detailed review of the mathematical foundations, including convex polyhedra and cones; Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability; Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation; A complete suite of techniques for generating SPMD code for a tiled loop nest; Up-to-date results on tile size and shape selection for reducing communication and improving parallelism; End-of-chapter references for further reading. Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
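To make the blurb's central technique concrete, the following stand-alone C++ sketch (not taken from the book) shows rectangular tiling applied to a matrix-multiply loop nest; the problem size N and tile size T are assumed values chosen only for illustration.

```cpp
#include <algorithm>
#include <vector>

constexpr int N = 512;  // problem size (assumed for illustration)
constexpr int T = 64;   // rectangular tile size (assumed for illustration)

// Tiled (blocked) matrix multiply: the outer loops enumerate tiles, and the
// inner loops stay inside one tile so its working set fits in cache, or in
// one node's local memory on a distributed-memory machine.
void tiled_matmul(const std::vector<double>& A,
                  const std::vector<double>& B,
                  std::vector<double>& C) {
    for (int ii = 0; ii < N; ii += T)
        for (int kk = 0; kk < N; kk += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < std::min(ii + T, N); ++i)
                    for (int k = kk; k < std::min(kk + T, N); ++k)
                        for (int j = jj; j < std::min(jj + T, N); ++j)
                            C[i * N + j] += A[i * N + k] * B[k * N + j];
}
```

The book's results concern how to choose tile shape and size (rectangles or parallelepipeds) so that the communication caused by dependences crossing tile boundaries is minimal; the fixed square tile above is only the simplest legal choice for this loop nest.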
Knowing how to create domain-specific languages (DSLs) can give you a huge productivity boost. Instead of writing code in a general-purpose programming language, you can first build a custom language tailored to make you efficient in a particular domain. The key is understanding the common patterns found across language implementations. "Language Design Patterns" identifies and condenses the most common design patterns, providing sample implementations of each. The pattern implementations use Java, but the patterns themselves are completely general. Some of the implementations use the well-known ANTLR parser generator, so readers will find this book an excellent source of ANTLR examples as well. But this book will benefit anyone interested in implementing languages, regardless of their tool of choice. Other language implementation books focus on compilers, which you rarely need in your daily life. Instead, "Language Design Patterns" shows you patterns you can use for all kinds of language applications. You'll learn to create configuration file readers, data readers, model-driven code generators, source-to-source translators, source analyzers, and interpreters. Each chapter groups related design patterns and, in each pattern, you'll get hands-on experience by building a complete sample implementation. By the time you finish the book, you'll know how to solve most common language implementation problems.
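As a flavour of the simplest kind of language application the blurb lists, here is a minimal configuration-file reader. It is not from the book (whose samples use Java and ANTLR); the key = value syntax and the sample input are assumptions made purely for illustration, and the reader is hand-rolled in C++ rather than generated.

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Hypothetical config format: one "key = value" pair per line, '#' starts a comment.
std::map<std::string, std::string> read_config(std::istream& in) {
    std::map<std::string, std::string> config;
    std::string line;
    while (std::getline(in, line)) {
        auto hash = line.find('#');                 // strip comments
        if (hash != std::string::npos) line.erase(hash);
        auto eq = line.find('=');
        if (eq == std::string::npos) continue;      // skip blank or malformed lines
        auto trim = [](std::string s) {
            auto b = s.find_first_not_of(" \t");
            auto e = s.find_last_not_of(" \t");
            return b == std::string::npos ? std::string{} : s.substr(b, e - b + 1);
        };
        config[trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
    }
    return config;
}

int main() {
    std::istringstream sample("host = example.org  # assumed sample input\nport = 8080\n");
    for (const auto& [key, value] : read_config(sample))
        std::cout << key << " -> " << value << '\n';
}
```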
This classroom-tested and clearly-written textbook presents a focused guide to the conceptual foundations of compilation, explaining the fundamental principles and algorithms used for defining the syntax of languages, and for implementing simple translators. This significantly updated and expanded third edition has been enhanced with additional coverage of regular expressions, visibly pushdown languages, bottom-up and top-down deterministic parsing algorithms, and new grammar models. Topics and features: describes the principles and methods used in designing syntax-directed applications such as parsing and regular expression matching; covers translations, semantic functions (attribute grammars), and static program analysis by data flow equations; introduces an efficient method for string matching and parsing suitable for ambiguous regular expressions (NEW); presents a focus on extended BNF grammars with their general parser and with LR(1) and LL(1) parsers (NEW); introduces a parallel parsing algorithm that exploits multiple processing threads to speed up syntax analysis of large files; discusses recent formal models of input-driven automata and languages (NEW); includes extensive use of theoretical models of automata, transducers and formal grammars, and describes all algorithms in pseudocode; contains numerous illustrative examples, and supplies a large set of exercises with solutions at an associated website. Advanced undergraduate and graduate students of computer science will find this reader-friendly textbook to be an invaluable guide to the essential concepts of syntax-directed compilation. The fundamental paradigms of language structures are elegantly explained in terms of the underlying theory, without requiring the use of software tools or knowledge of implementation, and through algorithms simple enough to be practiced by paper and pencil.
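The top-down deterministic parsing the book covers can be illustrated by a hand-written LL(1) recursive-descent parser for the classic expression grammar. The sketch below is an assumption-laden toy (single-digit numbers, evaluation on the fly), not the book's ELL(1)/ELR(1) machinery.

```cpp
#include <cctype>
#include <iostream>
#include <stdexcept>
#include <string>

// Grammar: E -> T { ('+'|'-') T }   T -> F { ('*'|'/') F }   F -> digit | '(' E ')'
// Each nonterminal becomes one member function; one character of lookahead
// (peek) is enough to choose a production, which is what makes it LL(1).
class Parser {
public:
    explicit Parser(std::string s) : src(std::move(s)) {}
    double parse() { double v = expr(); expect_end(); return v; }
private:
    std::string src;
    size_t pos = 0;
    char peek() const { return pos < src.size() ? src[pos] : '\0'; }
    char get() { return pos < src.size() ? src[pos++] : '\0'; }
    void expect_end() { if (pos != src.size()) throw std::runtime_error("trailing input"); }

    double expr() {                       // E -> T { ('+'|'-') T }
        double v = term();
        while (peek() == '+' || peek() == '-')
            v = (get() == '+') ? v + term() : v - term();
        return v;
    }
    double term() {                       // T -> F { ('*'|'/') F }
        double v = factor();
        while (peek() == '*' || peek() == '/')
            v = (get() == '*') ? v * factor() : v / factor();
        return v;
    }
    double factor() {                     // F -> digit | '(' E ')'
        if (std::isdigit(static_cast<unsigned char>(peek())))
            return get() - '0';
        if (peek() == '(') {
            get();
            double v = expr();
            if (get() != ')') throw std::runtime_error("expected ')'");
            return v;
        }
        throw std::runtime_error("unexpected character");
    }
};

int main() {
    std::cout << Parser("(1+2)*3").parse() << '\n';  // prints 9
}
```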
This fourth edition presents new examples on submodules, derived-type I/O, object-oriented programming, abstract interfaces and procedure pointers, C interoperability, sorting and searching, statistics, and converting to more modern versions of Fortran. Key features: highlights the core language features of modern Fortran, including data typing, array processing, control structures, functions, subroutines, modules and submodules, user-defined types, pointers, operator overloading, generic programming, parallel programming, abstract interfaces, and procedure pointers; pinpoints common problems that occur when programming; illustrates the use of several compilers. Introduction to Programming with Fortran has been written for the complete beginner with little or no programming background, as well as for existing Fortran programmers and those with programming experience in other languages.
This book introduces basic computing skills designed for industry professionals without a strong computer science background. Written in an easily accessible manner, and accompanied by a user-friendly website, it serves as a self-study guide to survey data science and data engineering for those who aspire to start a computing career, or expand on their current roles, in areas such as applied statistics, big data, machine learning, data mining, and informatics. The authors draw from their combined experience working at software and social network companies, on big data products at several major online retailers, as well as their experience building big data systems for an AI startup. Spanning from the basic inner workings of a computer to advanced data manipulation techniques, this book opens doors for readers to quickly explore and enhance their computing knowledge. Computing with Data comprises a wide range of computational topics essential for data scientists, analysts, and engineers, providing them with the necessary tools to be successful in any role that involves computing with data. The introduction is self-contained, and chapters progress from basic hardware concepts to operating systems, programming languages, graphing and processing data, testing and programming tools, big data frameworks, and cloud computing. The book is fashioned with several audiences in mind. Readers without a strong educational background in CS--or those who need a refresher--will find the chapters on hardware, operating systems, and programming languages particularly useful. Readers with a strong educational background in CS, but without significant industry background, will find the following chapters especially beneficial: learning R, testing, programming, visualizing and processing data in Python and R, system design for big data, data stores, and software craftsmanship.
Access the power of bare-metal systems programming with Scala Native, an ahead-of-time Scala compiler. Without the baggage of legacy frameworks and virtual machines, Scala Native lets you re-imagine how your programs interact with your operating system. Compile Scala code down to native machine instructions; seamlessly invoke operating system APIs for low-level networking and IO; control pointers, arrays, and other memory management techniques for extreme performance; and enjoy instant start-up times. Skip the JVM and improve your code performance by getting close to the metal. Developers generally build systems on top of the work of those who came before, accumulating layer upon layer of abstraction. Scala Native provides a rare opportunity to remove layers. Without the JVM, Scala Native uses POSIX and ANSI C APIs to build concise, expressive programs that run unusually close to bare metal. Scala Native compiles Scala code down to native machine instructions instead of JVM bytecode. It starts up fast, without the sluggish warm-up phase that's common for just-in-time compilers. Scala Native programs can seamlessly invoke operating system APIs for low-level networking and IO. And Scala Native lets you control pointers, arrays, and other memory layout types for extreme performance. Write practical, bare-metal code with Scala Native, step by step. Understand the foundations of systems programming, including pointers, arrays, strings, and memory management. Use the UNIX socket API to write network client and server programs without the sort of frameworks higher-level languages rely on. Put all the pieces together to design and implement a modern, asynchronous microservice-style HTTP framework from scratch. Take advantage of Scala Native's clean, modern syntax to write lean, high-performance code without the JVM. What You Need: A modern Windows, Mac OS, or Linux system capable of running Docker. All code examples in the book are designed to run on a portable Docker-based build environment that runs anywhere. If you don't have Docker yet, see the Appendix for instructions on how to get it.
Compilers: Principles, Techniques and Tools, known to professors, students, and developers worldwide as the Dragon Book, is available in a new edition. Every chapter has been completely revised to reflect developments in software engineering, programming languages, and computer architecture that have occurred since 1986, when the last edition was published. The authors, recognizing that few readers will ever go on to construct a compiler, retain their focus on the broader set of problems faced in software design and software development. New chapters include: Chapter 10, Instruction-Level Parallelism; Chapter 11, Optimizing for Parallelism and Locality; and Chapter 12, Interprocedural Analysis.
This book provides a high-level description, together with a mathematical and an experimental analysis, of Java and of the Java Virtual Machine (JVM), including a standard compiler of Java programs to JVM code and the security-critical bytecode verifier component of the JVM. The description is structured into language layers and machine components. It comes with a natural executable refinement (written in AsmGofer and provided on CD-ROM) which can be used for testing code. The method developed for this purpose is based on Abstract State Machines (ASMs) and can be applied to other virtual machines and to other programming languages as well. The book is written for advanced students and for professionals and practitioners in research and development who need a complete and transparent definition and an executable model of the language and of the virtual machine underlying its intended implementation. The CD-ROM contains the entire text of the book and numerous examples and exercises.
While compilers for high-level programming languages are large, complex software systems, they have particular characteristics that differentiate them from other software systems. Their functionality is almost completely well-defined: ideally, there exist complete and precise descriptions of the source and target languages. Additional descriptions of the interfaces to the operating system, the programming system and programming environment, and to other compilers and libraries are often available. This book deals with the analysis phase of translators for programming languages. It describes lexical, syntactic and semantic analysis, specification mechanisms for these tasks from the theory of formal languages, and methods for automatic generation based on the theory of automata. The authors present a conceptual translation structure, i.e., a division into a set of modules which transform an input program into a sequence of steps in a machine program, and they then describe the interfaces between the modules. Finally, the structures of real translators are outlined. The book contains the necessary theory and advice for implementation. This book is intended for students of computer science, and is supported throughout with examples, exercises and program fragments.
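As a pocket-sized illustration of the first of those analysis phases, here is a hand-written scanner; the token classes and the input string are assumptions made for the example, and a generated scanner would instead be driven by finite automata derived from regular expressions, as the book describes.

```cpp
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Lexical analysis toy: split the input into identifier, number, and
// single-character operator tokens, skipping whitespace.
enum class TokenKind { Identifier, Number, Operator };

struct Token {
    TokenKind kind;
    std::string text;
};

std::vector<Token> scan(const std::string& input) {
    std::vector<Token> tokens;
    size_t i = 0;
    while (i < input.size()) {
        unsigned char c = input[i];
        if (std::isspace(c)) { ++i; continue; }
        if (std::isalpha(c)) {                       // identifier: [A-Za-z][A-Za-z0-9]*
            size_t start = i;
            while (i < input.size() && std::isalnum(static_cast<unsigned char>(input[i]))) ++i;
            tokens.push_back({TokenKind::Identifier, input.substr(start, i - start)});
        } else if (std::isdigit(c)) {                // number: [0-9]+
            size_t start = i;
            while (i < input.size() && std::isdigit(static_cast<unsigned char>(input[i]))) ++i;
            tokens.push_back({TokenKind::Number, input.substr(start, i - start)});
        } else {                                     // anything else: one-character operator
            tokens.push_back({TokenKind::Operator, std::string(1, input[i++])});
        }
    }
    return tokens;
}

int main() {
    for (const auto& t : scan("x1 = y + 42;"))
        std::cout << t.text << '\n';
}
```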
Designed as a self-study guide, the book describes the real-world tradeoffs encountered in building a production-quality, platform-retargetable compiler. The authors examine the implementation of lcc, a production-quality, research-oriented retargetable compiler, designed at AT&T Bell Laboratories for the ANSI C programming language. The authors' innovative approach, a "literate program" that intermingles the text with the source code, uses a line-by-line explanation of the code to demonstrate how lcc is built.
Write code that writes code with Elixir macros. Macros make metaprogramming possible and define the language itself. In this book, you'll learn how to use macros to extend the language with fast, maintainable code and share functionality in ways you never thought possible. You'll discover how to extend Elixir with your own first-class features, optimize performance, and create domain-specific languages. Metaprogramming is one of Elixir's greatest features. Maybe you've played with the basics or written a few macros. Now you want to take it to the next level. This book is a guided series of metaprogramming tutorials that take you step by step to metaprogramming mastery. You'll extend Elixir with powerful features and write faster, more maintainable programs in ways unmatched by other languages. You'll start with the basics of Elixir's metaprogramming system and find out how macros interact with Elixir's abstract format. Then you'll extend Elixir with your own first-class features, write a testing framework, and discover how Elixir treats source code as building blocks, rather than rote lines of instructions. You'll continue your journey by using advanced code generation to create essential libraries in strikingly few lines of code. Finally, you'll create domain-specific languages and learn when and where to apply your skills effectively. When you're done, you will have mastered metaprogramming, gained insights into Elixir's internals, and have the confidence to leverage macros to their full potential in your own projects.
Whatever your programming language, whatever your platform, you probably tap into linker and loader functions all the time. But do you know how to use them to their greatest possible advantage? Only now, with the publication of Linkers & Loaders, is there an authoritative book devoted entirely to these deep-seated compile-time and run-time processes.
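To make the run-time half of that story concrete, the sketch below asks the dynamic loader to map a shared library and resolve a symbol by name through the POSIX dlopen/dlsym API. It is only an illustration, not an example from the book; the library name libm.so.6 assumes a Linux system (link with -ldl), and other platforms name the math library differently.

```cpp
#include <dlfcn.h>   // POSIX dynamic-loading API
#include <iostream>

int main() {
    // Ask the dynamic loader to map the math library at run time.
    void* handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << '\n';
        return 1;
    }
    // dlsym returns an untyped pointer; we must assert the symbol's real type.
    using cos_fn = double (*)(double);
    auto my_cos = reinterpret_cast<cos_fn>(dlsym(handle, "cos"));
    if (!my_cos) {
        std::cerr << "dlsym failed: " << dlerror() << '\n';
        dlclose(handle);
        return 1;
    }
    std::cout << my_cos(0.0) << '\n';  // prints 1
    dlclose(handle);
}
```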
"It is the first book that I have read that makes STL quickly usable by working programmers" Francis Glassborow, Chair of The Association of C & C++ Users (ACCU) STL for C++ programmers Leen Ammeraal The Standard Template Library (STL) provides many useful and generally applicable programming tools. This book combines reference material and a well-paced tutorial to get you past the basics quickly. Small, complete programs illustrate the key STL features such as containers, algorithms, iterators and function objects. A section is devoted to the new string data type. All STL algorithms are formally presented by their prototypes and then informally described to show how to use them in practice. Concepts are well illustrated with a large number of example programs all of which are available via ftp (for access details please refer to the preface of the book or Wiley’s website). Finally, special examples are given to explain the advanced notions of function objects and function adaptors, including predicates, binders and negators.
This book constitutes the refereed post-conference proceedings of the 4th International Workshop on Accelerator Programming Using Directives, WACCPD 2017, held in Denver, CO, USA, in November 2017. The 9 full papers presented have been carefully reviewed and selected from 14 submissions. The papers share knowledge and experiences to program emerging complex parallel computing systems. They are organized in the following three sections: applications; environments; and program evaluation.
This book constitutes the proceedings of the 23rd International Conference on Formal Methods for Industrial Critical Systems, FMICS 2018, held in Maynooth, Ireland, in September 2018. The 9 regular papers presented in this volume were carefully reviewed and selected from 17 submissions. The book also contains two invited talks in full-paper length. In addition, there are 8 invited contributions in honor of Susanne Graf (Director of Research at VERIMAG Grenoble, France) on the occasion of her 60th birthday. The aim of the FMICS conference series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. In particular, FMICS brings together scientists and engineers who are active in the area of formal methods and interested in exchanging their experiences in the industrial usage of these methods. The FMICS conference series also strives to promote research and development for the improvement of formal methods and tools for industrial applications.
This book constitutes the proceedings of the 21st International Conference on Compiler Construction, CC 2012, held as part of the joint European Conference on Theory and Practice of Software, ETAPS 2012, which took place in Tallinn, Estonia, in March/April 2012. The 13 papers presented in this book were carefully reviewed and selected from 51 submissions. They are organized in topical sections named: GPU optimisation, program analysis, objects and components, and dynamic analysis and runtime support.
Embedded core processors are becoming a vital part of today's system-on-a-chip in the growing areas of telecommunications, multimedia and consumer electronics. This is mainly in response to a need to track evolving standards with the flexibility of embedded software. Consequently, maintaining the high product performance and low product cost requires a careful design of the processor tuned to the application domain. With the increased presence of instruction-set processors, retargetable software compilation techniques are critical, not only for improving engineering productivity, but to allow designers to explore the architectural possibilities for the application domain. Retargetable Compilers for Embedded Core Processors, with a Foreword written by Ahmed Jerraya and Pierre Paulin, overviews the techniques of modern retargetable compilers and shows the application of practical techniques to embedded instruction-set processors. The methods are highlighted with examples from industry processors used in products for multimedia, telecommunications, and consumer electronics. An emphasis is given to the methodology and experience gained in applying two different retargetable compiler approaches in industrial settings. The book also discusses many pragmatic areas such as language support, source code abstraction levels, validation strategies, and source-level debugging. In addition, new compiler techniques are described which support address generation for DSP architecture trends. The contribution is an address calculation transformation based on an architectural model. Retargetable Compilers for Embedded Core Processors will be of interest to embedded system designers and programmers, the developers of electronic design automation (EDA) tools for embedded systems, and researchers in hardware/software co-design.
ETAPS 2008 was the 11th instance of the European Joint Conferences on Theory and Practice of Software. ETAPS is an annual federated conference that was established in 1998 by combining a number of existing and new conferences. This year it comprised five conferences (CC, ESOP, FASE, FOSSACS, TACAS), 22 satellite workshops (ACCAT, AVIS, Bytecode, CMCS, COCV, DCC, FESCA, FIT, FORMED, GaLoP, GT-VMT, LDTA, MBT, MOMPES, PDMC, QAPL, RV, SafeCert, SC, SLA++P, WGT, and WRLA), nine tutorials, and seven invited lectures (excluding those that were specific to the satellite events). The five main conferences received 571 submissions, 147 of which were accepted, giving an overall acceptance rate of less than 26%, with each conference below 27%. Congratulations therefore to all the authors who made it to the final programme! I hope that most of the other authors will still have found a way of participating in this exciting event, and that you will all continue submitting to ETAPS and contributing to make of it the best conference in the area. The events that comprise ETAPS address various aspects of the system development process, including specification, design, implementation, analysis and improvement. The languages, methodologies and tools which support these activities are all well within its scope. Different blends of theory and practice are represented, with an inclination towards theory with a practical motivation on the one hand and soundly based practice on the other. Many of the issues involved in software design apply to systems in general, including hardware systems, and the emphasis on software is not intended to be exclusive.
This book constitutes the refereed proceedings of the 16th International Conference on Compiler Construction, CC 2007, held in Braga, Portugal, in March 2007 as part of ETAPS 2007, the European Joint Conferences on Theory and Practice of Software. The 15 revised full papers are organized in topical sections on architecture, garbage collection and program analysis, register allocation, and program analysis.
This volume contains the proceedings of the 9th International Symposium on Functional and Logic Programming (FLOPS 2008), held in Ise, Japan, April 14-16, 2008 at the Ise City Plaza. FLOPS is a forum for research on all issues concerning functional programming and logic programming. In particular it aims to stimulate the cross-fertilization as well as integration of the two paradigms. The previous FLOPS meetings took place in Fuji-Susono (1995), Shonan (1996), Kyoto (1998), Tsukuba (1999), Tokyo (2001), Aizu (2002), Nara (2004), and again Fuji-Susono (2006). Since its 1999 edition, FLOPS proceedings have been published by Springer in its Lecture Notes in Computer Science series, as volumes 1722, 2024, 2441, 2998 and 3945, respectively. In response to the call for papers, 59 papers were submitted. Each paper was reviewed by at least three Program Committee members, with the help of expert external reviewers. The Program Committee meeting was conducted electronically, for a period of two weeks in December 2007. After careful and thorough discussion, the Program Committee selected 20 papers (33%) for presentation at the conference. In addition to the 20 contributed papers, the symposium included talks by three invited speakers: Peter Dybjer (Chalmers University of Technology), Naoki Kobayashi (Tohoku University) and Torsten Schaub (University of Potsdam).
This book constitutes the refereed proceedings of the Third International Conference on High Performance Embedded Architectures and Compilers, HiPEAC 2008, held in Göteborg, Sweden, January 27-29, 2008. The 25 revised full papers presented together with 1 invited keynote paper were carefully reviewed and selected from 77 submissions. The papers are organized in topical sections on Multithreaded and Multicore Processors, Reconfigurable - ASIP, Compiler Optimizations, Industrial Processors and Application Parallelization, Power-Aware Techniques, High-Performance Processors, Profiles: Collection and Analysis as well as Optimizing Memory Performance.
This book constitutes the thoroughly refereed post-proceedings of the 19th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2006, held in New Orleans, LA, USA in November 2006. The 24 revised full papers presented together with two keynote talks cover programming models, code generation, parallelism, compilation techniques, data structures, register allocation, and memory management.
The 15th Workshop on Languages and Compilers for Parallel Computing was held in July 2002 at the University of Maryland, College Park. It was jointly sponsored by the Department of Computer Science at the University of Maryland and the University of Maryland Institute for Advanced Computer Studies (UMIACS). LCPC 2002 brought together over 60 researchers from academia and research institutions from many countries. The program of 26 papers was selected from 32 submissions. Each paper was reviewed by at least three Program Committee members and sometimes by additional reviewers. Prior to the workshop, revised versions of accepted papers were informally published on the workshop's website and in a paper proceedings that was distributed at the meeting. This year, the workshop was organized into sessions of papers on related topics, and each session consisted of two to three 30-minute presentations. Based on feedback from the workshop, the papers were revised and submitted for inclusion in the formal proceedings published in this volume. Two papers were presented at the workshop but later withdrawn from the final proceedings by their authors. We were very lucky to have Bill Carlson from the Department of Defense give the LCPC 2002 keynote speech on "UPC: A C Language for Shared Memory Parallel Programming." Bill gave an excellent overview of the features and programming model of the UPC parallel programming language.
The CC program committee is pleased to present this volume with the proceedings of the 13th International Conference on Compiler Construction (CC 2004). CC continues to provide an exciting forum for researchers, educators, and practitioners to exchange ideas on the latest developments in compiler technology, programming language implementation, and language design. The conference emphasizes practical and experimental work and invites contributions on methods and tools for all aspects of compiler technology and all language paradigms. This volume serves as the permanent record of the 19 papers accepted for presentation at CC 2004, held in Barcelona, Spain, during April 1-2, 2004. The 19 papers in this volume were selected from 58 submissions. Each paper was assigned to three committee members for review. The program committee met for one day in December 2003 to discuss the papers and the reviews. By the end of the meeting, a consensus emerged to accept the 19 papers presented in this volume. However, there were many other quality submissions that could not be accommodated in the program; hopefully they will be published elsewhere. The continued success of the CC conference series would not be possible without the help of the CC community. I would like to gratefully acknowledge and thank all of the authors who submitted papers and the many external reviewers who wrote reviews.