This volume contains a selection of papers that focus on the state-of-the-art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Formal Specifications and Methods, complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of a real-time system precisely, because the term is easily misunderstood. Different communities of researchers variously use the term real-time to refer to very fast computing, immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant unit volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques, and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
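To make the timing analyzer's role concrete, here is a minimal sketch, not taken from the book, of how a structure-based WCET bound can be derived from per-block worst-case cycle costs and static loop bounds; the Fragment type and all of the numbers are invented for illustration. A WCET-aware compiler, in the spirit the blurb describes, optimizes code so that this bound, rather than average-case time, shrinks.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// A program fragment is either a straight-line block with a fixed
// worst-case cycle cost, or a loop with a static iteration bound.
struct Fragment {
    uint64_t cycles = 0;             // cost of this block itself
    uint64_t loopBound = 1;          // max iterations (1 = no loop)
    std::vector<Fragment> children;  // nested fragments

    // Structure-based bound: children's costs summed, scaled by the bound.
    uint64_t wcet() const {
        uint64_t body = cycles;
        for (const auto& c : children) body += c.wcet();
        return body * loopBound;
    }
};

int main() {
    // Hypothetical fragment: 30 cycles of setup code, then a loop of at
    // most 100 iterations whose body costs 12 cycles.
    Fragment setup{30};
    Fragment loop{0, 100, {Fragment{12}}};
    Fragment program{0, 1, {setup, loop}};
    std::cout << "WCET bound: " << program.wcet() << " cycles\n";  // 1230
    return 0;
}
```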
This book constitutes the refereed proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering, FASE 2012, held in Tallinn, Estonia, in March/April 2012, as part of ETAPS 2012, the European Joint Conferences on Theory and Practice of Software. The 33 full papers presented together with one full-length invited talk were carefully reviewed and selected from 134 submissions. The papers are organized in topical sections on software architecture and components, services, verification and monitoring, intermodelling and model transformations, modelling and adaptation, product lines and feature-oriented programming, development process, verification and synthesis, testing and maintenance, and slicing and refactoring.
This book constitutes the proceedings of the 15th International Conference on Foundations of Software Science and Computational Structures, FOSSACS 2012, held as part of ETAPS 2012, the European Joint Conferences on Theory and Practice of Software, which took place in Tallinn, Estonia, in March/April 2012. The 29 papers presented in this book, together with two full-length invited talks, were carefully reviewed and selected from 100 full paper submissions. The papers deal with theories and methods to support the analysis, synthesis, transformation and verification of programs and software systems.
Welcome to the 5th International Conference on Open Source Systems! It is quite an achievement to reach the five-year mark - that's the sign of a successful enterprise. This annual conference is now being recognized as the primary event for the open source research community, attracting not only high-quality papers, but also building a community around a technical program, a collection of workshops, and (starting this year) a Doctoral Consortium. Reaching this milestone reflects the efforts of many people, including the conference founders, as well as the organizers and participants in the previous conferences. My task has been easy, and has been greatly aided by the hard work of Kevin Crowston and Cornelia Boldyreff, the Program Committee, as well as the Organizing Team led by Bjoern Lundell. All of us are also grateful to our attendees, especially in the difficult economic climate of 2009. We hope the participants found the conference valuable both for its technical content and for its personal networking opportunities. To me, it is interesting to look back over the past five years, not just at this conference, but at the development and acceptance of open source software. Since 2004, the business and commercial side of open source has grown enormously. At that time, there were only a handful of open source businesses, led by Red Hat and its Linux distribution. Companies such as MySQL and JBoss were still quite small.
This book constitutes the refereed proceedings of the 21st European Symposium on Programming, ESOP 2012, held in Tallinn, Estonia, as part of ETAPS 2012, in March/April 2012. The 28 full papers, presented together with one full-length invited talk, were carefully reviewed and selected from 92 submissions. Papers were invited on all aspects of programming language research, including: programming paradigms and styles, methods and tools to write and specify programs and languages, methods and tools for reasoning about programs, methods and tools for implementation, and concurrency and distribution.
Behavioral Specifications of Businesses and Systems deals with the reading, writing and understanding of specifications. The papers presented in this book describe useful and sometimes elegant concepts, good practices (in programming and in specifications), and solid underlying theory that is of interest and importance to those who deal with increased complexity of business and systems. Most concepts have been successfully used in actual industrial projects, while others are from the forefront of research. Authors include practitioners, business thinkers, academics and applied mathematicians. These seemingly different papers address different aspects of a single problem - taming complexity. Behavioral Specifications of Businesses and Systems emphasizes simplicity and elegance in specifications without concentrating on particular methodologies, languages or tools. It shows how to handle complexity, and, specifically, how to succeed in understanding and specifying businesses and systems based upon precise and abstract concepts. It promotes reuse of such concepts, and of constructs based on them, without taking reuse for granted. Behavioral Specifications of Businesses and Systems is the second volume of papers based on a series of workshops held alongside ACM's annual conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA) and the European Conference on Object-Oriented Programming (ECOOP). The first volume, Object-Oriented Behavioral Specifications, edited by Haim Kilov and William Harvey, was published by Kluwer Academic Publishers in 1996.
Source Code Availability: All of the source code in this volume, and some that is not, is available from the author for $20. The author is also interested in learning of any errors that may be found, though care has been taken in the construction of the modules to minimize the possibility of their occurrence. Any other comments, suggestions, recommendations, questions, or experiences with the use of these modules would also be of interest. The reader may contact the author via the publisher at the following address: C. Lins: Modula-2 Source Code, c/o Springer-Verlag, 815 De La Vina Street, Santa Barbara, CA 93101, USA. As of February 1989, source code is available on two 3.5" Macintosh diskettes (800K HFS format) for Bob Campbell's Modula-2 compiler for MPW (formerly TML Modula-2) and the MacMETH Modula-2 compiler from ETH Zurich. The author intends to port this software to both the SemperSoft and MetCom Modula-2 compilers on the Macintosh. For the IBM PC (and compatibles) the software is available for TopSpeed Modula-2 (a product of JPI). The source code will soon be converted to work with Logitech's Modula-2 compiler as well as Stony Brook's Modula-2. Please mention your hardware platform as well as the volume(s) in which you are interested. Development Environment: The software for this volume was developed using MPW (Macintosh(TM) Programmer's Workshop) version 3.0 and Bob Campbell's Modula-2 compiler version 1.4d7.
This book is the first volume in a series entitled The Modula-2 Software Component Library. Charles Lins' collection of reusable standard software components could be the basis for every programmer's software projects in Modula-2. Components that are implementations of commonly used data structures are presented, along with an adequate description of their functionality and efficiency. Moreover, the books provide the background necessary to tailor these components to the specific needs of any Modula-2 environment. For every Modula-2 programmer, this series of books might prove as useful and indispensable as the original language reference by Niklaus Wirth.
Engineering tasks are supposed to achieve defined goals under certain project constraints. Example goals of software engineering tasks include achieving a certain functionality together with some level of reliability or performance. Example constraints of software engineering tasks include budget and time limitations or experience limitations of the developers at hand. Planning an engineering project requires the selection of techniques, methods and tools suited to achieving the stated goals under the given project constraints. This assumes sufficient knowledge regarding the process-product relationships (or effects) of candidate techniques, methods and tools. Planning of software projects suffers greatly from lack of knowledge regarding the process-product relationships of candidate techniques, methods and tools. Especially in the area of testing, a project planner is confronted with an abundance of testing techniques, but very little knowledge regarding their effects under varying project conditions. This book offers a novel approach to addressing this problem: first, based on a comprehensive initial characterization scheme (see chapter 7), an overview of existing testing techniques and their effects under varying conditions is provided to guide the selection of testing approaches; second, the optimisation of this knowledge base is suggested based on experience from experts, real projects and scientific experiments (chapters 8, 9, and 10). This book is of equal interest to practitioners, researchers and students. Practitioners interested in identifying ways to organize their company-specific knowledge about testing could start with the scheme provided in this book and optimise it further by applying strategies similar to those offered in chapters 8 and 9.
One must be able to say at all times - instead of points, straight lines, and planes - tables, chairs and beer mugs. (David Hilbert) One service mathematics has rendered the human race: it has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense." (Eric T. Bell) This book discusses reasoning with partial information. We investigate the proof theory, the model theory and some applications of reasoning with partial information. We have as a goal a general theory for combining, in a principled way, logic formulae expressing partial information, and a logical tool for choosing among them for application and implementation purposes. We also would like to have a model theory for reasoning with partial information that is a simple generalization of the usual Tarskian semantics for classical logic. We show the need to go beyond the view of logic as a geometry of static truths, and to see logic, both at the proof-theoretic and at the model-theoretic level, as a dynamics of processes. We see the dynamics of logic processes bear the same relation to classical logic as the one existing between classical mechanics and Euclidean geometry.
Topics: what this book is about, its intended audience, what the reader ought to know, how the book is organized, and acknowledgements. Specifications express information about a program that is not normally part of the program, and often cannot be expressed in a programming language. In the past, the word "specification" has sometimes been used to refer to somewhat vague documentation written in English. But today it indicates a precise statement, written in a machine-processable language, about the purpose and behavior of a program. Specifications are written in languages that are just as precise as programming languages, but have additional capabilities that increase their power of expression. The terminology "formal specification" is sometimes used to emphasize the modern meaning. For us, all specifications are formal. The use of specifications as an integral part of a program opens up a whole new area of programming - programming with specifications. This book describes how to use specifications in the process of building programs, debugging them, and interfacing them with other programs. It deals with a new trend in programming - the evolution of specification languages from the current generation of programming languages. And it describes new strategies and styles of programming that utilize specifications. The trend is just beginning, and the reader, having finished this book, will certainly see that there is much yet to be done and to be discovered about programming with specifications.
The second half of this century will remain as the era of the proliferation of electronic computers. They did exist before, but they were mechanical. During the next century they may undergo other mutations to become optical or molecular or even biological. Actually, all these aspects are only fancy dresses put on mathematical machines. This was always recognized to be true in the domain of software, where "machine" or "high-level" languages are more or less rigorous, but immaterial, variations of the universally accepted mathematical language aimed at specifying elementary operations, functions, algorithms and processes. But even a mathematical machine needs a physical support, and this is what hardware is all about. The invention of hardware description languages (HDLs) in the early 1960s was an attempt to stay longer at an abstract level in the design process and to push the stage of physical implementation up to the moment when no more technology-independent decisions can be taken. It was also an answer to the continuous, exponential growth of the complexity of the systems to be designed. This problem is common to hardware and software and may explain why the syntax of hardware description languages has followed, with a reasonable delay of ten years, the evolution of programming languages: at the end of the 1960s they were "Algol-like", a decade later "Pascal-like", and now they are "C- or Ada-like". They have also integrated the new concepts of advanced software specification languages.
PHP is rapidly becoming the language of choice for dynamic Web development, in particular for e-commerce and on-line database systems. It is open source software and easy to install, and can be used with a variety of operating systems, including Microsoft Windows and UNIX. This comprehensive manual covers the basic core of the language, with lots of practical examples of some of the more recent and useful features available in version 5.0. MySQL database creation and development is also covered, as it is the developer database most commonly used alongside PHP. It will be an invaluable book for professionals wanting to use PHP to develop their own dynamic web pages. Key Topics: Basic Language Constructs; Manipulating Arrays and Strings; Errors and Buffering; Graphic Manipulation; the PDF Library Extension; MySQL Database Management; Classes and Objects Concepts. Features and Benefits: explains how to use PHP to its full extent, covering the latest features and functions of PHP version 5.0, including the use of object-oriented programming; describes how to link a database to a web site, using the MySQL database management system; shows how to connect PHP to other systems and provides many examples, so that you can create powerful and dynamic web pages and applications; contains lots of illustrated, practical, real-world examples, including an e-commerce application created in PHP using many of the features described within the book. The scripts used in the examples are available for download from www.phpmysql-manual.com.
The representation of uncertainty is a central issue in Artificial Intelligence (AI) and is being addressed in many different ways. Each approach has its proponents, and each has had its detractors. However, there is now an increasing move towards the belief that an eclectic approach is required to represent and reason under the many facets of uncertainty. We believe that the time is ripe for a wide-ranging, yet accessible, survey of the main formalisms. In this book, we offer a broad perspective on uncertainty and approaches to managing uncertainty. Rather than provide a daunting mass of technical detail, we have focused on the foundations and intuitions behind the various schools. The aim has been to present in one volume an overview of the major issues and decisions to be made in representing uncertain knowledge. We identify the central role of managing uncertainty in AI and Expert Systems, and provide a comprehensive introduction to the different aspects of uncertainty. We then describe the rationales, advantages and limitations of the major approaches that have been taken, using illustrative examples. The book ends with a review of the lessons learned and current research directions in the field. The intended readership includes researchers and practitioners involved in the design and implementation of Decision Support Systems, Expert Systems, and other Knowledge-Based Systems, and in Cognitive Science.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines which are less cumbersome to program, more convenient programming models, advanced programming languages and especially more sophisticated programming tools, but also algorithms and applications.
Prolog has a declarative style. A predicate definition includes both the input and output parameters, and it allows a programmer to define a desired result without being concerned about the detailed instructions of how it is to be computed. Such a declarative language offers a solution to the software crisis, because it is shorter and more concise, more powerful and understandable than present-day languages. Logic highlights novel aspects of programming, namely using the same program to compute a relation and its inverse, and supporting deductive retrieval of information. This is a book about using Prolog. Its real point is the examples introduced from Chapter 3 onwards, and so a Prolog programmer does not need to read Chapters 1 and 2, which are oriented more to teachers and to students, respectively. The book is recommended for introductory and advanced university courses, where students may need to remember the basics of logic programming and Prolog before they start doing. Chapters 1 and 2 were also kept for the sake of the unity of the whole material. In Chapter 1 a teaching strategy is explained based on the key concepts of Prolog, which are novel aspects of programming. Prolog is presented as a computer programming language used for solving problems that involve objects and the relationships between objects. This chapter provides a pedagogical tour of prescriptions for the organization of Prolog programs, pointing out the main drawbacks novices may encounter.
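The claim that one logic program computes both a relation and its inverse is easiest to see on a tiny example. Prolog itself is not shown here; the C++ sketch below (facts and names are invented) merely mimics the idea by querying the same stored parent/child facts in both directions. In Prolog, both query functions would collapse into a single predicate, asked as parent(tom, X) or as parent(X, ann).

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A relation stored as facts, like Prolog's parent(tom, bob).
using Fact = std::pair<std::string, std::string>;
const std::vector<Fact> parent = {
    {"tom", "bob"}, {"tom", "liz"}, {"bob", "ann"}};

// Query the relation "forwards": who are x's children?
std::vector<std::string> childrenOf(const std::string& x) {
    std::vector<std::string> out;
    for (const auto& [p, c] : parent)
        if (p == x) out.push_back(c);
    return out;
}

// Query the same facts "backwards": who are y's parents?
std::vector<std::string> parentsOf(const std::string& y) {
    std::vector<std::string> out;
    for (const auto& [p, c] : parent)
        if (c == y) out.push_back(p);
    return out;
}

int main() {
    for (const auto& c : childrenOf("tom")) std::cout << c << '\n';  // bob liz
    for (const auto& p : parentsOf("ann")) std::cout << p << '\n';   // bob
}
```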
Augmented Transition Network Grammars are at present the most widely used method for analyzing natural languages. Despite the increasing popularity of this method, however, no extensive papers on ATN grammars have been presented which would be accessible to a larger number of persons engaged in the problem from both the theoretical and practical points of view. Augmented Transition Networks (ATNs) are derived from state automata. Like a finite state automaton, an ATN consists of a collection of labeled states and arcs, a distinguished start state and a set of distinguished final states. States are connected with each other by arcs, creating a directed graph or net. The label on an arc indicates a terminal symbol (word) or the type of words which must occur in an input stream to allow the transition to the next state. A sequence of words (or sentence) is said to be accepted by such a net if there exists a sequence of arcs (usually called a path), connecting the start state with a final state, which can be followed to match the sentence. The finite state automaton is then enriched by several facilities which increase its computational power. The most important of them permits some arcs to be labeled by nonterminal rather than terminal symbols. This means that the transition through such an arc is actually the recursive application of the net beginning at the indicated state.
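As a minimal illustration of the acceptance mechanism just described, the sketch below implements a recursive transition network, i.e., an ATN without the register-setting augmentations, for a hypothetical two-net grammar; all state numbers, net names and words are invented. Terminal arcs consume one word, while nonterminal arcs recursively apply another net, exactly as the text describes.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// An arc consumes a terminal word (sub == false) or recursively
// applies another net (sub == true), then moves to state `to`.
struct Arc { std::string label; bool sub; int to; };

struct Net {
    int start;
    std::set<int> finals;
    std::map<int, std::vector<Arc>> arcs;
};

std::map<std::string, Net> nets;  // all nets, indexed by name

void walk(const Net& n, int state, const std::vector<std::string>& w,
          size_t pos, std::set<size_t>& ends);

// Apply the named net at token position pos; collect every position
// at which it could finish (a backtracking search over paths).
void apply(const std::string& name, const std::vector<std::string>& w,
           size_t pos, std::set<size_t>& ends) {
    const Net& n = nets.at(name);
    walk(n, n.start, w, pos, ends);
}

void walk(const Net& n, int state, const std::vector<std::string>& w,
          size_t pos, std::set<size_t>& ends) {
    if (n.finals.count(state)) ends.insert(pos);
    auto it = n.arcs.find(state);
    if (it == n.arcs.end()) return;
    for (const Arc& a : it->second) {
        if (!a.sub) {  // terminal arc: match one word from the input
            if (pos < w.size() && w[pos] == a.label)
                walk(n, a.to, w, pos + 1, ends);
        } else {       // nonterminal arc: recursively apply the subnet
            std::set<size_t> mid;
            apply(a.label, w, pos, mid);
            for (size_t m : mid) walk(n, a.to, w, m, ends);
        }
    }
}

int main() {
    // S -> NP "sleeps";  NP -> "the" ("cat" | "dog")
    nets["NP"] = {0, {2}, {{0, {{"the", false, 1}}},
                           {1, {{"cat", false, 2}, {"dog", false, 2}}}}};
    nets["S"]  = {0, {2}, {{0, {{"NP", true, 1}}},
                           {1, {{"sleeps", false, 2}}}}};
    std::vector<std::string> sent = {"the", "cat", "sleeps"};
    std::set<size_t> ends;
    apply("S", sent, 0, ends);
    std::cout << (ends.count(sent.size()) ? "accepted" : "rejected") << '\n';
}
```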
This book introduces the tools you'll need to program with the packetC language. packetC speeds the development of applications that live within computer networks, the kind of programs that provide network functionality for connecting "clients", "servers" and "clouds". The simplest examples provide packet switching and routing, while more complex examples implement cyber security, broadband policies or cloud-based network infrastructure. Network applications, such as those processing digital voice and video, must be highly scalable, secure and maintainable. Such application requirements translate to requirements for a network programming language that leverages massively parallel systems and ensures a high level of security, while representing networking protocols and transactions in the simplest way possible. packetC meets these requirements with an intuitive approach to coarse-grained parallelism, with strong typing and controlled memory access for security, and with new data types and operators that express the classic operations of the network-oriented world in familiar programming terms. No other language has addressed the full breadth of requirements for tractable parallelism, secure processing and usable constructs. The packetC language is growing in adoption and has been used to develop solutions operating in some of the world's largest networks. What you'll learn: This book is the primary document specifying the language from a developer's point of view and acts as the formal language user's guide. It covers: how to program applications in packetC; the parallel programming model of packetC; deviations from C99 and the unique aspects of packetC; how to leverage existing C code and the applicability of the C standard libraries. Who this book is for: packetC Programming is written for a wide variety of potential programmers. Most importantly, it's for people who need to use packetC to program for the Internet backbone. Still, knowledge of the packetC language will help a much wider array of programmers who need to write effective code that will be optimized for the cloud and work effectively and efficiently through complex network structures. Finally, readers will learn about how and why packetC is needed, and will better understand the technologies, standards and issues surrounding the 'net.
If you really want to understand this level of programming, this book is a must-have.
Table of Contents:
PART 1: packetC Background - Chapter 1: Origins of packetC; Chapter 2: Introduction to packetC Language; Chapter 3: Style Guidelines for packetC Programs; Chapter 4: Construction of a packetC Program.
PART 2: Language Reference - Chapter 5: Variables, Identifiers, Basic Scalar Data Types, and Literals; Chapter 6: Data Initialization and Mathematical Expressions; Chapter 7: Functions; Chapter 8: packetC Data Type Fundamentals; Chapter 9: C-Style Data Types; Chapter 10: Basic Packet Interaction and Operations; Chapter 11: Selection Statements; Chapter 12: Loops and Flow Control; Chapter 13: Exception Handling; Chapter 14: Database Types and Operations; Chapter 15: Search Set Types and Operations; Chapter 16: Reference Type and Operations; Chapter 17: Lock and Unlock Operators; Chapter 18: Packet Information Block and System Packet Operations; Chapter 19: Descriptor Type and Operations.
PART 3: Developing Applications - Chapter 20: Control Plane and System Interaction; Chapter 21: packetC Pre-Processor; Chapter 22: Pragmas and Other Key Compiler Directives; Chapter 23: Developing Large Applications in packetC; Chapter 24: Construction of a packetC Executable; Chapter 25: packetC Standard Networking Descriptors; Chapter 26: Developing for Performance; Chapter 27: Standard Libraries.
PART 4: Industry Reprints.
Appendix A: Reference Tables; Appendix B: Open Systems Vendors for packetC; Appendix C: Glossary; Appendix D: CloudShield Products Supporting packetC.
"I prefer to view formal methods as tools. the use of which might be helpful." E. W. Dijkstra Algebraic specifications are about to be accepted by industry. Many projects in which algebraic specifications have been used as a design tool have been carried out. What prevents algebraic specifications from breaking through is the absence of introductory descriptions and tools supporting the construction of algebraic specifications. On the one hand. interest from industry will stimulate people to make introductions and tools. whereas on the other hand the existence of introductions and tools will stimulate industry to use algebraic specifications. This book should be seen as a contribution towards creating this virtuous circle. The book will be of interest to software designers and programmers. It can also be used as material for an introductory course on algebraic specifications and software engineering at undergraduate or graduate level. Nowadays. there is general agreement that in large software projects appropriate specifications are a must in order to obtain quality software. Informal specifications alone are certainly not appropriate because they are incomplete. inconsistent. inaccurate and ambiguous and they rapidly become bulky and therefore useless. The only way to overcome this problem is to use formal specifications. An important remark here is that a specification formalism (language) alone is not sufficient. What is also needed is a design method to write specifications in that formalism.
The need for a comprehensive survey-type exposition on formal languages and related mainstream areas of computer science has been evident for some years. In the early 1970s, when the book Formal Languages by the second-mentioned editor appeared, it was still quite feasible to write a comprehensive book with that title and include also topics of current research interest. This would not be possible anymore. A standard-sized book on formal languages would either have to stay on a fairly low level or else be specialized and restricted to some narrow sector of the field. The setup becomes drastically different in a collection of contributions, where the best authorities in the world join forces, each of them concentrating on their own areas of specialization. The present three-volume Handbook constitutes such a unique collection. In these three volumes we present the current state of the art in formal language theory. We were most satisfied with the enthusiastic response given to our request for contributions by specialists representing various subfields. The need for a Handbook of Formal Languages was expressed in many answers in different ways: as an easily accessible historical reference, a general source of information, an overall course-aid, and a compact collection of material for self-study. We are convinced that the final result will satisfy such various needs. The theory of formal languages constitutes the stem or backbone of the field of science now generally known as theoretical computer science.
This book is a detailed account of the Synthesizer Generator, a system for creating specialized editors that are customized for editing particular languages. The book is intended for those with an interest in software tools and in methods for building interactive systems. It is a must for people who are using the Synthesizer Generator to build editors because it provides extensive discussions of how to write editor specifications. The book should also be valuable for people who are building specialized editors "by hand," without using an editor-generating tool. The need to manage the development of large software systems is one of the most pressing problems faced by computer programmers. An important aspect of this problem is the design of new tools to aid interactive program development. The Synthesizer Generator permits one to create specialized editors that are tailored for editing a particular language. In program editors built with the Synthesizer Generator, knowledge about the language is used to continuously assess whether a program contains errors and to determine where such errors occur. The information is then displayed on the terminal screen to provide feedback to the programmer as the program is developed and modified.
In the two and a half years since the first edition of this book was published, the field of logic programming has grown rapidly. Consequently, it seemed advisable to try to expand the subject matter covered in the first edition. The new material in the second edition has a strong database flavour, which reflects my own research interests over the last three years. However, despite the fact that the second edition has about 70% more material than the first edition, many worthwhile topics are still missing. I can only plead that the field is now too big to expect one author to cover everything. In the second edition, I discuss a larger class of programs than that discussed in the first edition. Related to this, I have also taken the opportunity to try to improve some of the earlier terminology. Firstly, I introduce "program statements", which are formulas of the form A ← W, where the head A is an atom and the body W is an arbitrary formula. A "program" is a finite set of program statements. There are various restrictions of this class. "Normal" programs are ones where the body of each program statement is a conjunction of literals. (The terminology "general", used in the first edition, is obviously now inappropriate.)
This book is an anthology of the results of research and development in database query processing during the past decade. The relational model of data provided tremendous impetus for research into query processing. Since a relational query does not specify access paths to the stored data, the database management system (DBMS) must provide an intelligent query-processing subsystem which will evaluate a number of potentially efficient strategies for processing the query and select the one that optimizes a given performance measure. The degree of sophistication of this subsystem, often called the optimizer, critically affects the performance of the DBMS. Research into query processing thus started has taken off in several directions during the past decade. The emergence of research into distributed databases has enormously complicated the tasks of the optimizer. In a distributed environment, the database may be partitioned into horizontal or vertical fragments of relations. Replicas of the fragments may be stored in different sites of a network and even migrate to other sites. The measure of performance of a query in a distributed system must include the communication cost between sites. To minimize communication costs for queries involving multiple relations across multiple sites, optimizers may also have to consider semi-join techniques.
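The semi-join technique mentioned above is easy to demonstrate: one site ships only the join-column values of its relation, and the other site replies with just the tuples that can possibly join, so fewer tuples cross the network. The relations and column names in this sketch are invented for illustration.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Site A holds Orders(custId, item); site B holds Customers(custId).
// To evaluate the join, site B ships only its join-column values, and
// site A ships back just the orders that can match: a semi-join of
// Orders with Customers.
int main() {
    std::vector<std::pair<int, std::string>> orders = {
        {1, "book"}, {2, "lamp"}, {2, "pen"}, {7, "desk"}, {9, "mug"}};
    std::set<int> customerIds = {2, 7};  // join-column values from site B

    std::vector<std::pair<int, std::string>> toShip;  // semi-join result
    for (const auto& o : orders)
        if (customerIds.count(o.first)) toShip.push_back(o);

    // Only 3 of the 5 order tuples cross the network instead of all 5.
    std::cout << "shipping " << toShip.size() << " of " << orders.size()
              << " tuples\n";
    for (const auto& [id, item] : toShip)
        std::cout << id << " " << item << '\n';
}
```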
Once programmers have grasped the basics of object-oriented programming and C++, the most important tool that they have at their disposal is the Standard Template Library (STL). This provides them with a library of reusable objects and standard data structures. It has recently been accepted by the C++ Standards Committee. This textbook is an introduction to data structures and the STL. It provides a carefully integrated discussion of general data structures and their implementation and use in the STL. In so doing, the author is able to teach readers the important features of abstraction and how to develop applications using the STL.
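For a flavour of what the STL's reusable containers and algorithms look like in practice, here is a small self-contained example (not drawn from the textbook) that combines a container, iterator ranges and two standard algorithms:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // A container of type-safe, reusable elements...
    std::vector<std::string> words = {"banana", "apple", "cherry"};

    // ...manipulated through generic algorithms over iterator ranges.
    std::sort(words.begin(), words.end());

    // Binary search is valid now that the range is sorted.
    bool found = std::binary_search(words.begin(), words.end(),
                                    std::string("cherry"));

    for (const auto& w : words) std::cout << w << '\n';
    std::cout << (found ? "cherry found" : "cherry missing") << '\n';
}
```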