Preface. In nature, real-time systems have been evolving for some hundred million years. Animal nervous systems have the task of turning messages from the environment into control commands for the active organs; conditioned reflexes, for example, play an important role here. Perhaps the emergence of man can be dated roughly to the time when his gradually developing brain produced thoughts whose significance reached, in a forward-planning way, beyond the situation immediately at hand. Among other things, this eventually led to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first they served only to execute "rigid" programs, such as the first program-controlled computing device, the Z3, which the undersigned was able to demonstrate in 1941. It was followed, among other things, by a special-purpose device for wing measurement, which can be regarded as the first process-control computer: about forty dial gauges working as analog-to-digital converters were read by the automatic computer and processed as variables within a program. But even that still happened in a rigid order. True process control, however - today also called real-time systems - requires reacting to constantly changing situations.
This book constitutes the refereed proceedings of the 10th International Conference on Software Engineering and Formal Methods, SEFM 2012, held in Thessaloniki, Greece, in October 2012. The 19 revised research papers presented together with 3 short papers, 2 tool papers, and 2 invited talks were carefully reviewed and selected from 98 full submissions. The SEFM conference aspires to advance the state of the art in formal methods, to enhance their scalability and usability with regard to their application in the software industry, and to promote their integration with practical engineering methods.
This book constitutes the refereed proceedings of the 2nd International Conference on Model and Data Engineering, MEDI 2012, held in Poitiers, France, in October 2012. The 12 revised full papers presented together with 5 short papers were carefully reviewed and selected from 35 submissions. The papers cover the topics of model driven engineering, ontology engineering, formal modeling, security, and data mining.
In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
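The core quantity behind these techniques can be sketched briefly. The following Python is a hypothetical simplification (the names are mine, and it ignores the modulo wraparound of lifetimes that a real MRT-based cyclic schedule introduces): the register requirement of a schedule is the peak number of simultaneously live values, computable in linear time by a single sweep over lifetime endpoints.

```python
# Hedged sketch: register requirement as peak simultaneous liveness.
# 'lifetimes' holds (def_cycle, death_cycle) pairs for each value;
# modulo-scheduling wraparound is deliberately ignored here.
def max_live(lifetimes, schedule_len):
    delta = [0] * (schedule_len + 1)
    for start, end in lifetimes:
        delta[start] += 1        # value becomes live at its definition
        delta[end] -= 1          # value dies after its last use
    live = peak = 0
    for d in delta:              # prefix-sum sweep: O(n + schedule_len)
        live += d
        peak = max(peak, live)
    return peak

print(max_live([(0, 3), (1, 4), (2, 5)], 6))   # -> 3 registers
```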
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
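As a rough picture of the guarded-transition model on which Synchronized Transitions is built, here is a hypothetical Python sketch; it is not ST syntax, merely the underlying execution model as commonly described: a design is a set of guard/action pairs over shared state, and execution repeatedly fires any enabled transition atomically until the system is quiescent.

```python
# Hypothetical sketch of a guarded-transition design (not actual
# Synchronized Transitions syntax): each transition is a guard plus
# an action on shared state; any enabled transition may fire.
import random

def run(state, transitions, steps=100):
    for _ in range(steps):
        enabled = [t for t in transitions if t[0](state)]
        if not enabled:
            break                      # quiescent: nothing enabled
        _, action = random.choice(enabled)
        action(state)                  # fire one transition atomically
    return state

# Toy design: a one-place buffer moving items from 'src' to 'dst'.
state = {"src": 3, "buf": None, "dst": []}
transitions = [
    (lambda s: s["src"] > 0 and s["buf"] is None,
     lambda s: s.update(buf=s["src"], src=s["src"] - 1)),
    (lambda s: s["buf"] is not None,
     lambda s: (s["dst"].append(s["buf"]), s.update(buf=None))),
]
print(run(state, transitions))         # all items drained into 'dst'
```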
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
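The closing claim about I/O-bound applications can be made concrete with Amdahl's law; this is a standard back-of-the-envelope argument, not an excerpt from the book: if a workload spends a fraction of its time on disk and only the CPU portion speeds up, the overall speedup is capped by the inverse of that fraction.

```python
# Amdahl's law: only the CPU fraction of the run benefits from a
# faster processor; the I/O fraction is untouched.
def overall_speedup(io_fraction, cpu_speedup):
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

# A job that is 40% disk time gains under 2.5x overall even from a
# 100x faster CPU -- the disk bottleneck dominates.
print(overall_speedup(0.4, 100))   # ~2.46
```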
This textbook provides an in-depth course on data structures in the context of object-oriented development. Its main themes are abstraction, implementation, encapsulation, and measurement: that is, the software process begins with abstraction of data types, which then lead to alternative representations and encapsulation, and finally to resource measurement. A clear object-oriented approach, making use of Booch components, provides readers with a useful library of data structure components and experience in software reuse. Students using this book are expected to have a reasonable understanding of basic logical structures such as stacks and queues. Throughout, Ada 95 is used, and the author takes full advantage of Ada's encapsulation features and its ability to present specifications without implementation details. The Ada code is supported by two suites available over the World Wide Web.
This book constitutes the refereed proceedings of the 6th International Workshop on Reachability Problems, RP 2012, held in Bordeaux, France, in September 2012. The 8 revised full papers presented together with 4 invited talks were carefully reviewed and selected from 15 submissions. The papers present current research and original contributions related to reachability problems in different computational models and systems such as algebraic structures, computational models, hybrid systems, logic and verification. Reachability is a fundamental problem that appears in several different contexts: finite- and infinite-state concurrent systems, computational models like cellular automata and Petri nets, decision procedures for classical, modal and temporal logic, program analysis, discrete and continuous systems, time critical systems, and open systems modeled as games.
The theory of constructive (recursive) models grew out of work by Froehlich, Shepherdson, Mal'tsev, Kuznetsov, Rabin, and Vaught in the 1950s. Within the framework of this theory, algorithmic properties of abstract models are investigated by constructing representations on the set of natural numbers and studying relations between algorithmic and structural properties of these models. This book is a very readable exposition of the modern theory of constructive models and describes methods and approaches developed by representatives of the Siberian school of algebra and logic and by other researchers (in particular, Nerode and his colleagues). The main themes are the existence of recursive models and applications to fields, algebras, and ordered sets (Ershov); the existence of decidable prime models (Goncharov, Harrington); the existence of decidable saturated models (Morley); the existence of decidable homogeneous models (Goncharov and Peretyat'kin); properties of Ehrenfeucht theories (Millar, Ash, and Reed); the theory of algorithmic dimension and conditions of autostability (Goncharov, Ash, Shore, Khusainov, Ventsov, and others); and the theory of computable classes of models with various properties. Future perspectives of the theory of constructive models are also discussed. Most of the results in the book appear in monograph form for the first time. The theory of constructive models serves as a basis for recursive mathematics. It is also useful in computer science, in particular in the study of programming languages, higher-level specification languages, abstract data types, and problems of synthesis and verification of programs. The book will therefore be useful not only for specialists in mathematical logic and the theory of algorithms but also for scientists interested in the mathematical foundations of computer science. The authors are eminent specialists in mathematical logic. They have established fundamental results on elementary theories, model theory, the theory of algorithms, field theory, group theory, applied logic, computable numberings, the theory of constructive models, and theoretical computer science.
This book is a revised edition of the monograph which appeared under the same title in the series Research Notes in Theoretical Computer Science, Pitman, in 1986. In addition to a general effort to improve typography, English, and presentation, the main novelty of this second edition is the integration of some new material. Part of it is mine (mostly jointly with coauthors). Here is a brief guide to these additions. I have augmented the account of categorical combinatory logic with a description of the confluence properties of rewriting systems of categorical combinators (Hardin, Yokouchi), and of the newly developed calculi of explicit substitutions (Abadi, Cardelli, Curien, Hardin, Levy, and Rios), which are similar in spirit to categorical combinatory logic but are closer to the syntax of the λ-calculus (Section 1.2). The study of the full abstraction problem for PCF and extensions of it has been enriched with a new full abstraction result: the model of sequential algorithms is fully abstract with respect to an extension of PCF with a control operator (Cartwright, Felleisen, Curien). An order-extensional model of error-sensitive sequential algorithms is also fully abstract for a corresponding extension of PCF with a control operator and errors (Sections 2.6 and 4.1). I suggest that sequential algorithms lend themselves to a decomposition of the function spaces that leads to models of linear logic (Lamarche, Curien), and that connects sequentiality with games (Joyal, Blass, Abramsky) (Sections 2.1 and 2.6).
Constraint and Integer Programming presents some of the basic ideas of constraint programming and mathematical programming, explores approaches to integration, brings us up to date on heuristic methods, and attempts to discern future directions in this fast-moving field.
A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness, and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.
It is recognized that formal design and verification methods are an important requirement for the attainment of high quality system designs. The field has evolved enormously during the last few years, with the result that formal design and verification methods are nowadays supported by several tools, both commercial and academic. If different tools and users are to generate and read the same language, then it is necessary that the same semantics be assigned by them to all constructs and elements of the language. The current IEEE standard VHDL language reference manual (LRM) tries to define VHDL as well as possible in a descriptive way, explaining the semantics in English. But rigor and clarity are very hard to maintain in a semantics defined in this way, and that has already given rise to many misconceptions and contradictory interpretations. Formal Semantics for VHDL is the first book that puts forward a cohesive set of semantics for the VHDL language. The chapters describe several semantics, each based on a different underlying formalism: two of them use Petri nets as target language, and two of them higher order logic. Two use functional concepts, and finally another uses the concept of evolving algebras. Formal Semantics for VHDL is essential reading for researchers in formal methods and can be used as a text for an advanced course on the subject.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
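The flavor of recursive ascent can be conveyed with a toy sketch. The following Python is my own illustration under stated assumptions (the book's functional treatment is far more general): each LR state becomes a function, the host call stack serves as the parse stack, and a reduction unwinds the recursion by returning how many frames remain to pop.

```python
ACCEPT = -1

def parse(tokens):
    """Recursive-ascent LR parser for the left-recursive grammar
         E -> E '+' 'n' | 'n'
    Semantic value: the number of 'n' operands."""
    toks = list(tokens) + ["$"]
    i = 0

    def shift(expected):
        nonlocal i
        if toks[i] != expected:
            raise SyntaxError(f"expected {expected!r}, got {toks[i]!r}")
        i += 1

    def state0():                    # E' -> .E   E -> .E '+' 'n' | .'n'
        shift("n")
        val, pops = state2()         # state for the shifted 'n'
        while True:                  # every E-reduction lands back here,
            val, pops = state1(val)  # so redo the goto on E
            if pops == ACCEPT:
                return val

    def state1(e):                   # E' -> E.   E -> E.'+' 'n'
        if toks[i] == "$":
            return e, ACCEPT
        shift("+")
        val, pops = state3(e)
        return val, pops - 1         # unwind one more frame

    def state3(e):                   # E -> E '+'.'n'
        shift("n")
        val, pops = state4(e)
        return val, pops - 1

    def state2():                    # E -> 'n'.        (RHS length 1)
        return 1, 0

    def state4(e):                   # E -> E '+' 'n'.  (RHS length 3)
        return e + 1, 2              # pop this frame plus two below

    return state0()

print(parse(["n", "+", "n", "+", "n"]))   # -> 3
```

Note there is no explicit stack or table: the recursion itself records the parse state, which is precisely the point of the recursive-ascent style.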
Call-by-push-value is a programming language paradigm that,
surprisingly, breaks down the call-by-value and call-by-name
paradigms into simple primitives. This monograph, written for
graduate students and researchers, exposes the call-by-push-value
structure underlying a remarkable range of semantics, including
operational semantics, domains, possible worlds, continuations and
games.
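A rough intuition for those primitives can be given in a few lines of Python. This is my own hypothetical illustration, not Levy's calculus or notation: CBPV separates values from computations, with thunk and force mediating between them; call-by-value passes an already-evaluated value, while call-by-name passes a thunk that is forced at each use.

```python
class Thunk:
    """A suspended computation packaged as a value."""
    def __init__(self, compute):
        self.compute = compute
    def force(self):
        return self.compute()        # running a thunk is 'force'

def loud(n):
    print(f"evaluating {n}")
    return n

def twice_cbv(x):                    # call-by-value: x is a value
    return x + x

def twice_cbn(t):                    # call-by-name: t is a thunk
    return t.force() + t.force()     # re-run at every use

twice_cbv(loud(3))                   # prints "evaluating 3" once
twice_cbn(Thunk(lambda: loud(3)))    # prints it twice
```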
This book constitutes the proceedings of the 16th Brazilian Symposium on Programming Languages, SBLP 2012, held in Natal, Brazil, in September 2012. The 10 full and 2 short papers were carefully reviewed and selected from 27 submissions. The papers cover various aspects of programming languages and software engineering.
This book constitutes the thoroughly refereed proceedings of the 10th International Symposium on Automated Technology for Verification and Analysis, ATVA 2012, held at Thiruvananthapuram, Kerala, India, in October 2012. The 25 regular papers, 3 invited papers and 4 tool papers presented were carefully selected from numerous submissions. Conference papers are organized in 9 technical sessions, covering the topics of automata theory, logics and proofs, model checking, software verification, synthesis, verification and parallelism, probabilistic verification, constraint solving and applications, and probabilistic systems.
This book shows readers how to get the most out of C# using object orientation. The author takes a hands-on approach to learning C# and object orientation, using lots of worked examples. The text provides an ideal base from which to start programming. After introducing the C# language and object orientation, John Hunt goes on to explain how to construct a user interface for a simple editor, how to obtain information on files and directories, and how objects can be stored and restored using serialization.
- Presents C# and object orientation as a coherent whole, using one to strengthen the presentation of the other
- Includes lots of complete, worked examples to clarify readers' understanding
- The source code for the examples is available at: http://www.guide-to-csharp.net
- Hunt is a successful Springer author, and this book is written in the same style as his Java for Practitioners
This book constitutes the thoroughly refereed proceedings of the 19th International Symposium on Static Analysis, SAS 2012, held in Deauville, France, in September 2012. The 25 revised full papers presented together with 4 invited talks were selected from 62 submissions. The papers address all aspects of static analysis, including abstract domains, abstract interpretation, abstract testing, bug detection, data flow analysis, model checking, new applications, program transformation, program verification, security analysis, theoretical frameworks, and type checking.
This book constitutes the thoroughly refereed post-workshop proceedings of the 9th International Workshop on Rewriting Logic and its Applications, WRLA 2012, held as a satellite event of ETAPS 2012, in Tallinn, Estonia, in March 2012. The 8 revised full papers presented together with 4 invited papers were carefully reviewed and selected from 12 initial submissions and 5 invited lectures. The papers address a great diversity of topics in the fields of rewriting logic such as: foundations and models, languages, logical and semantic framework, model-based software engineering, real-time and probabilistic extensions, verification techniques, and distributed systems.
Formal Methods for Open Object-Based Distributed Systems IV presents the leading edge in the fields of object-oriented programming, open distributed systems, and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Papers in this volume focus on the following specific technologies: * components; * mobile code; * Java(R); * The Unified Modeling Language (UML); * refinement of specifications; * types and subtyping; * temporal and probabilistic systems. This volume comprises the proceedings of the Fourth International Workshop on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held in Stanford, California, USA, in September 2000.
The two-volume set LNCS 7609 and 7610 constitutes the thoroughly refereed proceedings of the 5th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, held in Heraklion, Crete, Greece, in October 2012. The two volumes contain papers presented in the topical sections on adaptable and evolving software for eternal systems, approaches for mastering change, runtime verification: the application perspective, model-based testing and model inference, learning techniques for software verification and validation, LearnLib tutorial: from finite automata to register interface programs, RERS grey-box challenge 2012, Linux driver verification, bioscientific data processing and modeling, process and data integration in the networked healthcare, timing constraints: theory meets practice, formal methods for the development and certification of X-by-wire control systems, quantitative modelling and analysis, software aspects of robotic systems, process-oriented geoinformation systems and applications, and handling heterogeneity in formal development of HW and SW systems.
Object relationships in modern software systems are becoming increasingly numerous and complex, and program errors due to violations of object relationships are difficult to detect. Programmers need new tools that allow them to explore objects in a large system more efficiently and to detect broken object relationships instantaneously. Such tools incorporate approaches used in such areas as data visualization, pattern matching and extraction, database querying, active databases, and rule-based programming. The query-based debugging approach developed by the author of this book is another powerful yet efficient tool to be added to the developer's tool chest. Advanced Debugging Methods presents practice and tools for debugging computer programs. This book proposes new powerful approaches that simplify the daunting task of debugging complex software systems. Although debugging has been addressed in numerous research papers, many of its methods have yet to be explored in a book-length format. This book helps to fill this gap by presenting an overview of existing debugging tools with motivating examples and case studies, as well as presenting new, state-of-the-art debugging methods. Advanced Debugging Methods will be of use to software developers looking for tools to be applied in cutting edge practice; system architects looking at the relationship between software design and debugging; tools and programming language researchers looking for new ideas in run-time tool implementation as well as detailed descriptions of advanced implementations; and university professors and graduate students who will use this book as supplementary reading for graduate courses in programming tools, language implementation, and advanced object-oriented systems. Advanced Debugging Methods is also a handy reference of currently existing debugging methodologies as well as a springboard for cutting-edge research to simplify the difficult task of debugging and to facilitate the development of more robust software systems.
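To make the query-based idea concrete, here is a hypothetical toy in Python, not the author's actual tool or query language: a debugging "query" scans live objects for instances that violate an expected relationship.

```python
import gc

class Parent:
    def __init__(self):
        self.children = []

class Child:
    def __init__(self, parent):
        self.parent = parent

def query_violations():
    """Find Child objects whose parent does not link back to them."""
    return [c for c in gc.get_objects()
            if isinstance(c, Child) and c not in c.parent.children]

p = Parent()
ok = Child(p)
p.children.append(ok)                # relationship maintained
bad = Child(p)                       # back-link never established
print(len(query_violations()))       # -> 1: the broken relationship
```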