This book constitutes the refereed proceedings of the 10th Asian Symposium on Programming Languages and Systems, APLAS 2012, held in Kyoto, Japan, in December 2012. The 24 revised full papers presented together with the abstracts of 3 invited talks were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on concurrency, security, static analysis, language design, dynamic analysis, complexity and semantics, and program logics and verification.
Domain theory is a rich interdisciplinary area at the intersection of logic, computer science, and mathematics. This volume contains selected papers presented at the International Symposium on Domain Theory which took place in Shanghai in October 1999. Topics of papers range from the encounters between topology and domain theory, sober spaces, Lawson topology, real number computability and continuous functionals to fuzzy modelling, logic programming, and pi-calculi. This book is a valuable reference for researchers and students interested in this rapidly developing area of theoretical computer science.
This book presents the thoroughly refereed and revised post-workshop proceedings of the 17th Monterey Workshop, held in Oxford, UK, in March 2012. The workshop explored the challenges associated with the Development, Operation and Management of Large-Scale Complex IT Systems. The 21 revised full papers presented were significantly extended and improved by the insights gained from the productive and lively discussions at the workshop, and the feedback from the post-workshop peer reviews.
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in the future than previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.
This book constitutes the thoroughly refereed post-conference proceedings of the 23rd International Symposium on Implementation and Application of Functional Languages, IFL 2011, held in Lawrence, Kansas, USA, in October 2011. The 11 revised full papers presented were carefully reviewed and selected from 33 submissions. The papers, by researchers and practitioners actively engaged in the implementation and use of functional and function-based programming languages, describe practical and theoretical work as well as applications and tools. They discuss new ideas and concepts, as well as work in progress and results.
Languages, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
FIELD has been a remarkably successful research project. The ideas first exhibited in the environment now form the basis for most of the current generation of programming environments, including Hewlett-Packard's Softbench, DEC's FUSE, Sun's Tooltalk, Lucid's Energize, and SGI's Codevision. FIELD pioneered the notion of broadcast messaging as a basis for tool integration. Moreover, many of the other tool concepts introduced in FIELD have made their way into these environments. Thus in discussing the FIELD environment, this book actually explains the inner workings of today's programming environments. The book will be valuable for those interested in the development of programming tools and environments, as well as serious users of programming environments. It will also be of interest to anyone undertaking a large software project, both by introducing the software tools needed to work on such a project and by demonstrating the concepts of message-based integration which can be applied to a variety of domains.
This book constitutes the refereed proceedings of the Fifth International Symposium on Search-Based Software Engineering, SSBSE 2013, held in St. Petersburg, Russia. The 14 revised full papers, 6 revised short papers, and 6 papers of the graduate track presented together with 2 keynotes, 2 challenge track papers and 1 tutorial paper were carefully reviewed and selected from 50 initial submissions. Search Based Software Engineering (SBSE) studies the application of meta-heuristic optimization techniques to various software engineering problems, ranging from requirements engineering to software testing and maintenance.
Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems with an impact on safety, security and performance is discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages.
Scientific Data Analysis using Jython Scripting and Java presents practical approaches for data analysis using Java scripting based on Jython, a Java implementation of the Python language. The chapters essentially cover all aspects of data analysis, from arrays and histograms to clustering analysis, curve fitting, metadata and neural networks. A comprehensive coverage of data visualisation tools implemented in Java is also included. Written by the primary developer of the jHepWork data-analysis framework, the book provides a reliable and complete reference source laying the foundation for data-analysis applications using Java scripting. More than 250 code snippets (of around 10-20 lines each) written in Jython and Java, plus several real-life examples help the reader develop a genuine feeling for data analysis techniques and their programming implementation. This is the first data-analysis and data-mining book which is completely based on the Jython language, and opens doors to scripting using a fully multi-platform and multi-threaded approach. Graduate students and researchers will benefit from the information presented in this book.
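As a rough idea of the kind of 10-20 line snippet the book describes, here is a minimal, hypothetical example in plain Python (which Jython runs); it uses only the standard library and makes no assumptions about the jHepWork API itself:

```python
# Simulate measurements, compute summary statistics, and print a text histogram.
# Standard-library only; no jHepWork classes are assumed.
import random
import math

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # 1000 Gaussian samples

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
print("mean = %.3f, std = %.3f" % (mean, math.sqrt(variance)))

# Fill a 10-bin histogram over [-4, 4) and draw each bin as a row of asterisks.
nbins, lo, hi = 10, -4.0, 4.0
width = (hi - lo) / nbins
counts = [0] * nbins
for x in data:
    b = int((x - lo) / width)
    if 0 <= b < nbins:
        counts[b] += 1
for i, c in enumerate(counts):
    print("%6.2f .. %6.2f | %s" % (lo + i * width, lo + (i + 1) * width, "*" * (c // 5)))
```

The book's own snippets typically rely on the framework's histogram and plotting classes; the stdlib-only sketch above is meant only to convey the scripting flavor.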
... incremental improvements during the same period. What is clearly needed in verification techniques and technology is the equivalent of a synthesis productivity breakthrough. In the second edition of Writing Testbenches, Bergeron raises the verification level of abstraction by introducing coverage-driven, constrained-random, transaction-level, self-checking testbenches, all made possible through the introduction of hardware verification languages (HVLs), such as e from Verisity and OpenVera from Synopsys. The state-of-the-art methodologies described in Writing Testbenches will contribute greatly to the much-needed equivalent of a synthesis breakthrough in verification productivity. I not only highly recommend this book, but I also think it should be required reading by anyone involved in the design and verification of today's ASICs, SoCs and systems. Harry Foster, Chief Architect, Verplex Systems, Inc. If you survey hardware design groups, you will learn that between 60% and 80% of their effort is now dedicated to verification.
Formal Languages and Computation: Models and Their Applications gives a clear, comprehensive introduction to formal language theory and its applications in computer science. It covers all rudimentary topics concerning formal languages and their models, especially grammars and automata, and sketches the basic ideas underlying the theory of computation, including computability, decidability, and computational complexity. Emphasizing the relationship between theory and application, the book describes many real-world applications, including computer science engineering techniques for language processing and their implementation.
In short, this book represents a theoretically oriented treatment of formal languages and their models with a focus on their applications. It introduces all formalisms concerning them with enough rigor to make all results quite clear and valid. Every complicated mathematical passage is preceded by its intuitive explanation so that even the most complex parts of the book are easy to grasp. After studying this book, both students and professionals should be able to understand the fundamental theory of formal languages and computation, write language processors, and confidently follow most advanced books on the subject.
This book constitutes the thoroughly refereed post-conference proceedings of the 18th International Conference on Principles and Practice of Constraint Programming (CP 2012), held in Quebec, Canada, in October 2012. The 68 revised full papers were carefully selected from 186 submissions. Besides the technical program, the conference featured two special tracks. The first was the traditional application track, which focused on industrial and academic uses of constraint technology and its comparison and integration with other optimization techniques (MIP, local search, SAT, etc.). The second track, featured for the first time in 2012, concentrated on multidisciplinary papers: cross-cutting methodology and challenging applications, collecting papers that link CP technology with other techniques such as machine learning, data mining, game theory, simulation, knowledge compilation, visualization, control theory, and robotics. In addition, the track focused on challenging application fields with a high social impact such as CP for life sciences, sustainability, energy efficiency, the web, social sciences, finance, and verification.
This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Haifa Verification Conference, HVC 2011, held in Haifa, Israel, in December 2011. The 15 revised full papers presented together with 3 tool papers and 4 posters were carefully reviewed and selected from 43 submissions. The papers are organized in topical sections on synthesis, formal verification, software quality, testing and coverage, experience and tools, and posters from the student event.
In conjunction with the 1993 International Conference on Logic Programming (ICLP'93), held in Budapest, Hungary, two workshops were held concerning the implementation of logic programming systems: Practical Implementations and Systems Experience in Logic Programming Systems, and Concurrent, Distributed, and Parallel Implementations of Logic Programming Systems. This collection presents 16 research papers in the area of the implementation of logic programming systems. The two workshops aimed to bring together systems implementors to discuss real problems drawn from their direct experience; these papers therefore have a special emphasis on practice rather than theory. This book will be of immediate interest to practitioners who seek understanding of how to efficiently manage memory, generate fast code, perform sophisticated static analyses, and design high-performance runtime features. A major theme throughout the papers is how to effectively leverage host implementation systems and technologies to implement target systems. Debray discusses implementing Janus in SICStus Prolog by exploiting the delay primitive, which is further expounded by Meier in his discussion of various ECRC systems' implementations of delay primitives. Hausman discusses implementing Erlang in C, and Czajkowski and Zielinski discuss embedding Linda primitives in Strand. Denti et al. discuss implementing object-oriented logic programs within SICStus Prolog, a theme also explored and compared to a WAM-based implementation by Bugliesi and Nardiello.
Preface. In nature, real-time systems have been evolving for several hundred million years. Animal nervous systems have the task of responding to messages from the environment by issuing control commands to the active organs; conditioned reflexes, for example, play an important role here. Perhaps the emergence of man can be dated roughly to the time when his gradually developing brain formed thoughts whose significance reached, in a forward-planning way, beyond the immediately present situation. Among other things, this eventually led to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first they served only to execute "rigid" programs, such as the first program-controlled computing machine, the Z3, which the undersigned was able to demonstrate in 1941. It was followed, among other things, by a special-purpose device for wing measurement, which can be regarded as the first process computer: about forty dial gauges working as analog-to-digital converters were read by the computing machine and processed as variables within a program. But even this still took place in a rigid sequence. True process control, today also called real-time systems, requires reacting to constantly changing situations.
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MUSEPAT 2013, held in Saint Petersburg, Russia, in August 2013. The 9 revised papers were carefully reviewed and selected from 25 submissions. The accepted papers are organized into three main sessions and cover topics such as software engineering for multicore systems; specification, modeling and design; programming models, languages, compiler techniques and development tools; verification, testing, analysis, debugging and performance tuning, security testing; software maintenance and evolution; multicore software issues in scientific computing, embedded and mobile systems; energy-efficient computing; as well as experience reports.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools, performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
This book constitutes the refereed proceedings of the 10th International Conference on Software Engineering and Formal Methods, SEFM 2012, held in Thessaloniki, Greece, in October 2012. The 19 revised research papers presented together with 3 short papers, 2 tool papers, and 2 invited talks were carefully reviewed and selected from 98 full submissions. The SEFM conference aspires to advance the state-of-the-art in formal methods, to enhance their scalability and usability with regard to their application in the software industry, and to promote their integration with practical engineering methods.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
This textbook provides an in-depth course on data structures in the context of object-oriented development. Its main themes are abstraction, implementation, encapsulation, and measurement: that is, the software process begins with abstraction of data types, which then lead to alternate representations and encapsulation, and finally to resource measurement. A clear object-oriented approach, making use of Booch components, provides readers with a useful library of data structure components and experience in software reuse. Students using this book are expected to have a reasonable understanding of basic logical structures such as stacks and queues. Throughout, Ada 95 is used, and the author takes full advantage of Ada's encapsulation features and the ability to present specifications without implementation details. The Ada code is supported by two suites available over the World Wide Web.
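The book itself works in Ada 95 with Booch components; purely as a hypothetical, language-neutral illustration of the abstraction-then-representation theme it describes (none of this is the book's own code), a stack abstraction with two interchangeable representations might look like this in Python:

```python
# Hypothetical illustration: one abstract stack interface, two representations behind it.
from abc import ABC, abstractmethod

class Stack(ABC):
    """Abstract data type: clients see only these operations."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...
    @abstractmethod
    def is_empty(self): ...

class ListStack(Stack):
    """Unbounded representation backed by a dynamic array."""
    def __init__(self):
        self._items = []              # representation hidden from clients
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class BoundedStack(Stack):
    """Alternate representation: fixed capacity, preallocated storage."""
    def __init__(self, capacity):
        self._slots = [None] * capacity
        self._top = 0
    def push(self, item):
        if self._top == len(self._slots):
            raise OverflowError("stack is full")
        self._slots[self._top] = item
        self._top += 1
    def pop(self):
        self._top -= 1
        return self._slots[self._top]
    def is_empty(self):
        return self._top == 0

# Either representation satisfies the same abstraction:
for s in (ListStack(), BoundedStack(capacity=4)):
    s.push(1); s.push(2)
    print(s.pop(), s.is_empty())      # -> 2 False
```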
This book constitutes the refereed proceedings of the 2nd International Conference on Model and Data Engineering, MEDI 2012, held in Poitiers, France, in October 2012. The 12 revised full papers presented together with 5 short papers were carefully reviewed and selected from 35 submissions. The papers cover the topics of model-driven engineering, ontology engineering, formal modeling, security, and data mining.
In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
This book constitutes the refereed proceedings of the 6th International Workshop on Reachability Problems, RP 2012, held in Bordeaux, France, in September 2012. The 8 revised full papers presented together with 4 invited talks were carefully reviewed and selected from 15 submissions. The papers present current research and original contributions related to reachability problems in different computational models and systems such as algebraic structures, computational models, hybrid systems, logic and verification. Reachability is a fundamental problem that appears in several different contexts: finite- and infinite-state concurrent systems, computational models like cellular automata and Petri nets, decision procedures for classical, modal and temporal logic, program analysis, discrete and continuous systems, time critical systems, and open systems modeled as games.