This book constitutes the refereed conference proceedings of the 20th International Workshop on Functional and Constraint Logic Programming, WFLP 2011, held in Odense, Denmark, in July 2011 as part of the 13th International Symposium on Principles and Practice of Declarative Programming (PPDP 2011), the 21st International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2011), and the 4th International Workshop on Approaches and Applications of Inductive Programming (AAIP 2011). Of the 10 papers submitted, 9 were accepted for presentation in the proceedings. The papers cover current research in all areas of functional and logic programming, as well as the integration of constraint logic and object-oriented programming, and term rewriting.
Mobile Computation with Functions explores distributed computation with languages that adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems affecting safety, security and performance is discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. The book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages defined in the subsequent chapters have their roots in these languages.
DSSSL (Document Style Semantics and Specification Language) is an ISO standard (ISO/IEC 10179:1996) published in 1996. DSSSL is a standard of the SGML family (Standard Generalized Markup Language, ISO 8879:1986), whose aim is to establish a processing model for SGML documents. For a good understanding of the SGML standard, many books exist, including the Author's Guide [Bryan1988] and The SGML Handbook [Goldfarb1990]. A DSSSL document is an SGML document, written with the same rules that guide any SGML document. The structure of a DSSSL document is explained in Chapter 2. DSSSL is based, in part, on Scheme, a standard functional programming language. The DSSSL subset of Scheme, along with the procedures supported by DSSSL, is explained in Chapter 3. The DSSSL standard starts with the supposition of a pre-existing SGML document, and offers a series of processes that can be performed on it:
* Groves. The first process performed on an SGML document in DSSSL is always the analysis of the document and the creation of a grove. The DSSSL standard shares many common characteristics with another standard of the SGML family, HyTime (ISO/IEC 10744). These standards were developed in parallel, and their developers designed a common data model, the grove, that would support the processing needs of each standard.
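To make the grove idea concrete, here is a minimal sketch in Python (not from the standard) that parses a small document and builds a grove-like tree of nodes, each carrying an element name, properties, and children. XML and xml.etree stand in for a true SGML parser, and the node property names are illustrative assumptions, not DSSSL's actual property set.

    import xml.etree.ElementTree as ET

    def build_grove(element):
        """Recursively turn a parsed element into a grove-like node:
        a dict of properties plus a list of child nodes."""
        return {
            "gi": element.tag,                      # generic identifier (element name)
            "attributes": dict(element.attrib),
            "data": (element.text or "").strip(),
            "children": [build_grove(child) for child in element],
        }

    doc = "<article><title>Groves</title><para>A tiny example.</para></article>"
    grove = build_grove(ET.fromstring(doc))
    print(grove["gi"], [c["gi"] for c in grove["children"]])  # article ['title', 'para']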
The Ada Generic Library provides an extensive, well-documented library of generic packages whose use can substantially increase software productivity and reliability. The construction of the library follows a new approach whose principles include the following:
- Extensive use of generic algorithms, such as generic "sort" and "merge."
- Building up functionality in layers.
- Obtaining high efficiency in spite of the layering, through the use of Ada's "inline" compiler directive.
This volume contains eight Ada packages, with over 170 subprograms for various linear data structures based on linked lists. Professional Ada programmers will find The Ada Generic Library an invaluable tool in building application programs or in further construction of generic libraries. For these users the source code can be obtained on diskettes. The volume will also be useful to those interested in programming methodology, software reusability, and software engineering.
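As a rough analogue of the generic "merge" mentioned above, here is a sketch in Python (the book's own examples are in Ada) using type variables; the function name and the key parameter are illustrative choices, not the library's API.

    from typing import Callable, Iterator, Sequence, TypeVar

    T = TypeVar("T")
    K = TypeVar("K")

    def merge(a: Sequence[T], b: Sequence[T], key: Callable[[T], K]) -> Iterator[T]:
        """Merge two sequences already sorted under `key`, generically over T."""
        i = j = 0
        while i < len(a) and j < len(b):
            if key(a[i]) <= key(b[j]):
                yield a[i]; i += 1
            else:
                yield b[j]; j += 1
        yield from a[i:]
        yield from b[j:]

    print(list(merge([1, 4, 9], [2, 3, 10], key=lambda n: n)))  # [1, 2, 3, 4, 9, 10]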
In conjunction with the 1993 International Conference on Logic Programming (ICLP'93), held in Budapest, Hungary, two workshops were held concerning the implementation of logic programming systems: Practical Implementations and Systems Experience in Logic Programming Systems, and Concurrent, Distributed, and Parallel Implementations of Logic Programming Systems. This collection presents 16 research papers in the area of the implementation of logic programming systems. The two workshops aimed to bring together systems implementors to discuss real problems coming from their direct experience; these papers therefore have a special emphasis on practice rather than on theory. This book will be of immediate interest to practitioners who seek understanding of how to efficiently manage memory, generate fast code, perform sophisticated static analyses, and design high-performance runtime features. A major theme throughout the papers is how to effectively leverage host implementation systems and technologies to implement target systems. Debray discusses implementing Janus in SICStus Prolog by exploiting the delay primitive, which is further expounded by Meier in his discussion of various ECRC systems' implementations of delay primitives. Hausman discusses implementing Erlang in C, and Czajkowski and Zielinski discuss embedding Linda primitives in Strand. Denti et al. discuss implementing object-oriented logic programs within SICStus Prolog, a theme also explored and compared to a WAM-based implementation by Bugliesi and Nardiello.
mental improvements during the same period. What is clearly needed in verification techniques and technology is the equivalent of a synthesis productivity breakthrough. In the second edition of Writing Testbenches, Bergeron raises the verification level of abstraction by introducing coverage-driven, constrained-random, transaction-level, self-checking testbenches, all made possible through the introduction of hardware verification languages (HVLs), such as e from Verisity and OpenVera from Synopsys. The state-of-the-art methodologies described in Writing Testbenches will contribute greatly to the much-needed equivalent of a synthesis breakthrough in verification productivity. I not only highly recommend this book, but I also think it should be required reading by anyone involved in the design and verification of today's ASICs, SoCs and systems. Harry Foster, Chief Architect, Verplex Systems, Inc. If you survey hardware design groups, you will learn that between 60% and 80% of their effort is now dedicated to verification.
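As a toy illustration of the constrained-random, self-checking style described above, here is a sketch in Python rather than an HVL; the 8-bit adder "design", the reference model and the coverage bins are all invented for the example, not taken from the book.

    import random

    def dut_add8(a: int, b: int) -> int:
        """Stand-in for the design under test: an 8-bit adder."""
        return (a + b) & 0xFF

    def ref_add8(a: int, b: int) -> int:
        """Independent reference model used for self-checking."""
        return (a + b) % 256

    random.seed(2024)
    coverage = set()
    for _ in range(1000):
        # Constrained-random stimulus: operands limited to 8 bits.
        a, b = random.randrange(256), random.randrange(256)
        assert dut_add8(a, b) == ref_add8(a, b), f"mismatch on {a}+{b}"
        # A tiny functional coverage model: which corner cases were exercised?
        coverage.add((a == 0, b == 0, a + b > 255))
    print(f"hit {len(coverage)} of 8 coverage bins")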
This volume contains a selection of papers that focus on the state-of-the-art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
This book constitutes the thoroughly refereed post-proceedings of the 9th International Workshop on Declarative Agent Languages and Technologies, DALT 2011, held in Taipei, Taiwan, in May 2011. The volume contains 6 revised selected papers presented at DALT 2011; 7 best papers from the DALT series over the years, explaining how the research developed and how it influenced and impacted the community, the state-of-the-art and subsequent work; and 2 invited papers from the DALT Spring School, which took place in April 2011.
Object-oriented database systems have been approached with mainly two major intentions in mind, namely to better support new application areas, including CAD/CAM, office automation and knowledge engineering, and to overcome the 'impedance mismatch' between data models and programming languages. This volume gives a comprehensive overview of developments in this flourishing area of current database research. Data model and language aspects, interface and database design issues, and architectural and implementation questions are covered. Although based on a series of workshops, the contents of this book have been carefully edited to reflect the current state of international research in object-oriented database design and implementation.
A Formal Approach to Hardware Design discusses designing computations to be realised by application-specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and for synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
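The core idea behind Synchronized Transitions, guarded atomic transitions over a shared state, can be caricatured in a few lines of Python; the handshake example and state names below are invented, and the real language adds synchronization, composition and verification machinery this sketch ignores.

    import random

    # A design is a set of transitions: (guard, action) pairs over a shared state.
    state = {"req": True, "ack": False}
    transitions = [
        (lambda s: s["req"] and not s["ack"], lambda s: s.update(ack=True)),   # raise ack
        (lambda s: s["req"] and s["ack"],     lambda s: s.update(req=False)),  # drop req
    ]

    random.seed(7)
    while True:
        enabled = [t for t in transitions if t[0](state)]
        if not enabled:                       # quiescence: no guard holds
            break
        _, action = random.choice(enabled)    # nondeterministic choice of one transition
        action(state)
    print(state)                              # {'req': False, 'ack': True}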
This book constitutes the refereed proceedings of the 18th International SPIN workshop on Model Checking Software, SPIN 2011, held in Snowbird, UT, USA, in July 2011. The 10 revised full papers presented together with 2 tool demonstration papers and 1 invited contribution were carefully reviewed and selected from 29 submissions. The papers are organized in topical sections on abstractions and state-space reductions; search strategies; PROMELA encodings and extensions; and applications of model checking.
A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.
Program understanding plays an important role in nearly all software-related tasks. It is vital to development, maintenance and reuse activities. Program understanding is indispensable for improving the quality of software development. Several development activities, such as code reviews, debugging and some testing approaches, require programmers to read and understand programs. Maintenance activities cannot be performed without a deep and correct understanding of the component to be maintained. Program understanding is vital to the reuse of code components because they cannot be utilized without a clear understanding of what they do. If a candidate reusable component needs to be modified, an understanding of how it is designed is also required. This monograph presents a knowledge-based approach to the automation of program understanding. This approach generates rigorous program documentation mechanically by combining and building on the strengths of a practical program decomposition method, the axiomatic correctness notation, and knowledge-based analysis approaches. More specifically, this approach documents programs by generating first-order predicate logic annotations of their loops. In this approach, loops are classified according to their complexity levels. Based on this taxonomy, variations on the basic analysis approach that best fit each of the different classes are described. In general, mechanical annotation of loops is performed by first decomposing them using data flow analysis. This decomposition encapsulates interdependent statements in events, which can be analyzed individually.
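As a flavor of the kind of loop annotation the monograph automates, here is a hand-written sketch in Python; the loop, its invariant, and the runtime checks are invented for illustration, whereas the book's approach derives such first-order annotations mechanically via data flow analysis.

    def array_sum(xs: list[int]) -> int:
        """Sum a list, checking the loop invariant  s == xs[0] + ... + xs[i-1]
        at every iteration (the kind of annotation the approach generates)."""
        s, i = 0, 0
        while i < len(xs):
            assert s == sum(xs[:i]), "invariant violated"
            s += xs[i]
            i += 1
        assert s == sum(xs)   # postcondition follows from invariant and exit condition
        return s

    print(array_sum([3, 1, 4, 1, 5]))  # 14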
This book constitutes the thoroughly refereed post-workshop proceedings of the 9th International Workshop on Rewriting Logic and its Applications, WRLA 2012, held as a satellite event of ETAPS 2012 in Tallinn, Estonia, in March 2012. The 8 revised full papers presented together with 4 invited papers were carefully reviewed and selected from 12 initial submissions and 5 invited lectures. The papers address a great diversity of topics in the field of rewriting logic, such as: foundations and models, languages, logical and semantic frameworks, model-based software engineering, real-time and probabilistic extensions, verification techniques, and distributed systems.
By developing object calculi in which objects are treated as primitives, the authors are able to explain both the semantics of objects and their typing rules, and also demonstrate how to develop all of the most important concepts of object-oriented programming languages: self, dynamic dispatch, classes, inheritance, protected and private methods, prototyping, subtyping, covariance and contravariance, and method specialization. An innovative and important approach to the subject for researchers and graduates.
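A taste of the object-calculus view, objects as primitive records of methods that take self, can be given in a short Python sketch; the invoke/update helpers and the point example are illustrative stand-ins for the authors' sigma-calculus notation, not their formal system.

    # An object is a map from method labels to functions of self (the object itself).
    def invoke(obj, label):
        """Method invocation: run the method body with self bound to the object."""
        return obj[label](obj)

    def update(obj, label, method):
        """Method update: return a new object with one method replaced."""
        new = dict(obj)
        new[label] = method
        return new

    # A movable point: `move` returns a copy of itself with `x` overridden.
    point = {
        "x":    lambda self: 0,
        "move": lambda self: update(self, "x", lambda s: invoke(self, "x") + 1),
    }
    moved = invoke(point, "move")
    print(invoke(point, "x"), invoke(moved, "x"))  # 0 1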
This book constitutes the refereed proceedings of the 10th International Conference on Formal Modeling and Analysis of Timed Systems, FORMATS 2012, held in London, UK, in September 2012. The 16 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 34 submissions. The book covers foundations and semantics, methods and tools, techniques, algorithms, hybrid automata, applications, and real-time software and hardware circuits.
This book constitutes the refereed proceedings of the 6th International Workshop on Reachability Problems, RP 2012, held in Bordeaux, France, in September 2012. The 8 revised full papers presented together with 4 invited talks were carefully reviewed and selected from 15 submissions. The papers present current research and original contributions related to reachability problems in different computational models and systems, such as algebraic structures, computational models, hybrid systems, and logic and verification. Reachability is a fundamental problem that appears in several different contexts: finite- and infinite-state concurrent systems, computational models like cellular automata and Petri nets, decision procedures for classical, modal and temporal logic, program analysis, discrete and continuous systems, time-critical systems, and open systems modeled as games.
This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.
This book contains thoroughly refereed and revised papers from the 8th International Andrei Ershov Memorial Conference on Perspectives of System Informatics, PSI 2011, held in Akademgorodok, Novosibirsk, Russia, in June/July 2011. The 18 revised full papers and 10 revised short papers presented were carefully reviewed and selected from 60 submissions. The volume also contains 5 invited papers covering a range of hot topics in computer science and informatics. The papers are organized in topical sections on foundations of program and system development and analysis, partial evaluation, mixed computation, abstract interpretation, compiler construction, computer models and algorithms for bioinformatics, programming methodology and software engineering, information technologies, knowledge-based systems, and knowledge engineering.
This book constitutes the proceedings of the 13th International Workshop on Computational Logic in Multi-Agent Systems, CLIMA XIII, held in Montpellier, France, in August 2012. The 11 regular papers were carefully reviewed and selected from 27 submissions and presented with three invited papers. The purpose of the CLIMA workshops is to provide a forum for discussing techniques, based on computational logic, for representing, programming and reasoning about agents and multi-agent systems in a formal way.
The programming language SETL is a relatively new member of the so-called "very-high-level" class of languages, some of whose other well-known members are LISP, APL, SNOBOL, and PROLOG. These languages all aim to reduce the cost of programming, recognized today as a main obstacle to future progress in the computer field, by allowing direct manipulation of large composite objects, considerably more complex than the integers, strings, etc., available in such well-known mainstream languages as PASCAL, PL/I, ALGOL, and Ada. For this purpose, LISP introduces structured lists as data objects, APL introduces vectors and matrices, and SETL introduces the objects characteristic for it, namely general finite sets and maps. The direct availability of these abstract, composite objects, and of powerful mathematical operations upon them, improves programmer speed and productivity significantly, and also enhances program clarity and readability. The classroom consequence is that students, freed of some of the burden of petty programming detail, can advance their knowledge of significant algorithms and of broader strategic issues in program development more rapidly than with more conventional programming languages.
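Python's set and dict comprehensions are direct descendants of the set formers and finite maps SETL pioneered, so a short sketch in Python (not SETL's actual syntax) conveys the style:

    # SETL-style set former: the even numbers below 20, and a finite map on them.
    evens = {n for n in range(20) if n % 2 == 0}
    square = {n: n * n for n in evens}        # a finite map from n to n*n

    # Wholesale operations on sets and maps, with no element-by-element loops.
    big = {square[n] for n in evens if square[n] > 100}
    print(sorted(big))                        # [144, 196, 256, 324]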
This book provides a superb introduction to and overview of the MIT PI System for custom VLSI placement and routing. Alan Sherman has done an excellent job of collecting and clearly presenting material that was previously available only in various theses, conference papers, and memoranda. He has provided here a balanced and comprehensive presentation of the key ideas and techniques used in PI, discussing part of his own Ph.D. work (primarily on the placement problem) in the context of the overall design of PI and the contributions of the many other PI team members. I began the PI Project in 1981 after learning first-hand how difficult it is to manually place modules and route interconnections in a custom VLSI chip. In 1980 Adi Shamir, Leonard Adleman, and I designed a custom VLSI chip for performing RSA encryption/decryption [226]. I became fascinated with the combinatorial and algorithmic questions arising in placement and routing, and began active research in these areas. The PI Project was started in the belief that many of the most interesting research issues would arise during an actual implementation effort, and secondarily in the hope that a practically useful tool might result. The belief was well-founded, but I had underestimated the difficulty of building a large easily-used software tool for a complex domain; the PI software should be considered as a prototype implementation validating the design choices made.
"Oil is the problem. Cars are the solution."
This book constitutes the refereed proceedings of the 11th International Symposium on Functional and Logic Programming, FLOPS 2012, held in Kobe, Japan, in May 2012. The 19 research papers and 3 system demonstrations presented in this volume were carefully reviewed and selected from 39 submissions. They deal with declarative programming, including functional programming and logic programming.
This book had its genesis in the following piece of computer mail: From allegra!joan-b Tue Dec 18 09:15:54 1984 To: sola!hjb Subject: lispm Hank, I've been talking with Mark Plotnik and Bill Gale about asking you to conduct a basic course on using the lisp machine. Mark, for instance, would really like to cover basics like the flavor system, etc., so he could start doing his own programming without a lot of trial and error, and Bill and I would be interested in this, too. I'm quite sure that Mark Jones, Bruce, Eric and Van would also be really interested. Would you like to do it? Bill has let me know that if you'd care to set something up, he's free to meet with us anytime this week or next (although I'll only be here on Wed. next week) so we can come up with a plan. What do you think? Joan. (All the people and computers mentioned above work at AT&T Bell Laboratories, in Murray Hill, New Jersey.) I agreed, with some trepidation, to try teaching such a course. It wasn't clear how I was going to explain the Lisp Machine environment to a few dozen beginners when at the time I felt I was scarcely able to keep myself afloat, particularly since many of the "beginners" had PhDs in computer science and a decade or two of programming experience.