Reversible grammar allows computational models to be built that are equally well suited for the analysis and generation of natural language utterances. This task can be viewed from very different perspectives by theoretical and computational linguists, and computer scientists. The papers in this volume present a broad range of approaches to reversible, bi-directional, and non-directional grammar systems that have emerged in recent years. This is also the first collection entirely devoted to the problems of reversibility in natural language processing. Most papers collected in this volume are derived from presentations at a workshop held at the University of California at Berkeley in the summer of 1991, organised under the auspices of the Association for Computational Linguistics. This book will be a valuable reference for researchers in linguistics and computer science with interests in computational linguistics, natural language processing, and machine translation, as well as in practical aspects of computability.
Concurrent constraint programming (ccp) is a recent development in programming language design. Its central contribution is the notion of partial information provided by a shared constraint store. This constraint store serves as a communication medium between concurrent threads of control and as a vehicle for their synchronization. Objects for Concurrent Constraint Programming analyzes the possibility of supporting object-oriented programming in ccp. Starting from established approaches, the book covers various object models and discusses their properties. Small Oz, a sublanguage of the ccp language Oz, is used as a model language for this analysis. This book presents a general-purpose object system for Small Oz and describes its implementation and expressivity for concurrent computation. Objects for Concurrent Constraint Programming is written for programming language researchers with an interest in programming language aspects of concurrency, object-oriented programming, or constraint programming. Programming language implementors will benefit from the rigorous treatment of the efficient implementation of Small Oz. Oz programmers will get a first-hand view of the design decisions that lie behind the Oz object system.
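As a rough illustration of the store-as-communication-medium idea, here is a minimal sketch in Python (not Oz; the ConstraintStore class and its tell/ask interface are illustrative assumptions, not the book's API). A tell adds information to the shared store, and an ask suspends the calling thread until the store entails the queried fact:

    import threading

    class ConstraintStore:
        """A toy monotonic store: threads 'tell' facts, and 'ask' blocks
        until the store entails the requested fact (illustrative only)."""

        def __init__(self):
            self._facts = set()
            self._cond = threading.Condition()

        def tell(self, fact):
            # Adding information is monotonic: facts are never retracted.
            with self._cond:
                self._facts.add(fact)
                self._cond.notify_all()

        def ask(self, fact):
            # Suspend until the store entails `fact` (here: set membership).
            with self._cond:
                self._cond.wait_for(lambda: fact in self._facts)

    store = ConstraintStore()

    def producer():
        store.tell(("x", 42))      # publish partial information

    def consumer():
        store.ask(("x", 42))       # suspend until x is determined
        print("x is determined, proceeding")

    t1 = threading.Thread(target=consumer)
    t2 = threading.Thread(target=producer)
    t1.start(); t2.start()
    t1.join(); t2.join()

The key property mirrored here is monotonicity: the store only accumulates information, so an ask that has succeeded can never be invalidated later.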
This monograph describes an innovative prototyping framework for data- and knowledge-intensive systems. The proposed approach will prove especially useful for advanced and research-oriented projects that aim to develop a traditional database perspective into fully-fledged advanced database approaches and knowledge engineering technologies. The book is organised in two parts. The first part, comprising chapters 1 to 4, provides an introduction to the concept of prototyping, to database and knowledge-based technologies, and to the main issues involved in the integration of data and knowledge engineering. The second part, comprising chapters 5 to 12, illustrates the proposed approach in technical detail. Audience: This volume will be of interest to researchers in the field of databases and knowledge engineering in general, and to software designers and knowledge engineers who aim to expand their expertise in data- and knowledge-intensive systems.
This book constitutes the refereed proceedings of the 24th IFIP WG 6.1 International Conference on Testing Software and Systems, ICTSS 2012, held in Aalborg, Denmark, in November 2012. The 16 revised full papers presented together with 2 invited talks were carefully selected from 48 submissions. The papers are organized in topical sections on testing in practice, test frameworks for distributed systems, testing of embedded systems, test optimization, and new testing methods.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
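To give a flavour of one of these forms, the sketch below mimics or-parallelism in Python: the alternative clauses that could solve a goal are tried concurrently and all solutions are gathered. The clause functions and the solve driver are invented for illustration; the book's actual machinery is WAM-based and far more sophisticated.

    from concurrent.futures import ThreadPoolExecutor

    # Each "clause" is one alternative way to solve the same goal.
    # Or-parallelism explores the alternatives simultaneously and
    # collects every solution, instead of backtracking sequentially.
    def clause_a(n):
        return [n + 1] if n % 2 == 0 else []

    def clause_b(n):
        return [n * 10] if n > 3 else []

    def solve(goal_arg, clauses):
        with ThreadPoolExecutor() as pool:
            # Run every alternative in parallel, then merge solutions.
            results = pool.map(lambda c: c(goal_arg), clauses)
        return [s for sols in results for s in sols]

    print(solve(4, [clause_a, clause_b]))   # -> [5, 40]

Independent and-parallelism would, analogously, solve the subgoals of a single clause body in parallel when they share no variables.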
Behavioral Specifications of Businesses and Systems deals with the reading, writing and understanding of specifications. The papers presented in this book describe useful and sometimes elegant concepts, good practices (in programming and in specifications), and solid underlying theory that is of interest and importance to those who deal with the increased complexity of business and systems. Most concepts have been successfully used in actual industrial projects, while others are from the forefront of research. Authors include practitioners, business thinkers, academics and applied mathematicians. These seemingly different papers address different aspects of a single problem: taming complexity. Behavioral Specifications of Businesses and Systems emphasizes simplicity and elegance in specifications without concentrating on particular methodologies, languages or tools. It shows how to handle complexity, and, specifically, how to succeed in understanding and specifying businesses and systems based upon precise and abstract concepts. It promotes reuse of such concepts, and of constructs based on them, without taking reuse for granted. Behavioral Specifications of Businesses and Systems is the second volume of papers based on a series of workshops held alongside ACM's annual conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) and the European Conference on Object-Oriented Programming (ECOOP). The first volume, Object-Oriented Behavioral Specifications, edited by Haim Kilov and William Harvey, was published by Kluwer Academic Publishers in 1996.
Compiler technology is fundamental to computer science, since it provides the means to implement many other tools. Indeed, many tools are built around a compiler framework: they accept input in a particular format, perform some processing, and present output in another format. Such tools support the abstraction process and are crucial to productive systems development. The focus of Compiler Technology: Tools, Translators and Language Implementation is to enable quick development of analysis tools. Both lexical scanner and parser generator tools are provided as supplements to this book, since a hands-on approach to experimentation with a toy implementation aids in understanding abstract topics such as parse trees and parse conflicts. Furthermore, it is through hands-on exercises that one discovers the particular intricacies of language implementation. Compiler Technology: Tools, Translators and Language Implementation is suitable as a textbook for an undergraduate- or graduate-level course on compiler technology, and as a reference for researchers and practitioners interested in compilers and language implementation.
The second half of this century will remain as the era of the proliferation of electronic computers. They did exist before, but they were mechanical. During the next century they may undergo other mutations and become optical, molecular or even biological. Actually, all these aspects are only fancy dresses put on mathematical machines. This was always recognized to be true in the domain of software, where "machine" or "high-level" languages are more or less rigorous, but immaterial, variations of the universally accepted mathematical language aimed at specifying elementary operations, functions, algorithms and processes. But even a mathematical machine needs a physical support, and this is what hardware is all about. The invention of hardware description languages (HDLs) in the early 60's was an attempt to stay longer at an abstract level in the design process and to push the stage of physical implementation up to the moment when no more technology-independent decisions can be taken. It was also an answer to the continuous, exponential growth in the complexity of the systems to be designed. This problem is common to hardware and software and may explain why the syntax of hardware description languages has followed, with a reasonable delay of ten years, the evolution of the programming languages: at the end of the 60's they were "Algol-like", a decade later "Pascal-like", and now they are "C- or Ada-like". They have also integrated the new concepts of advanced software specification languages.
After a slow and somewhat tentative beginning, machine vision systems are now finding widespread use in industry. So far, there have been four clearly discernible phases in their development, based upon the types of images processed and how that processing is performed: (1) binary (two-level) images, processed in software; (2) grey-scale images, processed in software; (3) binary or grey-scale images, processed in fast, special-purpose hardware; (4) coloured/multi-spectral images. Third-generation vision systems are now commonplace, although a large number of binary and software-based grey-scale processing systems are still being sold. At the moment, colour image processing is commercially much less significant than the other three, and this situation may well remain for some time, since many industrial artifacts are nearly monochrome and the use of colour increases the cost of the equipment significantly. A great deal of colour image processing is a straightforward extension of standard grey-scale methods. Industrial applications of machine vision systems can also be subdivided, this time into two main areas, which have largely retained distinct identities: (i) Automated Visual Inspection (AVI) and (ii) Robot Vision (RV). This book is about a fifth generation of industrial vision systems, in which this distinction, based on applications, is blurred and the processing is marked by being much smarter (i.e. more "intelligent") than in the other four generations.
This book presents a powerful new language and methodology for programming complex reactive systems in a scenario-based manner. The language is live sequence charts (LSCs), a multimodal extension of sequence charts and UML's sequence diagrams, used in the past mainly for requirements. The methodology is play-in/play-out, an unusually convenient means for specifying inter-object scenario-based behavior directly from a GUI or an object model diagram, with the surprising ability to execute that behavior, or those requirements, directly. The language and methodology are supported by a fully implemented tool, the Play-Engine, which is attached to the book in CD form. Comments from experts in the field:

"The design of reactive systems is one of the most challenging problems in computer science. This book starts with a critical insight to explain the difficulty of this problem: there is a fundamental gap between the scenario-based way in which people think about such systems and the state-based way in which these systems are implemented. The book then offers a radical proposal to bridge this gap by means of playing scenarios. Systems can be specified by playing in scenarios and implemented by means of a Play-Engine that plays out scenarios. This idea is carried out and developed, lucidly, formally and playfully, to its fullest. The result is a compelling proposal, accompanied by a prototype software engine, for reactive systems design, which is bound to cause a splash in the software-engineering community." Moshe Y. Vardi, Rice University, Houston, Texas, USA

"Scenarios are a primary exchange tool in explaining system behavior to others, but their limited expressive power never made them able to fully describe systems, thus limiting their use. The language of Live Sequence Charts (LSCs) presented in this beautifully written book achieves this goal, and the attached Play-Engine software makes these LSCs really come alive. This is undoubtedly a key breakthrough that will start long-awaited and exciting new directions in systems specification, synthesis, and analysis." Gerard Berry, Esterel Technologies and INRIA, Sophia-Antipolis, France

"The approach of David Harel and Rami Marelly is a fascinating way of combining prototyping techniques with techniques for identifying behavior and user interfaces." Manfred Broy, Technical University of Munich, Germany
Automatic Re-engineering of Software Using Genetic Programming describes the application of Genetic Programming to a real-world application area: software re-engineering in general and automatic parallelization specifically. Unlike most uses of Genetic Programming, this book evolves sequences of provable transformations rather than actual programs. It demonstrates that the benefits of this approach are twofold: first, the time required for evaluating a population is drastically reduced, and second, the transformations can subsequently be used to prove that the new program is functionally equivalent to the original. Automatic Re-engineering of Software Using Genetic Programming shows that there are applications where it is more practical to use GP to assist with software engineering rather than to replace it entirely. It also demonstrates how the author isolated aspects of a problem that were particularly suited to GP, and used traditional software engineering techniques in those areas for which they were adequate. Automatic Re-engineering of Software Using Genetic Programming is an excellent resource for researchers in this exciting new field.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines which are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, but also algorithms and applications.
Software Visualization: From Theory to Practice was initially selected as a special volume for "The Annals of Software Engineering (ANSE) Journal," which has been discontinued. This special edited volume is the first to discuss software visualization from the perspective of software engineering. It is a collection of 14 chapters on software visualization, covering topics from theory to practical systems. The chapters are divided into four parts: Visual Formalisms, Human Factors, Architectural Visualization, and Visualization in Practice. Together they cover a comprehensive range of software visualization topics.
Software Visualization: From Theory to Practice is designed to meet the needs of both an academic and a professional audience composed of researchers and software developers. This book is also suitable for senior undergraduate and graduate students in software engineering and computer science, as a secondary text or a reference.
The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge to research groups, due to the difficulty in accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.
Program understanding plays an important role in nearly all software-related tasks. It is vital to development, maintenance and reuse activities, and indispensable for improving the quality of software development. Several development activities, such as code reviews, debugging and some testing approaches, require programmers to read and understand programs. Maintenance activities cannot be performed without a deep and correct understanding of the component to be maintained. Program understanding is vital to the reuse of code components because they cannot be utilized without a clear understanding of what they do, and if a candidate reusable component needs to be modified, an understanding of how it is designed is also required. This monograph presents a knowledge-based approach to the automation of program understanding. This approach generates rigorous program documentation mechanically by combining and building on the strengths of a practical program decomposition method, the axiomatic correctness notation, and knowledge-based analysis approaches. More specifically, this approach documents programs by generating first-order predicate logic annotations of their loops. In this approach, loops are classified according to their complexity levels. Based on this taxonomy, variations on the basic analysis approach that best fit each of the different classes are described. In general, mechanical annotation of loops is performed by first decomposing them using data flow analysis. This decomposition encapsulates interdependent statements in events, which can be analyzed individually.
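As a concrete example of the kind of annotation meant here (our own invented illustration, not one taken from the monograph), consider a simple summation loop together with a first-order invariant of the sort such an analysis would attach to it:

    def array_sum(a):
        """Sum a list; the loop invariant below is the sort of
        first-order annotation a mechanical analysis would generate."""
        total, i = 0, 0
        # Invariant: 0 <= i <= len(a)  and  total = a[0] + ... + a[i-1],
        # i.e. forall k (0 <= k < i -> a[k] has been added to total once).
        while i < len(a):
            total += a[i]
            i += 1
        # On exit: i = len(a), hence total = a[0] + ... + a[len(a)-1].
        return total

The data-flow decomposition mentioned above would isolate the interdependent updates of total and i as one event before matching it against a known loop class.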
DSSSL (Document Style Semantics and Specification Language) is an ISO standard (ISO/IEC 10179:1996) published in 1996. DSSSL is a standard of the SGML family (Standard Generalized Markup Language, ISO 8879:1986), whose aim is to establish a processing model for SGML documents. For a good understanding of the SGML standard, many books exist, including the Author's Guide [Bryan 1988] and The SGML Handbook [Goldfarb 1990]. A DSSSL document is an SGML document, written with the same rules that guide any SGML document. The structure of a DSSSL document is explained in Chapter 2. DSSSL is based, in part, on Scheme, a standard functional programming language. The DSSSL subset of Scheme, along with the procedures supported by DSSSL, is explained in Chapter 3. The DSSSL standard starts with the supposition of a pre-existing SGML document, and offers a series of processes that can be performed on it. Groves: the first process that is performed on an SGML document in DSSSL is always the analysis of the document and the creation of a grove. The DSSSL standard shares many common characteristics with another standard of the SGML family, HyTime (ISO/IEC 10744). These standards were developed in parallel, and their developers designed a common data model, the grove, that would support the processing needs of each standard.
This book constitutes the refereed proceedings of the 10th Asian Symposium on Programming Languages and Systems, APLAS 2012, held in Kyoto, Japan, in December 2012. The 24 revised full papers presented together with the abstracts of 3 invited talks were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on concurrency, security, static analysis, language design, dynamic analysis, complexity and semantics, and program logics and verification.
The need for a comprehensive survey-type exposition on formal languages and related mainstream areas of computer science has been evident for some years. In the early 1970s, when the book Formal Languages by the second-mentioned editor appeared, it was still quite feasible to write a comprehensive book with that title and include also topics of current research interest. This would not be possible anymore. A standard-sized book on formal languages would either have to stay on a fairly low level or else be specialized and restricted to some narrow sector of the field. The setup becomes drastically different in a collection of contributions, where the best authorities in the world join forces, each of them concentrating on their own areas of specialization. The present three-volume Handbook constitutes such a unique collection. In these three volumes we present the current state of the art in formal language theory. We were most satisfied with the enthusiastic response given to our request for contributions by specialists representing various subfields. The need for a Handbook of Formal Languages was expressed in many answers in different ways: as an easily accessible historical reference, a general source of information, an overall course-aid, and a compact collection of material for self-study. We are convinced that the final result will satisfy such various needs. The theory of formal languages constitutes the stem or backbone of the field of science now generally known as theoretical computer science.
Variational Object-Oriented Programming Beyond Classes and Inheritance presents an approach for improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The material presented in this book is interesting to both beginners and students or professionals with an advanced knowledge of object-oriented programming:
* The first part of the book can be used as supplementary material for students and professionals being introduced to object-oriented programming. It provides them with a very concise description of the main concepts of object-oriented programming, which are presented from a conceptual point of view rather than related to the features of a particular object-oriented programming language. The description of the main concepts is a synthesis of considerations from several leading works in data abstraction and object-oriented technology. Parts of the book are currently used as supplementary material for teaching a graduate course on object-oriented design.
* The book provides experienced programmers with a conceptual view of the relationship between object-oriented programming, data abstraction, and previous programming models that promotes a deep understanding of the essence of object-oriented programming.
* The book presents a synthesis of both the main achievements and the main shortcomings of object-oriented programming with respect to supporting incremental programming and promoting software reuse. It illustrates the behavior variations that can be performed incrementally and those that are not supported properly; the workarounds currently used for dealing with the latter case are described.
* Recent developments from ongoing research in object-oriented programming are presented, showing that the problems they deal with can actually be traced to some form of context-dependent behavior. The developments considered include design patterns, subject-oriented programming, adaptive programming, reflection, open implementations, and aspect-oriented programming.
* Advanced students interested in language design are not only provided with a comprehensive informal description of the new model, but also with a formal model and the description of a prototype implementation of RONDO embedded into the Smalltalk-80 environment. This can serve as a basis for experimenting with new concepts or with modifications of the proposed model.
* The last chapter of the book is particularly beneficial to the practitioners of object technology, since it deals with issues in maintaining reusable object-oriented systems.
This is an important textbook on artificial intelligence that uses the unifying thread of search to bring together most of the major techniques used in symbolic artificial intelligence. The authors, aware of the pitfalls of being too general or too academic, have taken a practical approach in that they include program code to illustrate their ideas. Furthermore, code is offered in both POP-11 and Prolog, thereby giving a dual perspective, highlighting the merits of these languages. Each chapter covers one technique and divides up into three sections: a section which introduces the technique (and its usual applications) and suggests how it can be understood as a variant/generalisation of search; a section which develops a low-level (POP-11) implementation; and a section which develops a high-level (Prolog) implementation of the technique. The authors also include useful notes on alternative treatments of the material, further reading and exercises. As a practical book it will be welcomed by a wide audience, including those already experienced in AI, students with some background in programming who are taking an introductory course in AI, and lecturers looking for a precise, professional and practical textbook to use in their AI courses. About the authors: Dr Christopher Thornton has a BA in Economics, an MSc in Computer Science and a DPhil in Artificial Intelligence. Formerly a lecturer in the Department of AI at the University of Edinburgh, he is now a lecturer in AI in the School of Cognitive and Computing Sciences at the University of Sussex. Professor Benedict du Boulay has a BSc in Physics and a PhD in Artificial Intelligence. Previously a lecturer in the Department of Computing Science at the University of Aberdeen, he is currently Professor of Artificial Intelligence, also in the School of Cognitive and Computing Sciences, University of Sussex.
Program synthesis is a solution to the software crisis: if we had a program that develops correct programs from specifications, then program validation and maintenance would disappear from the software life-cycle, and one could focus on the more creative tasks of specification elaboration, validation, and maintenance, because replay of program development would be less costly. This monograph describes a novel approach to Inductive Logic Programming (ILP), which cross-fertilizes logic programming and machine learning. Aiming only at the synthesis of recursive logic programs from incomplete information, we take a software engineering approach that is more appropriate than a pure artificial intelligence approach. This book is suitable as a secondary text for graduate-level courses in software engineering and artificial intelligence, and as a reference for practitioners of program synthesis.
"One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'." Eric T. Bell

"And I, ..., had I known how to come back, I would never have gone." ('Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.') Jules Verne

"The series is divergent; therefore we may be able to do something with it." O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in the future than previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.
This book constitutes the thoroughly refereed post-conference proceedings of the 23rd International Symposium on Implementation and Application of Functional Languages, IFL 2011, held in Lawrence, Kansas, USA, in October 2011. The 11 revised full papers presented were carefully reviewed and selected from 33 submissions. The papers, by researchers and practitioners who are actively engaged in the implementation and the use of functional and function-based programming languages, describe practical and theoretical work as well as applications and tools. They discuss new ideas and concepts, as well as work in progress and results.
Domain theory is a rich interdisciplinary area at the intersection of logic, computer science, and mathematics. This volume contains selected papers presented at the International Symposium on Domain Theory which took place in Shanghai in October 1999. Topics of papers range from the encounters between topology and domain theory, sober spaces, Lawson topology, real number computability and continuous functionals to fuzzy modelling, logic programming, and pi-calculi. This book is a valuable reference for researchers and students interested in this rapidly developing area of theoretical computer science.