Variational Object-Oriented Programming Beyond Classes and Inheritance presents an approach for improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The material presented in this book is interesting to both beginners and students or professionals with an advanced knowledge of object-oriented programming:
* The first part of the book can be used as supplementary material for students and professionals being introduced to object-oriented programming. It provides them with a very concise description of the main concepts of object-oriented programming, which are presented from a conceptual point of view rather than related to the features of a particular object-oriented programming language. The description of the main concepts is a synthesis of considerations from several leading works in data abstraction and object-oriented technology. Parts of the book are currently used as supplementary material for teaching a graduate course on object-oriented design.
* The book provides experienced programmers with a conceptual view of the relationship between object-oriented programming, data abstraction, and previous programming models that promotes a deep understanding of the essence of object-oriented programming.
* The book presents a synthesis of both the main achievements and the main shortcomings of object-oriented programming with respect to supporting incremental programming and promoting software reuse. It illustrates the behavior variations that can be performed incrementally and those that are not supported properly; the workarounds currently used for dealing with the latter case are described.
* Recent developments from ongoing research in object-oriented programming are presented, showing that the problems they deal with can actually be traced to some form of context-dependent behavior. The developments considered include design patterns, subject-oriented programming, adaptive programming, reflection, open implementations, and aspect-oriented programming.
* Advanced students interested in language design are not only provided with a comprehensive informal description of the new model, but also with a formal model and the description of a prototype implementation of RONDO embedded into the Smalltalk-80 environment. This can serve as a basis for experimenting with new concepts or with modifications of the proposed model.
* The last chapter of the book is particularly beneficial to the practitioners of object technology, since it deals with issues in maintaining reusable object-oriented systems.
This is an important textbook on artificial intelligence that uses the unifying thread of search to bring together most of the major techniques used in symbolic artificial intelligence. The authors, aware of the pitfalls of being too general or too academic, have taken a practical approach in that they include program code to illustrate their ideas. Furthermore, code is offered in both POP-11 and Prolog, thereby giving a dual perspective, highlighting the merits of these languages. Each chapter covers one technique and divides up into three sections: a section which introduces the technique (and its usual applications) and suggests how it can be understood as a variant/generalisation of search; a section which develops a low-level (POP-11) implementation; and a section which develops a high-level (Prolog) implementation of the technique. The authors also include useful notes on alternative treatments of the material, further reading and exercises. As a practical book it will be welcomed by a wide audience including those already experienced in AI, students with some background in programming who are taking an introductory course in AI, and lecturers looking for a precise, professional and practical textbook to use in their AI courses. About the authors: Dr Christopher Thornton has a BA in Economics, an MSc in Computer Science and a DPhil in Artificial Intelligence. Formerly a lecturer in the Department of AI at the University of Edinburgh, he is now a lecturer in AI in the School of Cognitive and Computing Sciences at the University of Sussex. Professor Benedict du Boulay has a BSc in Physics and a PhD in Artificial Intelligence. Previously a lecturer in the Department of Computing Science at the University of Aberdeen, he is currently Professor of Artificial Intelligence, also in the School of Cognitive and Computing Sciences, University of Sussex.
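The unifying idea, that many symbolic AI techniques are specialisations of search, can be made concrete with a small sketch. The following generic search skeleton is written in Python rather than the book's POP-11 or Prolog, and the toy graph, goal test and priority functions are illustrative assumptions; the point is that changing only the queue discipline turns the same skeleton into breadth-first, depth-first or heuristic (best-first) search.

```python
import heapq

def generic_search(start, successors, is_goal, priority):
    """Generic search: the queue discipline (priority) decides the strategy.

    priority(depth, state) -> number; e.g. depth gives breadth-first search,
    -depth gives depth-first search, a heuristic value gives best-first search.
    """
    frontier = [(priority(0, start), 0, start, [start])]
    seen = set()
    counter = 0  # tie-breaker so heapq never has to compare states directly
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in successors(state):
            counter += 1
            heapq.heappush(frontier,
                           (priority(len(path), nxt), counter, nxt, path + [nxt]))
    return None

# Illustrative toy graph (an assumption, not an example from the book).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(generic_search('A', lambda s: graph[s], lambda s: s == 'D',
                     lambda depth, s: depth))    # breadth-first
print(generic_search('A', lambda s: graph[s], lambda s: s == 'D',
                     lambda depth, s: -depth))   # depth-first
```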
This book constitutes the refereed proceedings of the 10th Asian Symposium on Programming Languages and Systems, APLAS 2012, held in Kyoto, Japan, in December 2012. The 24 revised full papers presented together with the abstracts of 3 invited talks were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on concurrency, security, static analysis, language design, dynamic analysis, complexity and semantics, and program logics and verification.
Domain theory is a rich interdisciplinary area at the intersection of logic, computer science, and mathematics. This volume contains selected papers presented at the International Symposium on Domain Theory which took place in Shanghai in October 1999. Topics of papers range from the encounters between topology and domain theory, sober spaces, Lawson topology, real number computability and continuous functionals to fuzzy modelling, logic programming, and pi-calculi. This book is a valuable reference for researchers and students interested in this rapidly developing area of theoretical computer science.
This book presents the thoroughly refereed and revised post-workshop proceedings of the 17th Monterey Workshop, held in Oxford, UK, in March 2012. The workshop explored the challenges associated with the Development, Operation and Management of Large-Scale Complex IT Systems. The 21 revised full papers presented were significantly extended and improved by the insights gained from the productive and lively discussions at the workshop, and the feedback from the post-workshop peer reviews.
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in future than previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.
This book constitutes the thoroughly refereed post-conference proceedings of the 23rd International Symposium on Implementation and Application of Functional Languages, IFL 2011, held in Lawrence, Kansas, USA, in October 2011. The 11 revised full papers presented were carefully reviewed and selected from 33 submissions. The papers, by researchers and practitioners actively engaged in the implementation and use of functional and function-based programming languages, describe practical and theoretical work as well as applications and tools. They discuss new ideas and concepts, as well as work in progress and results.
Languages, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and the exploitation of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
FIELD has been a remarkably successful research project. The ideas first exhibited in the environment now form the basis for most of the current generation of programming environments, including Hewlett-Packard's Softbench, DEC's FUSE, Sun's Tooltalk, Lucid's Energize, and SGI's Codevision. FIELD pioneered the notion of broadcast messaging as a basis for tool integration. Moreover, many of the other tool concepts introduced in FIELD have made their way into these environments. Thus in discussing the FIELD environment, this book actually explains the inner workings of today's programming environments. The book will be valuable for those interested in the development of programming tools and environments, as well as serious users of programming environments. It will also be of interest to anyone undertaking a large software project, both by introducing the software tools needed to work on such a project and by demonstrating the concepts of message-based integration which can be applied to a variety of domains.
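FIELD's central idea, tool integration through broadcast messaging, can be sketched in a few lines. The Python sketch below is purely conceptual and assumes nothing about FIELD's actual message formats or wire protocol: tools register interest in message patterns with a central bus, and any tool can broadcast an event that all matching subscribers receive, so the tools never call each other directly.

```python
import fnmatch
from collections import defaultdict

class MessageBus:
    """Toy broadcast-message server: pattern-based subscribe plus broadcast."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # pattern -> list of callbacks

    def subscribe(self, pattern, callback):
        self.subscribers[pattern].append(callback)

    def broadcast(self, message):
        # Deliver the message to every tool whose pattern matches it.
        for pattern, callbacks in self.subscribers.items():
            if fnmatch.fnmatch(message, pattern):
                for cb in callbacks:
                    cb(message)

bus = MessageBus()
# A hypothetical editor and build tool integrating only through the bus.
bus.subscribe("DEBUG STOP *", lambda m: print("editor: highlight line ->", m))
bus.subscribe("EDIT SAVE *",  lambda m: print("builder: recompile ->", m))
bus.broadcast("DEBUG STOP file.c 42")   # debugger announces a breakpoint hit
bus.broadcast("EDIT SAVE file.c")       # editor announces a save
```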
This book constitutes the refereed proceedings of the Fifth International Symposium on Search-Based Software Engineering, SSBSE 2013, held in St. Petersburg, Russia. The 14 revised full papers, 6 revised short papers, and 6 papers of the graduate track presented together with 2 keynotes, 2 challenge track papers and 1 tutorial paper were carefully reviewed and selected from 50 initial submissions. Search Based Software Engineering (SBSE) studies the application of meta-heuristic optimization techniques to various software engineering problems, ranging from requirements engineering to software testing and maintenance.
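As a concrete illustration of applying a meta-heuristic to a software engineering problem (an illustration, not an example from the proceedings), the sketch below uses simple hill climbing to search for a test input that drives a program down a target branch; the program under test and the branch-distance fitness function are assumptions made for the example.

```python
import random

def program_under_test(x):
    # Target branch we want a test input to exercise.
    if x * x - 12 * x + 35 == 0:        # true only for x in {5, 7}
        return "target branch taken"
    return "other branch"

def fitness(x):
    # Branch distance: how far the condition is from being true (0 is best).
    return abs(x * x - 12 * x + 35)

def hill_climb(start, steps=2000):
    current = start
    for _ in range(steps):
        if fitness(current) == 0:
            break
        neighbour = current + random.choice([-1, 1])
        if fitness(neighbour) <= fitness(current):
            current = neighbour
    return current

x = hill_climb(random.randint(0, 50))
print(x, program_under_test(x))
```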
Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of mobile code language is a particularly apt choice. A range of problems which have impact on the safety, security and performance are discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages.
Scientific Data Analysis using Jython Scripting and Java presents practical approaches for data analysis using Java scripting based on Jython, a Java implementation of the Python language. The chapters essentially cover all aspects of data analysis, from arrays and histograms to clustering analysis, curve fitting, metadata and neural networks. A comprehensive coverage of data visualisation tools implemented in Java is also included. Written by the primary developer of the jHepWork data-analysis framework, the book provides a reliable and complete reference source laying the foundation for data-analysis applications using Java scripting. More than 250 code snippets (of around 10-20 lines each) written in Jython and Java, plus several real-life examples help the reader develop a genuine feeling for data analysis techniques and their programming implementation. This is the first data-analysis and data-mining book which is completely based on the Jython language, and opens doors to scripting using a fully multi-platform and multi-threaded approach. Graduate students and researchers will benefit from the information presented in this book.
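To give a flavour of the kind of scripting the book describes, here is a minimal, generic Python/Jython-style sketch of a histogram fill and a straight-line least-squares fit. It deliberately uses only the standard library rather than the jHepWork API, so every name and number in it is illustrative.

```python
import random

# Fill a simple fixed-bin histogram with Gaussian-distributed data.
data = [random.gauss(5.0, 1.5) for _ in range(10000)]
nbins, lo, hi = 20, 0.0, 10.0
width = (hi - lo) / nbins
hist = [0] * nbins
for x in data:
    if lo <= x < hi:
        hist[int((x - lo) / width)] += 1

# Least-squares fit of a straight line y = a*x + b to noisy points.
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print("peak bin:", hist.index(max(hist)),
      "fitted slope/intercept:", round(a, 2), round(b, 2))
```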
This book constitutes the thoroughly refereed post-conference proceedings of the 18th International Conference on Principles and Practice of Constraint Programming (CP 2012), held in Quebec, Canada, in October 2012. The 68 revised full papers were carefully selected from 186 submissions. Besides the technical program, the conference featured two special tracks. The first was the traditional application track, which focused on industrial and academic uses of constraint technology and its comparison and integration with other optimization techniques (MIP, local search, SAT, etc.). The second track, featured for the first time in 2012, concentrated on multidisciplinary papers: cross-cutting methodologies and challenging applications, collecting papers that link CP technology with other techniques like machine learning, data mining, game theory, simulation, knowledge compilation, visualization, control theory, and robotics. In addition, the track focused on challenging application fields with a high social impact such as CP for life sciences, sustainability, energy efficiency, web, social sciences, finance, and verification.
Preface. In nature, real-time systems have been evolving for some hundred million years. Animal nervous systems have the task of issuing control commands to the active organs in response to messages from the environment; conditioned reflexes, for example, play an important role here. Perhaps the emergence of man can be dated roughly to the time when his gradually developing brain produced thoughts whose significance, in a forward-planning way, went beyond the situation immediately at hand. Among other things, this eventually led to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first it served only the execution of "rigid" programs, such as the first program-controlled computing device, the Z3, which the undersigned was able to demonstrate in 1941. This was followed, among other things, by a special-purpose device for measuring wings, which can be regarded as the first process-control computer. About forty dial gauges working as analog-to-digital converters were read by the automatic computer and processed as variables within a program. But even that still happened in a rigid sequence. True process control, today also called real-time systems, requires reacting to constantly changing situations.
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MUSEPAT 2013, held in Saint Petersburg, Russia, in August 2013. The 9 revised papers were carefully reviewed and selected from 25 submissions. The accepted papers are organized into three main sessions and cover topics such as software engineering for multicore systems; specification, modeling and design; programming models, languages, compiler techniques and development tools; verification, testing, analysis, debugging and performance tuning, security testing; software maintenance and evolution; multicore software issues in scientific computing, embedded and mobile systems; and energy-efficient computing, as well as experience reports.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools, performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
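The scaling argument can be made quantitative with a small calculation (an illustration in the spirit of Amdahl's law, not a figure from the book): if an application splits its time between CPU work and disk I/O, then speeding up only the CPU quickly runs into the nearly unchanged disk time.

```python
def speedup(cpu_fraction, cpu_speedup, io_speedup=1.0):
    """Overall speedup when only parts of the runtime get faster (Amdahl's law)."""
    io_fraction = 1.0 - cpu_fraction
    return 1.0 / (cpu_fraction / cpu_speedup + io_fraction / io_speedup)

# Hypothetical application: 70% CPU time, 30% disk time.
for cpu_gain in (10, 100, 1000):
    print(f"CPU x{cpu_gain:>4}, disk x2 -> overall x{speedup(0.7, cpu_gain, 2.0):.2f}")
# The overall speedup approaches 1/0.15 ~ 6.7: the disk becomes the bottleneck
# unless file-system performance is decoupled from raw disk speed.
```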
This textbook provides an in-depth course on data structures in the context of object-oriented development. Its main themes are abstraction, implementation, encapsulation, and measurement: that is, that the software process begins with abstraction of data types, which then leads to alternate representations and encapsulation, and finally to resource measurement. A clear object-oriented approach, making use of Booch components, will provide readers with a useful library of data structure components and experience in software reuse. Students using this book are expected to have a reasonable understanding of the basic logical structures such as stacks and queues. Throughout, Ada 95 is used and the author takes full advantage of Ada's encapsulation features and the ability to present specifications without implementational details. Ada code is supported by two suites available over the World Wide Web.
In brief summary, the following results were presented in this work:
* A linear time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel that has an acyclic dependence graph and no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
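The register requirement of a schedule is essentially the maximum number of values live at the same time (often called MaxLive). The sketch below is not the authors' algorithm; it is a minimal illustration, assuming simple non-wraparound lifetimes given as (definition cycle, last-use cycle) intervals, of counting the peak number of simultaneously live values with an event sweep.

```python
def max_live(lifetimes):
    """Peak number of simultaneously live values for (def_cycle, last_use_cycle) pairs."""
    events = []
    for start, end in lifetimes:
        events.append((start, +1))      # value becomes live
        events.append((end + 1, -1))    # value dies after its last use
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

# Hypothetical lifetimes for values in a small kernel (assumed for illustration).
print(max_live([(0, 3), (1, 5), (2, 2), (4, 7)]))   # -> 3
```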
This book constitutes the refereed proceedings of the 6th International Workshop on Reachability Problems, RP 2012, held in Bordeaux, France, in September, 2012. The 8 revised full papers presented together with 4 invited talks were carefully reviewed and selected from 15 submissions. The papers present current research and original contributions related to reachability problems in different computational models and systems such as algebraic structures, computational models, hybrid systems, logic and verification. Reachability is a fundamental problem that appears in several different contexts: finite- and infinite-state concurrent systems, computational models like cellular automata and Petri nets, decision procedures for classical, modal and temporal logic, program analysis, discrete and continuous systems, time critical systems, and open systems modeled as games.
A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.
Call-by-push-value is a programming language paradigm that, surprisingly, breaks down the call-by-value and call-by-name paradigms into simple primitives. This monograph, written for graduate students and researchers, exposes the call-by-push-value structure underlying a remarkable range of semantics, including operational semantics, domains, possible worlds, continuations and games.
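The flavour of the decomposition can be sketched informally (this is an illustrative Python encoding, not the monograph's formal calculus): call-by-push-value separates values from computations, with thunk/force to suspend and resume computations and with returning and sequencing primitives (rendered here as produce/bind); call-by-name and call-by-value application then differ only in whether the argument is passed as a suspended computation or evaluated first.

```python
# Computations are modelled as zero-argument callables (run them to get a value);
# values are ordinary Python data, including thunks of computations.

def produce(v):            # "return": the computation that just yields v
    return lambda: v

def bind(comp, k):         # sequencing: run comp, pass its result to k
    return lambda: k(comp())()

def thunk(comp):           # suspend a computation as a value
    return comp

def force(th):             # turn a thunked value back into a computation
    return th

def loud_five():           # a computation with a visible effect
    print("evaluating the argument")
    return 5

# A function body that uses its argument (a thunk) twice.
def body(arg_thunk):
    return bind(force(arg_thunk),
                lambda x: bind(force(arg_thunk),
                               lambda y: produce(x + y)))

# Call-by-name: pass the unevaluated computation as a thunk (the effect runs twice).
print(body(thunk(loud_five))())

# Call-by-value: evaluate the argument once, then pass the result as a trivial thunk.
arg_value = loud_five()
print(body(thunk(produce(arg_value)))())
```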
Since the first edition of this book was written in 1977, there has been a tremendous increase in the use of Pascal. This increased use has had two significant effects. (1) It has produced a better understanding of the facilities of Pascal and their use. (2) It has fostered the production of the ISO standard for Pascal. This second edition reflects both this better understanding and the clarifications and changes to Pascal which have resulted from the production of the BSI/ISO Pascal standard. The standard (BS 6192, which supplies the technical content for ISO 7185) is the definitive document on Pascal. My work on the Pascal standard has convinced me that the description of a programming language may be tutorial, or it may be definitive, or it may be neither! The chapters of this book do not constitute a definitive description of Pascal. They are essentially tutorial. The book is based on an introductory lecture course given at Manchester. In addition to lectures, the course consists of two kinds of practical work. The first is based on the solution of short pencil-and-paper exercises. The second requires the student to write complete programs and run them using interactive computer terminals. Each chapter of the book concludes with exercises and problems suitable for these purposes. Although solutions to all of these are not presented in the book, teaching staff may obtain them by application to the authors.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
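A tiny recursive ascent recognizer makes the idea concrete. The sketch below is not taken from the book (which works in a functional setting); it is a hedged illustration in Python for the assumed toy grammar S -> 'a' S 'b' | 'c'. Each LR(0) state becomes a function, shifting a symbol is a call, and a reduction simply returns how many call frames still have to be unwound before the goto is performed, so no explicit parse stack or parse matrix is ever built.

```python
def parse(tokens):
    """Recursive ascent recognizer for the toy grammar  S -> 'a' S 'b' | 'c'.
    Each inner function plays the role of one LR(0) state: shifting a symbol
    is a call, and a reduction returns (nonterminal, frames_left_to_pop)."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else '$'

    def shift():
        nonlocal pos
        pos += 1

    def shift_a_or_c():
        # States 0 and 2 both shift 'a' (to state 2) or 'c' (to state 3).
        tok = peek()
        if tok == 'a':
            shift(); return state2()
        if tok == 'c':
            shift(); return state3()
        raise SyntaxError(f"expected 'a' or 'c' at position {pos}")

    def state0():                       # S' -> . S ; S -> . a S b | . c
        nt, n = shift_a_or_c()
        while n == 0 and nt == 'S':     # a reduction landed here: goto on S
            nt, n = state1()
        return nt, n

    def state1():                       # S' -> S .    (accept on end of input)
        if peek() != '$':
            raise SyntaxError(f"trailing input at position {pos}")
        return "S'", 0

    def state2():                       # S -> a . S b ; S -> . a S b | . c
        nt, n = shift_a_or_c()
        while n == 0:                   # a reduction landed here: goto on S
            nt, n = state4()
        return nt, n - 1                # this frame is popped by the reduction

    def state3():                       # S -> c .      reduce, |rhs| = 1
        return 'S', 0                   # no further frames to pop

    def state4():                       # S -> a S . b
        if peek() != 'b':
            raise SyntaxError(f"expected 'b' at position {pos}")
        shift()
        nt, n = state5()
        return nt, n - 1

    def state5():                       # S -> a S b .  reduce, |rhs| = 3
        return 'S', 2                   # two further frames to pop

    nt, n = state0()
    return nt == "S'" and n == 0

print(parse("aacbb"), parse("c"))       # True True
```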