In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* An efficient method of estimating register requirements as a function of pipeline depth was presented.
* A technique for efficiently finding bounds on register requirements as a function of pipeline depth was developed.
* Experimental data was presented to verify these new techniques.
* Some interesting design points for register file size on a number of different architectures were discussed.
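The first result above rests on a standard notion: the register requirement of a modulo schedule is bounded below by MaxLive, the maximum number of values simultaneously live in any cycle of the kernel. The Python sketch below computes MaxLive from a set of value lifetimes. It is an illustrative baseline only, not the linear-time algorithm of the work; the lifetime format, the function name, and the loop that runs in time proportional to total lifetime length are all assumptions of this sketch.

```python
# Sketch: MaxLive for a software-pipelined kernel with initiation
# interval ii. A value defined at cycle d and last used at cycle u is
# live for u - d + 1 cycles; those cycles wrap around the kernel
# modulo ii, and a lifetime longer than ii overlaps itself.

def max_live(lifetimes, ii):
    """Lower bound on registers (MaxLive) for the given value lifetimes.

    lifetimes: iterable of (def_cycle, last_use_cycle) pairs, ii >= 1.
    """
    pressure = [0] * ii                      # live-value count per kernel cycle
    for start, end in lifetimes:
        length = end - start + 1             # cycles the value stays live
        wraps, tail = divmod(length, ii)     # whole trips around the kernel
        for c in range(ii):
            pressure[c] += wraps             # a full wrap covers every cycle
        for k in range(tail):
            pressure[(start + k) % ii] += 1  # the remaining partial span
    return max(pressure, default=0)

# II = 4: the value live for 6 cycles wraps once, so it alone needs
# 2 registers; together the three values peak at 3 live values.
print(max_live([(0, 5), (1, 2), (3, 3)], 4))   # -> 3
```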
This book constitutes the refereed proceedings of the 2nd International Conference on Model and Data Engineering, MEDI 2012, held in Poitiers, France, in October 2012. The 12 revised full papers presented together with 5 short papers were carefully reviewed and selected from 35 submissions. The papers cover the topics of model-driven engineering, ontology engineering, formal modeling, security, and data mining.
Since the first edition of this book was written in 1977, there has been a tremendous increase in the use of Pascal. This increased use has had two significant effects. (1) It has produced a better understanding of the facilities of Pascal and their use. (2) It has fostered the production of the ISO standard for Pascal. This second edition reflects both this better understanding and the clarifications and changes to Pascal which have resulted from the production of the BSI/ISO Pascal standard. The standard (BS 6192, which supplies the technical content for ISO 7185) is the definitive document on Pascal. My work on the Pascal standard has convinced me that the description of a programming language may be tutorial, or it may be definitive, or it may be neither! The chapters of this book do not constitute a definitive description of Pascal. They are essentially tutorial. The book is based on an introductory lecture course given at Manchester. In addition to lectures, the course consists of two kinds of practical work. The first is based on the solution of short pencil-and-paper exercises. The second requires the student to write complete programs and run them using interactive computer terminals. Each chapter of the book concludes with exercises and problems suitable for these purposes. Although solutions to all of these are not presented in the book, teaching staff may obtain them by application to the authors.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
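To make the recursive-ascent idea concrete, here is a hedged sketch, rendered in Python rather than in the functional notation of the book: each LR(0) state of the tiny grammar E -> E '+' NUM | NUM becomes an ordinary function, reductions unwind through returned pop counts, and no explicit parse stack or parse matrix appears because the call stack plays that role. The grammar, the state numbering, and the tuple protocol are inventions of this sketch, not material from the book.

```python
# A reduction by A -> beta is returned as (pop, A, value, rest) with
# pop = len(beta); each call frame the tuple passes through decrements
# pop, and the state where it reaches 1 performs the goto on A.

def parse(tokens):
    return state0(list(tokens) + ["$"])     # "$" marks end of input

def state0(ts):
    # items: E' -> .E    E -> .E '+' NUM    E -> .NUM
    if not isinstance(ts[0], int):
        raise SyntaxError(f"expected a number, got {ts[0]!r}")
    red = state2(ts[1:], ts[0])             # shift NUM
    while True:
        pop, nt, val, rest = red
        assert pop == 1                     # state0 sits at the stack bottom
        if nt == "E'":
            return val                      # accept
        red = state1(rest, val)             # goto on E

def state1(ts, left):
    # items: E' -> E.    E -> E. '+' NUM
    if ts[0] == "+":
        pop, nt, val, rest = state3(ts[1:], left)   # shift '+'
        return pop - 1, nt, val, rest               # keep unwinding
    if ts[0] == "$":
        return 1, "E'", left, ts            # reduce E' -> E
    raise SyntaxError(f"unexpected {ts[0]!r}")

def state3(ts, left):
    # item: E -> E '+' . NUM
    if not isinstance(ts[0], int):
        raise SyntaxError("expected a number after '+'")
    pop, nt, val, rest = state4(ts[1:], left, ts[0])  # shift NUM
    return pop - 1, nt, val, rest

def state2(ts, n):
    # item: E -> NUM.
    return 1, "E", n, ts                    # reduce E -> NUM

def state4(ts, left, right):
    # item: E -> E '+' NUM.
    return 3, "E", left + right, ts         # reduce E -> E '+' NUM

print(parse([1, "+", 2, "+", 3]))           # -> 6
```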
This book constitutes the refereed proceedings of the 15th International Conference on Model Driven Engineering Languages and Systems, MODELS 2012, held in Innsbruck, Austria, in September/October 2012. The 50 papers presented in this volume were carefully reviewed and selected from a total of 181 submissions. They are organized in topical sections named: metamodels and domain specific modeling; models at runtime; model management; modeling methods and tools, consistency analysis, software product lines; foundations of modeling; static analysis techniques; model testing and simulation; model transformation; model matching, tracing and synchronization; modeling practices and experience; and model analysis.
It is recognized that formal design and verification methods are an important requirement for the attainment of high-quality system designs. The field has evolved enormously during the last few years, to the point that formal design and verification methods are nowadays supported by several tools, both commercial and academic. If different tools and users are to generate and read the same language, then the same semantics must be assigned by all of them to all constructs and elements of the language. The current IEEE standard VHDL language reference manual (LRM) tries to define VHDL as well as possible in a descriptive way, explaining the semantics in English. But rigor and clarity are very hard to maintain in a semantics defined in this way, and that has already given rise to many misconceptions and contradictory interpretations. Formal Semantics for VHDL is the first book that puts forward a cohesive set of semantics for the VHDL language. The chapters describe several semantics, each based on a different underlying formalism: two of them use Petri nets as the target language, two use higher-order logic, two use functional concepts, and another uses the concept of evolving algebras. Formal Semantics for VHDL is essential reading for researchers in formal methods and can be used as a text for an advanced course on the subject.
The theory of constructive (recursive) models grew out of work by Froehlich, Shepherdson, Mal'tsev, Kuznetsov, Rabin, and Vaught in the 1950s. Within the framework of this theory, algorithmic properties of abstract models are investigated by constructing representations on the set of natural numbers and studying relations between algorithmic and structural properties of these models. This book is a very readable exposition of the modern theory of constructive models and describes methods and approaches developed by representatives of the Siberian school of algebra and logic and some other researchers (in particular, Nerode and his colleagues). The main themes are the existence of recursive models and applications to fields, algebras, and ordered sets (Ershov), the existence of decidable prime models (Goncharov, Harrington), the existence of decidable saturated models (Morley), the existence of decidable homogeneous models (Goncharov and Peretyat'kin), properties of the Ehrenfeucht theories (Millar, Ash, and Reed), the theory of algorithmic dimension and conditions of autostability (Goncharov, Ash, Shore, Khusainov, Ventsov, and others), and the theory of computable classes of models with various properties. Future perspectives of the theory of constructive models are also discussed. Most of the results in the book are presented in monograph form for the first time. The theory of constructive models serves as a basis for recursive mathematics. It is also useful in computer science, in particular in the study of programming languages, higher-level specification languages, abstract data types, and problems of synthesis and verification of programs. Therefore, the book will be useful not only for specialists in mathematical logic and the theory of algorithms but also for scientists interested in the mathematical foundations of computer science. The authors are eminent specialists in mathematical logic. They have established fundamental results on elementary theories, model theory, the theory of algorithms, field theory, group theory, applied logic, computable numberings, the theory of constructive models, and theoretical computer science.
This book constitutes the proceedings of the 16th Brazilian Symposium on Programming Languages, SBLP 2012, held in Natal, Brazil, in September 2012. The 10 full and 2 short papers were carefully reviewed and selected from 27 submissions. The papers cover various aspects of programming languages and software engineering.
This book constitutes the thoroughly refereed proceedings of the 10th International Symposium on Automated Technology for Verification and Analysis, ATVA 2012, held at Thiruvananthapuram, Kerala, India, in October 2012. The 25 regular papers, 3 invited papers and 4 tool papers presented were carefully selected from numerous submissions. Conference papers are organized in 9 technical sessions, covering the topics of automata theory, logics and proofs, model checking, software verification, synthesis, verification and parallelism, probabilistic verification, constraint solving and applications, and probabilistic systems.
This book is a revised edition of the monograph which appeared under the same title in the series Research Notes in Theoretical Computer Science, Pitman, in 1986. In addition to a general effort to improve typography, English, and presentation, the main novelty of this second edition is the integration of some new material. Part of it is mine (mostly jointly with coauthors). Here is a brief guide to these additions. I have augmented the account of categorical combinatory logic with a description of the confluence properties of rewriting systems of categorical combinators (Hardin, Yokouchi), and of the newly developed calculi of explicit substitutions (Abadi, Cardelli, Curien, Hardin, Levy, and Rios), which are similar in spirit to the categorical combinatory logic but are closer to the syntax of the λ-calculus (Section 1.2). The study of the full abstraction problem for PCF and extensions of it has been enriched with a new full abstraction result: the model of sequential algorithms is fully abstract with respect to an extension of PCF with a control operator (Cartwright, Felleisen, Curien). An order-extensional model of error-sensitive sequential algorithms is also fully abstract for a corresponding extension of PCF with a control operator and errors (Sections 2.6 and 4.1). I suggest that sequential algorithms lend themselves to a decomposition of the function spaces that leads to models of linear logic (Lamarche, Curien), and that connects sequentiality with games (Joyal, Blass, Abramsky) (Sections 2.1 and 2.6).
Domain theory is a rich interdisciplinary area at the intersection of logic, computer science, and mathematics. This volume contains selected papers presented at the International Symposium on Domain Theory which took place in Shanghai in October 1999. Topics of papers range from the encounters between topology and domain theory, sober spaces, Lawson topology, real number computability and continuous functionals to fuzzy modelling, logic programming, and pi-calculi. This book is a valuable reference for researchers and students interested in this rapidly developing area of theoretical computer science.
This book constitutes the thoroughly refereed proceedings of the 19th International Symposium on Static Analysis, SAS 2012, held in Deauville, France, in September 2012. The 25 revised full papers presented together with 4 invited talks were selected from 62 submissions. The papers address all aspects of static analysis, including abstract domains, abstract interpretation, abstract testing, bug detection, data flow analysis, model checking, new applications, program transformation, program verification, security analysis, theoretical frameworks, and type checking.
In short, to-the-point chapters, Practical C++ Programming covers all aspects of programming including style, software engineering, programming design, object-oriented design, and debugging. It also covers common mistakes and how to find (and avoid) them. End-of-chapter exercises help you ensure you've mastered the material. Steve Oualline's clear, easy-going writing style and hands-on approach to learning make Practical C++ Programming a nearly painless way to master this complex but powerful programming language.
This book constitutes the thoroughly refereed post-conference proceedings of the 24th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2011, held in Fort Collins, CO, USA, in September 2011. The 19 revised full papers presented and 19 poster papers were carefully reviewed and selected from 52 submissions. The scope of the workshop spans the theoretical and practical aspects of parallel and high-performance computing, and targets parallel platforms including concurrent, multithreaded, multicore, accelerator, multiprocessor, and cluster systems.
Formal Languages and Applications provides a comprehensive study aid and self-tutorial for graduate students and researchers. The main results and techniques are presented in a readily accessible manner and accompanied by many references and directions for further research. This carefully edited monograph is intended to be the gateway to formal language theory and its applications, making it a useful review and reference source for the field.
The Constraint Solving and Language Processing (CSLP) workshop considers the role of constraints in the representation of language and the implementation of language processing applications. This theme should be interpreted inclusively: it includes contributions from linguistics, computer science, psycholinguistics and related areas, with a particular interest in interdisciplinary perspectives. Constraints are widely used in linguistics, computer science, and psychology. How they are used, however, varies widely according to the research domain: knowledge representation, cognitive modelling, problem solving mechanisms, etc. These different perspectives are complementary, each one adding a piece to the puzzle.
This book constitutes the thoroughly refereed proceedings of the 18th International Conference Euro-Par 2012, held on Rhodes Island, Greece, in August 2012. The 75 revised full papers presented were carefully reviewed and selected from 228 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high-performance networks and communication; mobile and ubiquitous computing; high-performance and scientific applications; and GPU and accelerator computing.
This book contains the selected peer-reviewed and revised papers from the 24th International Symposium on Implementation and Application of Functional Languages, IFL 2012, held in Oxford, UK, in August/September 2012. The 14 papers included in this volume were carefully reviewed and selected from 28 revised submissions, which in turn arose from the 37 presentations originally given at the conference. The papers relate to the implementation and application of functional languages and function-based programming.
Usability has become increasingly important as an essential part of the design and development of software and systems for all sectors of society, business, industry, government and education, as well as a topic of research. Today, we can safely say that, in many parts of the world, information technology and communications is becoming, or already is, a central force in revolutionising the way that we all live and how our societies function. IFIP's mission states clearly that it "encourages and assists in the development, exploitation and application of information technology for the benefit of all people". The question that must be considered now is how much attention has been given to the usability of the IT-based systems that we use in our work and daily lives. There is much evidence to indicate that the real interests and needs of people have not yet been embraced in a substantial way by IT decision makers when developing and implementing the IT systems that shape our lives, both as private individuals and at work. But some headway has been made. Three years ago, the IFIP Technical Committee on Human Computer Interaction (IFIP TC13) gave the subject of usability its top priority for future work in advancing HCI within the international community. This Usability Stream of the IFIP World Computer Congress is a result of that initiative. It provides a showcase on usability involving some practical business solutions and experiences, and some research findings.
This book presents the thoroughly refereed post-conference proceedings of the International Conference on Formal Verification of Object-Oriented Software, FoVeOOS 2011, held in Turin, Italy, in October 2011 - organised by COST Action IC0701. The 10 revised full papers presented together with 5 invited talks were carefully reviewed and selected from 19 submissions. Formal software verification has outgrown the area of academic case studies, and industry is showing serious interest. The logical next goal is the verification of industrial software products. Most programming languages used in industrial practice are object-oriented, e.g. Java, C++, or C#. FoVeOOS 2011 aimed to foster collaboration and interactions among researchers in this area.
This book constitutes the refereed post-proceedings of the Second International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2011, held in Madrid, Spain, in May 2011. The 8 revised full papers were carefully reviewed and selected from the papers presented at the workshop and papers submitted following an open call for contributions after the workshop. The papers are organized in the following topical sections: implicit complexity, analysis and verification of cost expressions, and worst-case execution time analysis.
Logic Programming is the name given to a distinctive style of programming, very different from that of conventional programming languages such as C++ and Java. By far the most widely used Logic Programming language is Prolog. Prolog is a good choice for developing complex applications, especially in the field of Artificial Intelligence. Logic Programming with Prolog does not assume that the reader is an experienced programmer or has a background in Mathematics, Logic or Artificial Intelligence. It starts from scratch and aims to arrive at the point where quite powerful programs can be written in the language. It is intended both as a textbook for an introductory course and as a self-study book. On completion readers will know enough to use Prolog in their own research or practical projects. Each chapter has self-assessment exercises so that readers may check their own progress. A glossary of the technical terms used completes the book. This second edition has been revised to be fully compatible with SWI-Prolog, a popular multi-platform public domain implementation of the language. Additional chapters have been added covering the use of Prolog to analyse English sentences and to illustrate how Prolog can be used to implement applications of an 'Artificial Intelligence' kind. Max Bramer is Emeritus Professor of Information Technology at the University of Portsmouth, England. He has taught Prolog to undergraduate computer science students and used Prolog in his own work for many years.
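As a taste of what makes the logic-programming style so different from conventional languages, the sketch below implements unification, the matching mechanism at Prolog's core, written in Python for consistency with the other examples on this page. The term representation and function names are this sketch's own, not the book's, and, like most Prolog systems by default, it omits the occurs check.

```python
# Variables are strings starting with an uppercase letter; compound
# terms are tuples such as ('likes', 'alice', 'X'). Unification binds
# variables so two terms become equal; there is no assignment in the
# conventional sense, only an accumulated substitution.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None on clash."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                      # different functors or arities: fail

# Analogous to Prolog's  likes(alice, X) = likes(Y, prolog).
print(unify(('likes', 'alice', 'X'), ('likes', 'Y', 'prolog'), {}))
# -> {'Y': 'alice', 'X': 'prolog'}
```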
Two central ideas in the movement toward advanced automation systems are the office-of-the-future (or office automation system) and the factory-of-the-future (or factory automation system). An office automation system is an integrated system with diversified office equipment, communication devices, intelligent terminals, intelligent copiers, etc., for providing information management and control in a distributed office environment. A factory automation system is also an integrated system with programmable machine tools, robots, and other process equipment such as new "peripherals," for providing manufacturing information management and control. Such advanced automation systems can be regarded as the response to the demand for greater variety, greater flexibility, customized designs, rapid response, and "just-in-time" delivery of office services or manufactured goods. The economy of scope, which allows the production of a variety of similar products in random order, gradually replaces the economy of scale derived from overall volume of operations. In other words, we are gradually switching from the production of large volumes of standard products to systems for the production of a wide variety of similar products in small batches. This is the phenomenon of "demassification" of the marketplace, as described by Alvin Toffler in The Third Wave.
Compilers and operating systems constitute the basic interfaces between a programmer and the machine for which he is developing software. In this book we are concerned with the construction of the former. Our intent is to provide the reader with a firm theoretical basis for compiler construction and sound engineering principles for selecting alternate methods, implementing them, and integrating them into a reliable, economically viable product. The emphasis is upon a clean decomposition employing modules that can be re-used for many compilers, separation of concerns to facilitate team programming, and flexibility to accommodate hardware and system constraints. A reader should be able to understand the questions he must ask when designing a compiler for language X on machine Y, what tradeoffs are possible, and what performance might be obtained. He should not feel that any part of the design rests on whim; each decision must be based upon specific, identifiable characteristics of the source and target languages or upon design goals of the compiler. The vast majority of computer professionals will never write a compiler. Nevertheless, study of compiler technology provides important benefits for almost everyone in the field. It focuses attention on the basic relationships between languages and machines. Understanding of these relationships eases the inevitable transitions to new hardware and programming languages and improves a person's ability to make appropriate tradeoffs in design and implementation.
You may like...
Dark Silicon and Future On-chip Systems… by Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover, R3,940)
Java How to Program, Late Objects… by Paul Deitel, Harvey Deitel (Paperback)
Advanced Visual Basic 6 - Power… by Matthew Curland, Gary Clarke (Paperback, R1,273)