This book brings together experts to discuss relevant results in software process modeling and to express their personal views of this field. It is designed for a professional audience of researchers and practitioners in industry, and for graduate-level students.
Reasoning under uncertainty is always based on a specified language or formalism, including its particular syntax and semantics, but also on its associated inference mechanism. The present volume of the handbook covers the last aspect: the algorithmic side of uncertainty calculi. Theory has sufficiently advanced to unfold some generally applicable fundamental structures and methods. On the other hand, particular features of specific formalisms and approaches to uncertainty still strongly influence the computational methods to be used. Both general and specific methods are included in this volume. Broadly speaking, symbolic or logical approaches to uncertainty are often distinguished from numerical approaches. Although this distinction is somewhat misleading, it is used as a means to structure the present volume, and it is to some degree reflected in the first two chapters, which treat fundamental, general methods of computation in systems designed to represent uncertainty. It was noted early on by Shenoy and Shafer that computations in different domains have an underlying common structure: essentially, pieces of knowledge or information are combined and then focused on some particular question or domain. This can be captured in an algebraic structure called a valuation algebra, which is described in the first chapter. Here the basic operations of combination and focusing (marginalization) of knowledge and information are modeled abstractly, subject to simple axioms.
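The valuation algebra the blurb mentions can be pinned down concisely. As a minimal sketch in the standard Shenoy-Shafer formulation (the notation here is mine, not quoted from the chapter): valuations φ, ψ carry domains d(φ), d(ψ) over a set of variables, ⊗ combines, and ↓ marginalizes.

```latex
\begin{align*}
d(\varphi \otimes \psi) &= d(\varphi) \cup d(\psi)
  && \text{(labeling)}\\
\varphi \otimes \psi &= \psi \otimes \varphi,
  \qquad (\varphi \otimes \psi) \otimes \chi = \varphi \otimes (\psi \otimes \chi)
  && \text{(semigroup)}\\
(\varphi^{\downarrow s})^{\downarrow t} &= \varphi^{\downarrow t},
  \qquad t \subseteq s \subseteq d(\varphi)
  && \text{(transitivity)}\\
(\varphi \otimes \psi)^{\downarrow d(\varphi)} &=
  \varphi \otimes \psi^{\downarrow d(\varphi)\,\cap\, d(\psi)}
  && \text{(combination)}
\end{align*}
```

The combination axiom is the one that licenses local computation: marginalizing a combination only requires marginalizing the factor whose domain exceeds the target.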
Electronic Chips & Systems Design Languages outlines and describes the latest advances in design languages. The challenge of System on a Chip (SOC) design requires designers to work in a multi-lingual environment which is becoming increasingly difficult to master. It is therefore crucial for them to learn, almost in real time, from the experiences of their colleagues in the use of design languages and how these languages have become more advanced to cope with system design. System designers, as well as students aspiring to become system designers, often do not have the time to attend all the scientific events where they could learn the necessary information. This book brings them a selected digest of the best contributions and industrial-strength case studies. All the relevant levels of abstraction, from informal user requirements down to implementation specifications, are addressed by different contributors. The author, together with colleague authors who provide valuable additional experience, presents examples of actual industrial applications. Furthermore, the academic concepts presented in this book are up to date and provide excellent theory for student readers, making it suitable grounding for Ph.D. students.
Written by the members of the IFIP Working Group 2.3 (Programming Methodology), this text constitutes an exciting reference on the front line of research activity in programming methodology. The range of subjects reflects the current interests of the members, and offers insightful and controversial opinions on modern programming methods and practice. The material is arranged in thematic sections, each one introduced by a problem which epitomizes the spirit of that topic. The exemplary problem encourages vigorous discussion and forms the basis for an introduction/tutorial for its section.
The text contains a detailed and current presentation of the program analyses and transformations that extract the flow of data in computer memory systems. The emphasis is on a framework for optimizing the code of imperative programs and improving the efficiency of computer memory systems. In addition, the author shows that the correctness of program transformations is guaranteed by the conservation of data flow. Professionals and researchers in software engineering, computer engineering, program design analysis, and compiler design will benefit from its presentation of data-flow methods and memory optimization in compilers.
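As a flavor of the kind of data-flow information such analyses extract, here is a minimal reaching-definitions pass over a small control-flow graph. The CFG encoding and the example program are hypothetical, chosen only for illustration, and far simpler than the memory analyses the book develops.

```python
# Minimal reaching-definitions analysis, a standard forward data-flow problem.
from collections import defaultdict

def reaching_definitions(nodes, edges, defs):
    """nodes: ordered node ids; edges: (src, dst) pairs;
    defs: node -> variable defined there (absent nodes define nothing)."""
    preds = defaultdict(set)
    for s, d in edges:
        preds[d].add(s)
    IN = {n: set() for n in nodes}
    OUT = {n: set() for n in nodes}
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for n in nodes:
            new_in = set()
            for p in preds[n]:
                new_in |= OUT[p]            # meet: union over predecessors
            v = defs.get(n)
            # kill rival definitions of v, then generate (v, n)
            new_out = {d for d in new_in if d[0] != v}
            if v is not None:
                new_out.add((v, n))
            if new_in != IN[n] or new_out != OUT[n]:
                IN[n], OUT[n] = new_in, new_out
                changed = True
    return IN, OUT

# Node 1: x=...; node 2: y=...; node 3 (one branch): x=...; node 4: join.
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]
defs = {1: "x", 2: "y", 3: "x"}
IN, _ = reaching_definitions(nodes, edges, defs)
print(IN[4])   # both definitions of x, plus y's, reach the join
```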
Goal Directed Proof Theory presents a uniform and coherent methodology for automated deduction in non-classical logics, the relevance of which to computer science is now widely acknowledged. The methodology is based on goal-directed provability. It is a generalization of the logic programming style of deduction, and it is particularly favourable for proof search. The methodology is applied for the first time in a uniform way to a wide range of non-classical systems, covering intuitionistic, intermediate, modal and substructural logics. The book can also be used as an introduction to these logical systems from a procedural perspective. Readership: computer scientists, mathematicians and philosophers, and anyone interested in the automation of reasoning based on non-classical logics. The book is suitable for self-study, its only prerequisite being some elementary knowledge of logic and proof theory.
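The logic-programming style of deduction that goal-directed provability generalizes is easy to sketch in miniature. Below is a toy backward-chaining prover for propositional Horn clauses; the encoding is mine, and the book's uniform treatment covers far richer non-classical systems.

```python
# Goal-directed (backward-chaining) proof search for propositional Horn clauses.
# A program is a list of (head, [body atoms]); proving a goal means reducing it
# to the body of some clause whose head matches -- the logic-programming
# reading of implication.

def prove(goal, program, depth=20):
    if depth == 0:                      # crude guard against circular programs
        return False
    for head, body in program:
        if head == goal and all(prove(b, program, depth - 1) for b in body):
            return True
    return False

program = [("a", []), ("b", ["a"]), ("c", ["a", "b"])]
print(prove("c", program))   # True: c <- a, b;  b <- a;  a.
print(prove("d", program))   # False: no clause concludes d
```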
As information technologies become increasingly distributed and accessible to larger numbers of people, and as commercial and government organizations are challenged to scale their applications and services to larger market shares while reducing costs, there is demand for software methodologies and applications to provide the following features: richer application end-to-end functionality; reduction of human involvement in the design and deployment of the software; flexibility of software behaviour; and reuse and composition of existing software applications and systems in novel or adaptive ways. When designing new distributed software systems, these broad requirements and their translation into implementations are typically addressed by partially complementary and overlapping technologies, and this situation gives rise to significant software engineering challenges. Among the challenges that may arise are: determining the components that the distributed applications should contain, organizing the application components, and determining the assumptions that one needs to make in order to implement distributed, scalable and flexible applications.
This work is Volume II of a two-volume monograph on the theory of deterministic parsing of context-free grammars. Volume I, "Languages and Parsing" (Chapters 1 to 5), was an introduction to the basic concepts of formal language theory and context-free parsing. Volume II (Chapters 6 to 10) contains a thorough treatment of the theory of the two most important deterministic parsing methods: LR(k) and LL(k) parsing. Volume II is a continuation of Volume I; together these two volumes form an integrated work, with chapters, theorems, lemmas, etc. numbered consecutively. Volume II begins with Chapter 6, in which the classical constructions pertaining to LR(k) parsing are presented. These include the canonical LR(k) parser and its reduced variants such as the LALR(k) parser and the SLR(k) parser. The grammar classes for which these parsers are deterministic are called LR(k) grammars, LALR(k) grammars and SLR(k) grammars; properties of these grammars are also investigated in Chapter 6. A great deal of attention is paid to the rigorous development of the theory: detailed mathematical proofs are provided for most of the results presented.
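To make the flavor of LR parsing concrete, here is a hand-built SLR(1) shift-reduce parser for the toy grammar S → ( S ) | x. The states and tables were derived by hand from the LR(0) automaton and are illustrative only, not taken from the book.

```python
# Table-driven shift-reduce (SLR(1)) parser for  S -> ( S ) | x.
ACTION = {
    (0, "("): ("s", 2), (0, "x"): ("s", 3),
    (1, "$"): ("acc",),
    (2, "("): ("s", 2), (2, "x"): ("s", 3),
    (3, ")"): ("r", "S", 1), (3, "$"): ("r", "S", 1),   # reduce S -> x
    (4, ")"): ("s", 5),
    (5, ")"): ("r", "S", 3), (5, "$"): ("r", "S", 3),   # reduce S -> ( S )
}
GOTO = {(0, "S"): 1, (2, "S"): 4}

def parse(tokens):
    stack, toks = [0], list(tokens) + ["$"]
    while True:
        act = ACTION.get((stack[-1], toks[0]))
        if act is None:
            return False                  # no entry: syntax error
        if act[0] == "acc":
            return True
        if act[0] == "s":                 # shift: push the successor state
            stack.append(act[1])
            toks.pop(0)
        else:                             # reduce: pop the rhs, GOTO on lhs
            _, lhs, rhs_len = act
            del stack[-rhs_len:]
            stack.append(GOTO[(stack[-1], lhs)])

print(parse("((x))"))   # True
print(parse("(x"))      # False
```

The parser is deterministic because every (state, lookahead) pair has at most one table entry; grammars admitting such conflict-free tables are exactly the SLR(1) grammars studied in Chapter 6.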
Computer-Aided Reasoning: ACL2 Case Studies illustrates how the computer-aided reasoning system ACL2 can be used in productive and innovative ways to design, build, and maintain hardware and software systems. Included here are technical papers written by twenty-one contributors that report on self-contained case studies, some of which are sanitized industrial projects. The papers deal with a wide variety of ideas, including floating-point arithmetic, microprocessor simulation, model checking, symbolic trajectory evaluation, compilation, proof checking, real analysis, and several others.
This book constitutes the thoroughly refereed post-proceedings of the 4th International Conference on Software Language Engineering, SLE 2011, held in Braga, Portugal, in July 2011.
This Festschrift volume is published in honor of Dexter Kozen on the occasion of his 60th birthday. Dexter Kozen has been a leader in the development of Kleene Algebras (KAs). The contributions in this volume reflect the breadth of his work and influence. The volume includes 19 full papers related to Dexter Kozen's research. They deal with coalgebraic methods; congruence closure; the completeness of various programming logics; decision procedures for logics; alternation; algorithms and complexity; and programming languages and program analysis. The second part of this volume includes laudatios from several collaborators, students and friends, including the members of his current band.
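For readers new to the area, the object at the center of much of this work is compact enough to state. A Kleene algebra, in Kozen's axiomatization (sketched here from memory), is an idempotent semiring (K, +, ·, 0, 1), with a ≤ b defined as a + b = b, whose star operation satisfies:

```latex
\begin{align*}
1 + a\,a^{*} &\le a^{*} & 1 + a^{*}\,a &\le a^{*}\\
b + a\,x \le x \;&\Rightarrow\; a^{*}\,b \le x & b + x\,a \le x \;&\Rightarrow\; b\,a^{*} \le x
\end{align*}
```

The two implications make a* the least solution of the corresponding fixpoint inequations, which is what makes the axiomatization complete for the equational theory of regular languages.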
It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.
This volume, the 8th in the Transactions on Aspect-Oriented Software Development series, contains two regular submissions and a special section, consisting of five papers, on the industrial applications of aspect technology. The regular papers describe a framework for constructing aspect weavers, and patterns for reusable aspects. The special section begins with an invited contribution on how AspectJ is making its way from an exciting new hype topic to a valuable technology in enterprise computing. The remaining four papers each cover different industrial applications of aspect technology, which include a telecommunication platform, a framework for embedding user assistance in independently developed applications, a platform for digital publishing, and a framework for program code analysis and manipulation.
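Aspect weaving itself is language-level machinery; as a loose illustration of the underlying idea (attaching cross-cutting behavior to code that never mentions it), here is a toy Python "weaver" for a logging aspect. It is an analogy for what AspectJ does, not a rendering of any framework in the volume.

```python
# Toy "aspect weaving": attach cross-cutting advice (logging) to every public
# method of a class without editing the class itself.
import functools

def weave_logging(cls):
    for name, fn in list(vars(cls).items()):
        if callable(fn) and not name.startswith("_"):   # skips __init__ etc.
            @functools.wraps(fn)
            def advised(self, *args, __fn=fn, __name=name, **kw):
                print(f"entering {cls.__name__}.{__name}")
                try:
                    return __fn(self, *args, **kw)
                finally:
                    print(f"leaving {cls.__name__}.{__name}")
            setattr(cls, name, advised)
    return cls

@weave_logging
class Account:
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount

acct = Account(100)
acct.deposit(50)        # prints entering/leaving around the call
print(acct.balance)     # 150
```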
The Verilog Hardware Description Language was first introduced in 1984. Over the 20-year history of Verilog, every Verilog engineer has developed his own personal "bag of tricks" for coding with Verilog. These tricks enable modeling or verifying designs more easily and more accurately. Developing this bag of tricks is often based on years of trial and error. Through experience, engineers learn that one specific coding style works best in some circumstances, while in another situation, a different coding style is best. As with any high-level language, Verilog often provides engineers several ways to accomplish a specific task. Wouldn't it be wonderful if an engineer first learning Verilog could start with another engineer's bag of tricks, without having to go through years of trial and error to decide which style is best for which circumstance? That is where this book becomes an invaluable resource. The book presents dozens of Verilog tricks of the trade on how to best use the Verilog HDL for modeling designs at various levels of abstraction, and for writing test benches to verify designs. The book not only shows the correct ways of using Verilog for different situations, it also presents alternate styles and discusses the pros and cons of these styles.
In a model-based development of software systems, different views on a system are elaborated using appropriate modeling languages and techniques. Because of the unavoidable heterogeneity of the viewpoint models, a semantic integration is required to establish the correspondences of the models and to allow checking of their relative consistency. The integration approach introduced in this book is based on a common semantic domain of abstract systems, their composition and development. Its applicability is shown through semantic interpretations and compositional comparisons of different specification approaches. These range from formal specification techniques like process calculi, Petri nets and rule-based formalisms to semiformal software modeling languages like those in the UML family.
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biennially under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, partly owed to the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries; notably, the support obtained made possible what was probably the first participation of six young researchers. The number of East-European participants was relatively high. These results are especially valuable since, in contrast to the usual two-year period, the present meeting was organized just one year after the last SCAN-xx conference.
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
There is an established interest in integrating databases and programming languages. This book on Data Types and Persistence evolved from the proceedings of a workshop held at the Appin in August 1985. The purpose of the Appin workshop was to focus on these two aspects, persistence and data types, and to bring together people from various disciplines who have thought about these problems. Particular topics of interest include the design of type systems appropriate for database work, the representation of persistent objects such as data types and modules, and the provision of orthogonal persistence and certain aspects of transactions and concurrency. The programme was broken into three sessions, morning, late afternoon and evening, to allow the participants to take advantage of two beautiful days in the Scottish Highlands. The financial assistance of the Science and Engineering Research Council, the National Science Foundation and International Computers Ltd. is gratefully acknowledged. We would also like to thank Isabel Graham, Anne Donnelly and Estelle Taylor for their help in organising the workshop. Finally our thanks to Pete Bailey, Ray Carick and Dave Munro for the immense task they undertook in typesetting the book. The convergence of programming languages and databases to a coherent and consistent whole requires ideas from, and adjustment in, both intellectual camps. The first group of chapters in this book present ideas and adjustments coming from the programming language research community. This community frequently discusses types and uses them as a framework for other discussions.
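Orthogonal persistence, one of the workshop's central topics, means that values of any type can outlive a program run without per-type save/load code. Python's shelve module gives a rough, much weaker flavor of the idea; the file name and stored values below are arbitrary examples.

```python
# Rough flavor of orthogonal persistence: ordinary values survive across
# program runs with no per-type I/O code. shelve is only a stand-in for the
# fully integrated persistent systems the workshop studied.
import shelve

with shelve.open("store") as db:        # "store" is an arbitrary file name
    db.setdefault("visits", 0)
    db["visits"] += 1                   # an int persists like anything else
    db["config"] = {"depth": 3, "tags": ["a", "b"]}   # any picklable value
    print("run number", db["visits"])   # increments on every execution
```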
This book constitutes the thoroughly refereed post-conference proceedings. The 13 revised full papers presented were carefully reviewed and selected.
During the last three decades several different styles of semantics for programming languages have been developed. This book compares two of them: the operational and the denotational approach. On the basis of several examples we show how to define operational and denotational semantic models for programming languages. Furthermore, we introduce a general technique for comparing various semantic models for a given language. We focus on different degrees of nondeterminism in programming languages. Nondeterminism arises naturally in concurrent languages. It is also an important concept in specification languages. In the examples discussed, the degree of nondeterminism ranges from a choice between two alternatives to a choice between a collection of alternatives indexed by a closed interval of the real numbers. The former arises in a language with nondeterministic choices. A real time language with dense choices gives rise to the latter. We also consider the nondeterministic random assignment and parallel composition, both couched in a simple language. Besides nondeterminism, our four example languages contain some form of recursion, a key ingredient of programming languages.
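The comparison the book performs can be miniaturized. Below, for a toy term language whose only construct beyond values is binary nondeterministic choice, an operational model (collect the results reachable by small steps) and a denotational model (map each term directly to a set of outcomes) are defined and checked to agree. The encoding is mine and vastly simpler than the book's four languages.

```python
# Toy terms: an int (a value) or ("or", t1, t2) (nondeterministic choice).

def step(t):
    """One-step reductions; values do not step, a choice resolves either way."""
    if isinstance(t, int):
        return []
    _, a, b = t
    return [a, b]

def operational(t):
    """Operational model: set of terminal results reachable by small steps."""
    if isinstance(t, int):
        return {t}
    out = set()
    for s in step(t):
        out |= operational(s)
    return out

def denotational(t):
    """Denotational model: compositional map from terms to outcome sets."""
    if isinstance(t, int):
        return {t}
    _, a, b = t
    return denotational(a) | denotational(b)

t = ("or", 1, ("or", 2, 3))
assert operational(t) == denotational(t) == {1, 2, 3}
print(operational(t))
```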
The second half of the 1970s was marked by impressive advances in array/vector architectures and vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential. Nevertheless, it is very difficult to realize this performance potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming." Parallel programming in particular is a rapidly evolving art and, at present, highly empirical. In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. In the first part (Chapters 1 and 2) we set the stage and focus on program transformations and parallelizing compilers. The second part of this book (Chapters 3 and 4) discusses scheduling for parallel machines from the practical point of view (macro- and microtasking, and supporting environments). Finally, the last part ...
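As a flavor of what a parallelizing compiler must establish before transforming a loop, here is a crude cross-iteration dependence check for array subscripts of the form i + c. The encoding is a toy of my own, not a test from the book.

```python
# A loop writing a[i] while reading a[i + c] with c != 0 carries a
# cross-iteration dependence and cannot naively run in parallel.

def carries_dependence(write_offset, read_offsets):
    """Subscripts are i + offset; any read offset differing from the write
    offset means one iteration touches another iteration's element."""
    return any(r != write_offset for r in read_offsets)

# for i: a[i] = a[i] + b[i]    -> iterations independent, parallelizable
print(carries_dependence(0, [0]))    # False
# for i: a[i] = a[i-1] + b[i]  -> each iteration needs the previous one
print(carries_dependence(0, [-1]))   # True
```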
This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. It contains an introduction to classical graphs. Basic and advanced results are first shown for an abstract form of replacement systems and are then instantiated to several forms of graph and Petri net transformation systems. The book develops typed attributed graph transformation and contains a practical case study.
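The algebraic approach works categorically (via the double pushout); as a very loose operational rendering, a rule L ⊇ K ⊆ R deletes the matched image of L \ K and adds fresh copies of R \ K. The toy sketch below works on plain node sets, ignores edges on the right-hand side and the gluing conditions, and is purely illustrative.

```python
# Very loose operational rendering of applying a rule L <- K -> R to a graph.
def apply_rule(graph, L, K, R, match):
    """graph: (nodes, edges); L, K, R: node sets with K <= L and K <= R;
    match: injective dict from L's nodes to graph nodes (assumed valid)."""
    nodes, edges = set(graph[0]), set(graph[1])
    deleted = {match[n] for n in L - K}
    nodes -= deleted                     # remove matched L \ K ...
    edges = {(a, b) for (a, b) in edges
             if a not in deleted and b not in deleted}  # ... with dangling edges
    nodes |= {f"new_{n}" for n in R - K}  # add fresh copies of R \ K
    return nodes, edges

G = ({"u", "v", "w"}, {("u", "v"), ("v", "w")})
L, K, R = {"x", "y"}, {"x"}, {"x", "z"}   # rule: delete y, create z
print(apply_rule(G, L, K, R, {"x": "u", "y": "v"}))
# nodes {'u', 'w', 'new_z'} and no edges: v and its incident edges removed
```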
This book constitutes the refereed proceedings of the Fourth International Symposium on NASA Formal Methods, NFM 2012, held in Norfolk, VA, USA, in April 2012. The 36 revised regular papers presented together with 10 short papers and 3 invited talks were carefully reviewed and selected from 93 submissions. The topics are organized in topical sections on theorem proving, symbolic execution, model-based engineering, real-time and stochastic systems, model checking, abstraction and abstraction refinement, compositional verification techniques, static and dynamic analysis techniques, fault protection, cyber security, specification formalisms, requirements analysis, and applications of formal techniques.
This multi-function volume starts off as an ideal basic textbook for teaching object modeling, learning fundamental concepts, and designing systems with the thirteen UML diagrams. But it also contains a whole section devoted to advanced research topics, samples, and case studies. It is an essential work for any system developer or graduate student in a discipline that requires the power of object modeling as part of a development methodology.
This Festschrift, published in honor of Bernhard Thalheim on the occasion of his 60th birthday, presents 20 articles by colleagues from all over the world with whom Bernhard Thalheim has cooperated in various respects; also included is a scientific biography contributed by the volume editors. The 20 contributions reflect the breadth and the depth of the work of Bernhard Thalheim in conceptual modeling and database theory during a scientific career spanning more than 35 years of active research. In particular, ten articles focus on topics like database dependency theory, object-oriented databases, triggers, abstract state machines, database and information systems design, web semantics, and business processes.
You may like...
- Dark Silicon and Future On-chip Systems… (Suyel Namasudra, Hamid Sarbazi-Azad), Hardcover, R3,940
- Java How to Program, Late Objects… (Paul Deitel, Harvey Deitel), Paperback
- Advanced Visual Basic 6 - Power… (Matthew Curland, Gary Clarke), Paperback, R1,273