This book brings together experts to discuss relevant results in software process modeling and to express their personal views of the field. It is designed for a professional audience of researchers and practitioners in industry, as well as graduate-level students.
Reasoning under uncertainty is always based on a specified language or formalism, including its particular syntax and semantics, but also on its associated inference mechanism. The present volume of the handbook treats the latter aspect: the algorithmic aspects of uncertainty calculi. Theory has sufficiently advanced to unfold some generally applicable fundamental structures and methods; on the other hand, particular features of specific formalisms and approaches to uncertainty of course still strongly influence the computational methods to be used. Both general and specific methods are included in this volume. Broadly speaking, symbolic or logical approaches to uncertainty are often distinguished from numerical approaches. Although this distinction is somewhat misleading, it is used as a means to structure the present volume, and it is even reflected to some degree in the first two chapters, which treat fundamental, general methods of computation in systems designed to represent uncertainty. It was noted early by Shenoy and Shafer that computations in different domains have an underlying common structure: essentially, pieces of knowledge or information are combined and then focused on some particular question or domain. This can be captured in an algebraic structure called a valuation algebra, which is described in the first chapter. Here the basic operations of combination and focusing (marginalization) of knowledge and information are modeled abstractly, subject to simple axioms.
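To make this concrete, the "simple axioms" can be stated compactly. The following is a standard formulation in the Shenoy-Shafer tradition (a generic sketch, not necessarily the chapter's exact notation): valuations \varphi, \psi carry domains d(\varphi), d(\psi), and the operations are combination \otimes and marginalization \downarrow:

    \varphi \otimes \psi = \psi \otimes \varphi, \qquad (\varphi \otimes \psi) \otimes \chi = \varphi \otimes (\psi \otimes \chi)   % combination is commutative and associative
    d(\varphi \otimes \psi) = d(\varphi) \cup d(\psi)                                                                               % labeling
    (\varphi^{\downarrow y})^{\downarrow x} = \varphi^{\downarrow x} \quad \text{for } x \subseteq y \subseteq d(\varphi)           % marginalization is transitive
    (\varphi \otimes \psi)^{\downarrow d(\varphi)} = \varphi \otimes \psi^{\downarrow d(\varphi) \cap d(\psi)}                      % combination axiom

The last axiom is what makes local computation possible: a query can be answered by combining marginals over small domains instead of building the full joint.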
Contributions on UML address the application of UML in the specification of embedded HW/SW systems. C-Based System Design embraces the modeling of operating systems, modeling with different models of computation, generation of test patterns, and experiences from case studies with SystemC. Analog and Mixed-Signal Systems covers rules for solving general modeling problems in VHDL-AMS, modeling of multi-nature systems, synthesis, and modeling of mixed-signal systems with SystemC. Languages for formal methods are addressed by contributions on formal specification and refinement of hybrid, embedded, and real-time systems.
As information technologies become increasingly distributed and accessible to larger numbers of people, and as commercial and government organizations are challenged to scale their applications and services to larger market shares while reducing costs, there is demand for software methodologies and applications that provide the following features: richer end-to-end application functionality; reduced human involvement in the design and deployment of the software; flexibility of software behaviour; and reuse and composition of existing software applications and systems in novel or adaptive ways. When designing new distributed software systems, these broad requirements and their translation into implementations are typically addressed by partially complementary and overlapping technologies, a situation that gives rise to significant software engineering challenges. Among these challenges are determining the components that distributed applications should contain, organizing the application components, and determining the assumptions one needs to make in order to implement distributed, scalable, and flexible applications.
Electronic Chips & Systems Design Languages outlines and describes the latest advances in design languages. The challenge of System on a Chip (SOC) design requires designers to work in a multi-lingual environment that is becoming increasingly difficult to master. It is therefore crucial for them to learn, almost in real time, from the experiences of their colleagues in the use of design languages and from how these languages have advanced to cope with system design. System designers, as well as students aiming to become system designers, often do not have the time to attend all the scientific events where they could learn the necessary information. This book brings them a selected digest of the best contributions and industry-strength case studies. All the relevant levels of abstraction, from informal user requirements down to implementation specifications, are addressed by different contributors. The author, together with colleague authors who provide valuable additional experience, presents examples of actual industrial applications. Furthermore, the academic concepts presented in this book offer sound, up-to-date theory to student readers and suitable grounding for Ph.D. students.
The text contains a detailed and current presentation of the program analyses and transformations that extract the flow of data in computer memory systems. The emphasis is on a framework for optimizing code for imperative programs and improving the efficiency of computer systems. In addition, the author shows that the correctness of program transformations is guaranteed by the conservation of data flow. Professionals and researchers in software engineering, computer engineering, program design analysis, and compiler design will benefit from its presentation of data-flow methods and memory optimization of compilers.
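As a minimal illustration of that correctness criterion (a hypothetical example, not drawn from the book), consider loop fusion: the transformation is legal precisely because every flow (def-use) dependence of the original code is preserved:

    def original(a):
        # S1 defines b[i]; S2 uses b[i]: a flow dependence S1 -> S2.
        b = [x * 2 for x in a]          # S1
        return [y + 1 for y in b]       # S2

    def fused(a):
        # Fused form: in every iteration S1 still executes before S2,
        # so the data flow of the original program is conserved.
        return [(x * 2) + 1 for x in a]

    assert original([1, 2, 3]) == fused([1, 2, 3])   # same data flow, same result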
Written by the members of the IFIP Working Group 2.3 (Programming Methodology), this text constitutes an exciting reference on the front line of research activity in programming methodology. The range of subjects reflects the current interests of the members, and offers insightful and controversial opinions on modern programming methods and practice. The material is arranged in thematic sections, each one introduced by a problem which epitomizes the spirit of that topic. The exemplary problem encourages vigorous discussion and forms the basis for an introduction/tutorial to its section.
I am very pleased to write these few brief paragraphs introducing this book, and would like to take this opportunity to set the Toolpack project in an appropriate historical context. The Toolpack project actually began in the Fall of 1978, when Prof. Webb C. Miller, at a meeting at the Jet Propulsion Laboratory in Pasadena, California, suggested that there be a large-scale project, called Toolpack, aimed at pulling together a comprehensive collection of mathematical software development tools. It was suggested that the project follow the pattern of other "Pack" projects, such as Eispack, Linpack, and Funpack, which had assembled and systematized comprehensive collections of mathematical software in such areas as eigenvalue computation, linear equation solution, and special function approximation. From the beginning it was recognized that the Toolpack project would differ significantly from these earlier "Pack" projects in that it was attempting to assemble and systematize software in an area which was not well established and understood. Thus it was not clear how to organize and integrate the tools we were to collect into Toolpack. As a consequence, Toolpack became simultaneously a research project and a development project. The research was aimed at determining effective strategies for large-scale integration of large-scale software tools, and the development project was aimed at implementing these strategies and using them to put high-quality tools at the disposal of working mathematical software writers.
You might well wonder why TeX in Practice is part of the Monographs in Visualization series. However, if you really think about typesetting, especially fine typesetting, you soon realize that in large part it is a visual art as well as a science. TeX's algorithms produce, in almost all cases, aesthetic results of the highest quality. On the other hand, occasionally one may want to insert some additional space before a subscript or superscript, or adjust the vertical spacing in a fraction. Fortunately Donald Knuth, the author of TeX, allows one to program such corrections easily where needed. The four volumes of Stephan von Bechtolsheim's long-awaited TeX in Practice present a comprehensive view of TeX. His thorough discussion of each aspect of TeX is liberally laced with cogent illustrative examples. Many of these examples represent complete, ready-to-use macros that enhance the capabilities of TeX. These examples are of particular interest to both the typesetter and the TeX programmer. The typesetter can often solve an immediate problem by either using one of the examples directly or making minor changes to adapt it to the problem at hand. The TeX programmer can use the examples, along with Stephan's detailed discussion, to increase both the depth and breadth of his or her knowledge of TeX. The value of the text is further enhanced by Stephan's concerted effort to explain the reasoning behind each topic or example.
Software Engineering with OBJ: Algebraic Specification in Action is a comprehensive introduction to OBJ, the most widely used algebraic specification system. As a formal specification language, OBJ makes specifications and designs more precise and easier to read, as well as making maintenance easier and more accurate. OBJ differs from most other specification languages not just in having a formal semantics, but in being executable, either through symbolic execution with term rewriting, or more generally through theorem proving. One problem with specifications is that they are often wrong. OBJ can help validate specifications by executing test cases, and by proving properties. As well as providing a detailed introduction to the language and the OBJ system that implements it, Software Engineering with OBJ: Algebraic Specification in Action provides case studies by leading practitioners in the field, in areas such as computer graphics standards, hardware design, and parallel computation. The case studies demonstrate that OBJ can be used in a wide variety of ways to achieve a wide variety of practical aims in the system development process. The papers on various OBJ systems also demonstrate that the language is relatively easy to understand, implement, and use, and that it supports formal reasoning in a straightforward but powerful way. Software Engineering with OBJ: Algebraic Specification in Action will be of interest to students and teachers in the areas of data types, programming languages, semantics, theorem proving, and algebra, as well as to researchers and practitioners in software engineering.
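To give a flavor of symbolic execution by term rewriting (sketched here in Python rather than OBJ, using the usual textbook stack equations rather than an example from the book): a specification's equations, read left to right, become rewrite rules, and executing a specification means rewriting a term to its normal form.

    def rewrite(term):
        """Apply the stack equations top(push(s, x)) = x and
        pop(push(s, x)) = s, innermost subterms first."""
        if isinstance(term, tuple):
            head, *args = term
            args = [rewrite(a) for a in args]   # rewrite subterms first
            if head == "top" and isinstance(args[0], tuple) and args[0][0] == "push":
                return args[0][2]               # top(push(s, x)) -> x
            if head == "pop" and isinstance(args[0], tuple) and args[0][0] == "push":
                return args[0][1]               # pop(push(s, x)) -> s
            return (head, *args)
        return term

    # top(pop(push(push(empty, 1), 2))) rewrites to 1
    print(rewrite(("top", ("pop", ("push", ("push", "empty", 1), 2)))))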
Multi-Agent Programming is an essential reference for anyone interested in the most up-to-date developments in MAS programming. While previous research has focused on the development of formal and informal approaches to analyze and specify Multi-Agent Systems, this book focuses on the development of programming languages and tools which not only support MAS programming, but also implement key concepts of MAS in a unified framework. Part I describes approaches that rely on computational logic or process algebra: Jason, 3APL, IMPACT, and CLAIM/SyMPA. Part II presents languages and platforms that extend or are based on Java: JADE, Jadex, and JACK™. Part III provides two significant industry-specific applications: the DEFACTO system for coordinating human-agent teams for disaster response, and the ARTIMIS rational dialogue agent technology. Also featured are seven appendices for quick reference and comparison.
Duration calculus constitutes a formal approach to the development of real-time systems; as an interval logic with special features for expressing and analyzing time durations of states in real-time systems, it allows for representing and formally reasoning about requirements and designs at an appropriate level of abstraction. This book presents the logical foundations of duration calculus in a coherent and thorough manner. Through selective case studies it explains how duration calculus can be applied to the formal specification and verification of real-time systems. The book also contains an extensive survey of the current research in this field. The material included in this book has been used for graduate and postgraduate courses, while it is also suitable for experienced researchers and professionals.
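For a taste of the notation, here is the classic gas-burner requirement widely used in the duration calculus literature (a standard example, not necessarily one of this book's case studies): in every observation interval at least 60 seconds long, the accumulated duration of the Leak state must not exceed one twentieth of the interval:

    \Box \left( \ell \geq 60 \;\Rightarrow\; 20 \int \mathit{Leak} \leq \ell \right)

Here \ell denotes the length of the interval, \int \mathit{Leak} the total time the system spends in state Leak within it, and \Box quantifies over all subintervals.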
Extensive research and development has produced mutation tools for languages such as Fortran, Ada, C, and IDL; empirical evaluations comparing mutation with other test adequacy criteria; empirical evidence and theoretical justification for the coupling effect; and techniques for speeding up mutation testing using various types of high-performance architectures. Mutation has received the attention of software developers and testers in such diverse areas as network protocols and nuclear simulation. Mutation Testing for the New Century brings together cutting-edge research results in mutation testing from a wide range of researchers. This book provides answers to key questions related to mutation and raises questions yet to be answered. It is an excellent resource for researchers, practitioners, and students of software engineering.
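The core loop of mutation analysis is easy to sketch (hypothetical Python, not taken from the book): generate a mutant by a small syntactic change, run the test suite, and record whether some test "kills" it; a surviving mutant points at a weakness in the suite.

    import operator

    # Program under test: add(a, b) = a + b. A classic mutation operator
    # replaces one arithmetic operator, here + with *.
    def killed(mutant_op, tests):
        """A mutant is killed if at least one test case fails on it."""
        return any(mutant_op(a, b) != expected for (a, b), expected in tests)

    tests = [((2, 2), 4)]                    # weak suite: 2 + 2 == 2 * 2
    print(killed(operator.mul, tests))       # False -> mutant survives
    tests.append(((2, 3), 5))                # add a distinguishing test
    print(killed(operator.mul, tests))       # True  -> mutant killed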
This work is Volume II of a two-volume monograph on the theory of deterministic parsing of context-free grammars. Volume I, "Languages and Parsing" (Chapters 1 to 5), was an introduction to the basic concepts of formal language theory and context-free parsing. Volume II (Chapters 6 to 10) contains a thorough treatment of the theory of the two most important deterministic parsing methods: LR(k) and LL(k) parsing. Volume II is a continuation of Volume I; together these two volumes form an integrated work, with chapters, theorems, lemmas, etc. numbered consecutively. Volume II begins with Chapter 6, in which the classical constructions pertaining to LR(k) parsing are presented. These include the canonical LR(k) parser and its reduced variants such as the LALR(k) parser and the SLR(k) parser. The grammar classes for which these parsers are deterministic are called LR(k) grammars, LALR(k) grammars, and SLR(k) grammars; properties of these grammars are also investigated in Chapter 6. A great deal of attention is paid to the rigorous development of the theory: detailed mathematical proofs are provided for most of the results presented.
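A standard textbook example (not drawn from this volume) gives a feel for how these grammar classes differ. The grammar

    S \rightarrow L = R \mid R, \qquad L \rightarrow {*}R \mid \mathit{id}, \qquad R \rightarrow L

is LR(1) but not SLR(1): in the state reached after reading L, an SLR(1) parser faces a shift-reduce conflict between shifting '=' and reducing by R \rightarrow L, since '=' belongs to FOLLOW(R); the finer lookaheads of a canonical LR(1) parser resolve the conflict.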
The Verilog Hardware Description Language was first introduced in 1984. Over the 20-year history of Verilog, every Verilog engineer has developed a personal "bag of tricks" for coding with Verilog. These tricks enable modeling or verifying designs more easily and more accurately. Developing this bag of tricks is often based on years of trial and error; through experience, engineers learn that one specific coding style works best in some circumstances, while in another situation a different coding style is best. As with any high-level language, Verilog often provides engineers several ways to accomplish a specific task. Wouldn't it be wonderful if an engineer first learning Verilog could start with another engineer's bag of tricks, without having to go through years of trial and error to decide which style is best for which circumstance? That is where this book becomes an invaluable resource. It presents dozens of Verilog tricks of the trade on how to best use the Verilog HDL for modeling designs at various levels of abstraction, and for writing testbenches to verify designs. The book not only shows the correct ways of using Verilog for different situations; it also presents alternate styles and discusses their pros and cons.
This book provides an introductory overview of the rapid growth in interdisciplinary research into Thinking with Diagrams. Diagrammatic representations are becoming more common in everyday human experience, yet they offer unique challenges to cognitive science research. Neither linguistic nor perceptual theories are sufficient to completely explain their advantages and applications. These research challenges may be part of the reason why so many diagrams are badly designed or badly used. This is ironic when the user interfaces of computer software and the worldwide web are becoming so completely dominated by graphical and diagrammatic representations. This book includes chapters commissioned from leading researchers in the major disciplines involved in diagrams research. They review the philosophical status of diagrams, the cognitive processes involved in their application, and a range of specialist fields in which diagrams are central, including education, architectural design and visual programming languages. The result is immediately relevant to researchers in cognitive science and artificial intelligence, as well as in applied technology areas such as human-computer interaction and information design.
In a model-based development of software systems, different views on a system are elaborated using appropriate modeling languages and techniques. Because of the unavoidable heterogeneity of the viewpoint models, a semantic integration is required to establish the correspondences of the models and to allow checking of their relative consistency. The integration approach introduced in this book is based on a common semantic domain of abstract systems, their composition, and their development. Its applicability is shown through semantic interpretations and compositional comparisons of different specification approaches. These range from formal specification techniques like process calculi, Petri nets, and rule-based formalisms to semiformal software modeling languages like those in the UML family.
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
There is an established interest in integrating databases and programming languages. This book on Data Types and Persistence evolved from the proceedings of a workshop held at the Appin in August 1985. The purpose of the Appin workshop was to focus on these two aspects, persistence and data types, and to bring together people from various disciplines who have thought about these problems. Particular topics of interest include the design of type systems appropriate for database work, the representation of persistent objects such as data types and modules, and the provision of orthogonal persistence and certain aspects of transactions and concurrency. The programme was broken into three sessions, morning, late afternoon, and evening, to allow the participants to take advantage of two beautiful days in the Scottish Highlands. The financial assistance of the Science and Engineering Research Council, the National Science Foundation, and International Computers Ltd. is gratefully acknowledged. We would also like to thank Isabel Graham, Anne Donnelly and Estelle Taylor for their help in organising the workshop. Finally, our thanks to Pete Bailey, Ray Carick and Dave Munro for the immense task they undertook in typesetting the book. The convergence of programming languages and databases to a coherent and consistent whole requires ideas from, and adjustment in, both intellectual camps. The first group of chapters in this book presents ideas and adjustments coming from the programming language research community. This community frequently discusses types and uses them as a framework for other discussions.
Welcome to the 5th International Conference on Open Source Systems! It is quite an achievement to reach the five-year mark - that's the sign of a successful enterprise. This annual conference is now being recognized as the primary event for the open source research community, attracting not only high-quality papers, but also building a community around a technical program, a collection of workshops, and (starting this year) a Doctoral Consortium. Reaching this milestone reflects the efforts of many people, including the conference founders, as well as the organizers and participants in the previous conferences. My task has been easy, and has been greatly aided by the hard work of Kevin Crowston and Cornelia Boldyreff, the Program Committee, as well as the Organizing Team led by Bjoern Lundell. All of us are also grateful to our attendees, especially in the difficult economic climate of 2009. We hope the participants found the conference valuable both for its technical content and for its personal networking opportunities. To me, it is interesting to look back over the past five years, not just at this conference, but at the development and acceptance of open source software. Since 2004, the business and commercial side of open source has grown enormously. At that time, there were only a handful of open source businesses, led by RedHat and its Linux distribution. Companies such as MySQL and JBoss were still quite small.
The contributors present the main results and techniques of their specialties in an easily accessible way, accompanied by many references: historical notes, hints for complete proofs or solutions to exercises, and directions for further research. This volume contains applications which have not appeared in any collection of this type. The book is a general source of information in computation theory, at both the undergraduate and research levels.
The second half of the 1970s was marked by impressive advances in array/vector architectures and in vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can skip over performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential; nevertheless, it is very difficult to realize this performance potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming." Parallel programming in particular is a rapidly evolving art and, at present, highly empirical. In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. In the first part (Chapters 1 and 2) we set the stage and focus on program transformations and parallelizing compilers. The second part (Chapters 3 and 4) discusses scheduling for parallel machines from the practical point of view (macro- and microtasking and supporting environments).
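The kind of loop a parallelizing compiler looks for is one without cross-iteration dependences; here is a minimal sketch (hypothetical Python, not from the book) of the serial and parallelized forms of such a loop:

    from concurrent.futures import ProcessPoolExecutor

    def body(i):
        # Each iteration reads and writes only its own data, so there is
        # no cross-iteration dependence: iterations may run in parallel.
        return i * i

    if __name__ == "__main__":
        serial = [body(i) for i in range(8)]           # serial loop
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(body, range(8)))  # parallelized loop
        assert serial == parallel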
As part of the best-selling Pocket Primer series, this book is designed to prepare programmers for machine learning and deep learning/TensorFlow topics. It begins with a quick introduction to Python, followed by chapters that discuss NumPy, Pandas, Matplotlib, and scikit-learn. The final two chapters contain an assortment of TensorFlow 1.x code samples, including detailed code samples for TensorFlow Dataset (which is used heavily in TensorFlow 2 as well). A TensorFlow Dataset refers to the classes in the tf.data.Dataset namespace that enable programmers to construct a data pipeline by method chaining so-called lazy operators, e.g., map(), filter(), batch(), and so forth, based on data from one or more data sources. Companion files with source code are available for download from the publisher. Features: a practical introduction to Python, NumPy, Pandas, Matplotlib, and introductory aspects of TensorFlow 1.x; relevant NumPy/Pandas code samples typical of machine learning topics, as well as TensorFlow 1.x code samples for deep learning; many examples of TensorFlow Dataset APIs with lazy operators, e.g., map(), filter(), batch(), take(), and method chaining of such operators; assumes the reader has very limited experience; companion files with all of the source code examples (downloadable from the publisher).
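A brief sketch of the method chaining the blurb describes, using the public tf.data.Dataset API (the operator names are the real API; the data and the TF 2-style eager iteration are illustrative assumptions):

    import tensorflow as tf

    dataset = (tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6, 7, 8])
               .map(lambda x: x * 2)       # lazily transform each element
               .filter(lambda x: x > 4)    # lazily keep elements > 4
               .batch(2)                   # group elements into batches of 2
               .take(2))                   # take only the first 2 batches

    for batch in dataset:                  # TF 2 eager iteration; TF 1.x
        print(batch.numpy())               # would use a one-shot iterator

Nothing is computed until the pipeline is iterated, which is what makes these operators "lazy."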
This multi-function volume starts off as an ideal basic textbook for teaching object modeling: learning the fundamental concepts and designing systems with the thirteen UML diagrams. It also contains a whole section devoted to advanced research topics, samples, and case studies. It is an essential work for any system developer or graduate student in a discipline that requires the power of object modeling as part of a development methodology.
Programming Languages: An Active Learning Approach introduces students to three programming paradigms: object-oriented/imperative languages using C++ and Ruby, functional languages using Standard ML, and logic programming using Prolog. This interactive textbook is intended to be used in and outside of class. Each chapter follows a pattern of presenting a topic followed by practice exercises that encourage students to try what they have just read. This textbook is best suited for students with a two- to three-course introduction to imperative programming. Key features: (1) an accessible structure that guides the student through various programming languages; (2) seamlessly integrated practice exercises; (3) classroom-tested material; (4) online support materials.