This volume constitutes the proceedings of the 12th British National Conference on Databases (BNCOD-12), held in Guildford, Surrey in July 1994. The BNCOD conferences are intended as a platform for exchange between theoreticians and practitioners, where researchers from academia and industry meet professionals interested in advanced database applications. The 13 refereed papers presented in the proceedings were selected from 47 submissions; they are organized in chapters on temporal databases, formal approaches, parallel databases, object-oriented databases, and distributed databases. In addition there are two invited presentations: "Managing open systems now that the 'Glasshouse' has gone" by R. Baker and "Knowledge reuse through networks of large KBs" by P.M.D. Gray.
Interest has grown rapidly over the past dozen years in the application of object-oriented programming and methods to the development of distributed, open systems. This volume presents the proceedings of a workshop intended to assess the current state of research in this field and to facilitate interaction between groups working on very different aspects of object-oriented distributed systems. The workshop was held as part of the 1993 European Conference on Object-Oriented Programming (ECOOP '93). Over fifty people submitted position papers and participated in the workshop, and almost half presented papers. The presented papers were carefully reviewed and revised after the workshop, and 14 papers were selected for this volume.
The International Conference on Compiler Construction provides a
forum for presentation and discussion of recent developments in the
area of compiler construction, language implementation and language
design. Its scope ranges from compilation methods and tools to
implementation techniques for specific requirements on languages
and target architectures. It also includes language design and
programming environment issues which are related to language
translation. There is an emphasis on practical and efficient
techniques.
Jürgen Nehmer: Load distribution is a very important concept for distributed systems in order to achieve better performance, resource utilization and response times. Providing efficient mechanisms for the transparent support of load distribution has proven to be an extremely difficult undertaking. As a matter of fact, there is no commercially available system which provides transparent load distribution right now. The monograph by D. Milojicic presents a novel load distribution scheme based on modern microkernel architectures. The remarkable results of D. Milojicic's approach show evidence for his hypothesis that load distribution is feasible even under strong efficiency constraints if built upon microkernel architectures. Based on a complete implementation using the NORMA version of Mach, D. Milojicic shows that substantial performance improvements of his load distribution scheme on top of Mach result from the dramatic reduction of state information to be managed in the course of a task migration. For readers not familiar with the topic, the monograph gives a good survey of the load distribution problem and puts existing approaches into perspective. Contents (excerpt): Preface; 1 Introduction (1.1 Motivation, 1.2 Load Distribution, 1.3 Research Contributions, 1.4 Thesis Outline); 2 Background and Related Work (2.1 Introduction, 2.2 Migration: 2.2.1 Design, 2.2.2 Issues, 2.2.3 Previous Work; 2.3 Load Information Management: 2.3.1 Design, 2.3.2 Issues ...)
Windows may rule the world of popular computing on PCs around the globe, but DOS still has a place in the hearts and minds of computer users who vaguely remember what a C prompt looks like. Even if DOS (with all its arcane commands and its drab, boring look) isn't your idea of the best way to get things done on a PC, you'll find plenty of fast and friendly help on hand with the third edition of DOS For Dummies. Here's a plain-speaking reference guide to all the command-line stuff and nonsense that makes DOS work, whether you're a native DOS user or are an occasional dabbler who needs the operating system to run all those cool games under Windows. DOS For Dummies, 3rd Edition, avoids all the technical jargon to cut to the heart of things with clear, easy-to-understand explanations and step-by-step help for managing files, running DOS inside Windows, and installing and running DOS-based software programs. All the basic DOS commands, from APPEND to XCOPY, are demystified to make life in DOS much more bearable. And the book has plenty of helpful tips and tricks for bending DOS to your will, without having to dedicate your life (and all your free time) to mastering this little corner of the PC.
Systems, Models and Measures seeks to bridge the gap between the 'classical' and the newer technologies by constructing a systematic measurement framework for both. The authors use their experience as consultants in systems, software and quality engineering to take the subject from concept and theory, via strategy and procedure, to tools and applications. The book clarifies the key notions of system, model, measurement, product, process, specification and design. Practical examples demonstrate the 'architecture' of measurement schemes, extending them to object-oriented and subjective measurement. A detailed case study provides a measurement strategy for formal specifications, including Prolog, Z and VDM. The reader will be able to formulate problems in measurable terms, appraise and compare formal specifications, assess and enhance existing measurement practices, and devise measurement schemes for describing objective characteristics and expressing value judgements.
Formal specifications were first used in the description of programming languages because of the central role that languages and their compilers play in causing a machine to perform the computations required by a programmer. In a relatively short time, specification notations have found their place in industry and are used for the description of a wide variety of software and hardware systems. A formal method - like VDM - must offer a mathematically-based specification language. On this language rests the other key element of the formal method: the ability to reason about a specification. Proofs can be employed in reasoning about the potential behaviour of a system and in the process of showing that the design satisfies the specification. The existence of a formal specification is a prerequisite for the use of proofs; but this prerequisite is not in itself sufficient. Both proofs and programs are large formal texts. Would-be proofs may therefore contain errors in the same way as code. During the difficult but inevitable process of revising specifications and developments, ensuring consistency is a major challenge. It is therefore evident that another requirement - for the successful use of proof techniques in the development of systems from formal descriptions - is the availability of software tools which support the manipulation of large bodies of formulae and help the user in the design of the proofs themselves.
This book constitutes the thoroughly revised proceedings of the
Fourth International Workshop on Network and Operating System
Support for Digital Audio and Video (NOSSDAV '93), held in
Lancaster, UK in November 1993.
The last decade has seen an enormous change in the capability of information technology and also in the expectations of what that technology can provide. The personal computer revolution at the start of the 1980s brought computing power to the desktop in a way that, for the first time, non-technical users could understand and use in their everyday work. The invisible wall of mystique that had separated computers from their potential users for so long had been demolished, and the world of business would never be the same again. As we entered the 1990s, a decade later, we witnessed the beginnings of another revolution. This revolution is not so obvious, but its implications are even more far-reaching. It is not so obvious because it is happening behind the scenes, in the communications and computing infrastructure that support the machines that can be seen sitting on office desks and, increasingly, being carried with business people as standard equipment along with a briefcase and umbrella. It is potentially more far-reaching for the following reason. The personal computer of the 1980s brought computing power to the user in a box that could fit on a desk. The revolution of the 1990s brings to the user computing power that is distributed across the whole planet.
This book is about the advanced, object-oriented NEXTSTEP(TM) user environment for NeXT and Intel-based computers. It is intended for those who already own a computer running NEXTSTEP and want to quickly learn what it can do and how to get the most out of it with the least effort. It's also for those who are considering the purchase of NEXTSTEP but want to learn more about how it works before making an investment. Why a book on NEXTSTEP? When I set out to learn how to use NEXTSTEP several years ago, I found it extremely difficult to find information from the usual sources, such as books, magazines, user groups, and authorized dealers. NEXTSTEP users were scarce and finding a computer store that sold NeXT-related products was even more rare. There were also only a handful of NeXT user groups in existence and those that did exist met so far away that joining one of them was impractical. The manuals I received from NeXT were helpful, but I had the feeling there must be something more to it than what was written in the User's Reference. It didn't describe many of the shortcuts that experienced users had found or the public domain and shareware utilities that were popular and how I could use them to make my work even easier and more fun.
This volume constitutes the proceedings of the Fifth International
Conference on Concurrency Theory, CONCUR '94, held at Uppsala,
Sweden in August 1994.
The REX School/Symposium "A Decade of Concurrency - Reflections and
Perspectives" was the final event of a ten-year period of
cooperation between three Dutch research groups working on the
foundations of concurrency.
Technological advances are revolutionizing computers and networks to support digital video and audio, leading to new design spaces in computer systems and applications. Under the surface of exciting multimedia technologies lies a mine of research problems. This volume presents the proceedings of an international workshop which brought together the leading researchers in all aspects of multimedia computing, communication, storage, and applications. The field of multimedia has witnessed an explosive growth in the last few years and the selection of papers for this workshop was extremely competitive. The volume contains 26 full papers and 14 short papers selected from 128 contributions, organized into parts on: network and operating system support for multimedia; multimedia on-demand services; media synchronization; distributed multimedia systems; network and operating system support for multimedia; multimedia models, frameworks, and document architectures; and multimedia workstations and platforms.
This volume presents the proceedings of the fifth Conference on Advanced Information Systems Engineering, CAiSE '93, held at the University of Paris-Sorbonne in June 1993. Initiated by J. Bubenko from the Swedish Institute for Systems Development in Stockholm, Sweden, and A. Solvberg from the Norwegian Institute of Technology in Trondheim, Norway, this series of conferences evolved from a Nordic audience to a truly European one. All the conferences have attracted international papers of high quality, indicating the need for an international conference on advanced information systems engineering topics. The spectrum of contributions contained in the present proceedings extends from inevitable and still controversial issues regarding modeling of information systems, via development environments and experiences, to various novel views for some specific aspects of information systems development such as reuse, schema integration, and evolution.
Enterprise operation efficiency is seriously constrained by the inability to provide the right information, in the right place, at the right time. In spite of significant advances in technology it is still difficult to access information used or produced by different applications due to the hardware and software incompatibilities of manufacturing and information processing equipment. But it is this information and operational knowledge which makes up most of the business value of the enterprise and which enables it to compete in the marketplace. Therefore, sufficient and timely information access is a prerequisite for its efficient use in the operation of enterprises. It is the aim of the ESPRIT project AMICE to make this knowledge base available enterprise-wide. During several ESPRIT contracts the project has developed and validated CIMOSA: Open System Architecture for CIM. The CIMOSA concepts provide operation structuring based on cooperating processes. Enterprise operations are represented in terms of functionality and dynamic behaviour (control flow). Information needed and produced, as well as resources and organisational aspects relevant in the course of the operation are modelled in the process model. However, the different aspects may be viewed separately for additional structuring and detailing during the enterprise engineering process.
The main aims of the series of volumes "Advances in Petri Nets" are: - to present to the "outside" scientific community a fair picture of recent advances in the area of Petri nets, and - to encourage those interested in the applications and the theory of concurrent systems to take a closer look at Petri nets and then join the group of researchers working in this fascinating and challenging area. This volume is based on the proceedings of the 12th International Conference on Applications and Theory of Petri Nets, held in Gjern, Denmark, in June 1991. It contains 18 selected and revised papers covering all aspects of recent Petri net research.
This work presents a new, abstract and comprehensive view of open distributed systems. The starting point is a small number of core concepts and basic principles, which are informally introduced and precisely defined using mathematical logic. It is shown how the basic concepts of open systems interconnection (OSI), which are currently the most important standardization activities in the context of open distributed systems, can be obtained by specialization and extension of these basic concepts. Application examples include the formal treatment of the interaction point concept and the hierarchical development of communication systems. This book is a contribution to the field of software engineering in general and to the design of open distributed systems in particular. It is oriented towards the design and implementation of real systems, and brings together both formal logical reasoning and current software engineering practice.
This volume gives the proceedings of the Fourth Workshop on Computer-Aided Verification (CAV '92), held in Montreal, June 29 - July 1, 1992. The objective of this series of workshops is to bring together researchers and practitioners interested in the development and use of methods, tools and theories for the computer-aided verification of concurrent systems. The workshops provide an opportunity for comparing various verification methods and practical tools that can be used to assist the applications designer. Emphasis is placed on new research results and the application of existing results to real verification problems. The volume contains 31 papers selected from 75 submissions. These are organized into parts on reduction techniques, proof checking, symbolic verification, timing verification, partial-order approaches, case studies, model and proof checking, and other approaches. The volume starts with an invited lecture by Leslie Lamport entitled "Computer-hindered verification (humans can do it too)."
The offices of GMD-FOKUS in Berlin provided the venue for a meeting in December 1987 which signalled the birth of the ARGOSI project. The proposal gradually took shape over the following months, and after merging with another project proposal in the field of standardization of computer graphics, finally received funding from the Esprit programme in March 1989. The project stemmed from a recognition of the importance of computer graphics as an enabling technology in many application areas, and of the need to build bridges between computer graphics and telecommunications. The overall aims of the project were twofold: to advance the state of the art in the transfer of graphical information across international networks, and to improve the quality and applicability of standards in this area. This book records the key results of the project and the contributions the project has made to standardization related to the transfer of graphical information across open networks. Contributions have included a demonstration of a prototype application - a road transport information system running over public international data networks - shown at the Esprit '91 exhibition, the standardization of a new FTAM document type allowing structured access to graphical information (represented according to the Computer Graphics Metafile (CGM) standard), and major contributions to a mapping of the X-Windows protocol onto an OSI stack. The project also organized two international workshops: the first on Graphics and Communications, and the second on Distributed Window Systems.
Writing a compiler is a very good exercise for learning how complex problems can be solved using methods from software engineering. It is extremely important to program carefully and exactly, because we have to remember that a compiler is a program which has to handle input that is usually incorrect. Therefore, the compiler itself must be error-free. Referring to Niklaus Wirth, we postulate that the grammatical structure of a language must be reflected in the structure of the compiler. Thus, the complexity of a language determines the complexity of the compiler (cf. Compilerbau, B. G. Teubner Verlag, Stuttgart, 1986). This book is about the translation of programs written in a high-level programming language into machine code. It deals with all the major aspects of compilation systems (including a lot of examples and exercises), and was outlined for a one-session course on compilers. The book can be used both as a teacher's reference and as a student's textbook. In contrast to some other books on the topic, this text is rather concentrated and to the point. However, it treats all aspects which are necessary to understand how compilation systems work. Chapter One gives an introductory survey of compilers. Different types of compilation systems are explained, a general compiler environment is shown, and the principal phases of a compiler are introduced in an informal way to sensitize the reader to the topic of compilers.
About this Book This book is a detailed introduction to programming with the OSF/Motif(TM) graphical user interface. It is an introduction in that it does not require the reader to have experience programming in the X Window environment. It is detailed in that it teaches you how to use the interface components provided by Motif in a complex application. Although it contains a great deal of reference material, it is not meant as an authoritative reference - that is the job of the OSF/Motif Programmer's Reference, which uses over 900 pages in the process. Instead, this book provides its reference material in a practical, "how to" manner and allows the reader to use the Programmer's Reference effectively. The target reader is an experienced C programmer and user of the X Window System under the UNIX operating system. The reader should be familiar with the tools provided by UNIX for the compilation and testing of programs; while this book does examine the process by which a Motif program is compiled, it does not explain that process. It also assumes that the reader is familiar with "X" terms such as 'pointer' and 'display'.
Code Generation - Concepts, Tools, Techniques is based upon the proceedings of the Dagstuhl workshop on code generation which took place from 20-24 May 1991. The aim of the workshop was to evaluate current methods of code generation and to indicate the main directions which future research is likely to take. It provided an excellent forum for the exchange of ideas and had the added advantage of bringing together European and American experts who were unlikely to meet at less specialised gatherings. This volume contains 14 of the 30 papers presented at the Dagstuhl workshop. The papers deal mainly with the following four topics: tools and techniques for code generation, code generation for parallel architectures, register allocation and phase ordering problems, and formal methods and validations. Most of the papers assess the progress of on-going research work, much of which is published here for the first time, while others provide a review of recently completed projects. The volume also contains summaries of two discussion groups which looked at code generation tools and parallel architectures. As a direct result of one of these discussions, a group of the participants have collaborated to make a pure BURS system available for public distribution. This system, named BURG, is currently being beta-tested. Code Generation - Concepts, Tools, Techniques provides a representative summary of state-of-the-art code generation techniques and an important assessment of possible future innovations. It will be an invaluable reference work for researchers and practitioners in this important area.
The trend towards powerful workstations and high-speed networks has enabled applications to communicate and manipulate digital audio and video. These are continuous media and differ from discrete media such as text and graphics in that they have stringent delay and bandwidth requirements. Neither the mechanisms used to transport ordinary data over networks nor present communication protocols are sufficient to communicate continuous media. Special operating system support must also be provided to meet the requirements of both discrete and continuous media in future multimedia applications. This volume contains the proceedings of the Second International Workshop on Network and Operating System Support for Digital Audio and Video, held in cooperation with ACM SIGCOMM and SIGOPS at the IBM European Networking Center in Heidelberg, Germany, in November 1991. The volume contains 33 selected papers together with summaries of the workshop sessions compiled by the session chairmen.
The main aims of the series of volumes "Advances in Petri Nets" are: - to present to the "outside" scientific community a fair picture of recent advances in the area of Petri nets, and - to encourage those interested in the applications and theory of concurrent systems to take a closer look at Petri nets and then join the group of researchers working in this fascinating and challenging area. The ESPRIT Basic Research Action DEMON (DEsign Methods based On Nets) has been a focus of developments within the Petri net community for the last three years. The papers presented in this special volume have been selected from papers submitted by participants in DEMON. The papers have been refereed and appear in revised form. The volume contains technical contributions giving insights into a number of major achievements of the DEMON project. It also contains four survey papers covering important research areas. The volume begins with a description of DEMON given by its coordinator E. Best.
This volume contains the proceedings of the fifth International Workshop on Distributed Algorithms (WDAG '91) held in Delphi, Greece, in October 1991. The workshop provided a forum for researchers and others interested in distributed algorithms, communication networks, and decentralized systems. The aim was to present recent research results, explore directions for future research, and identify common fundamental techniques that serve as building blocks in many distributed algorithms. The volume contains 23 papers selected by the Program Committee from about fifty extended abstracts on the basis of perceived originality and quality and on thematic appropriateness and topical balance. The workshop was organized by the Computer Technology Institute of Patras University, Greece.