Design and Analysis of Distributed Embedded Systems is organized similarly to the conference. Chapters 1 and 2 deal with specification methods and their analysis, while Chapter 6 concentrates on timing and performance analysis. Chapter 3 describes approaches to system verification at different levels of abstraction. Chapter 4 deals with fault tolerance and detection. Middleware and software reuse aspects are treated in Chapter 5. Chapters 7 and 8 concentrate on distribution-related topics such as partitioning, scheduling and communication. The book closes with a chapter on design methods and frameworks.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue functioning despite faults. Fault tolerance and real-time requirements thus form a natural combination in process control applications. Systematic fault tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is then to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism, and possible solutions to it, are the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these demanding properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are discussed throughout the remainder of the work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
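The divergence is easy to reproduce even in a toy setting. Below is a minimal Python sketch (the sensor values, threshold, and timing are hypothetical illustrations, not taken from the book) of two fault-free replicas disagreeing because they sample a changing input at slightly different instants:

```python
# Minimal sketch of replica non-determinism: two fault-free replicas
# apply the same deterministic rule, but sample a continuously
# changing sensor at slightly different instants, so their outputs
# disagree. All values here are hypothetical illustrations.

THRESHOLD = 100.0

def control_decision(sensor_value: float) -> str:
    """Identical deterministic rule running on every replica."""
    return "BRAKE" if sensor_value >= THRESHOLD else "COAST"

# The physical quantity crosses the threshold between the two reads.
read_replica_a = 99.9999   # sampled at time t
read_replica_b = 100.0001  # sampled a few microseconds later

print(control_decision(read_replica_a))  # COAST
print(control_decision(read_replica_b))  # BRAKE -> the replicas diverge
```

Solutions therefore typically force the replicas to agree on a single input value before the deterministic computation is applied.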
The aim of IFIP Working Group 2.7 (13.4) for User Interface Engineering is to investigate the nature, concepts and construction of user interfaces for software systems. The group's scope is: * developing user interfaces based on knowledge of system and user behaviour; * developing frameworks for reasoning about interactive systems; and * developing engineering models for user interfaces. Every three years, the group holds a "working conference" on these issues. The conference mixes elements of a regular conference and a workshop. As in a regular conference, the papers describe relatively mature work and are thoroughly reviewed. As in a workshop, the audience is kept small to enable in-depth discussions. The conference is held over five days (instead of the usual three) to allow such discussions. Each paper is discussed after it is presented, and a transcript of the discussion appears at the end of each paper in these proceedings, giving important insights about the paper. Each session was assigned a "notes taker", whose responsibility was to collect and transcribe the questions and answers during the session. After the conference, the original transcripts were distributed (via the Web) to the attendees, and modifications that clarified the discussions were accepted.
This book constitutes the refereed proceedings of the 4th International Workshop on Reversible Computation, RC 2012, held in Copenhagen, Denmark, in July 2012. The 19 contributions presented in this volume were carefully reviewed and selected from 46 submissions. The papers cover theoretical considerations, reversible software and reversible hardware, and physical realizations and applications in quantum computing.
Constraint Logic Programming (CLP), an area of intense research interest in recent years, extends the semantics of Prolog in such a way that the combinatorial explosion, a characteristic of most problems in the field of Artificial Intelligence, can be tackled efficiently. By employing solvers dedicated to each domain instead of the unification algorithm, CLP drastically reduces the search space of the problem, which leads to increased efficiency in the execution of logic programs. CLP offers the possibility of solving complex combinatorial problems in an efficient way, while maintaining the advantages offered by the declarativeness of logic programming. The aim of this book is to present parallel and constraint logic programming, offering a basic understanding of the two fields to the reader new to the area. The first part of the book gives an introduction to the fundamental aspects of conventional logic programming, which is necessary for understanding the parts that follow. The second part includes an introduction to parallel logic programming and the architectures and implementations proposed in the area. Finally, the third part presents the principles of constraint logic programming. The last two parts also include descriptions of the supporting facilities for the two paradigms in two popular systems: ECLiPSe and SICStus. These platforms have been selected mainly because they offer both parallel and constraint features. Annotated and explained examples are also included in the relevant parts, offering a valuable guide and a first practical experience to the reader. Finally, applications of the covered paradigms are presented. The authors felt that a book of this kind should provide some theoretical background necessary for understanding the covered logic programming paradigms, and a quick start for the reader interested in writing parallel and constraint logic programs. However, it is outside the scope of this book to provide a deep theoretical background of the two areas. In that sense, this book is addressed to a public interested in obtaining a knowledge of the domain without spending the time and effort to understand the extensive theoretical work done in the field -- namely postgraduate and advanced undergraduate students in the area of logic programming. This book fills a gap in the current bibliography, since there is no comprehensive book of this level that covers the areas of conventional, parallel, and constraint logic programming. Parallel and Constraint Logic Programming: An Introduction to Logic, Parallelism and Constraints is appropriate for an advanced-level course on Logic Programming or Constraints, and as a reference for practitioners and researchers in industry.
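As a rough illustration of why dedicated solvers shrink the search space (a minimal sketch, not code from the book, and far simpler than real systems such as ECLiPSe or SICStus), the following Python fragment propagates a single arithmetic constraint over finite domains before any enumeration starts:

```python
# Minimal sketch of constraint propagation over finite domains.
# A real CLP(FD) solver is far more sophisticated; this only shows
# how a dedicated solver shrinks domains before search begins.

def propagate_sum(dom_x: set, dom_y: set, total: int) -> tuple[set, set]:
    """Enforce X + Y == total by removing values with no support."""
    dom_x = {x for x in dom_x if any(x + y == total for y in dom_y)}
    dom_y = {y for y in dom_y if any(x + y == total for x in dom_x)}
    return dom_x, dom_y

dx, dy = set(range(1, 10)), set(range(1, 10))
dx, dy = propagate_sum(dx, dy, 4)
print(dx, dy)  # {1, 2, 3} {1, 2, 3}
```

Search then only has to enumerate the 9 surviving pairs instead of all 81.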
Formal Methods for Open Object-Based Distributed Systems presents the leading edge in several related fields, specifically object-oriented programming, open distributed systems and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Many topics are discussed, including the following important areas: object-oriented design and programming; formal specification of distributed systems; open distributed platforms; types, interfaces and behaviour; formalisation of object-oriented methods. This volume comprises the proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS), sponsored by the International Federation for Information Processing (IFIP), which was held in Florence, Italy, in February 1999. Formal Methods for Open Object-Based Distributed Systems is suitable as a secondary text for graduate-level courses in computer science and telecommunications, and as a reference for researchers and practitioners in industry, commerce and government.
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with 12 poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
This book constitutes the refereed proceedings of the 17th National Conference on Computer Engineering and Technology, NCCET 2013, held in Xining, China, in July 2013. The 26 papers presented were carefully reviewed and selected from 234 submissions. They are organized in topical sections named: Application Specific Processors; Communication Architecture; Computer Application and Software Optimization; IC Design and Test; Processor Architecture; Technology on the Horizon.
Secure Electronic Voting is an edited volume, which includes chapters authored by leading experts in the field of security and voting systems. The chapters identify and describe the given capabilities and the strong limitations, as well as the current trends and future perspectives of electronic voting technologies, with emphasis on security and privacy. Secure Electronic Voting includes state-of-the-art material on existing and emerging electronic and Internet voting technologies, which may eventually lead to the development of adequately secure e-voting systems. The book also includes an overview of the legal framework with respect to voting, a description of the user requirements for the development of a secure e-voting system, and a discussion of the relevant technical and social concerns. Finally, it presents three case studies on the use and evaluation of e-voting systems in three different real-world environments.
LOTOS (Language Of Temporal Ordering Specification) became an international standard in 1989, although application of preliminary versions of the language to communication services and protocols of the ISO/OSI family dates back to 1984. This history of the use of LOTOS made it apparent that more advantages than the pure production of standard reference documents were to be expected from the use of such formal description techniques. LOTOSphere: Software Development with LOTOS describes in depth a five-year project that moved LOTOS out of the ISO tower into software engineering practice. LOTOS became a vehicle for efficient, yet formally based, industrial software specification, design, verification, implementation and testing. LOTOSphere: Software Development with LOTOS is divided into six parts. The first introduces the reader to LOTOS and the project LOTOSphere. The remaining five parts each treat an important phase of the software development life cycle using LOTOS. This is the first book to give a comprehensive treatment of the use of these formal description techniques in a software engineering environment. It will thus be a valuable reference for researchers and software developers and can also be used as a text for an advanced course on the subject.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed integration of a compiler with a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
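The key inversion is that the optimizer consults a static timing analyzer rather than an average-case profile. A hypothetical sketch of that feedback loop in Python appears below; the function names, cycle counts, and the single transformation are illustrative assumptions, not the interface of the book's toolchain:

```python
# Hypothetical sketch of WCET-driven optimization: apply a code
# transformation only if the timing analyzer reports that the
# worst case actually got cheaper. Names and figures are
# illustrative, not any real compiler's interface.

def wcet_estimate(code: str) -> int:
    """Stand-in for a static timing analyzer (cycles on the worst-case path)."""
    return {"loop": 1200, "loop_unrolled": 950}.get(code, 1000)

def wcet_aware_optimize(code: str, transform) -> str:
    candidate = transform(code)
    # Keep the transformation only when the worst case improves;
    # an average-case win that hurts the WCET is rejected.
    return candidate if wcet_estimate(candidate) < wcet_estimate(code) else code

unroll = lambda c: c + "_unrolled"
print(wcet_aware_optimize("loop", unroll))  # loop_unrolled (1200 -> 950 cycles)
```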
Mastering interoperability in a computing environment consisting of different operating systems and hardware architectures is a key requirement facing system engineers building distributed information systems. Distributed applications are a necessity in most central application sectors of the contemporary computerized society, for instance, in office automation, banking, manufacturing, telecommunication and transportation. This book focuses on the techniques available or under development, with the goal of easing the burden of constructing reliable and maintainable interoperable information systems. The topics covered in this book include: * Management of distributed systems; * Frameworks and construction tools; * Open architectures and interoperability techniques; * Experience with platforms like CORBA and RMI; * Language interoperability (e.g. Java); * Agents and mobility; * Quality of service and fault tolerance; * Workflow and object modelling issues; and * Electronic commerce. The book contains the proceedings of the International Working Conference on Distributed Applications and Interoperable Systems II (DAIS'99), which was held June 28-July 1, 1999 in Helsinki, Finland. It was sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Embedded systems are becoming one of the major driving forces in computer science. Furthermore, it is the impact of embedded information technology that dictates the pace in most engineering domains. Nearly all technical products above a certain level of complexity are not only controlled but increasingly even dominated by their embedded computer systems. Traditionally, such embedded control systems have been implemented in a monolithic, centralized way. Recently, distributed solutions have been gaining increasing importance. In this approach, the control task is carried out by a number of controllers distributed over the entire system and connected by some interconnect network, like fieldbuses. Such a distributed embedded system may consist of a few controllers up to several hundred, as in today's top-range automobiles. Distribution and parallelism in embedded systems design increase the engineering challenges and require new development methods and tools. This book is the result of the International Workshop on Distributed and Parallel Embedded Systems (DIPES'98), organized by the International Federation for Information Processing (IFIP) Working Groups 10.3 (Concurrent Systems) and 10.5 (Design and Engineering of Electronic Systems). The workshop took place in October 1998 in Schloss Eringerfeld, near Paderborn, Germany, and the resulting book reflects the most recent points of view of experts from Brazil, Finland, France, Germany, Italy, Portugal, and the USA. The book is organized in six chapters: `Formalisms for Embedded System Design': IP-based system design and various approaches to multi-language formalisms. `Synthesis from Synchronous/Asynchronous Specification': Synthesis techniques based on Message Sequence Charts (MSC), StateCharts, and Predicate/Transition Nets. `Partitioning and Load-Balancing': Application in simulation models and target systems. `Verification and Validation': Formal techniques for precise verification and more pragmatic approaches to validation. `Design Environments' for distributed embedded systems and their impact on the industrial state of the art. `Object Oriented Approaches': Impact of OO techniques on distributed embedded systems. This volume will be essential reading for computer science researchers and application developers.
This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; design of integrated circuits; I/O interconnect; and measurement, verification, and others.
The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60th birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in molecular dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which is approved as of June 1997. GRAPE-6 will be completed by the year 2001 with a peak speed around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the results of the simulation of the formation of the Moon. The first talk of the second day was given by F.-H. Hsu of the IBM T.J. Watson Research Center, on "Deep Blue," the special-purpose computer for chess, which, for the first time in history, won the match with the best human player, Mr. Gary Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Inst. of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy. They have built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 199
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low cost, high performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures and especially of the T.Node, with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In the second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on T.Node.
The rapid development of optical fiber transmission technology has created the possibility of constructing digital networks that are as ubiquitous as the current voice network but which can carry video, voice, and data in massive quantities. How and when such networks will evolve, who will pay for them, and what new applications will use them is anyone's guess. There appears to be no doubt, however, that the trend in telecommunication networks is toward far greater transmission speeds and toward greater heterogeneity in the requirements of different applications. This book treats some of the central problems involved in these networks of the future. First, how does one switch data at speeds orders of magnitude faster than those of existing networks? This problem has roots in both classical switching for telephony and in switching for packet networks. There are a number of new twists here, however. The first is that the high speeds necessitate the use of highly parallel processing and place a high premium on computational simplicity. The second is that the required data speeds and allowable delays of different applications differ by many orders of magnitude. The third is that it might be desirable to support both point-to-point applications and also applications involving broadcast from one source to a large set of destinations.
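One classical response to the first question is to make the per-cell decision trivially parallel. The Python sketch below shows the idea behind self-routing multistage (banyan-style) fabrics, where each stage inspects one bit of the destination address; contention and buffering, the genuinely hard parts, are deliberately ignored, and the example is mine rather than the book's:

```python
# Idealized self-routing sketch: in a multistage fabric with 2**n
# outputs, stage i routes a cell up or down purely on bit i of its
# destination address -- no global routing table, so every stage is
# a simple, fast, parallel decision. Contention handling is omitted.

def self_route(dest: int, stages: int) -> list[str]:
    path = []
    for i in range(stages - 1, -1, -1):      # most significant bit first
        path.append("lower" if (dest >> i) & 1 else "upper")
    return path

print(self_route(dest=5, stages=3))  # ['lower', 'upper', 'lower'] -> output 101
```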
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University -- results based upon the principles in my dissertation. The terminology dataflow software pipelining has been consistently used since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem of acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
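To make the balancing idea concrete, here is a small Python sketch (my illustration, not the monograph's formulation): it levels an acyclic dataflow graph by longest path from the inputs and pads each edge with FIFO slots so that all paths between two nodes have equal length. The linear programming/network flow formulation discussed above finds a minimum-cost padding; this greedy version merely exhibits a feasible one:

```python
# Sketch of balancing an acyclic dataflow graph: give every node a
# "level" equal to its longest distance from the inputs, then pad
# each edge with enough FIFO slots that all paths carry equal delay.
# The monograph's LP / network flow formulation minimizes buffer
# cost; this longest-path version is just a feasible assignment.

from graphlib import TopologicalSorter

edges = {("a", "b"), ("a", "c"), ("b", "c")}  # 'a'->'c' directly and via 'b'
preds = {"a": [], "b": ["a"], "c": ["a", "b"]}

level = {}
for node in TopologicalSorter({n: set(p) for n, p in preds.items()}).static_order():
    level[node] = max((level[p] + 1 for p in preds[node]), default=0)

buffers = {(u, v): level[v] - level[u] - 1 for (u, v) in edges}
print(buffers)  # the short a->c edge gets one extra slot, the others none
```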
This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. Part I, SFP Domains. Chapter 1, Introduction. This chapter provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented in this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter. 1.1 Domain Theory. Programming languages are languages with which to perform computation.
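The agreement property claimed above can be stated compactly; the following is a paraphrase in standard notation, not necessarily the monograph's own formulation:

```latex
% Soundness and completeness of a derived proof system: an assertion
% is provable of a program exactly when the program's denotation
% satisfies it, so provable and true assertions coincide.
\[
  \vdash P : \phi \quad\Longleftrightarrow\quad \llbracket P \rrbracket \models \phi,
  \qquad\text{hence}\qquad
  \{\phi \mid {\vdash} P : \phi\} \;=\; \{\phi \mid \llbracket P \rrbracket \models \phi\}.
\]
```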
Although integrating security into the design of applications has proven to deliver resilient products, there are few books available that provide guidance on how to incorporate security into the design of an application. Filling this need, Security for Service Oriented Architectures examines both application and security architectures and illustrates the relationship between the two. Supplying authoritative guidance on how to design distributed and resilient applications, the book provides an overview of the various standards that service oriented and distributed applications leverage, including SOAP, HTML5, SAML, XML Encryption, XML Signature, WS-Security, and WS-SecureConversation. It examines emerging issues of privacy and discusses how to design applications within a secure context, facilitating the understanding of these technologies that you need to make intelligent decisions regarding their design. This complete guide to security for web services and SOA considers the malicious user story of the abuses and attacks against applications as examples of how design flaws and oversights have subverted the goals of providing resilient business functionality. It reviews recent research on access control for simple and conversation-based web services, advanced digital identity management techniques, and access control for web-based workflows. Filled with illustrative examples and analyses of critical issues, this book provides both security and software architects with a bridge between software and service-oriented architectures and security architectures, with the goal of providing a means to develop software architectures that leverage security architectures. It is also a reliable source of reference on Web services standards. Coverage includes the four types of architectures, implementing and securing SOA, Web 2.0, other SOA platforms, auditing SOAs, and defending against and detecting attacks.
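As a toy illustration of the message-level security idea behind XML Signature and WS-Security, protecting the message itself rather than only the transport channel, here is a Python sketch using a plain HMAC; real WS-Security involves XML canonicalization, certificates, and key management, none of which is modeled here:

```python
# Toy illustration of message-level integrity, the idea behind
# XML Signature / WS-Security: the payload carries its own proof of
# authenticity, independent of the transport channel. Reduced here
# to an HMAC with a hypothetical pre-shared key.

import hmac, hashlib

SHARED_KEY = b"demo-key"  # hypothetical out-of-band shared secret

def sign(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(payload), tag)

msg = b"<order><qty>5</qty></order>"
tag = sign(msg)
print(verify(msg, tag))                               # True
print(verify(b"<order><qty>500</qty></order>", tag))  # False: tampering detected
```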
Business Component-Based Software Engineering, an edited volume, aims to complement other reputable books on CBSE by stressing how components are built for large-scale applications, within dedicated development processes, and for easy and direct combination. The book emphasizes these three facets and offers a complete overview of recent progress. The projects and works explained herein will prompt graduate students, academics, software engineers, project managers and developers to adopt and apply new component development methods gained from and validated by the authors. The authors of Business Component-Based Software Engineering are academics and professionals, experts in the field, who introduce the state of the art on CBSE from the experience they share by working on the same projects. Business Component-Based Software Engineering is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science and Engineering.
The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECjvm98 benchmarks is characterized by Rychlik et al. in Chapter 2. SPECjvm98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC CPU2000 benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising on accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both temporal and spatial locality of programs. In Chapter 6, Thornock et al.
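The locality-surface idea can be approximated with simple bookkeeping: classify every reference by its reuse distance (temporal axis) and its stride from the previous reference (spatial axis), then histogram the pairs. The Python sketch below is my simplification, not Sorenson et al.'s exact definition:

```python
# Rough sketch of the data behind a locality surface: for each
# memory reference, record how long ago the address was last touched
# (temporal axis) and how far it is from the previous reference
# (spatial axis). The 2-D histogram of these pairs is the surface.

from collections import Counter

def locality_histogram(trace: list[int]) -> Counter:
    last_seen, hist = {}, Counter()
    for t, addr in enumerate(trace):
        reuse = t - last_seen[addr] if addr in last_seen else -1  # -1: first touch
        stride = addr - trace[t - 1] if t > 0 else 0
        hist[(reuse, stride)] += 1
        last_seen[addr] = t
    return hist

print(locality_histogram([0, 1, 2, 0, 1, 2]))  # unit strides + reuse distance 3
```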
Soft computing is a consortium of computing methodologies that provides a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.
The Second International Workshop on Cooperative Internet Computing (CIC2002) brought together researchers, academics, and industry practitioners who are involved and interested in the development of advanced and emerging cooperative computing technologies. Cooperative computing is an important computing paradigm that enables different parties to work together towards a predefined non-trivial goal. It encompasses important technological areas like computer-supported cooperative work, workflow, computer-assisted design and concurrent programming. As technologies continue to advance and evolve, there is an increasing need to research and develop new classes of middleware and applications that leverage the combined benefits of the Internet and the Web to provide users and programmers with a highly interactive and robust cooperative computing environment. It is the aim of this forum to promote close interactions and exchange of ideas among researchers, academics and practitioners on state-of-the-art research in all of these exciting areas. We have partnered with Kluwer Academic Press this year to bring to you a book compilation of the papers that were presented at the CIC2002 workshop. The importance of the research area is reflected both in the quality and quantity of the submitted papers, where each paper was reviewed by at least three PC members. As a result, we were able to accept only 14 papers for full presentation at the workshop, while having to reject several excellent papers due to the limitations of the program schedule.
Distributed and Parallel Systems: Cluster and Grid Computing is the proceedings of the fourth Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by Johannes Kepler University, Linz, Austria and the MTA SZTAKI Computer and Automation Research Institute. The papers in this volume cover a broad range of research topics presented in four groups. The first one introduces cluster tools and techniques, especially the issues of load balancing and migration. Another six papers deal with grid and global computing including grid infrastructure, tools, applications and mobile computing. The next nine papers present general questions of distributed development and applications. The last four papers address a crucial issue in distributed computing: fault tolerance and dependable systems. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.