The 20th anniversary of the IFIP WG6.1 Joint International Conference on Formal Methods for Distributed Systems and Communication Protocols (FORTE XIII / PSTV XX) was celebrated by the year 2000 edition of the Conference, which was held for the first time in Italy, at Pisa, October 10-13, 2000. In devising the subtitle for this special edition -- 'Formal Methods Implementation Under Test' -- we wanted to convey two main concepts that, in our opinion, are reflected in the contents of this book. First, the early, pioneering phases in the development of Formal Methods (FM's), with their conflicts between evangelistic and agnostic attitudes, with their over-optimistic applications to toy examples and over-skeptical views about scalability to industrial cases, with their misconceptions and myths..., all this is essentially over. Many FM's have successfully reached maturity, having been 'implemented' into concrete development practice: a number of papers in this book report on successful experiences in specifying and verifying real distributed systems and protocols. Second, one of the several myths about FM's - the belief that their adoption would eventually eliminate the need for testing - is still quite far from becoming a reality, and, again, this book indicates that testing theory and applications are still remarkably healthy. A total of 63 papers were submitted to FORTE/PSTV 2000, out of which the Programme Committee selected 22 for presentation at the Conference and inclusion in the Proceedings.
The aim of IFIP Working Group 2.7 (13.4) for User Interface Engineering is to investigate the nature, concepts and construction of user interfaces for software systems. The group's scope is: * developing user interfaces based on knowledge of system and user behaviour; * developing frameworks for reasoning about interactive systems; and * developing engineering models for user interfaces. Every three years, the group holds a "working conference" on these issues. The conference mixes elements of a regular conference and a workshop. As in a regular conference, the papers describe relatively mature work and are thoroughly reviewed. As in a workshop, the audience is kept small, to enable in-depth discussions. The conference is held over five days (instead of the usual three) to allow such discussions. Each paper is discussed after it is presented. A transcript of the discussion is found at the end of each paper in these proceedings, giving important insights into the paper. Each session was assigned a "notes taker", whose responsibility was to collect and transcribe the questions and answers during the session. After the conference, the original transcripts were distributed (via the Web) to the attendees, and modifications that clarified the discussions were accepted.
This book constitutes the thoroughly refereed conference proceedings of the 9th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2013, held in Los Angeles, CA, USA, in March 2013. The 28 revised papers presented, consisting of 20 full papers and 11 poster papers, were carefully selected from 41 submissions. The topics covered are applications, arithmetic, design optimization for FPGAs, architectures, and placement and routing.
This volume constitutes the refereed proceedings of the 11th International Conference on Applied Parallel and Scientific Computing, PARA 2012, held in Helsinki, Finland, in June 2012. The 35 revised full papers presented were selected from numerous submissions and are organized in five technical sessions covering the topics of advances in HPC applications, parallel algorithms, performance analyses and optimization, application of parallel computing in industry and engineering, and HPC interval methods. In addition, three of the topical minisymposia are described by a corresponding overview article on the minisymposium topic. In order to cover the state of the art of the field, at the end of the book a set of abstracts describes some of the conference talks that were not elaborated into full articles.
This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; integrated circuit design; I/O interconnect; and measurement, verification, and others.
Compilers and Operating Systems for Low Power focuses on both application-level compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low-energy operating systems, or, more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates the state-of-the-art work and proves that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationships between energy-aware middleware and wireless microsensors, mobile computing, and other wireless applications are covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.
This volume contains a selection of papers that focus on the state of the art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Formal Specifications and Methods, complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
Constraint Logic Programming (CLP), an area of intense research interest in recent years, extends the semantics of Prolog in such a way that the combinatorial explosion, a characteristic of most problems in the field of Artificial Intelligence, can be tackled efficiently. By employing solvers dedicated to each domain instead of the unification algorithm, CLP drastically reduces the search space of the problem, which leads to increased efficiency in the execution of logic programs. CLP offers the possibility of solving complex combinatorial problems in an efficient way, and at the same time maintains the advantages offered by the declarativeness of logic programming. The aim of this book is to present parallel and constraint logic programming, offering a basic understanding of the two fields to the reader new to the area. The first part of the book gives an introduction to the fundamental aspects of conventional logic programming which is necessary for understanding the parts that follow. The second part includes an introduction to parallel logic programming, and to the architectures and implementations proposed in the area. Finally, the third part presents the principles of constraint logic programming. The last two parts also include descriptions of the supporting facilities for the two paradigms in two popular systems: ECLiPSe and SICStus. These platforms have been selected mainly because they offer both parallel and constraint features. Annotated and explained examples are also included in the relevant parts, offering a valuable guide and a first practical experience to the reader. Finally, applications of the covered paradigms are presented. The authors felt that a book of this kind should provide some theoretical background necessary for the understanding of the covered logic programming paradigms, and a quick start for the reader interested in writing parallel and constraint logic programs. However, it is outside the scope of this book to provide a deep theoretical background of the two areas. In that sense, this book is addressed to a public interested in obtaining a knowledge of the domain without spending the time and effort to understand the extensive theoretical work done in the field -- namely postgraduate and advanced undergraduate students in the area of logic programming. This book fills a gap in the current bibliography, since there is no comprehensive book of this level that covers the areas of conventional, parallel, and constraint logic programming. Parallel and Constraint Logic Programming: An Introduction to Logic, Parallelism and Constraints is appropriate for an advanced-level course on Logic Programming or Constraints, and as a reference for practitioners and researchers in industry.
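The domain-pruning idea behind that efficiency claim can be sketched in a few lines of plain Python (an illustrative toy, not an example from the book, which uses ECLiPSe and SICStus Prolog for its worked examples):

```python
# Toy finite-domain problem: X + Y == 10, X < Y, with X and Y in 1..9.
# Two ways to solve it, to contrast blind enumeration with constraint pruning.

domain = range(1, 10)

# Generate-and-test, in the spirit of plain Prolog backtracking:
# all 81 (x, y) pairs are produced before the constraints are checked.
brute = [(x, y) for x in domain for y in domain if x + y == 10 and x < y]

# Constraint-style pruning: the equality X + Y == 10 determines y from x,
# so only consistent candidates are ever generated (9 instead of 81).
pruned = [(x, 10 - x) for x in domain if 10 - x in domain and x < 10 - x]

assert brute == pruned
print(pruned)  # [(1, 9), (2, 8), (3, 7), (4, 6)]
```

Real finite-domain solvers generalize this kind of pruning to whole networks of arithmetic and symbolic constraints, which is what keeps the search space manageable for the combinatorial problems mentioned above.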
Secure Electronic Voting is an edited volume, which includes chapters authored by leading experts in the field of security and voting systems. The chapters identify and describe the capabilities and the strong limitations, as well as the current trends and future perspectives, of electronic voting technologies, with an emphasis on security and privacy. Secure Electronic Voting includes state-of-the-art material on existing and emerging electronic and Internet voting technologies, which may eventually lead to the development of adequately secure e-voting systems. This book also includes an overview of the legal framework with respect to voting, a description of the user requirements for the development of a secure e-voting system, and a discussion of the relevant technical and social concerns. Secure Electronic Voting also includes three case studies on the use and evaluation of e-voting systems in three different real-world environments.
LOTOS (Language Of Temporal Ordering Specification) became an international standard in 1989, although application of preliminary versions of the language to communication services and protocols of the ISO/OSI family dates back to 1984. This history of the use of LOTOS made it apparent that more advantages than the pure production of standard reference documents were to be expected from the use of such formal description techniques. LOTOSphere: Software Development with LOTOS describes in depth a five-year project that moved LOTOS out of the ISO tower into software engineering practice. LOTOS became a vehicle for efficient, yet formally based, industrial software specification, design, verification, implementation and testing. LOTOSphere: Software Development with LOTOS is divided into six parts. The first introduces the reader to LOTOS and the project LOTOSphere. The five remaining parts each treat an important part of the software development life cycle using LOTOS. This is the first book to give a comprehensive treatment of the use of these formal description techniques in a software engineering environment. It will thus be a valuable reference for researchers and software developers and can also be used as a text for an advanced course on the subject.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
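To make the term concrete, here is a simplified sketch of what a timing analyzer computes for a loop-free piece of code: the WCET bound is the most expensive entry-to-exit path through the control-flow graph. The basic-block names and cycle costs below are invented for illustration, and the snippet is not the tool chain described in the book:

```python
# Hypothetical loop-free control-flow graph with invented per-block cycle costs.
cycles = {"entry": 4, "then": 12, "else": 7, "exit": 3}
succ = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}

memo = {}

def wcet(block):
    """Worst-case cycle count from `block` to the end of the function."""
    if block not in memo:
        tails = [wcet(s) for s in succ[block]]
        memo[block] = cycles[block] + (max(tails) if tails else 0)
    return memo[block]

print(wcet("entry"))  # 4 + 12 + 3 = 19: the 'then' branch dominates the bound
```

A WCET-aware compiler in the spirit of this book feeds such bounds back into its optimization decisions, preferring transformations that shorten the dominating path rather than the average-case path.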
Mastering interoperability in a computing environment consisting of different operating systems and hardware architectures is a key requirement facing system engineers building distributed information systems. Distributed applications are a necessity in most central application sectors of the contemporary computerized society, for instance in office automation, banking, manufacturing, telecommunication and transportation. This book focuses on the techniques available or under development, with the goal of easing the burden of constructing reliable and maintainable interoperable information systems. The topics covered in this book include: * Management of distributed systems; * Frameworks and construction tools; * Open architectures and interoperability techniques; * Experience with platforms like CORBA and RMI; * Language interoperability (e.g. Java); * Agents and mobility; * Quality of service and fault tolerance; * Workflow and object modelling issues; and * Electronic commerce. The book contains the proceedings of the International Working Conference on Distributed Applications and Interoperable Systems II (DAIS'99), which was held June 28-July 1, 1999 in Helsinki, Finland. It was sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Formal Methods for Open Object-Based Distributed Systems presents the leading edge in several related fields, specifically object-oriented programming, open distributed systems and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Many topics are discussed, including the following important areas: object-oriented design and programming; formal specification of distributed systems; open distributed platforms; types, interfaces and behaviour; formalisation of object-oriented methods. This volume comprises the proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS), sponsored by the International Federation for Information Processing (IFIP), which was held in Florence, Italy, in February 1999. Formal Methods for Open Object-Based Distributed Systems is suitable as a secondary text for graduate-level courses in computer science and telecommunications, and as a reference for researchers and practitioners in industry, commerce and government.
Embedded systems are becoming one of the major driving forces in computer science. Furthermore, it is the impact of embedded information technology that dictates the pace in most engineering domains. Nearly all technical products above a certain level of complexity are not only controlled but increasingly even dominated by their embedded computer systems. Traditionally, such embedded control systems have been implemented in a monolithic, centralized way. Recently, distributed solutions are gaining increasing importance. In this approach, the control task is carried out by a number of controllers distributed over the entire system and connected by some interconnect network, like fieldbuses. Such a distributed embedded system may consist of a few controllers up to several hundred, as in today's top-range automobiles. Distribution and parallelism in embedded systems design increase the engineering challenges and require new development methods and tools. This book is the result of the International Workshop on Distributed and Parallel Embedded Systems (DIPES'98), organized by the International Federation for Information Processing (IFIP) Working Groups 10.3 (Concurrent Systems) and 10.5 (Design and Engineering of Electronic Systems). The workshop took place in October 1998 in Schloss Eringerfeld, near Paderborn, Germany, and the resulting book reflects the most recent points of view of experts from Brazil, Finland, France, Germany, Italy, Portugal, and the USA. The book is organized in six chapters: `Formalisms for Embedded System Design': IP-based system design and various approaches to multi-language formalisms. `Synthesis from Synchronous/Asynchronous Specification': Synthesis techniques based on Message Sequence Charts (MSC), StateCharts, and Predicate/Transition Nets. `Partitioning and Load-Balancing': Application in simulation models and target systems. `Verification and Validation': Formal techniques for precise verification and more pragmatic approaches to validation. `Design Environments' for distributed embedded systems and their impact on the industrial state of the art. `Object Oriented Approaches': Impact of OO-techniques on distributed embedded systems. This volume will be essential reading for computer science researchers and application developers.
Communication Systems: The State of the Art captures the depth and breadth of the field of communication systems: -Architectures and Protocols for Distributed Systems; -Network and Internetwork Architectures; -Performance of Communication Systems; -Internet Applications Engineering; -Management of Networks and Distributed Systems; -Smart Networks; -Wireless Communications; -Communication Systems for Developing Countries; -Photonic Networking; -Communication Systems in Electronic Commerce. This volume's scope and authority present a rare opportunity for people in many different fields to gain a practical understanding of where the leading edge in communication systems lies today - and where it will be tomorrow.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as research and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from 4 to 8 November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation, and also enhanced parallel and distributed numerical methods on the T.Node.
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University - results based upon the principles in my dissertation. The terminology dataflow software pipelining has been consistently used since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem of acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
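A minimal sketch of that linear-programming view of balancing, under simplifying assumptions (unit-latency operators, an invented four-node graph, and scipy's general LP solver standing in for a dedicated network-flow algorithm), and not the monograph's own formulation:

```python
# Balance a tiny acyclic dataflow graph by assigning each node a stage level l[v];
# an edge (u, v) then needs l[v] - l[u] - 1 buffers, and we minimise the total.
import numpy as np
from scipy.optimize import linprog

nodes = ["a", "b", "c", "d"]                      # hypothetical operators
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
idx = {n: i for i, n in enumerate(nodes)}

# Objective: sum over edges of (l[v] - l[u] - 1); the constant -len(edges) is dropped.
c = np.zeros(len(nodes))
for u, v in edges:
    c[idx[v]] += 1.0
    c[idx[u]] -= 1.0

# Each edge imposes l[v] - l[u] >= 1, written as l[u] - l[v] <= -1 for linprog.
A_ub = np.zeros((len(edges), len(nodes)))
b_ub = np.full(len(edges), -1.0)
for row, (u, v) in enumerate(edges):
    A_ub[row, idx[u]] = 1.0
    A_ub[row, idx[v]] = -1.0

bounds = [(0, 0)] + [(0, None)] * (len(nodes) - 1)   # pin the source node at level 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

levels = {n: round(res.x[idx[n]]) for n in nodes}
buffers = {e: levels[e[1]] - levels[e[0]] - 1 for e in edges}
print(levels)   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
print(buffers)  # this diamond is already balanced, so no extra buffers are needed
```

Because each constraint involves only the difference of two node variables, the constraint matrix is a node-arc incidence matrix and the LP optimum is automatically integral; that structure is essentially what permits a reduction to a network flow problem.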
General-purpose graphics processing units (GPGPUs) have emerged as an important class of shared-memory parallel processing architectures, with widespread deployment in every computer class from high-end supercomputers to embedded mobile platforms. Relative to more traditional multicore systems of today, GPGPUs have distinctly higher degrees of hardware multithreading (hundreds of hardware thread contexts vs. tens), a return to wide vector units (several tens vs. 1-10), memory architectures that deliver higher peak memory bandwidth (hundreds of gigabytes per second vs. tens), and smaller caches/scratchpad memories (less than 1 megabyte vs. 1-10 megabytes). In this book, we provide a high-level overview of current GPGPU architectures and programming models. We review the principles used in previous shared-memory parallel platforms, focusing on recent results in both the theory and practice of parallel algorithms, and suggest a connection to GPGPU platforms. We aim to provide architects with hints for understanding the algorithmic aspects of GPGPU programming. We also provide detailed performance analysis and guide optimizations from high-level algorithms to low-level instruction-level optimizations. As a case study, we use n-body particle simulations known as the fast multipole method (FMM) as an example. We also briefly survey the state of the art in GPU performance analysis tools and techniques. Table of Contents: GPU Design, Programming, and Trends / Performance Principles / From Principles to Practice: Analysis and Tuning / Using Detailed Performance Analysis to Guide Optimization
The rapid development of optical fiber transmission technology has created the possibility for constructing digital networks that are as ubiquitous as the current voice network but which can carry video, voice, and data in massive quantities. How and when such networks will evolve, who will pay for them, and what new applications will use them is anyone's guess. There appears to be no doubt, however, that the trend in telecommunication networks is toward far greater transmission speeds and toward greater heterogeneity in the requirements of different applications. This book treats some of the central problems involved in these networks of the future. First, how does one switch data at speeds orders of magnitude faster than that of existing networks? This problem has roots in both classical switching for telephony and in switching for packet networks. There are a number of new twists here, however. The first is that the high speeds necessitate the use of highly parallel processing and place a high premium on computational simplicity. The second is that the required data speeds and allowable delays of different applications differ by many orders of magnitude. The third is that it might be desirable to support both point-to-point applications and also applications involving broadcast from one source to a large set of destinations.
Business Component-Based Software Engineering, an edited volume, aims to complement other reputable books on CBSE by stressing how components are built for large-scale applications, within dedicated development processes and for easy and direct combination. This book emphasizes these three facets and offers a complete overview of some recent progress. Projects and works explained herein will prompt graduate students, academics, software engineers, project managers and developers to adopt and to apply new component development methods gained from and validated by the authors. The authors of Business Component-Based Software Engineering are academics and professionals, experts in the field, who introduce the state of the art on CBSE from the experience they share from working on the same projects. Business Component-Based Software Engineering is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science and Engineering.
The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECjvm98 benchmarks is characterized by Rychlik et al. in Chapter 2. The SPECjvm98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC CPU2000 benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both temporal and spatial locality of programs. In Chapter 6, Thornock et al.
Java is an exciting new object-oriented technology. Hardware for supporting objects and other features of Java such as multithreading, dynamic linking and loading is the focus of this book. The impact of Java's features on micro-architectural resources and issues in the design of Java-specific architectures are interesting topics that require the immediate attention of the research community. While Java has become an important part of desktop applications, it is now being used widely in high-end server markets, and will soon be widespread in low-end embedded computing. Java Microarchitectures contains a collection of papers providing a snapshot of the state of the art in hardware support for Java. The book covers the behavior of Java applications, embedded processors for Java, memory system design, and high-performance single-chip architectures designed to execute Java applications efficiently.
Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will not only use individual parallel systems but a large set of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis, techniques, and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality of service (QoS), heterogeneity, and middleware systems, to mention only a few.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
The evolution of modern computers began more than 50 years ago and has been driven to a large extent by rapid advances in electronic technology during that period. The first computers ran one application (user) at a time. Without the benefit of operating systems or compilers, the application programmers were responsible for managing all aspects of the hardware. The introduction of compilers allowed programmers to express algorithms in abstract terms without being concerned with the bit-level details of their implementation. Time-sharing operating systems took computing systems one step further and allowed several users and/or applications to time-share the computing services of computers. With the advances of networks and software tools, users and applications were able to time-share the logical and physical services that are geographically dispersed across one or more networks. The Virtual Computing (VC) concept aims at providing ubiquitous open computing services in a way analogous to the services offered by Telephone and Electrical (utility) companies. The VC environment should be dynamically set up to meet the requirements of a single user and/or application. The design and development of dynamically programmable virtual computing environments is a challenging research problem. However, the recent advances in processing and network technology and software tools have successfully solved many of the obstacles facing the wide deployment of virtual computing environments, as will be outlined next.