This book is a revision of my Ph.D. dissertation, submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism, so interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.
The organization of data is clearly of great importance in the design of high-performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences both for the design of parallel computer architectures and for applications of parallel processing.
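To make the notion of a skewing scheme in the monograph above concrete, here is a minimal sketch, not drawn from the book itself; the bank count M and the function name are illustrative. It shows the classic diagonal scheme, under which every row and every column of an M x M matrix maps to all M banks, so neither access pattern causes a bank conflict.

```python
# Illustrative skewing scheme: a mapping from the cells of a 2-dimensional
# array to M memory banks. The diagonal scheme s(i, j) = (i + j) mod M
# gives conflict-free access to any full row and any full column.

M = 5  # number of memory banks (illustrative choice)

def skew(i: int, j: int) -> int:
    """Bank assigned to matrix cell (i, j) under the diagonal scheme."""
    return (i + j) % M

# Row i touches banks {skew(i, 0), ..., skew(i, M-1)} = all M banks,
# and so does column j, so neither pattern serializes on one bank.
row_banks = {skew(2, j) for j in range(M)}
col_banks = {skew(i, 3) for i in range(M)}
assert row_banks == col_banks == set(range(M))
```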
Secure Electronic Voting is an edited volume which includes chapters authored by leading experts in the field of security and voting systems. The chapters identify and describe the given capabilities and the strong limitations, as well as the current trends and future perspectives of electronic voting technologies, with emphasis on security and privacy. Secure Electronic Voting includes state-of-the-art material on existing and emerging electronic and Internet voting technologies, which may eventually lead to the development of adequately secure e-voting systems. This book also includes an overview of the legal framework with respect to voting, a description of the user requirements for the development of a secure e-voting system, and a discussion of the relevant technical and social concerns. Secure Electronic Voting also includes three case studies on the use and evaluation of e-voting systems in three different real-world environments.
Document Processing and Retrieval: TEXPROS focuses on the design and implementation of a personal, customizable office information and document processing system called TEXPROS (a TEXt PROcessing System). TEXPROS is a personal, intelligent office information and document processing system for text-oriented documents. This system supports the storage, classification, categorization, retrieval and reproduction of documents, as well as extracting, browsing, retrieving and synthesizing information from a variety of documents. When using TEXPROS in a multi-user or distributed environment, it requires specific protocols for extracting, storing, transmitting and exchanging information. The authors have used a variety of techniques to implement TEXPROS, such as Object-Oriented Programming, Tcl/Tk, X-Windows, etc. The system can be used for many different purposes in many different applications, such as digital libraries, software documentation and information delivery. Audience: Provides in-depth, state-of-the-art coverage of information processing and retrieval, and documentation for such professionals as database specialists, information systems and software developers, and information providers.
Advanced research on the description of distributed systems and on design calculi for software and hardware is presented in this volume. Distinguished researchers give an overview of the latest state of the art.
Distributed computer systems are now widely available but, despite a number of recent advances, the design of software for these systems remains a challenging task, involving two main difficulties: the absence of a shared clock and the absence of a shared memory. The absence of a shared clock means that the concept of time is not useful in distributed systems. The absence of shared memory implies that the concept of a state of a distributed system also needs to be redefined. These two important concepts occupy a major portion of this book. Principles of Distributed Systems describes tools and techniques that have been successfully applied to tackle the problem of global time and state in distributed systems. The author demonstrates that the concept of time can be replaced by that of causality, and clocks can be constructed to provide causality information. The problem of not having a global state is alleviated by developing efficient algorithms for detecting properties and computing global functions. The author's major emphasis is in developing general mechanisms that can be applied to a variety of problems. For example, instead of discussing algorithms for standard problems, such as termination detection and deadlocks, the book discusses algorithms to detect general properties of a distributed computation. Also included are several worked examples and exercise problems that can be used for individual practice and classroom instruction. Audience: Can be used to teach a one-semester graduate course on distributed systems. Also an invaluable reference book for researchers and practitioners working on the many different aspects of distributed systems.
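As a concrete illustration of the replacement of time by causality described above, the following minimal sketch implements a Lamport logical clock, one standard construction of this kind; the class and method names are ours for illustration, not the author's code.

```python
# Minimal sketch of a Lamport logical clock: a clock built to provide
# causality information rather than physical time (illustrative only).

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self) -> int:
        self.time += 1          # every local event advances the clock
        return self.time

    def send(self) -> int:
        return self.local_event()  # timestamp attached to an outgoing message

    def receive(self, msg_time: int) -> int:
        # On receipt, jump past the sender's timestamp, then tick: this
        # guarantees that if event a causally precedes event b, C(a) < C(b).
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t = p.send()   # p's clock becomes 1
q.receive(t)   # q's clock becomes max(0, 1) + 1 = 2
```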
Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
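The distinction drawn above can be made concrete with a toy sketch; the transaction names, execution times and deadlines below are invented for illustration. Two serial schedules are equally legal, yet only one meets every temporal constraint, which is exactly why a scheduler must rank legal schedules by their timing behaviour.

```python
# Illustrative sketch (not the book's model): both serial orders of two
# transactions are legal, but only one meets all completion deadlines.

from typing import List, Tuple

# (transaction id, execution time, deadline) -- hypothetical workload
txns = [("T1", 5, 6), ("T2", 3, 10)]

def meets_deadlines(order: List[Tuple[str, int, int]]) -> bool:
    now = 0
    for _, exec_time, deadline in order:
        now += exec_time            # transactions run back to back
        if now > deadline:
            return False
    return True

assert meets_deadlines([txns[0], txns[1]])      # T1 then T2: 5<=6, 8<=10
assert not meets_deadlines([txns[1], txns[0]])  # T2 then T1: T1 ends at 8 > 6
```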
Workstation and computer users have an ever-increasing need for solutions that offer high performance, low cost, small footprints (space requirements), and ease of use. Also, the availability of a wide range of software and hardware options (from a variety of independent vendors) is important because it simplifies the task of expanding existing applications and stretching into new ones. The SBus has been designed and optimized within this framework, and it represents a next-generation approach to a system's I/O interconnect needs. This book is a collection of information intended to ease the task of developing and integrating new SBus-based products. The focus is primarily on hardware, due to the author's particular expertise, but firmware and software concepts are also included where appropriate. This book is based on revision B.0 of the SBus Specification. This revision has been a driving force in the SBus market longer than any other, and is likely to remain a strong influence for some time to come. As of this writing there is an effort (designated P1496) within the IEEE to produce a new version of the SBus specification that conforms to that group's policies and requirements. This might result in some changes to the specification, but in most cases these will be minor. Most of the information this book contains will remain timely and applicable. To help ensure this, the author has included key information about proposed or planned changes.
The sheer complexity of computer systems has meant that automated reasoning, i.e. the ability of computers to perform logical inference, has become a vital component of program construction and of programming language design. This book meets the demand for a self-contained and broad-based account of the concepts, the machinery and the use of automated reasoning. The mathematical logic foundations are described in conjunction with practical application, all with the minimum of prerequisites. The approach is constructive, concrete and algorithmic: a key feature is that methods are described with reference to actual implementations (for which code is supplied) that readers can use, modify and experiment with. This book is ideally suited for those seeking a one-stop source for the general area of automated reasoning. It can be used as a reference, or as a place to learn the fundamentals, either in conjunction with advanced courses or for self study.
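To suggest the constructive, algorithmic style such a book builds up from, here is a minimal brute-force propositional tautology checker; this is our illustration, not the code supplied with the book.

```python
# Minimal sketch of the simplest automated-reasoning procedure: decide
# whether a propositional formula is a tautology by enumerating all
# truth assignments (illustrative only).

from itertools import product

def is_tautology(formula, variables) -> bool:
    """formula: a function from a truth assignment (a dict) to bool."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p, with "x -> y" encoded as (not x) or y:
peirce = lambda v: (not ((not ((not v["p"]) or v["q"])) or v["p"])) or v["p"]
print(is_tautology(peirce, ["p", "q"]))  # True
```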
The book presents the best contributions, extracted from the theses written by the students who attended the second edition of the Master in Microelectronics and Systems, which was organized by the Universita degli Studi di Catania and held at the STMicroelectronics Company (Catania site) from May 2000 to January 2001. In particular, the Master was organized among the various activities of the "Istituto Superiore di Catania per la Formazione di Eccellenza." The Institute is part of the Italian network of universities selected by MURST (Ministry of University and Scientific and Technological Research). The first aim of the Master in Microelectronics and Systems is to increase the skills of students holding the Laurea degree in Physics or Electrical Engineering in the more advanced areas, such as VLSI system design, high-speed low-voltage low-power circuits, and RF systems. The second aim has been to involve in the educational program companies like STMicroelectronics, ACCENT and ITEL, interested in emergent microelectronics topics, to cooperate with the University in developing high-level research projects. Besides the tutorial activity during the teaching hours, provided by national and international researchers, a significant part of the School has been dedicated to the presentation of specific CAD tools and experiments, in order to prepare the students to solve specific problems during the stage period and in the thesis work.
Mastering interoperability in a computing environment consisting of different operating systems and hardware architectures is a key requirement facing system engineers building distributed information systems. Distributed applications are a necessity in most central application sectors of the contemporary computerized society, for instance, in office automation, banking, manufacturing, telecommunication and transportation. This book focuses on the techniques available or under development, with the goal of easing the burden of constructing reliable and maintainable interoperable information systems. The topics covered in this book include: * Management of distributed systems; * Frameworks and construction tools; * Open architectures and interoperability techniques; * Experience with platforms like CORBA and RMI; * Language interoperability (e.g., Java); * Agents and mobility; * Quality of service and fault tolerance; * Workflow and object modelling issues; and * Electronic commerce. The book contains the proceedings of the International Working Conference on Distributed Applications and Interoperable Systems II (DAIS'99), which was held June 28-July 1, 1999 in Helsinki, Finland. It was sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Embedded systems are becoming one of the major driving forces in computer science. Furthermore, it is the impact of embedded information technology that dictates the pace in most engineering domains. Nearly all technical products above a certain level of complexity are not only controlled but increasingly even dominated by their embedded computer systems. Traditionally, such embedded control systems have been implemented in a monolithic, centralized way. Recently, distributed solutions have been gaining increasing importance. In this approach, the control task is carried out by a number of controllers distributed over the entire system and connected by some interconnect network, like fieldbuses. Such a distributed embedded system may consist of a few controllers up to several hundred, as in today's top-range automobiles. Distribution and parallelism in embedded systems design increase the engineering challenges and require new development methods and tools. This book is the result of the International Workshop on Distributed and Parallel Embedded Systems (DIPES'98), organized by the International Federation for Information Processing (IFIP) Working Groups 10.3 (Concurrent Systems) and 10.5 (Design and Engineering of Electronic Systems). The workshop took place in October 1998 in Schloss Eringerfeld, near Paderborn, Germany, and the resulting book reflects the most recent points of view of experts from Brazil, Finland, France, Germany, Italy, Portugal, and the USA. The book is organized in six chapters: 'Formalisms for Embedded System Design': IP-based system design and various approaches to multi-language formalisms. 'Synthesis from Synchronous/Asynchronous Specification': synthesis techniques based on Message Sequence Charts (MSC), StateCharts, and Predicate/Transition Nets. 'Partitioning and Load-Balancing': application in simulation models and target systems. 'Verification and Validation': formal techniques for precise verification and more pragmatic approaches to validation. 'Design Environments' for distributed embedded systems and their impact on the industrial state of the art. 'Object-Oriented Approaches': impact of OO techniques on distributed embedded systems. This volume will be essential reading for computer science researchers and application developers.
Communication Systems: The State of the Art captures the depth and breadth of the field of communication systems: -Architectures and Protocols for Distributed Systems; -Network and Internetwork Architectures; -Performance of Communication Systems; -Internet Applications Engineering; -Management of Networks and Distributed Systems; -Smart Networks; -Wireless Communications; -Communication Systems for Developing Countries; -Photonic Networking; -Communication Systems in Electronic Commerce. This volume's scope and authority present a rare opportunity for people in many different fields to gain a practical understanding of where the leading edge in communication systems lies today, and where it will be tomorrow.
The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective that is made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitter, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System), and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point alle.' (Jules Verne) 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell) 'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside) Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
As more and more devices become interconnected through the Internet of Things (IoT), there is an even greater need for this book, which explains the technology, the internetworking, and the applications that are making IoT an everyday reality. The book begins with a discussion of IoT "ecosystems" and the technology that enables them, including wireless infrastructure and service discovery protocols, integration technologies and tools, and application and analytics enablement platforms. A chapter on next-generation cloud infrastructure explains hosting IoT platforms and applications. A chapter on data analytics throws light on IoT data collection, storage, translation, real-time processing, mining, and analysis, all of which can yield actionable insights from the data collected by IoT applications. There is also a chapter on edge/fog computing. The second half of the book presents various IoT ecosystem use cases. One chapter discusses smart airports and highlights the role of IoT integration. It explains how mobile devices, mobile technology, wearables, RFID sensors, and beacons work together as the core technologies of a smart airport. Integrating these components into the airport ecosystem is examined in detail, and use cases and real-life examples illustrate this IoT ecosystem in operation. Another in-depth look is at envisioning smart healthcare systems in a connected world. This chapter focuses on the requirements, promising applications, and roles of cloud computing and data analytics. The book also examines smart homes, smart cities, and smart governments. The book concludes with a chapter on IoT security and privacy. This chapter examines the emerging security and privacy requirements of IoT environments. The security issues and an assortment of surmounting techniques and best practices are also discussed in this chapter.
This volume describes our intellectual path from the physics of complex systems to the science of artificial cognitive systems. It was exciting to discover that many of the concepts and methods which succeed in describing the self-organizing phenomena of the physical world are relevant also for understanding cognitive processes. Several nonlinear physicists have felt the fascination of such discovery in recent years. In this volume, we will limit our discussion to artificial cognitive systems, without attempting to model either the cognitive behaviour or the nervous structure of humans or animals. On the one hand, such artificial systems are important per se; on the other hand, it can be expected that their study will shed light on some general principles which are relevant also to biological cognitive systems. The main purpose of this volume is to show that nonlinear dynamical systems have several properties which make them particularly attractive for reaching some of the goals of artificial intelligence. The enthusiasm which was mentioned above must however be qualified by a critical consideration of the limitations of the dynamical systems approach. Understanding cognitive processes is a tremendous scientific challenge, and the achievements reached so far allow no single method to claim that it is the only valid one. In particular, the approach based upon nonlinear dynamical systems, which is our main topic, is still in an early stage of development.
Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
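The flavor of the machine-parameterized estimates described above can be suggested with a generic linear communication-cost model; this is a common textbook model, not P3T's actual formulas, and the latency and bandwidth figures below are invented for illustration.

```python
# Hedged sketch of a machine-parameterized communication estimate:
# transfer time grows with the number of messages (startup latency)
# and with the volume of data moved (bandwidth). Generic model only.

def transfer_time(num_transfers: int, bytes_moved: int,
                  startup_s: float = 20e-6,
                  bandwidth_Bps: float = 100e6) -> float:
    """Estimated communication time of a parallel loop, in seconds."""
    return num_transfers * startup_s + bytes_moved / bandwidth_Bps

# e.g. 1000 messages carrying 8 MB in total:
print(transfer_time(1000, 8_000_000))  # 0.02 + 0.08 ~ 0.1 s
```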
Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, on land and underwater have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multimedia applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements, and must perform these tasks in a way that guarantees those timing requirements are met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
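One standard way such timing guarantees are checked for independent periodic tasks is the classic Liu and Layland rate-monotonic utilization test. The sketch below illustrates that well-known general result; it is not necessarily the method developed in this particular book.

```python
# Rate-monotonic schedulability test (Liu & Layland): n periodic,
# independent tasks are guaranteed to meet all deadlines under
# rate-monotonic priorities if total CPU utilization does not exceed
# n * (2**(1/n) - 1). The test is sufficient, not necessary.

def rm_schedulable(tasks: list) -> bool:
    """tasks: (compute_time, period) pairs with deadline == period."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Two tasks using 25% and 40% of the CPU: 0.65 <= 2*(sqrt(2)-1) ~ 0.828
print(rm_schedulable([(1, 4), (2, 5)]))  # True
```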
It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik. Artificial intelligence (AI) computer programs can be very time-consuming.
This book contains the course notes of the Summerschool on High Performance Computing in Fluid Dynamics, held at the Delft University of Technology, June 24-28, 1996. The lectures presented deal to a large extent with algorithmic, programming and implementation issues, as well as experiences gained so far on parallel platforms. Attention is also given to mathematical aspects, notably domain decomposition and scalable algorithms. Topics considered are: basic concepts of parallel computers, parallelization strategies, programming aspects, parallel algorithms, applications in computational fluid dynamics, the present hardware situation and developments to be expected. The book is addressed to graduate students and researchers in industry engaged in scientific computing who have little or no experience with high performance computing, but who want to learn more and/or want to port their code to parallel platforms. It is a good starting point for those who want to enter the field of high performance computing, especially if applications in fluid dynamics are envisaged.
This book constitutes the refereed proceedings of the International Conference on High Performance Architecture and Grid Computing, HPAGC 2011, held in Chandigarh, India, in July 2011. The 87 revised full papers presented were carefully reviewed and selected from 240 submissions. The papers are organized in topical sections on grid and cloud computing; high performance architecture; and information management and network security.
Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of mobile code language is a particularly apt choice. A range of problems which have impact on the safety, security and performance are discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages.
Distributed Sensor Networks is the first book of its kind to examine solutions to the problem of managing distributed sensor networks using ideas taken from the field of multiagent systems. The field of multiagent systems has itself seen an exponential growth in the past decade, and has developed a variety of techniques for distributed resource allocation. Distributed Sensor Networks contains contributions from leading, international researchers describing a variety of approaches to this problem based on examples of implemented systems taken from a common distributed sensor network application; each approach is motivated, demonstrated and tested by way of a common challenge problem. The book focuses on both practical systems and their theoretical analysis, and is divided into three parts: the first part describes the common sensor network challenge problem; the second part explains the different technical approaches to the common challenge problem; and the third part provides results on the formal analysis of a number of approaches taken to address the challenge problem.
The evolution of modern computers began more than 50 years ago and has been driven to a large extent by rapid advances in electronic technology during that period. The first computers ran one application (user) at a time. Without the benefit of operating systems or compilers, the application programmers were responsible for managing all aspects of the hardware. The introduction of compilers allowed programmers to express algorithms in abstract terms without being concerned with the bit-level details of their implementation. Time-sharing operating systems took computing systems one step further and allowed several users and/or applications to time-share the computing services of computers. With the advances of networks and software tools, users and applications were able to time-share the logical and physical services that are geographically dispersed across one or more networks. The Virtual Computing (VC) concept aims at providing ubiquitous open computing services in an analogous way to the services offered by telephone and electrical (utility) companies. The VC environment should be dynamically set up to meet the requirements of a single user and/or application. The design and development of dynamically programmable virtual computing environments is a challenging research problem. However, the recent advances in processing and network technology and software tools have successfully solved many of the obstacles facing the wide deployment of virtual computing environments, as will be outlined next.
You may like...
The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback, R2,281)
Grammatical and Syntactical Approaches… by Juhyun Lee, Michael J. Ostwald (Hardcover, R5,315)
Energy-Efficient Communication… by Robert Fasthuber, Francky Catthoor, … (Hardcover, R4,705)
Constraint Decision-Making Systems in… by Santosh Kumar Das, Nilanjan Dey (Hardcover, R6,687)
Advances in Delay-Tolerant Networks… by Joel J. P. C. Rodrigues (Paperback, R4,669)