This monograph presents examples of best practice in combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map of the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value both to specialists in bioinspired algorithms and parallel and distributed computing, and to computer science students trying to understand the present and the future of the two fields.
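To make the combination concrete, here is a minimal, hedged sketch (our own, not taken from the book): a genetic algorithm, one of the classic bioinspired methods, whose fitness evaluations are farmed out to a pool of worker processes. The OneMax objective and all parameters are illustrative assumptions.

```python
# Sketch of a bioinspired algorithm on a parallel architecture: a
# genetic algorithm whose (potentially expensive) fitness evaluations
# run in a process pool. Objective and parameters are illustrative.
import random
from multiprocessing import Pool

def fitness(individual):
    # Toy objective ("OneMax"): maximise the number of 1-bits.
    return sum(individual)

def evolve(pop_size=40, genome_len=64, generations=50, workers=4):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)        # parallel evaluation
            ranked = [ind for _, ind in
                      sorted(zip(scores, pop), key=lambda p: p[0],
                             reverse=True)]
            parents = ranked[:pop_size // 2]       # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]          # one-point crossover
                i = random.randrange(genome_len)   # point mutation
                child[i] ^= 1
                children.append(child)
            pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(sum(best), "of 64 bits set in best individual")
```

Only the fitness evaluation is parallelized here; distributing selection and variation as well is one of the design choices the monograph's contributors explore.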
Constraint Logic Programming (CLP), an area of intense research interest in recent years, extends the semantics of Prolog so that the combinatorial explosion characteristic of most problems in the field of Artificial Intelligence can be tackled efficiently. By employing solvers dedicated to each domain instead of the unification algorithm, CLP drastically reduces the search space of the problem, which leads to increased efficiency in the execution of logic programs. CLP offers the possibility of solving complex combinatorial problems efficiently while maintaining the advantages offered by the declarativeness of logic programming. The aim of this book is to present parallel and constraint logic programming, offering a basic understanding of the two fields to the reader new to the area. The first part of the book gives an introduction to the fundamental aspects of conventional logic programming, which is necessary for understanding the parts that follow. The second part includes an introduction to parallel logic programming and the architectures and implementations proposed in the area. Finally, the third part presents the principles of constraint logic programming. The last two parts also include descriptions of the supporting facilities for the two paradigms in two popular systems: ECLiPSe and SICStus. These platforms have been selected mainly because they offer both parallel and constraint features. Annotated and explained examples are also included in the relevant parts, offering a valuable guide and a first practical experience to the reader. Finally, applications of the covered paradigms are presented. The authors felt that a book of this kind should provide some theoretical background necessary for understanding the covered logic programming paradigms, and a quick start for the reader interested in writing parallel and constraint logic programs. However, a deep theoretical treatment of the two areas is outside the scope of this book. In that sense, this book is addressed to readers interested in obtaining a working knowledge of the domain without spending the time and effort to absorb the extensive theoretical work done in the field -- namely postgraduate and advanced undergraduate students in the area of logic programming. This book fills a gap in the current bibliography, since there is no comprehensive book of this level that covers conventional, parallel, and constraint logic programming. Parallel and Constraint Logic Programming: An Introduction to Logic, Parallelism and Constraints is appropriate for an advanced course on Logic Programming or Constraints, and as a reference for practitioners and researchers in industry.
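As a rough illustration of why domain solvers shrink the search space, the following Python sketch (ours, not from the book) prunes finite domains for the constraints X + Y = 10 and X < Y before enumerating anything; real CLP systems such as ECLiPSe and SICStus implement far more sophisticated propagation.

```python
# Minimal sketch of the finite-domain idea behind CLP: constraint
# propagation removes inconsistent values from variable domains before
# any enumeration, in contrast to generate-and-test over the full
# space. Constraints and domains are illustrative assumptions.

def prune(dx, dy):
    """Repeatedly drop values that cannot satisfy X + Y = 10 and X < Y."""
    changed = True
    while changed:
        changed = False
        nx = {x for x in dx if any(x + y == 10 and x < y for y in dy)}
        ny = {y for y in dy if any(x + y == 10 and x < y for x in dx)}
        if nx != dx or ny != dy:
            dx, dy, changed = nx, ny, True
    return dx, dy

dx, dy = prune(set(range(1, 10)), set(range(1, 10)))
print(sorted(dx), sorted(dy))   # domains shrink to {1..4} and {6..9}
solutions = [(x, y) for x in dx for y in dy if x + y == 10 and x < y]
print(solutions)                # enumeration now searches 4x4, not 9x9
```

The pruned domains leave a 16-candidate search instead of 81, which is the effect the blurb describes: the dedicated solver does the cutting that unification alone cannot.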
This book describes several techniques to address variation-related design challenges for analog blocks in mixed-signal systems-on-chip. The methods presented are results of recent research involving receiver front-end circuits, baseband filter linearization, and data conversion. These circuit-level techniques are described together with their relationships to emerging system-level calibration approaches that tune the performance of analog circuits with digital assistance or control. Coverage also includes a strategy for using on-chip temperature sensors to measure the signal power and linearity characteristics of analog/RF circuits, as demonstrated by test chip measurements. The book:
* describes a variety of variation-tolerant analog circuit design examples, including RF front-ends, high-performance ADCs and baseband filters;
* includes built-in testing techniques, linked to current industrial trends;
* balances digitally-assisted performance tuning with analog performance tuning and mismatch reduction approaches;
* describes theoretical concepts as well as experimental results for test chips designed with variation-aware techniques.
Formal Methods for Open Object-Based Distributed Systems presents the leading edge in several related fields, specifically object-oriented programming, open distributed systems and formal methods for object-oriented systems. With increasing industry support in these areas, this book captures the most up-to-date information on the subject. Many topics are discussed, including the following important areas: object-oriented design and programming; formal specification of distributed systems; open distributed platforms; types, interfaces and behaviour; formalisation of object-oriented methods. This volume comprises the proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS), sponsored by the International Federation for Information Processing (IFIP), which was held in Florence, Italy, in February 1999. Formal Methods for Open Object-Based Distributed Systems is suitable as a secondary text for graduate-level courses in computer science and telecommunications, and as a reference for researchers and practitioners in industry, commerce and government.
LOTOS (Language Of Temporal Ordering Specification) became an international standard in 1989, although application of preliminary versions of the language to communication services and protocols of the ISO/OSI family dates back to 1984. This history of use made it apparent that more advantages than the pure production of standard reference documents were to be expected from such formal description techniques. LOTOSphere: Software Development with LOTOS describes in depth a five-year project that moved LOTOS out of the ISO tower into software engineering practice. LOTOS became a vehicle for efficient, yet formally based, industrial software specification, design, verification, implementation and testing. The book is divided into six parts. The first introduces the reader to LOTOS and the LOTOSphere project. The five remaining parts each treat an important phase of the software development life cycle using LOTOS. This is the first book to give a comprehensive treatment of the use of these formal description techniques in a software engineering environment. It will thus be a valuable reference for researchers and software developers and can also be used as a text for an advanced course on the subject.
For some years, the specification of software and hardware systems has been influenced not only by algebraic methods but also by new developments in logic. These new developments in logic are partly based on the use of algorithmic techniques in deduction and proving methods, but are also due to new theoretical advances, to a great extent stimulated by computer science, which have led to new types of logic and new logical calculi. The new techniques, methods and tools from logic, combined with algebra-based ones, offer very powerful and useful tools for the computer scientist, which may soon become practical for commercial use, particularly where more powerful specification tools are needed for concurrent and distributed systems. This volume contains papers based on lectures by leading researchers which were originally given at an international summer school held in Marktoberdorf in 1991. The papers aim to give a foundation for combining logic and algebra for the purposes of specification under the aspects of automated deduction, proving techniques, concurrency and logic, abstract data types and operational semantics, and constructive methods.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the centre of European research in information processing systems, so the CEC funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers from the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node, with its software development environments, new distributed system aspects, and new hardware extensions based on the INMOS T9000 processor. In the second part, we review real applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation, and enhanced parallel and distributed numerical methods on the T.Node.
The rapid development of optical fiber transmission technology has created the possibility of constructing digital networks that are as ubiquitous as the current voice network but which can carry video, voice, and data in massive quantities. How and when such networks will evolve, who will pay for them, and what new applications will use them is anyone's guess. There appears to be no doubt, however, that the trend in telecommunication networks is toward far greater transmission speeds and toward greater heterogeneity in the requirements of different applications. This book treats some of the central problems involved in these networks of the future. First, how does one switch data at speeds orders of magnitude faster than those of existing networks? This problem has roots both in classical switching for telephony and in switching for packet networks. There are a number of new twists here, however. The first is that the high speeds necessitate highly parallel processing and place a high premium on computational simplicity. The second is that the required data speeds and allowable delays of different applications differ by many orders of magnitude. The third is that it might be desirable to support both point-to-point applications and applications involving broadcast from one source to a large set of destinations.
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University -- results based upon the principles in my dissertation. The terminology dataflow software pipelining has been consistently used since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem for acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
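The balancing idea can be shown with a small sketch (ours, deliberately simplified): assign each node of an acyclic dataflow graph a stage equal to its longest distance from the inputs, then pad each edge with FIFO buffers so that all paths into a node have equal length. Minimising the total buffer count is the linear-programming/network-flow problem the book actually solves; this greedy placement shows only the balancing constraint itself.

```python
# Sketch of balancing an acyclic dataflow graph for full pipelining:
# stage(v) is the longest distance from any input to v, and each edge
# (u, v) needs stage(v) - stage(u) - 1 buffers so all paths match.
# Graph and placement rule are illustrative, not the book's optimal
# (network-flow) formulation.
from collections import defaultdict
from functools import lru_cache

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("a", "d")]

preds = defaultdict(list)
for u, v in edges:
    preds[v].append(u)

@lru_cache(maxsize=None)
def stage(v):
    # Longest distance (in edges) from any graph input to node v.
    return 0 if not preds[v] else 1 + max(stage(u) for u in preds[v])

for u, v in edges:
    slack = stage(v) - stage(u) - 1   # FIFO slots to insert on (u, v)
    print(f"edge {u}->{v}: insert {slack} buffer(s)")
```

On this graph only the shortcut edge a->d needs a buffer; an optimal solver would minimise total buffers across all feasible placements, which is where the reduction to network flow matters.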
Business Component-Based Software Engineering, an edited volume, aims to complement other reputable books on CBSE by stressing how components are built for large-scale applications, within dedicated development processes and for easy and direct combination. The book emphasizes these three facets and offers a complete overview of recent progress. The projects and work explained herein will prompt graduate students, academics, software engineers, project managers and developers to adopt and apply new component development methods gained from and validated by the authors. The authors are academics and professionals, experts in the field, who introduce the state of the art in CBSE from their shared experience of working on the same projects. Business Component-Based Software Engineering is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science and Engineering.
The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECjvm98 benchmarks is characterized by Rychlik et al. in Chapter 2. SPECjvm98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC CPU2000 benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umn/. The main contribution of Chapter 5 is the proposal of a new measure, the locality surface, to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both the spatial and temporal locality of programs. In Chapter 6, Thornock et al.
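For readers unfamiliar with locality metrics, the sketch below computes LRU stack (reuse) distances over a toy address trace. This is a standard one-dimensional locality measure, not the three-dimensional locality surface of Chapter 5; it is included only to illustrate the kind of trace analysis such characterizations involve.

```python
# Hedged sketch of a classic locality metric: the LRU stack (reuse)
# distance of a reference is the number of distinct addresses touched
# since the previous reference to the same address. Trace is made up.
def reuse_distances(trace):
    stack = []                    # most recently used address at the end
    for addr in trace:
        if addr in stack:
            dist = len(stack) - 1 - stack.index(addr)
            stack.remove(addr)
        else:
            dist = float("inf")   # first touch: cold reference
        stack.append(addr)
        yield addr, dist

trace = ["A", "B", "C", "A", "B", "B", "D", "A"]
for addr, dist in reuse_distances(trace):
    print(addr, dist)
```

A histogram of these distances is, in effect, a one-dimensional slice of the richer surface the chapter proposes: small distances indicate strong temporal locality, large ones indicate streaming behavior.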
Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will use not only individual parallel systems but large sets of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis, techniques, and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality of service (QoS), heterogeneity, and middleware systems, to mention only a few.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
The Second International Workshop on Cooperative Internet Computing (CIC2002) brought together researchers, academics, and industry practitioners who are involved and interested in the development of advanced and emerging cooperative computing technologies. Cooperative computing is an important computing paradigm that enables different parties to work together towards a predefined non-trivial goal. It encompasses important technological areas like computer-supported cooperative work, workflow, computer-assisted design and concurrent programming. As technologies continue to advance and evolve, there is an increasing need to research and develop new classes of middleware and applications that leverage the combined benefits of the Internet and the Web to provide users and programmers with a highly interactive and robust cooperative computing environment. It is the aim of this forum to promote close interactions and exchange of ideas among researchers, academics and practitioners on state-of-the-art research in all of these exciting areas. We have partnered with Kluwer Academic Press this year to bring you a book compilation of the papers that were presented at the CIC2002 workshop. The importance of the research area is reflected both in the quality and quantity of the submitted papers, where each paper was reviewed by at least three PC members. As a result, we were able to accept only 14 papers for full presentation at the workshop, while having to reject several excellent papers due to the limitations of the program schedule.
High Performance Computing Systems and Applications contains fully refereed papers from the 15th Annual Symposium on High Performance Computing. These papers cover both fundamental and applied topics in HPC: parallel algorithms, distributed systems and architectures, distributed memory and performance, high level applications, tools and solvers, numerical methods and simulation, advanced computing systems, and the emerging area of computational grids. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book provides a comprehensive survey of recent progress in the design and implementation of Networks-on-Chip. It addresses a wide spectrum of on-chip communication problems, ranging from the physical and network layers to the application layer. Specific topics that are explored in detail include packet routing, resource arbitration, error control/correction, application mapping, and communication scheduling. Additionally, a novel bi-directional communication channel NoC (BiNoC) architecture is described in detail. The book:
* is written for practicing engineers in need of practical knowledge about the design and implementation of networks-on-chip;
* includes tutorial-like details to introduce readers to a diverse range of NoC designs, as well as in-depth analysis for designers with NoC experience to explore advanced issues;
* describes a variety of on-chip communication architectures, including a novel bi-directional communication channel NoC.
From the Foreword: "Overall this book shows important advances over the state of the art that will affect future system design as well as R&D in tools and methods for NoC design. It represents an important reference point for both designers and electronic design automation researchers and developers." -- Giovanni De Micheli
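As a baseline example of the packet routing such architectures perform, the sketch below implements deterministic XY routing on a 2-D mesh. This is a generic textbook scheme, not the BiNoC mechanism described in the book.

```python
# Hedged sketch of deterministic XY routing, a common baseline for
# packet routing on a 2-D mesh NoC: traverse the X dimension first,
# then Y. Deadlock-free on a mesh because it never turns from Y back
# to X. Coordinates and topology are illustrative.
def xy_route(src, dst):
    """Yield the sequence of router coordinates from src to dst."""
    x, y = src
    dx, dy = dst
    yield (x, y)
    while x != dx:                 # X dimension first
        x += 1 if dx > x else -1
        yield (x, y)
    while y != dy:                 # then Y
        y += 1 if dy > y else -1
        yield (x, y)

print(list(xy_route((0, 0), (2, 3))))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```

Adaptive and bi-directional schemes such as BiNoC improve on this baseline by reallocating channel bandwidth to match traffic, at the cost of more complex arbitration.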
This book constitutes the refereed proceedings of the 25th International Conference on Parallel Computational Fluid Dynamics, ParCFD 2013, held in Changsha, China, in May 2013. The 35 revised full papers presented were carefully reviewed and selected from more than 240 submissions. The papers address issues such as parallel algorithms, developments in software tools and environments, unstructured adaptive mesh applications, industrial applications, atmospheric and oceanic global simulation, interdisciplinary applications and evaluation of computer architectures and software environments.
Memory Issues in Embedded Systems-on-Chip: Optimizations and Explorations is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional microcontroller-centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since the book illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.
Cooperating Heterogeneous Systems provides an in-depth introduction to the issues and techniques surrounding the integration and control of diverse and independent software components. Organizations increasingly rely upon diverse computer systems to perform a variety of knowledge-based tasks. This presents technical issues of interoperability and integration, as well as philosophical issues of how cooperation and interaction between computational entities is to be realized. Cooperating systems are systems that work together towards a common end. The concepts of cooperation must be realized in technically sound system architectures, having a uniform meta-layer between knowledge sources and the rest of the system. The layer consists of a family of interpreters, one for each knowledge source, and meta-knowledge. A system architecture to integrate and control diverse knowledge sources is presented. The architecture is based on the meta-level properties of the logic programming language Prolog. An implementation of the architecture is described, a Framework for Logic Programming Systems with Distributed Execution (FLiPSiDE). Knowledge-based systems play an important role in any up-to-date arsenal of decision support tools. The tremendous growth of computer communications infrastructure has made distributed computing a viable option, and often a necessity in geographically distributed organizations. It has become clear that to take knowledge-based systems to their next useful level, it is necessary to get independent knowledge-based systems to work together, much as we put together ad hoc work groups in our organizations to tackle complex problems. The book is for scientists and software engineers who have experience in knowledge-based systems and/or logic programming and seek a hands-on introduction to cooperating systems. Researchers investigating autonomous agents, distributed computation, and cooperating systems will find fresh ideas and new perspectives on well-established approaches to control, organization, and cooperation.
Modern multimedia systems are becoming increasingly multiprocessor and heterogeneous to match the high-performance, low-power demands placed on them by the large number of applications they run. The concurrent execution of these applications causes interference and unpredictability in the performance of these systems. In Multimedia Multiprocessor Systems, an analysis mechanism is presented to accurately predict the performance of multiple applications executing concurrently. With high consumer demand, time-to-market has become significantly shorter. To cope with the complexity of designing such systems, an automated design flow is needed that can generate systems from a high-level architectural description, making them less error-prone and faster to produce. Such a design methodology is presented for multiple use-cases -- combinations of active applications. A resource manager is also presented to manage the various resources in the system, and to achieve the goals of performance prediction, admission control and budget enforcement.
Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes in the object-oriented programming style. ARCH is written in C++. The book describes the built-in classes and illustrates their use through several template application cases in fields of interest: distributed algorithms (global completion detection, distributed process serialization), parallel combinatorial optimization (the A* procedure), and parallel image processing (segmentation by region growing). It shows how new application-level distributed data types -- such as a distributed tree and a distributed graph -- can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration are available via the Internet. The material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH can be run on Unix/Linux as well as Windows NT-based platforms. Current installations include the IBM SP2, the Cray T3E, the Intel Paragon, and PC networks under Linux or Windows NT. The book is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has been using ARCH for several years as a medium for teaching parallel and network programming. Teachers can employ the library for the same purpose, while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. The book is suitable as a secondary text for graduate-level courses on data communications and networks, programming languages, algorithms and computational theory, and distributed computing, and as a reference for researchers and practitioners in industry.
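ARCH's own C++ classes are not reproduced here, but the plain MPI message passing it builds on looks like the following; the use of the mpi4py bindings is our assumption, made only to keep these examples in a single language.

```python
# Plain point-to-point MPI message passing (the substrate ARCH
# extends), expressed with the mpi4py bindings. Message contents and
# tags are illustrative.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"msg": "hello", "step": 1}, dest=1, tag=11)
    reply = comm.recv(source=1, tag=22)
    print("rank 0 got:", reply)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    comm.send("ack " + data["msg"], dest=0, tag=22)

# Run with: mpiexec -n 2 python demo.py
```

ARCH layers threads and object-oriented abstractions over exactly this kind of send/receive pair, so that complicated distributed control structures need not be hand-coded at this level.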
Internet heterogeneity is driving a new challenge in application development: adaptive software. Together with the increased Internet capacity and new access technologies, network congestion and the use of older technologies, wireless access, and peer-to-peer networking are increasing the heterogeneity of the Internet. Applications should provide gracefully degraded levels of service when network conditions are poor, and enhanced services when network conditions exceed expectations. Existing adaptive technologies, which are primarily end-to-end or proxy-based and often focus on a single deficient link, can perform poorly in heterogeneous networks. Instead, heterogeneous networks frequently require multiple, coordinated, and distributed remedial actions. Conductor: Distributed Adaptation for Heterogeneous Networks describes a new approach to graceful degradation in the face of network heterogeneity - distributed adaptation - in which adaptive code is deployed at multiple points within a network. The feasibility of this approach is demonstrated by Conductor, a middleware framework that enables distributed adaptation of connection-oriented, application-level protocols. By adapting protocols, Conductor provides application-transparent adaptation, supporting both existing applications and applications designed with adaptation in mind. The book introduces new techniques that enable distributed adaptation, making it automatic, reliable, and secure. In particular, we introduce the notion of semantic segmentation, which maintains exactly-once delivery of the semantic elements of a data stream while allowing the stream to be arbitrarily adapted in transit. We also introduce a secure architecture for automatic adaptor selection, protecting user data from unauthorized adaptation. These techniques are described both in the context of Conductor and in the broader context of distributed systems. Finally, this book presents empirical evidence from several case studies indicating that distributed adaptation can allow applications to degrade gracefully in heterogeneous networks, providing a higher quality of service to users than other adaptive techniques. Further, experimental results indicate that the proposed techniques can be employed without excessive cost. Thus, distributed adaptation is both practical and beneficial. Conductor: Distributed Adaptation for Heterogeneous Networks is designed to meet the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in computer science.
The emphasis of this text is on data networking, internetworking and distributed computing issues. The material surveys recent work in the area of satellite networks, introduces certain state-of-the-art technologies, and presents recent research results in these areas.
Under Quality of Service (QoS) routing, paths for flows are selected based upon knowledge of resource availability at network nodes and the QoS requirements of flows. Proposed QoS routing schemes differ in the way they gather information about the network state and select paths based on this information. We broadly categorize these schemes into best-path routing and proportional routing. Best-path routing schemes gather global network state information and always select the best path for an incoming flow based on this global view. Proportional routing schemes, on the other hand, apportion incoming flows among a set of candidate paths. We have shown that it is possible to compute near-optimal proportions using only locally collected information. Furthermore, a few good candidate paths can be selected using infrequently exchanged global information, and thus with minimal communication overhead. Localized Quality of Service Routing for the Internet describes these schemes in detail, demonstrating that proportional routing schemes can achieve higher throughput with lower overhead than best-path routing schemes. It first addresses the issue of finding near-optimal proportions for a given set of candidate paths based on locally collected flow statistics. The book also looks into the selection of a few good candidate paths based on infrequently exchanged global information. The final part of the book describes extensions to the proportional routing approach to provide hierarchical routing across multiple areas in a large network. Localized Quality of Service Routing for the Internet is designed for researchers and practitioners in industry, and is suitable for graduate-level students in computer science as a secondary text.
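A minimal sketch of the proportional-routing idea follows: flows are spread across candidate paths according to weights adjusted from locally observed blocking. The multiplicative update rule and all numbers here are illustrative assumptions, not the near-optimal scheme derived in the book.

```python
# Sketch of localized proportional routing: each incoming flow is
# assigned to a candidate path with probability proportional to its
# weight, and weights are nudged using only locally observed outcomes
# (per-path blocking). Update rule and blocking rates are made up.
import random

paths = ["p1", "p2", "p3"]
weights = {p: 1.0 for p in paths}          # start with equal proportions

def pick_path():
    total = sum(weights.values())
    return random.choices(paths, [weights[p] / total for p in paths])[0]

def report_outcome(path, blocked):
    # Shift proportion away from paths that block flows; 0.9/1.1 is an
    # arbitrary learning rate, 0.05 a floor to keep paths probeable.
    weights[path] = max(0.05, weights[path] * (0.9 if blocked else 1.1))

# Toy simulation: p2 blocks 60% of its flows, the others 10%.
block_prob = {"p1": 0.1, "p2": 0.6, "p3": 0.1}
for _ in range(2000):
    p = pick_path()
    report_outcome(p, random.random() < block_prob[p])

total = sum(weights.values())
print({p: round(weights[p] / total, 2) for p in paths})
```

After the simulation the proportions have migrated toward the two lightly loaded paths, which is the qualitative behavior the book establishes rigorously: near-optimal proportions from purely local statistics, with no per-flow global state exchange.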
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to security and privacy of information in systems and applications, including:
* access control models;
* role- and constraint-based access control;
* distributed systems;
* information warfare and intrusion detection;
* relational databases;
* implementation issues;
* multilevel systems;
* new application areas, including XML.
Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara-on-the-Lake, Ontario, Canada.