
Data-Driven Intelligence in Wireless Networks - Concepts, Solutions, and Applications (Hardcover)
Muhammad Khalil Afzal, Muhammad Ateeq, Sung Won Kim
R3,216 Discovery Miles 32 160 Ships in 10 - 15 working days

  • Covers details on wireless communication problems conducive to data-driven solutions
  • Provides a comprehensive account of programming languages, tools, techniques, and good practices
  • Provides an introduction to data-driven techniques applied to wireless communication systems
  • Examines data-driven techniques, performance, and design issues in wireless networks
  • Includes several case studies that examine data-driven solutions for QoS in heterogeneous wireless networks

TRON Project 1988 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,450 Discovery Miles 14 500 Ships in 18 - 22 working days

It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding various operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is truly a historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open-architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted for international use. Although there are several TRON products already on the market, in order for us to create a complete TRON world, continuous and aggressive participation from all members, together with concentration on further development, are indispensable. We, the TRON promoters, are much encouraged by such a driving force.

A VLSI Architecture for Concurrent Data Structures (Paperback, Softcover reprint of the original 1st ed. 1987)
J W Dally
R4,004 Discovery Miles 40 040 Ships in 18 - 22 working days

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
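The Gray-code distance property that VW search relies on is easy to see in a few lines. The following is an illustrative sketch only (the helper names `gray` and `gray_to_int` are invented for this example, not Dally's notation): consecutive integers map to codewords that differ in exactly one bit, which is what lets neighbouring set elements occupy physically adjacent subcubes of the binary n-cube.

```python
def gray(i):
    # Binary-reflected Gray code of integer i
    return i ^ (i >> 1)

def gray_to_int(g):
    # Inverse mapping: XOR-fold the shifted codeword back into an integer
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# Consecutive keys map to codewords one bit apart, so neighbouring
# elements of the ordered set live in adjacent nodes of the cube.
for k in range(15):
    a, b = gray(k), gray(k + 1)
    assert bin(a ^ b).count("1") == 1   # Hamming distance exactly 1
    assert gray_to_int(a) == k          # decoding recovers the key
```

This one-bit-distance property is what makes a search step correspond to a single physical hop in the cube network.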

Memory Performance of Prolog Architectures (Paperback, Softcover reprint of the original 1st ed. 1988)
Evan Tick
R4,001 Discovery Miles 40 010 Ships in 18 - 22 working days

One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

TRON Project 1989 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,432 Discovery Miles 14 320 Ships in 18 - 22 working days

It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and it is almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.

Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures (Hardcover, 2013 ed.)
Umit Y. Ogras, Radu Marculescu
R3,310 Discovery Miles 33 100 Ships in 18 - 22 working days

Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures.

In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R7,726 Discovery Miles 77 260 Ships in 18 - 22 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linkoping Institute of Technology, Sweden, in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world-leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures across the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which had been running during the previous three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject, which would be available to both the participating students and to the students of the "next generation."

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,413 Discovery Miles 14 130 Ships in 18 - 22 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and in the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

Parallel Execution of Logic Programs (Paperback, Softcover reprint of the original 1st ed. 1987)
John S. Conery
R1,373 Discovery Miles 13 730 Ships in 18 - 22 working days

This book is an updated version of my Ph.D. dissertation, The AND/OR Process Model for Parallel Interpretation of Logic Programs. The three years since that paper was finished (or so I thought then) have seen quite a bit of work in the area of parallel execution models and programming languages for logic programs. A quick glance at the bibliography here shows roughly 50 papers on these topics, 40 of which were published after 1983. The main difference between the book and the dissertation is the updated survey of related work. One of the appendices in the dissertation was an overview of a Prolog implementation of an interpreter based on the AND/OR Process Model, a simulator I used to get some preliminary measurements of parallelism in logic programs. In the last three years I have been involved with three other implementations. One was written in C and is now being installed on a small multiprocessor at the University of Oregon. Most of the programming of this interpreter was done by Nitin More under my direction for his M.S. project. The other two, one written in Multilisp and the other in Modula-2, are more limited, intended to test ideas about implementing specific aspects of the model. Instead of an appendix describing one interpreter, this book has more detail about implementation included in Chapters 5 through 7, based on a combination of ideas from the four interpreters.

High Performance Computational Methods for Biological Sequence Analysis (Paperback, Softcover reprint of the original 1st ed. 1996)
Tieng K. Yap, Ophir Frieder, Robert L. Martino
R3,996 Discovery Miles 39 960 Ships in 18 - 22 working days

High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community, allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.

A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989)
Monica S. Lam
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

This book is a revision of my Ph.D. thesis dissertation submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism; interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.

Data Organization in Parallel Computers (Paperback, Softcover Reprint)
Harry A.G. Wijshoff
R2,645 Discovery Miles 26 450 Ships in 18 - 22 working days

The organization of data is clearly of great importance in the design of high-performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
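The skewing schemes mentioned above can be illustrated with the classic diagonal scheme from the interleaved-memory literature; this is a generic textbook sketch, not one of the monograph's own schemes, and the bank count N is chosen arbitrarily:

```python
N = 5  # number of memory banks (illustrative)

def bank(i, j):
    # Diagonal skewing scheme: matrix element (i, j) is stored
    # in bank (i + j) mod N rather than the naive column j mod N.
    return (i + j) % N

# Under this mapping, every row AND every column of an N x N matrix
# touches each of the N banks exactly once, so either can be fetched
# in a single pass with no bank conflicts.
for r in range(N):
    assert sorted(bank(r, c) for c in range(N)) == list(range(N))
for c in range(N):
    assert sorted(bank(r, c) for r in range(N)) == list(range(N))
```

The naive mapping `j % N` would send an entire column to one bank; the skew spreads both access patterns, which is exactly the conflict the ILLIAC IV work set out to remove.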

Secure Electronic Voting (Paperback, Softcover reprint of the original 1st ed. 2003)
Dimitris A. Gritzalis
R2,638 Discovery Miles 26 380 Ships in 18 - 22 working days

Secure Electronic Voting is an edited volume, which includes chapters authored by leading experts in the field of security and voting systems. The chapters identify and describe the given capabilities and the strong limitations, as well as the current trends and future perspectives of electronic voting technologies, with emphasis on security and privacy. Secure Electronic Voting includes state-of-the-art material on existing and emerging electronic and Internet voting technologies, which may eventually lead to the development of adequately secure e-voting systems. This book also includes an overview of the legal framework with respect to voting, a description of the user requirements for the development of a secure e-voting system, and a discussion on the relevant technical and social concerns. Secure Electronic Voting also includes three case studies on the use and evaluation of e-voting systems in three different real-world environments.

Document Processing and Retrieval - Texpros (Paperback, Softcover reprint of the original 1st ed. 1996)
Qianhong Liu, Peter A. Ng
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

Document Processing and Retrieval: TEXPROS focuses on the design and implementation of a personal, customizable office information and document processing system called TEXPROS (a TEXt PROcessing System). TEXPROS is a personal, intelligent office information and document processing system for text-oriented documents. This system supports the storage, classification, categorization, retrieval and reproduction of documents, as well as extracting, browsing, retrieving and synthesizing information from a variety of documents. When using TEXPROS in a multi-user or distributed environment, it requires specific protocols for extracting, storing, transmitting and exchanging information. The authors have used a variety of techniques to implement TEXPROS, such as Object-Oriented Programming, Tcl/Tk, X-Windows, etc. The system can be used for many different purposes in many different applications, such as digital libraries, software documentation and information delivery. Audience: Provides in-depth, state-of-the-art coverage of information processing and retrieval, and documentation for such professionals as database specialists, information systems and software developers, and information providers.

Deductive Program Design (Paperback, Softcover reprint of the original 1st ed. 1996)
Manfred Broy
R5,193 Discovery Miles 51 930 Ships in 18 - 22 working days

Advanced research on the description of distributed systems and on design calculi for software and hardware is presented in this volume. Distinguished researchers give an overview of the latest state of the art.

Principles of Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Vijay K Garg
R4,232 Discovery Miles 42 320 Ships in 18 - 22 working days

Distributed computer systems are now widely available but, despite a number of recent advances, the design of software for these systems remains a challenging task, involving two main difficulties: the absence of a shared clock and the absence of a shared memory. The absence of a shared clock means that the concept of time is not useful in distributed systems. The absence of shared memory implies that the concept of a state of a distributed system also needs to be redefined. These two important concepts occupy a major portion of this book. Principles of Distributed Systems describes tools and techniques that have been successfully applied to tackle the problem of global time and state in distributed systems. The author demonstrates that the concept of time can be replaced by that of causality, and clocks can be constructed to provide causality information. The problem of not having a global state is alleviated by developing efficient algorithms for detecting properties and computing global functions. The author's major emphasis is in developing general mechanisms that can be applied to a variety of problems. For example, instead of discussing algorithms for standard problems, such as termination detection and deadlocks, the book discusses algorithms to detect general properties of a distributed computation. Also included are several worked examples and exercise problems that can be used for individual practice and classroom instruction. Audience: Can be used to teach a one-semester graduate course on distributed systems. Also an invaluable reference book for researchers and practitioners working on the many different aspects of distributed systems.
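The idea of replacing time with causality information, as described above, can be sketched with a minimal logical clock in the style of Lamport; this is a generic illustration of the principle, not the author's specific construction, and the class and method names are invented for the example:

```python
class LamportClock:
    # Minimal logical clock: timestamps respect causality, not wall time.
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the clock by one.
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event; the returned timestamp travels with the message.
        return self.tick()

    def receive(self, ts):
        # On delivery, jump past both our own clock and the sender's timestamp,
        # so every causally later event gets a strictly larger timestamp.
        self.time = max(self.time, ts) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t1 = p.send()        # p sends a message; its clock reads 1
t2 = q.receive(t1)   # q's clock jumps to 2, past the sender's timestamp
assert t2 > t1       # causality is reflected in the timestamp order
```

The key guarantee is one-directional: if event a causally precedes event b, then a's timestamp is smaller; the converse does not hold, which is what motivates the richer causality-tracking clocks the book develops.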

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,617 Discovery Miles 26 170 Ships in 18 - 22 working days

Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.

SBus - Information, Applications, and Experience (Paperback, Softcover reprint of the original 1st ed. 1992)
James D. Lyle
R2,699 Discovery Miles 26 990 Ships in 18 - 22 working days

Workstation and computer users have an ever increasing need for solutions that offer high performance, low cost, small footprints (space requirements), and ease of use. Also, the availability of a wide range of software and hardware options (from a variety of independent vendors) is important because it simplifies the task of expanding existing applications and stretching into new ones. The SBus has been designed and optimized within this framework, and it represents a next-generation approach to a system's I/O interconnect needs. This book is a collection of information intended to ease the task of developing and integrating new SBus-based products. The focus is primarily on hardware, due to the author's particular expertise, but firmware and software concepts are also included where appropriate. This book is based on revision B.0 of the SBus Specification. This revision has been a driving force in the SBus market longer than any other, and is likely to remain a strong influence for some time to come. As of this writing there is currently an effort (designated P1496) within the IEEE to produce a new version of the SBus specification that conforms to that group's policies and requirements. This might result in some changes to the specification, but in most cases these will be minor. Most of the information this book contains will remain timely and applicable. To help ensure this, the author has included key information about proposed or planned changes.

Microelectronics and Microsystems - Emergent Design Techniques (Paperback, Softcover reprint of the original 1st ed. 2002)
Luigi Fortuna, Giuseppe Ferla, Antonio Imbruglia
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

The book presents the best contributions, extracted from the theses written by the students who attended the second edition of the Master in Microelectronics and Systems, which was organized by the Universita degli Studi di Catania and held at the STMicroelectronics Company (Catania site) from May 2000 to January 2001. In particular, the mentioned Master has been organized among the various activities of the "Istituto Superiore di Catania per la Formazione di Eccellenza." The Institute is one of the Italian network of universities selected by MURST (Ministry University Research Scientific Technology). The first aim of the Master in Microelectronics and Systems is to increase the skills of students with the Laurea Degree in Physics or Electrical Engineering in advanced areas such as VLSI system design, high-speed low-voltage low-power circuits, and RF systems. The second aim has been to involve companies like STMicroelectronics, ACCENT and ITEL, interested in emergent microelectronics topics, in the educational program, to cooperate with the University in developing high-level research projects. Besides the tutorial activity during the teaching hours, provided by national and international researchers, a significant part of the School has been dedicated to the presentation of specific CAD tools and experiments, in order to prepare the students to solve specific problems during the stage period and in the thesis work.

Distributed Applications and Interoperable Systems II - IFIP TC6 WG6.1 Second International Working Conference on Distributed Applications and Interoperable Systems (DAIS'99), June 28-July 1, 1999, Helsinki, Finland (Paperback, Softcover reprint of the original 1st ed. 1999)
Lea Kutvonen, Hartmut Koenig, Martti Tienari
R4,060 Discovery Miles 40 600 Ships in 18 - 22 working days

Mastering interoperability in a computing environment consisting of different operating systems and hardware architectures is a key requirement facing system engineers building distributed information systems. Distributed applications are a necessity in most central application sectors of the contemporary computerized society, for instance, in office automation, banking, manufacturing, telecommunication and transportation. This book focuses on the techniques available or under development, with the goal of easing the burden of constructing reliable and maintainable interoperable information systems. The topics covered in this book include:
  • Management of distributed systems;
  • Frameworks and construction tools;
  • Open architectures and interoperability techniques;
  • Experience with platforms like CORBA and RMI;
  • Language interoperability (e.g. Java);
  • Agents and mobility;
  • Quality of service and fault tolerance;
  • Workflow and object modelling issues; and
  • Electronic commerce.
The book contains the proceedings of the International Working Conference on Distributed Applications and Interoperable Systems II (DAIS'99), which was held June 28-July 1, 1999 in Helsinki, Finland. It was sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.

Distributed and Parallel Embedded Systems - IFIP WG10.3/WG10.5 International Workshop on Distributed and Parallel Embedded... Distributed and Parallel Embedded Systems - IFIP WG10.3/WG10.5 International Workshop on Distributed and Parallel Embedded Systems (DIPES'98) October 5-6, 1998, Schloss Eringerfeld, Germany (Paperback, Softcover reprint of the original 1st ed. 1999)
Franz J. Rammig
R5,133 Discovery Miles 51 330 Ships in 18 - 22 working days

Embedded systems are becoming one of the major driving forces in computer science. Furthermore, it is the impact of embedded information technology that dictates the pace in most engineering domains. Nearly all technical products above a certain level of complexity are not only controlled but increasingly even dominated by their embedded computer systems. Traditionally, such embedded control systems have been implemented in a monolithic, centralized way. Recently, distributed solutions have been gaining increasing importance. In this approach, the control task is carried out by a number of controllers distributed over the entire system and connected by some interconnect network, like fieldbuses. Such a distributed embedded system may consist of a few controllers up to several hundred, as in today's top-range automobiles. Distribution and parallelism in embedded systems design increase the engineering challenges and require new development methods and tools. This book is the result of the International Workshop on Distributed and Parallel Embedded Systems (DIPES'98), organized by the International Federation for Information Processing (IFIP) Working Groups 10.3 (Concurrent Systems) and 10.5 (Design and Engineering of Electronic Systems). The workshop took place in October 1998 in Schloss Eringerfeld, near Paderborn, Germany, and the resulting book reflects the most recent points of view of experts from Brazil, Finland, France, Germany, Italy, Portugal, and the USA. The book is organized in six chapters: `Formalisms for Embedded System Design': IP-based system design and various approaches to multi-language formalisms. `Synthesis from Synchronous/Asynchronous Specification': Synthesis techniques based on Message Sequence Charts (MSC), StateCharts, and Predicate/Transition Nets. `Partitioning and Load-Balancing': Application in simulation models and target systems. `Verification and Validation': Formal techniques for precise verification and more pragmatic approaches to validation. `Design Environments' for distributed embedded systems and their impact on the industrial state of the art. `Object Oriented Approaches': Impact of OO-techniques on distributed embedded systems. This volume will be essential reading for computer science researchers and application developers.

Communication Systems - The State of the Art IFIP 17th World Computer Congress - TC6 Stream on Communication Systems: The State... Communication Systems - The State of the Art IFIP 17th World Computer Congress - TC6 Stream on Communication Systems: The State of the Art August 25-30, 2002, Montreal, Quebec, Canada (Paperback, Softcover reprint of the original 1st ed. 2002)
Lyman Chapin
R4,041 Discovery Miles 40 410 Ships in 18 - 22 working days

Communication Systems: The State of the Art captures the depth and breadth of the field of communication systems: -Architectures and Protocols for Distributed Systems; -Network and Internetwork Architectures; -Performance of Communication Systems; -Internet Applications Engineering; -Management of Networks and Distributed Systems; -Smart Networks; -Wireless Communications; -Communication Systems for Developing Countries; -Photonic Networking; -Communication Systems in Electronic Commerce. This volume's scope and authority present a rare opportunity for people in many different fields to gain a practical understanding of where the leading edge in communication systems lies today, and where it will be tomorrow.

Design of Reservation Protocols for Multimedia Communication (Paperback, Softcover reprint of the original 1st ed. 1996): Luca... Design of Reservation Protocols for Multimedia Communication (Paperback, Softcover reprint of the original 1st ed. 1996)
Luca Delgrossi
R4,015 Discovery Miles 40 150 Ships in 18 - 22 working days

The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitter, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols, or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System) and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.

Self-Timed Control of Concurrent Processes - The Design of Aperiodic Logical Circuits in Computers and Discrete Systems... Self-Timed Control of Concurrent Processes - The Design of Aperiodic Logical Circuits in Computers and Discrete Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Victor I. Varshavsky
R2,689 Discovery Miles 26 890 Ships in 18 - 22 working days

'Et moi, ..., si j'avais su comment en revenir, je n'y serais point alle.' (And I, ..., had I known how to return, I would never have gone.) Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.

Complex Systems and Cognitive Processes (Paperback, Softcover reprint of the original 1st ed. 1990): Roberto Serra, Gianni... Complex Systems and Cognitive Processes (Paperback, Softcover reprint of the original 1st ed. 1990)
Roberto Serra, Gianni Zanarini
R1,386 Discovery Miles 13 860 Ships in 18 - 22 working days

This volume describes our intellectual path from the physics of complex systems to the science of artificial cognitive systems. It was exciting to discover that many of the concepts and methods which succeed in describing the self-organizing phenomena of the physical world are relevant also for understanding cognitive processes. Several nonlinear physicists have felt the fascination of such discovery in recent years. In this volume, we will limit our discussion to artificial cognitive systems, without attempting to model either the cognitive behaviour or the nervous structure of humans or animals. On the one hand, such artificial systems are important per se; on the other hand, it can be expected that their study will shed light on some general principles which are relevant also to biological cognitive systems. The main purpose of this volume is to show that nonlinear dynamical systems have several properties which make them particularly attractive for reaching some of the goals of artificial intelligence. The enthusiasm which was mentioned above must however be qualified by a critical consideration of the limitations of the dynamical systems approach. Understanding cognitive processes is a tremendous scientific challenge, and the achievements reached so far allow no single method to claim that it is the only valid one. In particular, the approach based upon nonlinear dynamical systems, which is our main topic, is still in an early stage of development.
