Complex Systems and Cognitive Processes (Paperback, Softcover reprint of the original 1st ed. 1990)
Roberto Serra, Gianni Zanarini
R1,386 Discovery Miles 13 860 Ships in 18 - 22 working days

This volume describes our intellectual path from the physics of complex systems to the science of artificial cognitive systems. It was exciting to discover that many of the concepts and methods which succeed in describing the self-organizing phenomena of the physical world are relevant also for understanding cognitive processes. Several nonlinear physicists have felt the fascination of this discovery in recent years. In this volume, we limit our discussion to artificial cognitive systems, without attempting to model either the cognitive behaviour or the nervous structure of humans or animals. On the one hand, such artificial systems are important per se; on the other hand, it can be expected that their study will shed light on some general principles which are relevant also to biological cognitive systems. The main purpose of this volume is to show that nonlinear dynamical systems have several properties which make them particularly attractive for reaching some of the goals of artificial intelligence. The enthusiasm mentioned above must, however, be qualified by a critical consideration of the limitations of the dynamical systems approach. Understanding cognitive processes is a tremendous scientific challenge, and the achievements reached so far allow no single method to claim that it is the only valid one. In particular, the approach based upon nonlinear dynamical systems, which is our main topic, is still in an early stage of development.

Automatic Performance Prediction of Parallel Programs (Paperback, Softcover reprint of the original 1st ed. 1996)
Thomas Fahringer
R2,652 Discovery Miles 26 520 Ships in 18 - 22 working days

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level, and the most important machine-specific parameters, including cache characteristics, communication network indices and benchmark data for computational operations, at the machine level. The material has been fully implemented as part of P3T, an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described that visualizes each program source line together with the corresponding parameter values; P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program (colors are reproduced in greyscale in this book). Performance data can be filtered and displayed at various levels of detail. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
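
As a minimal sketch of the flavor of such an estimate (not P3T's actual analysis), the model below combines two program-level parameters, number of transfers and amount of data transferred, with two machine-level parameters, message latency and link bandwidth; the function name and numbers are illustrative only:

    # Linear communication-cost model: each message pays a fixed startup
    # latency, and the payload streams at the link bandwidth.
    def estimated_transfer_time(num_messages: int, total_bytes: int,
                                latency_s: float,
                                bandwidth_bytes_per_s: float) -> float:
        return num_messages * latency_s + total_bytes / bandwidth_bytes_per_s

    # Example: 1,000 messages carrying 8 MB total over a 10 us / 1 GB/s link.
    print(estimated_transfer_time(1_000, 8_000_000, 10e-6, 1e9))  # ~0.018 s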

Synchronization in Real-Time Systems - A Priority Inheritance Approach (Paperback, Softcover reprint of the original 1st ed. 1991)
Ragunathan Rajkumar
R2,627 Discovery Miles 26 270 Ships in 18 - 22 working days

Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, on land and underwater have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multimedia applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that guarantees these timing requirements are met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
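
A minimal sketch of the priority-inheritance idea named in the title (a toy model, not the book's protocol; the Task and PriorityInheritanceLock classes are illustrative): when a high-priority task blocks on a resource held by a lower-priority task, the holder temporarily inherits the waiter's priority, so it cannot be preempted by medium-priority tasks and priority inversion stays bounded.

    class Task:
        def __init__(self, name: str, priority: int):
            self.name = name
            self.base_priority = priority
            self.priority = priority   # effective (possibly inherited)

    class PriorityInheritanceLock:
        def __init__(self):
            self.holder = None

        def acquire(self, task: Task) -> bool:
            if self.holder is None:
                self.holder = task
                return True
            # Blocked: boost the holder to the waiter's priority if higher.
            if task.priority > self.holder.priority:
                self.holder.priority = task.priority
            return False

        def release(self):
            self.holder.priority = self.holder.base_priority  # restore
            self.holder = None

    low, high = Task("low", 1), Task("high", 10)
    lock = PriorityInheritanceLock()
    lock.acquire(low)      # low-priority task holds the resource
    lock.acquire(high)     # high-priority task blocks...
    print(low.priority)    # 10: holder runs at the inherited priority
    lock.release()
    print(low.priority)    # 1: base priority restored on release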

Parallel Computation and Computers for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1988)
J.S. Kowalik
R4,018 Discovery Miles 40 180 Ships in 18 - 22 working days

It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik

High Performance Computing in Fluid Dynamics - Proceedings of the Summerschool on High Performance Computing in Fluid Dynamics held at Delft University of Technology, The Netherlands, June 24-28, 1996 (Paperback, Softcover reprint of the original 1st ed. 1996)
P. Wesseling
R1,411 Discovery Miles 14 110 Ships in 18 - 22 working days

This book contains the course notes of the Summerschool on High Performance Computing in Fluid Dynamics, held at Delft University of Technology, June 24-28, 1996. The lectures presented deal to a large extent with algorithmic, programming and implementation issues, as well as experiences gained so far on parallel platforms. Attention is also given to mathematical aspects, notably domain decomposition and scalable algorithms. Topics considered are: basic concepts of parallel computers, parallelization strategies, programming aspects, parallel algorithms, applications in computational fluid dynamics, the present hardware situation and developments to be expected. The book is addressed to graduate students and to researchers in industry engaged in scientific computing who have little or no experience with high performance computing but want to learn more, and/or want to port their code to parallel platforms. It is a good starting point for those who want to enter the field of high performance computing, especially if applications in fluid dynamics are envisaged.

High Performance Architecture and Grid Computing - International Conference, HPAGC 2011, Chandigarh, India, July 19-20, 2011. Proceedings (Paperback)
Archana Mantri, Suman Nandi, Gaurav Kumar, Sandeep Kumar
R2,776 Discovery Miles 27 760 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the International Conference on High Performance Architecture and Grid Computing, HPAGC 2011, held in Chandigarh, India, in July 2011. The 87 revised full papers presented were carefully reviewed and selected from 240 submissions. The papers are organized in topical sections on grid and cloud computing; high performance architecture; and information management and network security.

Mobile Computation with Functions (Paperback, Softcover reprint of the original 1st ed. 2002)
Zeliha Dilsun Kirli
R2,616 Discovery Miles 26 160 Ships in 18 - 22 working days

Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems which affect safety, security and performance is discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. The book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages defined in the subsequent chapters have their roots in these languages.
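
As a toy illustration of the core idea (not Concurrent ML, Facile or PLAN), the sketch below treats a function as the unit of mobile code: its source is shipped to a "remote" site, installed there and applied. The Site class and the in-process "network" are assumptions made for the example; real systems ship closures whose type and effect annotations are checked statically.

    import inspect

    class Site:
        """A stand-in for a remote host that can receive mobile functions."""
        def __init__(self, name: str):
            self.name = name
            self.env = {}

        def receive(self, fn_source: str, fn_name: str):
            exec(fn_source, self.env)      # install the shipped function
            return self.env[fn_name]

    def double(x):
        return 2 * x

    remote = Site("remote")
    shipped = remote.receive(inspect.getsource(double), "double")
    print(shipped(21))  # 42, evaluated at the receiving site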

Distributed Sensor Networks - A Multiagent Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
Victor Lesser, Charles L. Ortiz Jr, Milind Tambe
R4,037 Discovery Miles 40 370 Ships in 18 - 22 working days

Distributed Sensor Networks is the first book of its kind to examine the problem of distributed resource allocation in sensor networks using ideas taken from the field of multiagent systems. The field of multiagent systems has itself seen exponential growth in the past decade, and has developed a variety of techniques for distributed resource allocation. Distributed Sensor Networks contains contributions from leading international researchers describing a variety of approaches to this problem based on examples of implemented systems taken from a common distributed sensor network application; each approach is motivated, demonstrated and tested by way of a common challenge problem. The book focuses on both practical systems and their theoretical analysis, and is divided into three parts: the first part describes the common sensor network challenge problem; the second part explains the different technical approaches to the common challenge problem; and the third part provides results on the formal analysis of a number of approaches taken to address the challenge problem.

Virtual Computing - Concept, Design, and Evaluation (Paperback, Softcover reprint of the original 1st ed. 2001)
Dongmin Kim, Salim Hariri
R2,614 Discovery Miles 26 140 Ships in 18 - 22 working days

The evolution of modern computers began more than 50 years ago and has been driven to a large extent by rapid advances in electronic technology during that period. The first computers ran one application (user) at a time. Without the benefit of operating systems or compilers, the application programmers were responsible for managing all aspects of the hardware. The introduction of compilers allowed programmers to express algorithms in abstract terms without being concerned with the bit-level details of their implementation. Time-sharing operating systems took computing systems one step further and allowed several users and/or applications to time-share the computing services of computers. With the advances of networks and software tools, users and applications were able to time-share the logical and physical services that are geographically dispersed across one or more networks. The Virtual Computing (VC) concept aims at providing ubiquitous open computing services, in a way analogous to the services offered by telephone and electrical (utility) companies. The VC environment should be dynamically set up to meet the requirements of a single user and/or application. The design and development of a dynamically programmable virtual computing environment is a challenging research problem. However, recent advances in processing and network technology and software tools have successfully solved many of the obstacles facing the wide deployment of virtual computing environments, as will be outlined next.

Distributed and Parallel Systems - Cluster and Grid Computing (Paperback, Softcover reprint of the original 1st ed. 2002)
Peter Kacsuk, Dieter Kranzlmuller, Zsolt Nemeth, Jens Volkert
R2,637 Discovery Miles 26 370 Ships in 18 - 22 working days

Distributed and Parallel Systems: Cluster and Grid Computing is the proceedings of the fourth Austrian-Hungarian Workshop on Distributed and Parallel Systems, organized jointly by Johannes Kepler University, Linz, Austria, and the MTA SZTAKI Computer and Automation Research Institute.

The papers in this volume cover a broad range of research topics presented in four groups. The first one introduces cluster tools and techniques, especially the issues of load balancing and migration. Another six papers deal with grid and global computing including grid infrastructure, tools, applications and mobile computing. The next nine papers present general questions of distributed development and applications. The last four papers address a crucial issue in distributed computing: fault tolerance and dependable systems.

This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.

Robust Model-Based Fault Diagnosis for Dynamic Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Jie Chen, R.J. Patton
R7,657 Discovery Miles 76 570 Ships in 18 - 22 working days

There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. Fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications, which will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students worldwide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
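
A minimal sketch of residual generation, the mechanism named above (the model output, threshold and data are illustrative, and robustness is reduced here to a simple threshold): compare measured outputs against a model's predictions and flag a fault when the residual grows too large.

    # Residual = measurement minus model prediction; a fault is declared
    # when any residual exceeds a threshold chosen to tolerate noise.
    def residuals(measured, predicted):
        return [y - yhat for y, yhat in zip(measured, predicted)]

    def detect_fault(measured, predicted, threshold: float) -> bool:
        return any(abs(r) > threshold for r in residuals(measured, predicted))

    model_output  = [1.0, 1.1, 1.2, 1.3]   # nominal model prediction
    sensor_output = [1.0, 1.1, 1.9, 2.4]   # measurements drift away: a fault
    print(detect_fault(sensor_output, model_output, threshold=0.5))  # True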

Handbook of Electronics Manufacturing Engineering (Paperback, Softcover reprint of the original 3rd ed. 1997)
Bernie Matisoff
R5,251 Discovery Miles 52 510 Ships in 18 - 22 working days

This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.

Computer Architecture: A Minimalist Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
William F. Gilreath, Phillip A. Laplante
R3,997 Discovery Miles 39 970 Ships in 18 - 22 working days

The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized by composition. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. Features include: a comprehensive study of computer architecture using computability theory as a base; a fresh perspective on computer architecture not found in any other text; coverage of the history, theory, and practice of computer architecture from a minimalist perspective; a complete implementation of a one instruction computer; and exercises and programming assignments. Computer Architecture: A Minimalist Perspective is designed to meet the needs of a professional audience composed of researchers, computer hardware engineers, software engineers, computational theorists, and systems engineers. The book is also intended for upper-division undergraduate students and early graduate students studying computer architecture or embedded systems. It is an excellent text for use as a supplement or alternative in traditional computer architecture courses, or in courses entitled Special Topics in Computer Architecture.
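
As a minimal sketch of how a single instruction can synthesize others (the book's own machine may differ; SUBLEQ, "subtract and branch if less than or equal to zero", is a standard choice, and the memory layout below is illustrative), here ADD is built from three SUBLEQs:

    def subleq(mem, pc=0):
        """One instruction per three cells (a, b, c):
        mem[b] -= mem[a]; jump to c if the result is <= 0, else fall
        through. A negative jump target halts the machine."""
        while pc >= 0:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3

    # ADD synthesized from SUBLEQ via a zero scratch cell Z:
    # Z -= A; B -= Z (i.e. B += A); Z -= Z (clear Z and halt, c = -1).
    A, B, Z = 9, 10, 11
    mem = [A, Z, 3,   Z, B, 6,   Z, Z, -1,   40, 2, 0]
    subleq(mem)
    print(mem[B])  # 42: addition built from a single instruction type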

Ontology Learning for the Semantic Web (Paperback, Softcover reprint of the original 1st ed. 2002)
Alexander Maedche
R2,647 Discovery Miles 26 470 Ships in 18 - 22 working days

Ontology Learning for the Semantic Web explores how knowledge discovery techniques can be applied to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed here draws on a number of complementary disciplines that feed in different types of unstructured and semi-structured data, which is necessary to support a semi-automatic ontology engineering process. Ontology Learning for the Semantic Web is designed for researchers and developers of semantic web applications. It also serves as an excellent supplemental reference for advanced-level courses on ontologies and the semantic web.

Wafer Scale Integration (Paperback, Softcover reprint of the original 1st ed. 1989)
Earl E. Swartzlander Jr
R5,206 Discovery Miles 52 060 Ships in 18 - 22 working days

Wafer Scale Integration (WSI) is the culmination of the quest for larger integrated circuits. In VLSI, chips are developed by fabricating a wafer with hundreds of identical circuits, testing the circuits, dicing the wafer, and packaging the good dice. In contrast, in WSI a wafer is fabricated with several types of circuits (generally referred to as cells), with multiple instances of each cell type; the cells are tested, and good cells are interconnected to realize a system on the wafer. Since most signal lines stay on the wafer, stray capacitance is low, so that high speeds are achieved with low power consumption. For the same technology, a WSI implementation may be a factor of five faster, dissipate a factor of ten less power, and require one hundredth to one thousandth the volume. Successful development of WSI involves many overlapping disciplines, ranging from architecture to test design to fabrication (including laser linking and cutting, multiple levels of interconnection, and packaging). This book concentrates on the areas that are unique to WSI and that are, as a result, not well covered by any of the many books on VLSI design. A unique aspect of WSI is that the finished circuits are so large that there will be defects in some portions of the circuit. Accordingly, much attention must be devoted to designing architectures that facilitate fault detection and reconfiguration to circumvent the faults. Other unique aspects of WSI include fabrication technology and packaging.
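
A minimal worked example of why such defect tolerance is unavoidable (using a simple Poisson yield model and illustrative numbers, not figures from the book): the probability that a cell of area A is defect-free at defect density D is exp(-A*D), so some cells on every wafer must be expected to fail and be bypassed.

    import math

    def expected_good_cells(num_cells: int, cell_area_cm2: float,
                            defect_density_per_cm2: float) -> float:
        """Expected defect-free cells under a Poisson yield model."""
        yield_per_cell = math.exp(-cell_area_cm2 * defect_density_per_cm2)
        return num_cells * yield_per_cell

    # 400 cells of 0.5 cm^2 each at 1 defect/cm^2: only ~243 usable cells,
    # so the interconnect must route around the ~157 bad ones.
    print(round(expected_good_cells(400, 0.5, 1.0)))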

Hardware Design and Simulation in VAL/VHDL (Paperback, Softcover reprint of the original 1st ed. 1991)
Larry M. Augustin, David C. Luckham, Benoit A. Gennart, Youm Huh, A. Stanculescu
R2,666 Discovery Miles 26 660 Ships in 18 - 22 working days

The VHSIC Hardware Description Language (VHDL) provides a standard machine-processable notation for describing hardware. VHDL is the result of a collaborative effort between IBM, Intermetrics, and Texas Instruments, sponsored by the Very High Speed Integrated Circuits (VHSIC) program office of the Department of Defense, beginning in 1981. Today it is an IEEE standard (1076-1987), and several simulators and other automated support tools for it are available commercially. By providing a standard notation for describing hardware, especially in the early stages of the hardware design process, VHDL is expected to reduce both the time lag and the cost involved in building new systems and upgrading existing ones. VHDL is the result of an evolutionary approach to language development, starting with the high-level hardware description languages existing in 1981. It has a decidedly programming-language flavor, resulting both from the orientation of hardware languages of that time and from a major requirement that VHDL use Ada constructs wherever appropriate. During the 1980s there has been an increasing current of research into high-level specification languages for systems, particularly in the software area, and into new methods of utilizing specifications in systems development. This activity is worldwide and includes, for example, object-oriented design, various rigorous development methods, mathematical verification, and synthesis from high-level specifications. VAL (VHDL Annotation Language) is a simple further step in the evolution of hardware description languages, in the direction of applying new methods that have developed since VHDL was designed.

Disseminating Security Updates at Internet Scale (Paperback, Softcover reprint of the original 1st ed. 2003)
Jun Li, Peter Reiher, Gerald J. Popek
R2,621 Discovery Miles 26 210 Ships in 18 - 22 working days

Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses the problems of delivering security updates quickly and reliably to very large numbers of hosts. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers from which individual nodes can pull missed security updates. The book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. It presents experimental measurements of a prototype implementation of "Revere" gathered using a large-scale-oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. The book will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by those designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.
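
A toy sketch of the push-and-pull pattern described above (not Revere's actual protocol; the Repository and Node classes, version numbering and payloads are all illustrative): updates are pushed in sequence, and a node that sees a gap in version numbers pulls the missed updates from a repository server.

    class Repository:
        """Repository server holding every published update by version."""
        def __init__(self):
            self.updates = {}

        def store(self, version: int, payload: str):
            self.updates[version] = payload

        def pull(self, since: int):
            return sorted((v, p) for v, p in self.updates.items() if v > since)

    class Node:
        """End node: applies pushed updates, pulls any it missed."""
        def __init__(self, repo: Repository):
            self.repo, self.version = repo, 0

        def push(self, version: int, payload: str):
            if version > self.version + 1:        # gap: a push was missed
                for v, p in self.repo.pull(self.version):
                    self.apply(v, p)
            elif version == self.version + 1:
                self.apply(version, payload)

        def apply(self, version: int, payload: str):
            self.version = version
            print(f"applied update {version}: {payload}")

    repo = Repository()
    for v, p in [(1, "sig-db v1"), (2, "sig-db v2"), (3, "sig-db v3")]:
        repo.store(v, p)
    node = Node(repo)
    node.push(1, "sig-db v1")   # delivered by push
    node.push(3, "sig-db v3")   # push 2 was lost: node pulls 2 and 3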

Neural Circuits and Networks - Proceedings of the NATO Advanced Study Institute on Neuronal Circuits and Networks, held at the Ettore Majorana Center, Erice, Italy, June 15-27, 1997 (Paperback, Softcover reprint of the original 1st ed. 1998)
Vincent Torre, John Nicholls
R2,644 Discovery Miles 26 440 Ships in 18 - 22 working days

The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at a cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed, the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks", held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single-cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach to experimental and theoretical aspects of brain function, combining different techniques and methodologies.

Compositional Verification of Concurrent and Real-Time Systems (Paperback, Softcover reprint of the original 1st ed. 2002)
Eric Y.T. Juan, Jeffrey J.P. Tsai
R2,633 Discovery Miles 26 330 Ships in 18 - 22 working days

With the rapid growth of networking and high computing power, the demand for large-scale and complex software systems has increased dramatically. Many such software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high-assurance properties. In order to comply with high-assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error-prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.

Foundations of Real-Time Computing: Formal Specifications and Methods (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M. Van Tilborg, Gary M. Koob
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

This volume contains a selection of papers that focus on the state of the art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant unit volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Multicasting on the Internet and its Applications (Paperback, Softcover reprint of the original 1st ed. 1998)
Sanjoy Paul
R4,055 Discovery Miles 40 550 Ships in 18 - 22 working days

This book covers the entire spectrum of multicasting on the Internet, from link-layer to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, and group membership and total ordering in multicast groups. In-depth consideration is given to IP multicast routing protocols such as DVMRP, MOSPF, PIM and CBT, quality of service issues at the network layer using RSVP and ST-2, and the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of the MBone are described, real-time multicast transport issues are addressed, and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast, with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.
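
As a minimal concrete example of the network-layer mechanism underlying all of this (standard BSD-socket group membership, not a protocol from the book; the group address and port are illustrative), a receiver joins a multicast group and prints the datagrams sent to it:

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5000   # administratively scoped group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # IP_ADD_MEMBERSHIP takes the group address plus the local interface.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(1500)
        print(sender, data)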

Computers in Building - Proceedings of the CAADfutures'99 Conference, the Eighth International Conference on Computer Aided Architectural Design Futures, held at Georgia Institute of Technology, Atlanta, Georgia, USA, June 7-8, 1999 (Paperback, Softcover reprint of the original 1st ed. 1999)
Godfried Augenbroe, Charles Eastman
R4,044 Discovery Miles 40 440 Ships in 18 - 22 working days

Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in computer-aided architectural design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This year's proceedings, the eighth in the series, from the conference held at the Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computers in architecture is still a young field, with many exciting results emerging out of both a greater understanding of the human processes and information processing needed to support design, and the continuously expanding capabilities of digital technology.

Still Image Compression on Parallel Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 1999)
Savitri Bevinakoppa
R3,995 Discovery Miles 39 950 Ships in 18 - 22 working days

Still Image Compression on Parallel Computer Architectures investigates the application of parallel-processing techniques to digital image compression. Digital image compression is used to reduce the number of bits required to store an image in computer memory and/or transmit it over a communication link. Over the past decade, advancements in technology have spawned many applications of digital imaging, such as photo videotex, desktop publishing, graphic arts, color facsimile, newspaper wire phototransmission and medical imaging. For many other contemporary applications, such as distributed multimedia systems, rapid transmission of images is necessary. Dollar cost as well as time cost of transmission and storage tend to be directly proportional to the volume of data. Therefore, application of digital image compression techniques becomes necessary to minimize costs. A number of digital image compression algorithms have been developed and standardized. With the success of these algorithms, research effort is now directed towards improving implementation techniques. The Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) are international organizations which have developed digital image compression standards. Hardware (VLSI chips) implementing the JPEG image compression algorithm is available. Such hardware is specific to image compression only and cannot be used for other image processing applications. A flexible means of implementing digital image compression algorithms is still required. An obvious method of processing different imaging applications on general-purpose hardware platforms is to develop software implementations. JPEG uses an 8 x 8 block of image samples as the basic element for compression. These blocks are processed sequentially. There is always the possibility of having similar blocks in a given image. If similar blocks in an image are located, then repeated compression of these blocks is not necessary. By locating similar blocks in the image, the speed of compression can be increased and the size of the compressed image can be reduced. Based on this concept, an enhancement to the JPEG algorithm is proposed, called the Block Comparator Technique (BCT). Still Image Compression on Parallel Computer Architectures is designed for advanced students and practitioners of computer science. This comprehensive reference provides a foundation for understanding digital image compression techniques and parallel computer architectures.
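
A minimal sketch of the block-comparison idea just described (only the duplicate-detection logic; compress_block is a placeholder, not the book's BCT or real JPEG coding): hash each 8 x 8 block so that identical blocks are compressed only once and the result is reused.

    def compress_block(block):
        return b"<compressed>"        # stand-in for real JPEG block coding

    def compress_image(blocks):
        cache, output = {}, []
        for block in blocks:
            key = bytes(block)        # a block is 64 samples (8 x 8)
            if key not in cache:      # compress only blocks not seen before
                cache[key] = compress_block(block)
            output.append(cache[key])
        return output, len(cache)

    blocks = [[10] * 64, [10] * 64, [20] * 64]   # two identical blocks
    _, unique = compress_image(blocks)
    print(unique)   # 2: the duplicate block was compressed only once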

Performance and Reliability Analysis of Computer Systems - An Example-Based Approach Using the SHARPE Software Package (Paperback, Softcover reprint of the original 1st ed. 1996)
Robin A. Sahner, Kishor Trivedi, Antonio Puliafito
R4,046 Discovery Miles 40 460 Ships in 18 - 22 working days

Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package provides a variety of probabilistic, discrete-state models used to assess the reliability and performance of computer and communication systems. The models included are combinatorial reliability models (reliability block diagrams, fault trees and reliability graphs), directed acyclic task precedence graphs, Markov and semi-Markov models (including Markov reward models), product-form queueing networks and generalized stochastic Petri nets. A practical approach to system modeling is followed; all of the examples described are solved and analyzed using the SHARPE tool. In structuring the book, the authors have been careful to provide the reader with a methodological approach to analytical modeling techniques. These techniques are seen not as alternatives but rather as an integral part of a single process of assessment which, by hierarchically combining results from different kinds of models, makes it possible to use state-space methods for those parts of a system that require them and non-state-space methods for the more well-behaved parts of the system. The SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) package is the 'toolchest' that allows the authors to specify stochastic models easily and solve them quickly, adopting model hierarchies and very efficient solution techniques. All the models described in the book are specified and solved using the SHARPE language; its syntax is described, and the source code of almost all the examples discussed is provided. Audience: suitable for use in advanced-level courses covering reliability and performance of computer and communication systems, and for researchers and practicing engineers whose work involves modeling of system performance and reliability.
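
A minimal worked example of the simplest model family mentioned above, the reliability block diagram (component reliabilities are illustrative, and this hand computation stands in for what SHARPE derives from a textual model specification): series blocks all must work, while parallel blocks need at least one working path.

    from math import prod

    def series(*reliabilities: float) -> float:
        return prod(reliabilities)                     # all must survive

    def parallel(*reliabilities: float) -> float:
        return 1 - prod(1 - r for r in reliabilities)  # any one suffices

    # A CPU in series with a mirrored disk pair and a duplexed network link.
    system = series(0.99, parallel(0.90, 0.90), parallel(0.95, 0.95))
    print(round(system, 4))  # 0.9776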

Cooperative Computer-Aided Authoring and Learning - A Systems Approach (Paperback, Softcover reprint of the original 1st ed. 1995)
Max Muhlhauser
R5,163 Discovery Miles 51 630 Ships in 18 - 22 working days

Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer-assisted authoring and learning. Drawing on the experiences gained during the Nestor project, jointly run by the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues: cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute; authoring/learning as the central topic; and the laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment. Within this framework, the book covers four major topics which denote the most important technical domains, namely: the system kernel, based on object orientation and hypermedia; distributed multimedia support; cooperation support; and reusable instructional design support. Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.
