
Multiprocessing - Trade-Offs in Computation and Communication (Paperback, Softcover reprint of the original 1st ed. 1993)
Vijay K. Naik
R2,935 Discovery Miles 29 350 Ships in 10 - 15 working days

Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
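The parameterized formulation described above lends itself to a worked example. The following is a minimal Python sketch, not taken from the book: the cost model, the strip partitioning, and all constants are illustrative assumptions. It expresses the execution time of an n x n diamond DAG in terms of a computation rate and a communication rate, then picks the best processor count by scanning candidates, the kind of compile-time or run-time decision the blurb mentions.

# Illustrative sketch (assumed cost model, not the book's code): an
# architecture-independent execution-time model for an n x n diamond DAG
# split into p column strips, parameterized by per-node computation time
# t_comp and per-word communication time t_comm.

def exec_time(n, p, t_comp, t_comm):
    work = (n * n / p) * t_comp      # computation: nodes assigned per processor
    comm = (p - 1) * n * t_comm      # communication: words crossing strip boundaries
    return work + comm

def best_partition(n, max_p, t_comp, t_comm):
    # Scan candidate processor counts and keep the fastest distribution.
    return min(range(1, max_p + 1), key=lambda p: exec_time(n, p, t_comp, t_comm))

for t_comm in (0.1, 1.0, 10.0):      # slower networks favor fewer processors
    p = best_partition(1024, 64, t_comp=1.0, t_comm=t_comm)
    print(f"t_comm={t_comm}: best p={p}")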

Computer Systems and Software Engineering - State-of-the-art (Paperback, Softcover reprint of the original 1st ed. 1992)
Patrick de Wilde, Joos P.L. Vandewalle
R4,537 Discovery Miles 45 370 Ships in 10 - 15 working days

Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.

Image and Text Compression (Paperback, Softcover reprint of the original 1st ed. 1992)
James A. Storer
R4,508 Discovery Miles 45 080 Ships in 10 - 15 working days

This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of Interest:
* Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, digital communications, databases, and supercomputing.
* The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding.
* Contributors are all accomplished researchers.
* Extensive bibliography to aid researchers in the field.
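As a flavor of the dictionary (LZ) methods covered in Part 2, here is a minimal LZ78-style encoder sketched in Python. It illustrates the general technique only; it is not code from the book, and the (prefix index, character) output format is one common convention among several.

# Minimal LZ78-style dictionary coder: each output pair references the
# longest previously seen phrase plus one new character.

def lz78_encode(text):
    dictionary, phrase, out = {}, "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                                 # extend the match
        else:
            out.append((dictionary.get(phrase, 0), ch))  # (prefix index, new char)
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:                                           # flush a trailing match
        out.append((dictionary[phrase], ""))
    return out

print(lz78_encode("abababababa"))  # repeated context compresses quickly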

Synchronization Design for Digital Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Teresa H. Meng
R2,928 Discovery Miles 29 280 Ships in 10 - 15 working days

Synchronization is one of the important issues in digital system design. While other approaches have always been intriguing, up until now synchronous operation using a common clock has been the dominant design philosophy. However, we have reached the point, with advances in technology, where other options should be given serious consideration. This is because the clock periods are getting much smaller in relation to the interconnect propagation delays, even within a single chip and certainly at the board and backplane level. To a large extent, this problem can be overcome with careful clock distribution in synchronous design, and tools for computer-aided design of clock distribution. However, this places global constraints on the design, making it necessary, for example, to redesign the clock distribution each time any part of the system is changed. In this book, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital system design and in digital communications, the latter field being relevant because large propagation delays have always been a dominant consideration in design. While synchronous design is discussed and contrasted to the other techniques in Chapter 6, the dominant theme of this book is alternative approaches.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Paperback, Softcover reprint of the original 1st ed. 1993)
David E. Hudak, Santosh G. Abraham
R2,921 Discovery Miles 29 210 Ships in 10 - 15 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory access overhead in multiprocessors with a logically shared address space and physically distributed memory. This book presents techniques for reducing that overhead. Such techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs.

In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
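The load-imbalance versus communication tradeoff can be made concrete with a toy model. The Python sketch below uses an assumed cost model with invented constants; it is not the book's ADP or CPR algorithm, only an illustration of why the best partition flips between block and cyclic as communication cost changes, for a triangular loop whose iteration cost grows linearly.

# Compare block vs cyclic partitions of a loop where iteration i costs
# i units, on p processors, with an assumed per-boundary comm cost c.

def block_time(n, p, c):
    chunk = n // p
    heaviest = sum(range(n - chunk, n))   # last chunk holds the costliest iterations
    return heaviest + c                   # contiguous chunks: one neighbor exchange

def cyclic_time(n, p, c):
    heaviest = sum(range(p - 1, n, p))    # stride-p assignment: near-perfect balance
    return heaviest + c * (n // p)        # but one exchange per interleaved strip

n, p = 10_000, 16
for c in (1, 1_000, 100_000):
    better = "cyclic" if cyclic_time(n, p, c) < block_time(n, p, c) else "block"
    print(f"comm cost {c}: {better} partition wins")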

Systolic Computations (Paperback, Softcover reprint of the original 1st ed. 1992)
M.A. Frumkin
R2,975 Discovery Miles 29 750 Ships in 10 - 15 working days

'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') - Jules Verne

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell

'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotes above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.

Matrix Computations on Systolic-Type Arrays (Paperback, Softcover reprint of the original 1st ed. 1992)
Jaime Moreno, Tomas Lang
R4,490 Discovery Miles 44 900 Ships in 10 - 15 working days

Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and the impact on overall performance and cost. A system which allows for the analysis of methods for the design/mapping of matrix algorithms is also presented. This method identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much-needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options as well as by a priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (suitable for a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.
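To illustrate the kind of data movement such processor arrays rely on, here is a cycle-level Python/NumPy simulation of a toy n x n systolic array computing C = A x B. The skewed edge injection and register shifting follow the standard textbook scheme; this is an illustrative sketch, not code or a design from the book.

import numpy as np

def systolic_matmul(A, B):
    # n x n grid of multiply-accumulate cells: A values stream rightward,
    # B values stream downward, with skewed injection at the array edges.
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))               # operand registers moving right
    b_reg = np.zeros((n, n))               # operand registers moving down
    for t in range(3 * n - 2):             # cycles needed to fill and drain
        a_reg = np.roll(a_reg, 1, axis=1)  # shift right; column 0 rewritten below
        b_reg = np.roll(b_reg, 1, axis=0)  # shift down; row 0 rewritten below
        for i in range(n):
            k = t - i                      # skewing: row/column i lags by i cycles
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        C += a_reg * b_reg                 # every cell multiply-accumulates
    return C

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)
assert np.allclose(systolic_matmul(A, B), A @ B)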

Real-Time UNIX (R) Systems - Design and Application Guide (Paperback, Softcover reprint of the original 1st ed. 1991)
Borko Furht, Dan Grostick, David Gluch, Guy Rabbat, John Parker, …
R4,477 Discovery Miles 44 770 Ships in 10 - 15 working days

A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.

Arrays, Functional Languages, and Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Lenore M. Restifo Mullin; Contributions by Michael Jenkins, Gaetan Hains, Robert Bernecky, Guang R. Gao
R4,498 Discovery Miles 44 980 Ships in 10 - 15 working days

During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's Array Theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and lambda-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.

Designing TSVs for 3D Integrated Circuits (Paperback, 2013)
Nauman Khan, Soha Hassoun
R1,807 Discovery Miles 18 070 Ships in 10 - 15 working days

This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block-level behaviors and a floorplan are available. Finally, the authors explore the use of carbon nanotubes for power grid design as a futuristic alternative to copper.

Design, Analysis and Test of Logic Circuits Under Uncertainty (Hardcover, 2012)
Smita Krishnaswamy, Igor L Markov, John P. Hayes
R3,565 Discovery Miles 35 650 Ships in 10 - 15 working days

Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum- and nano-technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite the probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.

Hierarchical Scheduling in Parallel and Cluster Systems (Paperback, Softcover reprint of the original 1st ed. 2003)
Sivarama Dandamudi
R4,480 Discovery Miles 44 800 Ships in 10 - 15 working days

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access of memory to all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
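A toy cost model makes the UMA/NUMA distinction concrete. In the Python sketch below, all latency numbers are invented for illustration; the point is only that NUMA performance hinges on the fraction of accesses that hit local memory, which is why placement-aware scheduling matters.

# Toy latency model (numbers are illustrative assumptions): UMA charges
# every access the same cost; NUMA splits accesses into local and remote.

def uma_avg_latency(base=100):
    return base                           # uniform access, arbitrary time units

def numa_avg_latency(local_frac, local=100, remote=400):
    # Average cost depends on how often a processor finds data locally.
    return local_frac * local + (1 - local_frac) * remote

for f in (0.5, 0.9, 0.99):
    print(f"{f:.0%} local accesses: NUMA avg {numa_avg_latency(f):.0f}"
          f" vs UMA {uma_avg_latency()}")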

High Performance Memory Systems (Paperback, Softcover reprint of the original 1st ed. 2004)
Haldun Hadimioglu, David Kaeli, Jeffrey Kuskin, Ashwini Nanda, Josep Torrellas
R1,562 Discovery Miles 15 620 Ships in 10 - 15 working days

The State of Memory Technology: Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate by 2016. This translates to bit densities increasing at two times every two years until the introduction of 8 gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application specific integrated circuits (ASICs)). In the future, molecular devices, 64 gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the Memory Wall or Memory Gap.
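The quoted growth rates imply a rapidly widening processor-memory gap, which a few lines of arithmetic make explicit. This worked example uses only the doubling periods stated above (CPU speed doubling every 1.5 years, memory speed about every 10 years):

# Worked arithmetic for the Memory Wall: compound growth from the two
# doubling periods quoted in the description.

def growth(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for years in (5, 10, 15):
    cpu = growth(years, 1.5)
    mem = growth(years, 10)
    print(f"after {years:2d} years: CPU x{cpu:8.1f}, memory x{mem:4.1f}, "
          f"gap x{cpu / mem:7.1f}")

After 15 years the model gives a CPU speedup of about 1024x against a memory speedup of under 3x, a gap of roughly 360x, which is the divergence the blurb calls the Memory Wall.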

System-Level Validation - High-Level Modeling and Directed Test Generation Techniques (Hardcover, 2013)
Mingsong Chen, Xiaoke Qin, Heon-Mo Koo, Prabhat Mishra
R3,991 Discovery Miles 39 910 Ships in 10 - 15 working days

This book covers state-of-the-art techniques for high-level modeling and validation of complex hardware/software systems, including those with multicore architectures. Readers will learn to avoid time-consuming and error-prone validation from the comprehensive coverage of system-level validation, including high-level modeling of designs and faults, automated generation of directed tests, and efficient validation methodology using directed tests and assertions. The methodologies described in this book will help designers to improve the quality of their validation, performing as much validation as possible in the early stages of the design, while reducing the overall validation effort and cost.

A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 2003)
Ian N. Dunn, Gerard G.L. Meyer
R2,903 Discovery Miles 29 030 Ships in 10 - 15 working days

Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.

Active Middleware Services - From the Proceedings of the 2nd Annual Workshop on Active Middleware Services (Paperback, 2000 ed.)
Salim Hariri, Craig A. Lee, Cauligi S. Raghavendra
R2,941 Discovery Miles 29 410 Ships in 10 - 15 working days

The papers in this volume were presented at the Second Annual Workshop on Active Middleware Services and were selected for inclusion here by the Editors. The AMS workshop was organized with support from both the National Science Foundation and the CAT center at the University of Arizona, and was held in Pittsburgh, Pennsylvania, on August 1, 2000, in conjunction with the 9th IEEE International Symposium on High Performance Distributed Computing (HPDC-9). The explosive growth of Internet-based applications and the proliferation of networking technologies has been transforming most areas of computer science and engineering as well as computational science and commercial application areas. This opens an outstanding opportunity to explore new, Internet-oriented software technologies that will open new research and application opportunities not only for the multimedia and commercial world, but also for the scientific and high-performance computing applications community. Two emerging technologies - agents and active networks - allow increased programmability to enable bringing new services to Internet-based applications. The AMS workshop presented research results and working papers in the areas of active networks, mobile and intelligent agents, software tools for high performance distributed computing, network operating systems, and application programming models and environments. The success of an endeavor such as this depends on the contributions of many individuals. We would like to thank Dr. Frederica Darema and the NSF for sponsoring the workshop.

Ad Hoc Mobile Wireless Networks - Principles, Protocols, and Applications, Second Edition (Hardcover, 2nd edition)
Subir Kumar Sarkar, T. G. Basavaraju, C. Puttamadappa
R4,610 Discovery Miles 46 100 Ships in 12 - 17 working days

The military, the research community, emergency services, and industrial environments all rely on ad hoc mobile wireless networks because of their simple infrastructure and minimal central administration. Now in its second edition, Ad Hoc Mobile Wireless Networks: Principles, Protocols, and Applications explains the concepts, mechanisms, design, and performance of these highly valued systems.

Following an overview of wireless network fundamentals, the book explores MAC layer, routing, multicast, and transport layer protocols for ad hoc mobile wireless networks. Next, it examines quality of service and energy management systems. Additional chapters cover mobility models for multi-hop ad hoc wireless networks as well as cross-layer design issues.

Exploring Bluetooth, IrDA (Infrared Data Association), HomeRF, WiFi, WiMax, Wireless Internet, and Mobile IP, the book contains appropriate examples and problems at the end of each chapter to illustrate each concept. This second edition has been completely updated with the latest technology and includes a new chapter on recent developments in the field, including sensor networks, personal area networks (PANs), smart dress, and vehicular ad hoc networks.

Self-organized, self-configured, and self-controlled, ad hoc mobile wireless networks will continue to be valued for a range of applications, as they can be set up and deployed anywhere and anytime. This volume captures the current state of the field as well as upcoming challenges awaiting researchers.

Interlinking of Computer Networks - Proceedings of the NATO Advanced Study Institute held at Bonas, France, August 28 - September 8, 1978 (Paperback, Softcover reprint of the original 1st ed. 1979)
K.G. Beauchamp
R5,819 Discovery Miles 58 190 Ships in 10 - 15 working days

This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established - albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs and those communities where the computer resources are complementary. Consequently there is now a considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost effective operation. Existing computer networks were examined in depth and case-histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to Communications Specialists and Managers as well as those concerned with Computer Operations and Development.

Formal Methods for Open Object-Based Distributed Systems IV - IFIP TC6/WG6.1 Fourth International Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2000), September 6-8, 2000, Stanford, California, USA (Paperback, Softcover reprint of the original 1st ed. 2000)
Scott F. Smith, Carolyn L. Talcott
R5,801 Discovery Miles 58 010 Ships in 10 - 15 working days

Formal Methods for Open Object-Based Distributed Systems IV presents the leading edge in the fields of object-oriented programming, open distributed systems, and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Papers in this volume focus on the following specific technologies:
* components;
* mobile code;
* Java(R);
* the Unified Modeling Language (UML);
* refinement of specifications;
* types and subtyping;
* temporal and probabilistic systems.
This volume comprises the proceedings of the Fourth International Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held in Stanford, California, USA, in September 2000.

Partial Reconfiguration on FPGAs - Architectures, Tools and Applications (Hardcover, 2012 ed.)
Dirk Koch
R4,523 Discovery Miles 45 230 Ships in 10 - 15 working days

This is the first book to focus on designing run-time reconfigurable systems on FPGAs, in order to gain resource and power efficiency, as well as to improve speed. Case studies in partial reconfiguration guide readers through the FPGA jungle, straight toward a working system. The discussion of partial reconfiguration is comprehensive and practical, with models introduced together with methods to implement efficiently the corresponding systems. Coverage includes concepts for partial module integration and corresponding communication architectures, floorplanning of the on-FPGA resources, physical implementation aspects starting from constraining primitive placement and routing all the way down to the bitstream required to configure the FPGA, and verification of reconfigurable systems.

Service-Oriented Computing - ICSOC 2011 Workshops - ICSOC 2011, International Workshops WESOA, NFPSLAM-SOC, and Satellite Events, Paphos, Cyprus, December 5-8, 2011. Revised Selected Papers (Paperback, 2012 ed.)
George Pallis, Mohamed Jmaiel, Anis Charfi, Sven Graupner, Yucel Karabulut, …
R1,574 Discovery Miles 15 740 Ships in 10 - 15 working days

This book constitutes the thoroughly refereed proceedings of the 2011 ICSOC Workshops consisting of 5 scientific satellite events, organized in 4 tracks: workshop track (WESOA 2011; NFPSLAM-SOC 2011), PhD symposium track, demonstration track, and industry track; held in conjunction with the 2011 International Conference on Service-Oriented Computing (ICSOC), in Paphos, Cyprus, December 2011. The 39 revised papers presented together with 2 introductory descriptions address topics such as software engineering services; the management of service level agreements; Web services and service composition; general or domain-specific challenges of service-oriented computing and its transition towards cloud computing; architecture and modeling of services; workflow management; performance analysis as well as crowdsourcing for improving service processes and for knowledge discovery.

High Performance Computational Methods for Biological Sequence Analysis (Paperback, Softcover reprint of the original 1st ed. 1996)
Tieng K. Yap, Ophir Frieder, Robert L. Martino
R4,466 Discovery Miles 44 660 Ships in 10 - 15 working days

High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
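The pairwise sequence comparison task mentioned above is classically solved with a dynamic-programming recurrence such as Needleman-Wunsch. The Python sketch below shows the standard scoring recurrence with illustrative score values; the book's contribution lies in parallelizing kernels like this (for example, cells along an anti-diagonal are mutually independent and can be computed concurrently), not in the recurrence itself.

# Standard Needleman-Wunsch global alignment score; the scores used here
# (match +1, mismatch -1, gap -1) are illustrative choices.

def nw_score(s, t, match=1, mismatch=-1, gap=-1):
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * gap                      # align prefix of s against gaps
    for j in range(1, n + 1):
        D[0][j] = j * gap                      # align prefix of t against gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # substitute
                          D[i - 1][j] + gap,       # delete
                          D[i][j - 1] + gap)       # insert
    return D[m][n]

print(nw_score("GATTACA", "GCATGCT"))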

Dynamic Reconfiguration in Real-Time Systems - Energy, Performance, and Thermal Perspectives (Hardcover, 2012)
Weixun Wang, Prabhat Mishra, Sanjay Ranka
R3,891 Discovery Miles 38 910 Ships in 10 - 15 working days

Given the widespread use of real-time multitasking systems, there are tremendous optimization opportunities if reconfigurable computing can be effectively incorporated while maintaining performance and other design constraints of typical applications. The focus of this book is to describe the dynamic reconfiguration techniques that can be safely used in real-time systems. This book provides comprehensive approaches that consider the synergistic effects of computation, communication and storage together to significantly improve overall performance, power, energy and temperature.

Fundamentals of Graphics Using MATLAB (Hardcover)
Ranjan Parekh
R2,685 Discovery Miles 26 850 Ships in 12 - 17 working days

This book introduces fundamental concepts and principles of 2D and 3D graphics and is written for undergraduate and postgraduate students of computer science, graphics, multimedia, and data science. It demonstrates the use of MATLAB (R) programming for solving problems related to graphics and discusses a variety of visualization tools to generate graphs and plots. The book covers important concepts like transformation, projection, surface generation, parametric representation, curve fitting, interpolation, vector representation, and texture mapping, all of which can be used in a wide variety of educational and research fields. Theoretical concepts are illustrated using a large number of practical examples and programming codes, which can be used to visualize and verify the results. Key Features:
* Covers fundamental concepts and principles of 2D and 3D graphics
* Demonstrates the use of MATLAB (R) programming for solving problems on graphics
* Provides MATLAB (R) codes as answers to specific numerical problems
* Provides codes in a simple copy and execute format for the novice learner
* Focuses on learning through visual representation with extensive use of graphs and plots
* Helps the reader gain in-depth knowledge about the subject matter through practical examples
* Contains review questions and practice problems with answers for self-evaluation
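As a taste of the transformation material, here is a homogeneous-coordinate 2D rotation, shown in Python/NumPy purely as an illustration (the book itself works in MATLAB (R)):

# Rotate the point (1, 0) by 90 degrees about the origin using a
# homogeneous-coordinate transformation matrix.
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous form
print(R @ p)                    # -> approximately [0, 1, 1]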

Modern Compiler Design (Hardcover, 2nd ed. 2012)
Dick Grune, Kees van Reeuwijk, Henri E. Bal, Ceriel J.H. Jacobs, Koen Langendoen
R3,672 Discovery Miles 36 720 Ships in 10 - 15 working days

"Modern Compiler Design" makes the topic of compiler design more accessible by focusing on principles and techniques of wide application. By carefully distinguishing between the essential (material that has a high chance of being useful) and the incidental (material that will be of benefit only in exceptional cases) much useful information was packed in this comprehensive volume. The student who has finished this book can expect to understand the workings of and add to a language processor for each of the modern paradigms, and be able to read the literature on how to proceed. The first provides a firm basis, the second potential for growth.
