
Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems (Paperback, 2011 ed.)
Paul Lokuciejewski, Peter Marwedel
R4,011 Discovery Miles 40 110 Ships in 18 - 22 working days

For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
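The link from WCET reduction to lower clock speeds is simple arithmetic. As an illustrative sketch (all numbers are hypothetical, not taken from the book), a task's minimum feasible clock rate scales linearly with its worst-case cycle count:

```python
# Illustrative only: how a WCET reduction translates into clock-speed headroom.
# The task parameters below are hypothetical, not taken from the book.

def min_clock_mhz(wcet_cycles: int, deadline_us: float) -> float:
    """Slowest clock (in MHz) at which a task of wcet_cycles still meets deadline_us."""
    return wcet_cycles / deadline_us  # cycles per microsecond == MHz

# Baseline task: 500k worst-case cycles, 1 ms deadline.
baseline = min_clock_mhz(wcet_cycles=500_000, deadline_us=1_000.0)

# Same task after a 35% WCET reduction (within the book's 30-45% range).
optimized = min_clock_mhz(wcet_cycles=int(500_000 * 0.65), deadline_us=1_000.0)

print(baseline, optimized)  # 500.0 325.0
```

At a 35% WCET reduction, the same deadline is met at a 35% lower clock, which is where the claimed cost and energy savings come from.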

Logic and Algebra of Specification (Paperback, Softcover reprint of the original 1st ed. 1993)
Friedrich L. Bauer, Wilfried Brauer, Helmut Schwichtenberg
R4,070 Discovery Miles 40 700 Ships in 18 - 22 working days

For some years, specification of software and hardware systems has been influenced not only by algebraic methods but also by new developments in logic. These new developments in logic are partly based on the use of algorithmic techniques in deduction and proving methods, but are also due to new theoretical advances, to a great extent stimulated by computer science, which have led to new types of logic and new logical calculi. The new techniques, methods and tools from logic, combined with algebra-based ones, offer very powerful and useful tools for the computer scientist, which may soon become practical for commercial use, where, in particular, more powerful specification tools are needed for concurrent and distributed systems. This volume contains papers based on lectures by leading researchers which were originally given at an international summer school held in Marktoberdorf in 1991. The papers aim to give a foundation for combining logic and algebra for the purposes of specification under the aspects of automated deduction, proving techniques, concurrency and logic, abstract data types and operational semantics, and constructive methods.

Distributed and Parallel Embedded Systems - IFIP WG10.3/WG10.5 International Workshop on Distributed and Parallel Embedded Systems (DIPES'98) October 5-6, 1998, Schloss Eringerfeld, Germany (Paperback, Softcover reprint of the original 1st ed. 1999)
Franz J. Rammig
R5,133 Discovery Miles 51 330 Ships in 18 - 22 working days

Embedded systems are becoming one of the major driving forces in computer science. Furthermore, it is the impact of embedded information technology that dictates the pace in most engineering domains. Nearly all technical products above a certain level of complexity are not only controlled but increasingly even dominated by their embedded computer systems. Traditionally, such embedded control systems have been implemented in a monolithic, centralized way. Recently, distributed solutions are gaining increasing importance. In this approach, the control task is carried out by a number of controllers distributed over the entire system and connected by some interconnect network, like fieldbuses. Such a distributed embedded system may consist of a few controllers up to several hundred, as in today's top-range automobiles. Distribution and parallelism in embedded systems design increase the engineering challenges and require new development methods and tools. This book is the result of the International Workshop on Distributed and Parallel Embedded Systems (DIPES'98), organized by the International Federation for Information Processing (IFIP) Working Groups 10.3 (Concurrent Systems) and 10.5 (Design and Engineering of Electronic Systems). The workshop took place in October 1998 in Schloss Eringerfeld, near Paderborn, Germany, and the resulting book reflects the most recent points of view of experts from Brazil, Finland, France, Germany, Italy, Portugal, and the USA. The book is organized in six chapters: `Formalisms for Embedded System Design': IP-based system design and various approaches to multi-language formalisms. `Synthesis from Synchronous/Asynchronous Specification': Synthesis techniques based on Message Sequence Charts (MSC), StateCharts, and Predicate/Transition Nets. `Partitioning and Load-Balancing': Application in simulation models and target systems. 
`Verification and Validation': Formal techniques for precise verification and more pragmatic approaches to validation. `Design Environments' for distributed embedded systems and their impact on the industrial state of the art. `Object Oriented Approaches': Impact of OO-techniques on distributed embedded systems. This volume will be essential reading for computer science researchers and application developers.

Computer Engineering and Technology - 16th National Conference, NCCET 2012, Shanghai, China, August 17-19, 2012, Revised Selected Papers (Paperback, 2013 ed.)
Weixia Xu, Liquan Xiao, Pingjing Lu, Jinwen Li, Chengyi Zhang
R1,404 Discovery Miles 14 040 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; design of integration circuit; I/O interconnect; and measurement, verification, and others.

New Horizons of Computational Science - Proceedings of the International Symposium on Supercomputing held in Tokyo, Japan, September 1-3, 1997 (Paperback, Softcover reprint of the original 1st ed. 2001)
Toshikazu Ebisuzaki, Junichiro Makino
R5,136 Discovery Miles 51 360 Ships in 18 - 22 working days

The International Symposium on Supercomputing - New Horizons of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60-year birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by the year 2001 with a peak speed of around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the results of the simulation of the formation of the Moon. The first talk of the second day was given by F-H. Hsu of the IBM T. J. Watson Research Center, on "Deep Blue," the special-purpose computer for chess, which, for the first time in history, won the match with the best human player, Mr. Garry Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Institute of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy. They had built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) talked on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 199

Computing with T.Node Parallel Architecture (Paperback, Softcover reprint of the original 1st ed. 1991)
D. Heidrich, J. C Grossetie
R4,006 Discovery Miles 40 060 Ships in 18 - 22 working days

Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models definite advantages can be obtained. Parallel processing is the center of the research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low cost, high performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In the second part, we review some real case applications in the field of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on T.Node.

Switching and Traffic Theory for Integrated Broadband Networks (Paperback, Softcover reprint of the original 1st ed. 1990)
Joseph Y. Hui
R5,159 Discovery Miles 51 590 Ships in 18 - 22 working days

The rapid development of optical fiber transmission technology has created the possibility for constructing digital networks that are as ubiquitous as the current voice network but which can carry video, voice, and data in massive quantities. How and when such networks will evolve, who will pay for them, and what new applications will use them is anyone's guess. There appears to be no doubt, however, that the trend in telecommunication networks is toward far greater transmission speeds and toward greater heterogeneity in the requirements of different applications. This book treats some of the central problems involved in these networks of the future. First, how does one switch data at speeds orders of magnitude faster than that of existing networks? This problem has roots in both classical switching for telephony and in switching for packet networks. There are a number of new twists here, however. The first is that the high speeds necessitate the use of highly parallel processing and place a high premium on computational simplicity. The second is that the required data speeds and allowable delays of different applications differ by many orders of magnitude. The third is that it might be desirable to support both point to point applications and also applications involving broadcast from one source to a large set of destinations.

A Code Mapping Scheme for Dataflow Software Pipelining (Paperback, Softcover reprint of the original 1st ed. 1991)
Guang R. Gao
R2,648 Discovery Miles 26 480 Ships in 18 - 22 working days

This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the Summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University, results based upon the principles in my dissertation. The terminology dataflow software pipelining has been consistently used since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show the optimal balancing problem of acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
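The balancing idea described above can be sketched on a toy example. The following is an illustrative heuristic, not the book's linear programming or network flow formulation: on a hypothetical acyclic dataflow graph, insert FIFO delay buffers so that every path from the input to a node has equal length, allowing the graph to run fully pipelined.

```python
# Toy sketch of dataflow-graph balancing (illustrative; the graph and the
# depth-equalizing rule are simplifications, not the book's LP formulation).

graph = {            # hypothetical acyclic dataflow graph: node -> predecessors
    "in": [],
    "a": ["in"],
    "b": ["in"],
    "c": ["a"],
    "out": ["b", "c"],
}

def depth(node: str) -> int:
    """Length of the longest path from the input to this node."""
    preds = graph[node]
    return 0 if not preds else 1 + max(depth(p) for p in preds)

# An edge (u, v) needs depth(v) - depth(u) - 1 delay buffers so that tokens
# travelling along shorter paths arrive in step with the longest path.
buffers = {(u, v): depth(v) - depth(u) - 1
           for v, preds in graph.items() for u in preds}

print(buffers[("b", "out")])  # the path through 'b' is one stage shorter -> 1
print(buffers[("c", "out")])  # the path through 'c' is already longest  -> 0
```

Optimality here means inserting the minimum total buffering, which is where the linear programming and network flow machinery of the book comes in.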

Logic of Domains (Paperback, Softcover reprint of the original 1st ed. 1991)
G. Zhang
R2,648 Discovery Miles 26 480 Ships in 18 - 22 working days

This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. Part I, SFP Domains, opens with Chapter 1, Introduction, which provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented in this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter. Section 1.1, Domain Theory: programming languages are languages with which to perform computation.

Business Component-Based Software Engineering (Paperback, Softcover reprint of the original 1st ed. 2003)
Franck Barbier
R2,650 Discovery Miles 26 500 Ships in 18 - 22 working days

Business Component-Based Software Engineering, an edited volume, aims to complement other reputable books on CBSE by stressing how components are built for large-scale applications, within dedicated development processes and for easy and direct combination. This book emphasizes these three facets and offers a complete overview of some recent progress. Projects and works explained herein will prompt graduate students, academics, software engineers, project managers and developers to adopt and to apply new component development methods gained from and validated by the authors. The authors of Business Component-Based Software Engineering are academics and professionals, experts in the field, who introduce the state of the art in CBSE from their shared experience of working on the same projects. Business Component-Based Software Engineering is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science and Engineering.

Workload Characterization of Emerging Computer Applications (Paperback, Softcover reprint of the original 1st ed. 2001)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R5,130 Discovery Miles 51 300 Ships in 18 - 22 working days

The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECJVM98 benchmarks is characterized by Rychlik et al. in Chapter 2. SPECJVM98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC2000 CPU benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising on accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umnl. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both temporal and spatial locality of programs. In Chapter 6, Thornock et al.
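As a hypothetical illustration of the kind of trace-based measurement such characterization studies build on (not the locality-surface construction itself), reuse distance counts the distinct addresses touched between two successive uses of the same address; small distances indicate strong temporal locality:

```python
# Illustrative sketch of reuse-distance measurement over a memory trace.
# The trace below is invented; real studies use traces from benchmark runs.

def reuse_distances(trace):
    """For each repeated access, count distinct addresses seen since its last use."""
    last_seen = {}   # address -> index of its most recent access
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            window = trace[last_seen[addr] + 1 : i]
            distances.append(len(set(window)))  # distinct intervening addresses
        last_seen[addr] = i
    return distances

print(reuse_distances(["A", "B", "C", "A", "B"]))  # [2, 2]
```

A histogram of these distances over a full trace is one standard way to quantify a workload's temporal locality.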

Computational Intelligence: Soft Computing and Fuzzy-Neuro Integration with Applications (Paperback, Softcover reprint of the original 1st ed. 1998)
Okyay Kaynak, Lotfi A. Zadeh, Burhan Turksen, Imre J. Rudas
R4,082 Discovery Miles 40 820 Ships in 18 - 22 working days

Soft computing is a consortium of computing methodologies that provide a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.

Cooperative Internet Computing (Paperback, Softcover reprint of the original 1st ed. 2003)
Alvin T. S. Chan, Stephen Chan, Hong Va Leong, Vincent Ng
R4,005 Discovery Miles 40 050 Ships in 18 - 22 working days

The Second International Workshop on Cooperative Internet Computing (CIC2002) brought together researchers, academics, and industry practitioners who are involved and interested in the development of advanced and emerging cooperative computing technologies. Cooperative computing is an important computing paradigm to enable different parties to work together towards a predefined non-trivial goal. It encompasses important technological areas like computer supported cooperative work, workflow, computer assisted design and concurrent programming. As technologies continue to advance and evolve, there is an increasing need to research and develop new classes of middleware and applications to leverage the combined benefits of the Internet and the web to provide users and programmers with a highly interactive and robust cooperative computing environment. It is the aim of this forum to promote close interactions and exchange of ideas among researchers, academics and practitioners on state-of-the-art research in all of these exciting areas. We have partnered with Kluwer Academic Press this year to bring to you a book compilation of the papers that were presented at the CIC2002 workshop. The importance of the research area is reflected both in the quality and quantity of the submitted papers, where each paper was reviewed by at least three PC members. As a result, we were able to accept only 14 papers for full presentation at the workshop, while having to reject several excellent papers due to the limitations of the program schedule.

High Performance Computing Systems and Applications (Paperback, Softcover reprint of the original 1st ed. 2003)
Robert D. Kent, Todd W. Sands
R4,032 Discovery Miles 40 320 Ships in 18 - 22 working days

High Performance Computing Systems and Applications contains fully refereed papers from the 15th Annual Symposium on High Performance Computing. These papers cover both fundamental and applied topics in HPC: parallel algorithms, distributed systems and architectures, distributed memory and performance, high level applications, tools and solvers, numerical methods and simulation, advanced computing systems, and the emerging area of computational grids. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.

Synchronization in Real-Time Systems - A Priority Inheritance Approach (Paperback, Softcover reprint of the original 1st ed. 1991)
Ragunathan Rajkumar
R2,627 Discovery Miles 26 270 Ships in 18 - 22 working days

Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, land and underwater, have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multi-media applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that these timing requirements are guaranteed to be met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource-sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
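The priority inheritance approach named in the title can be sketched minimally. This is illustrative only: the class and method names are invented, and a real protocol must also handle nested locks and chained blocking. The core idea is that a task blocking on a lock donates its priority to the lock's current holder, bounding priority inversion:

```python
# Minimal sketch of priority inheritance (hypothetical API; higher number =
# higher priority). Not the full protocol from the book, which also covers
# priority ceilings, chained blocking, and schedulability analysis.

class Task:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.base = priority       # assigned (base) priority
        self.priority = priority   # effective priority, may be inherited

class PILock:
    def __init__(self):
        self.holder = None

    def acquire(self, task: Task) -> bool:
        """Return True if the lock was taken; on contention, boost the holder."""
        if self.holder is None:
            self.holder = task
            return True
        # Blocked: the holder inherits the blocked task's priority if higher.
        self.holder.priority = max(self.holder.priority, task.priority)
        return False

    def release(self):
        self.holder.priority = self.holder.base  # revert to base priority
        self.holder = None

low, high = Task("low", 1), Task("high", 10)
lock = PILock()
lock.acquire(low)
lock.acquire(high)       # high-priority task blocks; low inherits priority 10
print(low.priority)      # 10
lock.release()
print(low.priority)      # 1
```

While boosted, the low-priority holder cannot be preempted by medium-priority tasks, so the high-priority task's blocking time is bounded by the critical section rather than being unbounded.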

Performance Analysis and Grid Computing - Selected Articles from the Workshop on Performance Analysis and Distributed Computing August 19-23, 2002, Dagstuhl, Germany (Paperback, 2004 ed.)
Vladimir Getov, Michael Gerndt, Adolfy Hoisie, Allen Malony, Barton Miller
R4,015 Discovery Miles 40 150 Ships in 18 - 22 working days

Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will not only use individual parallel systems but a large set of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis techniques and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality-of-service (QoS), heterogeneity, and middleware systems, to mention only a few.

Virtual Computing - Concept, Design, and Evaluation (Paperback, Softcover reprint of the original 1st ed. 2001)
Dongmin Kim, Salim Hariri
R2,614 Discovery Miles 26 140 Ships in 18 - 22 working days

The evolution of modern computers began more than 50 years ago and has been driven to a large extent by rapid advances in electronic technology during that period. The first computers ran one application (user) at a time. Without the benefit of operating systems or compilers, the application programmers were responsible for managing all aspects of the hardware. The introduction of compilers allowed programmers to express algorithms in abstract terms without being concerned with the bit-level details of their implementation. Time-sharing operating systems took computing systems one step further and allowed several users and/or applications to time-share the computing services of computers. With the advances of networks and software tools, users and applications were able to time-share the logical and physical services that are geographically dispersed across one or more networks. The Virtual Computing (VC) concept aims at providing ubiquitous open computing services in an analogous way to the services offered by telephone and electrical (utility) companies. The VC environment should be dynamically set up to meet the requirements of a single user and/or application. The design and development of dynamically programmable virtual computing environments is a challenging research problem. However, the recent advances in processing and network technology and software tools have successfully solved many of the obstacles facing the wide deployment of virtual computing environments, as will be outlined next.

Parallel Computational Fluid Dynamics - 25th International Conference, ParCFD 2013, Changsha, China, May 20-24, 2013. Revised Selected Papers (Paperback, 2014 ed.)
Kenli Li, Zheng Xiao, Yan Wang, Jiayi Du, Keqin Li
R4,266 Discovery Miles 42 660 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 25th International Conference on Parallel Computational Fluid Dynamics, ParCFD 2013, held in Changsha, China, in May 2013. The 35 revised full papers presented were carefully reviewed and selected from more than 240 submissions. The papers address issues such as parallel algorithms, developments in software tools and environments, unstructured adaptive mesh applications, industrial applications, atmospheric and oceanic global simulation, interdisciplinary applications and evaluation of computer architectures and software environments.

Memory Issues in Embedded Systems-on-Chip - Optimizations and Exploration (Paperback, Softcover reprint of the original 1st ed. 1999)
Preeti Ranjan Panda, Nikil D. Dutt, Alexandru Nicolau
R2,631 Discovery Miles 26 310 Ships in 18 - 22 working days

Memory Issues in Embedded Systems-On-Chip: Optimizations and Explorations is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional microcontroller-centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since Memory Issues in Embedded Systems-on-Chip: Optimization and Explorations illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.

Cooperating Heterogeneous Systems (Paperback, Softcover reprint of the original 1st ed. 1995): David G. Schwartz Cooperating Heterogeneous Systems (Paperback, Softcover reprint of the original 1st ed. 1995)
David G. Schwartz
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

Cooperating Heterogeneous Systems provides an in-depth introduction to the issues and techniques surrounding the integration and control of diverse and independent software components. Organizations increasingly rely upon diverse computer systems to perform a variety of knowledge-based tasks. This presents technical issues of interoperability and integration, as well as philosophical issues of how cooperation and interaction between computational entities are to be realized. Cooperating systems are systems that work together towards a common end. The concepts of cooperation must be realized in technically sound system architectures with a uniform meta-layer between knowledge sources and the rest of the system. This layer consists of a family of interpreters, one for each knowledge source, together with meta-knowledge. A system architecture to integrate and control diverse knowledge sources is presented. The architecture is based on the meta-level properties of the logic programming language Prolog. An implementation of the architecture is described: a Framework for Logic Programming Systems with Distributed Execution (FLiPSiDE). Knowledge-based systems play an important role in any up-to-date arsenal of decision support tools. The tremendous growth of computer communications infrastructure has made distributed computing a viable option, and often a necessity, in geographically distributed organizations. It has become clear that to take knowledge-based systems to their next useful level, it is necessary to get independent knowledge-based systems to work together, much as we put together ad hoc work groups in our organizations to tackle complex problems. The book is for scientists and software engineers who have experience in knowledge-based systems and/or logic programming and seek a hands-on introduction to cooperating systems. Researchers investigating autonomous agents, distributed computation, and cooperating systems will find fresh ideas and new perspectives on well-established approaches to control, organization, and cooperation.

Computer Architecture: A Minimalist Perspective (Paperback, Softcover reprint of the original 1st ed. 2003): William F.... Computer Architecture: A Minimalist Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
William F. Gilreath, Phillip A Laplante
R3,997 Discovery Miles 39 970 Ships in 18 - 22 working days

The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized from it by composition. This approach is the complete opposite of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold, new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. Features include: a comprehensive study of computer architecture using computability theory as a base; a fresh perspective on computer architecture not found in any other text; coverage of the history, theory, and practice of computer architecture from a minimalist perspective; a complete implementation of a one instruction computer; and exercises and programming assignments. Computer Architecture: A Minimalist Perspective is designed to meet the needs of a professional audience composed of researchers, computer hardware engineers, software engineers, computational theorists, and systems engineers. The book is also intended for upper-division undergraduate students and early graduate students studying computer architecture or embedded systems. It is an excellent text for use as a supplement or alternative in traditional computer architecture courses, or in courses entitled Special Topics in Computer Architecture.
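The idea of synthesizing all operations from a single instruction can be sketched with the classic subleq (subtract and branch if less than or equal to zero) instruction. This is an illustrative assumption, not the book's own machine: it shows how even addition must be composed from the one instruction.

```python
def subleq(mem):
    """Run a subleq one-instruction program until the branch target goes negative."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                    # the single instruction: subtract...
        pc = c if mem[b] <= 0 else pc + 3   # ...and branch if the result is <= 0
    return mem

# Hypothetical demo: synthesize ADD (B += A) from subleq alone, via scratch cell Z.
A, B, Z = 9, 10, 11
mem = [A, Z, 3,    # Z -= A        -> Z = -A
       Z, B, 6,    # B -= Z        -> B = B + A
       Z, Z, -1,   # Z -= Z; halt  (branch target -1 stops the machine)
       3, 4, 0]    # data cells: A = 3, B = 4, Z = 0
subleq(mem)
print(mem[B])  # -> 7
```

Three subleq triples thus emulate one ADD; longer compositions yield copies, jumps, and the rest of a conventional instruction set.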

Multimedia Multiprocessor Systems - Analysis, Design and Management (Paperback, 2010 ed.): Akash Kumar, Henk Corporaal, Bart... Multimedia Multiprocessor Systems - Analysis, Design and Management (Paperback, 2010 ed.)
Akash Kumar, Henk Corporaal, Bart Mesman, Yajun Ha
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

Modern multimedia systems are becoming increasingly multiprocessor and heterogeneous to meet the high-performance and low-power demands placed on them by the large number of applications they run. The concurrent execution of these applications causes interference and unpredictability in the performance of these systems. In Multimedia Multiprocessor Systems, an analysis mechanism is presented to accurately predict the performance of multiple applications executing concurrently. With high consumer demand, time-to-market has become significantly shorter. To cope with the complexity of designing such systems, an automated design flow is needed that can generate systems from a high-level architectural description, reducing both errors and design time. Such a design methodology is presented for multiple use-cases -- combinations of active applications. A resource manager is also presented to manage the various resources in the system and to achieve the goals of performance prediction, admission control and budget enforcement.

Multi-Threaded Object-Oriented MPI-Based Message Passing Interface - The ARCH Library (Paperback, Softcover reprint of the... Multi-Threaded Object-Oriented MPI-Based Message Passing Interface - The ARCH Library (Paperback, Softcover reprint of the original 1st ed. 1998)
Jean-Marc Adamo
R3,987 Discovery Miles 39 870 Ships in 18 - 22 working days

Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes in the object-oriented programming style. ARCH is written in C++. The book describes the built-in classes and illustrates their use through several template application cases in several fields of interest: distributed algorithms (global completion detection, distributed process serialization), parallel combinatorial optimization (the A* procedure), and parallel image processing (segmentation by region growing). It shows how new application-level distributed data types - such as a distributed tree and a distributed graph - can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration are available via the Internet; the material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH runs on Unix/Linux as well as Windows NT-based platforms; current installations include the IBM-SP2, the CRAY-T3E, the Intel Paragon, and PC networks under Linux or Windows NT. The book is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has used ARCH for several years as a medium to teach parallel and network programming; teachers can employ the library for the same purpose, while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. The book is suitable as a secondary text for a graduate-level course on data communications and networks, programming languages, algorithms and computational theory, or distributed computing, and as a reference for researchers and practitioners in industry.

Internetworking and Computing Over Satellite Networks (Paperback, Softcover reprint of the original 1st ed. 2003): Yongguang... Internetworking and Computing Over Satellite Networks (Paperback, Softcover reprint of the original 1st ed. 2003)
Yongguang Zhang
R2,651 Discovery Miles 26 510 Ships in 18 - 22 working days

The emphasis of this text is on data networking, internetworking and distributed computing issues. The material surveys recent work in the area of satellite networks, introduces certain state-of-the-art technologies, and presents recent research results in these areas.

Localized Quality of Service Routing for the Internet (Paperback, Softcover reprint of the original 1st ed. 2003): Srihari... Localized Quality of Service Routing for the Internet (Paperback, Softcover reprint of the original 1st ed. 2003)
Srihari Nelakuditi, Zhi-Li Zhang
R1,363 Discovery Miles 13 630 Ships in 18 - 22 working days

Under Quality of Service (QoS) routing, paths for flows are selected based on knowledge of resource availability at network nodes and the QoS requirements of flows. Proposed QoS routing schemes differ in the way they gather information about the network state and select paths based on this information. We broadly categorize these schemes into best-path routing and proportional routing. Best-path routing schemes gather global network state information and always select the best path for an incoming flow based on this global view. Proportional routing schemes, on the other hand, apportion incoming flows among a set of candidate paths. We have shown that it is possible to compute near-optimal proportions using only locally collected information. Furthermore, a few good candidate paths can be selected using infrequently exchanged global information, and thus with minimal communication overhead. Localized Quality of Service Routing for the Internet describes these schemes in detail, demonstrating that proportional routing schemes can achieve higher throughput with lower overhead than best-path routing schemes. It first addresses the issue of finding near-optimal proportions for a given set of candidate paths based on locally collected flow statistics. The book then looks into the selection of a few good candidate paths based on infrequently exchanged global information. The final part describes extensions to the proportional routing approach to provide hierarchical routing across multiple areas in a large network. Localized Quality of Service Routing for the Internet is designed for researchers and practitioners in industry, and is suitable for graduate-level students in computer science as a secondary text.
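The flavor of localized proportional routing can be sketched as follows. This is a simplified illustration under assumed update rules, not the authors' exact algorithm: flows are spread over candidate paths by weighted random selection, and the proportions adapt using only locally observed blocking, with no global state exchange.

```python
import random

class ProportionalRouter:
    """Apportion incoming flows among candidate paths using only local feedback."""

    def __init__(self, paths, step=0.05):
        self.paths = list(paths)
        self.prop = {p: 1.0 / len(paths) for p in paths}  # start with uniform proportions
        self.step = step

    def pick(self, rng):
        # Weighted random selection according to the current proportions.
        r, acc = rng.random(), 0.0
        for p in self.paths:
            acc += self.prop[p]
            if r <= acc:
                return p
        return self.paths[-1]

    def feedback(self, path, blocked):
        # Local adaptation: shift traffic away from a path that blocked a flow,
        # then renormalize so the proportions still sum to one.
        if blocked:
            self.prop[path] = max(self.prop[path] - self.step, 0.01)
        total = sum(self.prop.values())
        for p in self.paths:
            self.prop[p] /= total

rng = random.Random(1)
router = ProportionalRouter(["path-A", "path-B"])
for _ in range(200):
    p = router.pick(rng)
    router.feedback(p, blocked=(p == "path-B"))  # pretend path-B keeps blocking flows
# Traffic drifts toward path-A without any global link-state updates.
```

The point of the sketch is the locality: each node adapts its proportions from flow statistics it observes itself, which is why such schemes incur far less communication overhead than best-path schemes that flood global state.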
