
Bounded Queries in Recursion Theory (Hardcover, 1999 ed.)
William Levine, Georgia Martin
R2,851 Discovery Miles 28 510 Ships in 18 - 22 working days

One of the major concerns of theoretical computer science is the classification of problems in terms of how hard they are. The natural measure of difficulty of a function is the amount of time needed to compute it (as a function of the length of the input). Other resources, such as space, have also been considered. In recursion theory, by contrast, a function is considered to be easy to compute if there exists some algorithm that computes it. We wish to classify functions that are hard, i.e., not computable, in a quantitative way. We cannot use time or space, since the functions are not even computable. We cannot use Turing degree, since this notion is not quantitative. Hence we need a new notion of complexity, much like time or space, that is quantitative and yet in some way captures the level of difficulty (such as the Turing degree) of a function.

Building Scalable Network Services - Theory and Practice (Hardcover, 2004 ed.)
Cheng Jin, Sugih Jamin, Danny Raz, Yuval Shavitt
R2,735 Discovery Miles 27 350 Ships in 18 - 22 working days

Building Scalable Network Services: Theory and Practice is about building scalable network services on the Internet or in a network service provider's network. The focus is on network services that are provided through the use of a set of servers. The authors present a tiered scalable network service model and evaluate various services within this architecture. The service model simplifies design tasks by implementing only the most basic functionalities at lower tiers, where the need for scalability dominates functionality.
The book includes a number of theoretical results that are practical and applicable to real networks, such as building network-wide measurement, monitoring services, and strategies for building better P2P networks. Various issues in scalable system design and placement algorithms for service nodes are discussed. Using existing network services as well as potentially new but useful services as examples, the authors formalize the problem of placing service nodes and provide practical solutions for them.

ECSCW 2001 (Hardcover, 2001 ed.)
Wolfgang Prinz, Matthias Jarke, Yvonne Rogers, K. Schmidt, Volker Wulf
R2,884 Discovery Miles 28 840 Ships in 18 - 22 working days

Schmidt and Bannon (1992) introduced the concept of common information space by contrasting it with technical conceptions of shared information: Cooperative work is not facilitated simply by the provisioning of a shared database, but rather requires the active construction by the participants of a common information space where the meanings of the shared objects are debated and resolved, at least locally and temporarily. (Schmidt and Bannon, p. 22) A CIS, then, encompasses not only the information but also the practices by which actors establish its meaning for their collective work. These negotiated understandings of the information are as important as the availability of the information itself: The actors must attempt to jointly construct a common information space which goes beyond their individual personal information spaces. . . . The common information space is negotiated and established by the actors involved. (Schmidt and Bannon, p. 28) This is not to suggest that actors' understandings of the information are identical; they are simply "common" enough to coordinate the work. People understand how the information is relevant for their own work. Therefore, individuals engaged in different activities will have different perspectives on the same information. The work of maintaining the common information space is the work that it takes to balance and accommodate these different perspectives. A "bug" report in software development is a simple example. Software developers and quality assurance personnel have access to the same bug report information. However, access to information is not sufficient to coordinate their work.

OmeGA - A Competent Genetic Algorithm for Solving Permutation and Scheduling Problems (Hardcover, 2002 ed.)
Dimitri Knjazew
R2,745 Discovery Miles 27 450 Ships in 18 - 22 working days

OmeGA: A Competent Genetic Algorithm for Solving Permutation and Scheduling Problems addresses two increasingly important areas in GA implementation and practice. OmeGA, or the ordering messy genetic algorithm, combines some of the latest in competent GA technology to solve scheduling and other permutation problems. Competent GAs are those designed for principled solutions of hard problems, quickly, reliably, and accurately. Permutation and scheduling problems are difficult combinatorial optimization problems with commercial import across a variety of industries.

This book approaches both subjects systematically and clearly. The first part of the book presents the clearest description of messy GAs written to date along with an innovative adaptation of the method to ordering problems. The second part of the book investigates the algorithm on boundedly difficult test functions, showing principled scale up as problems become harder and longer. Finally, the book applies the algorithm to a test function drawn from the literature of scheduling.
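As a concrete illustration of how permutation problems can be attacked with real-valued genetic encodings, the sketch below shows the random-key idea commonly used in ordering genetic algorithms such as OmeGA. The decoding step and the toy fitness function are generic assumptions for illustration, not the book's exact operator set:

```python
import random

def decode_random_keys(keys):
    """Decode a vector of real-valued 'random keys' into a permutation:
    positions are ordered by the rank of their key values, so any key
    vector decodes to a valid permutation (no repair step needed)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def fitness(keys):
    """Toy objective: count how many items land at their own index
    (i.e., how close the decoded permutation is to the identity)."""
    perm = decode_random_keys(keys)
    return sum(1 for pos, item in enumerate(perm) if pos == item)

random.seed(0)
# A small random population of key vectors; crossover and mutation on
# keys would stay in this representation and always decode legally.
pop = [[random.random() for _ in range(8)] for _ in range(20)]
best = max(pop, key=fitness)
print(decode_random_keys(best))  # always a valid permutation of 0..7
```

The appeal of this encoding is that standard real-valued crossover and mutation operators can be applied to the keys while every offspring still decodes to a feasible permutation.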

Intelligent Environments - Methods, Algorithms and Applications (Hardcover, 2009 ed.)
Dorothy Monekosso, Yoshinori Kuno, Paolo Remagnino
R2,683 Discovery Miles 26 830 Ships in 18 - 22 working days

Relatively new research fields such as ambient intelligence, intelligent environments, ubiquitous computing, and wearable devices have emerged in recent years. These fields are related by a common theme: making use of novel technologies to enhance user experience by providing user-centric intelligent environments, removing computers from the desktop and making computing available anywhere and anytime. It must be said that the concept of intelligent environments is not new and began with home automation. The choice of name for the field varies somewhat from continent to continent in the English-speaking world. In general, intelligent space is synonymous with intelligent environments or smart spaces, of which smart homes is a subfield. In this collection, the terms intelligent environments and ambient intelligence are used interchangeably throughout. Such environments are made possible by permeating living spaces with intelligent technology that enhances quality of life. In particular, advances in technologies such as miniaturized sensors, advances in communication and networking technology including high-bandwidth wireless devices, and the reduction in power consumption have made possible the concept of intelligent environments. Environments such as a home, an office, a shopping mall, and a travel port utilize data provided by users to adapt the environment to meet the user's needs and improve human-machine interactions. The user information is gathered either via wearable devices or by pervasive sensors or a combination of both. Intelligent environments research brings together a number of fields from computer science, such as artificial intelligence, computer vision, machine learning, and robotics, as well as engineering and architecture.

Emergence in Complex, Cognitive, Social, and Biological Systems (Hardcover, 2002 ed.)
Gianfranco Minati, Eliano Pessa
R4,333 Discovery Miles 43 330 Ships in 18 - 22 working days

The systems movement is made up of many systems societies as well as of disciplinary researchers and research efforts, explicitly or implicitly focusing on the subject of systemics, officially introduced in the scientific community fifty years ago. Much research in different fields has been, and continues to be, a source of new ideas and challenges for the systems community. In this regard, a very important topic is that of EMERGENCE. Among the goals for current and future systems scientists is certainly the definition of a general theory of emergence and the building of a general model of it. The Italian Systems Society, Associazione Italiana per la Ricerca sui Sistemi (AIRS), decided to devote its Second National Conference to this subject. Because AIRS is organized in the form of a network of researchers, institutions, scholars, professionals, and teachers, its research activity has an impact at different levels and in different ways. Thus the topic of emergence was not only the focus of this conference but is actually the main subject of many AIRS activities.

Visual Explorations in Finance - with Self-Organizing Maps (Hardcover, 1998 ed.)
Guido Deboeck, Teuvo Kohonen
R1,452 Discovery Miles 14 520 Ships in 18 - 22 working days

Self-organizing maps (SOM) have proven to be of significant economic value in finance, economics, and marketing applications. As a result, this area is rapidly becoming a non-academic technology. This book looks at near state-of-the-art SOM applications in the above areas, and is a multi-authored volume, edited by Guido Deboeck, a leading exponent in the use of computational methods in financial and economic forecasting, and by the originator of SOM, Teuvo Kohonen. The book contains chapters on applications of unsupervised neural networks using Kohonen's self-organizing map approach.
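The core of Kohonen's SOM training loop fits in a few lines: find the best-matching unit for a sample, then pull that unit and its grid neighbors toward the sample. The grid size, decay schedules, and two-cluster toy data below are illustrative assumptions, not parameters taken from the book:

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny 2-D self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood radius
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose weight vector is closest to x.
        bmu = np.unravel_index(
            np.argmin(((weights - x) ** 2).sum(-1)), (grid_h, grid_w)
        )
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma**2))          # Gaussian neighborhood kernel
        weights += lr * h[..., None] * (x - weights)
    return weights

# Two well-separated clusters: after training, different map units
# should specialize toward one cluster or the other.
data = np.vstack([np.zeros((50, 2)), np.ones((50, 2))])
w = train_som(data)
```

In the finance applications the book surveys, the same mechanism maps high-dimensional indicator vectors onto a 2-D grid so that similar cases land on nearby units.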

Software Project Management for Distributed Computing - Life-Cycle Methods for Developing Scalable and Reliable Tools (Hardcover, 1st ed. 2017)
Zaigham Mahmood
R2,713 Discovery Miles 27 130 Ships in 18 - 22 working days

This unique volume explores cutting-edge management approaches to developing complex software that is efficient, scalable, sustainable, and suitable for distributed environments. Practical insights are offered by an international selection of pre-eminent authorities, including case studies, best practices, and balanced corporate analyses. Emphasis is placed on the use of the latest software technologies and frameworks for life-cycle methods, including the design, implementation and testing stages of software development. Topics and features:
  • Reviews approaches for reusability, cost and time estimation, and for functional size measurement of distributed software applications
  • Discusses the core characteristics of a large-scale defense system, and the design of software project management (SPM) as a service
  • Introduces the 3PR framework, research on crowdsourcing software development, and an innovative approach to modeling large-scale multi-agent software systems
  • Examines a system architecture for ambient assisted living, and an approach to cloud migration and management assessment
  • Describes a software error proneness mechanism, a novel Scrum process for use in the defense domain, and an ontology annotation for SPM in distributed environments
  • Investigates the benefits of agile project management for higher education institutions, and SPM that combines software and data engineering
This important text/reference is essential reading for project managers and software engineers involved in developing software for distributed computing environments. Students and researchers interested in SPM technologies and frameworks will also find the work to be an invaluable resource. Prof. Zaigham Mahmood is a Senior Technology Consultant at Debesis Education UK and an Associate Lecturer (Research) at the University of Derby, UK. He also holds positions as Foreign Professor at NUST and IIU in Islamabad, Pakistan, and Professor Extraordinaire at the North West University Potchefstroom, South Africa.

Impact of Information Technology - From practice to curriculum (Hardcover, 1996 ed.)
Yaacov Katz, Daniel Millin, Baruch Offir
R4,088 Discovery Miles 40 880 Ships in 18 - 22 working days

The aim of this book is to present readers with state-of-the-art options which allow pupils as well as teachers to cope with the social impacts and implications of information technology and the rapid technological developments of the past 25 years. The book explores the following key areas: the adaptation of curricula to the social needs of society; the influences of multimedia on social interaction; morals, values and ethics in the information technology curriculum; social and pedagogical variables which promote information technology use; and social implications of distance learning through the medium of information technology. This volume contains the selected proceedings of the TC3/TC9 International Working Conference on the Impact of Information Technology, sponsored by the International Federation for Information Processing and held in Israel in March 1996.

Relaxation Techniques for the Simulation of VLSI Circuits (Hardcover, 1987 ed.)
Jacob K. White, Alberto L. Sangiovanni-Vincentelli
R2,766 Discovery Miles 27 660 Ships in 18 - 22 working days

Circuit simulation has been a topic of great interest to the integrated circuit design community for many years. It is a difficult, and interesting, problem because circuit simulators are very heavily used, consuming thousands of computer hours every year, and therefore the algorithms must be very efficient. In addition, circuit simulators are heavily relied upon, with millions of dollars being gambled on their accuracy, and therefore the algorithms must be very robust. At the University of California, Berkeley, a great deal of research has been devoted to the study of both the numerical properties and the efficient implementation of circuit simulation algorithms. Research efforts have led to several programs, starting with CANCER in the 1960's and the enormously successful SPICE program in the early 1970's, to MOTIS-C, SPLICE, and RELAX in the late 1970's, and finally to SPLICE2 and RELAX2 in the 1980's. Our primary goal in writing this book was to present some of the results of our current research on the application of relaxation algorithms to circuit simulation. As we began, we realized that a large body of mathematical and experimental results had been amassed over the past twenty years by graduate students, professors, and industry researchers working on circuit simulation. It became a secondary goal to try to find an organization of this mass of material that was mathematically rigorous, had practical relevance, and still retained the natural intuitive simplicity of the circuit simulation subject.

Novel Developments in Granular Computing - Applications for Advanced Human Reasoning and Soft Computation (Hardcover, New)
JingTao Yao
R4,671 Discovery Miles 46 710 Ships in 18 - 22 working days

One of the fastest growing areas in computer science, granular computing covers theories, methodologies, techniques, and tools that make use of granules in complex problem solving and reasoning. Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation analyzes developments and current trends of granular computing, reviewing the most influential research and predicting future trends. This book not only presents a comprehensive summary of existing practices, but also enhances understanding of human reasoning.

The In-System Configuration Handbook - A Designer's Guide to ISC (Hardcover, 2004 ed.)
Neil G. Jacobson
R2,670 Discovery Miles 26 700 Ships in 18 - 22 working days

This handbook provides design considerations and rules-of-thumb to ensure the functionality you want will work. It brings together all the information needed by systems designers to develop applications that include configurability, from the simplest implementations to the most complicated.

Architecture of Systems Problem Solving (Hardcover, 2nd ed. 2003)
George J. Klir, Doug Elias
R2,701 Discovery Miles 27 010 Ships in 18 - 22 working days

One criterion for classifying books is whether they are written for a single purpose or for multiple purposes. This book belongs to the category of multipurpose books, but one of its roles is predominant: it is primarily a textbook. As such, it can be used for a variety of courses at the first-year graduate or upper-division undergraduate level. A common characteristic of these courses is that they cover fundamental systems concepts, major categories of systems problems, and some selected methods for dealing with these problems at a rather general level. A unique feature of the book is that the concepts, problems, and methods are introduced in the context of an architectural formulation of an expert system, referred to as the general systems problem solver or GSPS, whose aim is to provide users of all kinds with computer-based systems knowledge and methodology. The GSPS architecture, which is developed throughout the book, facilitates a framework that is conducive to a coherent, comprehensive, and pragmatic coverage of systems fundamentals: concepts, problems, and methods. A course that covers systems fundamentals is now offered not only in systems science, information science, or systems engineering programs, but in many programs in other disciplines as well. Although the level of coverage for systems science or engineering students is surely different from that used for students in other disciplines, this book is designed to serve both of these needs.

A Priori Wire Length Estimates for Digital Design (Hardcover, 2001 ed.)
Dirk Stroobandt
R4,183 Discovery Miles 41 830 Ships in 18 - 22 working days

The design of digital (computer) systems requires several design phases: from the behavioural design, over the logical structural design, to the physical design, where the logical structure is implemented in the physical structure of the system (the chip). Due to the ever increasing demands on computer system performance, the physical design phase has become one of the most complex design steps in the entire process. The major goal of this book is to develop a priori wire length estimation methods that can help the designer in finding a good lay-out of a circuit in fewer iterations of physical design steps and that are useful to compare different physical architectures. For modelling digital circuits, the interconnection complexity is of major importance. It can be described by the so-called Rent's rule and the Rent exponent. A Priori Wire Length Estimates for Digital Design will provide the reader with more insight into this rule and clearly outlines when and where the rule can be used and when and where it fails. Also, for the first time, a comprehensive model for the partitioning behaviour of multi-terminal nets is developed. This leads to a new parameter for circuits that describes the distribution of net degrees over the nets in the circuit. This multi-terminal net model is used throughout the book for the wire length estimates, but it also induces a method for the generation of synthetic benchmark circuits that has major advantages over existing benchmark generators. In the domain of wire length estimations, the most important contributions of this work are (i) a new model for placement optimization in a physical (computer) architecture and (ii) the inclusion of the multi-terminal net model in the wire length estimates. The combination of the placement optimization model with Donath's model for a hierarchical partitioning and placement results in more accurate wire length estimates.
The multi-terminal net model allows accurate assessments of the impact of multi-terminal nets on wire length estimates. We distinguish between delay-related applications, for which the length of source-sink pairs is important, and routing-related applications, for which the entire (Steiner) length of the multi-terminal net has to be taken into account. The wire length models are further extended by taking into account the interconnections between internal components and the chip boundary. The application of the models to three-dimensional systems broadens the scope to more exotic architectures and to opto-electronic design techniques. We focus on anisotropic three-dimensional systems and propose a way to estimate wire lengths for opto-electronic systems. The wire length estimates can be used for prediction of circuit characteristics, for improving placement and routing tools in Computer-Aided Design and for evaluating new computer architectures. All new models are validated with experiments on benchmark circuits.
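Rent's rule, central to the estimates described above, has the simple empirical form T = t * G^p: the number of external terminals T of a logic block grows sublinearly with its gate count G. A minimal sketch, where the coefficient and exponent values are illustrative assumptions rather than the book's fitted parameters:

```python
def rent_terminals(gates, t=3.5, p=0.6):
    """Rent's rule: expected number of external terminals T for a block
    of `gates` logic gates, T = t * gates**p, with Rent coefficient t
    (average terminals per gate) and Rent exponent p (0 < p < 1)."""
    return t * gates**p

# Sublinear scaling: doubling the block size multiplies the terminal
# count by 2**p rather than by 2.
ratio = rent_terminals(2000) / rent_terminals(1000)
print(round(ratio, 3))  # 2 ** 0.6, about 1.516
```

It is this sublinear exponent p that hierarchical wire length models such as Donath's exploit: at each level of a recursive partitioning, the number of wires crossing a cut is bounded by the Rent terminal count of the partitions.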

Cost-Benefit Analysis and the Theory of Fuzzy Decisions - Identification and Measurement Theory (Hardcover, 2004 ed.)
Kofi Kissi Dompere
R4,237 Discovery Miles 42 370 Ships in 18 - 22 working days

The genus of definitions for the theoretical sciences is (the province of) the habitus of the intellective intention, for the practical sciences, however, that of the effective intention; the objects and ends constitute the specific difference. There is nothing in the intellect that has not already been in the senses, that is, in the sensory organs, that has not already been in sensible things from which are distinguished things not perceptible to the senses. Nothing can be of the mind, sensation and the thing inferred therefrom except the operation itself. Real learning is cognition of things in themselves. It thus has the basis of its certainty in the known thing. This is established in two ways: by demonstration in the case of contemplative things, and by induction in the case of things perceptible to the senses. In contrast with real learning there is possible, probable and fictive learning. Antonius Gvilielmus Amo Afer (1827) This research has been long in the making. Its conception began in my last years in the doctoral program at Temple University, Philadelphia, Pa. It was simultaneously conceived with my two books on the Neo Keynesian Theory of Optimal aggregate investment and output dynamics [201] [202] as well as reflections on the methodology of decision-choice rationality and development economics [440] [441]. Economic theories and social policies were viewed to have, among other things, one important thing in common in that they relate to decision making under different.

Applications of Circularly Polarized Radiation Using Synchrotron and Ordinary Sources (Hardcover, 1985 ed.)
Fritz Allen, Carlos Bustamante
R2,821 Discovery Miles 28 210 Ships in 18 - 22 working days

The experimental research presented at the conference and reported here deals mainly with the visible wavelength region and slight extensions to either side (roughly from 150 nm to 1000 nm, 8.3 eV to 1.2 eV). A single exception was that dealing with a description of spin-resolved photoelectron spectroscopy at energies up to 40 eV (31 nm). This work was done using circularly polarized radiation emitted above and below the plane of the circulating electrons in a synchrotron ring. The device at BESSY (West Germany) in which the experiments were carried out seems to be the only one presently capable of providing circularly polarized radiation in the X-ray through vacuum ultraviolet energy range. A much more intense source is needed in this range. A possible solution was proposed which could provide not only circularly polarized photons over a wide energy range, but could in principle modulate the polarization of the beam between two orthogonal polarization states. Realization of this device, or an equivalent one, would be a vital step towards the goal of determining all components of the Mueller matrix for each spectroscopic experiment. A variety of theoretical treatments are presented describing the different phenomena emerging from the interaction of matter and polarized radiation in a wide range of energies. From this work we expect to learn what are the most useful wavelength regions and what types of samples are the most suitable for study.

Utility Maximization in Nonconvex Wireless Systems (Hardcover, 2012)
Johannes Brehmer
R2,654 Discovery Miles 26 540 Ships in 18 - 22 working days

This monograph develops a framework for modeling and solving utility maximization problems in nonconvex wireless systems. The first part develops a model for utility optimization in wireless systems. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed in the second part of the book. The development is based on a careful examination of the properties that are required for the application of each method. This part focuses on problems whose initial formulation does not allow for a solution by standard methods and discusses alternative approaches. The last part presents two case studies to demonstrate the application of the proposed framework. In both cases, utility maximization in multi-antenna broadcast channels is investigated.

Active Networks and Active Network Management - A Proactive Management Framework (Mixed media product, 2001 ed.)
Stephen F. Bush, Amit B. Kulkarni
R2,763 Discovery Miles 27 630 Ships in 18 - 22 working days

Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest. Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make significant impact are identified, and the reader is pointed to any related ongoing research efforts in the area. The book also provides a deeper insight into the active networking model for students and researchers, who seek challenging topics that define or extend frontiers of the technology. It describes basic components of the model, explains some of the terms used by the active networking community, and provides the reader with taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management. 
The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research. Management is a very important issue in active networks because of its open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a pro-active management system; in other words, it provides the ability to solve a potential problem before it impacts the system by modeling network devices within the network itself and running that model ahead of real time.

Parallel Processors - Will They Ever Meet? (Hardcover, New)
Gil Lerman, Larry Rudolph
R2,453 Discovery Miles 24 530 Ships in 18 - 22 working days

1. Introduction.
2. Classification of Parallel Processors.- 2.1. A Brief History of Classification Schemes.- 2.2. The Classification Scheme Used in This Work.- 2.3. A Look at the Classification Characteristics.- 2.3.1. Applications.- 2.3.2. Control.- 2.3.3. Data Exchange and Synchronization.- 2.3.4. Number and Type of Processors.- 2.3.5. Interconnection Network.- 2.3.6. Memory Organization and Addressing.- 2.3.7. Type of Constructing Institution.- 2.3.8. Period of Construction.- 2.4. Information-Gathering Details.- 2.4.1. Classification Choices.- 2.4.2. Qualifications for Inclusion.- 2.4.3. Extent.- 2.4.4. Sources.- 2.5. An Apology.
3. Emergent Trends.- 3.1. Applications.- 3.1.1. Correlation with Period of Construction.- 3.1.2. Correlation with Constructing Institution.- 3.1.3. Correlation with the Control Mechanism.- 3.1.4. Correlation with the Data Exchange and Synchronization Mechanism.- 3.1.5. Correlation with the Number and Type of Processors.- 3.1.6. Correlation with the Interconnection Network.- 3.1.7. Correlation with the Memory Organization.- 3.2. Mode of Control.- 3.2.1. Correlation with the Period of Construction.- 3.2.2. Correlation with the Type of Constructing Institution.- 3.2.3. Correlation with the Data Exchange and Synchronization Mechanism.- 3.2.4. Correlation with the Number and Type of Processors.- 3.2.5. Correlation with the Interconnection Network.- 3.2.6. Correlation with the Memory Organization.- 3.3. Data Exchange and Synchronization.- 3.3.1. Correlation with the Period of Construction.- 3.3.2. Correlation with the Type of Constructing Institution.- 3.3.3. Correlation with the Number and Type of PEs.- 3.3.4. Correlation with the Interconnection Network.- 3.3.5. Correlation with the Memory Organization.- 3.4. The Number and Type of PEs.- 3.4.1. Correlation with the Period of Construction.- 3.4.2. Correlation with the Constructing Institution.- 3.4.3. Correlation with the Interconnection Network.- 3.4.4. Correlation with the Memory Organization.- 3.5. Interconnection Network.- 3.5.1. Correlation with the Period of Construction.- 3.5.2. Correlation with the Type of Constructing Institution.- 3.5.3. Correlation with the Memory Organization.- 3.6. Memory Organization.- 3.6.1. Correlation with the Period of Construction.- 3.6.2. Correlation with the Type of Constructing Institution.- 3.7. Type of Constructing Institution.- 3.7.1. Correlation with the Construction Period.- 3.8. Period of Construction.- 3.9. Summary of the Correlations.
4. Popular Machine Models.- 4.1. Exposing the Complex Patterns.- 4.2. General-Purpose Machines.- 4.2.1. Model I - MIMD, Shared Memory.- 4.2.2. Model I, the High-End, Numeric Variant.- 4.2.3. Model II - MIMD, Message Passing.- 4.2.4. Model II, the High End.- 4.2.5. Model III - General Purpose SIMD Machines.- 4.3. Model IV - Image (and Signal) Processing SIMD Machines.- 4.4. Model V - Database MIMD Machines, Two Variants.- 4.5. Trends in Commercialization.- 4.5.1. The Number Crunchers.- 4.5.2. The Multiprocessor Midrange.- 4.5.3. The Hypercube.
5. The Shape of Things to Come?.- 5.1. Underlying Assumptions.- 5.2. Applications.- 5.3. Control.- 5.4. Data Exchange and Synchronization.- 5.5. Number and Type of PEs.- 5.6. Interconnection Networks.- 5.7. Memory Organization.- 5.8. Sources.- 5.9. Classification of Parallel Computers.- 5.10. Summary.
Appendix: Information about the Systems.

Handbook of Randomized Computing - Volume I/II (Hardcover, 2001 ed.): Sanguthevar Rajasekaran, Panos M. Pardalos, J. H. Reif,... Handbook of Randomized Computing - Volume I/II (Hardcover, 2001 ed.)
Sanguthevar Rajasekaran, Panos M. Pardalos, J. H. Reif, Jose Rolim
R1,669 Discovery Miles 16 690 Ships in 18 - 22 working days

The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as how valid the assumption made on the input space is. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis of randomized algorithms is therefore valid for all possible inputs.
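The quicksort example in the blurb above can be illustrated with a minimal sketch (not taken from the book; the function name and structure are our own): choosing the pivot by a coin flip inside the algorithm makes the O(n log n) expected running time hold for every input, with no assumption on the input distribution.

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot.

    Because the pivot choice is random, the expected running time is
    O(n log n) for *every* input, not just on average over inputs."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)  # the "coin flip" inside the algorithm
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

A deterministic quicksort that always picks the first element hits its O(n²) worst case on already-sorted input; the randomized variant has no such bad input, only unlucky coin flips.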

Multi-Level Simulation for VLSI Design (Hardcover, 1987 ed.): D. D. Hill, D. R. Coelho Multi-Level Simulation for VLSI Design (Hardcover, 1987 ed.)
D. D. Hill, D. R. Coelho
R2,770 Discovery Miles 27 700 Ships in 18 - 22 working days

1.1 CAD, Specification and Simulation. Computer Aided Design (CAD) is today a widely used expression referring to the study of ways in which computers can be used to expedite the design process. This can include the design of physical systems, architectural environments, manufacturing processes, and many other areas. This book concentrates on one area of CAD: the design of computer systems. Within this area, it focusses on just two aspects of computer design, the specification and the simulation of digital systems. VLSI design requires support in many other CAD areas, including automatic layout, IC fabrication analysis, test generation, and others. The problem of specification is unique, however, in that it is often the first one encountered in large chip designs, and one that is unlikely ever to be completely automated. This is true because until a design's objectives are specified in a machine-readable form, there is no way for other CAD tools to verify that the target system meets them. And unless the specifications can be simulated, it is unlikely that designers will have confidence in them, since specifications are potentially erroneous themselves. (In this context the term target system refers to the hardware and/or software that will ultimately be fabricated.) On the other hand, since the functionality of a VLSI chip is ultimately determined by its layout geometry, one might question the need for CAD tools that work with areas other than layout.

Situational Method Engineering: Fundamentals and Experiences - Proceedings of the IFIP WG 8.1 Working Conference, 12-14... Situational Method Engineering: Fundamentals and Experiences - Proceedings of the IFIP WG 8.1 Working Conference, 12-14 September 2007, Geneva, Switzerland (Hardcover, 2007 ed.)
Jolita Ralyte, Sjaak Brinkkemper, Brian Henderson-Sellers
R2,703 Discovery Miles 27 030 Ships in 18 - 22 working days

This book contains the papers from the IFIP Working Group 8.1 conference on Situational Method Engineering. Over the last decade, Method Engineering, defined as the engineering discipline to design, construct and adapt methods, including supportive tools, has emerged as the research and application area concerned with methods for systems development.

A Designer's Guide to VHDL Synthesis (Hardcover, 1994 ed.): Douglas E. Ott, Thomas J. Wilderotter A Designer's Guide to VHDL Synthesis (Hardcover, 1994 ed.)
Douglas E. Ott, Thomas J. Wilderotter
R4,190 Discovery Miles 41 900 Ships in 18 - 22 working days

A Designer's Guide to VHDL Synthesis is intended both for design engineers who want to use VHDL-based logic synthesis to design ASICs and for managers who need to gain a practical understanding of the issues involved in using this technology. The emphasis is placed more on practical applications of VHDL and synthesis based on actual experiences, rather than on a more theoretical approach to the language. VHDL and logic synthesis tools provide very powerful capabilities for ASIC design, but are also very complex and represent a radical departure from traditional design methods. This situation has made it difficult for both designers and management to get started in using this technology, since a major learning effort and 'culture change' is required. A Designer's Guide to VHDL Synthesis has been written to help design engineers and other professionals successfully make the transition to a design methodology based on VHDL and logic synthesis instead of the more traditional schematic-based approach. While there are a number of texts on the VHDL language and its use in simulation, little has been written from a designer's viewpoint on how to use VHDL and logic synthesis to design real ASIC systems. The material in this book is based on experience gained in successfully using these techniques for ASIC design and relies heavily on realistic examples to demonstrate the principles involved.

Design of Reservation Protocols for Multimedia Communication (Hardcover, 1996 ed.): Luca Delgrossi Design of Reservation Protocols for Multimedia Communication (Hardcover, 1996 ed.)
Luca Delgrossi
R4,175 Discovery Miles 41 750 Ships in 18 - 22 working days

The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective that is made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitter, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System), and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.

Computational Complexity and Feasibility of Data Processing and Interval Computations (Hardcover, 1998 ed.): V. Kreinovich,... Computational Complexity and Feasibility of Data Processing and Interval Computations (Hardcover, 1998 ed.)
V. Kreinovich, A.V. Lakeyev, J Rohn, P.T. Kahl
R5,398 Discovery Miles 53 980 Ships in 18 - 22 working days

Targeted audience: * Specialists in numerical computations, especially in numerical optimization, who are interested in designing algorithms with automatic result verification, and who would therefore be interested in knowing how general their algorithms can in principle be. * Mathematicians and computer scientists who are interested in the theory of computing and computational complexity, especially computational complexity of numerical computations. * Students in applied mathematics and computer science who are interested in computational complexity of different numerical methods and in learning general techniques for estimating this computational complexity. The book is written with all explanations and definitions added, so that it can be used as a graduate-level textbook. What this book is about: Data processing. In many real-life situations, we are interested in the value of a physical quantity y that is difficult (or even impossible) to measure directly. For example, it is impossible to directly measure the amount of oil in an oil field or the distance to a star. Since we cannot measure such quantities directly, we measure them indirectly, by measuring some other quantities x_i and using the known relation between y and the x_i's to reconstruct y. The algorithm that transforms the results of measuring the x_i into an estimate ŷ for y is called data processing.
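The indirect-measurement setup described above can be sketched with interval arithmetic, the book's central tool: each measured x_i comes with guaranteed error bounds, i.e. an interval, and propagating intervals through the known relation yields guaranteed bounds on y. The relation y = x1 * x2 below is a hypothetical stand-in chosen purely for illustration, not an example from the book.

```python
def interval_mul(a, b):
    """Product of two intervals a = [a_lo, a_hi] and b = [b_lo, b_hi].

    The true product of any values inside a and b is guaranteed to lie
    inside the returned interval."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Hypothetical indirect measurement: y = x1 * x2, with each x_i known
# only up to measurement error (x1 = 2.0 +/- 0.1, x2 = 3.0 +/- 0.1).
x1 = (1.9, 2.1)
x2 = (2.9, 3.1)
print(interval_mul(x1, x2))  # guaranteed bounds on y, roughly (5.51, 6.51)
```

The book's subject is precisely how hard such computations are in general: evaluating guaranteed bounds is cheap for a single product, but becomes computationally intractable for many natural classes of relations between y and the x_i.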

You may like...
Beginner's Guide to SolidWorks 2015…
Alejandro Reyes Paperback R1,827 Discovery Miles 18 270
Mechanics and Control - Proceedings of…
R.S. Guttalu Hardcover R2,466 Discovery Miles 24 660
Impacts in Mechanical Systems - Analysis…
Bernard Brogliato Hardcover R2,806 Discovery Miles 28 060
Principles & Design of Mechanical Face…
AO Lebeck Hardcover R7,869 Discovery Miles 78 690
Biolubricants - Science and Technology
J.C.J. Bart, E. Gucciardi, … Hardcover R6,096 Discovery Miles 60 960
Mechanics Of Materials - SI Edition
Barry Goodno, James Gere Paperback R1,430 R1,329 Discovery Miles 13 290
Hyperbolic Conservation Laws in…
Constantine M. Dafermos Hardcover R6,637 Discovery Miles 66 370
Numerical Methods for Nonsmooth…
Vincent Acary, Bernard Brogliato Hardcover R7,926 Discovery Miles 79 260
Remote Control Robotics
Craig Sayers Hardcover R2,679 Discovery Miles 26 790
Techniques of Scientific Computing (Part…
P.G. Ciarlet Hardcover R3,262 Discovery Miles 32 620
