THE CONTEXT OF PARALLEL PROCESSING The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements, on the other.
There is an increasing demand for dynamic systems to become safer, more reliable and more economical in operation. This requirement extends beyond the normally accepted safety-critical systems (e.g., nuclear reactors, aircraft and many chemical processes) to systems such as autonomous vehicles and some process control systems where system availability is vital. The field of fault diagnosis for dynamic systems (including fault detection and isolation) has become an important topic of research. Many applications of qualitative and quantitative modelling, statistical processing and neural networks are now being planned and developed in complex engineering systems. Issues of Fault Diagnosis for Dynamic Systems has been prepared by experts in fault detection and isolation (FDI) and fault diagnosis with wide-ranging experience. Subjects featured include: real plant application studies; non-linear observer methods; robust approaches to FDI; the use of parity equations; statistical process monitoring; qualitative modelling for diagnosis; parameter estimation approaches to FDI; fault diagnosis for descriptor systems; FDI in inertial navigation; structured approaches to FDI; change detection methods; and bio-medical studies. Researchers and industrial experts will appreciate the combination of practical issues and mathematical theory with many examples. Control engineers will profit from the application studies.
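The residual-generation idea that underlies many of the FDI approaches listed above (observers, parity equations, parameter estimation) can be sketched very simply. The Python fragment below is an illustrative toy, not taken from the book: a nominal first-order model predicts the plant output and a fault is flagged whenever the residual exceeds a fixed threshold; the model coefficients, threshold, and injected sensor bias are all invented values.

```python
# Minimal residual-based fault detection sketch (illustrative only, not from the book).
# A nominal first-order model predicts the plant output; a fault is flagged when the
# residual (measurement minus prediction) exceeds a fixed threshold.

def detect_faults(measurements, inputs, a=0.9, b=0.5, threshold=0.8):
    """Return the indices of samples whose residual exceeds the threshold."""
    x_hat = 0.0                      # state of the nominal model
    faulty = []
    for k, (y, u) in enumerate(zip(measurements, inputs)):
        residual = y - x_hat         # parity-style residual: measurement minus prediction
        if abs(residual) > threshold:
            faulty.append(k)
        x_hat = a * x_hat + b * u    # propagate the nominal model one step
    return faulty

if __name__ == "__main__":
    u = [1.0] * 20
    y, x = [], 0.0
    for k in range(20):
        x = 0.9 * x + 0.5 * u[k]
        y.append(x + (2.0 if k >= 12 else 0.0))   # additive sensor fault from sample 12
    print(detect_faults(y, u))                     # expected: [12, 13, ..., 19]
```

Observer-based and parity-equation methods differ mainly in how the prediction is formed and how the threshold is made robust to disturbances; the basic detection logic is the same.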
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer or a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so does the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What System Architects Need to Know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
This book describes a circuit architecture for converting real analog signals into a digital format, suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) continuous-time sigma-delta modulation (CT-ΣΔM), has the potential to provide better digital data quality and achieve better data rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
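As a rough illustration of the noise-shaping principle behind sigma-delta conversion, here is a minimal discrete-time, first-order sigma-delta modulator in Python. It is a deliberately simplified relative of the MASH continuous-time architecture the book covers (single stage, single bit, ideal components), intended only to show how an oversampled 1-bit stream tracks an analog input; the function name and test values are invented.

```python
# Toy first-order sigma-delta modulator (discrete-time, ideal components).
# The quantization error is accumulated in an integrator and fed back, pushing
# the noise to high frequencies where a digital decimation filter can remove it.

def sigma_delta(samples):
    """Convert samples in [-1.0, 1.0] into a 1-bit stream by first-order noise shaping."""
    integrator = 0.0
    feedback = -1.0                          # analog value of the previous output bit
    bits = []
    for x in samples:
        integrator += x - feedback           # accumulate the error against the fed-back bit
        bit = 1 if integrator >= 0.0 else 0  # 1-bit quantizer
        bits.append(bit)
        feedback = 1.0 if bit else -1.0      # ideal 1-bit DAC in the feedback path
    return bits

stream = sigma_delta([0.5] * 1000)           # constant input of 0.5
recovered = sum(1.0 if b else -1.0 for b in stream) / len(stream)
print(round(recovered, 3))                   # close to 0.5: the averaged bit stream tracks the input
```

A MASH modulator cascades several such stages and digitally recombines their outputs so that the quantization error of each stage is cancelled by the next, which is what buys the higher resolution at a given oversampling ratio.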
Recently, a variety of results on the complexity status of the graph isomorphism problem have been obtained. These results belong to the so-called structural part of Complexity Theory. Our idea behind this book is to summarize such results, which might otherwise not be easily accessible in the literature, and also to give the reader an understanding of the aims and topics in Structural Complexity Theory in general. The text is basically self-contained; the only prerequisite for reading it is some elementary knowledge from Complexity Theory and Probability Theory. It can be used to teach a seminar or a monographic graduate course, but parts of it (especially Chapter 1) also provide a source of examples for a standard graduate course on Complexity Theory. Many people have helped us in different ways in the process of writing this book. Especially, we would like to thank V. Arvind, R.V. Book, E. Mayordomo, and the referee, who gave very constructive comments. This book project was made possible in particular by a DAAD grant in the "Acciones Integradas" program. The third author has been supported by the ESPRIT project ALCOM-II.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low-cost, high-performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in Ispra (Italy) from 4 to 8 November 1991. First we present an overview of various trends in the design of parallel architectures, especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on T.Node.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to traditional approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which have in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
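The latency-hiding idea mentioned for Part II can be illustrated, very loosely, with ordinary software threads: while one thread stalls on a long-latency operation, another runnable thread keeps the processor busy. The Python sketch below is a generic illustration and not an example from the book; the "memory latency" is simulated with time.sleep, and the thread count and timings are invented. (Hardware multithreading switches contexts in a few cycles rather than via an OS scheduler, but the overlap principle is the same.)

```python
# Toy demonstration of latency hiding: four threads each wait on a simulated
# long-latency access, then do a short burst of computation. Because the waits
# overlap, total wall time stays near one latency instead of four.

import threading
import time

LATENCY = 0.2   # simulated remote-memory access time in seconds

def worker(results, i):
    time.sleep(LATENCY)          # long-latency access: this thread stalls here
    results[i] = i * i           # short burst of computation after the "reply"

def run(num_threads=4):
    results = [None] * num_threads
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results, f"elapsed ~{time.time() - start:.2f}s (not {num_threads * LATENCY:.2f}s)")

if __name__ == "__main__":
    run()
```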
This book presents the cyber culture of micro, macro, cosmological, and virtual computing. The book shows how these work to formulate, explain, and predict the processes and phenomena of monitoring and controlling technology in physical and virtual space. The authors posit a basic proposal to transform the description of a function's truth table and a structure's adjacency matrix into a qubit vector, focusing on memory-driven computing based on the parallel execution of logic operations. The authors offer a metric for the measurement of processes and phenomena in cyberspace, and also an architecture of logic associative computing for decision-making and big data analysis. The book outlines an innovative theory and practice of design, test, simulation, and diagnosis of digital systems based on the use of a qubit coverage-vector to describe the functional components and structures. The authors provide a description of the technology for SoC HDL-model diagnosis, based on a Test Assertion Blocks Activated Graph. Examples of cyber-physical systems for digital monitoring and cloud management of social objects and transport are proposed. A presented automaton model of cosmological computing explains the cyclical and harmonious evolution of matter-energy essence, and also a space-time form of the Universe.
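One way to read the truth-table-to-vector transformation described above is simply to store a Boolean function as the flat vector of its output column, indexed by input combination, so that whole-table manipulations become element-wise vector operations. The Python sketch below illustrates only that reading; the function name and encoding are invented for the example, and the authors' qubit coverage-vector may differ in detail.

```python
# Illustrative encoding of a truth table as a single output vector.

def truth_table_vector(func, num_inputs):
    """Return the output column of a Boolean function as a flat bit vector,
    indexed by the integer value of the input combination."""
    return [func(*((i >> bit) & 1 for bit in reversed(range(num_inputs))))
            for i in range(2 ** num_inputs)]

# Two-input AND gate: inputs 00, 01, 10, 11 map to outputs 0, 0, 0, 1.
and_vector = truth_table_vector(lambda a, b: a & b, 2)
print(and_vector)                                        # [0, 0, 0, 1]

# Element-wise operations on such vectors evaluate every input combination at once,
# which is the "parallel logic operations" flavour of memory-driven computing.
or_vector = truth_table_vector(lambda a, b: a | b, 2)
print([x | y for x, y in zip(and_vector, or_vector)])    # [0, 1, 1, 1]
```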
This book contains the papers that were presented at the ninth Very Large Scale Integrated Systems conference, VLSI'97, which is organized biannually by IFIP Working Group 10.5. It took place at Hotel Serra Azul, in Gramado, Brazil, from 26-30 August 1997. Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble and Tokyo. The papers in this book report on all aspects of importance to the design of current and future integrated systems. The current trend towards the realization of versatile Systems-on-a-Chip requires attention to embedded hardware/software systems, dedicated ASIC hardware, sensors and actuators, mixed analog/digital design, video and image processing, low-power battery operation and wireless communication. The papers presented in this book are organized in two tracks, one dealing with VLSI System Design and Applications and the other presenting VLSI Design Methods and CAD. The following topics are addressed. VLSI System Design and Applications track: VLSI for Video and Image Processing; Microsystem and Mixed-mode Design; Communication and Memory System Design; Low-voltage & Low-power Analog Circuits; High-Speed Circuit Techniques; Application-Specific DSP Architectures. VLSI Design Methods and CAD track: Specification and Simulation at System Level; Synthesis and Technology Mapping; CAD Techniques for Low-Power Design; Physical Design Issues in Sub-micron Technologies; Architectural Design and Synthesis; Testing in Complex Mixed Analog and Digital Systems.
This book provides a comprehensive survey of recent progress in the design and implementation of Networks-on-Chip. It addresses a wide spectrum of on-chip communication problems, ranging from the physical and network layers to the application layer. Specific topics that are explored in detail include packet routing, resource arbitration, error control/correction, application mapping, and communication scheduling. Additionally, a novel bi-directional communication channel NoC (BiNoC) architecture is described with detailed explanation. Written for practicing engineers in need of practical knowledge about the design and implementation of networks-on-chip; includes tutorial-like details to introduce readers to a diverse range of NoC designs, as well as in-depth analysis for designers with NoC experience to explore advanced issues; describes a variety of on-chip communication architectures, including a novel bi-directional communication channel NoC. From the Foreword: "Overall this book shows important advances over the state of the art that will affect future system design as well as R&D in tools and methods for NoC design. It represents an important reference point for both designers and electronic design automation researchers and developers." - Giovanni De Micheli
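To make "packet routing" concrete, here is a toy Python sketch of dimension-order (XY) routing on a 2-D mesh, one of the most common baseline NoC routing schemes. It is a generic illustration rather than the BiNoC-specific algorithm described in the book; the coordinates and function name are invented for the example.

```python
# Dimension-order (XY) routing on a 2-D mesh: route fully in X first, then in Y.
# Deterministic and deadlock-free on a mesh, which is why it is a common baseline.

def xy_route(src, dst):
    """Return the list of (x, y) router hops from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # travel along the X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then along the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```

Adaptive and bi-directional-channel schemes such as BiNoC start from this kind of baseline and add freedom in channel direction or path choice to relieve congestion.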
The most important use of computing in the future will be in the context of the global "digital convergence", where everything becomes digital and everything is inter-networked. Applications will be dominated by storage, search, retrieval, analysis, exchange and updating of information in a wide variety of forms. Heavy demands will be placed on systems by many simultaneous requests. And, fundamentally, all this shall be delivered at much higher levels of dependability, integrity and security. Increasingly, large parallel computing systems and networks are providing unique challenges to industry and academia in dependable computing, especially because of the higher failure rates intrinsic to these systems. The challenge in the last part of this decade is to build systems that are both inexpensive and highly available. A machine cluster built of commodity hardware parts, with each node running an OS instance and a set of applications extended to be fault resilient, can satisfy the new stringent high-availability requirements. The focus of this book is to present recent techniques and methods for implementing fault-tolerant parallel and distributed computing systems. Section I, Fault-Tolerant Protocols, considers basic techniques for achieving fault tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication, static total causal ordering protocols, and a fail-aware datagram service that supports communications by time.
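A small, generic illustration of one building block that such fault-tolerant protocols rely on is a timeout-based heartbeat failure detector: a node is suspected once no heartbeat has arrived within the timeout window. The Python sketch below is a textbook technique used here for illustration, not a specific protocol from the book; the class, node names, and timeout are invented.

```python
# Timeout-based heartbeat failure detection: nodes send periodic heartbeats,
# and any node whose last heartbeat is older than the timeout is suspected.

import time

class HeartbeatDetector:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node):
        """Record a heartbeat message received from `node`."""
        self.last_seen[node] = time.time()

    def suspects(self):
        """Return the nodes whose last heartbeat is older than the timeout."""
        now = time.time()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

detector = HeartbeatDetector(timeout=0.1)
detector.heartbeat("node-a")
detector.heartbeat("node-b")
time.sleep(0.15)
detector.heartbeat("node-b")        # node-b keeps sending; node-a has gone quiet
print(detector.suspects())          # ['node-a']
```

Group communication and total-ordering protocols layer membership agreement and message ordering on top of exactly this kind of (unreliable) failure information.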
A revealing insight into the issues surrounding information infrastructure implementation in large, global corporations. Case study material from six different international corporations (AstraZeneca, IBM, Norsk Hydro, Roche, SKF, and Statoil) shows a complex picture of implementation, and one that cannot be interpreted using current management thinking. The authors suggest that the economics of standards and increasing returns be joined with perspectives from the social studies of science and technology to provide the fundamentals for a fresh view of the management of IT in global corporations.
DAPSYS (International Conference on Distributed and Parallel Systems) is an international biannual conference series dedicated to all aspects of distributed and parallel computing. DAPSYS 2008, the 7th International Conference on Distributed and Parallel Systems was held in September 2008 in Hungary. Distributed and Parallel Systems: Desktop Grid Computing, based on DAPSYS 2008, presents original research, novel concepts and methods, and outstanding results. Contributors investigate parallel and distributed techniques, algorithms, models and applications; present innovative software tools, environments and middleware; focus on various aspects of grid computing; and introduce novel methods for development, deployment, testing and evaluation. This volume features a special focus on desktop grid computing as well. Designed for a professional audience composed of practitioners and researchers in industry, this book is also suitable for advanced-level students in computer science.
This book presents high-/mixed-voltage analog and radio frequency (RF) circuit techniques for developing low-cost multistandard wireless receivers in nanometer-scale CMOS processes. Key benefits of high-/mixed-voltage RF and analog CMOS circuits are explained, state-of-the-art examples are studied, and circuit solutions before and after voltage-conscious design are compared. Three real design examples are included, which demonstrate the feasibility of high-/mixed-voltage circuit techniques. Provides a valuable summary and real case studies of the state of the art in high-/mixed-voltage circuits and systems; includes novel high-/mixed-voltage analog and RF circuit techniques, from concept to practice; describes the first high-voltage-enabled mobile-TV RF front-end in 90nm CMOS and the first mixed-voltage full-band mobile-TV receiver in 65nm CMOS; and demonstrates the feasibility of high-/mixed-voltage circuit techniques with real design examples.
Before use, standard ERP systems such as SAP R/3 need to be customized to meet the concrete requirements of the individual enterprise. This book provides an overview of the process models, methods, and tools offered by SAP and its partners to support this complex and time-consuming process. It begins by characterizing the foundations of the latest ERP systems from both a conceptual and technical viewpoint, whereby the most important components and functions of SAP R/3 are described. The main part of the book then goes on to present the current methods and tools for the R/3 implementation based on newer process models (roadmaps).
The Compact Disc (CD), as a standardized information carrier, has become one of the most successful consumer products ever marketed. Although the original disc was intended for audio playback, its specific advantages very quickly opened the way towards various computer applications. The standardization of the Compact Disc Read-Only Memory (CD-ROM) and of all succeeding similar products, like Compact Disc interactive (CD-i), Photo and Video CD, CD Recordable (CD-R), and CD Rewritable (CD-RW), has substantially enlarged the range of possible applications. The plastic disc represented from the very beginning a removable medium of large storage capacity. The advent of the personal computer, accompanied by the increasing demand for both data distribution and exchange, has strongly marked the evolution of the CD-ROM drive. The number of CD-ROM units sold exceeded 60 million in 1997, compared to about 2.5 million in 1992. As computing power continuously improved over the years, computer peripherals have also targeted better performance specifications. In particular, the speed of CD-ROM drives increased from the so-called 1X in 1984 to double speed in 1992, and further to 32X at the beginning of 1998. The average time needed to access data on disc has dropped from about 300 ms to less than 90 ms within the same period.
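For context on those speed multipliers, a quick back-of-the-envelope calculation can be made using the standard 1X user-data rate of 150 KB/s (75 sectors per second at 2,048 bytes of user data each); the figures below are that arithmetic, not numbers quoted from the book.

```python
# Data rates implied by the NX speed ratings: an NX drive moves roughly N x 150 KB/s.

BASE_RATE_KBPS = 150        # 1X user-data rate in KB/s (75 sectors/s x 2,048 bytes)

for speed in (1, 2, 32):
    rate = speed * BASE_RATE_KBPS
    print(f"{speed:>2}X  ->  {rate:>5} KB/s  (~{rate / 1024:.1f} MB/s)")

#  1X  ->    150 KB/s  (~0.1 MB/s)
#  2X  ->    300 KB/s  (~0.3 MB/s)
# 32X  ->   4800 KB/s  (~4.7 MB/s)
```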
In today's world, services and data are integrated in ever new constellations, requiring the easy, flexible and scalable integration of autonomous, heterogeneous components into complex systems at any time. Event-based architectures inherently decouple system components. Event-based components are not designed to work with specific other components in a traditional request/reply mode, but separate communication from computation through asynchronous communication mechanisms via a dedicated notification service. Muhl, Fiege, and Pietzuch provide the reader with an in-depth description of event-based systems. They cover the complete spectrum of topics, ranging from a treatment of local event matching and distributed event forwarding algorithms, through a more practical discussion of software engineering issues raised by the event-based style, to a presentation of state-of-the-art research topics in event-based systems, such as composite event detection and security. Their presentation gives researchers a comprehensive overview of the area and plenty of hints for future research. In addition, they show the power of event-based architectures in modern system design, thus encouraging professionals to exploit this technique in next-generation large-scale distributed applications like information dissemination, network monitoring, enterprise application integration, or mobile systems.
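The decoupling described above is easy to see in miniature. The Python sketch below is a toy, in-process, synchronous notification service (the systems treated in the book are distributed and asynchronous); the class name and topic strings are invented for the example. Publishers address a topic on the broker, never the subscribers themselves.

```python
# Minimal publish/subscribe broker: producers publish notifications by topic,
# consumers register handlers, and neither side knows about the other.

from collections import defaultdict

class NotificationService:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callback for notifications on the given topic."""
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every handler subscribed to the topic."""
        for handler in self.subscribers[topic]:
            handler(event)

bus = NotificationService()
bus.subscribe("sensor/temperature", lambda e: print("monitor got", e))
bus.subscribe("sensor/temperature", lambda e: print("logger  got", e))
bus.publish("sensor/temperature", {"value": 21.5})
# Both components react without the publisher knowing they exist.
```

Real notification services add content-based matching, distributed event forwarding between brokers, and delivery guarantees, which is where the algorithms covered in the book come in.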
This book provides the most comprehensive and consistent survey of the field of IC design for Biological Sensing and Processing. The authors describe a multitude of applications that require custom CMOS IC design and highlight the techniques in analog and mixed-signal circuit design that potentially can cross boundaries and benefit the very wide community of bio-medical engineers.
This book is intended to serve as a textbook for a second course in the implementation (i.e., microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
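A quick, standard back-of-the-envelope model (not taken from this book) shows why pipelining is the central theme: with k stages and n instructions, an ideal pipeline finishes in k + (n - 1) cycles instead of n * k, so the speedup approaches k for long instruction streams. The Python fragment below works through a 5-stage example with invented instruction counts.

```python
# Ideal pipeline speedup: unpipelined execution takes n * k cycles,
# a k-stage pipeline takes k + (n - 1) cycles (fill once, then one result per cycle).

def pipeline_speedup(k_stages, n_instructions):
    unpipelined_cycles = k_stages * n_instructions
    pipelined_cycles = k_stages + (n_instructions - 1)
    return unpipelined_cycles / pipelined_cycles

for n in (10, 100, 10_000):
    print(f"5-stage pipeline, {n:>6} instructions: speedup {pipeline_speedup(5, n):.2f}")
# 3.57, 4.81, 5.00 -> the speedup approaches the stage count k as n grows,
# which is why hazards and stalls (the subject of later chapters) matter so much.
```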
This book describes the algorithms and computer architectures used to create and analyze photographs in modern digital cameras. It also puts the capabilities of digital cameras into context for applications in art, entertainment, and video analysis. The author discusses the entire range of topics relevant to digital camera design, including image processing, computer vision, image sensors, system-on-chip, and optics, while clearly describing the interactions between design decisions at these different levels of abstraction. Readers will benefit from this comprehensive view of digital camera design, describing the range of algorithms used to compose, enhance, and analyze images, as well as the characteristics of optics, image sensors, and computing platforms that determine the physical limits of image capture and computing. The content is designed to be used by algorithm designers and does not require an extensive background in optics or electronics.
Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations of scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes the benchmark-oriented performance test and analysis and how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems from three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which helps programmers optimize the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of the performance problem. It also provides concrete suggestions on how to modify their parallel program to improve the performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performances. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
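As a concrete flavour of the speedup metrics discussed in Chapters 2 and 3, the short Python sketch below evaluates Amdahl's law, the classic fixed-size speedup bound. It is a standard formula used here for illustration, not necessarily the book's own generalization, and the 5% serial fraction is an invented example.

```python
# Amdahl's law: with serial fraction f, speedup on p processors is 1 / (f + (1 - f) / p).

def amdahl_speedup(f_serial, num_processors):
    return 1.0 / (f_serial + (1.0 - f_serial) / num_processors)

for p in (4, 16, 64, 1024):
    print(f"p = {p:>4}: speedup = {amdahl_speedup(0.05, p):.2f}")
# 3.48, 9.14, 15.42, 19.64 -> even with only 5% serial work the speedup saturates
# near 1 / 0.05 = 20, which is why scalability analysis looks beyond fixed-size speedup.
```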
This book guides readers through the design of hardware architectures using VHDL for digital communication and image processing applications that require high-performance computing. It further describes all the VHDL-related notions, such as the language, levels of abstraction, combinational vs. sequential logic, structural and behavioral description, digital circuit design, and finite state machines. It also includes numerous examples to make the concepts presented in the text more easily understandable.
The book is designed to provide graduate students and research novices with an introductory review of recent developments in the field of magneto-optics. The field encompasses many of the most important subjects in solid state physics, chemical physics and electronic engineering. The book deals with (1) optical spectroscopy of paramagnetic, antiferromagnetic, and ferromagnetic materials, (2) studies of photo-induced magnetism, and (3) their applications to opto-electronics. Many of these studies originate from those of ligand-field spectra of solids, which are considered to have contributed to advances in materials research for solid-state lasers.
You may like...
The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback, R2,281)
Creativity in Computing and DataFlow… by Suyel Namasudra, Veljko Milutinovic (Hardcover, R4,204)
Practical TCP/IP and Ethernet Networking… by Deon Reynders, Edwin Wright (Paperback, R1,491)
Grammatical and Syntactical Approaches… by Juhyun Lee, Michael J. Ostwald (Hardcover, R5,315)