This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process. Ontology Learning for the Semantic Web is designed for researchers and developers of semantic web applications. It also serves as an excellent supplemental reference to advanced-level courses in ontologies and the semantic web.
With the rapid growth of networking and high computing power, the demand for large-scale and complex software systems has increased dramatically. Many of these software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high assurance properties. In order to comply with high assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error-prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.
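The contrast between testing and exhaustive formal analysis can be shown in miniature. The sketch below is not taken from the book; the two-process lock protocol, the state encoding, and the function names are illustrative assumptions. It performs an exhaustive breadth-first reachability search and checks a mutual-exclusion safety property in every reachable state, which is the essence of explicit-state model checking; testing, by contrast, only samples some of these executions.

```python
# Minimal explicit-state reachability check of a safety property (illustrative sketch).
# Two processes compete for one lock; the safety property is mutual exclusion.
from collections import deque

# A state is (pc0, pc1, lock_owner); pc values: "idle", "trying", "critical".
INITIAL = ("idle", "idle", None)

def successors(state):
    pc = list(state[:2])
    owner = state[2]
    for i in (0, 1):
        if pc[i] == "idle":
            nxt = pc.copy(); nxt[i] = "trying"
            yield (nxt[0], nxt[1], owner)
        elif pc[i] == "trying" and owner is None:
            nxt = pc.copy(); nxt[i] = "critical"
            yield (nxt[0], nxt[1], i)          # acquire the lock
        elif pc[i] == "critical":
            nxt = pc.copy(); nxt[i] = "idle"
            yield (nxt[0], nxt[1], None)       # release the lock

def safe(state):
    # Mutual exclusion: both processes must never be in the critical section at once.
    return not (state[0] == "critical" and state[1] == "critical")

def check():
    seen, queue = {INITIAL}, deque([INITIAL])
    while queue:
        s = queue.popleft()
        if not safe(s):
            return False, s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

if __name__ == "__main__":
    ok, bad = check()
    print("property holds in all reachable states" if ok else f"violated in state {bad}")
```

Real verification tools handle time, data, and vastly larger state spaces symbolically, but the argument for exhaustive coverage over sampled test runs is the same.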
The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at the cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed, the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks" held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single-cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach to experimental and theoretical aspects of brain function, combining different techniques and methodologies.
The State of Memory Technology: Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate through 2016. This translates to bit densities increasing at two times every two years until the introduction of 8-gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application-specific integrated circuits (ASICs)). In the future, molecular devices, 64-gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the Memory Wall or Memory Gap.
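The widening gap implied by these growth rates is easy to quantify. The short sketch below is a back-of-the-envelope calculation only: the doubling periods come from the paragraph above, while the time horizons and the common baseline of 1.0 are illustrative assumptions.

```python
# Back-of-the-envelope growth of the processor-memory speed gap (illustrative).
# Assumes CPU speed doubles every 1.5 years and memory speed every 10 years,
# as stated above, starting from an arbitrary common baseline of 1.0.

def growth(years, doubling_period):
    return 2.0 ** (years / doubling_period)

for years in (2, 5, 10, 15):
    cpu = growth(years, 1.5)    # CPU speed relative to the baseline
    mem = growth(years, 10.0)   # memory speed relative to the baseline
    print(f"after {years:2d} years: CPU x{cpu:8.1f}, memory x{mem:5.1f}, gap x{cpu / mem:7.1f}")
```

At these rates the processor-memory gap itself roughly doubles every 1.8 years, which is the quantitative content of the "Memory Wall".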
Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision. The real global network can be achieved only via the ability to compute and access information from anywhere and at any time. This is the fundamental wish that motivates mobile computing. This evolution is the cumulative result of both hardware and software advances at various levels, motivated by tangible application needs. Infrastructure research on communications and networking is essential for realizing wireless systems. Equally important is the design and implementation of data management applications for these systems, a task directly affected by the characteristics of the wireless medium and the resulting mobility of data resources and computation. Although a relatively new area, mobile data management has provoked a proliferation of research efforts motivated both by a great market potential and by many challenging research problems. The focus of Data Management for Mobile Computing is on the impact of mobile computing on data management beyond the networking level. The purpose is to provide a thorough and cohesive overview of recent advances in wireless and mobile data management. The book is written with a critical attitude. This volume probes the new issues introduced by wireless and mobile access to data and their conceptual and practical consequences. Data Management for Mobile Computing provides a single source for researchers and practitioners who want to keep abreast of the latest innovations in the field. It can also serve as a textbook for an advanced course on mobile computing or as a companion text for a variety of courses including courses on distributed systems, database management, transaction management, operating or file systems, information retrieval or dissemination, and web computing.
Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in Computer Aided Architectural Design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This year's proceedings volume is the eighth in the series. The conference, held at the Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computing in architecture is still a young field, with many exciting results emerging both from a greater understanding of the human processes and information processing needed to support design and from the continuously expanding capabilities of digital technology.
Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice. A range of problems which have an impact on safety, security and performance are discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages.
Language, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
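As a flavour of the dependence analysis mentioned above, the classical GCD test decides whether two affine array references in a loop, A[a*i + b] and A[c*i + d], can ever touch the same element: an integer solution to a*i1 + b = c*i2 + d exists only if gcd(a, c) divides d - b. The sketch below is a generic textbook illustration, not code from the book, and the function name is an assumption.

```python
# Classical GCD test for possible dependence between the references
# A[a*i + b] and A[c*i + d] in a loop -- illustrative sketch.
from math import gcd

def gcd_test(a, b, c, d):
    """Return True if a dependence is *possible*. The test is conservative:
    True means 'maybe dependent', False means 'provably independent'."""
    if a == 0 and c == 0:
        return b == d                 # constant subscripts: dependent iff equal
    return (d - b) % gcd(a, c) == 0

# Example: A[2*i] written and A[2*i + 1] read -> gcd(2, 2) = 2 does not divide 1,
# so the references never overlap and the loop can be run in parallel.
print(gcd_test(2, 0, 2, 1))   # False: independent
print(gcd_test(4, 0, 2, 1))   # False: gcd(4, 2) = 2 does not divide 1
print(gcd_test(3, 0, 2, 1))   # True: gcd(3, 2) = 1 divides 1, dependence possible
```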
Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses the problems of distributing security updates to very large numbers of Internet hosts. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers from which individual nodes can pull missed security updates. This book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. Disseminating Security Updates at Internet Scale presents experimental measurements of a prototype implementation of "Revere" gathered using a large-scale oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. Disseminating Security Updates at Internet Scale will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by these designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.
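The push-then-pull idea itself is simple to sketch. The toy simulation below is entirely illustrative: the overlay shape, node count, loss model and function names are assumptions, and it ignores the security machinery that is the book's real subject. It pushes an update down an overlay tree over lossy links, then lets nodes that missed the push pull the update from a repository.

```python
# Toy push/pull dissemination over an overlay tree (illustrative only).
import random

random.seed(1)
NUM_NODES = 15
LOSS_PROB = 0.2          # probability a pushed message is lost on a link

# Overlay: a binary tree rooted at node 0; children of n are 2n+1 and 2n+2.
def children(n):
    return [c for c in (2 * n + 1, 2 * n + 2) if c < NUM_NODES]

def push(update, have):
    """Push the update level by level from the root; lossy links may drop it."""
    frontier = [0]
    have[0] = update
    while frontier:
        nxt = []
        for n in frontier:
            for c in children(n):
                if random.random() > LOSS_PROB:   # message survived the link
                    have[c] = update
                    nxt.append(c)                 # only nodes that got it forward it
        frontier = nxt

def pull(update, have):
    """Nodes that missed the push fetch the update from a repository server."""
    missed = [n for n in range(NUM_NODES) if have.get(n) != update]
    for n in missed:
        have[n] = update
    return missed

have = {}
push("security-update-42", have)
missed = pull("security-update-42", have)
print(f"push reached {NUM_NODES - len(missed)}/{NUM_NODES} nodes; {len(missed)} pulled from the repository")
```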
The papers in this volume were presented at the Second Annual Workshop on Active Middleware Services and were selected for inclusion here by the Editors. The AMS workshop was organized with support from both the National Science Foundation and the CAT center at the University of Arizona, and was held in Pittsburgh, Pennsylvania, on August 1, 2000, in conjunction with the 9th IEEE International Symposium on High Performance Distributed Computing (HPDC-9). The explosive growth of Internet-based applications and the proliferation of networking technologies has been transforming most areas of computer science and engineering as well as computational science and commercial application areas. This opens an outstanding opportunity to explore new, Internet-oriented software technologies that will open new research and application opportunities not only for the multimedia and commercial world, but also for the scientific and high-performance computing applications community. Two emerging technologies - agents and active networks - allow increased programmability to enable bringing new services to Internet-based applications. The AMS workshop presented research results and working papers in the areas of active networks, mobile and intelligent agents, software tools for high performance distributed computing, network operating systems, and application programming models and environments. The success of an endeavor such as this depends on the contributions of many individuals. We would like to thank Dr. Frederica Darema and the NSF for sponsoring the workshop.
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and on the Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research.
New to the Second Edition: the latest developments in standards activities (JPEG-LS, MPEG-4, MPEG-7, and H.263), and a comprehensive review of recent activities on multimedia-enhanced processors, multimedia coprocessors, and dedicated processors, including examples from industry. Image and Video Compression Standards: Algorithms and Architectures, Second Edition presents an introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG (compression of still images), H.261 and H.263 (video teleconferencing), and MPEG-1 and MPEG-2 (video storage and broadcasting). The next generation of audiovisual coding standards, such as MPEG-4 and MPEG-7, is also briefly described. In addition, the book covers the MPEG and Dolby AC-3 audio coding standards and emerging techniques for image and video compression, such as those based on wavelets and vector quantization. Image and Video Compression Standards: Algorithms and Architectures, Second Edition emphasizes the foundations of these standards; namely, techniques such as predictive coding, transform-based coding such as the discrete cosine transform (DCT), motion estimation, motion compensation, and entropy coding, as well as how they are applied in the standards. The implementation details of each standard are avoided; however, the book provides all the material necessary to understand the workings of each of the compression standards, including information that can be used by the reader to evaluate the efficiency of various software and hardware implementations conforming to these standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Image and Video Compression Standards: Algorithms and Architectures, Second Edition uniquely covers all major standards (JPEG, MPEG-1, MPEG-2, MPEG-4, H.261, H.263) in a simple and tutorial manner, while fully addressing the architectural considerations involved when implementing these standards. As such, it serves as a valuable reference for the graduate student, researcher or engineer. The book is also used frequently as a text for courses on the subject, in both academic and professional settings.
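As an illustration of the transform-based coding these standards build on, the sketch below computes an 8x8 DCT-II of an image block and applies uniform quantization. It is a didactic sketch, not an implementation of any standard: the block values and the single quantization step are arbitrary assumptions, and real codecs use per-frequency quantization tables, zig-zag scanning and entropy coding on top of this.

```python
# 8x8 DCT-II and uniform quantization -- the core of transform-based coding
# in JPEG/MPEG-style codecs (didactic sketch, not standard-conformant).
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    c = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            alpha = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
            c[k, i] = alpha * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return c

C = dct_matrix()
block = np.arange(64, dtype=float).reshape(8, 8)   # a made-up smooth 8x8 pixel block

coeffs = C @ block @ C.T                    # forward 2-D DCT
q = 16.0                                    # one arbitrary quantization step
quantized = np.round(coeffs / q)            # most high-frequency coefficients become 0
reconstructed = C.T @ (quantized * q) @ C   # dequantize and apply the inverse DCT

print("nonzero coefficients after quantization:", int(np.count_nonzero(quantized)))
print("max reconstruction error:", float(np.abs(block - reconstructed).max()))
```

On a smooth block like this one, almost all of the energy lands in a handful of low-frequency coefficients, which is what makes the subsequent entropy coding effective.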
Designing VLSI systems represents a challenging task. It is a transformation among different specifications corresponding to different levels of design abstraction: behavioral, structural and physical. The behavioral level describes the functionality of the design. It consists of two components: static and dynamic. The static component describes operations, whereas the dynamic component describes sequencing and timing. The structural level contains information about components, control and connectivity. The physical level describes the constraints that should be imposed on the floor plan, the placement of components, and the geometry of the design. Constraints of area, speed and power are also applied at this level. To implement such a multilevel transformation, a design methodology should be devised, taking into consideration the constraints, limitations and properties of each level. The mapping process between any of these domains is non-isomorphic. A single behavioral component may be transformed into more than one structural component. Design methodologies are the most recent evolution in the design automation era, which started off with the introduction and subsequent usage of module generation, especially for regular structures such as PLAs and memories. A design methodology should offer an integrated design system rather than a set of separate unrelated routines and tools. A general outline of a desired integrated design system is as follows: * Decide on a certain unified framework for all design levels. * Derive a design method based on this framework. * Create a design environment to implement this design method.
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.
In brief summary, the following results were presented in this work: * A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT. * An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates. * We presented an efficient method of estimating register requirements as a function of pipeline depth. * We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth. * We presented experimental data to verify these new techniques. * We discussed some interesting design points for register file size on a number of different architectures.
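A rough intuition for "register requirements of a schedule" is the maximum number of values that are simultaneously live. The sketch below is a generic textbook computation, not one of the algorithms summarized above; the interval representation and the example schedule are assumptions. It counts the peak overlap of live ranges with a single sweep over the schedule.

```python
# Peak number of simultaneously live values over a schedule (illustrative sketch).
# Each live range is (def_cycle, last_use_cycle), inclusive on both ends.

def max_live(ranges, schedule_length):
    delta = [0] * (schedule_length + 1)
    for start, end in ranges:
        delta[start] += 1          # value becomes live at its definition
        delta[end + 1] -= 1        # value dies after its last use
    live, peak = 0, 0
    for d in delta[:schedule_length]:
        live += d
        peak = max(peak, live)
    return peak

# Example: five values produced and consumed within a 10-cycle kernel schedule.
live_ranges = [(0, 3), (1, 6), (2, 4), (5, 9), (6, 8)]
print("registers needed (ignoring allocation constraints):", max_live(live_ranges, 10))
```

Software-pipelined (modulo-scheduled) kernels complicate this picture because live ranges wrap around the kernel and can exceed the initiation interval, which is exactly why dedicated techniques such as those summarized above are needed.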
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
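One standard way a storage manager decouples application performance from the disk is to serve most requests from main memory and pay the disk cost only on misses: the expected access time is t_eff = h*t_mem + (1-h)*t_disk for cache hit ratio h. The numbers in the sketch below are illustrative assumptions, not measurements from the book.

```python
# Effective access time of a cached disk storage manager (illustrative numbers).
T_MEM_US = 0.2        # assumed main-memory service time, microseconds
T_DISK_US = 10_000.0  # assumed disk service time (seek + rotation), microseconds

def effective_access_us(hit_ratio):
    return hit_ratio * T_MEM_US + (1.0 - hit_ratio) * T_DISK_US

for h in (0.0, 0.90, 0.99, 0.999):
    print(f"hit ratio {h:5.3f}: {effective_access_us(h):10.2f} us per access")
```

The arithmetic makes the point of the paragraph above concrete: even a 99% hit ratio leaves the average access dominated by the disk, so the hit ratio (and write behaviour) must be driven extremely high before application performance tracks processor and memory speeds.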
This book covers the entire spectrum of multicasting on the Internet, from link- to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, and group membership and total ordering in multicast groups. In-depth consideration is given to describing IP multicast routing protocols such as DVMRP, MOSPF, PIM and CBT, quality of service issues at the network layer using RSVP and ST-2, as well as the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of the MBone are described, real-time multicast transport issues are addressed and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.
Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
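The idea of exposing partitioning and scheduling as explicit, tunable parameters can be shown in miniature. The sketch below is purely illustrative and is not the book's synthesis procedure: it merely makes the block size of a 1-D data partition an explicit parameter, so that the same module can be retuned for different processor counts and memory hierarchies without rewriting it.

```python
# A partitioning parameter exposed by a reusable module (illustrative sketch).

def block_partition(n, num_workers, block_size):
    """Split range(n) into blocks of `block_size` and deal them round-robin to
    workers; block_size is the tuning knob trading locality against load balance."""
    assignment = [[] for _ in range(num_workers)]
    for b, start in enumerate(range(0, n, block_size)):
        assignment[b % num_workers].append((start, min(start + block_size, n)))
    return assignment

# The same module, tuned two different ways for 4 workers and 100 iterations:
print(block_partition(100, 4, 25))   # coarse blocks: fewer messages, better locality
print(block_partition(100, 4, 10))   # finer blocks: better load balance
```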
This volume contains a selection of papers that focus on the state of the art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of the theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 gives formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses the scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relation between scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes benchmark-oriented performance testing and analysis, and shows how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, and discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems across three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assists programmers in optimizing the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of a performance problem. It also provides concrete suggestions on how to modify a parallel program to improve its performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performance. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
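The basic metrics discussed in Chapters 2 and 3 reduce to a few well-known formulas: speedup S(p) = T1/Tp, efficiency E(p) = S(p)/p, and, under Amdahl's assumption of a fixed serial fraction f, the bound S(p) <= 1/(f + (1-f)/p). The sketch below uses these generic formulas rather than the book's refined metrics, and the timings and serial fraction are made-up assumptions.

```python
# Speedup, efficiency and the Amdahl bound for a hypothetical parallel run.

def speedup(t1, tp):
    return t1 / tp

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p

def amdahl(f, p):
    """Upper bound on speedup with serial fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)

T1 = 120.0                                # assumed serial runtime (seconds)
measured = {2: 64.0, 4: 36.0, 8: 22.0}    # assumed parallel runtimes (seconds)
for p, tp in measured.items():
    print(f"p={p}: speedup {speedup(T1, tp):5.2f}, efficiency {efficiency(T1, tp, p):5.2f}, "
          f"Amdahl bound (f=0.05) {amdahl(0.05, p):5.2f}")
```

Watching measured speedup fall away from the Amdahl bound as p grows is the simplest form of the scalability analysis the book develops in much more depth.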
Dependable Network Computing provides insights into various problems facing millions of global users resulting from the 'Internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-freedom issues. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks from commercial off-the-shelf components.
Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16-18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratically convergent procedures for matrix-geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. The book is an authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
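For context, the core computational problem behind all of these methods is finding the stationary vector pi satisfying pi P = pi for a stochastic matrix P. The sketch below uses plain power iteration on a tiny made-up chain; the workshop papers are precisely about the more sophisticated, better-conditioned and more scalable methods needed when P is large or nearly decomposable. The matrix, tolerance and iteration limit are illustrative assumptions.

```python
# Stationary distribution of a small Markov chain by power iteration (illustrative).
import numpy as np

P = np.array([[0.90, 0.08, 0.02],     # a made-up 3-state transition matrix;
              [0.10, 0.80, 0.10],     # each row sums to 1
              [0.05, 0.15, 0.80]])

def stationary(P, tol=1e-12, max_iter=100_000):
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform vector
    for _ in range(max_iter):
        nxt = pi @ P                              # one step: pi_{k+1} = pi_k P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary(P)
print("stationary distribution:", np.round(pi, 4), " residual:", float(np.abs(pi @ P - pi).max()))
```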
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.