Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques that enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization, and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
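To make the flavour of such parameterized models concrete, here is a minimal sketch (an assumed example, not the book's actual formulation): an n x n diamond-shaped DAG executed wavefront by wavefront on p processors, with the computation rate t_comp and communication rate t_comm left as the architecture-specific parameters the blurb mentions.

```python
# Illustrative sketch (not from the book): an architecture-independent
# performance model for an n x n diamond-shaped DAG executed wavefront by
# wavefront on p processors. t_comp and t_comm are the architecture-specific
# parameters; the cost formula itself is a simplifying assumption.
from math import ceil

def diamond_dag_time(n, p, t_comp=1.0, t_comm=0.1):
    """Estimated execution time of an n x n diamond DAG on p processors."""
    total = 0.0
    for d in range(2 * n - 1):                 # anti-diagonals of the DAG
        width = min(d + 1, n, 2 * n - 1 - d)   # independent nodes on diagonal d
        total += ceil(width / p) * t_comp      # computation, p-way parallel
        if p > 1:
            total += t_comm                    # boundary exchange per wavefront
    return total

if __name__ == "__main__":
    n = 64
    t1 = diamond_dag_time(n, 1)
    for p in (2, 4, 8, 16):
        tp = diamond_dag_time(n, p)
        print(f"p={p:2d}  time={tp:8.1f}  speedup={t1 / tp:5.2f}")
```

Varying t_comp and t_comm in this toy model reproduces the qualitative trade-off the book analyzes: as the communication rate grows relative to the computation rate, the achievable speedup saturates.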
Cache and Interconnect Architectures in Multiprocessors, Eilat, Israel, May 25-26, 1989. Michel Dubois, University of Southern California; Shreekant S. Thakkar, Sequent Computer Systems. The aim of the workshop was to bring together researchers working on cache coherence protocols for shared-memory multiprocessors with various interconnect architectures. Shared-memory multiprocessors have become viable systems for many applications. Bus-based shared-memory systems (e.g. Sequent's Symmetry, Encore's Multimax) are currently limited to 32 processors. The first goal of the workshop was to learn about the performance of applications on current cache-based systems. The second goal was to learn about new network architectures and protocols for future scalable systems. These protocols and interconnects would allow shared-memory architectures to scale beyond current limitations. The workshop had 20 speakers who talked about their current research. The discussions were lively and cordial enough to keep the participants away from the wonderful sand and sun for two days. The participants got to know each other well and were able to share their thoughts in an informal manner. The workshop was organized into several sessions. The summary of each session is described below. This book presents revisions of some of the papers presented at the workshop.
This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
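For readers unfamiliar with Parry's systems, the standard statement of his "proscriptive principle" (a textbook formulation, not a quotation from this book) captures the mereological idea that the content of the consequent must be part of the content of the antecedent:

```latex
% Proscriptive principle of analytic implication (standard formulation):
% a provable implication may not introduce new subject matter, i.e. every
% propositional variable of the consequent already occurs in the antecedent.
\[
\vdash A \to B \quad\Longrightarrow\quad \mathrm{var}(B) \subseteq \mathrm{var}(A)
\]
```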
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.
This volume comprises a collection of twenty written versions of invited as well as contributed papers presented at the conference held from 20-24 May 1996 in Beijing, China. It covers many areas of logic and the foundations of mathematics, as well as computer science. Also included is an article by M. Yasugi on the Asian Logic Conference which first appeared in Japanese, to provide a glimpse into the history and development of the series.
This book is devoted to logic synthesis and design techniques for asynchronous circuits. It uses the mathematical theory of Petri Nets and asynchronous automata to develop practical algorithms implemented in a public domain CAD tool. Asynchronous circuits have so far been designed mostly by hand, and are thus much less common than their synchronous counterparts, which have enjoyed a high level of design automation since the mid-1970s. Asynchronous circuits, on the other hand, can be very useful to tackle clock distribution, modularity, power dissipation and electro-magnetic interference in digital integrated circuits. This book provides the foundation needed for CAD-assisted design of such circuits, and can also be used as the basis for a graduate course on logic design.
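As a hedged illustration of the Petri net semantics that such synthesis methods build on (an assumed example, not code from the book's CAD tool), the sketch below plays the "token game": a transition is enabled when every input place holds a token, and firing moves tokens from input places to output places.

```python
# Minimal Petri net firing-rule sketch (illustrative). A transition is a
# pair (input_places, output_places); a marking maps places to token counts.
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) > 0 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1               # consume a token from each input place
    for p in outputs:
        m[p] = m.get(p, 0) + 1  # produce a token in each output place
    return m

# Hypothetical req/ack handshake fragment: req+ then ack+ then req-.
transitions = {
    "req+": (["idle"], ["req"]),
    "ack+": (["req"], ["ack"]),
    "req-": (["ack"], ["done"]),
}
marking = {"idle": 1}
for name, t in transitions.items():
    assert enabled(marking, t)
    marking = fire(marking, t)
print(marking)  # {'idle': 0, 'req': 0, 'ack': 0, 'done': 1}
```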
This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase of algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use.
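As a small illustration of one such component (my sketch, not code from the book; real standards such as LTE use more elaborate permutations), here is a basic row-in, column-out block interleaver, the simplest form of the interleaving stage mentioned above:

```python
# Minimal block interleaver sketch. Symbols are written into a rows x cols
# matrix row by row and read out column by column, which spreads burst
# errors across the codeword before channel decoding.
def block_interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = block_interleave(data, rows=3, cols=4)
assert block_deinterleave(tx, rows=3, cols=4) == data
print(tx)  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```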
The communication complexity of two-party protocols is a complexity measure only about fifteen years old, but it is already considered to be one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for the study of the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute a given task. Besides the estimation of the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for this problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and to the understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
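A small worked illustration of such a lower bound (mine, not taken from the book): the equality function EQ_n(x, y) has a 2^n x 2^n communication matrix equal to the identity matrix, so by the classic log-rank argument any deterministic protocol needs at least log2(rank) = n bits. The sketch below checks this numerically for tiny n:

```python
# Illustrative check of the log-rank lower bound for EQ_n. The communication
# matrix M[x][y] = 1 iff x == y is the identity of size 2^n, whose rank is
# 2^n, so deterministic communication complexity is at least n bits.
import numpy as np

def eq_matrix(n):
    size = 2 ** n
    return np.array([[1 if x == y else 0 for y in range(size)]
                     for x in range(size)])

for n in range(1, 5):
    rank = np.linalg.matrix_rank(eq_matrix(n))
    print(f"n={n}: rank={rank}, lower bound = {int(np.log2(rank))} bits")
```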
This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. In nearly all expanding branches of industry, electronics, especially digital electronics, is involved. And the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increase in cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers. It is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.
This book brings together a selection of the best papers from the eighteenth edition of the Forum on specification and Design Languages Conference (FDL), which took place on September 14-16, 2015, in Barcelona, Spain. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features:
* unified presentation and notation for important advanced topics
* comprehensive coverage of conditional term-rewriting systems
* state-of-the-art survey of modularity in term rewriting
* presentation of a unified framework for term and graph rewriting
* up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system
This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
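To make the terminating-and-confluent claim concrete, here is a minimal sketch (an assumed toy system, not the book's formalism) that rewrites strings with a small rule set until no rule applies; because this particular system is terminating and confluent, every application order reaches the same unique normal form.

```python
# Toy string-rewriting system (illustrative). The rules either shorten the
# string or move 'b' left past 'a', so rewriting always terminates, and the
# system is confluent: the normal form is independent of rule order.
RULES = [("aa", "a"), ("bb", "b"), ("ab", "ba")]

def normalize(term, rules=RULES):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)  # one rewrite step
                changed = True
                break
    return term

print(normalize("abab"))   # ba
print(normalize("aabb"))   # ba  (same normal form, reached via another path)
```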
Microsystem technology (MST) integrates very small (up to a few nanometers) mechanical, electronic, optical, and other components on a substrate to construct functional devices. These devices are used as intelligent sensors, actuators, and controllers for medical, automotive, household and many other purposes. This book is a basic introduction to MST for students, engineers, and scientists. It is the first of its kind to cover MST in its entirety. It gives a comprehensive treatment of all important parts of MST such as microfabrication technologies, microactuators, microsensors, development and testing of microsystems, and information processing in microsystems. It surveys products built to date and experimental products and gives a comprehensive view of all developments leading to MST devices and robots.
This book brings together a selection of the best papers from the nineteenth edition of the Forum on specification and Design Languages Conference (FDL), which took place on September 14-16, 2016, in Bremen, Germany. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
This book investigates the modeling and optimization issues in mobile social networks (MSNs). First, the architecture and applications of MSNs are examined. The existing works on MSNs are reviewed by specifying the critical challenges and research issues. Then, with the introduction of the MSN-based social graph and information dissemination mechanisms, the analytical model for epidemic information dissemination with opportunistic links in MSNs is discussed. In addition, optimal resource allocation is studied based on a heterogeneous architecture, which provides mobile social services with high capacity and low latency. Finally, this book summarizes some open problems and future research directions in MSNs. Written for researchers and academics, this book is useful for anyone working on mobile networks, network architecture, or content delivery. It is also valuable for advanced-level students of computer science.
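As a hedged illustration of what an epidemic dissemination model can look like (a standard SI model; the book's analytical model may well differ), let I(t) be the number of nodes holding the content among N nodes and beta the opportunistic contact rate, so dI/dt = beta * I * (N - I) / N. A small Euler-integration sketch:

```python
# Minimal SI epidemic dissemination sketch (standard model, assumed here
# for illustration). I(t) counts nodes that already hold the content.
def si_dissemination(n_nodes=1000, beta=0.5, i0=1, t_end=40.0, dt=0.01):
    infected, t, trace = float(i0), 0.0, []
    while t < t_end:
        infected += dt * beta * infected * (n_nodes - infected) / n_nodes
        t += dt
        trace.append((t, infected))
    return trace

for t, i in si_dissemination()[::400]:   # sample every 4 time units
    print(f"t={t:5.1f}  holders={i:7.1f}")
```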
This book presents original concepts and modern techniques for the specification, synthesis, optimisation and implementation of parallel logical control devices. It deals with essential problems of reconfigurable control systems like dependability, modularity and portability. Reconfigurable systems require a wider variety of design and verification options than application-specific integrated circuits. The book presents a comprehensive selection of possible design techniques. The diversity of the modelling approaches covers Petri nets, state machines and activity diagrams. The presented optimization and synthesis methods are not limited to increasing the efficiency of resource use. One of the biggest advantages of the presented methods is platform independence; FPGA devices and single-board computers are some examples of possible platforms. These issues and problems are illustrated with practical cases of complete control systems. If you expect a new look at the reconfigurable systems design process or need ideas for improving the quality of the project, this book is a good choice.
This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established, albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs and those communities where the computer resources are complementary. Consequently there is now a considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost-effective operation. Existing computer networks were examined in depth and case histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to Communications Specialists and Managers as well as those concerned with Computer Operations and Development.
Until now, there has been a lack of a complete knowledge base to fully comprehend low power (LP) design and power aware (PA) verification techniques and methodologies and deploy them all together in a real design verification and implementation project. This book is a first approach to establishing a comprehensive PA knowledge base. LP design, PA verification, and Unified Power Format (UPF) or IEEE-1801 power format standards are no longer special features. These technologies and methodologies are now part of industry-standard design, verification, and implementation flows (DVIF). Almost every chip design today incorporates some kind of low power technique, either through power management on chip, by dividing the design into different voltage areas and controlling the voltages, through PA dynamic and PA static verification, or their combination. The entire LP design and PA verification process involves thousands of techniques, tools, and methodologies, employed from the register transfer level (RTL) of design abstraction down to the synthesis or place-and-route levels of physical design. These techniques, tools, and methodologies are evolving every day through the progression of design-verification complexity and more intelligent ways of handling that complexity by engineers, researchers, and corporate engineering policy makers.
Timing issues are of growing importance for the conceptualization and design of computer-based systems. Timing may simply be essential for the correct behaviour of a system, e.g. of a controller. Even if timing is not essential for the correct behaviour of a system, there may be good reasons to introduce it in such a way that suitable timing becomes relevant for the correct behaviour of a complex system. This book is unique in presenting four algebraic theories about processes, each dealing with timing from a different point of view, in a coherent and systematic way. The timing of actions is either relative or absolute and the underlying time scale is either discrete or continuous. All presented theories are extensions of the algebra of communicating processes. The book is essential reading for researchers and advanced students interested in timing issues in the context of the design and analysis of concurrent and communicating processes.
With the development of Very-Deep Sub-Micron technologies, process variability is becoming an increasingly important issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value, but become random variables with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of the delay variability, not all circuits will now have the same performance: some will be faster and some slower. However, the slowest circuits may be so slow that they will not be appropriate for sale. On the other hand, the fastest circuits, which could be sold for a higher price, can be very leaky, and also not very appropriate for sale. A main consequence of power variability is that the power consumption of some circuits will be different than expected, reducing reliability, average life expectancy and warranty of products. Sometimes the circuits will not work at all, due to reasons associated with process variations. In the end, these effects result in lower yield and lower profitability. To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, like logic gates, storage elements, clock distribution, and any other that can be affected by process variations. The main focus of this book is storage elements.
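A hedged illustration (mine, not the book's methodology) of how delay variability turns into a yield number: sample gate delays as Gaussian random variables, sum them along a critical path, and count the fraction of sampled chips that meet a timing specification.

```python
# Monte Carlo sketch of delay variability. Assumptions for illustration:
# gate delays are independent Gaussians (real variation models are more
# involved), and all parameter values below are hypothetical.
import random

N_GATES = 20                  # gates on the critical path (assumed)
MEAN, SIGMA = 50.0, 5.0       # per-gate delay in ps (assumed)
SPEC = N_GATES * MEAN * 1.08  # timing spec: 8% margin over nominal

def sample_chip():
    return sum(random.gauss(MEAN, SIGMA) for _ in range(N_GATES))

random.seed(0)
samples = [sample_chip() for _ in range(100_000)]
yield_frac = sum(d <= SPEC for d in samples) / len(samples)
print(f"nominal={N_GATES * MEAN:.0f} ps  spec={SPEC:.0f} ps  "
      f"yield={yield_frac:.1%}")
```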
Amid recent interest in Clifford algebra for dual quaternions as a more suitable method for Computer Graphics than standard matrix algebra, this book presents dual quaternions and their associated Clifford algebras in a new light, accessible to and geared towards the Computer Graphics community. Collating all the associated formulas and theorems in one place, this book provides an extensive and rigorous treatment of dual quaternions, as well as showing how two models of Clifford algebras emerge naturally from the theory of dual quaternions. Each chapter comes complete with a set of exercises to help readers sharpen and practice their knowledge. This book is accessible to anyone with a basic knowledge of quaternion algebra and is of particular use to forward-thinking members of the Computer Graphics community.
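For readers new to the topic, here is a minimal sketch of dual quaternion arithmetic using the standard definitions (illustrative, not code from the book): a dual quaternion is q = qr + eps*qd with eps^2 = 0, so products expand as (ar + eps*ad)(br + eps*bd) = ar*br + eps*(ar*bd + ad*br).

```python
# Minimal dual quaternion sketch (standard definitions; illustrative).
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dq_mul(a, b):
    """Product of dual quaternions a = (ar, ad), b = (br, bd), eps^2 = 0."""
    (ar, ad), (br, bd) = a, b
    return (qmul(ar, br), qadd(qmul(ar, bd), qmul(ad, br)))

# A rigid translation by t is encoded (one standard convention) as
# qr = identity, qd = 0.5 * (0, t) * qr.
shift_x = ((1.0, 0.0, 0.0, 0.0), (0.0, 0.5, 0.0, 0.0))  # translate by (1,0,0)
print(dq_mul(shift_x, shift_x))  # composes to a translation by (2,0,0)
```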
You may like...
* Handbook of Neural Network Signal… by Yu Hen Hu, Jenq-Neng Hwang (Hardcover, R7,957)
* Adaptive Neural Network Control Of… by Sam Shuzhi Ge, Christopher J. Harris, … (Hardcover, R3,735)
* Mathematics of Neural Networks - Models… by Stephen W. Ellacott, John C. Mason, … (Hardcover, R5,642)
* Computational Intelligence for Movement… by Rezaul Begg, Marimuthu Palaniswami (Hardcover, R2,600)
* Connectionist Speech Recognition - A… by Herve A. Bourlard, Nelson Morgan (Hardcover, R5,777)
* Speech, Audio, Image and Biomedical… by Bhanu Prasad, S.R.M. Prasanna (Hardcover, R4,418)
* Deep Learning in Computational Mechanics… by Stefan Kollmannsberger, Davide D'Angella, … (Hardcover, R2,508)
* Advances in Learning Theory - Methods… by Johan A.K. Suykens, G. Horvath, … (Hardcover, R2,681)
* Proceedings of the 1993 Connectionist… by Michael C. Mozer, Paul Smolensky, … (Hardcover, R4,027)