Digital signal processing is an area of science and engineering that has developed rapidly in recent years. This rapid development is the result of significant advances in digital computer technology and integrated-circuit fabrication. Many of the signal processing tasks conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware. Multirate Systems: Design and Applications addresses the rapid development of multirate digital signal processing and how it is complemented by the emergence of new applications.
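As a point of reference for what "multirate" means in practice, here is a minimal sketch (mine, not from the book) of the field's basic operation, decimation: band-limit a signal with a low-pass filter, then keep every M-th sample. The filter coefficients and input signal are illustrative placeholders.

```cpp
// Decimation by M: filter first (anti-aliasing), then downsample.
#include <cstdio>
#include <vector>

// Simple FIR filter; the 3-tap low-pass below is a toy example.
std::vector<double> fir(const std::vector<double>& x, const std::vector<double>& h) {
    std::vector<double> y(x.size(), 0.0);
    for (size_t n = 0; n < x.size(); ++n)
        for (size_t k = 0; k < h.size() && k <= n; ++k)
            y[n] += h[k] * x[n - k];
    return y;
}

std::vector<double> decimate(const std::vector<double>& x,
                             const std::vector<double>& h, size_t M) {
    std::vector<double> filtered = fir(x, h);        // band-limit to avoid aliasing
    std::vector<double> y;
    for (size_t n = 0; n < filtered.size(); n += M)  // keep every M-th sample
        y.push_back(filtered[n]);
    return y;
}

int main() {
    std::vector<double> x(32, 1.0);               // toy input
    std::vector<double> h = {0.25, 0.5, 0.25};    // toy 3-tap low-pass
    auto y = decimate(x, h, 4);                   // 4x rate reduction
    std::printf("output length: %zu\n", y.size()); // 8 samples
}
```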
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula; for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
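For readers unfamiliar with the mechanism, here is a minimal C++ sketch (my illustration, not the book's) of the classic implementation: each class with virtual functions gets a compiler-generated virtual function table, and a polymorphic call site becomes an indirect jump through it.

```cpp
// The compiler builds one vtable per class; shapes[i]->area() compiles
// to an indirect call through the object's vtable pointer.
#include <cstdio>

struct Shape {
    virtual double area() const = 0;   // dispatched via the vtable
    virtual ~Shape() = default;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};
struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

double total(const Shape* const* shapes, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i]->area();      // polymorphic call site
    return sum;
}

int main() {
    Circle c(1.0); Square s(2.0);
    const Shape* v[] = {&c, &s};
    std::printf("%f\n", total(v, 2));
}
```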
In three main divisions the book covers combinational circuits, latches, and asynchronous sequential circuits. Combinational circuits have no memorising ability, while sequential circuits have such an ability to various degrees. Latches are the simplest sequential circuits, the ones with the shortest memory. The presentation is decidedly non-standard. The design of combinational circuits is discussed in an orthodox manner using normal forms and in an unorthodox manner using set-theoretical evaluation formulas relying heavily on Karnaugh maps. The latter approach allows for a new design technique called composition. Latches are covered very extensively. Their memory functions are expressed mathematically in a time-independent manner, allowing the use of (normal, non-temporal) Boolean logic in their calculation. The theory of latches is then used as the basis for calculating asynchronous circuits. Asynchronous circuits are specified in a tree representation, with each internal node of the tree representing an internal latch of the circuit and the latches specified by the tree itself. The tree specification allows solutions to formidable problems such as algorithmic state assignment, finding equivalent states non-recursively, and verifying asynchronous circuits.
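As a toy illustration of the latch material (a representative example of mine, not the book's notation), the following simulates a cross-coupled NOR (SR) latch, the simplest sequential circuit, and shows its one-bit memory.

```cpp
// Cross-coupled NOR latch: Q = NOR(R, Qn), Qn = NOR(S, Q).
// Iterating the two gates a few times lets the feedback loop settle.
#include <cstdio>

struct SRLatch {
    bool q = false, qn = true;
    void step(bool s, bool r) {
        for (int i = 0; i < 4; ++i) {      // a few iterations suffice to settle
            bool q_new  = !(r || qn);
            bool qn_new = !(s || q);
            q = q_new; qn = qn_new;
        }
    }
};

int main() {
    SRLatch latch;
    latch.step(true,  false); std::printf("set:   Q=%d\n", latch.q); // Q=1
    latch.step(false, false); std::printf("hold:  Q=%d\n", latch.q); // remembers 1
    latch.step(false, true);  std::printf("reset: Q=%d\n", latch.q); // Q=0
}
```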
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact. Consequently, optical interconnects have many desirable characteristics, e.g. high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the problems of electrical VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. It is the first book to provide current and comprehensive coverage of the field, reflect the state of the art from high-level architecture design and algorithmic points of view, and point out directions for further research and development.
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenge of scaling up multicore systems to hundreds of cores. The book surveys significant developments in the architectures of multicore processors and systems, with chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies of commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on their unique technology implications, architectures, and implementations. Contributing authors come from both the academic and industrial communities.
This book describes a methodology for dynamic power estimation using Transaction Level Modeling (TLM). The methodology exploits existing tools for RTL simulation, design synthesis, and SystemC prototyping to provide fast and accurate power estimation using Transaction Level Power Modeling (TLPM). Readers will benefit from this innovative way of evaluating power at a high level of abstraction, at an early stage of the product life cycle, decreasing the number of expensive design iterations.
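A minimal sketch of the transaction-level power-modeling idea, assuming a simple lookup-based model; the transaction types and per-transaction energies below are invented placeholders, not figures from the book.

```cpp
// Attribute a per-transaction energy cost and accumulate it during simulation.
#include <cstdio>
#include <map>
#include <string>

struct PowerModel {
    std::map<std::string, double> energy_pj;  // energy per transaction type, pJ
    double total_pj = 0.0;
    void record(const std::string& txn) { total_pj += energy_pj.at(txn); }
};

int main() {
    PowerModel pm;
    pm.energy_pj = {{"bus_read", 12.5}, {"bus_write", 15.0}, {"idle", 0.4}};
    for (int i = 0; i < 1000; ++i) pm.record("bus_read");   // toy traffic trace
    for (int i = 0; i < 200;  ++i) pm.record("bus_write");
    std::printf("estimated energy: %.1f nJ\n", pm.total_pj / 1000.0);
}
```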
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
This book discusses the trade-offs involved in designing direct RF digitization receivers for the radio frequency and digital signal processing domains. A system-level framework is developed, quantifying the relevant impairments of the signal processing chain through a comprehensive system-level analysis. Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis, including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity, and digital channel selection. The system-level framework is applied to the design of a cable multi-channel RF direct digitization receiver. Optimum RF signal conditioning and algorithms (an automatic gain control loop, an RF front-end amplitude equalization control loop) are used to relax the requirements of a 2.7 GHz 11-bit ADC.
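To make the ADC requirement concrete, a back-of-the-envelope calculation (mine, not the book's) using the standard ideal quantization-noise formula, SQNR ≈ 6.02N + 1.76 dB for a full-scale sinusoid:

```cpp
// Ideal quantization-noise-limited SNR of an N-bit converter.
#include <cstdio>

int main() {
    int bits = 11;                           // the 11-bit ADC from the text
    double sqnr_db = 6.02 * bits + 1.76;     // ideal; thermal noise ignored
    std::printf("ideal SQNR of an %d-bit ADC: %.1f dB\n", bits, sqnr_db);
    // Each bit of resolution is worth ~6 dB, which is why RF conditioning
    // (AGC, equalization) that reduces the required dynamic range can
    // relax the converter specification.
}
```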
The most exciting development in parallel computer architecture is the convergence of traditionally disparate approaches on a common machine structure. This book explains the forces behind this convergence of shared-memory, message-passing, data-parallel, and data-driven computing architectures. It then examines the design issues that are critical to all parallel architectures across the full range of modern design, covering data access, communication performance, coordination of cooperative work, and correct implementation of useful semantics. It not only describes the hardware and software techniques for addressing each of these issues but also explores how these techniques interact in the same system. Examining architecture from an application-driven perspective, it provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on an understanding of hardware-software interactions.
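The coordination of cooperative work mentioned above can be made concrete with a tiny shared-address-space sketch (my illustration, not the book's): threads do private work on common data and synchronize the shared update with a lock.

```cpp
// Shared-memory model: threads cooperate on one array, coordinating
// the final accumulation through a mutex. Build with -pthread.
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000, 1);
    long long sum = 0;
    std::mutex m;

    auto worker = [&](size_t lo, size_t hi) {
        long long local = 0;
        for (size_t i = lo; i < hi; ++i) local += data[i]; // private work
        std::lock_guard<std::mutex> g(m);                  // coordination point
        sum += local;                                      // shared update
    };

    std::thread t1(worker, 0, 500), t2(worker, 500, 1000);
    t1.join(); t2.join();
    std::printf("sum = %lld\n", sum);   // 1000
}
```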
This book provides graduate students and practitioners with knowledge of the CORBA standard and practical experience of implementing distributed systems with CORBA's Java mapping. It includes tested code examples that run immediately.
The continuous development of computer technology, supported by the VLSI revolution, stimulated research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility of obtaining significant processing power together with improved price/performance, reliability, and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition, the book can serve as a source of material for computer architecture courses at the graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related to the development of multiprocessor systems are addressed in this book. The covered range spans from electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are outside the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.
This volume comprises a collection of twenty written versions of invited as well as contributed papers presented at the conference held from 20-24 May 1996 in Beijing, China. It covers many areas of logic and the foundations of mathematics, as well as computer science. Also included is an article by M. Yasugi on the Asian Logic Conference which first appeared in Japanese, to provide a glimpse into the history and development of the series.
This Handbook is the first volume of the International Handbook on Information Systems. It offers a comprehensive overview of architectures, languages, methods, and techniques for modelling and analysing information systems in organisations. The contributions are written by authoritative figures in this area, and numerous approaches are surveyed from computer science, information systems, and business administration, among others. This volume brings together more than 30 contributions to provide a reference source for problem solvers in business, industry, and government, one that can also be used by professional researchers and graduate students. For the new edition, all contributions have been completely revised, and new papers on XML and UML have been added.
The core idea of this book is that object-oriented technology is a generic technology whose various technical aspects can be presented in a unified and consistent framework. This applies to both practical and formal aspects of object-oriented technology. Course-tested in a variety of object-oriented courses, the material is supported by numerous examples, figures, and exercises in each chapter. The approach in this book is based on typed technologies, and the core notions fit mainstream object-oriented languages such as Java and C#. The book promotes object-oriented constraints (assertions), along with their specification and verification. Object-oriented constraints apply to the specification and verification of object-oriented programs, specification of the object-oriented platform, more advanced concurrent models, database integrity constraints, and object-oriented transactions, together with their specification and verification.
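A minimal sketch of an object-oriented constraint in the spirit the blurb describes, using a plain C++ assert for a class invariant and a precondition; the class and names are hypothetical, not the book's notation.

```cpp
// A class invariant (balance_ >= 0) checked after every mutation,
// plus a precondition on the mutating operation.
#include <cassert>

class BankAccount {
    long balance_;  // cents; invariant: balance_ >= 0
    void check() const { assert(balance_ >= 0 && "invariant violated"); }
public:
    explicit BankAccount(long initial) : balance_(initial) { check(); }
    void withdraw(long amount) {
        assert(amount >= 0 && amount <= balance_);  // precondition
        balance_ -= amount;
        check();                                    // invariant re-established
    }
    long balance() const { return balance_; }
};

int main() {
    BankAccount a(100);
    a.withdraw(40);
    return a.balance() == 60 ? 0 : 1;
}
```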
Parallel Numerical Computations with Applications contains selected edited papers presented at the 1998 Frontiers of Parallel Numerical Computations and Applications Workshop, along with invited papers from leading researchers around the world. These papers cover a broad spectrum of topics on parallel numerical computation with applications, such as advanced parallel numerical and computational optimization methods, novel parallel computing techniques, numerical fluid mechanics, and other applications related to material sciences, signal and image processing, semiconductor technology, and electronic circuits and systems design. This state-of-the-art volume is an up-to-date resource for researchers in the areas of parallel and distributed computing.
This volume is the first diverse and comprehensive treatment of algorithms and architectures for the realization of neural network systems. It presents techniques and methods from numerous areas of this broad subject. The book covers major neural network structures for achieving effective systems and illustrates them with examples.
This book introduces a novel design methodology which can significantly reduce the ASIP development effort through high degrees of design automation. The key elements of this new design methodology are a powerful application profiler and an automated instruction-set customization tool which considerably lighten the burden of mapping a target application to an ASIP architecture in the initial design stages. The book includes several design case studies with real life embedded applications to demonstrate how the methodology and the tools can be used in practice for accelerating the overall ASIP design process.
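To suggest what such a profiler feeds into instruction-set customization, here is an illustrative sketch (not the book's tools): count the frequency of primitive operations in a trace, so that hot patterns become candidate custom instructions.

```cpp
// Toy profiler output: operation frequency counts from a pretend trace.
#include <cstdio>
#include <map>
#include <string>

int main() {
    std::map<std::string, long> op_count;
    // Pretend trace of a hot loop: multiply and add dominate.
    for (int i = 0; i < 10000; ++i) { ++op_count["mul"]; ++op_count["add"]; }
    for (int i = 0; i < 500; ++i)    ++op_count["branch"];
    for (auto& [op, n] : op_count) std::printf("%-7s %ld\n", op.c_str(), n);
    // A mul+add pair this hot suggests fusing them into a MAC custom
    // instruction, the kind of decision such a tool automates.
}
```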
This book describes a comprehensive approach to the synthesis and optimization of logic-in-memory computing hardware and architectures using memristive devices, which creates a firm foundation for practical applications. Readers will become familiar with a new generation of computer architectures that can potentially perform faster, as the need for communication between processor and memory is largely bypassed. The discussion includes various synthesis methodologies and optimization algorithms targeting implementation cost metrics, including latency and area overhead, as well as the reliability issues caused by limited memory lifetime. The book presents a comprehensive synthesis flow for the emerging field of logic-in-memory computing; describes automated compilation of programmable logic-in-memory computer architectures; includes several effective optimization algorithms that are also applicable to classical logic synthesis; and investigates unbalanced write traffic in logic-in-memory architectures, describing wear-leveling approaches to alleviate it.
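As a hedged sketch of the wear-leveling idea mentioned above (not the book's algorithm), the following periodically remaps logical rows so that skewed write traffic is spread across physical rows; data migration is omitted for brevity.

```cpp
// Periodically swap the logical owners of the most-worn and least-worn
// physical rows, spreading writes across the array.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int ROWS = 8;
std::array<int, ROWS> map;      // logical -> physical row
std::array<long, ROWS> wear{};  // writes seen by each physical row

void write(int logical) { ++wear[map[logical]]; }

void level() {
    int hot  = std::max_element(wear.begin(), wear.end()) - wear.begin();
    int cold = std::min_element(wear.begin(), wear.end()) - wear.begin();
    for (int l = 0; l < ROWS; ++l) {        // keep the mapping a bijection
        if (map[l] == hot)       map[l] = cold;
        else if (map[l] == cold) map[l] = hot;
    }
}

int main() {
    for (int i = 0; i < ROWS; ++i) map[i] = i;
    for (int i = 0; i < 1000; ++i) {
        write(0);                      // skewed traffic hammers logical row 0
        if (i % 100 == 99) level();    // periodic remapping spreads the wear
    }
    std::printf("max wear: %ld\n", *std::max_element(wear.begin(), wear.end()));
}
```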
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in the future than previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.
At the beginning of the 1990s, research started on how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms, researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized, and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
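For a flavor of the evolvable-hardware loop, here is a toy (1+1) evolutionary search, my illustration rather than a method from the book, that evolves a 2-input truth table until it matches XOR.

```cpp
// (1+1) evolutionary strategy: mutate one bit of the genome, keep the
// child if it is at least as fit. Genome = the 4-row truth table itself.
#include <cstdio>
#include <cstdlib>

int fitness(unsigned genome) {   // bits 0..3 = outputs for inputs 00..11
    unsigned target = 0b0110;    // XOR truth table
    int score = 0;
    for (int i = 0; i < 4; ++i)
        if (((genome >> i) & 1u) == ((target >> i) & 1u)) ++score;
    return score;
}

int main() {
    std::srand(42);
    unsigned parent = std::rand() & 0xF;
    while (fitness(parent) < 4) {
        unsigned child = parent ^ (1u << (std::rand() % 4));   // mutate one bit
        if (fitness(child) >= fitness(parent)) parent = child; // select
    }
    std::printf("evolved truth table: 0x%X\n", parent);        // 0x6 = XOR
}
```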
This book provides a comprehensive overview of state-of-the-art, data-flow-based techniques for the analysis, modeling, and mapping of concurrent applications on multiprocessors. The authors present a flow for designing embedded hard/firm real-time multiprocessor streaming applications based on data flow formalisms, with a particular focus on wireless modem applications. Architectures are described for the design tools and for run-time scheduling and resource management on such a platform.
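A minimal sketch of the data-flow formalism behind such tools, assuming invented token rates: in a synchronous data flow (SDF) graph, an actor fires only when enough tokens are available on its inputs, which is what makes buffer sizes and schedules analyzable.

```cpp
// Two-actor SDF graph: producer emits 2 tokens per firing, consumer
// needs 3 tokens per firing. Rates 2:3 keep the buffer bounded.
#include <cstdio>
#include <queue>

int main() {
    std::queue<int> channel;       // FIFO between producer and consumer
    int produced = 0, consumed = 0;

    for (int step = 0; step < 12; ++step) {
        // Producer fires unconditionally, emitting 2 tokens.
        channel.push(produced++); channel.push(produced++);
        // Consumer fires only when 3 tokens are present (its input rate).
        if (channel.size() >= 3)
            for (int i = 0; i < 3; ++i) { channel.pop(); ++consumed; }
    }
    std::printf("produced=%d consumed=%d buffered=%zu\n",
                produced, consumed, channel.size());
}
```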
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from the tremendous growth of the role that the Internet plays in business, administration and our everyday activities. This trend is going to expand even further in the context of advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development whose goal is to ease the burden of constructing reliable and maintainable interoperable information systems providing services in the global communicating environment. The topics covered in this book include: context-aware applications; integration and interoperability of distributed systems; software architectures and services for open distributed systems; management, security and quality of service issues in distributed systems; software agents and mobility; the Internet and other related problem areas. The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress, and interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
This collection of papers is the result of a workshop sponsored by NATO's Defense Research Group Panel 8 during the Fall of 1993. The workshop was held at the University of the German Armed Forces at Neubiberg (Munich), Germany, 29 September - 1 October 1993. Robert J. Seidel (U.S. Army Research Institute for the Behavioral and Social Sciences, Washington, D.C.) and Paul R. Chatelier (Executive Office of the President, Office of Science and Technology Policy, Washington, D.C.) write in the preface: we would like to thank the authors of the papers for providing an excellent coverage of this rapidly developing technology, the session chairpersons for providing excellent structure and management for each group of papers, and each session's discussants for their summaries and personal views of their session's papers. Our special thanks go to Dr. Rolfe Otte, the German Ministry of Defense's research study group member and the person responsible for our being able to hold this workshop in Munich. We are also grateful to Dr. H. Closhen of the IABG for technical and administrative assistance throughout the planning and conduct of the workshop.
"Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work" - from "Passage to India", Walt Whitman, Leaves of Grass, 1900. The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end-user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (such as twisted pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
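The "carving up" claim reduces to simple arithmetic; the channel count below is an assumed example, not a figure from the book.

```cpp
// Aggregate fiber capacity = per-wavelength rate x number of channels.
#include <cstdio>

int main() {
    int channels = 40;          // e.g. a 40-wavelength WDM system (assumed)
    double gbps_per_ch = 10.0;  // each channel at an electronics-friendly 10 Gbps
    std::printf("aggregate capacity: %.0f Gbps\n", channels * gbps_per_ch); // 400
}
```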
You may like...
Creativity in Load-Balance Schemes for… (Alberto Garcia-Robledo, Arturo Diaz Perez, …), Hardcover, R3,901
Edsger Wybe Dijkstra - His Life, Work… (Krzysztof R. Apt, Tony Hoare), Hardcover, R2,920
The System Designer's Guide to VHDL-AMS… (Peter J Ashenden, Gregory D. Peterson, …), Paperback, R2,281
Networks-on-Chip - From Implementations… (Sheng Ma, Libo Huang, …), Paperback, R1,247
Advances in Delay-Tolerant Networks… (Joel J. P. C. Rodrigues), Paperback, R4,669