This volume is the first diverse and comprehensive treatment of algorithms and architectures for the realization of neural network systems. It presents techniques and methods from numerous areas of this broad subject. The book covers the major neural network system structures for achieving effective systems, and illustrates them with examples.
This book introduces a novel design methodology which can significantly reduce the ASIP development effort through high degrees of design automation. The key elements of this new design methodology are a powerful application profiler and an automated instruction-set customization tool which considerably lighten the burden of mapping a target application to an ASIP architecture in the initial design stages. The book includes several design case studies with real life embedded applications to demonstrate how the methodology and the tools can be used in practice for accelerating the overall ASIP design process.
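The profiler and customization tool themselves are not described in this blurb; purely as a hedged, toy illustration of the general idea behind automated instruction-set customization (the trace, opcode names and pairing heuristic below are assumptions, not the book's method), one can mine a profiled instruction stream for frequently adjacent operations and flag them as candidates for fused custom instructions:

```python
from collections import Counter

def propose_fused_ops(trace, top_k=3):
    """Rank adjacent opcode pairs in a linear instruction trace as
    candidates for fused custom instructions (illustrative heuristic)."""
    pairs = Counter(zip(trace, trace[1:]))
    return pairs.most_common(top_k)

# Hypothetical trace gathered by an application profiler over a hot loop.
trace = ["load", "mul", "add", "store", "load", "mul", "add", "store",
         "load", "mul", "add", "branch"]

for (a, b), count in propose_fused_ops(trace):
    print(f"candidate fused instruction {a}+{b}: observed {count} times")
```

A real ASIP flow would of course weigh such candidates against datapath cost and timing, but the frequency mining step captures the spirit of letting the profile drive the instruction set.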
This book describes a comprehensive approach for synthesis and optimization of logic-in-memory computing hardware and architectures using memristive devices, which creates a firm foundation for practical applications. Readers will become familiar with a new generation of computer architectures that can potentially perform faster, as the need for communication between the processor and memory is largely removed. The discussion includes various synthesis methodologies and optimization algorithms targeting implementation cost metrics, including latency and area overhead, as well as the reliability issue caused by short memory lifetime. Presents a comprehensive synthesis flow for the emerging field of logic-in-memory computing; Describes automated compilation of programmable logic-in-memory computer architectures; Includes several effective optimization algorithms also applicable to classical logic synthesis; Investigates unbalanced write traffic in logic-in-memory architectures and describes wear-leveling approaches to alleviate it.
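The book's own synthesis flow is not reproduced in the blurb; as a rough, hedged illustration of the kind of primitive such flows target, the sketch below models material implication (IMPLY), a memristive logic-in-memory operation widely discussed in the literature (the Boolean-level model is an assumption for illustration, not taken from this book), and shows NAND computed "in place" with two IMPLY steps:

```python
def imply(p, q):
    """Material implication executed in memory: the result overwrites q."""
    return (not p) or q

def nand_in_memory(p, q):
    """NAND(p, q) via two IMPLY steps and one working cell initialized to 0."""
    s = False          # working memristor reset to logic 0 (FALSE operation)
    s = imply(p, s)    # s = NOT p
    s = imply(q, s)    # s = (NOT q) OR (NOT p) = NAND(p, q)
    return s

for p in (False, True):
    for q in (False, True):
        assert nand_in_memory(p, q) == (not (p and q))
print("NAND realized with IMPLY steps only")
```

Since NAND is functionally complete, any logic function can in principle be mapped onto such in-memory steps, which is where the synthesis and optimization problems the book addresses arise.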
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in the future than they have been previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.
At the beginning of the 1990s, research started on how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms, researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized, and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
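The blurb only names the approach; as a minimal, hedged sketch of how an evolutionary algorithm can "evolve" a circuit (the encoding, gate set and (1+4) strategy below are illustrative assumptions, not the book's method), the following evolves a tiny feed-forward netlist of two-input gates until it implements XOR:

```python
import random

GATES = {0: lambda a, b: a & b, 1: lambda a, b: a | b,
         2: lambda a, b: a ^ b, 3: lambda a, b: 1 - (a & b)}  # AND, OR, XOR, NAND

N_INPUTS, N_NODES = 2, 4   # circuit: 2 primary inputs feeding 4 internal gates

def random_genome():
    # Each node: (gate_id, src_a, src_b); sources are inputs or earlier nodes.
    return [(random.randrange(4),
             random.randrange(N_INPUTS + i),
             random.randrange(N_INPUTS + i)) for i in range(N_NODES)]

def evaluate(genome, a, b):
    signals = [a, b]
    for gate, sa, sb in genome:
        signals.append(GATES[gate](signals[sa], signals[sb]))
    return signals[-1]                      # last node is the circuit output

def fitness(genome):
    target = lambda a, b: a ^ b             # evolve towards XOR
    return sum(evaluate(genome, a, b) == target(a, b)
               for a in (0, 1) for b in (0, 1))

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_NODES)
    g[i] = (random.randrange(4),
            random.randrange(N_INPUTS + i),
            random.randrange(N_INPUTS + i))
    return g

random.seed(1)
parent = random_genome()
while fitness(parent) < 4:                  # (1+4) evolutionary strategy
    children = [mutate(parent) for _ in range(4)]
    parent = max(children + [parent], key=fitness)
print("evolved circuit:", parent, "fitness:", fitness(parent))
```

Real evolvable hardware replaces the truth-table fitness with measurements of a circuit configured on reconfigurable silicon, which is exactly where the methodological gaps the authors point out become pressing.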
This book provides a comprehensive overview of the state-of-the-art, data flow-based techniques for the analysis, modeling and mapping of concurrent applications onto multiprocessors. The authors present a flow for designing embedded hard/firm real-time multiprocessor streaming applications, based on data flow formalisms, with a particular focus on wireless modem applications. Architectures are described for the design tools and for the run-time scheduling and resource management of such a platform.
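The blurb does not spell out the underlying formalism; one standard calculation in synchronous dataflow (SDF) analysis, used here purely as a hedged illustration (the actor names and rates are made up, and the book's formalisms go well beyond this), is solving the balance equations for the repetition vector:

```python
from fractions import Fraction
from math import lcm   # Python 3.9+

def repetition_vector(edges):
    """Solve the SDF balance equations r[p] * prod = r[c] * cons for the
    smallest positive integer repetition vector (assumes a connected,
    consistent graph given as (producer, consumer, prod_rate, cons_rate))."""
    rates = {edges[0][0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for p, c, prod, cons in edges:
            if p in rates and c not in rates:
                rates[c] = rates[p] * prod / cons
                changed = True
            elif c in rates and p not in rates:
                rates[p] = rates[c] * cons / prod
                changed = True
            elif p in rates and c in rates:
                assert rates[p] * prod == rates[c] * cons, "inconsistent graph"
    scale = lcm(*(r.denominator for r in rates.values()))
    return {actor: int(r * scale) for actor, r in rates.items()}

# Toy modem-like chain: src -(1:2)-> filt -(3:4)-> sink
edges = [("src", "filt", 1, 2), ("filt", "sink", 3, 4)]
print(repetition_vector(edges))   # {'src': 8, 'filt': 4, 'sink': 3}
```

The repetition vector is what lets a dataflow tool reason about throughput, buffer sizes and mapping before anything runs on the multiprocessor.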
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from the tremendous growth of the role that the Internet plays in business, administration and our everyday activities. This trend is going to expand even further in the context of advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development with the goal of easing the burden of constructing reliable and maintainable interoperable information systems that provide services in the global communicating environment. The topics covered in this book include: Context-aware applications; Integration and interoperability of distributed systems; Software architectures and services for open distributed systems; Management, security and quality of service issues in distributed systems; Software agents and mobility; Internet and other related problem areas. The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
This collection of papers is the result of a workshop sponsored by NATO's Defense Research Group Panel 8 during the Fall of 1993. The workshop was held at the University of the German Armed Forces at Neubiberg (Munich), Germany, 29 September - 1 October 1993. Robert J. Seidel, U.S. Army Research Institute for the Behavioral and Social Sciences, Washington, D.C.; Paul R. Chatelier, Executive Office of the President, Office of Science and Technology Policy, Washington, D.C. We would like to thank the authors of the papers for providing an excellent coverage of this rapidly developing technology, the session chairpersons for providing excellent structure and management for each group of papers, and each session's discussants for their summaries and personal views of their session's papers. Our special thanks go to Dr. Rolfe Otte, the German Ministry of Defense's research study group member and the person responsible for our being able to have this workshop in Munich. We are also grateful to Dr. H. Closhen of the IABG for technical and administrative assistance throughout the planning and conduct of the workshop.
Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work From Passage to India! Walt Whitman, "Leaves of Grass", 1900. The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth-crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end-user to overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged from electronic media (such as twisted-pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in the local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is such a promising technique in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
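As a quick, hedged back-of-the-envelope illustration of the capacity argument above (the band width, channel spacing and per-channel rate below are typical textbook figures, not numbers taken from this book):

```python
# Rough WDM capacity arithmetic (illustrative numbers only).
band_ghz = 4400          # usable optical band, roughly the C-band (~4.4 THz)
spacing_ghz = 100        # ITU-grid-style channel spacing
rate_gbps = 10           # per-channel rate within electronic processing speeds

channels = band_ghz // spacing_ghz
print(f"{channels} channels x {rate_gbps} Gbps = {channels * rate_gbps} Gbps aggregate")
```

Even with per-channel rates held to what electronics can process, the aggregate fiber capacity reaches hundreds of gigabits per second, which is the point of carving the fiber into wavelengths.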
This book describes how engineers can make optimum use of the two industry standard analysis/design tools, SystemC and SystemC-AMS. The authors use a system-level design approach, emphasizing how SystemC and SystemC-AMS features can be exploited most effectively to analyze/understand a given electronic system and explore the design space. The approach taken by this book enables system engineers to concentrate on only those SystemC/SystemC-AMS features that apply to their particular problem, leading to more efficient design. The presentation includes numerous, realistic and complete examples, which are graded in levels of difficulty to illustrate how a variety of systems can be analyzed with these tools.
Hardware correctness is becoming ever more important in the design of computer systems. The authors introduce a powerful new approach to the design and analysis of modern computer architectures, based on mathematically well-founded formal methods, which allow for rigorous correctness proofs, accurate determination of hardware cost, and performance evaluation. This book develops, at the gate level, the complete design of a pipelined RISC processor with a fully IEEE-compliant floating-point unit. In contrast to other design approaches, the design presented here is modular, clean and complete.
Chapters in Fast Simulation of Computer Architectures cover topics such as how to collect traces, emulate instruction sets, simulate microprocessors using execution-driven techniques, evaluate memory hierarchies, apply statistical sampling to simulation, and augment simulation with performance-bound models. The chapters have been written by many of the leading researchers in the area, in a collaboration that ensures that the material is both coherent and cohesive. Audience: Of tremendous interest to practising computer architecture designers seeking timely solutions to tough evaluation problems, and to advanced upper-division undergraduate and graduate students in the field. Useful study aids are provided by the problems at the end of Chapters 2 through 8.
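None of the book's own chapters are reproduced here; as a minimal, hedged illustration of the trace-driven style of memory hierarchy evaluation the blurb mentions (the cache geometry and the synthetic trace are arbitrary assumptions), the sketch below counts hits and misses for a tiny direct-mapped cache:

```python
def simulate_cache(addresses, n_sets=64, block_bytes=64):
    """Trace-driven simulation of a tiny direct-mapped cache: feed it a list
    of byte addresses and count hits and misses."""
    tags = [None] * n_sets
    hits = misses = 0
    for addr in addresses:
        block = addr // block_bytes
        index = block % n_sets
        tag = block // n_sets
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag
    return hits, misses

# A toy trace: a sequential sweep of an 8 KiB array, done twice.
trace = list(range(0, 8192, 8)) * 2
hits, misses = simulate_cache(trace)
print(f"hits={hits} misses={misses} hit rate={hits / len(trace):.2%}")
```

Scaling this style of simulation to realistic workloads is exactly what motivates the sampling and bounding techniques the chapters cover.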
Debugging is increasingly becoming the bottleneck in chip design productivity, especially while developing modern complex integrated circuits and systems at the Electronic System Level (ESL). Today, debugging is still an unsystematic and lengthy process. Simply reporting a failure is no longer enough; it becomes ever more important not only to find errors early during development but also to provide efficient methods for isolating them. Debugging at the Electronic System Level reviews the state of the art in modeling and verification of ESL designs, with a particular focus on SystemC. It then introduces a reasoning hierarchy that combines well-known debugging techniques with entirely new ones to improve verification efficiency at the ESL. The proposed systematic debugging approach is supported by, among others, static code analysis, debug patterns, dynamic program slicing, design visualization, property generation, and automatic failure isolation. All techniques were empirically evaluated using real-world industrial designs. In summary, the approach enables a systematic search for errors in ESL designs, where the debugging techniques improve and accelerate error detection, observation, and isolation as well as design understanding.
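Of the techniques listed, dynamic program slicing is easy to illustrate in miniature; the sketch below is a hedged, language-agnostic approximation (the trace format and statements are invented for illustration, not the book's SystemC tooling): walk one recorded execution backwards and keep only the statements that the failing value actually depends on.

```python
def backward_dynamic_slice(trace, criterion_var):
    """Walk an execution trace backwards and keep only the statements that
    the final value of `criterion_var` actually depends on."""
    needed, sliced = {criterion_var}, []
    for step, defined, used in reversed(trace):
        if defined in needed:
            needed.discard(defined)
            needed.update(used)
            sliced.append(step)
    return list(reversed(sliced))

# Hypothetical trace of one run: (statement, variable defined, variables used).
trace = [
    ("a = read()",  "a",   set()),
    ("b = read()",  "b",   set()),
    ("c = a * 2",   "c",   {"a"}),
    ("d = b + 1",   "d",   {"b"}),
    ("out = c + 3", "out", {"c"}),
]
print(backward_dynamic_slice(trace, "out"))
# ['a = read()', 'c = a * 2', 'out = c + 3']  -- the 'b'/'d' statements drop out
```

Shrinking the code a debugger has to look at is what makes failure isolation at the ESL tractable.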
Regular Nanofabrics in Emerging Technologies gives a deep insight into both fabrication and design aspects of emerging semiconductor technologies that are potential candidates for the post-CMOS era. Its approach is unique in spanning different fields, offering a synergetic view to communities ranging from technologists to circuit designers and computer scientists. The book presents two technologies as potential candidates for future semiconductor devices and systems, and it shows how fabrication issues can be addressed at the design level and vice versa. Readers, whether approaching the material for academic or research purposes, will find novel material explained carefully for both experts and newcomers. Regular Nanofabrics in Emerging Technologies is a survey of post-CMOS technologies. It explains processing, circuit and system-level design for people with various backgrounds.
This book is for researchers in computer science, mathematical logic, and philosophical logic. It shows the state of the art in current investigations of process calculi with mainly two major paradigms at work: linear logic and modal logic. The combination of approaches and pointers for further integration also suggests a grander vision for the field.
Computer Networks, Architecture and Applications covers many aspects of research in modern communications networks for computing purposes.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
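The WAM-based machinery the book builds on is not shown in the blurb; purely as a hedged intuition for or-parallelism (the predicates and thread-pool mapping below are illustrative, not how a parallel Prolog engine is actually implemented), the alternative clauses for a goal can be tried concurrently instead of by sequential backtracking:

```python
from concurrent.futures import ThreadPoolExecutor

# Or-parallelism intuition: alternative clauses for the same goal are tried
# at the same time.  The "clauses" below are plain Python predicates standing
# in for Prolog alternatives.
def clause_local(city):    return city in {"Leuven", "Gent"}
def clause_partner(city):  return city in {"Madrid"}
def clause_remote(city):   return city in {"Tokyo", "Austin"}

CLAUSES = [clause_local, clause_partner, clause_remote]

def solve(goal_arg):
    """Try every alternative clause in parallel; succeed if any clause does."""
    with ThreadPoolExecutor(max_workers=len(CLAUSES)) as pool:
        results = pool.map(lambda clause: clause(goal_arg), CLAUSES)
        return any(results)

print(solve("Madrid"))   # True  -- one of the alternative branches succeeds
print(solve("Oslo"))     # False -- all branches fail
```

And-parallelism, by contrast, runs the subgoals of a single clause body concurrently, and the book covers how both forms are exploited on top of sequential WAM technology.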
This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies. The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems. Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.
Pipelined ADCs have seen phenomenal improvements in performance over the last few years. As such, when designing a pipelined ADC, a clear understanding of the design tradeoffs and of state-of-the-art techniques is required to implement today's high-performance, low-power ADCs.
This volume reports new developments in the quantum flux parametron (QFP) project. It completes a series on Josephson supercomputers that includes four earlier volumes, also published by World Scientific. QFP technology has great potential, especially in the design of computer architecture. It is regarded as being able to go beyond the horizon of current technology, and is a leading direction for the advancement of computer technology in the next decade.
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With its emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery And Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, trusted database recovery is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery be done on the fly without blocking the execution of new user transactions. Trusted Recovery And Defensive Information Warfare proposes a new model and a set of innovative algorithms for database trusted recovery. Both read-write dependency-based and semantics-based trusted recovery algorithms are proposed, as are both static and dynamic database trusted recovery algorithms. These algorithms can typically preserve much of the work done by innocent users and can satisfy a variety of attack recovery requirements of real-world database applications. Trusted Recovery And Defensive Information Warfare is suitable as a secondary text for a graduate-level course in computer science, and as a reference for researchers and practitioners in information security.
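As a hedged, toy-scale illustration of the read-write dependency idea the blurb names (the log format and transaction names are invented; the book's actual models and algorithms are far more elaborate), the sketch below computes which transactions must be undone because they read data tainted by a malicious transaction, while independent work survives:

```python
def affected_transactions(log, malicious):
    """Read-write dependency closure: a later transaction is tainted if it
    read an item last written by a malicious or already-tainted transaction."""
    tainted = set(malicious)
    last_writer = {}                       # data item -> txn that last wrote it
    for txn, reads, writes in log:         # log is in commit order
        if txn not in tainted and any(last_writer.get(x) in tainted for x in reads):
            tainted.add(txn)
        for x in writes:
            last_writer[x] = txn
    return tainted

# Toy history: T2 reads what malicious T1 wrote, T4 reads what T2 wrote;
# T3 is independent, so its work is preserved.
log = [
    ("T1", set(),      {"acct_A"}),
    ("T2", {"acct_A"}, {"acct_B"}),
    ("T3", {"acct_C"}, {"acct_C"}),
    ("T4", {"acct_B"}, {"acct_D"}),
]
print(sorted(affected_transactions(log, {"T1"})))   # ['T1', 'T2', 'T4']
```

Undoing only this closure, rather than rolling the whole database back, is what distinguishes trusted recovery from a complete rollback.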
Grid Computing: Achievements and Prospects, the 9th edited volume of the CoreGRID series, includes selected papers from the CoreGRID Integration Workshop, held April 2008 in Heraklion-Crete, Greece. This event brings together representatives of the academic and industrial communities performing Grid research in Europe. The workshop was organized in the context of the CoreGRID Network of Excellence in order to provide a forum for the presentation and exchange of views on the latest developments in grid technology research. Grid Computing: Achievements and Prospects is designed for a professional audience, composed of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
This book is the product of Research Study Group (RSG) 13 on "Human Engineering Evaluation on the Use of Colour in Electronic Displays," of Panel 8, "Defence Applications of Human and Biomedical Sciences," of the NATO Defence Research Group. RSG 13 was chaired by Heino Widdel (Germany) and consisted of Jeffrey Grossman (United States), Jean-Pierre Menu (France), Giampaolo Noja (Italy, point of contact), David Post (United States), and Jan Walraven (Netherlands). Initially, Christopher Gibson (United Kingdom) and Sharon McFaddon (Canada) participated also. Most of these representatives served previously on the NATO program committee that produced Proceedings of a Workshop on Colour Coded vs. Monochrome Displays (edited by Christopher Gibson and published by the Royal Aircraft Establishment, Farnborough, England) in 1984. RSG 13 can be regarded as a descendant of that program committee. RSG 13 was formed in 1987 for the purpose of developing and distributing guidance regarding the use of color on electronic displays. During our first meeting, we discussed the fact that, although there is a tremendous amount of information available concerning color vision, color perception, colorimetry, and color displays - much of it relevant to display design - it is scattered across numerous texts, journals, conference proceedings, and technical reports. We decided that we could best fulfill the RSG's purpose by producing a book that consolidates and summarizes this information, emphasizing those aspects that are most applicable to display design.
You may like...
Grammatical and Syntactical Approaches… by Juhyun Lee, Michael J. Ostwald (Hardcover) - R5,315 / Discovery Miles 53 150
Novel Approaches to Information Systems… by Naveen Prakash, Deepika Prakash (Hardcover) - R5,924 / Discovery Miles 59 240
The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback) - R2,281 / Discovery Miles 22 810
High-Performance Computing Using FPGAs by Wim Vanderbauwhede, Khaled Benkrid (Hardcover) - R6,662 / Discovery Miles 66 620