This monograph details several important advances in the direction of a practical proofs-as-programs paradigm, which constitutes a set of approaches to developing programs from proofs in constructive logic, with applications to industrial-scale, complex software engineering problems. One of the book's central themes is a general, abstract framework for developing new systems of program synthesis by adapting proofs-as-programs to new contexts.
High-level synthesis - also called behavioral and architectural-level synthesis - is a key design technology to realize systems on chip/package of various kinds, whether single or multi-processor, homogeneous or heterogeneous, for the embedded systems market or not. Actually, as technology progresses and systems become increasingly complex, the use of high-level abstractions and synthesis methods becomes more and more a necessity. Indeed, the productivity of designers increases with the abstraction level, as demonstrated by practices in both the software and hardware domains. The use of high-level models allows designers with systems, rather than circuit, backgrounds to be productive, thus matching the trend of an industry which is delivering an increasingly larger number of integrated systems as compared to integrated circuits. The potential of high-level synthesis lies in leaving implementation details to the design algorithms and tools, including the ability to determine the precise timing of operations, data transfers, and storage. High-level optimization, coupled with high-level synthesis, can provide designers with the optimal concurrency structure for a data flow and corresponding technological constraints, thus providing the balancing act in the trade-off between latency and resource usage. For complex systems, the design space exploration, i.e., the systematic search for the Pareto-optimal points, can only be done by automated high-level synthesis and optimization tools. Nevertheless, high-level synthesis has had a long gestation period: despite early results in the 1980s, it is still not common practice in hardware design.
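To make the design-space exploration mentioned above concrete, here is a minimal sketch (my own illustration, not material from the book) that filters a set of hypothetical candidate implementations, each characterized by latency and resource usage, down to its Pareto-optimal points: designs for which no other design is at least as good in both dimensions and strictly better in one.

```python
# Minimal sketch (illustrative, not from the book): find the Pareto-optimal
# points in a latency-vs-resource design space. A design is Pareto-optimal if
# no other design is at least as good in both metrics and better in one.

def pareto_front(designs):
    """designs: list of (name, latency, resources); returns the Pareto-optimal subset."""
    front = []
    for name, lat, res in designs:
        dominated = any(
            (l <= lat and r <= res) and (l < lat or r < res)
            for _, l, r in designs
        )
        if not dominated:
            front.append((name, lat, res))
    return front

# Hypothetical candidate implementations of the same behavioral description.
candidates = [
    ("fully-serial",   40, 2),
    ("2x-unrolled",    22, 4),
    ("pipelined",      12, 7),
    ("fully-parallel",  6, 16),
    ("wasteful",       25, 9),   # dominated: slower and larger than "pipelined"
]

for name, lat, res in pareto_front(candidates):
    print(f"{name}: latency={lat} cycles, resources={res} units")
```

Running the sketch prints the four non-dominated designs and drops the "wasteful" point, which is exactly the latency/resource trade-off curve an automated exploration tool would present to the designer.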
To the hard-pressed systems designer this book will come as a godsend. It is a hands-on guide to the many ways in which processor-based systems are designed for low-power operation. Covering a huge range of topics, and co-authored by some of the field's top practitioners, the book provides a good starting point for engineers working in the area and for research students embarking upon work on embedded systems and architectures.
This volume represents the state of the art for much current research in many-valued logics. Primary researchers in the field are among the authors. Major methodological issues of many-valued logics are treated, as well as applications of many-valued logics to reasoning with fuzzy information. Areas covered include: algebras of multiple-valued logics and their applications, proof theory and automated deduction in multiple-valued logics, fuzzy logics and their applications, and multiple-valued logics for control theory and rational belief.
New software tools and a sophisticated methodology above RTL are required to answer the challenges of designing an optimized application-specific processor (ASIP). This book offers an automated and fully integrated implementation flow and compares it to common implementation practice. It provides case studies that emphasize that neither the architectural advantages nor the design space of ASIPs is sacrificed for an automated implementation.
Practical Problems in VLSI Physical Design Automation contains problems and solutions related to various well-known algorithms used in VLSI physical design automation. Dr. Lim believes that the best way to learn new algorithms is to walk through a small example by hand. This knowledge will greatly help readers understand, analyze, and improve some of the well-known algorithms. The author has designed and taught a graduate-level course on physical CAD for VLSI at Georgia Tech. Over the years he has written his homework assignments with this focus in mind and has maintained a typeset version of the solutions.
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability is therefore closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship manifests itself in three strands of work. (1) Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. (2) Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. (3) Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts, because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets, and CCS are useful tools for specifying and verifying fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
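The replication strand above hinges on every replica perceiving concurrent events identically. The following sketch is an added simplification (not material from the book): a hypothetical sequencer assigns a single global order to concurrently submitted updates, so that independent replicas apply them identically and converge to the same state even when messages arrive in different orders.

```python
# Minimal sketch (assumed example, not from the book): a sequencer imposes one
# global order on concurrently submitted updates; every replica applies the
# updates in that order, so all replicas end in the same state.
import itertools

class Sequencer:
    def __init__(self):
        self._counter = itertools.count()

    def order(self, update):
        """Stamp an update with a global sequence number."""
        return (next(self._counter), update)

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self._pending = []

    def deliver(self, stamped_update):
        self._pending.append(stamped_update)

    def apply_all(self):
        # Apply strictly in sequence-number order, regardless of arrival order.
        for _, (key, value) in sorted(self._pending):
            self.state[key] = value

sequencer = Sequencer()
replicas = [Replica("r1"), Replica("r2")]

# Two clients issue conflicting writes "concurrently"; the sequencer totally orders them.
stamped = [sequencer.order(("x", "from-client-A")), sequencer.order(("x", "from-client-B"))]

# Replicas may receive the messages in different orders...
replicas[0].deliver(stamped[0]); replicas[0].deliver(stamped[1])
replicas[1].deliver(stamped[1]); replicas[1].deliver(stamped[0])

# ...but applying by sequence number makes their states identical.
for r in replicas:
    r.apply_all()
    print(r.name, r.state)   # both print {'x': 'from-client-B'}
```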
This book constitutes the refereed proceedings of the 7th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2011, held in Belfast, UK, in March 2011. The 40 revised papers presented, consisting of 24 full papers, 14 poster papers, and the abstracts of 2 plenary talks, were carefully reviewed and selected from 88 submissions. The topics covered are reconfigurable accelerators, design tools, reconfigurable processors, applications, device architecture, methodology and simulation, and system architecture.
In Time Division Multiple Access (TDMA), a particular user is allowed to transmit only within a given time slot of a given time frame. This technique is used in most of the second-generation digital mobile communication systems; in Europe the system is known as GSM, in the USA as DAMPS and in Japan as MPT. In Code Division Multiple Access (CDMA), every user uses a distinct code, so that users can occupy the same frequency band at the same time and still be separated on the basis of the low correlation between their codes. Systems of this kind, such as IS-95 in the USA, were also developed and standardized within the second generation of mobile communication systems. CDMA systems within a cellular network can provide higher capacity and have therefore become more and more attractive. At this moment it seems that both TDMA and CDMA remain viable candidates for application in future systems. Wireless Communications: TDMA versus CDMA provides enough information for a correct understanding of the arguments in favour of one or the other multiple access technique. The final decision about which of the two techniques should be employed will depend not only on technical arguments but also on the amount of new investment needed and on compatibility with previous systems and their infrastructures. Wireless Communications: TDMA versus CDMA comprises a collection of specially written contributions from the most prominent specialists in wireless communications in the world today and presents the major, up-to-date issues in this field. The material is grouped into four chapters: Communication Theory (covering coding and modulation), Wireless Communications, Antennas & Propagation, and Advanced Systems & Technology. The book describes the issues clearly and presents the information in such a way that informed decisions about third-generation wireless systems can be taken. It is essential reading for all researchers, engineers and managers working in the field of wireless communications.
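To illustrate the code-separation idea behind CDMA described above, here is a minimal sketch (my own illustration, not taken from the book): two users spread their bits with orthogonal codes, transmit simultaneously in the same band, and the receiver recovers each user by correlating the summed signal against that user's code. The specific codes and bit patterns are made up for the example.

```python
# Minimal sketch (not from the book): two CDMA users sharing the same channel,
# separated at the receiver by correlating against each user's spreading code.
# The codes below are illustrative orthogonal (Walsh) sequences.

CODE_A = [+1, +1, +1, +1]   # user A's spreading code
CODE_B = [+1, -1, +1, -1]   # user B's spreading code (orthogonal to A's)

def spread(bits, code):
    """Map each data bit (0/1) to +/-1 and multiply by the spreading code."""
    chips = []
    for b in bits:
        symbol = 1 if b else -1
        chips.extend(symbol * c for c in code)
    return chips

def despread(chips, code):
    """Correlate the received chips with one user's code to recover its bits."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        bits.append(1 if corr > 0 else 0)
    return bits

bits_a = [1, 0, 1]
bits_b = [0, 0, 1]

# Both users transmit at once in the same band: the channel simply adds the chips.
channel = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

print(despread(channel, CODE_A))  # [1, 0, 1] -> user A recovered
print(despread(channel, CODE_B))  # [0, 0, 1] -> user B recovered
```

In a TDMA system the same two users would instead take turns transmitting in separate time slots of a frame, which is the essential contrast the book examines.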
During the 1980s and early 1990s there was significant work in the design and implementation of hardware neurocomputers. Nevertheless, most of these efforts may be judged to have been unsuccessful: at no time have hardware neurocomputers been in wide use. This lack of success may be largely attributed to the fact that earlier work was almost entirely aimed at developing custom neurocomputers, based on ASIC technology, but for such niche areas this technology was never sufficiently developed or competitive enough to justify large-scale adoption. On the other hand, gate-arrays of the period mentioned were never large enough nor fast enough for serious artificial-neural-network (ANN) applications. But technology has now improved: the capacity and performance of current FPGAs are such that they present a much more realistic alternative. Consequently neurocomputers based on FPGAs are now a much more practical proposition than they have been in the past. This book summarizes some work towards this goal and consists of 12 papers that were selected, after review, from a number of submissions. The book is nominally divided into three parts: Chapters 1 through 4 deal with foundational issues; Chapters 5 through 11 deal with a variety of implementations; and Chapter 12 looks at the lessons learned from a large-scale project and also reconsiders design issues in light of current and future technology.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point alle.' (Jules Verne) 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell) 'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside) Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
Since 1990 the German Research Society (Deutsche Forschungsgemeinschaft, DFG) has been funding PhD courses (Graduiertenkollegs) at selected universities in the Federal Republic of Germany. TU Berlin was one of the first universities to join that new funding program of the DFG. The PhD courses have been funded over a period of 9 years, and the grant for the nine years sums up to approximately 5 million DM. Our Graduiertenkolleg on Communication-based Systems has been assigned to the Computer Science Department of TU Berlin, although it is a joint effort of all three universities in Berlin: Technische Universität (TU), Freie Universität (FU), and Humboldt Universität (HU). The Graduiertenkolleg started its program in October 1991. The professors responsible for the program are: Hartmut Ehrig (TU), Gunter Hommel (TU), Stefan Jahnichen (TU), Peter Lohr (FU), Miroslaw Malek (HU), Peter Pepper (TU), Radu Popescu-Zeletin (TU), Herbert Weber (TU), and Adam Wolisz (TU). The Graduiertenkolleg is a PhD program for highly qualified persons in the field of computer science. Twenty scholarships have been granted to fellows of the Graduiertenkolleg for a maximal period of three years. During this time the fellows take part in a selected educational program and work on their PhD theses.
Communication between engineers, their managers, suppliers and customers relies on the existence of a common understanding of the meaning of terms. While this is not normally a problem, it has proved to be a significant roadblock in the EDA industry, where terms are created as required by any number of people, multiple terms are coined for the same thing, or, even worse, the same term is used for many different things. This taxonomy identifies all of the significant terms used by the industry and provides a structural framework in which those terms can be defined and their relationship to other terms identified. The origins of this work go back to 1995, with a government-sponsored program called RASSP. When that program ended, VSIA picked up the work and developed it further, introducing three new taxonomies for additional facets of the system design and development process. Since the role of VSIA has now changed so that it no longer maintains these taxonomies, the baton is being passed on again through a group of interested people and manifested in this key reference work.
The SAMOS workshop is an international gathering of highly qualified researchers from academia and industry, sharing ideas in a 3-day lively discussion on the quiet and inspiring northern mountainside of the Mediterranean island of Samos. The workshop meeting is one of two co-located events (the other event being the IC-SAMOS). As a tradition, the workshop features presentations in the morning, while after lunch all kinds of informal discussions and nut-cracking gatherings take place. The workshop is unique in the sense that not only solved research problems are presented and discussed but also (partly) unsolved problems and in-depth topical reviews can be unleashed in the scientific arena. Consequently, the workshop provides the participants with an environment where collaboration rather than competition is fostered. The SAMOS conference and workshop were established in 2001 by Stamatis Vassiliadis with the goals outlined above in mind, and located on Samos, one of the most beautiful islands of the Aegean. The rich historical and cultural environment of the island, coupled with the intimate atmosphere and the slow pace of a small village by the sea in the middle of the Greek summer, provide a very conducive environment where ideas can be exchanged and shared freely.
Distributed and Parallel Systems: From Cluster to Grid Computing is an edited volume based on DAPSYS 2006, the 6th Austrian-Hungarian Workshop on Distributed and Parallel Systems, which is dedicated to all aspects of distributed and parallel computing. The workshop was held in conjunction with the 2nd Austrian Grid Symposium in Innsbruck, Austria, in September 2006. This book is designed for a professional audience of practitioners and researchers in industry. It is also suitable for advanced-level students in computer science.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip is steadily growing, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues from physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision about the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on packet-switched networks. The consequences may in fact be far reaching, because many of the topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture, and parallel programming, as well as traditional system-on-chip issues, will appear relevant, but within the constraints of a single-chip VLSI implementation. The book is organized in three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating systems, embedded software and applications. However, communication from the physical to the application level is a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
Making Grids Work includes selected articles from the CoreGRID Workshop on Grid Programming Models, Grid and P2P Systems Architecture, Grid Systems, Tools and Environments held at the Institute of Computer Science, Foundation for Research and Technology - Hellas in Crete, Greece, June 2007. This workshop brought together representatives of the academic and industrial communities performing Grid research in Europe. Organized within the context of the CoreGRID Network of Excellence, this workshop provided a forum for the presentation and exchange of views on the latest developments in Grid Technology research. This volume is the 7th in the series of CoreGRID books. Making Grids Work is designed for a professional audience, composed of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
Derived from industry training classes that the author teaches at the Embedded Systems Institute in Eindhoven, the Netherlands, and at Buskerud University College in Kongsberg, Norway, Systems Architecting: A Business Perspective places the processes of systems architecting in a broader context by juxtaposing the relationship of the systems architect with enterprise and management. This practical, scenario-driven guide fills an important gap, providing systems architects insight into the business processes, and especially into the processes to which they actively contribute. The book uses a simple reference model to enable understanding of the inside of a system in relation to its context. It covers the impact of tool selection and brings balance to the application of intellectual tools versus computer-aided tools. Stressing the importance of a clear strategy, the author discusses methods and techniques that facilitate the architect's contribution to the strategy process. He also gives insight into the needs and complications of harvesting synergy, insight that will help establish an effective synergy-harvesting strategy. The book also explores the often difficult relationship between managers and systems architects. Written in an approachable style, the book discusses the breadth of the human sciences and their relevance to systems architecting. It highlights the relevance of human aspects to systems architects, linking theory to practical experience when developing systems architecting competence.
This book constitutes the thoroughly refereed and revised proceedings of the 9th International Workshop on Computational Logic for Multi-Agent Systems, CLIMA IX, held in Dresden, Germany, in September 2008 and co-located with the 11th European Conference on Logics in Artificial Intelligence, JELIA 2008. The 8 full papers, presented together with two invited papers, were carefully selected from 18 submissions and passed through two rounds of reviewing and revision. Topics addressed in the regular papers include the use of automata-based techniques for verifying agents' conformance with protocols, and an approach based on the C+ action description language to provide formal specifications of social processes such as those used in business processes and social networks. Other topics include casting reasoning as planning and thus providing an analysis of reasoning with resource bounds, a discussion of the formal properties of Computational Tree Logic (CTL) extended with knowledge operators, and the use of argumentation in multi-agent negotiation. The invited contributions discuss complexity results for model-checking temporal and strategic properties of multi-agent systems, and the challenges in design and development of programming languages for multi-agent systems.
Analyzing how hacks are done, so as to stop them in the future. Reverse engineering is the process of analyzing hardware or software and understanding it, without having access to the source code or design documents. Hackers are able to reverse engineer systems and exploit what they find with scary results. Now the good guys can use the same tools to thwart these threats. Practical Reverse Engineering goes under the hood of reverse engineering for security analysts, security engineers, and system programmers, so they can learn how to use these same processes to stop hackers in their tracks. The book covers x86, x64, and ARM (the first book to cover all three); Windows kernel-mode code rootkits and drivers; virtual machine protection techniques; and much more. Best of all, it offers a systematic approach to the material, with plenty of hands-on exercises and real-world examples.
* Offers a systematic approach to understanding reverse engineering, with hands-on exercises and real-world examples
* Covers x86, x64, and advanced RISC machine (ARM) architectures as well as deobfuscation and virtual machine protection techniques
* Provides special coverage of Windows kernel-mode code (rootkits/drivers), a topic not often covered elsewhere, and explains how to analyze drivers step by step
* Demystifies topics that have a steep learning curve
* Includes a bonus chapter on reverse engineering tools
Practical Reverse Engineering: Using x86, x64, ARM, Windows Kernel, and Reversing Tools provides crucial, up-to-date guidance for a broad range of IT professionals.
We are surrounded by noise; we must be able to separate the signals we want to hear from those we do not. To overcome this 'cocktail party effect' we have developed various strategies; endowing computers with similar abilities would enable the development of devices such as intelligent hearing aids and robust speech recognition systems. This book describes a system which attempts to separate multiple, simultaneous acoustic sources using strategies based on those used by humans. It is both a review of recent work on the modelling of auditory processes, and a presentation of a new model in which acoustic signals are decomposed into elements. These structures are then re-assembled in accordance with rules of auditory organisation which operate to bind together elements that are likely to have arisen from the same source. The model is evaluated by measuring its ability to separate speech from a wide variety of other sounds, including music, phones and other speech.
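To make the grouping idea concrete, here is a small sketch of one auditory organisation rule (my own illustration, not the model presented in the book): signal "elements" labeled with onset times are bound together when their onsets coincide within a tolerance, mirroring the common-onset cue that suggests they arose from the same source.

```python
# Minimal sketch (illustrative only, not the book's model): group time-frequency
# "elements" by the common-onset cue - elements whose onsets fall within a small
# tolerance of each other are assumed to come from the same source.

def group_by_onset(elements, tolerance=0.02):
    """elements: list of (onset_seconds, frequency_hz); returns a list of groups."""
    groups = []
    for onset, freq in sorted(elements):
        for group in groups:
            if abs(group[0][0] - onset) <= tolerance:
                group.append((onset, freq))
                break
        else:
            groups.append([(onset, freq)])
    return groups

# Two simultaneous sources: harmonics sharing an onset at t=0.10 s, and a later event at t=0.50 s.
elements = [
    (0.100, 220), (0.105, 440), (0.102, 660),   # harmonics with a shared onset
    (0.500, 1000), (0.503, 2000),               # a later, unrelated event
]

for i, group in enumerate(group_by_onset(elements), start=1):
    print(f"source {i}: {group}")
```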
Going beyond isolated research ideas and design experiences, Designing Network On-Chip Architectures in the Nanoscale Era covers the foundations and design methods of network-on-chip (NoC) technology. The contributors draw on their own lessons learned to provide strong practical guidance on various design issues. Exploring the design process of the network, the first part of the book focuses on basic aspects of switch architecture and design, topology selection, and routing implementation. In the second part, contributors discuss their experiences in industry, offering a roadmap to recent products. They describe Tilera's TILE family of multicore processors, novel Intel products and research prototypes, and the TRIPS operand network (OPN). The last part reveals state-of-the-art solutions to hardware-related issues and explains how to efficiently implement the programming model at the network interface. The appendix provides the microarchitectural details of two switch architectures targeting multiprocessor systems-on-chip (MPSoCs) and chip multiprocessors (CMPs), which can be used as an experimental platform for running tests. A stepping stone to the evolution of future chip architectures, this volume provides a how-to guide for designers of current NoCs as well as designers involved with 2015 computing platforms. It cohesively brings together fundamental design issues, alternative design paradigms and techniques, and the main design tradeoffs, consistently focusing on topics most pertinent to real-world NoC designers.
In view of the incessant growth of data and knowledge and the continued diversification of information dissemination on a global scale, scalability has become a mainstream research area in computer science and information systems. The ICST INFOSCALE conference is one of the premier forums for presenting new and exciting research related to all aspects of scalability, including system architecture, resource management, data management, networking, and performance. As the fourth conference in the series, INFOSCALE 2009 was held in Hong Kong on June 10 and 11, 2009. The articles presented in this volume focus on a wide range of scalability issues and new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. More than 60 manuscripts were submitted, and the Program Committee selected 22 papers for presentation at the conference. Each submission was reviewed by three members of the Technical Program Committee.
The 21st expert meeting on Autonomous Mobile Systems (AMS 2009) is a forum that offers scientists from research and industry working in the field of autonomous mobile systems a basis for the exchange of ideas, and that encourages and initiates scientific discussion and cooperation in this research area. The contents comprise selected contributions on the topics of humanoid robots and flying machines, perception and sensing, mapping and localization, control, navigation, learning methods, and system architectures, as well as the application of autonomous mobile systems.
In the mid 1990s, researchers began applying Evolutionary Algorithms (EAs) on a kind of computer chip that could dynamically alter the functionality and physical connections of its circuits. This combination of EAs with programmable electronics (e.g., Field Programmable Gate Arrays (FPGAs) and Field Programmable Analogue Arrays (FPAAs)) spawned a new field of Evolutionary Computation (EC) called Evolvable Hardware (EH), with its first workshop, Towards Evolvable Hardware, held in Lausanne, Switzerland in October 1995. This workshop was followed by the First International Conference on Evolvable Systems: From Biology to Hardware (ICES '96), held in Tsukuba, Japan in October 1996. The second ICES was held in Lausanne, September 1998; the third was in Edinburgh, April 2000; the fourth was in Tokyo, October 2001; the fifth was in Trondheim, March 2003; the sixth was in Sitges, September 2005; and the seventh was in Wuhan, September 2007. Over the years the EH field has expanded beyond the use of EAs on simple electronic devices to encompass many different combinations of EAs and biologically inspired algorithms (BIAs) with various physical devices (or simulations of physical devices). Present research in the field of EH can be split into the two related areas of Evolvable Hardware Design (EHD) and Adaptive Hardware (AH). Evolvable Hardware Design (EHD) is the use of EAs and BIAs for creating physical devices and designs; examples of where EHD has had some success include analogue and digital electronics, antennas, MEMS chips, optical systems, as well as quantum circuits.