Integrating formal property verification (FPV) into an existing design process raises several interesting questions. Have I written enough properties? Have I written a consistent set of properties? What should I do when the FPV tool runs into capacity issues? This book develops the answers to these questions and fits them into a roadmap for formal property verification, a roadmap that shows how to glue FPV technology into the traditional validation flow. A Roadmap for Formal Property Verification explores the key issues in this powerful technology through simple examples; you do not need any background on formal methods to read most parts of this book.
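For readers new to the idea, a "property" here is a formal statement that must hold in every reachable state of a design. Below is a minimal, illustrative sketch (in Python, not a real FPV tool, and not drawn from the book) of the exhaustive reachability check at the heart of the technique; the toy model and property are invented for illustration.

```python
# Minimal sketch of exhaustive property checking over a finite-state model.
# The model and property are illustrative; real FPV tools work on RTL designs.

from collections import deque

def next_states(state):
    """Toy model: a mod-4 counter with an enable input (both input values explored)."""
    count, = state
    return {((count + en) % 4,) for en in (0, 1)}

def prop_no_overflow(state):
    """Safety property: the counter never leaves the range 0..3."""
    return 0 <= state[0] <= 3

def check(initial, prop):
    """Breadth-first search of all reachable states; return any violating state."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not prop(s):
            return s  # counterexample state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None  # property holds on all reachable states

violation = check((0,), prop_no_overflow)
print("property holds" if violation is None else f"violated at {violation}")
```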
Fundamental Problems in Computing is in honor of Professor Daniel J. Rosenkrantz, a distinguished researcher in Computer Science. Professor Rosenkrantz has made seminal contributions to many subareas of Computer Science, including formal languages and compilers, automata theory, algorithms, database systems, very large scale integrated systems, fault-tolerant computing and discrete dynamical systems. For many years, Professor Rosenkrantz served as the Editor-in-Chief of the Journal of the Association for Computing Machinery (JACM), a very prestigious archival journal in Computer Science. His contributions to Computer Science have earned him many awards, including ACM Fellowship and the ACM SIGMOD Contributions Award.
This is the official Open Group Pocket Guide for TOGAF Version 9.1 and is published in hard copy and electronic format by Van Haren Publishing on behalf of The Open Group. TOGAF(R), an Open Group Standard, is a proven enterprise architecture methodology and framework used by the world's leading organizations to improve business efficiency. It is the most prominent and reliable enterprise architecture standard, ensuring consistent standards, methods, and communication among enterprise architecture professionals. Enterprise architecture professionals fluent in TOGAF standards enjoy greater industry credibility, job effectiveness, and career opportunities. TOGAF helps practitioners avoid being locked into proprietary methods, utilize resources more efficiently and effectively, and realize a greater return on investment.
This volume contains the collected papers of the NATO Advanced Research Workshop on neurocomputing held in Les Arcs in February 1989. Various fields in neurocomputing are covered, including: - new or improved neural network algorithms: product units, recurrent multilayer networks, Boltzmann machines, Kohonen maps, growth algorithms; - topics in neural architectures: VLSI circuits, VLSI neurons, dedicated processors; - applications in speech processing: coding, link with hidden Markov Models (HMM), recognition, compression; - applications in image processing: recognition of digits, wind maps, mammographic images, compression; - models in neurobiology: cortical columns, posture, movement coordination. The book can be read at various levels. Newcomers will find in it a useful introduction to the field through the major papers, by leading scientists, and the collected bibliography at the end of the volume. Specialists will learn about some of the more advanced applications, architectures, chip designs and new algorithms. The book will interest all neural network scientists who wish to know what is happening in Europe.
This book presents as formal papers nearly all of the lectures given at the NATO Advanced Summer Institute on Computer Architecture held at St. Raphael, France, from September 12-24, 1976. It was not possible to include an important paper by G. Amdahl on the 470V/6 system, nor papers by Mme. A. Recoque on distributed processing, Messrs. A. Maison and G. Debruyne on LSI technology, and K. Bowden. Computer architecture is a very diverse and expanding subject, so it was decided to limit the scope of the School to five main subject areas. These were: specific computer architectures, language-orientated machines, associative processing, computer networks, and specification and design methods. In addition, an overall emphasis was placed on distributed and parallel processing and the need for an integrated hardware-software approach to design. Though some introductory material is included, this book is primarily intended for workers in the field of computer science and engineering who wish to update themselves on current topics in computer architecture. The main work of the School is well reflected in the collected papers, but it is impossible to convey the benefits obtained from the discussion groups and the continuous dialogue that was maintained throughout the School. The Editors would like to acknowledge with thanks the support of the NATO Scientific Affairs Division, who financed the School, and the European Research Office of the U.S. Army and the National Science Foundation for providing travel grants.
This volume contains the papers presented at the Fifth International Workshop on Database Machines. The papers cover a wide spectrum of topics on database machines and knowledge base machines. Reports of major projects, ECRC, MCC, and ICOT are included. Topics on DBM cover new database machine architectures based on vector processing and hypercube parallel processing, VLSI-oriented architecture, filter processors, sorting machines, concurrency control mechanisms for DBM, main memory databases, interconnection networks for DBM, and performance evaluation. In this workshop much more attention was given to knowledge base management as compared to the previous four workshops. Many papers discuss deductive database processing. Architectures for semantic networks, Prolog, and production systems were also proposed. We would like to express our deep thanks to all those who contributed to the success of the workshop. We would also like to express our appreciation for the valuable suggestions given to us by Prof. D. K. Hsiao, Prof. D.
This book is an edited selection of the papers presented at the International Workshop on VLSI for Artificial Intelligence, which was held at the University of Oxford in July 1988. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the Alvey Directorate, the IEE and the IEEE Computer Society for publicising the event, and to Oxford University for their active support. We are particularly grateful to David Cawley and Paula Appleby for coping with the administrative problems. Jose Delgado-Frias, Will Moore, October 1988. Programme Committee: Igor Aleksander, Imperial College (UK); Yves Bekkers, IRISA/INRIA (France); Michael Brady, University of Oxford (UK); Jose Delgado-Frias, University of Oxford (UK); Steven Krueger, Texas Instruments Inc. (USA); Simon Lavington, University of Essex (UK); Will Moore, University of Oxford (UK); Philip Treleaven, University College London (UK); Benjamin Wah, University of Illinois (USA). Prologue: Research on architectures dedicated to artificial intelligence (AI) processing has been increasing in recent years, since conventional data- or numerically-oriented architectures are not able to provide the computational power and/or functionality required. For the time being these architectures have to be implemented in VLSI technology, with its inherent constraints on speed, connectivity, fabrication yield and power. This in turn impacts on the effectiveness of the computer architecture.
We shall begin this brief section with what we consider to be its objective. It will be followed by the main outline and then concluded by a few notes as to how this work should be used. Although logical systems have been manufactured for some time, the theory behind them is quite recent. Without going into historical digressions, we simply remark that the first comprehensive ideas on the application of Boolean algebra to logical systems appeared in the 1930s. These systems appeared in telephone exchanges and were realized with relays. It was only around 1955 that many articles and books trying to systematize the study of such automata appeared. Since then, the theory has advanced regularly, but not in a way which satisfies those concerned with practical applications. What is serious is that, aside from the books by Caldwell (which dates from 1958), Marcus, and P. Naslin (in France), few works have been published which try to gather and unify results that can be used by the practising engineer; this is the objective of the present volumes.
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine grain and high bandwidth interprocessor communication capabilities in a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
High Performance Scientific and Engineering Computing: Hardware/Software Support contains selected chapters on hardware/software support for high performance scientific and engineering computing, drawn from prestigious workshops in the field such as PACT-SHPSEC, IPDPS-PDSECA and ICPP-HPSECA. This edited volume is divided into six main sections and includes invited material from prominent researchers around the world. We believe these contributed chapters and topics not only provide novel ideas, new results and state-of-the-art techniques in this field, but will also stimulate future research activities in the area of high performance computing for science and engineering applications.
Interoperability has been a requirement in NATO ever since the Alliance came into being - an obvious requirement when 16 independent nations agree to allocate national resources for the achievement of a common goal: to maintain peace. With the appearance of data processing in the command and control process of the armed forces, the requirement for interoperability expanded into the data processing field. Although problems of procedural and operational interoperability had been constantly resolved to some extent as they arose over the years, the introduction of data processing increased the problems of technical interoperability. The increase was partially due to the natural desire of nations to support their own national industries. But it was definitely also due to the lack of time and resources needed to solve the problems. During the mid- and late 1970s the International Standards Organisation (ISO) decided to develop a concept ("model") which would allow "systems" to intercommunicate. The famous ISO 7-layer model for Open Systems Interconnection (OSI) was born. The OSI model was adopted by NATO in 1983 as the basis for standardization of data communications in NATO. The very successful (first) Symposium on Interoperability of ADP Systems, held in November 1982 at the SHAPE Technical Centre (STC), gave an extensive overview of the work carried out on the lower layers of the model and revealed some intriguing ideas about the upper layers. The first Symposium accurately reflected the state of the art at that point in time.
Cache and Interconnect Architectures in Multiprocessors, Eilat, Israel, May 25-26, 1989. Michel Dubois, University of Southern California; Shreekant S. Thakkar, Sequent Computer Systems. The aim of the workshop was to bring together researchers working on cache coherence protocols for shared-memory multiprocessors with various interconnect architectures. Shared-memory multiprocessors have become viable systems for many applications. Bus-based shared-memory systems (e.g. Sequent's Symmetry, Encore's Multimax) are currently limited to 32 processors. The first goal of the workshop was to learn about the performance of applications on current cache-based systems. The second goal was to learn about new network architectures and protocols for future scalable systems. These protocols and interconnects would allow shared-memory architectures to scale beyond current limitations. The workshop had 20 speakers who talked about their current research. The discussions were lively and cordial enough to keep the participants away from the wonderful sand and sun for two days. The participants got to know each other well and were able to share their thoughts in an informal manner. The workshop was organized into several sessions. The summary of each session is described below. This book presents revisions of some of the papers presented at the workshop.
Architecture-independent programming and automatic parallelisation have long been regarded as two different means of alleviating the prohibitive costs of parallel software development. Building on recent advances in both areas, Architecture-Independent Loop Parallelisation proposes a unified approach to the parallelisation of scientific computing code. This novel approach is based on the bulk-synchronous parallel model of computation, and succeeds in automatically generating parallel code that is architecture-independent, scalable, and of analytically predictable performance.
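For context, the "analytically predictable performance" claim rests on the standard BSP cost model (due to Valiant): a superstep performing at most w local operations and communicating at most h words costs w + g*h + l, where g (gap, cost per word) and l (barrier latency) are machine parameters. A minimal sketch with invented numbers:

```python
# Minimal sketch of the standard BSP cost model: a program is a sequence of
# supersteps, each costing (local work) + g * (max words communicated) + l.
# The machine parameters and superstep figures below are illustrative only.

def bsp_cost(supersteps, g, l):
    """supersteps: list of (w, h) pairs; g: gap (cost/word); l: barrier latency."""
    return sum(w + g * h + l for w, h in supersteps)

# Hypothetical machine (g, l in flop-equivalents) and three supersteps.
print(bsp_cost([(10_000, 200), (5_000, 50), (8_000, 0)], g=4.0, l=1_000.0))
```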
The development of any software-intensive (industrial) system, e.g. critical embedded software, requires both different notations and a strong development process. Different notations are mandatory because different aspects of the software system have to be tackled. A strong development process is mandatory as well, because without a strong organization we cannot guarantee the system will meet its requirements. Unfortunately, much more is needed! The different notations that can be used must all possess at least one property: formality. The development process must also have important properties: an exhaustive coverage of the development phases, and a set of well-integrated support tools. In Computer Science it is now widely accepted that only formal notations can guarantee a perfectly defined meaning. This becomes a more and more important issue since software systems tend to be distributed, in large systems (for instance in safe public transportation systems) and in small ones (for instance numerous processors in luxury cars). Distribution increases the complexity of embedded software while safety criteria get harder to meet. On the other hand, during the past decade Software Engineering techniques have improved a lot, and are now currently used to conduct systematic and rigorous development of large software systems. UML has become the de facto standard notation for documenting Software Engineering projects. UML is supported by many CASE tools that offer graphical means for the UML notation.
It is a great pleasure to write a preface to this book. In my view, the content is unique in that it blends traditional teaching approaches with the use of mathematics and a mainstream Hardware Design Language (HDL) as formalisms to describe key concepts. The book keeps the "machine" separate from the "application" by strictly following a bottom-up approach: it starts with transistors and logic gates and only introduces assembly language programs once their execution by a processor is clearly defined. Using an HDL, Verilog in this case, rather than static circuit diagrams is a big deviation from traditional books on computer architecture. Static circuit diagrams cannot be explored in a hands-on way like the corresponding Verilog model can. In order to understand why I consider this shift so important, one must consider how computer architecture, a subject that has been studied for more than 50 years, has evolved. In the pioneering days computers were constructed by hand. An entire computer could (just about) be described by drawing a circuit diagram. Initially, such diagrams consisted mostly of analogue components before later moving toward digital logic gates. The advent of digital electronics led to more complex cells, such as half-adders, flip-flops, and decoders being recognised as useful building blocks.
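To make the contrast with static diagrams concrete, here is a minimal executable model of the half-adder cell mentioned above, written in Python as a stand-in for the book's Verilog (the function name and style are illustrative only, not from the book):

```python
# Minimal executable model of a half-adder: sum = a XOR b, carry = a AND b.
# A Verilog model plays the same role in the book; this Python stand-in just
# shows why an executable description can be explored where a diagram cannot.

def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum, carry)

# Exhaustively exercise the cell -- the hands-on exploration a diagram denies.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"a={a} b={b} -> sum={s} carry={c}")
```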
Parallel and distributed computing is one of the foremost technologies shaping the modern computing landscape. New Horizons of Parallel and Distributed Computing is a collection of self-contained chapters written by pioneering researchers to provide solutions for newly emerging problems in this field. This volume will not only provide novel ideas, work in progress and state-of-the-art techniques in the field, but will also stimulate future research activities in the area of parallel and distributed computing with applications. New Horizons of Parallel and Distributed Computing is intended for industry researchers and developers, as well as for academic researchers and advanced-level students in computer science and electrical engineering. A valuable reference work, it is also suitable as a textbook.
Digital image business applications are expanding rapidly, driven by recent advances in the technology and breakthroughs in the price and performance of hardware and firmware. This ever-increasing need for the storage and transmission of images has in turn driven the technology of image compression: image data rate reduction to save storage space and reduce transmission rate requirements. Digital image compression offers a solution to a variety of imaging applications that require a vast amount of data to represent the images, such as document imaging management systems, facsimile transmission, image archiving, remote sensing, medical imaging, entertainment, HDTV, broadcasting, education and video teleconferencing. Digital Image Compression: Algorithms and Standards introduces the reader to compression algorithms, including the CCITT facsimile standards T.4 and T.6, JBIG, CCITT H.261 and MPEG standards. The book provides comprehensive explanations of the principles and concepts of the algorithms, helping readers' understanding and allowing them to use the standards in business, product development and R&D. Audience: A valuable reference for the graduate student, researcher and engineer. May also be used as a text for a course on the subject.
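As a taste of the underlying ideas, the CCITT fax standards mentioned above begin by coding each scan line as runs of identical pixels, which are then entropy-coded. The sketch below is an illustrative run-length encoder only, not code from the book or the standards:

```python
# Minimal sketch of run-length encoding on a row of binary pixels -- the idea
# behind the run-length stage of the CCITT fax standards (which then apply
# Huffman codes to the run lengths). Illustrative only.

def run_lengths(row):
    """Encode a scan line of 0/1 pixels as (pixel value, run length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    if row:
        runs.append((row[-1], count))
    return runs

print(run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))  # [(0,3), (1,2), (0,4), (1,1)]
```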
Welcome to the proceedings of the 19th International Workshop on Power and Timing Modeling, Optimization and Simulation, PATMOS 2009. Over the years, PATMOS has evolved into an important European event, where researchers from both industry and academia discuss and investigate the emerging challenges in future and contemporary applications, design methodologies, and tools required for the development of the upcoming generations of integrated circuits and systems. PATMOS 2009 was organized by TU Delft, The Netherlands, with sponsorship by the NIRICT Design Lab and Cadence Design Systems, and technical co-sponsorship by the IEEE. Further information about the workshop is available at http://ens.ewi.tudelft.nl/patmos09. The technical program of PATMOS 2009 contained state-of-the-art technical contributions, three invited keynotes, and a special session on SystemC-AMS Extensions. The technical program focused on timing, performance, and power consumption, as well as architectural aspects with particular emphasis on modeling, design, characterization, analysis, and optimization in the nanometer era. The Technical Program Committee, with the assistance of additional expert reviewers, selected the 36 papers presented at PATMOS. The papers were organized into 7 oral sessions (with a total of 26 papers) and 2 poster sessions (with a total of 10 papers). As is customary for the PATMOS workshops, full papers were required for review, and a minimum of three reviews were received per manuscript.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing the on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip is steadily growing, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues from the physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision about the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on a packet-switched network. The consequences may in fact be far reaching, because many of the topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture, and parallel programming, as well as traditional system-on-chip issues, will appear relevant but within the constraints of a single-chip VLSI implementation. The book is organized in three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating system, embedded software and application. However, communication from the physical to the application level is a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
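To give a flavour of what systematically organized on-chip communication means, one simple scheme used in regular mesh-based NoCs is deterministic XY routing. The sketch below is illustrative only and is not drawn from the book:

```python
# Minimal sketch of deterministic XY routing on a 2D mesh -- one common way a
# regular on-chip network forwards a packet: first move along X, then along Y.
# Topology and scheme are illustrative; the book surveys many NoC designs.

def xy_route(src, dst):
    """Return the sequence of (x, y) router hops from src to dst."""
    (x, y), (dx, dy) = src, dst
    hops = []
    while x != dx:                 # X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                 # then Y dimension
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))  # [(1,0), (2,0), (2,1), (2,2), (2,3)]
```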
This new approach to an established field introduces the concept of predicate logic in order to supersede propositional logic in switching theory. The author gives new insight into the theory of latches (memory circuits) for use in undergraduate and graduate courses.
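As background for the mention of latches above, a minimal model of the cross-coupled NOR SR latch shows why such circuits have memory: the outputs feed back into the inputs, so the circuit stores a bit. The book's own treatment is formal, via predicate logic; this Python sketch is purely illustrative and not from the book:

```python
# Minimal sketch of a cross-coupled NOR SR latch -- the canonical memory
# circuit: its outputs feed back into its inputs, so it stores a bit.
# Illustrative only; the book treats latches formally with predicate logic.

def nor(a, b):
    return int(not (a or b))

def sr_latch_step(s, r, q, q_bar):
    """One evaluation of the cross-coupled NOR pair: Q = NOR(R, Q'), Q' = NOR(S, Q)."""
    return nor(r, q_bar), nor(s, q)

def settle(s, r, q=0, q_bar=1):
    """Iterate the feedback loop until it reaches a fixed point."""
    for _ in range(10):
        nq, nq_bar = sr_latch_step(s, r, q, q_bar)
        if (nq, nq_bar) == (q, q_bar):
            break
        q, q_bar = nq, nq_bar
    return q, q_bar

q, qb = settle(s=1, r=0)                  # set the latch
print(q, qb)                              # -> 1 0
q, qb = settle(s=0, r=0, q=q, q_bar=qb)   # hold: the stored bit is remembered
print(q, qb)                              # -> 1 0
```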
A new approach to explaining the existence of firms and markets, focusing on variability and coordination. It stands in contrast to the emphasis on transaction costs, and on monitoring and incentive structures, which are prominent in most of the modern literature in this field. This approach, called the variability approach, allows us to: show why both the need for communication and the coordination costs increase when the division of labor increases; explain why, while the firm relies on direction, the market does not; rigorously formulate the optimum divisionalization problem; better understand the relationship between technology and organization; show why the 'size' of the firm is limited; and refine the analysis of whether the existence of a sharable input, or the presence of an external effect, leads to the emergence of a firm. The book provides a wealth of insights for students and professionals in economics, business, law and organization.
Design for Manufacturability and Yield for Nano-Scale CMOS walks the reader through all the aspects of manufacturability and yield in a nano-CMOS process, showing how to address each aspect at the proper design step: starting with the design and layout of standard cells and how to yield-grade libraries for critical area and lithography artifacts; continuing through place and route, CMP model-based simulation and dummy-fill insertion, and mask planning, simulation and manufacturing; and concluding with statistical design and statistical timing closure of the design. It alerts the designer to the pitfalls to watch for and to the good practices that can enhance a design's manufacturability and yield. This book is a must-read for the serious practicing IC designer and an excellent primer for any graduate student intent on a career in IC design or in EDA tool development.
New Algorithms, Architectures and Applications for Reconfigurable Computing consists of a collection of contributions from the authors of some of the best papers from the Field Programmable Logic conference (FPL 2003) and the Design, Automation and Test in Europe conference (DATE 2003). In all, seventy-nine authors, from research teams from all over the world, were invited to present their latest research in the extended format permitted by this special volume. The result is a valuable book that is a unique record of the state of the art in research into field programmable logic and reconfigurable computing. The contributions are organized into twenty-four chapters and are grouped into three main categories: architectures, tools and applications. Within these three broad areas the most strongly represented themes are coarse-grained architectures; dynamically reconfigurable and multi-context architectures; tools for coarse-grained and reconfigurable architectures; and networking, security and encryption applications. Field programmable logic and reconfigurable computing are exciting research disciplines that span the traditional boundaries of electronic engineering and computer science. When the skills of both research communities are combined to address the challenges of a single research discipline, they serve as a catalyst for innovative research. The work reported in the chapters of this book captures that spirit of innovation.
This monograph details several important advances in the direction of a practical proofs-as-programs paradigm, which constitutes a set of approaches to developing programs from proofs in constructive logic, with applications to industrial-scale, complex software engineering problems. One of the book's central themes is a general, abstract framework for developing new systems of program synthesis by adapting proofs-as-programs to new contexts.