Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class are presented in the areas of multiple-target tracking in the context of military surveillance systems, experimental high energy physics, and parallel processing. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
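To make the problem class concrete, here is a hedged sketch (my example, not material from the book) of the best-known NAP, the Quadratic Assignment Problem: assign n facilities to n locations so that the total of flow times distance is minimised. Brute force works only for tiny instances, which is precisely why these problems remain hard to solve exactly.

```python
# A small Quadratic Assignment Problem instance solved by brute force.
# The flow and distance matrices are made up for illustration only.
from itertools import permutations

flow = [[0, 3, 1, 2],
        [3, 0, 1, 4],
        [1, 1, 0, 2],
        [2, 4, 2, 0]]
dist = [[0, 5, 2, 4],
        [5, 0, 3, 1],
        [2, 3, 0, 6],
        [4, 1, 6, 0]]

def qap_cost(p):
    # p[i] is the location assigned to facility i.
    n = len(p)
    return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

best = min(permutations(range(4)), key=qap_cost)   # 4! candidates; infeasible for large n
print("best assignment:", best, "cost:", qap_cost(best))
```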
This book serves a dual purpose: firstly to combine the treatment of circuits and digital electronics, and secondly, to establish a strong connection with the contemporary world of digital systems. The need for this approach arises from the observation that introducing digital electronics through a course in traditional circuit analysis is fast becoming obsolete. Our world has gone digital. Automata theory helps with the design of digital circuits such as parts of computers, telephone systems and control systems. A complete perspective is emphasized, because even the most elegant computer architecture will not function without adequate supporting circuits. The focus is on explaining the real-world implementation of complete digital systems. In doing so, the reader is prepared to immediately begin design and implementation work. This work serves as a bridge to take readers from the theoretical world to the everyday design world where solutions must be complete to be successful.
Transaction processing is an established technique for the concurrent and fault tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction-processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
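As a toy illustration of differentiating among legal schedules by their temporal behaviour (my sketch, not the book's transaction model), the following assumes a set of candidate schedules that are all legal and simply prefers the one that misses the fewest transaction deadlines; transaction names, durations and deadlines are made up.

```python
# Hypothetical transactions with completion deadlines (in time units).
transactions = {"T1": {"deadline": 5}, "T2": {"deadline": 9}, "T3": {"deadline": 7}}

def completion_times(order, duration=3):
    # Assume serial execution with a fixed duration per transaction.
    finish, t = {}, 0
    for tx in order:
        t += duration
        finish[tx] = t
    return finish

def missed_deadlines(order):
    finish = completion_times(order)
    return sum(finish[tx] > transactions[tx]["deadline"] for tx in transactions)

# All three orderings are assumed to be legal; pick the one with the best temporal behaviour.
legal_schedules = [("T1", "T2", "T3"), ("T2", "T1", "T3"), ("T1", "T3", "T2")]
best = min(legal_schedules, key=missed_deadlines)
print(best, "misses", missed_deadlines(best), "deadline(s)")
```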
Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low-cost broadband communication media are available for wide area communication as well as for the realisation of local distributed systems. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants, information and communication devices like digital TV and mobile phones, and autonomous systems like service or edutainment robots. For the design of embedded systems the designer has to tackle three major aspects: the application itself, including the man-machine interface; the (target) architecture of the system, including all functional and non-functional constraints; and the design methodology, including modelling, specification, synthesis, test and validation. The last two points are a major focus of this book. This book documents the high-quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP) and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3 several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters cover distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.
This book looks at relationships between the organisation of physical objects in space and the organisation of ideas. Historical, philosophical, psychological and architectural knowledge are united to develop an understanding of the relationship between information and its representation. Despite its potential to break the mould, digital information has relied on metaphors from a pre-digital era. In particular, architectural ideas have pervaded discussions of digital information, from the urbanisation of cyberspace in science fiction through to the adoption of spatial visualisations in the design of graphical user interfaces. This book tackles:
* the historical importance of physical places to the organisation and expression of knowledge
* the limitations of using the physical organisation of objects as the basis for systems of categorisation and taxonomy
* the emergence of digital technologies and new 20th-century conceptual understandings of knowledge and its organisation
* the concept of disconnecting the storage of information objects from their presentation and retrieval
* ideas surrounding 'semantic space'
* the realities of the types of user interface which now dominate modern computing.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), while the demand for higher speedups keeps increasing. The job of a restructuring compiler is to discover the dependence structure of a given program and the characteristics of the given machine. Much attention has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. We want to develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations, so that the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
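A minimal sketch of the matrix framework mentioned above (my illustration, not an algorithm reproduced from the series): a unimodular matrix describes an iteration-level transformation of a two-level loop nest, and the transformation is legal if every dependence distance vector remains lexicographically positive after the transformation.

```python
# Loop interchange of a two-level nest expressed as a unimodular matrix U applied
# to the iteration vector (i, j). The nest bounds and dependence are hypothetical.
import numpy as np

U = np.array([[0, 1],
              [1, 0]])                      # interchange; |det(U)| = 1, so U is unimodular

original = [(i, j) for i in range(4) for j in range(3)]          # iteration space of the nest
transformed = sorted(tuple(int(x) for x in U @ p) for p in original)
print(transformed[:4])                      # traversed j-outer, i-inner after interchange

d = np.array([1, 0])                        # a flow dependence carried by the outer loop
print("U @ d =", U @ d)                     # (0, 1): still lexicographically positive -> legal
```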
Over the last decade, advances in the semiconductor fabrication process have led to the realization of true system-on-a-chip devices. But the theories, methods and tools for designing, integrating and verifying these complex systems have not kept pace with our ability to build them. System level design is a critical component in the search for methods to develop designs more productively. However, there are a number of challenges that must be overcome in order to implement system level modeling.
This book presents some of the latest applications of new theories based on the concept of paraconsistency and correlated topics in informatics, such as pattern recognition (bioinformatics), robotics, decision-making themes, and sample size. Each chapter is self-contained, and an introductory chapter covering the theoretical basis of the logic is also included. The aim of the text is twofold: to serve as an introductory text on the theories and applications of this new logic, and as a textbook for undergraduate or graduate-level courses in AI. Today AI frequently has to cope with problems of vagueness and of incomplete and conflicting (inconsistent) information. One of the most notable formal theories for addressing them is paraconsistent (paracomplete and non-alethic) logic.
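As a hedged taste of the kind of paraconsistent annotated reasoning such applications build on (a simplification of mine, not the book's formal system), a proposition can be annotated with degrees of favorable and unfavorable evidence, so that conflicting information is represented rather than collapsed into a single truth value.

```python
# A proposition annotated with favorable evidence mu and unfavorable evidence lam,
# both in [0, 1]. The derived degrees below follow a common annotated-logic reading;
# the sensor scenario and thresholds are hypothetical.

def analyse(mu, lam):
    certainty = mu - lam          # leans true (+) or false (-)
    contradiction = mu + lam - 1  # > 0: conflicting evidence; < 0: lack of evidence
    return certainty, contradiction

# Two sensors disagree about an obstacle: strong favorable and strong unfavorable evidence.
print(analyse(0.9, 0.8))   # low certainty, high contradiction -> treat as "inconsistent"
print(analyse(0.9, 0.1))   # high certainty, low contradiction -> act as if true
print(analyse(0.1, 0.1))   # low certainty, negative contradiction -> treat as "unknown"
```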
This book is structured in a practical, example-driven manner. The use of VHDL for constructing logic synthesisers is one of the aims of the book; the second is the application of the tools to the design process. Worked examples, questions and answers are provided, together with the dos and don'ts of good practice. An appendix on logic design and the source code are available free of charge over the Internet.
This book introduces readers to various threats faced during design and fabrication by today's integrated circuits (ICs) and systems. The authors discuss key issues, including illegal manufacturing of ICs or "IC Overproduction", insertion of malicious circuits, referred to as "Hardware Trojans", which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and the on-chip infrastructure needed for secure exchange of obfuscation keys, arguably the most critical element of hardware obfuscation.
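One widely discussed gate-level flavour of hardware obfuscation is key-based logic locking; the toy sketch below (my illustration, not a method taken from the book) inserts XOR "key gates" so the circuit computes its intended function only under the correct key.

```python
# A toy logic-locking example on a three-input netlist. The circuit, key-gate
# placement and correct key are all made up for illustration.

def original_circuit(a, b, c):
    return (a & b) ^ c

def locked_circuit(a, b, c, k0, k1):
    w = (a ^ k0) & b        # key gate on input a; correct key bit k0 = 0
    return (w ^ c) ^ k1     # key gate on the output; correct key bit k1 = 0

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(locked_circuit(a, b, c, 0, 0) == original_circuit(a, b, c) for a, b, c in inputs)
assert any(locked_circuit(a, b, c, 1, 1) != original_circuit(a, b, c) for a, b, c in inputs)
print("correct key recovers the function; a wrong key corrupts it on some inputs")
```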
Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren, known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high level WAM instruction set in hardware, resulting in a CISC-style architecture.
This book provides readers with a comprehensive introduction to physical inspection-based approaches for electronics security. The authors explain the principles of physical inspection techniques including invasive, non-invasive and semi-invasive approaches and how they can be used for hardware assurance, from IC to PCB level. Coverage includes a wide variety of topics, from failure analysis and imaging, to testing, machine learning and automation, reverse engineering and attacks, and countermeasures.
Per Martin-Löf's work on the development of constructive type theory has been of huge significance in the fields of logic and the foundations of mathematics. It is also of broader philosophical significance, and has important applications in areas such as computing science and linguistics. This volume draws together contributions from researchers whose work builds on the theory developed by Martin-Löf over the last twenty-five years. As well as celebrating the anniversary of the birth of the subject it covers many of the diverse fields which are now influenced by type theory. It is an invaluable record of areas of current activity, but also contains contributions from N. G. de Bruijn and William Tait, both important figures in the early development of the subject. Also published for the first time is one of Per Martin-Löf's earliest papers.
Mining Very Large Databases with Parallel Processing addresses the problem of large-scale data mining. It is an interdisciplinary text, describing advances in the integration of three computer science areas, namely 'intelligent' (machine learning-based) data mining techniques, relational databases and parallel processing. The basic idea is to use concepts and techniques of the latter two areas - particularly parallel processing - to speed up and scale up data mining algorithms. The book is divided into three parts. The first part presents a comprehensive review of intelligent data mining techniques such as rule induction, instance-based learning, neural networks and genetic algorithms. Likewise, the second part presents a comprehensive review of parallel processing and parallel databases. Each of these parts includes an overview of commercially-available, state-of-the-art tools. The third part deals with the application of parallel processing to data mining. The emphasis is on finding generic, cost-effective solutions for realistic data volumes. Two parallel computational environments are discussed, the first excluding the use of commercial-strength DBMS, and the second using parallel DBMS servers. It is assumed that the reader has knowledge roughly equivalent to a first degree (BSc) in the exact sciences, so that (s)he is reasonably familiar with basic concepts of statistics and computer science. The primary audience for Mining Very Large Databases with Parallel Processing is industry data miners and practitioners in general, who would like to apply intelligent data mining techniques to large amounts of data. The book will also be of interest to academic researchers and postgraduate students, particularly database researchers interested in advanced, intelligent database applications, and artificial intelligence researchers interested in industrial, real-world applications of machine learning.
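The book's central idea of using parallel processing to scale data mining can be illustrated with a deliberately simple sketch (mine, not the book's algorithms): partition the data horizontally and evaluate a candidate rule on the partitions in parallel.

```python
# Count records satisfying a candidate rule across horizontal data partitions in parallel.
# The data, the rule and the number of partitions are hypothetical.
from multiprocessing import Pool

def rule_hits(partition):
    # Hypothetical rule: "age > 40 and income > 50_000".
    return sum(1 for age, income in partition if age > 40 and income > 50_000)

if __name__ == "__main__":
    data = [(30, 40_000), (55, 80_000), (62, 30_000), (45, 90_000)] * 250_000
    partitions = [data[i::4] for i in range(4)]      # 4 horizontal partitions
    with Pool(processes=4) as pool:
        total = sum(pool.map(rule_hits, partitions)) # evaluate partitions concurrently
    print("records matching rule:", total)
```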
We planned this book as a Festschrift for Smitty Stevens because we thought he might be retiring around 1974, although we knew very well that only death or deep illness would stop Smitty from doing science. Death came suddenly, unexpectedly - after a full day of skiing at Vail, Colorado on the annual trip with wife Didi to the Winter Conference on Brain Research. Smitty liked winter conferences near ski resorts and often tried to get us other psychophysicists to organize one. Every person is unique. Smitty would have said it's mainly because each of us has so many genes that two combinations just alike would be well-nigh impossible. But most of us strive in many ways to be like others, and to abide by the norms (some smaller number try even harder to be unlike other people); as a result many persons seem to lose their uniqueness, their individuality. Not Smitty. He tried neither to be like others nor to be different. He took himself as he found himself, and ascribed peculiarities, strengths, and weaknesses to his pioneering Utah forebears, in whom he took much pride. His was the true and right nonconformity. He approached each task, each problem, ready to grapple with the facts and set them into meaningful order. And if the answer he came up with was different from everyone else's, well that was too bad.
A quality-driven design and verification flow for digital systems is developed and presented in Quality-Driven SystemC Design. Two major enhancements characterize the new flow: first, dedicated verification techniques are integrated which target the different levels of abstraction; second, each verification technique is complemented by an approach to measure the achieved verification quality. The new flow distinguishes three levels of abstraction (namely system level, top level and block level) and can be incorporated into existing approaches. After reviewing the preliminary concepts, the following chapters consider the three levels for modeling and verification in detail. At each level the verification quality is measured. In summary, following the new design and verification flow yields a high overall quality.
This book introduces the relevant techniques, vocabulary, currently available hardware architectures, and programming languages which provide the basic concepts of parallel computing. In the future, we can expect to see massively parallel teraflop machines. These machines will be supported by gigabit networks, which will allow grand-challenge problems to be solved by using several supercomputers and parallel machines concurrently.
In brief summary, the following results were presented in this work:
* A linear time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel that has an acyclic dependence graph and no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
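To illustrate what "register requirements for a schedule" means (a minimal sketch of mine, not the linear-time algorithm from the book), the following computes the peak number of simultaneously live values, often called MaxLive, for a toy schedule; operation names, latencies and def-use information are hypothetical.

```python
# 'schedule' maps operation name -> issue cycle; 'values' maps each value to
# (defining op, list of consuming ops). A value is live from definition to last use.

def max_live(schedule, values, latency=1):
    events = []  # (+1 when a value becomes live, -1 after its last use)
    for _, (def_op, uses) in values.items():
        start = schedule[def_op] + latency          # value becomes available
        end = max(schedule[u] for u in uses)        # live until its last use
        events.append((start, +1))
        events.append((end + 1, -1))
    live, peak = 0, 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

schedule = {"load_a": 0, "load_b": 1, "mul": 2, "add": 3, "store": 4}
values = {
    "a": ("load_a", ["mul"]),
    "b": ("load_b", ["mul", "add"]),
    "t": ("mul", ["add"]),
    "s": ("add", ["store"]),
}
print("registers required:", max_live(schedule, values))
```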
This book is timely and discusses the effects of the pandemic. It is written for longevity, and may be useful for comparing this pandemic and the response to it with future events. The book is written for academia (the social sciences, public health, information science, emergency management, and policy fields) and offers easier informational reading for the layperson.
This book presents research in an interdisciplinary field, resulting from the vigorous and fruitful cross-pollination between traditional deontic logic and computer science. AI researchers have used deontic logic as one of the tools in modelling legal reasoning. Computer scientists have discovered that computer systems (including their interaction with other computer systems and with human agents) can often be productively modelled as norm-governed. So, for example, deontic logic has been applied by computer scientists for specifying bureaucratic systems, access and security policies, and soft design or integrity constraints, and for modelling fault tolerance. In turn, computer scientists and AI researchers have also discovered (and made it clear to the rest of us) that various formal tools (e.g. nonmonotonic, temporal and dynamic logics) developed in computer science and artificial intelligence have interesting applications to traditional issues in deontic logic. This volume presents some of the best work done in this area, with the selection at once reflecting the general interdisciplinary (and international) character that this area of research has taken on, as well as reflecting the more specific recent inter-disciplinary developments between traditional deontic logic and computer science.
This book describes the optimized implementations of several arithmetic datapath, controlpath and pseudorandom sequence generator circuits for realization of high performance arithmetic circuits targeted towards a specific family of the high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. Simultaneously, constrained placement of the circuit building blocks is performed, by placing the logically related hardware primitives in close proximity to one another by supplying relevant placement constraints in the Xilinx proprietary "User Constraints File". The book covers the implementation of a GUI-based CAD tool named FlexiCore integrated with the Xilinx Integrated Software Environment (ISE) for design automation of platform-specific high-performance arithmetic circuits from user-level specifications. This tool has been used to implement the proposed circuits, as well as hardware implementations of integer arithmetic algorithms where several of the proposed circuits are used as building blocks. Implementation results demonstrate higher performance and superior operand-width scalability for the proposed circuits, with respect to implementations derived through other existing approaches. This book will prove useful to researchers, students and professionals engaged in the domain of FPGA circuit optimization and implementation.
This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and SystemVerilog Functional Coverage. Readers will benefit from the step-by-step approach to functional hardware verification, which will enable them to uncover hidden and hard-to-find bugs, point directly to the source of the bug, provide a clean and easy way to model complex timing checks, and objectively answer the question 'have we functionally verified everything?'. Written by a professional end-user of both SystemVerilog Assertions and SystemVerilog Functional Coverage, this book explains each concept with easy-to-understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification, thereby drastically reducing their time to design and debug.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
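For readers unfamiliar with WCET analysis, here is a deliberately simplified sketch (my illustration, not the timing analyzer used in the book): on a loop-free control-flow graph with assumed per-block cycle costs, a static WCET bound is the cost of the longest path from entry to exit.

```python
# Longest-path WCET bound over a loop-free control-flow graph.
# Block names, per-block cycle costs and edges are hypothetical.
from functools import lru_cache

costs = {"entry": 2, "test": 1, "then": 8, "else": 3, "exit": 1}
succs = {"entry": ["test"], "test": ["then", "else"],
         "then": ["exit"], "else": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def wcet(block):
    # Worst-case cycles from 'block' to the end of the graph.
    tail = max((wcet(s) for s in succs[block]), default=0)
    return costs[block] + tail

print(wcet("entry"))  # 2 + 1 + 8 + 1 = 12 cycles along the worst-case path
```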
This book brings together a selection of the best papers from the thirteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in Southampton, UK in September 2010. FDL is a well established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modelling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
1) Provides a levelling approach, bringing students at all stages of programming experience to the same point
2) Focuses Python, a general-purpose language, on an engineering and scientific context
3) Uses a classroom-tested, practical approach to teaching programming
4) Teaches students and professionals how to use Python to solve engineering calculations such as differential and algebraic equations (see the sketch below)
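A minimal sketch of the kind of calculation item 4 refers to, assuming the standard NumPy/SciPy stack (the specific equations and values are my own, not examples from the book):

```python
# Solve an ordinary differential equation and an algebraic equation with SciPy.
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# ODE: damped oscillator x'' + 0.5 x' + 4 x = 0, rewritten as a first-order system.
def oscillator(t, y):
    x, v = y
    return [v, -0.5 * v - 4.0 * x]

sol = solve_ivp(oscillator, t_span=(0.0, 10.0), y0=[1.0, 0.0], dense_output=True)
print("x(10) =", sol.y[0, -1])

# Algebraic equation: find x such that x**3 - 2*x - 5 = 0.
root = fsolve(lambda x: x**3 - 2 * x - 5, x0=2.0)
print("root =", root[0])
```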