This book discusses machine learning for model order reduction, which can be used in modern VLSI design to predict the behavior of an electronic circuit via mathematical models. The author describes techniques that significantly reduce the time required for simulations involving large-scale ordinary differential equations, which can otherwise take several days or even weeks. This method, called model order reduction (MOR), reduces the complexity of the original large system and generates a reduced-order model (ROM) to represent it. Readers will gain in-depth knowledge of machine learning and model order reduction concepts, the tradeoffs involved with using various algorithms, and how to apply the techniques presented to circuit simulations and numerical analysis. Introduces machine learning algorithms at the architecture and algorithm levels of abstraction; Describes new, hybrid solutions for model order reduction; Presents machine learning algorithms in depth, but simply; Uses real, industrial applications to verify algorithms.
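To make the idea concrete, here is a minimal Python sketch of projection-based model order reduction (a generic POD/Galerkin example, not taken from the book): a large linear ODE system dx/dt = Ax + Bu is projected onto a low-dimensional basis extracted from simulation snapshots. The matrix sizes and the snapshot data below are illustrative assumptions.

```python
# Illustrative sketch (not from the book): projection-based model order
# reduction (POD/Galerkin) for a linear ODE system  dx/dt = A x + B u.
import numpy as np

def pod_reduce(A, B, snapshots, r):
    """Build a rank-r reduced-order model from state snapshots.

    snapshots: n x m matrix whose columns are sampled full-order states.
    Returns (Ar, Br, V) with Ar = V^T A V and Br = V^T B.
    """
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]          # r dominant POD modes
    Ar = V.T @ A @ V      # Galerkin-projected dynamics (r x r)
    Br = V.T @ B
    return Ar, Br, V

# Toy usage: a (likely) stable random system; the ROM state z is lifted
# back to the full space via x approximated by V @ z.
n, r = 200, 10
A = -np.eye(n) + 0.01 * np.random.randn(n, n)
B = np.random.randn(n, 1)
X = np.random.randn(n, 50)    # stand-in for simulation snapshots
Ar, Br, V = pod_reduce(A, B, X, r)
print(Ar.shape)               # (10, 10): simulate this instead of 200 ODEs
```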
Major advances in computing are occurring at an ever-increasing pace. This is especially so in the area of high performance computing (HPC), where today's supercomputer is tomorrow's workstation. High Performance Computing Systems and Applications is a record of HPCS'98, the 12th annual Symposium on High Performance Computing Systems and Applications. The quality of the conference was significantly enhanced by the high proportion of keynote and invited speakers. This book presents the latest research in HPC architecture, networking, applications and tools. Of special note are the sections on computational biology and physics. High Performance Computing Systems and Applications is suitable as a secondary text for a graduate-level course on computer architecture and networking, and as a reference for researchers and practitioners in industry.
This book presents various novel architectures for FPGA-optimized accurate and approximate operators, their detailed accuracy and performance analysis, various techniques to model the behavior of approximate operators, and thorough application-level analysis to evaluate the impact of approximations on the final output quality and performance metrics. As multiplication is one of the most commonly used and computationally expensive operations in various error-resilient applications such as digital signal and image processing and machine learning algorithms, this book particularly focuses on this operation. The book starts by elaborating on the various sources of error resilience and opportunities available for approximations on various layers of the computation stack. It then provides a detailed description of the state-of-the-art approximate computing-related works and highlights their limitations.
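As a concrete illustration of the kind of operator the book analyzes, the following sketch (a generic, assumed example, not one of the book's architectures) models a truncated approximate multiplier that drops the k low-order bits of each operand, and measures its relative error in software:

```python
# Illustrative sketch (not from the book): a truncated "approximate"
# multiplier that discards the k low-order bits of each operand before
# multiplying, trading accuracy for a smaller partial-product array.
import random

def approx_mul(a: int, b: int, k: int = 4) -> int:
    return ((a >> k) * (b >> k)) << (2 * k)

# Quick software check of the error this approximation introduces.
errs = []
for _ in range(10000):
    a, b = random.randrange(1 << 16), random.randrange(1 << 16)
    exact = a * b
    errs.append(abs(exact - approx_mul(a, b)) / max(exact, 1))
print(f"mean relative error: {sum(errs) / len(errs):.4%}")
```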
This book describes model-based development of adaptive embedded systems, which enable improved functionality using the same resources. The techniques presented facilitate design from a higher level of abstraction, focusing on the problem domain rather than on the solution domain, thereby increasing development efficiency. Models are used to capture system specifications and to implement (manually or automatically) system functionality. The authors demonstrate the real impact of adaptivity on engineering of embedded systems by providing several industrial examples of the models used in the development of adaptive embedded systems.
Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design." Subsequently he invented the concept of self-stabilization relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity. In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.
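For readers unfamiliar with the algorithm mentioned above, here is the textbook heap-based form of Dijkstra's single-source shortest path algorithm in Python (a generic sketch; the adjacency-list graph encoding is an illustrative choice, not code from the book):

```python
# Dijkstra's single-source shortest path algorithm with a binary heap.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    Returns a dict mapping each reachable node to its shortest distance."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # skip stale heap entries
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```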
A wide range of modern computer applications require the performance and flexibility of parallel and distributed systems. Better software support is required if the technical advances in these systems are to be fully exploited by commerce and industry. This involves the provision of specialised techniques and tools as well as the integration of standard software engineering methods. This book reflects current advances in this area and addresses issues of theory and practice with contributions from academia and industry. It aims to provide a focus for information on this developing field that will be of use to both researchers and practitioners.
This book provides readers with an insightful guide to the design, testing and optimization of 2.5D integrated circuits. The authors describe a set of design-for-test methods to address various challenges posed by the new generation of 2.5D ICs, including pre-bond testing of the silicon interposer, at-speed interconnect testing, built-in self-test architecture, extest scheduling, and a programmable method for low-power scan shift in SoC dies. This book covers many testing techniques that have already been used in mainstream semiconductor companies. Readers will benefit from an in-depth look at test-technology solutions that are needed to make 2.5D ICs a reality and commercially viable.
This volume reports new developments in the quantum flux parametron (QFP) project. It completes a series on Josephson supercomputers, which includes four earlier volumes, also published by World Scientific. QFP technology has great potential, especially in the design of computer architecture. It is regarded as being able to go beyond the horizon of current technology, and is a leading direction for the advancement of computer technology in the next decade.
The first Stanford MIPS project started as a special graduate course in 1981. That project produced working silicon in 1983 and a prototype for running small programs in early 1984. After that, we declared it a success and decided to move on to the next project: MIPS-X. This book is the final and complete word on MIPS-X. The initial design of MIPS-X was formulated beginning in the spring of 1984. At that time, we were unsure that RISC technology was going to have the industrial impact that we felt it should. We also knew of a number of architectural and implementation flaws in the Stanford MIPS machine. We believed that a new processor could achieve a performance level of over 10 times that of a VAX 11/780, and that a microprocessor of this performance level would convince academic skeptics of the value of the RISC approach. We were concerned that the flaws in the original RISC design might overshadow the core ideas, or that attempts to industrialize the technology would repeat the mistakes of the first generation designs. MIPS-X was targeted to eliminate the flaws in the first generation designs and to boost the performance level by over a factor of five.
This book presents a state-of-the-art technique for formal verification of continuous-time Simulink/Stateflow diagrams, featuring an expressive hybrid system modelling language, a powerful specification logic and deduction-based verification approach, and some impressive, realistic case studies. Readers will learn the HCSP/HHL-based deductive method and the use of corresponding tools for formal verification of Simulink/Stateflow diagrams. They will also gain some basic ideas about fundamental elements of formal methods such as formal syntax and semantics, and especially the common techniques applied in formal modelling and verification of hybrid systems. By investigating the successful case studies, readers will realize how to apply the pure theory and techniques to real applications, and hopefully will be inspired to start to use the proposed approach, or even develop their own formal methods in their future work.
This book presents a new threat modelling approach that specifically targets the hardware supply chain, covering security risks throughout the lifecycle of an electronic system. The authors present a case study on a new type of security attack, which combines two forms of attack mechanisms from two different stages of the IC supply chain. More specifically, this attack targets the newly developed lightweight cipher Ascon and demonstrates how it can easily be broken when its implementation is compromised with a hardware Trojan. This book also discusses emerging countermeasures, including anti-counterfeit design techniques for resource-constrained devices and anomaly detection methods for embedded systems.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software by using a top-down approach and by applying application-specific modifications on all levels of design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and the efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Used alongside the students' text, Higher National Computing 2nd edition, this pack offers a complete suite of lecturer resource material and photocopiable handouts for the compulsory core units of the new BTEC Higher Nationals in Computing and IT, including the four core units for HNC, the two additional core units required at HND, and the Core Specialist Unit 'Quality Systems', common to both certificate and diploma level.
ARIS (Architecture of Integrated Information Systems) is a unique and internationally renowned method for optimizing business processes and implementing application systems. This book enhances the proven ARIS concept by describing product flows and explaining how to classify modern software concepts. The importance of the link between business process organization and strategic management is stressed. Bridging the gap between the different approaches in business theory and information technology, the ARIS concept provides a full-circle approach - from the organizational design of business processes to IT implementation. Real-world examples of various standard software solutions, including SAP R/3, illustrate these concepts.
Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machineries required, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics: Detailed review of the mathematical foundations, including convex polyhedra and cones; Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability; Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation; A complete suite of techniques for generating SPMD code for a tiled loop nest; Up-to-date results on tile size and shape selection for reducing communication and improving parallelism; End-of-chapter references for further reading. Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
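To illustrate the core transformation the book studies, here is a minimal Python sketch of rectangular loop tiling applied to matrix multiplication (a generic example, not code from the book; the tile size T is an illustrative parameter):

```python
# Illustrative sketch (not from the book): rectangular loop tiling of a
# matrix multiplication. Each tile works on a T x T block, so the block can
# stay resident in cache (or map to one processor in an SPMD version).
import numpy as np

def tiled_matmul(A, B, T=32):
    n = A.shape[0]                     # assumes square n x n matrices
    C = np.zeros((n, n))
    for ii in range(0, n, T):          # tile (controlling) loops
        for jj in range(0, n, T):
            for kk in range(0, n, T):
                # intra-tile computation on one T x T block
                C[ii:ii+T, jj:jj+T] += A[ii:ii+T, kk:kk+T] @ B[kk:kk+T, jj:jj+T]
    return C

A = np.random.randn(128, 128)
B = np.random.randn(128, 128)
assert np.allclose(tiled_matmul(A, B), A @ B)   # same result, tiled order
```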
This book offers readers a clear guide to implementing engineering applications with FPGAs, from the mathematical description to the hardware synthesis, including discussion of VHDL programming and co-simulation issues. Coverage includes FPGA realizations such as: chaos generators that are described from their mathematical models; artificial neural networks (ANNs) to predict chaotic time series, for which a discussion of different ANN topologies is included, with different learning techniques and activation functions; random number generators (RNGs) that are realized using different chaos generators, and discussions of their maximum Lyapunov exponent values and entropies. Finally, optimized chaotic oscillators are synchronized and realized to implement a secure communication system that processes black-and-white and grey-scale images. In each application, readers will find VHDL programming guidelines and computer arithmetic issues, along with co-simulation examples with Active-HDL and Simulink. The whole book provides a practical guide to implementing a variety of engineering applications, from VHDL programming and co-simulation issues to FPGA realizations of chaos generators, ANNs for chaotic time-series prediction, RNGs and chaotic secure communications for image transmission.
This book brings together a selection of the best papers from the sixteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in September 2013 in Paris, France. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems and mixed-technology systems.
This work provides system architects with a methodology for the implementation of X.500 and LDAP-based metadirectory provisioning systems. In addition, it assists in the business process analysis that accompanies any deployment.
This book offers the first comprehensive coverage of digital design techniques to expand the power-performance tradeoff well beyond that allowed by conventional wide voltage scaling. Compared to conventional fixed designs, the approach described in this book makes digital circuits more versatile and adaptive, allowing simultaneous optimization at both ends of the power-performance spectrum. Drop-in solutions for fully automated and low-effort design based on commercial CAD tools are discussed extensively for processors, accelerators and on-chip memories, and are applicable to prominent applications (e.g., IoT, AI, wearables, biomedical). Through the higher power-performance versatility achieved by the techniques described in this book, readers can reduce design effort by reusing the same digital design instance across a wide range of applications. All concepts the authors discuss are demonstrated by dedicated testchip designs and experimental results. To make the results immediately usable by the reader, all the scripts necessary to create automated design flows based on commercial tools are provided and explained.
Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
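As a flavor of the annealing approach the book advocates, the following sketch (a generic example under assumed cost and cooling choices, not the book's algorithm) uses simulated annealing for static scheduling: tasks with given runtimes are assigned to processors so as to minimize the makespan, the finishing time of the most heavily loaded processor.

```python
# Illustrative sketch (not from the book): simulated annealing for static
# task-to-processor assignment, minimizing the makespan.
import math
import random

def anneal_schedule(runtimes, n_procs, steps=20000, t0=10.0):
    assign = [random.randrange(n_procs) for _ in runtimes]

    def makespan(a):
        loads = [0.0] * n_procs
        for task, proc in enumerate(a):
            loads[proc] += runtimes[task]
        return max(loads)

    cost = makespan(assign)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9    # linear cooling schedule
        i = random.randrange(len(runtimes))      # move one random task
        old = assign[i]
        assign[i] = random.randrange(n_procs)
        new_cost = makespan(assign)
        # Metropolis rule: always accept improvements; accept uphill moves
        # with probability exp(-delta / temp), else undo the move.
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / temp):
            assign[i] = old
        else:
            cost = new_cost
    return assign, cost

tasks = [random.uniform(1, 10) for _ in range(40)]
print(anneal_schedule(tasks, n_procs=4)[1])      # near-balanced makespan
```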
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
This book provides an overview of current hardware security primitives, their design considerations, and applications. The authors provide a comprehensive introduction to a broad spectrum (digital and analog) of hardware security primitives and their applications for securing modern devices. Readers will be enabled to understand the various methods for exploiting intrinsic manufacturing and temporal variations in silicon devices to create strong security primitives and solutions. This book will benefit SoC designers and researchers in designing secure, reliable, and trustworthy hardware. Provides guidance for security engineers on protecting their hardware designs; Covers a variety of digital and analog hardware security primitives and applications for securing modern devices; Helps readers understand PUFs, TRNGs, the silicon odometer, and cryptographic hardware design for system security.
Language, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
A hands-on introduction to FPGA prototyping and SoC design. This Second Edition of the popular book follows the same "learning-by-doing" approach to teach the fundamentals and practices of VHDL synthesis and FPGA prototyping. It uses a coherent series of examples to demonstrate the process to develop sophisticated digital circuits and IP (intellectual property) cores, integrate them into an SoC (system on a chip) framework, realize the system on an FPGA prototyping board, and verify the hardware and software operation. The examples start with simple gate-level circuits, progress gradually through the RT (register transfer) level modules, and lead to a functional embedded system with custom I/O peripherals and hardware accelerators. Although it is an introductory text, the examples are developed in a rigorous manner, and the derivations follow strict design guidelines and coding practices used for large, complex digital systems. The new edition is completely updated. It presents the hardware design in the SoC context and introduces the hardware-software co-design concept. Instead of treating examples as isolated entities, the book integrates them into a single coherent SoC platform that allows readers to explore both hardware and software "programmability" and develop complex and interesting embedded system projects. The revised edition: Adds four general-purpose IP cores: a multi-channel PWM (pulse width modulation) controller, an I2C controller, an SPI controller, and an XADC (Xilinx analog-to-digital converter) controller. Introduces a music synthesizer constructed with a DDFS (direct digital frequency synthesis) module and an ADSR (attack-decay-sustain-release) envelope generator. Expands the original video controller into a complete stream-based video subsystem that incorporates a video synchronization circuit, a test pattern generator, an OSD (on-screen display) controller, a sprite generator, and a frame buffer. Introduces basic concepts of software-hardware co-design with the Xilinx MicroBlaze MCS soft-core processor. Provides an overview of bus interconnect and interface circuits. Introduces basic embedded system software development. Suggests additional modules and peripherals for interesting and challenging projects. FPGA Prototyping by VHDL Examples, Second Edition makes a natural companion text for introductory and advanced digital design courses and embedded system courses. It also serves as an ideal self-teaching guide for practicing engineers who wish to learn more about this emerging area of interest.
This book provides a comprehensive picture of fog computing technology, including fog architectures, latency-aware application management with real-time requirements, security and privacy issues, and fog analytics, in wide-ranging application scenarios such as M2M device communication, smart homes, smart vehicles, augmented reality and transportation management. The book explores the research issues involved in applying traditional shallow machine learning and deep learning techniques to big data analytics. It surveys global research advances in extending conventional unsupervised and clustering algorithms, supervised and semi-supervised algorithms, and association rule mining algorithms to big data scenarios. Further, it discusses deep learning applications of big data analytics in computer vision and speech processing, and describes applications such as semantic indexing and data tagging. Lastly, it identifies 25 unsolved research problems and research directions in fog computing, as well as in applying deep learning techniques to big data analytics, such as dimensionality reduction in high-dimensional data and improved formulation of data abstractions, along with possible directions for their solutions.