Analog Interfacing to Embedded Microprocessors addresses the technologies and methods used in interfacing analog devices to microprocessors, providing in-depth coverage of practical control applications, op amp examples, and much more. A companion to the author's popular Embedded Microprocessor Systems: Real World Design, this new embedded systems book focuses on measurement and control of analog quantities in embedded systems that are required to interface to the real world.
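As a rough illustration of the kind of measurement task the book addresses, the sketch below converts a raw ADC count into a voltage and then into a temperature. The 12-bit resolution, 3.3 V reference and sensor transfer function are assumed values chosen for the example, not figures taken from the book.

```python
# Minimal sketch of scaling a raw ADC reading into a physical quantity.
# Assumes a 12-bit converter with a 3.3 V reference and a simple analog
# temperature sensor; all parameters are illustrative assumptions.

ADC_BITS = 12
VREF = 3.3          # volts, full-scale reference

def counts_to_volts(counts: int) -> float:
    """Convert a raw ADC count (0 .. 2^ADC_BITS - 1) to a voltage."""
    return counts * VREF / ((1 << ADC_BITS) - 1)

def volts_to_temperature(volts: float) -> float:
    """Example sensor transfer function: 10 mV per degree C, 0.5 V offset."""
    return (volts - 0.5) / 0.010

if __name__ == "__main__":
    raw = 1861                     # stand-in for a hypothetical read_adc()
    v = counts_to_volts(raw)
    print(f"{raw} counts -> {v:.3f} V -> {volts_to_temperature(v):.1f} deg C")
```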
This is the first book to focus on designing run-time reconfigurable systems on FPGAs in order to gain resource and power efficiency, as well as to improve speed. Case studies in partial reconfiguration guide readers through the FPGA jungle, straight toward a working system. The discussion of partial reconfiguration is comprehensive and practical, with models introduced together with methods for implementing the corresponding systems efficiently. Coverage includes concepts for partial module integration and corresponding communication architectures, floorplanning of the on-FPGA resources, physical implementation aspects starting from constraining primitive placement and routing all the way down to the bitstream required to configure the FPGA, and verification of reconfigurable systems.
This book describes optimized implementations of several arithmetic datapath, control-path and pseudorandom sequence generator circuits for the realization of high-performance arithmetic circuits targeted towards a specific family of high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. At the same time, constrained placement of the circuit building blocks is performed by placing logically related hardware primitives in close proximity to one another, using placement constraints supplied in the Xilinx proprietary "User Constraints File". The book covers the implementation of a GUI-based CAD tool named FlexiCore, integrated with the Xilinx Integrated Software Environment (ISE), for design automation of platform-specific high-performance arithmetic circuits from user-level specifications. This tool has been used to implement the proposed circuits, as well as hardware implementations of integer arithmetic algorithms in which several of the proposed circuits are used as building blocks. Implementation results demonstrate higher performance and superior operand-width scalability for the proposed circuits with respect to implementations derived through other existing approaches. This book will prove useful to researchers, students and professionals engaged in the domain of FPGA circuit optimization and implementation.
This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference and guide to design methodologies and layout automation techniques, this book provides a foundation for addressing the physical design challenges of interconnect in advanced integrated circuits.
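For orientation, the first-order quantities involved in such optimizations look roughly like the standard textbook expressions below; these are generic formulas, not necessarily the exact scaling-dependent models derived in the book.

```latex
% First-order (Elmore) delay of a wire of length L with per-unit-length
% resistance r and capacitance c, driver resistance R_d and load C_L:
T_{wire} \approx R_d\,(cL + C_L) + rL\left(\frac{cL}{2} + C_L\right)

% Dynamic switching power of a net with activity factor \alpha,
% total capacitance C, supply voltage V_{dd} and clock frequency f:
P_{dyn} = \alpha\, C\, V_{dd}^{2}\, f
```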
One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.
Design technology that addresses the new and vast problem of heterogeneous embedded systems design while remaining compatible with standard "More Moore" flows, i.e. capable of simultaneously handling both silicon complexity and system complexity, represents one of the most important challenges facing the semiconductor industry today, and will remain so for several years to come. Over the years, and through its spectacular and unique evolution, the micro-electronics industry has built its own specific design methods, focused mainly on managing complexity through the establishment of abstraction levels; the emergence of device heterogeneity, however, requires new approaches that enable the satisfactory design of physically heterogeneous embedded systems and their widespread deployment. Heterogeneous Embedded Systems, compiled largely from contributions by participants of past editions of the Winter School on Heterogeneous Embedded Systems Design Technology (FETCH), offers a necessarily broad and holistic overview of design techniques used to tackle the various facets of heterogeneity: technology and opportunities at the physical level, signal representations and different abstraction levels, architectures and components based on hardware and software, across all the main phases of design (modeling, validation with multiple models of computation, synthesis and optimization). It concentrates on the specific issues at the interfaces, and is divided into two main parts. The first part examines mainly theoretical issues and focuses on the modeling, validation and design techniques themselves. The second part illustrates the use of these methods in various design contexts at the forefront of new technology and architectural developments.
This text provides a systematic guide describing practical approaches to planning, developing, and implementing successful ITS architectures in regional settings. Based on the principles and methods used to develop the US national ITS architecture, the authors provide readers with a solid understanding of each critical step involved in the regional ITS deployment process. The text also explores the key ingredients that make up an effective ITS mission statement, how to choose the best ITS technologies for a specific application, and the components involved in developing an appropriate logical and physical architecture.
This book covers essential topics in the architecture and design of Internet of Things (IoT) systems. The authors provide state-of-the-art information that enables readers to design systems that balance functionality, bandwidth, and power consumption, while providing secure and safe operation in the face of a wide range of threat and fault models. Coverage includes essential topics in system modeling, edge/cloud architectures, and security and safety, including cyberphysical systems and industrial control systems.
Grounded in the user-centered design movement, this book offers a broad consideration of how our civilization has evolved its technical infrastructure for human purposes, to help us make sense of the contemporary world of information infrastructure and online existence. The author incorporates historical, cultural and aesthetic approaches to situating information and its underlying technologies across time in the collective, lived experiences of humanity. In today's digital information world, user experience is vital to the success of any product or service. Yet as the user population expands to include us all, designing for people who vary in skills, abilities, preferences and backgrounds is challenging. This book provides an integrated understanding of users, and of the methods that have evolved to identify usability challenges, to facilitate cohesive and earlier solutions. It treats information creation and use as a core human behavior based on acts of representation and recording that humans have always practiced. It suggests that the traditional ways of studying information use, with their origins in distinct layers of social science theories and models, limit our understanding of what it means to be an information user and hamper our efforts at being truly user-centric in design. Instead, the book offers a way of integrating the knowledge base to support a richer view of use and users in design education and evaluation. Understanding Users is aimed at those studying or practicing user-centered design and anyone interested in learning how people might be better integrated in the design of new technologies to augment human capabilities and experiences.
For courses in engineering and technical management. System architecture is the study of early decision making in complex systems. This text teaches how to capture experience and analysis about early system decisions, and how to choose architectures that meet stakeholder needs, integrate easily, and evolve flexibly. With case studies written by leading practitioners, from hybrid cars to communications networks to aircraft, this text showcases the science and art of system architecture.
Major advances in computing are occurring at an ever-increasing pace. This is especially so in the area of high performance computing (HPC), where today's supercomputer is tomorrow's workstation. High Performance Computing Systems and Applications is a record of HPCS'98, the 12th annual Symposium on High Performance Computing Systems and Applications. The quality of the conference was significantly enhanced by the high proportion of keynote and invited speakers. This book presents the latest research in HPC architecture, networking, applications and tools. Of special note are the sections on computational biology and physics. High Performance Computing Systems and Applications is suitable as a secondary text for a graduate-level course on computer architecture and networking, and as a reference for researchers and practitioners in industry.
Real-time and embedded systems are essential to our lives, from controlling car engines and regulating traffic lights to monitoring plane takeoffs and landings to providing up-to-the-minute stock quotes. Bringing together researchers from both academia and industry, the Handbook of Real-Time and Embedded Systems provides comprehensive coverage of the most advanced and timely topics in the field. The book focuses on several major areas of real-time and embedded systems. It examines real-time scheduling and resource management issues and explores the programming languages, paradigms, operating systems, and middleware for these systems. The handbook also presents challenges encountered in wireless sensor networks and offers ways to solve these problems. It addresses key matters associated with real-time data services and reviews the formalisms, methods, and tools used in real-time and embedded systems. In addition, the book considers how these systems are applied in various fields, including adaptive cruise control in the automobile industry. With its essential material and integration of theory and practice, the Handbook of Real-Time and Embedded Systems facilitates advancements in this area so that the services we rely on can continue to operate successfully.
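As a small taste of the scheduling analysis such a handbook covers, the sketch below applies the classic Liu and Layland rate-monotonic utilization bound to a made-up periodic task set; this is a generic textbook test, not a method specific to this book.

```python
# Liu & Layland rate-monotonic schedulability test (sufficient condition)
# for independent periodic tasks with fixed priorities. Task parameters
# below are invented for illustration.

def rm_utilization_bound(n: int) -> float:
    """Liu & Layland bound: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """tasks: list of (execution_time C, period T). Sufficient test only:
    a task set may still be schedulable even if this returns False."""
    u = sum(c / t for c, t in tasks)
    return u <= rm_utilization_bound(len(tasks))

if __name__ == "__main__":
    tasks = [(1.0, 4.0), (1.0, 5.0), (2.0, 10.0)]   # hypothetical task set
    u = sum(c / t for c, t in tasks)
    print(f"U = {u:.3f}, bound = {rm_utilization_bound(len(tasks)):.3f}, "
          f"schedulable: {rm_schedulable(tasks)}")
```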
This book describes model-based development of adaptive embedded systems, which enable improved functionality using the same resources. The techniques presented facilitate design from a higher level of abstraction, focusing on the problem domain rather than on the solution domain, thereby increasing development efficiency. Models are used to capture system specifications and to implement (manually or automatically) system functionality. The authors demonstrate the real impact of adaptivity on engineering of embedded systems by providing several industrial examples of the models used in the development of adaptive embedded systems.
This book provides readers with an insightful guide to the design, testing and optimization of 2.5D integrated circuits. The authors describe a set of design-for-test methods to address various challenges posed by the new generation of 2.5D ICs, including pre-bond testing of the silicon interposer, at-speed interconnect testing, built-in self-test architecture, extest scheduling, and a programmable method for low-power scan shift in SoC dies. This book covers many testing techniques that have already been used in mainstream semiconductor companies. Readers will benefit from an in-depth look at test-technology solutions that are needed to make 2.5D ICs a reality and commercially viable.
The first Stanford MIPS project started as a special graduate course in 1981. That project produced working silicon in 1983 and a prototype for running small programs in early 1984. After that, we declared it a success and decided to move on to the next project, MIPS-X. This book is the final and complete word on MIPS-X. The initial design of MIPS-X was formulated beginning in the spring of 1984. At that time, we were unsure that RISC technology was going to have the industrial impact that we felt it should. We also knew of a number of architectural and implementation flaws in the Stanford MIPS machine. We believed that a new processor could achieve a performance level of over 10 times a VAX 11/780, and that a microprocessor of this performance level would convince academic skeptics of the value of the RISC approach. We were concerned that the flaws in the original RISC design might overshadow the core ideas, or that attempts to industrialize the technology would repeat the mistakes of the first-generation designs. MIPS-X was targeted to eliminate the flaws in the first-generation designs and to boost the performance level by over a factor of five.
This book presents a state-of-the-art technique for formal verification of continuous-time Simulink/Stateflow diagrams, featuring an expressive hybrid system modelling language, a powerful specification logic and deduction-based verification approach, and some impressive, realistic case studies. Readers will learn the HCSP/HHL-based deductive method and the use of corresponding tools for formal verification of Simulink/Stateflow diagrams. They will also gain some basic ideas about fundamental elements of formal methods such as formal syntax and semantics, and especially the common techniques applied in formal modelling and verification of hybrid systems. By investigating the successful case studies, readers will realize how to apply the pure theory and techniques to real applications, and hopefully will be inspired to start to use the proposed approach, or even develop their own formal methods in their future work.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software, following a top-down approach and applying application-specific modifications at all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
This book offers readers a clear guide to implementing engineering applications with FPGAs, from the mathematical description to the hardware synthesis, including discussion of VHDL programming and co-simulation issues. Coverage includes FPGA realizations such as: chaos generators that are described from their mathematical models; artificial neural networks (ANNs) to predict chaotic time series, for which a discussion of different ANN topologies is included, with different learning techniques and activation functions; and random number generators (RNGs) that are realized using different chaos generators, with discussions of their maximum Lyapunov exponent values and entropies. Finally, optimized chaotic oscillators are synchronized and realized to implement a secure communication system that processes black-and-white and grey-scale images. In each application, readers will find VHDL programming guidelines and computer arithmetic issues, along with co-simulation examples with Active-HDL and Simulink. The whole book provides a practical guide to implementing a variety of engineering applications, from VHDL programming and co-simulation issues to FPGA realizations of chaos generators, ANNs for chaotic time-series prediction, RNGs and chaotic secure communications for image transmission.
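To give a flavour of how a chaos generator can be described from its mathematical model and characterized by its maximum Lyapunov exponent, the sketch below uses the discrete-time logistic map as a simple stand-in for the continuous-time oscillators the book realizes in hardware; the parameters are illustrative only.

```python
# Toy chaos generator: logistic map x_{n+1} = r * x_n * (1 - x_n),
# with an estimate of its Lyapunov exponent as the long-run average
# of log|f'(x)|, where f'(x) = r * (1 - 2x).
import math

def lyapunov_logistic(r: float, x0: float = 0.4,
                      n: int = 100_000, burn_in: int = 1_000) -> float:
    x = x0
    acc = 0.0
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)
        if i >= burn_in:                      # skip the transient
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

if __name__ == "__main__":
    for r in (3.5, 3.9):                      # periodic vs. chaotic regime
        print(f"r = {r}: lambda ~ {lyapunov_logistic(r):+.3f}")
```

A positive exponent (as for r = 3.9) indicates chaotic behaviour, which is the property exploited when such generators feed RNGs or secure communication schemes.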
Used alongside the students' text, Higher National Computing 2nd edition, this pack offers a complete suite of lecturer resource material and photocopiable handouts for the compulsory core units of the new BTEC Higher Nationals in Computing and IT, including the four core units for HNC, the two additional core units required at HND, and the Core Specialist Unit 'Quality Systems', common to both certificate and diploma level.
ARIS (Architecture of Integrated Information Systems) is a unique and internationally renowned method for optimizing business processes and implementing application systems. This book enhances the proven ARIS concept by describing product flows and explaining how to classify modern software concepts. The importance of the link between business process organization and strategic management is stressed. Bridging the gap between the different approaches in business theory and information technology, the ARIS concept provides a full-circle approach - from the organizational design of business processes to IT implementation. Real-world examples of various standard software solutions, including SAP R/3, illustrate these concepts.
This book brings together a selection of the best papers from the sixteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in September 2013 in Paris, France. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems and mixed-technology systems.
This work provides system architects with a methodology for the implementation of X.500 and LDAP-based metadirectory provisioning systems. In addition, this work assists in the business process analysis that accompanies any deployment.
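As a hypothetical illustration of the kind of provisioning step such a metadirectory performs, the sketch below adds and then looks up a directory entry using the third-party Python ldap3 package; the server, credentials and directory layout are invented for the example and are not part of this work.

```python
# Minimal LDAP provisioning sketch using the ldap3 package.
# Host, credentials and DIT layout are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Provision a person entry, e.g. synchronized from an upstream HR source.
conn.add("uid=jdoe,ou=people,dc=example,dc=com",
         object_class=["inetOrgPerson"],
         attributes={"cn": "Jane Doe", "sn": "Doe",
                     "mail": "jane.doe@example.com"})
print("add result:", conn.result["description"])

# Verify by searching for the entry just created.
conn.search("ou=people,dc=example,dc=com",
            "(uid=jdoe)", attributes=["cn", "mail"])
print(conn.entries)
```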
This book presents various novel architectures for FPGA-optimized accurate and approximate operators, their detailed accuracy and performance analysis, various techniques to model the behavior of approximate operators, and thorough application-level analysis to evaluate the impact of approximations on the final output quality and performance metrics. As multiplication is one of the most commonly used and computationally expensive operations in various error-resilient applications such as digital signal and image processing and machine learning algorithms, this book particularly focuses on this operation. The book starts by elaborating on the various sources of error resilience and opportunities available for approximations on various layers of the computation stack. It then provides a detailed description of the state-of-the-art approximate computing-related works and highlights their limitations.
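As a purely generic illustration of the accuracy/performance trade-off such operators exploit, the sketch below implements a simple truncation-based approximate multiplier and measures its mean relative error; it is not one of the FPGA-optimized architectures proposed in the book.

```python
# Truncation-based approximate multiplier: drop the k least significant
# bits of each operand before multiplying, then rescale the result.
import random

def approx_mult(a: int, b: int, k: int = 4) -> int:
    """Multiply unsigned operands with their low k bits truncated."""
    return ((a >> k) * (b >> k)) << (2 * k)

def mean_relative_error(trials: int = 10_000, k: int = 4) -> float:
    err = 0.0
    for _ in range(trials):
        a = random.randrange(1, 1 << 16)
        b = random.randrange(1, 1 << 16)
        exact = a * b
        err += abs(exact - approx_mult(a, b, k)) / exact
    return err / trials

if __name__ == "__main__":
    for k in (2, 4, 6):
        print(f"k = {k}: mean relative error ~ {mean_relative_error(k=k):.4%}")
```

Larger k removes more partial products (cheaper hardware) at the cost of a larger average error, which is exactly the kind of trade-off evaluated at application level in the book.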
You may like...
Novel Approaches to Information Systems… by Naveen Prakash, Deepika Prakash (Hardcover, R5,924)
The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback, R2,281)
Creativity in Computing and DataFlow… by Suyel Namasudra, Veljko Milutinovic (Hardcover, R4,204)
Intelligent Applications for… by Kandarpa Kumar Sarma, Manash Pratim Sarma, … (Hardcover, R6,324)
Advances in Delay-Tolerant Networks… by Joel J. P. C. Rodrigues (Paperback, R4,669)