Analog Behavioral Modeling With The Verilog-A Language provides the IC designer with an introduction to the methodologies and uses of analog behavioral modeling with the Verilog-A language. In doing so, it presents an overview of Verilog-A language constructs as well as applications using the language. In addition, the book is accompanied by the Verilog-A Explorer IDE (Integrated Development Environment), a limited-capability Verilog-A enhanced SPICE simulator for further learning and experimentation with the Verilog-A language. This book assumes a basic understanding of SPICE-based analog simulation and of the Verilog HDL, although any programming-language background and a little determination should suffice. From the Foreword: "Verilog-A is a new hardware description language (HDL) for analog circuit and systems design. Since the mid-eighties, Verilog HDL has been used extensively in the design and verification of digital systems. However, there have been no analogous high-level languages available for analog and mixed-signal circuits and systems. Verilog-A provides a new dimension of design and simulation capability for analog electronic systems. Previously, analog simulation has been based upon the SPICE circuit simulator or some derivative of it. Digital simulation is primarily performed with a hardware description language such as Verilog, which is popular since it is easy to learn and use. Making Verilog more worthwhile is the fact that several tools exist in the industry that complement and extend Verilog's capabilities ... Analog Behavioral Modeling With the Verilog-A Language provides a good introduction and starting place for students and practicing engineers with an interest in understanding this new level of simulation technology. This book contains numerous examples that enhance the text material and provide a helpful learning tool for the reader. The text and the simulation program included can be used for individual study or in a classroom environment ..." - Dr. Thomas A. DeMassa, Professor of Engineering, Arizona State University
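As a rough illustration of what behavioral modeling buys the designer, the following Python sketch numerically solves the branch equation that a Verilog-A RC low-pass module would state declaratively. It is a hand-written analogy, not Verilog-A code and not the book's Explorer IDE; all names and component values are illustrative assumptions.

```python
# Minimal numerical sketch (not Verilog-A itself): what a behavioral
# RC low-pass model expresses, solved with forward-Euler integration.
# A Verilog-A module would state the same relation declaratively via
# contribution statements such as I(out) <+ ddt(V(out))*C.
# All names and values here are illustrative assumptions.

def rc_lowpass(v_in, r=1e3, c=1e-6, dt=1e-6):
    """Return the output-voltage waveform of an RC low-pass stage."""
    v_out, out = 0.0, []
    for v in v_in:
        # dVout/dt = (Vin - Vout) / (R*C): the behavioral equation
        v_out += dt * (v - v_out) / (r * c)
        out.append(v_out)
    return out

# Step response: input jumps from 0 V to 1 V.
wave = rc_lowpass([1.0] * 5000)
print(f"Vout after 5 ms: {wave[-1]:.3f} V")  # approaches 1 V (tau = 1 ms)
```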
Haptic technology is used in more and more applications, such as computer games (for increased immersion), surgical simulators (to create a realistic training environment for surgeons), surgical robotics (for safety reasons) and mobile phones (to provide feedback on user actions). The existence of these applications highlights a clear need to understand performance metrics for haptic interfaces and their implications for device design, use and application. Performance Metrics for Haptic Interfaces aims to meet this need by establishing standard practices for the evaluation of haptic interfaces and by identifying significant performance metrics. Towards this end, a combined physical and psychophysical experimental methodology is presented. First, existing physical performance measures and device characterization techniques are investigated and described in an illustrative way. Second, a wide range of human psychophysical experiments are reviewed and the appropriate ones are applied to haptic interactions. The psychophysical experiments are unified into a systematic and complete evaluation method for haptic interfaces. Finally, the synthesis of both evaluation methods is discussed. The metrics provided in this state-of-the-art volume will guide readers in evaluating the performance of any haptic interface. The generic methodology will enable researchers to experimentally assess the suitability of a haptic interface for a specific purpose, to characterize and compare devices quantitatively, and to identify possible improvement strategies in the design of a system.
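As a taste of the psychophysical side of such a methodology, the sketch below simulates a classic one-up/two-down adaptive staircase, a standard threshold-estimation procedure of the kind these evaluations draw on (it converges near the 70.7%-correct detection level). The simulated observer and all parameters are assumptions, not the book's protocol.

```python
import random

# One-up/two-down staircase (Levitt-style): level drops after two
# consecutive detections, rises after each miss; reversal levels are
# averaged to estimate the detection threshold.

def detects(stimulus, true_threshold=0.5):
    """Simulated observer: detection probability rises with intensity."""
    return random.random() < min(1.0, stimulus / (2 * true_threshold))

level, step, correct_run, reversals, last_dir = 1.0, 0.1, 0, [], None
while len(reversals) < 8:
    if detects(level):
        correct_run += 1
        if correct_run == 2:            # two correct -> harder (lower level)
            correct_run = 0
            if last_dir == +1:
                reversals.append(level)  # direction changed: a reversal
            level, last_dir = max(0.05, level - step), -1
    else:                               # one miss -> easier (higher level)
        correct_run = 0
        if last_dir == -1:
            reversals.append(level)
        level, last_dir = level + step, +1

print(f"threshold estimate: {sum(reversals) / len(reversals):.2f}")
```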
The third edition of this authoritative and comprehensive handbook is the definitive work on the current state of the art of Biometric Presentation Attack Detection (PAD), also known as Biometric Anti-Spoofing. Building on the success of the previous editions, this thoroughly updated third edition has been considerably revised to provide even greater coverage of PAD methods, spanning biometric systems based on face, fingerprint, iris, voice, vein, and signature recognition. New material is also included on major PAD competitions, important databases for research, and the impact of recent international legislation. Valuable insights are supplied by a selection of leading experts in the field, complete with results from reproducible research, supported by source code and further information available at an associated website. Topics and features: reviews the latest developments in PAD for fingerprint biometrics, covering recent technologies like Vision Transformers and reviewing the relevant competition series; examines methods for PAD in iris recognition systems, including the use of pupil size measurement or multiple spectra; discusses advancements in PAD methods for face recognition-based biometrics, such as recent progress on the detection of 3D facial masks and the use of multiple spectra with Deep Neural Networks; presents an analysis of PAD for automatic speaker verification (ASV), including a study of generalization to unseen attacks; describes the results yielded by key competitions on fingerprint liveness detection, iris liveness detection, and face anti-spoofing; provides analyses of PAD in finger-vein recognition, in signature biometrics, and in mobile biometrics; includes coverage of international standards in PAD and legal aspects of image manipulations like morphing. This text/reference is essential reading for anyone involved in biometric identity verification, be they students, researchers, practitioners, engineers, or technology consultants. Those new to the field will also benefit from a number of introductory chapters, outlining the basics of the most important biometrics.
Transactional-Level Modeling (TLM), currently employed at STMicroelectronics, puts forward a novel SoC design methodology beyond RTL, with measured improvements in productivity and first-time silicon success. The SystemC consortium published the official TLM development kit in May 2005 to standardize this modeling technique. The library is flexible enough to model components and systems at many different levels of abstraction: from cycle-accurate to untimed models, and from bit-true behavior to floating-point algorithms. However, careful selection of the abstraction level and associated methodology is crucial to ensure practical gains for design teams. Transaction-Level Modeling with SystemC presents the formalized abstraction and related methodology defined at STMicroelectronics, and covers all major topics related to the Electronic System-Level (ESL) industry, including TLM modeling concepts. Complementary to the book, open source code putting this approach into practice is available on several Internet sites, as indicated in the first chapter.
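The abstraction gap that TLM exploits can be sketched in a few lines: the same memory read modeled as a single untimed transaction versus a cycle-by-cycle pin-level protocol. This is a language-neutral Python analogy, not SystemC code and not the STMicroelectronics methodology; all class and signal names are invented for illustration.

```python
# Illustrative contrast (not SystemC/TLM library code): one memory
# read as an untimed transaction versus a cycle-accurate protocol.

class TLMMemory:
    """Untimed transaction-level model: a read is one function call."""
    def __init__(self):
        self.mem = {0x10: 0xAB}

    def read(self, addr):
        return self.mem.get(addr, 0)

class RTLMemory:
    """Cycle-accurate sketch: the bus protocol is spread over cycles."""
    def __init__(self):
        self.mem, self.addr_reg, self.data_out = {0x10: 0xAB}, None, None

    def clock(self, addr_bus=None, read_en=False):
        if read_en and addr_bus is not None:
            self.addr_reg = addr_bus                        # cycle 1: latch address
        elif self.addr_reg is not None:
            self.data_out = self.mem.get(self.addr_reg, 0)  # cycle 2: drive data
            self.addr_reg = None

tlm = TLMMemory()
print(hex(tlm.read(0x10)))              # one call, no notion of clock cycles

rtl = RTLMemory()
rtl.clock(addr_bus=0x10, read_en=True)  # cycle 1
rtl.clock()                             # cycle 2
print(hex(rtl.data_out))
```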
This book analyzes the causes of failures in computing systems, their consequences, as well as the existing solutions to manage them. The domain is tackled in a progressive and educational manner, with two objectives: 1. the mastering of the basics of the dependability domain at system level, that is to say independently of the technology used (hardware or software) and of the domain of application; 2. the understanding of the fundamental techniques available to prevent, to remove, to tolerate, and to forecast faults in hardware and software technologies. The first objective leads to the presentation of the general problem, the fault models and degradation mechanisms which are at the origin of the failures, and finally the methods and techniques which permit the faults to be prevented, removed or tolerated. This study concerns logical systems in general, independently of the hardware and software technologies put in place. This knowledge is indispensable because a large part of a product's development is independent of the technological means (expression of requirements, specification and most of the design stage). Very often, the development team does not possess this basic knowledge; hence, the dependability requirements are considered only during the technological implementation. Such an approach is expensive and inefficient. Indeed, the removal of a preliminary design fault can be very difficult (if possible at all) if this fault is detected during the product's final testing.
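As one concrete instance of the fault-tolerance techniques treated at system level, the following minimal Python sketch shows triple modular redundancy (TMR), where a majority voter masks a single faulty replica. The fault injection and function names are purely illustrative assumptions, not an example from the book.

```python
# Triple modular redundancy (TMR) sketch: three replicas compute the
# same function and a voter masks a single faulty result.

def tmr(replicas, *args):
    """Return the majority result of three redundant computations."""
    results = [f(*args) for f in replicas]
    for r in results:
        if results.count(r) >= 2:   # majority vote masks one faulty replica
            return r
    raise RuntimeError("no majority: more than one replica failed")

def good(x):
    return x * x

def faulty(x):
    return x * x + 1                # injected fault in one replica

print(tmr([good, good, faulty], 7))   # 49: the single fault is masked
```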
Behavioral Intervals in Embedded Software introduces a comprehensive approach to timing, power, and communication analysis of embedded software processes. Embedded software timing, power and communication are typically not unique but occur in intervals which result from data-dependent behavior, environment timing and target system properties.
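The interval view can be made concrete in a few lines: a process's execution time is a [best-case, worst-case] pair, bounds add under sequential composition, and a data-dependent branch widens the result to the envelope of its alternatives. The Python sketch below, with invented cycle counts, illustrates this composition; it is not the book's analysis machinery.

```python
# Behavioral intervals: [best-case, worst-case] bounds and how they
# compose. All cycle counts are illustrative assumptions.

def seq(a, b):
    """Sequential composition: bounds add."""
    return (a[0] + b[0], a[1] + b[1])

def branch(*alts):
    """Data-dependent alternatives: take the enclosing interval."""
    return (min(a[0] for a in alts), max(a[1] for a in alts))

read_sensor = (120, 150)        # cycles: best case, worst case
fast_path   = (40, 60)
slow_path   = (300, 420)

total = seq(read_sensor, branch(fast_path, slow_path))
print(f"execution-time interval: [{total[0]}, {total[1]}] cycles")
# -> [160, 570]: an interval, not a single number, is the result
```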
Java is an exciting new object-oriented technology. Hardware for supporting objects and other features of Java such as multithreading, dynamic linking and loading is the focus of this book. The impact of Java's features on micro-architectural resources and issues in the design of Java-specific architectures are interesting topics that require the immediate attention of the research community. While Java has become an important part of desktop applications, it is now being used widely in high-end server markets, and will soon be widespread in low-end embedded computing. Java Microarchitectures contains a collection of papers providing a snapshot of the state of the art in hardware support for Java. The book covers the behavior of Java applications, embedded processors for Java, memory system design, and high-performance single-chip architectures designed to execute Java applications efficiently.
Community structure is a salient structural characteristic of many real-world networks. Communities are generally hierarchical, overlapping, multi-scale and coexist with other types of structural regularities of networks. This poses major challenges for conventional methods of community detection. This book will comprehensively introduce the latest advances in community detection, especially the detection of overlapping and hierarchical community structures, the detection of multi-scale communities in heterogeneous networks, and the exploration of multiple types of structural regularities. These advances have been successfully applied to analyze large-scale online social networks, such as Facebook and Twitter. This book provides readers with a convenient way to grasp the cutting edge of community detection in complex networks.
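For readers who want to see the task itself, the short sketch below runs a standard modularity-based community detection method from the networkx library on the classic karate-club benchmark graph. This shows conventional (non-overlapping) detection, not the book's own overlapping, hierarchical or multi-scale algorithms.

```python
# Conventional community detection on a small benchmark network,
# using networkx's greedy modularity maximization.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                     # classic benchmark network
parts = community.greedy_modularity_communities(G)
for i, nodes in enumerate(parts):
    print(f"community {i}: {sorted(nodes)}")
```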
A system is a complex object containing a significant percentage of electronics that interacts with the Real World (physical environments, humans, etc.) through sensing and actuating devices. A system is heterogeneous, i.e., is characterized by the co-existence of a large number of components of disparate type and function (for example, programmable components such as microprocessors and Digital Signal Processors (DSPs), analog components such as A/D and D/A converters, sensors, transmitters and receivers). Any approach to system design today must include software concerns to be viable. In fact, it is now common knowledge that more than 70% of the development cost for complex systems such as automotive electronics and communication systems is due to software development. In addition, this percentage is increasing constantly. It has been my take for years that the so-called hardware-software co-design problem is formulated at too low a level to yield significant results in shortening design time to the point needed for next-generation electronic devices and systems. The level of abstraction has to be raised to the Architecture-Function co-design problem, where Function refers to the operations that the system is supposed to carry out and Architecture is the set of supporting components for that functionality. The supporting components, as we said above, are heterogeneous and almost always contain programmable components.
Spectral Techniques in VLSI CAD have become a subject of renewed interest in the design automation community due to the emergence of new and efficient methods for the computation of discrete function spectra. In the past, spectral computations for digital logic were too complex for practical implementation. The use of decision diagrams for spectral computations has greatly reduced this obstacle, allowing for the development of new and useful spectral techniques for VLSI synthesis and verification. Several new algorithms for the computation of the Walsh, Reed-Muller, arithmetic and Haar spectra are described. The relation of these computational methods to traditional ones is also provided. Spectral Techniques in VLSI CAD provides a unified formalism of the representation of bit-level and word-level discrete functions in the spectral domain and as decision diagrams. An alternative and unifying interpretation of decision diagram representations is presented, since it is shown that many of the different commonly used varieties of decision diagrams are merely graphical representations of various discrete function spectra. Viewing various decision diagrams as being described by specific sets of transformation functions not only illustrates the relationship between graphical and spectral representations of discrete functions, but also gives insight into how various decision diagram types are related. Spectral Techniques in VLSI CAD describes several new applications of spectral techniques in discrete function manipulation, including decision diagram minimization, logic function synthesis, technology mapping and equivalence checking. The use of linear transformations in decision diagram size reduction is described, along with its relationship to the operation known as spectral translation. Several methods for synthesizing digital logic circuits based on a subset of spectral coefficients are described. An equivalence checking approach for functional verification is described, based upon the use of matching pairs of Haar spectral coefficients.
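As a minimal, self-contained illustration of the kind of spectrum involved, the Python sketch below computes the Walsh spectrum of a Boolean function directly from its truth table with a Hadamard matrix. The book's point is precisely that decision-diagram methods avoid this exponential dense-matrix computation; the function and encoding here are standard textbook choices, not the book's algorithms.

```python
import numpy as np

# Walsh spectrum of a Boolean function, computed naively from the
# truth table via a Hadamard matrix (exponential in the input count).

def walsh_spectrum(truth_table):
    """Walsh spectrum of f in +/-1 encoding; length must be 2**n."""
    n = int(np.log2(len(truth_table)))
    H = np.array([[1]])
    for _ in range(n):                      # build H_n by Kronecker product
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    f = 1 - 2 * np.array(truth_table)       # 0 -> +1, 1 -> -1
    return H @ f

# Two-input XOR: truth table for inputs 00, 01, 10, 11.
print(walsh_spectrum([0, 1, 1, 0]))         # -> [0 0 0 4]
```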
This important, state-of-the-art book brings together for the first time in one volume the two areas of Legacy Systems and Business Processes. The research discussed has arisen from the EPSRC research programme on Systems Engineering for Business Process Change, and the book contains contributions from leading experts in the field.
Embedded systems are informally defined as a collection of programmable parts surrounded by ASICs and other standard components that interact continuously with an environment through sensors and actuators. The programmable parts include micro-controllers and Digital Signal Processors (DSPs). Embedded systems are often used in life-critical situations, where reliability and safety are more important criteria than performance. Today, embedded systems are designed with an ad hoc approach that is heavily based on earlier experience with similar products and on manual design. Use of higher-level languages such as C helps structure the design somewhat, but with increasing complexity it is not sufficient. Formal verification and automatic synthesis of implementations are the surest ways to guarantee safety. Thus, the POLIS system, which is a co-design environment for embedded systems, is based on a formal model of computation. POLIS was initiated in 1988 as a research project at the University of California at Berkeley and, over the years, grew into a full design methodology with a software system supporting it. Hardware-Software Co-Design of Embedded Systems: The POLIS Approach is intended to give a complete overview of the POLIS system, including its formal and algorithmic aspects. It will be of interest to embedded system designers (automotive electronics, consumer electronics and telecommunications), micro-controller designers, CAD developers and students.
The book is directly relevant for students on HND, degree and professional courses. This market-leading text covers the whole range of activities necessary for the analysis, design and implementation of computer-based information and data processing systems. The authors emphasize the role of people, management and quality issues, and consider practical and business realities.
Content distribution, i.e., distributing digital content from one node to another node or multiple nodes, is the most fundamental function of the Internet. Since Amazon's launch of EC2 in 2006 and Apple's release of the iPhone in 2007, Internet content distribution has shown a strong trend toward polarization. On the one hand, considerable investments have been made in creating heavyweight, integrated data centers ("heavy-cloud") all over the world, in order to achieve economies of scale and high flexibility/efficiency of content distribution. On the other hand, end-user devices ("light-end") have become increasingly lightweight, mobile and heterogeneous, creating new demands concerning traffic usage, energy consumption, bandwidth, latency, reliability, and/or the security of content distribution. Based on comprehensive real-world measurements at scale, we observe that existing content distribution techniques often perform poorly under the abovementioned new circumstances. Motivated by the trend of "heavy-cloud vs. light-end," this book is dedicated to uncovering the root causes of today's mobile networking problems and designing innovative cloud-based solutions to practically address such problems. Our work has produced not only academic papers published in prestigious conference proceedings like SIGCOMM, NSDI, MobiCom and MobiSys, but also concrete effects on industrial systems such as Xiaomi Mobile, MIUI OS, Tencent App Store, Baidu PhoneGuard, and WiFi.com. A series of practical takeaways and easy-to-follow testimonials are provided to researchers and practitioners working in mobile networking and cloud computing. In addition, we have released as much code and data used in our research as possible to benefit the community.
In the field of formal methods in computer science, concurrency theory is receiving a constantly increasing interest. This is especially true for process algebra. Although it had been originally conceived as a means for reasoning about the semantics of concurrent programs, process algebraic formalisms like CCS, CSP, ACP, π-calculus, and their extensions (see, e.g., [154,119,112,22,155,181,30]) were soon used also for comprehending functional and nonfunctional aspects of the behavior of communicating concurrent systems. The scientific impact of process calculi and behavioral equivalences at the base of process algebra is witnessed not only by a very rich literature. It is in fact worth mentioning the standardization procedure that led to the development of the process algebraic language LOTOS [49], as well as the implementation of several modeling and analysis tools based on process algebra, like CWB [70] and CADP [93], some of which have been used in industrial case studies. Furthermore, process calculi and behavioral equivalences are by now adopted in university-level courses to teach the foundations of concurrent programming as well as the model-driven design of concurrent, distributed, and mobile systems. Nevertheless, after 30 years since its introduction, process algebra is rarely adopted in the practice of software development. On the one hand, its technicalities often obfuscate the way in which systems are modeled. As an example, if a process term comprises numerous occurrences of the parallel composition operator, it is hard to understand the communication scheme among the various subterms. On the other hand, process algebra is perceived as being difficult to learn and use by practitioners, as it is not close enough to the way they think of software systems.
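To make the point about parallel composition concrete, consider a standard textbook CCS example (not taken from this book): in the term $(a.P \mid \bar{a}.Q) \setminus \{a\}$, the two subterms can synchronize on the complementary actions $a$ and $\bar{a}$, producing an internal $\tau$-step to $(P \mid Q) \setminus \{a\}$. With two components the communication scheme is still evident; with many occurrences of the parallel operator $\mid$, the set of possible synchronizations grows quickly and becomes hard to trace by inspection.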
Embedded Software for SoC covers all software-related aspects of SoC design.
Integrating the best aspects of the structured systems analysis and design method and the prototyping method, this work introduces a unique approach to computer systems development which is simple, flexible, thorough, and cost-effective.
This handbook distils the wealth of expertise and knowledge from a large community of researchers and industrial practitioners in Software Product Lines (SPLs) gained through extensive and rigorous theoretical, empirical, and applied research. It is a timely compilation of well-established and cutting-edge approaches that can be leveraged by those facing the prevailing and daunting challenge of re-engineering their systems into SPLs. The selection of chapters provides readers with a wide and diverse perspective that reflects the complementary and varied expertise of the chapter authors. This perspective covers the re-engineering processes, from planning to execution. SPLs are families of systems that share common assets, allowing a disciplined software reuse. The adoption of SPL practices has been shown to enable significant technical and economic benefits for the companies that employ them. However, successful SPLs rarely start from scratch; instead, they usually start from a set of existing systems that must undergo well-defined re-engineering processes to unleash new levels of productivity and competitiveness. Practitioners will benefit from the lessons learned by the community, captured in the array of methodological and technological alternatives presented in the chapters of the handbook, and will gain the confidence for undertaking their own re-engineering challenges. Researchers and educators will find a valuable single-entry point for quickly becoming familiar with the state of the art on the topic and the open research opportunities, as will undergraduate and graduate students and R&D engineers who want a comprehensive understanding of techniques for the reverse engineering and re-engineering of variability-rich software systems.
This book, for the first time, provides comprehensive coverage of malicious modification of electronic hardware, also known as hardware Trojan attacks, highlighting the evolution of the threat, different attack modalities, the challenges, and the diverse array of defense approaches. It debunks the myths associated with hardware Trojan attacks and presents the practical attack space in the scope of current business models and practices. It covers the threat of hardware Trojan attacks for all attack surfaces; presents attack models, types and scenarios; discusses trust metrics; presents different forms of protection approaches, both proactive and reactive; provides insight into current industrial practices; and finally, describes emerging attack modes, defenses and future research pathways.
Covering theoretical methods and computational techniques in biomolecular research, this book focuses on approaches for the treatment of macromolecules, including proteins, nucleic acids, and bilayer membranes. It uses concepts in free energy calculations, conformational analysis, reaction rates, and transition pathways to calculate and interpret biomolecular properties gleaned from computer-generated membrane simulations. It also demonstrates comparative protein structure modeling, outlines computer-aided drug design, discusses Bayesian statistics in molecular and structural biology, and examines the RISM-SCF/MCSCF approach to chemical processes in solution.
Network monitoring serves as the basis for a wide scope of network engineering and management operations. Precise network monitoring involves inspecting every packet traversing a network. However, this is not feasible with future high-speed networks, due to significant overheads of processing, storing, and transferring measured data. Network Monitoring in High Speed Networks presents accurate measurement schemes from both traffic and performance perspectives, and introduces adaptive sampling techniques for various granularities of traffic measurement. The techniques allow monitoring systems to control the accuracy of estimations, and to adapt the sampling probability dynamically according to traffic conditions. The issues surrounding network delays for practical performance monitoring are discussed in the second part of this book. Case studies based on real operational network traces are provided throughout. Network Monitoring in High Speed Networks is designed as a secondary text or reference book for advanced-level students and researchers concentrating on computer science and electrical engineering. Professionals working within the networking industry will also find this book useful.
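The flavor of adaptive sampling can be conveyed in a few lines: estimate traffic volume from sampled packets by dividing by the sampling probability, then adjust that probability so the measurement overhead stays bounded as traffic changes. The Python sketch below uses an invented control rule and constants; it illustrates the general idea, not the book's specific algorithms.

```python
import random

# Adaptive packet sampling sketch: an unbiased count estimator
# (samples / p) plus a simple rule that retargets p so roughly
# `budget` packets are sampled per interval.

def monitor(intervals, p=0.5, budget=500):
    """Yield (estimated packet count, p used) per measurement interval."""
    for true_count in intervals:
        sampled = sum(random.random() < p for _ in range(true_count))
        estimate = sampled / p                 # unbiased count estimator
        yield estimate, p
        # Adapt: aim to sample about `budget` packets next interval.
        p = max(0.01, min(1.0, budget / max(estimate, 1.0)))

traffic = [800, 2000, 10000, 1200]             # packets per interval
for est, used_p in monitor(traffic):
    print(f"p={used_p:.3f}  estimated={est:.0f} packets")
```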
Information System Development: Improving Enterprise Communication contains the collected proceedings of the 22nd International Conference on Information Systems Development (ISD 2013), held in Seville, Spain. It follows in the tradition of previous conferences in the series in exploring the connections between industry, research and education. These proceedings represent ongoing reflections within the academic community on established information systems topics and emerging concepts, approaches and ideas. It is hoped that the papers herein contribute towards disseminating research and improving practice. The conference tracks highlighted at ISD 2013 were: Applications; Data and Ontologies; End Users; Enterprise Evolution; Industrial Cases in ISD; Intelligent Business Process Management; Model Driven Engineering in ISD; New Technologies; Process Management; and Quality.
Embedded systems are altering the landscape of electronics manufacturing worldwide, giving many consumer products sophisticated capabilities undreamt of even a few years ago. The explosive proliferation of built-in computers and the variety of design methods developed in both industry and academia necessitate the sort of pragmatic guidance offered in Embedded Systems Design with 8051 Microcontrollers. This enormously practical reference/text explains the developments in microcontroller technology and provides lucid instructions on its many and varied applications, focusing on the popular 8-bit microcontrollers, the 8051 and the 83C552, and outlining a systematic methodology for the design of small-scale, control-dominated embedded systems. Including a disk of all the example problems in the book, end-of-chapter problems that reinforce essential concepts, and end-of-chapter references with URLs, Embedded Systems Design with 8051 Microcontrollers: reviews basic concepts, from logic gates to Internet appliances; considers the 8051 and 83C552 microcontrollers as parallel running processors and embedded peripherals; introduces a coherent taxonomy and symbols for microcontroller flags; provides a succession of assembly language examples, such as electromechanical and digital clocks; examines digital interfacing at two hierarchical levels, the interface to typical system components and the interaction with the outside world; covers applications of analog interfacing, from elementary forms to advanced designs for speech machines; discusses serial interfaces suitable for distributed embedded systems; and demonstrates the transition from classical design approaches to hardware-software codesign, with case studies of a simplified EPROM programmer and an EPROM emulator. Profusely illustrated with over 250 drawings and diagrams, this state-of-the-art resource is a must-read reference for electrical, electronics, computer, and industrial engineers.
A reactive system is one that is in continual interaction with its environment and executes at a pace determined by that environment. Examples of reactive systems are network protocols, air-traffic control systems, industrial process control systems, etc. Reactive systems are ubiquitous and represent an important class of systems. Due to their complex nature, such systems are extremely difficult to specify and implement. Many reactive systems are employed in highly critical applications, making it crucial that one considers issues such as reliability and safety while designing them. The design of reactive systems is considered to be problematic, and poses one of the greatest challenges in the field of system design and development. In this book, we discuss specification-modeling methodologies for reactive systems. Specification modeling is an important stage in reactive system design where the designer specifies the desired properties of the reactive system in the form of a specification model. This specification model acts as the guidance and source for the implementation. To develop the specification model of complex systems in an organized manner, designers resort to specification modeling methodologies. In the context of reactive systems, we can call such methodologies reactive-system specification modeling methodologies.
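A minimal specification model can be as small as a finite-state machine that reacts to environment events one at a time, at the environment's pace. The Python sketch below specifies a hypothetical request/acknowledge handshake with a timeout; the protocol, event names and transition table are illustrative assumptions, not a methodology from the text.

```python
# A tiny reactive-system specification model: a finite-state machine
# driven by environment events. Unspecified events are ignored.

TRANSITIONS = {
    ("idle",    "request"): "waiting",
    ("waiting", "ack"):     "idle",      # normal completion
    ("waiting", "timeout"): "error",     # required safety behavior
    ("error",   "reset"):   "idle",
}

def run(events, state="idle"):
    """React to each environment event in turn."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
        print(f"event={ev:8s} -> state={state}")
    return state

run(["request", "timeout", "reset", "request", "ack"])
```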