Thomas Feller sheds some light on trust anchor architectures for
trustworthy reconfigurable systems. He presents novel concepts that
enhance the security capabilities of reconfigurable hardware.
Almost invisible to the user, many computer systems are embedded
into everyday artifacts, such as cars, ATMs, and pacemakers. The
significant growth of this market segment in recent years has forced
a rethinking of the security properties and the trustworthiness of
these systems. The trustworthiness of a
system in general equates to the integrity of its system
components. Hardware-based trust anchors provide measures to
compare the system configuration to reference measurements.
Reconfigurable architectures represent a special case in this
regard, as in addition to the software implementation, the
underlying hardware architecture may be exchanged, even during
runtime.
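As a rough sketch of the reference-measurement idea described above (an illustration, not material from the book), a trust anchor measures the current configuration by hashing it and compares the digest against a stored reference value. The FNV-1a hash and the provisioning flow below are illustrative stand-ins for the cryptographic primitives a real anchor would use:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* FNV-1a: a simple stand-in for the cryptographic hash
     * (e.g. SHA-256) a real trust anchor would use. */
    static uint64_t fnv1a(const uint8_t *data, size_t len)
    {
        uint64_t h = 14695981039346656037ULL; /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= data[i];
            h *= 1099511628211ULL;            /* FNV prime */
        }
        return h;
    }

    /* A configuration is trusted only if its measurement matches
     * the reference value stored by the trust anchor. */
    static int configuration_trusted(const uint8_t *bitstream, size_t len,
                                     uint64_t reference)
    {
        return fnv1a(bitstream, len) == reference;
    }

    int main(void)
    {
        const uint8_t config[] = { 0xde, 0xad, 0xbe, 0xef };
        uint64_t reference = fnv1a(config, sizeof config); /* provisioning */
        printf("trusted: %d\n",
               configuration_trusted(config, sizeof config, reference));
        return 0;
    }

If the measured digest differs from the reference, the anchor can refuse to release keys or to start the system, which is how integrity measurement becomes a trust decision.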
This book provides a literature review of various wireless MAC
protocols and techniques for achieving real-time and reliable
communications in the context of cyber-physical systems (CPS). The
evaluation analysis of IEEE 802.15.4 for CPS therein will give
insights into configuration and optimization of critical design
parameters of MAC protocols. In addition, this book also presents
the design and evaluation of an adaptive MAC protocol for medical
CPS, which exemplifies how to facilitate real-time and reliable
communications in CPS by exploiting IEEE 802.15.4 based MAC
protocols. This book will be of interest to researchers,
practitioners, and students seeking to better understand the QoS
requirements of CPS, especially for healthcare applications.
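To make the idea of critical MAC design parameters concrete, the following minimal C sketch (an illustration, not code from the book) models unslotted IEEE 802.15.4 CSMA-CA channel access with the standard's default values for macMinBE, macMaxBE, and macMaxCSMABackoffs; the randomly busy channel is a stub standing in for real clear channel assessment:

    #include <stdio.h>
    #include <stdlib.h>

    /* IEEE 802.15.4 default CSMA-CA parameters; tuning these is one
     * kind of "critical design parameter" such evaluations explore. */
    #define MAC_MIN_BE            3
    #define MAC_MAX_BE            5
    #define MAC_MAX_CSMA_BACKOFFS 4

    /* Illustrative stub for clear channel assessment (CCA):
     * the channel is busy roughly 30% of the time. */
    static int channel_clear(void) { return (rand() % 10) >= 3; }

    /* Unslotted CSMA-CA: returns the backoff stage on success,
     * or -1 on channel access failure. */
    static int csma_ca_attempt(void)
    {
        int nb = 0;          /* number of backoffs so far */
        int be = MAC_MIN_BE; /* current backoff exponent  */
        while (nb <= MAC_MAX_CSMA_BACKOFFS) {
            unsigned delay = rand() % (1u << be); /* random(0 .. 2^BE - 1) */
            (void)delay; /* a real MAC would wait this many backoff periods */
            if (channel_clear())
                return nb;
            nb++;
            if (++be > MAC_MAX_BE)
                be = MAC_MAX_BE;
        }
        return -1;
    }

    int main(void)
    {
        int r = csma_ca_attempt();
        if (r < 0)
            printf("channel access failure\n");
        else
            printf("channel acquired after %d busy backoff stages\n", r);
        return 0;
    }

Shrinking macMinBE cuts latency under light load, while raising macMaxBE and the backoff limit improves reliability under contention; trade-offs of exactly this kind are what an evaluation of IEEE 802.15.4 for CPS weighs.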
This book constitutes the refereed proceedings of the 22nd
International Conference on Nonlinear Dynamics of Electronic
Systems, NDES 2014, held in Albena, Bulgaria, in July 2014. The 47
revised full papers presented were carefully reviewed and selected
from 65 submissions. The papers are organized in topical sections
on nonlinear oscillators, circuits and electronic systems; networks
and nonlinear dynamics; and nonlinear phenomena in biological and
physiological systems.
Embedded Software Development: The Open-Source Approach delivers a
practical introduction to embedded software development, with a
focus on open-source components. This programmer-centric book is
written in a way that enables even novice practitioners to grasp
the development process as a whole. Incorporating real code
fragments and explicit, real-world open-source operating system
references (in particular, FreeRTOS) throughout, the text:
- Defines the role and purpose of embedded systems, describing their internal structure and interfacing with software development tools
- Examines the inner workings of the GNU Compiler Collection (GCC)-based software development system, or toolchain
- Presents software execution models that can be adopted profitably to model and express concurrency
- Addresses the basic nomenclature, models, and concepts related to task-based scheduling algorithms
- Shows how an open-source protocol stack can be integrated in an embedded system and interfaced with other software components
- Analyzes the main components of the FreeRTOS Application Programming Interface (API), detailing the implementation of key operating system concepts
- Discusses advanced topics such as formal verification, model checking, runtime checks, memory corruption, security, and dependability
Embedded Software Development: The Open-Source
Approach capitalizes on the authors' extensive research on
real-time operating systems and communications used in embedded
applications, often carried out in strict cooperation with
industry. Thus, the book serves as a springboard for further
research.
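For readers new to the FreeRTOS API that the book analyzes, here is a minimal hedged sketch of task creation and scheduling in C. Only widely documented FreeRTOS calls (xTaskCreate, vTaskDelay, vTaskStartScheduler) appear, and the task body is a placeholder rather than an example from the book:

    #include "FreeRTOS.h"
    #include "task.h"

    /* A periodic task: in FreeRTOS, a task is a plain C function
     * that never returns, blocking between iterations. */
    static void blink_task(void *params)
    {
        (void)params;
        for (;;) {
            /* a board-specific helper such as toggle_led() would go here */
            vTaskDelay(pdMS_TO_TICKS(500)); /* block for 500 ms */
        }
    }

    int main(void)
    {
        /* Entry point, name, stack depth (in words), argument,
         * priority, and an optional handle. */
        xTaskCreate(blink_task, "blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler(); /* hands control to the kernel; never returns */
        for (;;) {}            /* not reached */
    }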
The Verilog hardware description language (HDL) provides the
ability to describe digital and analog systems. This ability spans
the range from descriptions that express conceptual and
architectural design to detailed descriptions of implementations in
gates and transistors. Verilog was developed originally at Gateway
Design Automation Corporation during the mid-eighties. Tools to
verify designs expressed in Verilog were implemented at the same
time and marketed. Now Verilog is an open standard of IEEE with the
number 1364. Verilog HDL is now used universally for digital
designs in ASIC, FPGA, microprocessor, DSP and many other kinds of
design centers and is supported by most EDA companies. Research and
education at many universities also rely on Verilog. This book
introduces the Verilog hardware
description language and describes it in a comprehensive manner.
Verilog HDL was originally developed and specified with the intent
of use with a simulator. Semantics of the language had not been
fully described until now. In this book, each feature of the
language is described using semantic introduction, syntax and
examples. Chapter 4 leads to the full semantics of the language by
providing definitions of terms, and explaining data structures and
algorithms. The book is written with the approach that Verilog is
not only a simulation or synthesis language, or a formal method of
describing design, but a complete language addressing all of these
aspects. This book covers many aspects of Verilog HDL that are
essential parts of any design process.
The basic concepts and building blocks for the design of Fine-Grain
(i.e., FPGA) and Coarse-Grain Reconfigurable Architectures are discussed
in this book. Recently-developed integrated architecture design and
software-supported design flow of FPGA and coarse-grain
reconfigurable architecture are also described.
Ever since television became practical in the early 1950s,
closed-circuit television (CCTV) in conjunction with the light
microscope has provided large screen display, raised image
contrast, and made the images formed by ultraviolet and infrared
rays visible. With the introduction of large-scale integrated
circuits in the last decade, TV equipment has improved by leaps and
bounds, as has its application in microscopy. With modern CCTV,
sometimes with the help of digital computers, we can distill the
image from a scene that appears to be nothing but noise; capture
fluorescence too dim to be seen; visualize structures far below the
limit of resolution; crispen images hidden in fog; measure, count,
and sort objects; and record in time-lapsed and high-speed
sequences through the light microscope without great difficulty. In
fact, video is becoming indispensable for harnessing the fullest
capacity of the light microscope, a capacity that itself is much
greater than could have been envisioned just a few years ago. The
time seemed ripe then to review the basics of video, and of
microscopy, and to examine how the two could best be combined to
accomplish these tasks. The Marine Biological Laboratory short
courses on Analytical and Quantitative Light Microscopy in Biology,
Medicine, and the Materials Sciences, and the many inquiries I
received on video microscopy, supported such an effort, and Kirk
Jensen of Plenum Press persuaded me of its worth.
The original motivation for the development of digital computers
was to make it possible to perform calculations that were too large
to be attempted by a human being without serious likelihood of
error. Once the users found that they could achieve their initial
aims, they then wanted to go into greater detail, and to solve
still bigger problems, so that the demand for extra computing power
has continued unabated, and shows no sign of slackening. This book
is an attempt to describe some of the more important techniques
used today, or likely to be used in the near future, to perform
arithmetic within the computing machine. There are, at present, few
books in this field. Most books on computer design cover the more
elementary methods, and some go into detail on one or two more
ambitious units. Space does not allow more. In this text the aim
has been to fill this gap in the literature. In selecting the
topics to be covered, there have been two main aims: first, to deal
with the basic procedures of arithmetic, and then to carry on to
the design of more powerful units; second, to maintain a strictly
practical approach. The number of mathematical formulae has been
kept to a minimum, and the more complex ones have been eliminated,
since they merely serve to obscure the essential principles.
Sponsored by the International Society for Computational Methods in
Engineering
In 1981 Robotics Bibliography was published containing over 1,800
references on industrial robot research and development, culled
from the scientific literature over the previous 12 years. It was
felt that sensors for use with industrial robots merited a section
and accordingly just over 200 papers were included. It is a sign of
the increased research into sensors in production engineering that
this bibliography on both the contact and non-contact forms has
appeared less than three years after that first comprehensive
collection of references appeared. In a review in 1975, Professor
Warnecke of IPA, Stuttgart, drew attention to the lack of
sensors for touch and vision. Since then research workers in
various companies, universities and national laboratories in the
USA, the UK, Italy, Germany and Japan have concentrated on
improving sensor capabilities, particularly utilising vision,
artificial intelligence and pattern recognition principles. As a
result many research projects are on the brink of commercial
exploitation and development. This bibliography brings together
the documentation on that research and development, highlighting
the advances made in vision systems, but not neglecting the
development of tactile sensors of various types. No bibliography
can ever be comprehensive, but significant contributions from
research workers and production engineers from the major
industrialised countries over the last 12 years have been
included."
This book provides a comprehensive introduction to embedded systems
for smart appliances and energy management, bringing together for
the first time a multidisciplinary blend of topics from embedded
systems, information technology and power engineering. Coverage
includes challenges for future resource distribution grids, energy
management in smart appliances, micro energy generation, demand
response management, ultra-low-power standby, smart standby, and
communication networks in home and building automation.
NATO's Division of Scientific and Environmental Affairs sponsored
this Advanced Study Institute because it was felt to be timely to
cover this important and challenging subject for the first time in
the framework of NATO's ASI programme. The significance of
real-time systems in everyone's life is rapidly growing. The vast
spectrum of these systems can be characterised by just a few
examples of increasing complexity: controllers in washing machines,
air traffic control systems, control and safety systems of nuclear
power plants and, finally, future military systems like the
Strategic Defense Initiative (SDI). The importance of such systems
for the well-being of people requires considerable efforts in
research and development of highly reliable real-time systems.
Furthermore, the competitiveness and prosperity of entire nations
now depend on the early application and efficient utilisation of
computer integrated manufacturing systems (CIM), of which real-time
systems are an essential and decisive part. Owing to its key
significance in computerised defence systems, real-time computing
has also a special importance for the Alliance. The early research
and development activities in this field in the 1960s and 1970s
aimed towards improving the then unsatisfactory software situation.
Thus, the first high-level real-time languages were defined and
developed: RTL/2, Coral 66, Procol, LTR, and PEARL. In close
connection with these language developments and with the utilisation
of special-purpose process control peripherals, the research on
real-time operating systems advanced considerably.
No other area of biology has grown as fast and become as relevant
over the last decade as virology. It is with no little amount of
amazement that the more we learn about fundamental biological
questions and mechanisms of diseases, the more obvious it becomes
that viruses permeate all facets of our lives. While on one hand
viruses are known to cause acute and chronic, mild and fatal, focal
and generalized diseases, on the other hand, they are used as tools
for gaining an understanding of the structure and function of
higher organisms, and as vehicles for carrying protective or
curative therapies. The wide scope of approaches to different
biological and medical virological questions was well represented by
the speakers who participated in this year's Symposium. While the
epidemic of human immunodeficiency virus type 1 continues
to spread without hope for much relief in sight, intriguing
questions and answers in the area of diagnostics, clinical
manifestations and therapeutical approaches to viral infections are
unveiled daily. Let us hope that, with the increasing awareness by
our society of the role played by viruses, not only as causative
agents of diseases, but also as models for better understanding
basic biological principles, more efforts and resources are placed
into their study. Luis M. de la Maza, Irvine, California. Ellena M.
Design Methods and Applications for Distributed Embedded Systems
- IFIP 18th World Computer Congress, TC10 Working Conference on Distributed and Parallel, Embedded Systems (DIPES 2004), 22-27 August, 2004 Toulouse, France
(Paperback, Softcover reprint of the original 1st ed. 2004)
Bernd Kleinjohann, Guang R. Gao, Hermann Kopetz, Lisa Kleinjohann, Achim Rettberg
The ever-decreasing price/performance ratio of microcontrollers
makes it economically attractive to replace more and more
conventional mechanical or electronic control systems within many
products by embedded real-time computer systems. An embedded
real-time computer system is always part of a well-specified larger
system, which we call an intelligent product. Although most
intelligent products start out as stand-alone units, many of them
are required to interact with other systems at a later stage. At
present, many industries are in the middle of this transition from
stand-alone products to networked embedded systems. This transition
requires reflection and architecting: the complexity of the
evolving distributed artifact can only be controlled if careful
planning and principled design methods replace the ad-hoc
engineering of the first version of many standalone embedded
products. Design Methods and Applications for Distributed Embedded
Systems documents recent approaches and results presented at the
IFIP TC10 Working Conference on Distributed and Parallel Embedded
Systems (DIPES 2004), which was held in August 2004 as a co-located
conference of the 18th IFIP World Computer Congress in Toulouse,
France, and sponsored by the International Federation for
Information Processing (IFIP). The topics which have been chosen
for this working conference are very timely: model-based design
methods, design space exploration, design methodologies and user
interfaces, networks and communication, scheduling and resource
management, fault detection and fault tolerance, and verification
and analysis. These topics are supplemented by several hardware and
application oriented papers.
Analog Behavioral Modeling With The Verilog-A Language provides the
IC designer with an introduction to the methodologies and uses of
analog behavioral modeling with the Verilog-A language. In doing
so, an overview of Verilog-A language constructs as well as
applications using the language are presented. In addition, the
book is accompanied by the Verilog-A Explorer IDE (Integrated
Development Environment), a limited capability Verilog-A enhanced
SPICE simulator for further learning and experimentation with the
Verilog-A language. This book assumes a basic level of
understanding of the usage of SPICE-based analog simulation and the
Verilog HDL language, although any programming language background
and a little determination should suffice. From the Foreword:
`Verilog-A is a new hardware design language (HDL) for analog
circuit and systems design. Since the mid-eighties, Verilog HDL has
been used extensively in the design and verification of digital
systems. However, there have been no analogous high-level languages
available for analog and mixed-signal circuits and systems.
Verilog-A provides a new dimension of design and simulation
capability for analog electronic systems. Previously, analog
simulation has been based upon the SPICE circuit simulator or some
derivative of it. Digital simulation is primarily performed with a
hardware description language such as Verilog, which is popular
since it is easy to learn and use. Making Verilog more worthwhile
is the fact that several tools exist in the industry that
complement and extend Verilog's capabilities ... Behavioral
Modeling With the Verilog-A Language provides a good introduction
and starting place for students and practicing engineers with
interest in understanding this new level of simulation technology.
This book contains numerous examples that enhance the text material
and provide a helpful learning tool for the reader. The text and
the simulation program included can be used for individual study or
in a classroom environment ...' Dr. Thomas A. DeMassa, Professor of
Engineering, Arizona State University
The Verilog Hardware Description Language (Verilog-HDL) has long
been the most popular language for describing complex digital
hardware. It started life as a proprietary language but was donated
by Cadence Design Systems to the design community to serve as the
basis of an open standard. That standard was formalized in 1995 by
the IEEE in standard 1364-1995. About that same time a group named
Analog Verilog International formed with the intent of proposing
extensions to Verilog to support analog and mixed-signal
simulation. The first fruits of the labor of that group became
available in 1996 when the language definition of Verilog-A was
released. Verilog-A was not intended to work directly with
Verilog-HDL. Rather, it was a language with similar syntax and
related semantics that was intended to model analog systems and be
compatible with SPICE-class circuit simulation engines. The first
implementation of Verilog-A soon followed: a version from Cadence
that ran on their Spectre circuit simulator. As more
implementations of Verilog-A became available, the group defining
the analog and mixed-signal extensions to Verilog continued their
work, releasing the definition of Verilog-AMS in 2000. Verilog-AMS
combines both Verilog-HDL and Verilog-A, and adds additional
mixed-signal constructs, providing a hardware description language
suitable for analog, digital, and mixed-signal systems. Again,
Cadence was first to release an implementation of this new
language, in a product named AMS Designer that combines their
Verilog and Spectre simulation engines.
Control system design is a challenging task for practicing
engineers. It requires knowledge of different engineering fields, a
good understanding of technical specifications and good
communication skills. The current book introduces the reader into
practical control system design, bridging the gap between theory
and practice. The control design techniques presented in the book
are all model based, considering the needs and possibilities of
practicing engineers. Classical control design techniques are
reviewed, and methods are presented for verifying the robustness of
the design. It is shown how the designed control algorithm can be
implemented in real time and tested, fulfilling different safety
requirements. Good design practices and the systematic software
development process are emphasized in the book according to the
generic standard IEC 61508. The book is mainly addressed to
practicing control and embedded software engineers - working in
research and development - as well as graduate students who are
faced with the challenge to design control systems and implement
them in real-time.
Communication protocols are rules whereby meaningful communication
can be exchanged between different communicating entities. In
general, they are complex and difficult to design and implement.
Specifications of communication protocols written in a natural
language (e.g. English) can be unclear or ambiguous, and may be
subject to different interpretations. As a result, independent
implementations of the same protocol may be incompatible. In
addition, the complexity of protocols makes them very hard to
analyze in an informal way. There is, therefore, a need for precise
and unambiguous specification using some formal languages. Many
protocol implementations used in the field have suffered from
failures, such as deadlocks. When the conditions in which the
protocols work correctly have been changed, there has been no
general method available for determining how they will work under
the new conditions. It is necessary for protocol designers to have
techniques and tools to detect errors in the early phase of design,
because the later in the process that a fault is discovered, the
greater the cost of rectifying it. Protocol verification is a
process of checking whether the interactions of protocol entities,
according to the protocol specification, do indeed satisfy certain
properties or conditions which may be either general (e.g., absence
of deadlock) or specific to the particular protocol system directly
derived from the specification. In the 1980s, an ISO (International
Organization for Standardization) working group began a programme
of work to develop formal languages which were suitable for Open
Systems Interconnection (OSI). This group called such languages
Formal Description Techniques (FDTs). Some of the objectives of ISO
in developing FDTs were: enabling unambiguous, clear and precise
descriptions of OSI protocol standards to be written, and allowing
such specifications to be verified for correctness. There are two
FDTs standardized by ISO: LOTOS and Estelle. Communication Protocol
Specification and Verification is written to address the two issues
discussed above: the need to specify a protocol using an FDT and
to verify its correctness in order to uncover specification errors
in the early stage of a protocol development process. The
readership primarily consists of advanced undergraduate students,
postgraduate students, communication software developers,
telecommunication engineers, EDP managers, researchers and software
engineers. It is intended as an advanced undergraduate or
postgraduate textbook, and a reference for communication protocol
professionals.
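As a toy illustration of the verification process described above (not an FDT such as LOTOS or Estelle), the following C sketch performs reachability analysis over the global states of two peers that each try to send before receiving on synchronous channels, flagging reachable states with no enabled transitions:

    #include <stdio.h>

    /* Two peers over synchronous (rendezvous) channels.
     * Peer A: send(a); recv(b); done.   Peer B: send(b); recv(a); done.
     * Program counters run 0, 1, 2; a rendezvous fires only when one
     * side is at a send and the other at the matching receive. */

    #define ID(a, b) ((a) * 3 + (b))

    /* Enumerate successors of global state (a, b); returns count. */
    static int successors(int a, int b, int out[][2])
    {
        int n = 0;
        if (a == 0 && b == 1) { out[n][0] = 1; out[n][1] = 2; n++; } /* chan a */
        if (b == 0 && a == 1) { out[n][0] = 2; out[n][1] = 1; n++; } /* chan b */
        return n;
    }

    int main(void)
    {
        int seen[9] = { 0 }, queue[9], head = 0, tail = 0;
        seen[ID(0, 0)] = 1;
        queue[tail++] = ID(0, 0);
        while (head < tail) { /* breadth-first reachability */
            int s = queue[head++], a = s / 3, b = s % 3;
            int succ[2][2], n = successors(a, b, succ);
            if (n == 0 && !(a == 2 && b == 2)) /* stuck, but not both done */
                printf("deadlock at state (A=%d, B=%d)\n", a, b);
            for (int i = 0; i < n; i++) {
                int t = ID(succ[i][0], succ[i][1]);
                if (!seen[t]) { seen[t] = 1; queue[tail++] = t; }
            }
        }
        return 0;
    }

Running it reports a deadlock in the initial state, since both peers block on their first send; FDT-based tools perform the same kind of search over far larger state spaces derived mechanically from the specification.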
The proliferation and growth of Electronic Design Automation (EDA)
has spawned many diverse and interesting technologies. One of the
most prominent of these technologies is the VHSIC Hardware
Description Language, or VHDL. VHDL permits designers of digital
modules, components, systems, and even networks to describe their
designs both structurally and behaviorally. VHDL also allows
simulation of the designs in order to investigate their performance
prior to actually implementing them in hardware. Having gained the
ability to simulate designs once encoded in VHDL, designers were
naturally confronted with the issue of testing these designs. VHDL
did not explicitly address the requirement to insert particular
digital waveforms, often termed test vectors or patterns, or to
subsequently assess the correctness of the response from some
digital entity. In a distributed design environment, or even in an
isolated one where the design was subject to review or scrutiny by
another organization, de-facto methods of testing and evaluating
results proved faulty. The reason was a lack of
standardization. When organization A designed a circuit and tested
it with their self-developed test tools it had a certain behavior.
When it was delivered to organization B and B tested it using their
test tools, the behavior was different. Was the fault in the
circuit, in A's tools, or in B's tools? The only way to resolve
this was for both organizations to agree on a test apparatus,
validate its correctness and use it consistently. While VHDL was an
IEEE standard language, and consistency among myriad designers was
fairly well guaranteed, no such standard existed for test waveform
generation and assessment. Hence, the value of standardization in
the design language was being negated by the lack of such a
standard for testing. The Waveform and Vector Exchange
Specification, or WAVES, was conceived and designed to solve this
testing problem -- and it has. Being both a subset of VHDL itself,
as well as an IEEE standard, it guarantees both conformity among
multiple applications and easy integration with VHDL units under
test (UUTs). Using WAVES and VHDL for Effective Design and Testing
will serve many purposes. For the WAVES beginner, its tutorial will
make the application of WAVES in typical, standard usage
straightforward and convenient. For the more advanced user, the
advanced topics will provide insight into the nuances of these
useful capabilities. For all users, the tools, templates and
examples given in the chapters, as well as on the companion disk,
will provide a practical starting foundation for using WAVES and
VHDL.
Verilog(R) Quickstart is a basic, practical, introductory textbook
for professionals and students alike. This book explains how a
designer can be more effective through the use of the Verilog
hardware description language to simulate and document a design. By
understanding simulation, a designer can simulate a design to see
whether it works before it is built. This gives the designer an
opportunity to try different ideas. Documentation allows a designer
to maintain and reuse a design more easily. Verilog's intrinsic
hierarchical modularity enables the designer to easily reuse
portions of the design as 'intellectual property' or 'macro-cells'.
Verilog(R) Quickstart presents some of the formal Verilog syntax
and definitions and then shows practical uses. This book does not
oversimplify the Verilog language nor does it emphasize theory.
Verilog(R) Quickstart has over 100 examples that are used to
illustrate aspects of the language. In the later chapters the focus
is on working with modeling style and explaining why and when one
would use different elements of the language. Another feature of
the book is the chapter on state machine modeling. There is also a
chapter on test benches and testing strategy as well as a chapter
on debugging. Verilog(R) Quickstart is designed to teach the
Verilog language, to show the designer how to model in Verilog and
to explain the basics of using Verilog simulators.
A Guide to VHDL is intended for the working engineer who needs to
develop, document, simulate and synthesize a design using the VHDL
language. It is for system and chip designers who are working with
VHDL CAD tools, and who have some experience programming in
Fortran, Pascal, or C and have used a logic simulator. A Guide to
VHDL includes a number of paper exercises and computer lab
experiments. If a compiler/simulator is available to the reader,
then the lab exercises included in the chapters can be run to
reinforce the learning experience. For practical purposes, this
book keeps simulator-specific text to a minimum, but does use the
Synopsys VHDL Simulator command language in a few cases. A Guide to
VHDL can be used as a primer, since its contents are appropriate
for an introductory course in VHDL.
Principles of Verifiable RTL Design: A Functional Coding Style
Supporting Verification Processes in Verilog explains how you can
write Verilog to describe chip designs at the RT-level in a manner
that cooperates with verification processes. This cooperation can
return an order of magnitude improvement in performance and
capacity from tools such as simulation and equivalence checkers. It
reduces the labor costs of coverage and formal model checking by
facilitating communication between the design engineer and the
verification engineer. It also orients the RTL style to provide
more useful results from the overall verification process. The
intended audience for Principles of Verifiable RTL Design: A
Functional Coding Style Supporting Verification Processes in
Verilog is engineers and students who need an introduction to
various design verification processes and a supporting functional
Verilog RTL coding style. A second intended audience is engineers
who have been through introductory training in Verilog and now want
to develop good RTL writing practices for verification. A third
audience is Verilog language instructors who are using a general
text on Verilog as the course textbook but want to enrich their
lectures with an emphasis on verification. A fourth audience is
engineers with substantial Verilog experience who want to improve
their Verilog practice to work better with RTL Verilog verification
tools. A fifth audience is design consultants searching for proven
verification-centric methodologies. A sixth audience is EDA
verification tool implementers who want some suggestions about a
minimal Verilog verification subset. Principles of Verifiable RTL
Design: A Functional Coding Style Supporting Verification Processes
in Verilog is based on the reality that comes from actual
large-scale product design processes and tool experience.
Physicians, lawyers, engineers, architects, financial analysts, and
other professionals articulate an increasing need for support by
intelligent workstations for decision making, analysis,
communication, and other activities. "Intelligent Workstations for
Professionals" is the collection of papers presented by inter
national scientists at a symposium and workshop in March 1992.
Requirements from potential users, studies of their behavior as
well as approaches and aspects of technical realizations of
"intelligent" functions are introduced. Eight contributions from
members of the Center for Information and Telecommunication
Technology (CITT) of Northwestern University, the University of
Wisconsin-Whitewater, and the Children's Memorial Hospital deal with the
latest findings of the UNIS (Users' Needs for Intelligent Systems)
project, which is designed to identify needs and wishes from
professionals for intelligent support systems and the potential
barriers to adoption and use of such systems. The remaining papers
concentrate on new approaches and techniques that enhance the
"intelligence" of future workstations. They tackle issues like
architectural trends in workstation design, the combination of
workstations with HDTV and speech processing, automatic reading and
understanding of documents, the automated development of software,
or the processing of inexact knowledge. These papers were
contributed by members of the DFKI GmbH (German Research Institute
for Artificial Intelligence), GMD mbH (German Society for
Mathematics and Data Processing), Siemens Gammasonics Inc., Siemens
Nixdorf Informationssysteme AG and Siemens AG.