Learn the essentials of networking and embedded TCP/IP stacks. Part
I of this comprehensive book provides a thorough explanation of
Micriµm's µC/TCP-IP stack, including its implementation and usage.
Part II describes practical, working applications for embedded
medical devices built on µC/OS-III, µC/TCP-IP and Freescale's
TWR-K53N512 medical board (ARM Cortex-M4) using IAR development
tools. Each of the included examples features a hands-on working
project that lets you get your application running quickly and can
serve as a reference design for developing an embedded system
connected to the Internet of Things. This book is the perfect
complement to µC/OS-III: The Real-Time Kernel for the ARM Cortex-M4
by Jean Labrosse (ISBN 978-0-9823375-2-3), as it uses the same
medical application examples but connects them via TCP/IP. This
book is written for serious embedded systems programmers,
consultants, hobbyists, and students interested in understanding
the inner workings of a TCP/IP stack. µC/TCP-IP is more than just a
great learning platform. It is a full commercial-grade software
package, ready to serve as the foundation for a wide range of
products. Some of the key topics covered in this book are: Ethernet
technology and device drivers; IP connectivity; client and server
architecture; socket programming; and UDP and TCP performance
tuning.
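The socket model the book teaches can be previewed in a language-neutral way; the sketch below is a minimal UDP echo exchange in Python, purely illustrative (µC/TCP-IP's own API is C-based and uses different names and calling conventions):

```python
import socket
import threading

def udp_echo_server(host="127.0.0.1", port=0):
    # Bind a UDP socket; port 0 lets the OS pick a free port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((host, port))

    def serve_once():
        data, addr = srv.recvfrom(1024)   # block until one datagram arrives
        srv.sendto(data, addr)            # echo it back to the sender
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv.getsockname()              # (host, actual_port)

# Client side: send one datagram and read the echo.
host, port = udp_echo_server()
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
cli.sendto(b"ping", (host, port))
reply, _ = cli.recvfrom(1024)
cli.close()
print(reply.decode())  # -> ping
```

Because the server socket is bound before the client sends, the datagram is queued by the OS even if the server thread has not yet reached recvfrom.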
This holistic book is an invaluable reference for addressing
various practical challenges in architecting and engineering
Intelligent IoT and eHealth solutions for industry practitioners,
academics and researchers, as well as for engineers involved in
product development. The first part provides a comprehensive guide
to fundamentals, applications, challenges, technical and economic
benefits, and promises of the Internet of Things using examples of
real-world applications. It also addresses all important aspects of
designing and engineering cutting-edge IoT solutions using a
cross-layer approach from device to fog and cloud, covering
standards, protocols, design principles, reference architectures,
as well as all the underlying technologies, pillars, and components
such as embedded systems, network, cloud computing, data storage,
data processing, big data analytics, machine learning, distributed
ledger technologies, and security. In addition, it discusses the
effects of Intelligent IoT, which are reflected in new business
models and digital transformation. The second part provides an
insightful guide to the design and deployment of IoT solutions for
smart healthcare as one of the most important applications of IoT.
To that end, the second part targets smart healthcare: wearable
sensors, body area sensors, advanced pervasive healthcare systems,
and big data analytics that are aimed at providing connected health
interventions to individuals for healthier lifestyles.
This book attempts to give a first synthesis of recent works
concerning reactive system design. The term "reactive system" has
been introduced in order to avoid the ambiguities often associated
with the term "real-time system," which, although better known and
more suggestive, has been given so many different meanings that it
is almost inevitably misunderstood. Industrial process control
systems, transportation control and supervision systems, and
signal-processing systems are examples of the systems we have in
mind. Although these systems are more and more computerized, it is
surprising to notice that the problem of time in computer science
has been studied only recently by "pure" computer scientists. Until
the early 1980s, time problems were regarded as the concern of
performance evaluation, or of some (unjustly scorned) "industrial
computer engineering," or, at best, of operating systems. A second
surprising fact, in contrast, is the growth of research concerning
timed systems during the last decade. The handling of time has
suddenly become a fundamental goal for most models of concurrency.
In particular, Robin Milner's pioneering works about synchronous
process algebras gave rise to a school of thought adopting the
following abstract point of view: as soon as one admits that a
system can instantaneously react to events..."
This volume provides a comprehensive introduction to mHealth
technology and is accessible to technology-oriented researchers and
practitioners with backgrounds in computer science, engineering,
statistics, and applied mathematics. The contributing authors
include leading researchers and practitioners in the mHealth field.
The book offers an in-depth exploration of the three key elements
of mHealth technology: the development of on-body sensors that can
identify key health-related behaviors (sensors to markers), the use
of analytic methods to predict current and future states of health
and disease (markers to predictors), and the development of mobile
interventions which can improve health outcomes (predictors to
interventions). Chapters are organized into sections, with the
first section devoted to mHealth applications, followed by three
sections devoted to the above three key technology areas. Each
chapter can be read independently, but the organization of the
entire book provides a logical flow from the design of on-body
sensing technology, through the analysis of time-varying sensor
data, to interactions with a user which create opportunities to
improve health outcomes. This volume is a valuable resource to spur
the development of this growing field, and is ideally suited for
use as a textbook in an mHealth course.
From basic architecture, interconnection, and parallelization to
power optimization, this book provides a comprehensive description
of emerging multicore systems-on-chip (MCSoCs) hardware and
software design. Highlighting both fundamentals and advanced
software and hardware design, it can serve as a primary textbook
for advanced courses in MCSoCs design and embedded systems. The
first three chapters introduce MCSoCs architectures, present design
challenges and conventional design methods, and describe in detail
the main building blocks of MCSoCs. Chapters 4, 5, and 6 discuss
fundamental and advanced on-chip interconnection network
technologies for multi- and many-core SoCs, enabling readers to
understand the microarchitectures for on-chip routers and network
interfaces that are essential in the context of latency, area, and
power constraints. With the rise of multicore and many-core
systems, concurrency is becoming a major issue in the daily life of
a programmer. Thus, compiler and software development tools are
critical in helping programmers create high-performance software.
Programmers should make sure that their parallelized program code
will not cause race conditions, memory-access deadlocks, or other
faults that may crash their entire system. As such, Chapter 7
describes a novel parallelizing compiler design for
high-performance computing. Chapter 8 provides a detailed
investigation of power reduction techniques for MCSoCs at component
and network levels. It discusses energy conservation in general
hardware design, and also in embedded multicore system components,
such as CPUs, disks, displays and memories. Lastly, Chapter 9
presents a real embedded MCSoC system design targeted at health
monitoring in the elderly.
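The race conditions Chapter 7 warns about arise from unsynchronized read-modify-write sequences on shared data; this minimal sketch (in Python rather than the C/C++ typical of MCSoC toolchains, purely to illustrate the idea) shows the standard lock-based fix:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    # Without the lock, the read-modify-write on `counter` could
    # interleave across threads, silently losing updates.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 40000 (deterministic because the lock serializes updates)
```

Dropping the lock makes the final count nondeterministic, which is exactly the class of fault a parallelizing compiler and its analysis tools must rule out.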
This book is intended for a first course on microprocessor-based
systems design for engineering and computer science students. It
starts with an introduction of the fundamental concepts, followed
by a practical path that guides readers to developing a basic
microprocessor example, using a step-by-step problem-solving
approach. Then, a second microprocessor is presented, and readers
are guided to the implementation and programming of microcomputer
systems based on it. The numerous worked examples and solved
exercises allow for better understanding and more effective
learning. All the examples and exercises were developed on Deeds
(Digital Electronics Education and Design Suite), which is freely
available online on a website developed and maintained by the
authors. The discussed examples can be simulated by using Deeds and
the solutions to all exercises and examples can be found on that
website. Further, in the last part of this book, different
microprocessor-based systems, which have been specifically
conceived for educational purposes, are extensively developed, simulated and
implemented on FPGA-based platforms. This textbook draws on the
authors' extensive experience in teaching and developing learning
materials for bachelor's and master's engineering courses. It can
be used for self-study as well, and even independently from the
simulator. Thanks to the learning-by-doing approach and the
plentiful examples, no prior knowledge in computer programming is
required.
This book presents the basics of both NAND flash storage and
machine learning, detailing the storage problems the latter can
help to solve. At first sight, machine learning and non-volatile
memories seem very far away from each other. Machine learning
implies mathematics, algorithms and a lot of computation;
non-volatile memories are solid-state devices used to store
information, having the amazing capability of retaining the
information even without power supply. This book will help the
reader understand how these two worlds can work together, bringing
a lot of value to each other. In particular, the book covers two
main fields of application: analog neural networks (NNs) and
solid-state drives (SSDs). After reviewing the basics of machine
learning in Chapter 1, Chapter 2 shows how neural networks can
mimic the human brain; to accomplish this result, neural networks
have to perform a specific computation called vector-by-matrix
(VbM) multiplication, which is particularly power hungry. In the
digital domain, VbM is implemented by means of logic gates which
dictate both the area occupation and the power consumption; the
combination of the two poses serious challenges to the hardware
scalability, thus limiting the size of the neural network itself,
especially in terms of the number of processable inputs and
outputs. Non-volatile memories (phase change memories in Chapter 3,
resistive memories in Chapter 4, and 3D flash memories in Chapter 5
and Chapter 6) enable the analog implementation of the VbM (also
called "neuromorphic architecture"), which can easily beat the
equivalent digital implementation in terms of both speed and energy
consumption. SSDs and flash memories are strictly coupled together;
as 3D flash scales, a significant amount of work has to be done to
optimize the overall performance of SSDs.
Machine learning has emerged as a viable solution in many stages of
this process. After introducing the main flash reliability issues,
Chapter 7 shows both supervised and unsupervised machine learning
techniques that can be applied to NAND. In addition, Chapter 7
deals with algorithms and techniques for a pro-active reliability
management of SSDs. Finally, the last section of Chapter 7
discusses the next challenge for machine learning in the context
of the so-called computational storage. No doubt that machine
learning and non-volatile memories can help each other, but we are
just at the beginning of the journey; this book helps researchers
understand the basics of each field by providing real application
examples, hopefully providing a good starting point for the next
level of development.
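The vector-by-matrix (VbM) multiplication discussed above is ordinary linear algebra; this plain-Python sketch (sizes and weights are illustrative, not from the book) shows the operation that a neuromorphic array computes in analog, with each weight stored as a memory cell's conductance:

```python
def vbm(vector, matrix):
    # Vector-by-matrix multiply: output[j] = sum_i vector[i] * matrix[i][j].
    # A neuromorphic array performs this sum in analog: each column's
    # output current accumulates input voltage times cell conductance.
    cols = len(matrix[0])
    return [sum(v * row[j] for v, row in zip(vector, matrix))
            for j in range(cols)]

x = [1.0, 2.0, 3.0]          # input activations
W = [[0.5, 1.0],             # 3x2 weight matrix
     [1.5, 0.0],
     [2.0, 0.5]]
print(vbm(x, W))  # -> [9.5, 2.5]
```

In the digital domain every multiply-accumulate above costs logic gates and switching energy, which is the scalability bottleneck the analog implementation sidesteps.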
This book describes the most frequently used high-speed serial
buses in embedded systems, especially those used by FPGAs. These
buses employ the SerDes, JESD204, SRIO, PCIe, Aurora and SATA
protocols for chip-to-chip and board-to-board communication, and
the CPCIe, VPX, FC and InfiniBand protocols for inter-chassis
communication. For
each type, the book provides the bus history and version info,
while also assessing its advantages and limitations. Furthermore,
it offers a detailed guide to implementing these buses in FPGA
design, from the physical layer and link synchronization to the
frame format and application command. Given its scope, the book
offers a valuable resource for researchers, R&D engineers and
graduate students in computer science or electronics who wish to
learn the protocol principles, structures and applications of
high-speed serial buses.
The Elixir programming language has become a go-to tool for
creating reliable, fault-tolerant, and robust server-side
applications. Thanks to Nerves, those same benefits can be
realized in embedded applications. This book will teach you how to
structure, build, and deploy production-grade Nerves applications
to network-enabled devices. The weather station sensor hub project
that you will be embarking upon will show you how to create a full
stack IoT solution in record time. You will build everything from
the embedded Nerves device to the Phoenix backend and even the
Grafana time-series data visualizations. Elixir as a programming
language has found its way into many different software domains,
in large part thanks to the rock-solid foundation of the Erlang virtual
machine. Thanks to the Nerves framework, Elixir has also found
success in the world of embedded systems and IoT. Having access to
all of the Elixir and OTP constructs such as concurrency,
supervision, and immutability makes for a powerful IoT recipe. Find
out how to create fault-tolerant, reliable, and robust embedded
applications using the Nerves framework. Build and deploy a
production-grade weather station sensor hub using Elixir and
Nerves, all while leveraging the best practices established by the
Nerves community for structuring and organizing Nerves
applications. Capture all of your weather station sensor data using
Phoenix and Ecto in a lightweight server-side application.
Efficiently store and retrieve the time-series weather data
collected by your device using TimescaleDB (the Postgres extension
for time-series data). Finally, complete the full stack IoT
solution by using Grafana to visualize all of your time-series
weather station data. Discover how to create software solutions
where the underlying technologies and techniques are applicable to
all layers of the project. Take your project from idea to
production ready in record time with Elixir and Nerves.
This book pioneers the field of gain-cell embedded DRAM (GC-eDRAM)
design for low-power VLSI systems-on-chip (SoCs). Novel GC-eDRAMs
are specifically designed and optimized for a range of low-power
VLSI SoCs, ranging from ultra-low power to power-aware
high-performance applications. After a detailed review of prior-art
GC-eDRAMs, an analytical retention time distribution model is
introduced and validated by silicon measurements, which is key for
low-power GC-eDRAM design. The book then investigates supply
voltage scaling and near-threshold voltage (NTV) operation of a
conventional gain cell (GC), before presenting novel GC circuit and
assist techniques for NTV operation, including a 3-transistor full
transmission-gate write port, reverse body biasing (RBB), and a
replica technique for optimum refresh timing. Next, conventional GC
bitcells are evaluated under aggressive technology and voltage
scaling (down to the subthreshold domain), before novel bitcells
for aggressively scaled CMOS nodes and soft-error tolerance are
presented, including a 4-transistor GC with partial internal
feedback and a 4-transistor GC with built-in redundancy.
Amid recent interest in Clifford algebra for dual quaternions as a
more suitable method for Computer Graphics than standard matrix
algebra, this book presents dual quaternions and their associated
Clifford algebras in a new light, accessible to and geared towards
the Computer Graphics community. Collating all the associated
formulas and theorems in one place, this book provides an extensive
and rigorous treatment of dual quaternions, as well as showing how
two models of Clifford algebras emerge naturally from the theory of
dual quaternions. Each chapter comes complete with a set of
exercises to help readers sharpen and practice their knowledge.
This book is accessible to anyone with a basic knowledge of
quaternion algebra and is of particular use to forward-thinking
members of the Computer Graphics community.
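As a reminder of the prerequisite the book assumes, basic quaternion algebra, this illustrative Python sketch (not taken from the book) rotates a point with the Hamilton product; dual quaternions extend exactly this machinery to handle translation as well as rotation:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# Rotate the point (1, 0, 0) by 90 degrees about the z-axis: p' = q p q*.
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
q_conj = (q[0], -q[1], -q[2], -q[3])
p = (0.0, 1.0, 0.0, 0.0)          # the point embedded as a pure quaternion
_, x, y, z = qmul(qmul(q, p), q_conj)
# (x, y, z) is now approximately (0, 1, 0), up to floating-point error.
```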
Embedded System Interfacing: Design for the Internet-of-Things
(IoT) and Cyber-Physical Systems (CPS) takes a comprehensive
approach to the interface between embedded systems and software. It
provides the principles needed to understand how digital and analog
interfaces work and how to design new interfaces for specific
applications. The presentation is self-contained and practical,
with discussions based on real-world components. Design examples
are used throughout the book to illustrate important concepts. This
book is a complement to the author's Computers as Components, now
in its fourth edition, which concentrates on software running on
the CPU, while Embedded System Interfacing explains the hardware
surrounding the CPU.
This book provides readers with an up-to-date account of the use of
machine learning frameworks, methodologies, algorithms and
techniques in the context of computer-aided design (CAD) for
very-large-scale integrated circuits (VLSI). Coverage includes the
various machine learning methods used in lithography, physical
design, yield prediction, post-silicon performance analysis,
reliability and failure analysis, power and thermal analysis,
analog design, logic synthesis, verification, and neuromorphic
design. Provides up-to-date information on machine learning in VLSI
CAD for device modeling, layout verifications, yield prediction,
post-silicon validation, and reliability; Discusses the use of
machine learning techniques in the context of analog and digital
synthesis; Demonstrates how to formulate VLSI CAD objectives as
machine learning problems and provides a comprehensive treatment of
their efficient solutions; Discusses the tradeoff between the cost
of collecting data and prediction accuracy and provides a
methodology for using prior data to reduce cost of data collection
in the design, testing and validation of both analog and digital
VLSI designs. From the Foreword: As the semiconductor industry
embraces the rising swell of cognitive systems and edge
intelligence, this book could serve as a harbinger and example of
the osmosis that will exist between our cognitive structures and
methods, on the one hand, and the hardware architectures and
technologies that will support them, on the other....As we
transition from the computing era to the cognitive one, it behooves
us to remember the success story of VLSI CAD and to earnestly seek
the help of the invisible hand so that our future cognitive systems
are used to design more powerful cognitive systems. This book is
very much aligned with this on-going transition from computing to
cognition, and it is with deep pleasure that I recommend it to all
those who are actively engaged in this exciting transformation. Dr.
Ruchir Puri, IBM Fellow, IBM Watson CTO & Chief Architect, IBM
T. J. Watson Research Center
In August 1999, an international conference was held in Szeged,
Hungary, in honor of Béla Szőkefalvi-Nagy, one of the founders and
main contributors of modern operator theory. This volume contains
some of the papers presented at the meeting, complemented by
several papers of experts who were unable to attend. These 35
refereed articles report on recent and original results in various
areas of operator theory and connected fields, many of them
strongly related to contributions of Sz.-Nagy. The scientific part
of the book is preceded by fifty pages of biographical material,
including several photos.
This book provides a new perspective on modeling cyber-physical
systems (CPS), using a data-driven approach. The authors cover the
use of state-of-the-art machine learning and artificial
intelligence algorithms for modeling various aspects of the CPS.
This book provides insight on how a data-driven modeling approach
can be utilized to take advantage of the relation between the cyber
and the physical domain of the CPS to aid the first-principle
approach in capturing the stochastic phenomena affecting the CPS.
The authors provide practical use cases of the data-driven modeling
approach for securing the CPS, presenting novel attack models,
building and maintaining the digital twin of the physical system.
The book also presents novel, data-driven algorithms to handle non-
Euclidean data. In summary, this book presents a novel perspective
for modeling the CPS.
This book is based on the 18 tutorials presented during the 26th
workshop on Advances in Analog Circuit Design. Expert designers
present readers with information about a variety of topics at the
frontier of analog circuit design, with specific contributions
focusing on hybrid ADCs, smart sensors for the IoT, sub-1V and
advanced-node analog circuit design. This book serves as a valuable
reference to the state-of-the-art, for anyone involved in analog
circuit research and development.
Dependence Analysis may be considered to be the second edition of
the author's 1988 book, Dependence Analysis for Supercomputing. It
is, however, a completely new work that subsumes the material of
the 1988 publication. This book is the third volume in the series
Loop Transformations for Restructuring Compilers. This series has
been designed to provide a complete mathematical theory of
transformations that can be used to automatically change a
sequential program containing FORTRAN-like do loops into an
equivalent parallel form. In Dependence Analysis, the author
extends the model to a program consisting of do loops and
assignment statements, where the loops need not be sequentially
nested and are allowed to have arbitrary strides. In the context of
such a program, the author studies, in detail, dependence between
statements of the program caused by program variables that are
elements of arrays. Dependence Analysis is directed toward graduate
and undergraduate students, and professional writers of
restructuring compilers. The prerequisite for the book consists of
some knowledge of programming languages, and familiarity with
calculus and graph theory. No knowledge of linear programming is
required.
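The loop-carried dependences this series analyzes can be seen in miniature; this illustrative Python sketch (the book's setting is FORTRAN-like do loops) contrasts a loop whose iterations must run in order with one a restructuring compiler could safely parallelize:

```python
# Loop with a loop-carried flow dependence: iteration i reads a[i-1],
# which iteration i-1 wrote, so the iterations must run in order.
a = [1, 0, 0, 0, 0]
for i in range(1, 5):
    a[i] = a[i - 1] * 2

# Loop with no loop-carried dependence: each iteration touches only
# b[i], so a restructuring compiler may run the iterations in parallel.
b = [0] * 5
for i in range(5):
    b[i] = i * i

print(a)  # -> [1, 2, 4, 8, 16]
print(b)  # -> [0, 1, 4, 9, 16]
```

Dependence analysis is precisely the machinery for deciding, from array subscripts and loop bounds, which of these two cases a given loop falls into.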
Provides comprehensive research ideas about Edge-AI technology that
can assist doctors in making better data-driven decisions, and
offers researchers insights into the healthcare industry, its
trends and future perspectives. Examines how healthcare systems of
the future will operate by augmenting clinical resources and
ensuring optimal patient outcomes. Provides insight into how
Edge-AI is revolutionizing decision making, early warnings for
conditions, and visual inspection in healthcare. Highlights trends,
challenges, opportunities and future areas where healthcare
informatics deals with accessing vast data sets of potentially
life-saving information.
This reference text presents the usage of artificial intelligence
in healthcare and discusses the challenges and solutions of using
advanced techniques like wearable technologies and image processing
in the sector. Features: focuses on the use of artificial
intelligence (AI) in healthcare, with issues, applications, and
prospects; presents the application of AI in medical imaging, such
as fractionalization of early lung tumour detection using a
low-intricacy approach; discusses an artificial intelligence
perspective on wearable technology; analyses cardiac dynamics and
the assessment of arrhythmia by classifying heartbeats using the
electrocardiogram (ECG); and elaborates machine learning models for
early diagnosis of depressive mental affliction. This book serves
as a reference for students and researchers analyzing healthcare
data. It can also be used by graduate and postgraduate students for
an elective course.