From basic architecture, interconnection, and parallelization to
power optimization, this book provides a comprehensive description
of emerging multicore systems-on-chip (MCSoCs) hardware and
software design. Highlighting both fundamentals and advanced
software and hardware design, it can serve as a primary textbook
for advanced courses in MCSoCs design and embedded systems. The
first three chapters introduce MCSoCs architectures, present design
challenges and conventional design methods, and describe in detail
the main building blocks of MCSoCs. Chapters 4, 5, and 6 discuss
fundamental and advanced on-chip interconnection network
technologies for multi- and many-core SoCs, enabling readers to
understand the microarchitectures for on-chip routers and network
interfaces that are essential in the context of latency, area, and
power constraints. With the rise of multicore and many-core
systems, concurrency is becoming a major issue in the daily life of
a programmer. Thus, compiler and software development tools are
critical in helping programmers create high-performance software.
Programmers must ensure that their parallelized code does not cause
race conditions, memory-access deadlocks, or other faults that could
crash the entire system. As such, Chapter 7
describes a novel parallelizing compiler design for
high-performance computing. Chapter 8 provides a detailed
investigation of power reduction techniques for MCSoCs at component
and network levels. It discusses energy conservation in general
hardware design, and also in embedded multicore system components,
such as CPUs, disks, displays, and memories. Lastly, Chapter 9
presents a real embedded MCSoC system design targeted at health
monitoring for the elderly.
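The race-condition pitfall mentioned above can be illustrated with a minimal sketch (this example is not from the book): several threads share one counter, and the lock makes the read-modify-write atomic. Without the lock, interleaved loads and stores can silently lose increments.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times. The lock serializes the
    load-add-store sequence; removing it would allow two threads to
    read the same old value and overwrite each other's update."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; possibly less without it
```

The same reasoning carries over to any shared mutable state on a multicore SoC, whether the synchronization primitive is a software lock or a hardware atomic instruction.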
System-on-chip designs have evolved from fairly simple unicore,
single-memory designs to complex heterogeneous multicore SoC
architectures consisting of a large number of IP blocks on the same
silicon. To meet the high computational demands of the latest
consumer electronic devices, most current systems are based on this
paradigm, which represents a real revolution in many aspects of
computing. The attraction of multicore processing for power
reduction is compelling. By splitting a set of tasks among multiple
processor cores, the operating frequency required of each core can
be reduced, which in turn allows the voltage on each core to be
lowered. Because dynamic power is proportional to the frequency and
to the square of the voltage, this yields a substantial power
saving, even though more cores may be running. As more and more
cores are integrated into these designs to share the ever-increasing
processing load, the
main challenges lie in efficient memory hierarchy, scalable system
interconnect, new programming paradigms, and efficient integration
methodology for connecting such heterogeneous cores into a single
system capable of leveraging their individual flexibility. Current
design methods tend toward mixed HW/SW co-design targeting multicore
systems-on-chip for specific applications. To decide on the
lowest-cost mix of cores, designers must iteratively map the
device's functionality to a particular HW/SW partition and target
architecture. In addition, to connect the heterogeneous cores, the
architecture requires high-performance communication architectures
and efficient communication protocols, such as a hierarchical bus,
point-to-point connections, or a Network-on-Chip.
Software development also becomes far more complex due to the
difficulties in breaking a single processing task into multiple
parts that can be processed separately and then reassembled later.
This reflects the fact that certain processor jobs cannot be easily
parallelized to run concurrently on multiple processing cores and
that load balancing between processing cores, especially
heterogeneous ones, is very difficult.
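The dynamic-power argument above can be made concrete with the standard CMOS model P_dyn ≈ α·C·V²·f (switching activity α, capacitance C, supply voltage V, frequency f). The numbers below are hypothetical, chosen only to illustrate the trade-off: two cores at half the frequency deliver roughly the same aggregate throughput, and the lower frequency permits a lower supply voltage.

```python
def dynamic_power(alpha, c, v, f):
    """Standard CMOS dynamic-power model: P = alpha * C * V^2 * f (watts)."""
    return alpha * c * v ** 2 * f

# Hypothetical parameters, for illustration only.
# One core at full frequency (2 GHz) and full voltage (1.2 V):
single = dynamic_power(alpha=0.1, c=1e-9, v=1.2, f=2.0e9)

# Two cores at half the frequency (1 GHz); the relaxed timing
# permits a lower supply voltage (0.9 V, an assumed value):
dual = 2 * dynamic_power(alpha=0.1, c=1e-9, v=0.9, f=1.0e9)

print(dual / single)  # ≈ 0.5625: same aggregate clock cycles, ~44% less power
```

The quadratic dependence on V is what makes the trade attractive: halving frequency alone merely redistributes the same power across two cores, while the accompanying voltage reduction produces the net saving.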
This book focuses on neuromorphic computing principles and
organization and on how to build fault-tolerant, scalable hardware
for medium- and large-scale spiking neural networks with learning
capabilities. In addition, the book comprehensively describes the
organization of, and how to design, a spike-based neuromorphic
system that performs spiking-neural-network communication,
computing, and adaptive learning for emerging AI applications. The
book begins with an overview of neuromorphic computing systems and
explores the fundamental concepts of artificial neural networks.
Next, we discuss artificial neurons and how they have evolved in
their representation of biological neuronal dynamics. Afterward, we
discuss implementing these neural networks in neuron models,
storage technologies, inter-neuron communication networks,
learning, and various design approaches. Then come the fundamental
design principles for building an efficient neuromorphic system in
hardware. The challenges that need to be solved toward
building a spiking neural network architecture with many synapses
are discussed. Learning in neuromorphic computing systems and the
major emerging memory technologies that promise to enable
neuromorphic computing are then covered. A particular chapter of
this book is
dedicated to the circuits and architectures used for communication
in neuromorphic systems. In particular, the Network-on-Chip fabric
is introduced for receiving and transmitting spikes following the
Address Event Representation (AER) protocol and the memory-access
method. In addition, the interconnect design principle is
covered to help understand the overall concept of on-chip and
off-chip communication. Advanced on-chip interconnect technologies,
including silicon-photonic three-dimensional interconnects and
fault-tolerant routing algorithms, are also presented. The book also
covers the main threats to reliability and discusses several
recovery methods for multicore neuromorphic systems. This is
important for reliable processing in several embedded neuromorphic
applications. A reconfigurable design approach that supports
multiple target applications via dynamic reconfigurability, network
topology independence, and network expandability is also described
in the subsequent chapters. The book ends with a case study of a
real hardware-software design of a reliable three-dimensional
digital neuromorphic processor, built with 3D-ICs to mirror the
biological brain's three-dimensional structure. The platform enables
high integration density and low spike delay for spiking networks
and features a scalable design. We present methods for fault
detection and recovery in a neuromorphic system as well.
Neuromorphic Computing Principles and Organization is an excellent
resource for researchers, scientists, graduate students, and
hardware-software engineers dealing with the ever-increasing
demands for fault tolerance, scalability, and low power consumption.
It is also an excellent resource for teaching advanced
undergraduate and graduate students about the fundamental concepts,
organization, and actual hardware-software design of
reliable neuromorphic systems with learning and fault-tolerance
capabilities.
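The Address Event Representation mentioned above transmits each spike as the address of the neuron that fired, typically paired with a timestamp, so only active neurons consume interconnect bandwidth. The sketch below illustrates the idea; the word packing is an assumption for illustration, not the book's or any chip's actual wire format.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class SpikeEvent:
    """One AER-style event: which neuron fired, and when (units are
    arbitrary here, e.g. microsecond ticks)."""
    neuron_address: int
    timestamp: int

def encode(events: List[SpikeEvent], addr_bits: int = 16) -> List[int]:
    """Pack each event into a single word: timestamp in the high bits,
    neuron address in the low addr_bits (illustrative packing only)."""
    return [(e.timestamp << addr_bits) | e.neuron_address for e in events]

def decode(words: List[int], addr_bits: int = 16) -> List[SpikeEvent]:
    """Recover (address, timestamp) pairs from packed words."""
    mask = (1 << addr_bits) - 1
    return [SpikeEvent(w & mask, w >> addr_bits) for w in words]

spikes = [SpikeEvent(neuron_address=7, timestamp=100),
          SpikeEvent(neuron_address=42, timestamp=101)]
packed = encode(spikes)
assert decode(packed) == spikes  # round-trip is lossless
```

Because a silent neuron produces no events at all, this event-driven encoding is what lets a Network-on-Chip fabric carry traffic proportional to spiking activity rather than to network size.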