The transition towards exascale computing has resulted in major
transformations in computing paradigms. The need to analyze and
respond to such vast volumes of data has led to the adoption of
machine learning (ML) and deep learning (DL) methods in a wide
range of applications. A major challenge is moving data to and from
memory without running into the memory-wall bottleneck. To address
this concern,
in-memory computing (IMC) and supporting frameworks have been
introduced. In-memory computing offers ultra-low-power, high-density
embedded storage. Resistive Random-Access Memory
(ReRAM) technology appears to be the most promising IMC solution due
to its low leakage power, reduced power consumption, and small
hardware footprint, as well as its compatibility with CMOS
technology, which is widely used in industry. In this book, the
authors introduce ReRAM techniques for performing distributed
computing with IMC accelerators, present ReRAM-based IMC
architectures that can accelerate ML and data-intensive
applications, and describe strategies for mapping ML designs onto
hardware accelerators. The book serves as a bridge between
researchers in the computing domain (algorithm designers for ML and
DL) and computing hardware designers.