Distributed-memory multiprocessing systems (DMS), such as Intel's
hypercubes, the Paragon, Thinking Machines' CM-5, and the Meiko
Computing Surface, have rapidly gained user acceptance and promise
to deliver the computing power required to solve the grand
challenge problems of Science and Engineering. These machines are
relatively inexpensive to build and are potentially scalable to
large numbers of processors.
However, they are difficult to
program: memory is non-uniform, so local accesses are much faster
than transfers of non-local data via message-passing operations,
and the locality of algorithms must be exploited to achieve
acceptable performance. The management of data, with the twin
goals of spreading the computational workload and minimizing the
delays incurred when a processor has to wait for non-local data,
becomes of paramount importance. When a code is parallelized by
hand, the programmer
must distribute the program's work and data to the processors which
will execute it. A common approach exploits the regularity of most
numerical computations: the so-called Single Program Multiple Data
(SPMD), or data-parallel, model of computation. Under this model,
each data array in the original program is distributed across the
processors, establishing an ownership relation, and a computation
that defines a data item is performed by the processor that owns
it.
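As a concrete illustration, the following minimal sketch, written in
C with MPI as one plausible message-passing layer for such machines,
shows the SPMD pattern: every processor runs the same program,
derives from its rank the block of a global array that it owns, and
executes only the assignments whose left-hand sides it owns. The
array size N, the block distribution, and the assignment itself are
illustrative assumptions, not taken from any particular application.

    #include <stdio.h>
    #include <mpi.h>

    #define N 16                 /* illustrative global array size */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Block distribution: processor `rank` owns the contiguous
           slice a[lo..hi-1] of the global array (this sketch assumes
           nprocs divides N evenly). */
        int chunk = N / nprocs;
        int lo = rank * chunk;
        int hi = lo + chunk;

        double a[N];             /* each processor touches only its block */

        /* Owner-computes rule: execute exactly those assignments whose
           left-hand side this processor owns; since the right-hand side
           here is purely local, no messages are needed for this loop. */
        for (int i = lo; i < hi; i++)
            a[i] = 2.0 * i;

        printf("processor %d owns a[%d..%d]\n", rank, lo, hi - 1);

        MPI_Finalize();
        return 0;
    }

When a statement instead reads a neighbour's data, for example a
stencil such as a[i-1] + a[i+1] evaluated at a block boundary, the
owning processor must first obtain the non-local operands by message
passing; these are precisely the delays that a good data
distribution tries to minimize.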