Parallel computation on cluster architectures has become the most
common approach to developing high-performance scientific
applications. The Message Passing Interface (MPI) is the
message-passing library most widely used to provide communication
in clusters. During the I/O phase, processes frequently access a
common data set by issuing a large number of small non-contiguous
I/O requests, which can create bottlenecks in the I/O subsystem.
These bottlenecks are even more severe in commodity clusters, which
usually rely on commercial networks. Scalability is another
important concern in cluster systems: as the number of processors
grows, the network may saturate, further increasing latencies.
Because communication-intensive parallel applications spend a
significant share of their total execution time exchanging data
between processes, these problems can degrade performance not only
in the I/O subsystem but also in the communication phase. We
therefore conclude that techniques are needed to improve the
performance of both the communication and I/O subsystems.
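The small non-contiguous request pattern described above is commonly mitigated by coalescing adjacent requests into fewer, larger ones before they reach the I/O subsystem (the idea behind collective I/O optimizations such as two-phase I/O in MPI-IO implementations). A minimal sketch of such coalescing, written here in plain Python and independent of any MPI library, might look like:

```python
def coalesce(requests):
    """Merge a list of (offset, length) byte-range requests.

    Requests that are adjacent or overlapping after sorting by
    offset are combined into a single larger request, reducing
    the number of I/O operations issued to the storage system.
    """
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Extend the previous request to cover this one.
            last_off, last_len = merged[-1]
            new_end = max(last_off + last_len, off + length)
            merged[-1] = (last_off, new_end - last_off)
        else:
            merged.append((off, length))
    return merged

# Four small 4-byte requests, three of them contiguous,
# collapse into two larger requests.
print(coalesce([(0, 4), (16, 4), (4, 4), (8, 4)]))
```

A real MPI-IO layer performs this aggregation across processes as well, but the per-process principle is the same: fewer, larger requests keep the I/O subsystem from becoming the bottleneck the abstract describes.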