Although the origins of parallel computing go back to the last
century, it was only in the 1970s that parallel and vector
computers became available to the scientific community. The first
of these machines, the 64-processor Illiac IV and the vector
computers built by Texas Instruments, Control Data Corporation, and
then Cray Research Corporation, had a somewhat limited impact. They
were few in number and available mostly to workers in a few
government laboratories. By now, however, the trickle has become a
flood. There are over 200 large-scale vector computers now
installed, not only in government laboratories but also in
universities and in an increasing diversity of industries.
Moreover, the National Science Foundation's Supercomputing Centers
have made large vector computers widely available to the academic
community. In addition, smaller, very cost-effective vector
computers are being manufactured by a number of companies.
Parallelism in computers has also progressed rapidly. The largest
supercomputers now consist of several vector processors working in
parallel. Although the number of processors in such machines is
still relatively small (up to 8), it is expected that an increasing
number of processors will be added in the near future (to a total
of 16 or 32). Moreover, there are myriad research projects to
build machines with hundreds, thousands, or even more processors.
Indeed, several companies are now selling parallel machines, some
with hundreds, or even tens of thousands, of processors.
Linear algebra and matrix theory are essentially synonymous terms
for an area of mathematics that has become one of the most useful
and pervasive tools in a wide range of disciplines. It is also a
subject of great mathematical beauty. In consequence of both of
these facts, linear algebra has increasingly been brought into
lower levels of the curriculum, either in conjunction with the
calculus or separate from it but at the same level. A large and
still growing number of textbooks has been written to satisfy this
need, aimed at students at the junior, sophomore, or even freshman
levels. Thus, most students now obtaining a bachelor's degree in
the sciences or engineering have had some exposure to linear
algebra. But rarely, even when solid courses are taken at the
junior or senior levels, do these students have an adequate working
knowledge of the subject to be useful in graduate work or in
research and development activities in government and industry. In
particular, most elementary courses stop at the point of canonical
forms, so that while the student may have "seen" the Jordan and
other canonical forms, there is usually little appreciation of
their usefulness. And there is almost never time in the elementary
courses to deal with more specialized topics like nonnegative
matrices, inertia theorems, and so on. In consequence, many
graduate courses in mathematics, applied mathematics, or
applications develop certain parts of matrix theory as needed.
In addition to being an introduction to C++, this text provides clear explanations of the basics of numerical methods, and is unique for its coverage of the numerical methods used in scientific and engineering computation. It also includes a general discussion of some of the basic paradigms for writing good programs and detecting errors. The result is a brief yet comprehensive treatment of the subject.
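As a rough illustration of the kind of routine such a text covers, here is a minimal C++ sketch of the composite trapezoidal rule for numerical integration; the function names and the test integrand are assumptions for this example, not taken from the book.

```cpp
#include <cmath>
#include <iostream>

// Integrand for the test problem (an assumption for this sketch).
static double f(double x) { return std::sin(x); }

// Composite trapezoidal rule: approximate the integral of g over [a, b]
// using n equal subintervals.
static double trapezoid(double (*g)(double), double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (g(a) + g(b));  // endpoints carry half weight
    for (int i = 1; i < n; ++i)
        sum += g(a + i * h);
    return h * sum;
}

int main() {
    const double pi = std::acos(-1.0);
    // Integrate sin(x) over [0, pi]; the exact value is 2.
    std::cout << "approx = " << trapezoid(f, 0.0, pi, 100)
              << " (exact = 2)\n";
}
```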
Scientific Computing and Differential Equations: An Introduction to
Numerical Methods is an excellent complement to Introduction to
Numerical Methods by Ortega and Poole. The book emphasizes the
importance of solving differential equations on a computer, which
comprises a large part of what has come to be called scientific
computing. It reviews modern scientific computing, outlines its
applications, and places the subject in a larger context.
This book is appropriate for upper undergraduate courses in
mathematics, electrical engineering, and computer science; it is
also well-suited to serve as a textbook for numerical differential
equations courses at the graduate level.
* An introductory chapter gives an overview of scientific
computing, indicating its important role in solving differential
equations, and placing the subject in its larger context
* Contains an introduction to numerical methods for both ordinary
and partial differential equations
* Concentrates on ordinary differential equations, especially
boundary-value problems
* Contains most of the main topics for a first course in numerical
methods, and can serve as a text for this course
* Includes material for junior/senior-level undergraduate courses
in math and computer science, plus material for graduate-level
numerical differential equations courses for engineering/science
students
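As a concrete, hedged illustration of the kind of method such a first course begins with, here is a minimal C++ sketch of the forward Euler method for an initial-value problem; the model equation y' = -2y is an assumption chosen for this example, not taken from the book.

```cpp
#include <cmath>
#include <iostream>

// Right-hand side of the model problem y' = -2y (an assumption for
// this sketch); the exact solution is y(t) = exp(-2t).
static double f(double /*t*/, double y) { return -2.0 * y; }

int main() {
    double t = 0.0, y = 1.0;   // initial condition y(0) = 1
    const double h = 0.01;     // step size
    const int steps = 100;     // integrate out to t = 1
    for (int i = 0; i < steps; ++i) {
        y += h * f(t, y);      // Euler update: y_{n+1} = y_n + h*f(t_n, y_n)
        t += h;
    }
    std::cout << "Euler y(1) = " << y
              << ", exact = " << std::exp(-2.0) << "\n";
}
```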
Describes a selection of important parallel algorithms for matrix
computations. Reviews the current status and provides an overall
perspective of parallel algorithms for solving problems arising in
the major areas of numerical linear algebra, including (1) direct
solution of dense, structured, or sparse linear systems, (2) dense
or structured least squares computations, (3) dense or structured
eigenvalue and singular value computations, and (4) rapid elliptic
solvers. The book emphasizes computational primitives whose
efficient execution on parallel and vector computers is essential
to obtain high-performance algorithms. Consists of two
comprehensive survey papers on important parallel algorithms for
solving problems arising in the major areas of numerical linear
algebra - direct solution of linear systems, least squares
computations, eigenvalue and singular value computations, and rapid
elliptic solvers, plus an extensive up-to-date bibliography (2,000
items) on related research.
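To make "computational primitive" concrete, here is a minimal, hedged C++ sketch of one such kernel, a dense matrix-vector product with its row loop parallelized via an OpenMP pragma; OpenMP is chosen here purely for illustration and is not necessarily the framework treated in the book.

```cpp
#include <vector>

// Dense matrix-vector product y = A*x with A stored row-major as a
// flat n*n array. The iterations of the outer row loop are independent,
// so they can be distributed across processors; this mirrors the kind
// of primitive whose parallel efficiency the surveys analyze.
void matvec(const std::vector<double>& A,
            const std::vector<double>& x,
            std::vector<double>& y, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int j = 0; j < n; ++j)
            sum += A[i * n + j] * x[j];  // dot product of row i with x
        y[i] = sum;
    }
}
```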