Supercomputers
The Supermen

"After a rare speech at the National Center for Atmospheric Research in Boulder, Colorado, in 1976, programmers in the audience had suddenly fallen silent when Cray offered to answer questions. He stood there for several minutes, waiting for their queries, but none came. When he left, the head of NCAR's computing division chided the programmers. 'Why didn't someone raise a hand?' After a tense moment, one programmer replied, 'How do you talk to God?'" - from The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer

"They were building revolutionary, not evolutionary, machines... They were blazing a trail - molding science into a product... The freedom to create was extraordinary." - from The Supermen

In 1951, a soft-spoken, skinny young man fresh from the University of Minnesota took a job in an old glider factory in St. Paul. Computer technology would never be the same, for the glider factory was the home of Engineering Research Associates and the recent college grad was Seymour R. Cray. During his extraordinary career, Cray would be alternately hailed as "the Albert Einstein," "the Thomas Edison," and "the Evel Knievel" of supercomputing. At various times, he was all three: a master craftsman, inventor, and visionary whose disdain for the rigors of corporate life became legendary, and whose achievements remain unsurpassed.

The Supermen is award-winning writer Charles J. Murray's exhilarating account of how the brilliant, some would say eccentric, Cray and his gifted colleagues blazed the trail that led to the Information Age. This is a thrilling, real-life scientific adventure, deftly capturing the daring, seat-of-the-pants spirit of the early days of computer development, as well as an audacious, modern-day David and Goliath battle, in which a group of maverick engineers beat out IBM to become the runaway industry leaders.

Murray's briskly paced narrative begins during the final months of the Second World War, when men such as William Norris and Howard Engstrom began researching commercial applications for the code-breaking machines of wartime, and charts the rise of technological research in response to the Cold War. In those days computers were huge, cumbersome machines with names like Demon and Atlas. When Cray came on board, things quickly changed.

Drawing on in-depth interviews, including the last interview Cray completed before his untimely and tragic death, Murray provides rare insight into Cray's often controversial approach to his work. Cray could spend exhausting hours in single-minded pursuit of a particular goal, and Murray takes us behind the scenes to witness late-night brainstorming sessions and miraculous eleventh-hour fixes. Cray's casual, often hostile attitude toward management, although alienating to some, was more than a passionate need for independence; he simply thought differently from others. Seymour Cray saw farther and faster, and trusted his vision with an unassailable confidence. Yet he inspired great loyalty as well, making it possible for his own start-up company, Cray Research, to bring the 54,000-employee conglomerate of Control Data to its knees.

Ultimately, The Supermen is a story of genius, and how a unique set of circumstances - a small-team approach, corporate detachment, and a government-backed marketplace - enabled that genius to flourish. In an atmosphere of unparalleled freedom and creativity, Seymour Cray's vision and drive fueled a technological revolution from which America would emerge as the world's leader in supercomputing.
VECPAR is a series of international conferences dedicated to the promotion and advancement of all aspects of high-performance computing for computational science, as an industrial technique and academic discipline, extending the frontier of both the state of the art and the state of practice. The audience for and participants in VECPAR are seen as researchers in academic departments, government laboratories and industrial organizations. There is now a permanent website for the series, http://vecpar.fe.up.pt, where the history of the conferences is described. The sixth edition of VECPAR was the first time the conference was celebrated outside Porto, at the Universidad Politecnica de Valencia (Spain), June 28-30, 2004. The whole conference programme consisted of 6 invited talks, 61 papers and 26 posters, out of 130 contributions that were initially submitted. The major themes were divided into large-scale numerical and non-numerical simulations, parallel and grid computing, biosciences, numerical algorithms, data mining and visualization. This post-conference book includes the best 48 papers and 5 invited talks presented during the three days of the conference. The book is organized into 6 chapters, with a prominent position reserved for the invited talks and the Best Student Paper. As a whole it appeals to a wide research community, from those involved in the engineering applications to those interested in the actual details of the hardware or software implementations, in line with what, in these days, tends to be considered as computational science and engineering (CSE).
Message from the Steering Chair: It was my pleasure to welcome attendees to the 10th International Conference on High-Performance Computing and to Hyderabad, an emerging center of IT activities in India. We are indebted to Timothy Pinkston for his superb efforts as program chair in organizing an excellent technical program. We received a record number of submissions this year. Over the past year, I discussed the meeting details with Timothy. I am grateful to him for his thoughtful inputs. Many volunteers helped to organize the meeting. In addition, I was glad to welcome Rajesh Gupta as Keynote Chair, Atul Negi as Student Scholarships Chair, and Sushil Prasad as Proceedings Chair. I look forward to their contributions for the continued success of the meeting series. Sushil Prasad did an excellent job in bringing out these proceedings. Kamal Karlapalem assisted us with local arrangements at IIIT, Hyderabad. Dheeraj Sanghi took on the responsibility of focused publicity for the meeting within India. Vijay Keshav of Intel India, though not listed as a volunteer, provided me with many pointers for bringing the India-based high-performance computing vendors to the meeting. I would like to thank M. Vidyasagar for agreeing to host the meeting in Hyderabad and for his assistance with the local arrangements. Continuing the tradition set at last year's meeting, several workshops were organized by volunteers. These workshops were coordinated by C.P. Ravikumar. He also volunteered to put together the workshop proceedings, and Sushil Prasad assisted him in this.
The 5th International Symposium on High Performance Computing (ISHPC-V) was held in Odaiba, Tokyo, Japan, October 20-22, 2003. The symposium was thoughtfully planned, organized, and supported by the ISHPC Organizing Committee and its collaborating organizations. The ISHPC-V program included two keynote speeches, several invited talks, two panel discussions, and technical sessions covering theoretical and applied research topics in high-performance computing and representing both academia and industry. One of the regular sessions highlighted the research results of the ITBL project (IT-based research laboratory, http://www.itbl.riken.go.jp/). ITBL is a Japanese national project started in 2001 with the objective of realizing a virtual joint research environment using information technology. ITBL aims to connect 100 supercomputers located in main Japanese scientific research laboratories via high-speed networks. A total of 58 technical contributions from 11 countries were submitted to ISHPC-V. Each paper received at least three peer reviews. After a thorough evaluation process, the program committee selected 14 regular (12-page) papers for presentation at the symposium. In addition, several other papers with favorable reviews were recommended for a poster session presentation. They are also included in the proceedings as short (8-page) papers. The program committee gave a distinguished paper award and a best student paper award to two of the regular papers. The distinguished paper award was given for "Code and Data Transformations for Improving Shared Cache Performance on SMT Processors" by Dimitrios S. Nikolopoulos. The best student paper award was given for "Improving Memory Latency Aware Fetch Policies for SMT Processors" by Francisco J. Cazorla.
This book constitutes the thoroughly refereed post-proceedings of the 5th International Conference on High Performance Computing for Computational Science, VECPAR 2002, held in Porto, Portugal in June 2002. The 45 revised full papers presented together with 4 invited papers were carefully selected during two rounds of reviewing and improvement. The papers are organized in topical sections on fluids and structures, data mining, computing in chemistry and biology, problem solving environments, computational linear and non-linear algebra, cluster computing, imaging, and software tools and environments.
This book constitutes the refereed proceedings of the 9th International Conference on High Performance Computing, HiPC 2002, held in Bangalore, India in December 2002. The 57 revised full contributed papers and 9 invited papers presented together with various keynote abstracts were carefully reviewed and selected from 145 submissions. The papers are organized in topical sections on algorithms, architecture, systems software, networks, mobile computing and databases, applications, scientific computation, embedded systems, and biocomputing.
This book constitutes the refereed proceedings of the 4th International Symposium on High Performance Computing, ISHPC 2002, held in Kansai Science City, Japan, in May 2002 together with the two workshops WOMPEI 2002 and HPF/HiWEP 2002. The 51 revised papers presented were carefully reviewed and selected for inclusion in the proceedings. The book is organized in topical sections on networks, architectures, HPC systems, Earth Simulator, OpenMP-WOMPEI 2002, and HPF-HiWEP 2002.
This book constitutes the refereed proceedings of the 8th International Conference on High Performance Computing, HiPC 2001, held in Hyderabad, India, in December 2001. The 29 revised full papers presented together with 5 keynote papers and 3 invited papers were carefully reviewed and selected from 108 submissions. The papers are organized in topical sections on algorithms, applications, architecture, systems software, communications networks, and challenges in networking.
This book constitutes the refereed proceedings of the 7th International Conference on High Performance Computing, HiPC 2000, held in Bangalore, India in December 2000. The 46 revised papers presented together with five invited contributions were carefully reviewed and selected from a total of 127 submissions. The papers are organized in topical sections on system software, algorithms, high-performance middleware, applications, cluster computing, architecture, applied parallel processing, networks, wireless and mobile communication systems, and large scale data mining.
I wish to welcome all of you to the International Symposium on High Performance Computing 2000 (ISHPC 2000) in the megalopolis of Tokyo. After two great successes with ISHPC'97 (Fukuoka, November 1997) and ISHPC'99 (Kyoto, May 1999), many people requested that the symposium be held in the capital of Japan, and we have agreed. I am very pleased to serve as Conference Chair at a time when high performance computing (HPC) has a significant influence on computer science and technology. In particular, HPC has had and will continue to have a significant impact on the advanced technologies of the "IT" revolution. The many conferences and symposiums that are held on the subject around the world are an indication of the importance of this area and the interest of the research community. One of the goals of this symposium is to provide a forum for the discussion of all aspects of HPC (from system architecture to real applications) in a more informal and personal fashion. Today we are delighted to have this symposium, which includes excellent invited talks, tutorials and workshops, as well as high quality technical papers.
These are the proceedings of the Sixth International Conference on High Performance Computing (HiPC'99), held December 17-20 in Calcutta, India. The meeting serves as a forum for presenting current work by researchers from around the world as well as highlighting activities in Asia in the high performance computing area. The meeting emphasizes both the design and the analysis of high performance computing systems and their scientific, engineering, and commercial applications. Topics covered in the meeting series include: parallel algorithms, scientific computation, parallel architectures, visualization, parallel languages and compilers, network and cluster based computing, distributed systems, signal and image processing systems, programming environments, supercomputing applications, memory systems, Internet and WWW-based computing, multimedia and high speed networks, and scalable servers. We would like to thank Alfred Hofmann and Ruth Abraham of Springer-Verlag for their excellent support in bringing out the proceedings. The detailed messages from the steering committee chair, general co-chair and program chair pay tribute to numerous volunteers who helped us in organizing the meeting. - October 1999, Viktor K. Prasanna, Bhabani Sinha, Prithviraj Banerjee

Message from the Steering Chair: It is my pleasure to welcome you to the Sixth International Conference on High Performance Computing. I hope you enjoy the meeting, the rich cultural heritage of Calcutta, as well as the mother Ganges, "the river of life."
This book constitutes the refereed proceedings of the Second International Symposium on High-Performance Computing, ISHPC'99, held in Kyoto, Japan in May 1999.
High Performance Computing: Programming and Applications presents techniques that address new performance issues in the programming of high performance computing (HPC) applications. Omitting tedious details, the book discusses hardware architecture concepts and programming techniques that are the most pertinent to application developers for achieving high performance. Even though the text concentrates on C and Fortran, the techniques described can be applied to other languages, such as C++ and Java. Drawing on their experience with chips from AMD and systems, interconnects, and software from Cray Inc., the authors explore the problems that create bottlenecks in attaining good performance. They cover techniques that pertain to each of the three levels of parallelism:

- Message passing between the nodes
- Shared memory parallelism on the nodes or the multiple instruction, multiple data (MIMD) units on the accelerator
- Vectorization on the inner level

After discussing architectural and software challenges, the book outlines a strategy for porting and optimizing an existing application to a large massively parallel processor (MPP) system. With a look toward the future, it also introduces the use of general purpose graphics processing units (GPGPUs) for carrying out HPC computations. A companion website at www.hybridmulticoreoptimization.com contains all the examples from the book, along with updated timing results on the latest released processors.
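To make the three levels concrete, here is a minimal sketch, not taken from the book, of how they typically combine in a single C source file: MPI for message passing across nodes, OpenMP for shared-memory threading within a node, and a simple inner loop the compiler can vectorize. The file name and compile line are assumptions for illustration.

/* Illustrative sketch: the three levels of parallelism in one file.
 * Assumed compile line: mpicc -fopenmp -O3 levels.c -o levels */
#include <mpi.h>
#include <stdio.h>

#define N 1000000
static double a[N], b[N];   /* static, so b starts zero-initialized */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* level 1: message passing */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank owns a contiguous block of the arrays */
    int lo = rank * (N / size);
    int hi = (rank == size - 1) ? N : lo + N / size;

    double local = 0.0, global = 0.0;
    /* level 2: shared-memory threads on the node */
    #pragma omp parallel for reduction(+:local)
    for (int i = lo; i < hi; i++) {
        a[i] = 2.0 * b[i] + 1.0;             /* level 3: vectorizable body */
        local += a[i];
    }

    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", global);
    MPI_Finalize();
    return 0;
}

Each rank works on its own block, threads split that block among cores, and the unit-stride inner loop is the part a vectorizing compiler maps onto SIMD instructions.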
The combination of fast, low-latency networks and high-performance, distributed tools for mathematical software has resulted in widespread, affordable scientific computing facilities. Practitioners working in the fields of computer communication networks, distributed computing, computational algebra and numerical analysis have been brought together to contribute to this volume and explore the emerging distributed and parallel technology in a scientific environment. This collection includes surveys and original research on both software infrastructure for parallel applications and hardware and architecture infrastructure. Among the topics covered are switch-based high-speed networks, ATM over local and wide area networks, network performance, application support, finite element methods, eigenvalue problems, invariant subspace decomposition, QR factorization and Todd-Coxeter coset enumeration.
Industrial Applications of High-Performance Computing: Best Global Practices offers a global overview of high-performance computing (HPC) for industrial applications, along with a discussion of software challenges, business models, access models (e.g., cloud computing), public-private partnerships, simulation and modeling, visualization, big data analysis, and governmental and industrial influence. Featuring the contributions of leading experts from 11 different countries, this authoritative book:

- Provides a brief history of the development of the supercomputer
- Describes the supercomputing environments of various government entities in terms of policy and service models
- Includes a case study section that addresses more subtle and technical aspects of industrial supercomputing
- Shows how access to supercomputing matters, and how supercomputing can be used to solve large-scale and complex science and engineering problems
- Emphasizes the need for collaboration between companies, political organizations, government agencies, and entire nations

Industrial Applications of High-Performance Computing: Best Global Practices supplies computer engineers and researchers with a state-of-the-art supercomputing reference. This book also keeps policymakers and industrial decision-makers informed about the economic impact of these powerful technological investments.
This book constitutes the strictly refereed post-workshop proceedings of the International Workshop on Job Scheduling Strategies for Parallel Processing, held in conjunction with the IPPS '96 symposium in Honolulu, Hawaii, in April 1996.
"Ask not what your compiler can do for you, ask what you can do for your compiler." --John Levesque, Director of Cray's Supercomputing Centers of Excellence The next decade of computationally intense computing lies with more powerful multi/manycore nodes where processors share a large memory space. These nodes will be the building block for systems that range from a single node workstation up to systems approaching the exaflop regime. The node itself will consist of 10's to 100's of MIMD (multiple instruction, multiple data) processing units with SIMD (single instruction, multiple data) parallel instructions. Since a standard, affordable memory architecture will not be able to supply the bandwidth required by these cores, new memory organizations will be introduced. These new node architectures will represent a significant challenge to application developers. Programming for Hybrid Multi/Manycore MPP Systems attempts to briefly describe the current state-of-the-art in programming these systems, and proposes an approach for developing a performance-portable application that can effectively utilize all of these systems from a single application. The book starts with a strategy for optimizing an application for multi/manycore architectures. It then looks at the three typical architectures, covering their advantages and disadvantages. The next section of the book explores the other important component of the target-the compiler. The compiler will ultimately convert the input language to executable code on the target, and the book explores how to make the compiler do what we want. The book then talks about gathering runtime statistics from running the application on the important problem sets previously discussed. How best to utilize available memory bandwidth and virtualization is covered next, along with hybridization of a program. The last part of the book includes several major applications, and examines future hardware advancements and how the application developer may prepare for those advancements.
Supercomputer technologies have evolved rapidly since the first commercial supercomputer, the CRAY-1, was introduced in 1976. In the early 1980's three Japanese supercomputers appeared, and Cray Research delivered the X-MP series. These machines, including the later-announced CRAY-2 and NEC SX series, created one generation of supercomputers, and the market spread dramatically. The peak performance was higher than 1 GFLOPS and the compiler improvement was remarkable. There appeared many articles and books that described their architecture and their performance on several benchmark problems. The late 1980's saw a new generation of supercomputers. Following the CRAY Y-MP and Hitachi S-820 delivered in 1988, NEC announced the SX-3 and Fujitsu announced the VP2000 series in 1990. In addition, Cray Research announced the Y-MP C-90 late in 1991. The peak performance of these machines reached several to a few tens of GFLOPS. The hardware characteristics of these machines are known, but their practical performance has not been well documented so far. Computational Fluid Dynamics (CFD) is one of the important research fields that have been progressing with the growth of supercomputers. Today's fluid dynamics research cannot be discussed without supercomputers, and since CFD is one of the important users of supercomputers, future development of supercomputers has to take the requirements of CFD into account. There are many benchmark reports available today. However, they mostly use so-called kernels. For fluid dynamics researchers, benchmark tests on real fluid dynamics codes are necessary.
Supercomputing and networking are of great importance in the field of computer chemistry. In this volume some fundamentals are discussed; new results are presented in the parallelization of a direct SCF on workstations and of several application programs, in the long time dynamics of proteins and for the IGLO method. A general overview of quantum chemical calculations of small molecules is included. That computational methods complement experimental approaches is demonstrated with short-lived intermediates (carbocations, alkyl radicals) and the 3-D structure of saruplase domains.
Past, Present, Parallel is a survey of the current state of the parallel processing industry. In the early 1980s, parallel computers were generally regarded as academic curiosities whose natural environment was the research laboratory. Today, parallelism is being used by every major computer manufacturer, although in very different ways, to produce increasingly powerful and cost-effective machines. The first chapter introduces the basic concepts of parallel computing; the subsequent chapters cover different forms of parallelism, including descriptions of vector supercomputers, SIMD computers, shared memory multiprocessors, hypercubes, and transputer-based machines. Each section concentrates on a different manufacturer, detailing its history and company profile, the machines it currently produces, the software environments it supports, the market segment it is targeting, and its future plans. Supplementary chapters describe some of the companies which have been unsuccessful, and discuss a number of the common software systems which have been developed to make parallel computers more usable. The appendices describe the technologies which underpin parallelism. Past, Present, Parallel is an invaluable reference work, providing up-to-date material for commercial computer users and manufacturers, and for researchers and postgraduate students with an interest in parallel computing.
Supercomputer and Chemistry is the name of a series of seminars, which the Industrieanlagen-Betriebsgesellschaft (IABG), Ottobrunn near Munich, started in 1987. This third meeting stressed the fields of computational science, supercomputing and computer-aided chemistry. Moreover, the current situation in the supercomputer market as a whole, particularly in Germany, and the trends to be expected were discussed. The new generation of graphic workstations such as Stardent have the power of minisupercomputers. Some performance results are presented and comparisons with other machines are made. One of the most exciting prospects for improving the performance of computers is parallel processing. Especially, transputers seem to give unlimited computing speed, in effect a Cray on your desk. We examine the technology of transputers and their usage in industrial and research projects. The user will have a formidable task in parallelizing software. The second part of the seminar addressed the usage of mainframes and supercomputers in the chemical industry. The interplay of experiments and computer-aided drug design was highlighted by speakers from Sandoz, Boehringer-Ingelheim and Merck. There is still one open question when using numerical methods, i.e. whether all the relevant and important conformations have been obtained. Certainly the computational results have to be checked and verified against experimental results. Furthermore, the benefits, disadvantages and the reduction in costs and time in using supercomputers in pharmaceutical research were discussed.
Although the highly anticipated petascale computers of the near future will perform an order of magnitude faster than today's quickest supercomputer, the scaling up of algorithms and applications for this class of computers remains a tough challenge. From scalable algorithm design for massive concurrency to performance analyses and scientific visualization, Petascale Computing: Algorithms and Applications captures the state of the art in high-performance computing algorithms and applications. Featuring contributions from the world's leading experts in computational science, this edited collection explores the use of petascale computers for solving the most difficult scientific and engineering problems of the current century. Covering a wide range of important topics, the book illustrates how petascale computing can be applied to space and Earth science missions, biological systems, weather prediction, climate science, disasters, black holes, and gamma ray bursts. It details the simulation of multiphysics, cosmological evolution, molecular dynamics, and biomolecules. The book also discusses computational aspects that include the Uintah framework, Enzo code, multithreaded algorithms, petaflops, performance analysis tools, multilevel finite element solvers, finite element code development, Charm++, and the Cactus framework. Supplying petascale tools, programming methodologies, and an eight-page color insert, this volume addresses the challenging problems of developing application codes that can take advantage of the architectural features of the new petascale systems in advance of their first deployment.
Modern computing relies on future and emergent technologies which have been conceived via interaction between computer science, engineering, chemistry, physics and biology. This highly interdisciplinary book presents advances in the fields of parallel, distributed and emergent information processing and computation. The book represents major breakthroughs in parallel quantum protocols, elastic cloud servers, structural properties of interconnection networks, internet of things, morphogenetic collective systems, swarm intelligence and cellular automata, unconventionality in parallel computation, algorithmic information dynamics, localized DNA computation, graph-based cryptography, slime mold inspired nano-electronics and cytoskeleton computers. Features:

- Truly interdisciplinary, spanning computer science, electronics, mathematics and biology
- Covers widely popular topics of future and emergent computing technologies, cloud computing, parallel computing, DNA computation, security and network analysis, cryptography, and theoretical computer science
- Provides unique chapters written by top experts in theoretical and applied computer science, information processing and engineering

From Parallel to Emergent Computing provides a visionary statement on how computing will advance in the next 25 years and what new fields of science will be involved in computing engineering. This book is a valuable resource for computer scientists working today, and in years to come.
In view of the growing presence and popularity of multicore and manycore processors, accelerators, and coprocessors, as well as clusters using such computing devices, the development of efficient parallel applications has become a key challenge to be able to exploit the performance of such systems. This book covers the scope of parallel programming for modern high performance computing systems. It first discusses selected and popular state-of-the-art computing devices and systems available today. These include multicore CPUs, manycore (co)processors, such as Intel Xeon Phi, accelerators, such as GPUs, and clusters, as well as programming models supported on these platforms. It next introduces parallelization through important programming paradigms, such as master-slave, geometric Single Program Multiple Data (SPMD) and divide-and-conquer. The practical and useful elements of the most popular and important APIs for programming parallel HPC systems are discussed, including MPI, OpenMP, Pthreads, CUDA, OpenCL, and OpenACC. It also demonstrates, through selected code listings, how selected APIs can be used to implement important programming paradigms. Furthermore, it shows how the codes can be compiled and executed in a Linux environment. The book also presents hybrid codes that integrate selected APIs for potentially multi-level parallelization and utilization of heterogeneous resources, and it shows how to use modern elements of these APIs. Selected optimization techniques are also included, such as overlapping communication and computations implemented using various APIs. Features:

- Discusses the popular and currently available computing devices and cluster systems
- Includes typical paradigms used in parallel programs
- Explores popular APIs for programming parallel applications
- Provides code templates that can be used for implementation of paradigms
- Provides hybrid code examples allowing multi-level parallelization
- Covers the optimization of parallel programs
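For a flavor of one of those paradigms, the following is a minimal sketch, not taken from the book, of the master-slave pattern in MPI written in C: rank 0 deals out work items and collects results until the task pool is empty. The tags, task count, and the squaring "work" are placeholders.

/* Illustrative master-slave (task farm) sketch in MPI.
 * Assumed compile line: mpicc -O2 farm.c -o farm */
#include <mpi.h>
#include <stdio.h>

#define NTASKS   100
#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                             /* master: deal out tasks */
        int next = 0, done = 0;
        double result;
        MPI_Status st;
        /* prime each worker with one task */
        for (int w = 1; w < size && next < NTASKS; w++) {
            MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
            next++;
        }
        /* collect a result, hand the same worker the next task */
        while (done < next) {
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            done++;
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                next++;
            }
        }
        for (int w = 1; w < size; w++)           /* tell workers to stop */
            MPI_Send(&next, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
        printf("master: %d tasks completed\n", done);
    } else {                                     /* worker: loop until stop */
        int task;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double r = (double)task * task;      /* placeholder "work" */
            MPI_Send(&r, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}

The point of the pattern is dynamic load balancing: a worker that finishes early simply asks for more, which suits task pools with uneven run times better than a static geometric decomposition.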
From the Foreword: "The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere." - Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA

"This comprehensive summary of applications targeting Exascale at the three DoE labs is a must read." - Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan

"Numerical simulation is now a need in many fields of science, technology, and industry. The complexity of the simulated systems coupled with the massive use of data makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far permitted scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink the applications by taking energy consumption into account. These profound modifications require not only anticipation and sharing but also a paradigm shift in application design to ensure the sustainability of developments by guaranteeing a certain independence of the applications to the profound modifications of the architectures: it is the passage from optimal performance to the portability of performance. It is the challenge of this book to demonstrate by example the approach that one can adopt for the development of applications offering performance portability in spite of the profound changes of the computing architectures." - Christophe Calvin, CEA, Fundamental Research Division, Saclay, France

"Three editors, one from each of the High Performance Computer Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next generation exa-scale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exa-scale architectures which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues including actual coding examples." - Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA
You may like...

- Computational Science and High… by Egon Krause, Yurii I Shokin, … (Hardcover) R5,635
- High Performance Computing in Science… by Siegfried Wagner, Etc (Hardcover) R2,431
- Advances in High Performance Computing… by Lucio Grandinetti, Etc (Hardcover) R2,520
- High Performance Computing on Vector… by Thomas Boenisch, Sunil Tiyyagura, … (Hardcover) R2,791
- Introduction to Engineering and… by David E. Clough, Steven C. Chapra (Hardcover) R2,709
- Artificial Intelligence for Capital… by Syed Hasan Jafar, Hemachandran K, … (Hardcover) R2,866
- High Performance Computing in Science… by Egon Krause, Willi Jager (Hardcover) R2,437