Welcome to Loot.co.za!
This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.
This volume presents a selection of reports from scientific projects requiring high-end computing resources on the Hitachi SR8000-F1 supercomputer operated by Leibniz Computing Center in Munich. All reports were presented at the joint HLRB and KONWIHR workshop at the Technical University of Munich in October 2002. The following areas of scientific research are covered: Applied Mathematics, Biosciences, Chemistry, Computational Fluid Dynamics, Cosmology, Geosciences, High-Energy Physics, Informatics, Nuclear Physics, Solid-State Physics. Moreover, projects from interdisciplinary research within the KONWIHR framework (Competence Network for Scientific High Performance Computing in Bavaria) are also included. Each report summarizes its scientific background and discusses the results with special consideration of the quantity and quality of Hitachi SR8000 resources needed to complete the research.
The state of the art in supercomputing is summarized in this volume. The book presents selected results of the projects of the High Performance Computing Center Stuttgart (HLRS) for the year 2001. Together these contributions provide an overview of recent developments in high performance computing and simulation. Reflecting the close cooperation of the HLRS with industry, special emphasis has been put on the industrial relevance of the presented results and methods. The book therefore becomes a collection of showcases for an innovative usage of state-of-the-art modeling, novel numerical algorithms and the use of leading edge high performance computing systems in a GRID-like environment.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need not only to understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
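The ratio the blurb describes can be made concrete with a small sketch. This is an illustrative example only, not taken from the book; the function name and all numbers are hypothetical:

```python
# Hypothetical sketch: power efficiency as the ratio of performance to
# power consumption, as described in the blurb above. Numbers are invented.

def power_efficiency(flops_per_s: float, watts: float) -> float:
    """Return performance per watt (FLOP/s per W)."""
    return flops_per_s / watts

# A hypothetical node before and after an optimization that both raises
# performance and lowers power draw: efficiency improves on both counts.
before = power_efficiency(1.0e12, 500.0)  # 1 TFLOP/s at 500 W
after = power_efficiency(1.5e12, 400.0)   # 1.5 TFLOP/s at 400 W
print(after / before)  # combined improvement factor: 1.875
```

The point of the example mirrors the blurb's argument: efficiency moves with performance gains just as much as with power reduction.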
The book presents the state-of-the-art in high performance computing and simulation on modern supercomputer architectures. It covers trends in high performance application software development in general and specifically for parallel vector architectures. The contributions cover among others the field of computational fluid dynamics, physics, chemistry, and meteorology. Innovative application fields like reactive flow simulations and nano technology are presented.
This book is open access under a CC BY NC ND license. It addresses the most recent developments in cloud computing such as HPC in the Cloud, heterogeneous cloud, self-organising and self-management, and discusses the business implications of cloud computing adoption. Establishing the need for a new architecture for cloud computing, it discusses a novel cloud management and delivery architecture based on the principles of self-organisation and self-management. This focus shifts the deployment and optimisation effort from the consumer to the software stack running on the cloud infrastructure. It also outlines validation challenges and introduces a novel generalised extensible simulation framework to illustrate the effectiveness, performance and scalability of self-organising and self-managing delivery models on hyperscale cloud infrastructures. It concludes with a number of potential use cases for self-organising, self-managing clouds and the impact on those businesses.
For the fourth time, the Leibniz Supercomputing Centre (LRZ) and the Competence Network for Technical, Scientific High Performance Computing in Bavaria (KONWIHR) publish the results from scientific projects conducted on the computer systems HLRB I and II (High Performance Computer in Bavaria). This book reports the research carried out on the HLRB systems within the last three years and compiles the proceedings of the Third Joint HLRB and KONWIHR Result and Reviewing Workshop (3rd and 4th December 2007) in Garching. In 2000, HLRB I was the first system in Europe capable of performing more than one Teraflop/s, or one trillion floating point operations per second. In 2006 it was replaced by HLRB II. After a substantial upgrade it now achieves a peak performance of more than 62 Teraflop/s. To install and operate this powerful system, LRZ had to move to its new facilities in Garching. However, the situation regarding the need for more computation cycles has not changed much since 2000. The demand for higher performance is still present, a trend that is likely to continue for the foreseeable future. Other resources like memory and disk space are currently in sufficient abundance on this new system.
The book discusses some key scientific and technological developments in high performance computing, identifies significant trends, and defines desirable research objectives. It covers general concepts and emerging systems, software technology, algorithms and applications. Coverage includes hardware, software tools, networks and numerical methods, new computer architectures, and a discussion of future trends. Beyond purely scientific/engineering computing, the book extends to coverage of enterprise-wide, commercial applications, including papers on performance and scalability of database servers and Oracle DBM systems. Audience: Most papers are research level, but some are suitable for computer literate managers and technicians, making the book useful to users of commercial parallel computers.
This book provides an overview of the resources and research projects that are bringing Big Data and High Performance Computing (HPC) on converging tracks. It demystifies Big Data and HPC for the reader by covering the primary resources, middleware, applications, and tools that enable the usage of HPC platforms for Big Data management and processing. Through interesting use-cases from traditional and non-traditional HPC domains, the book highlights the most critical challenges related to Big Data processing and management, and shows ways to mitigate them using HPC resources. Unlike most books on Big Data, it covers a variety of alternatives to Hadoop, and explains the differences between HPC platforms and Hadoop. Written by professionals and researchers in a range of departments and fields, this book is designed for anyone studying Big Data and its future directions. Those studying HPC will also find the content valuable.
This volume contains 27 contributions to the Second Russian-German Advanced Research Workshop on Computational Science and High Performance Computing presented in March 2005 at Stuttgart, Germany. Contributions range from computer science, mathematics and high performance computing to applications in mechanical and aerospace engineering.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2006. The reports cover all fields of computational science and engineering ranging from CFD via computational physics and chemistry to computer science with a special emphasis on industrially relevant applications. The book comes with illustrations and tables.
Artificial Intelligence for Capital Market throws light on the application of AI/ML techniques in the financial capital markets. This book discusses the challenges posed by AI/ML techniques, as these are prone to the "black box" syndrome. The complexity of understanding the underlying dynamics behind the results generated by these methods is one of the major concerns highlighted in this book. Features: Showcases artificial intelligence in the financial services industry; Explains credit and risk analysis; Elaborates on cryptocurrencies and blockchain technology; Focuses on the optimal choice of asset pricing model; Introduces testing of market efficiency and forecasting in the Indian stock market. This book serves as a reference for academicians, industry professionals, traders, finance managers and stock brokers. It may also be used as a textbook for graduate-level courses in financial services and financial analytics.
This volume reports new developments on work in the quantum flux parametron (QFP) project. It completes a series on Josephson supercomputers, which includes four earlier volumes, also published by World Scientific. QFP technology has great potential, especially in the design of computer architecture. It is regarded as being able to go beyond the horizon of current technology, and is a leading direction for the advancement of computer technology in the next decade.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
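The blurb mentions a case study accelerating the Cooley-Tukey algorithm on a DataFlow machine. For orientation, here is a minimal recursive radix-2 Cooley-Tukey FFT in plain Python; it is an illustrative sketch of the algorithm itself and has nothing to do with Maxeler's actual DataFlow implementation. Input length must be a power of two:

```python
# Minimal recursive radix-2 Cooley-Tukey FFT (illustrative only).
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of even-indexed samples
    odd = fft(x[1::2])   # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Combine with the twiddle factor e^{-2*pi*i*k/n}
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 1, 1, 1]))  # → [(4+0j), 0j, 0j, 0j]
```

The divide-and-combine structure shown here is what maps naturally onto a DataFlow pipeline: each butterfly stage is a fixed arithmetic pattern streamed over the data.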
Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering. Computational Science and Engineering is increasingly becoming an emerging and promising discipline in shaping future research and development activities in academia and industry, ranging from engineering, science, finance and economics to the arts and humanities. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem solving environments. The papers presented in this volume are specially selected to address the most up-to-date ideas, results, work-in-progress and research experience in the area of high performance computational techniques for science and engineering applications. This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computational science and engineering experts and researchers but for all teachers and administrators interested in high performance computational techniques.
The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60-year birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by year 2001 with a peak speed around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the result of the simulation of the formation of the Moon. The first talk of the second day was given by F-H. Hsu of the IBM T.J. Watson Research Center, on "Deep Blue", the special-purpose computer for Chess, which, for the first time in history, won the match with the best human player, Mr. Garry Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Institute of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and the simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy. They built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 1999.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Stuttgart High Performance Computing Center in 2007. The reports cover all fields of computational science and engineering, with emphasis on industrially relevant applications. Presenting results for both vector-based and microprocessor-based systems, the book allows comparison between performance levels and usability of various architectures.
The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.
1) Provides a levelling approach, bringing students at all stages of programming experience to the same point 2) Focuses Python, a general-purpose language, on an engineering and scientific context 3) Uses a classroom-tested, practical approach to teaching programming 4) Teaches students and professionals how to use Python to solve engineering calculations such as differential and algebraic equations
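As a flavour of the kind of engineering calculation point 4 refers to, here is a short sketch that integrates an ordinary differential equation with the forward Euler method. It is an invented example, not drawn from the book, and the function name is hypothetical:

```python
# Hypothetical example: integrating dy/dt = -2*y with forward Euler steps,
# the sort of engineering calculation Python handles well.
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) from t0 to t1 in n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = -2y with y(0) = 1; the exact solution at t = 1 is exp(-2).
approx = euler(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0, 10_000)
print(abs(approx - math.exp(-2.0)))  # small discretization error
```

With 10,000 steps the first-order Euler error is already well below 1e-3; a production code would typically use a higher-order or adaptive method.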
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Gauss-Allianz, the association of High-Performance Computing centers in Germany. The reports cover all fields of computational science and engineering, ranging from CFD to Computational Physics and Biology to Computer Science, with a special emphasis on industrially relevant applications. Presenting results for large-scale parallel microprocessor-based systems and GPU and FPGA-supported systems, the book makes it possible to compare the performance levels and usability of various architectures. Its outstanding results in achieving the highest performance for production codes are of particular interest for both scientists and engineers. The book includes a wealth of color illustrations and tables.
Over the past decade high performance computing has demonstrated the ability to model and predict accurately a wide range of physical properties and phenomena. Many of these have had an important impact in contributing to wealth creation and improving the quality of life through the development of new products and processes with greater efficacy, efficiency or reduced harmful side effects, and in contributing to our ability to understand and describe the world around us. Following a survey of the U.K.'s urgent need for a supercomputing facility for academic research (see next chapter), a 256-processor T3D system from Cray Research Inc. went into operation at the University of Edinburgh in the summer of 1994. The High Performance Computing Initiative, HPCI, was established in November 1994 to support and ensure the efficient and effective exploitation of the T3D (and future generations of HPC systems) by a number of consortia working in the "frontier" areas of computational research. The Cray T3D, now comprising 512 processors and a total of 32 GB memory, represented a very significant increase in computing power, allowing simulations to move forward on a number of fronts. The three-fold aims of the HPCI may be summarised as follows: (1) to seek and maintain a world class position in computational science and engineering, (2) to support and promote exploitation of HPC in industry, commerce and business, and (3) to support education and training in HPC and its application.
Artificial Intelligence (AI), when incorporated with machine learning and deep learning algorithms, has a wide variety of applications today. This book focuses on the implementation of various elementary and advanced approaches in AI that can be used in various domains to solve real-time decision-making problems. The book focuses on concepts and techniques used to run tasks in an automated manner. It discusses computational intelligence in the detection and diagnosis of clinical and biomedical images, covers the automation of a system through machine learning and deep learning approaches, presents data analytics and mining for decision-support applications, and includes case-based reasoning, natural language processing, computer vision, and AI approaches in real-time applications. Academic scientists, researchers, and students in the various domains of computer science engineering, electronics and communication engineering, and information technology, as well as industrial engineers, biomedical engineers, and management, will find this book useful. By the end of this book, you will understand the fundamentals of AI. Various case studies will develop your adaptive thinking to solve real-time AI problems. Features: Includes AI-based decision-making approaches; Discusses computational intelligence in the detection and diagnosis of clinical and biomedical images; Covers automation of systems through machine learning and deep learning approaches and its implications for the real world; Presents data analytics and mining for decision-support applications; Offers case-based reasoning.
This definitive new volume brings together scientists from government, industry, and the academic worlds to explore ways in which to capitalize on resources for new ventures into the next generation of supercomputers. The wealth of information on state-of-the-art scientific developments contained in this single volume makes Supercomputers an invaluable resource for management scholars and government policymakers interested in high technology companies and strategic planning.
Highlights developments, discoveries, and practical and advanced experiences related to responsive distributed computing and how it can support the deployment of trajectory-based applications in intelligent systems. Presents metamodeling with new trajectories patterns which are very useful for intelligent transportation systems. Examines the processing aspects of raw trajectories to develop other types of semantic and activity-type and space-time path type trajectories. Discusses Complex Event Processing (CEP), Internet of Things (IoT), Internet of Vehicle (IoV), V2X communication, Big Data Analytics, distributed processing frameworks, and Cloud Computing. Presents a number of case studies to demonstrate smart trajectories related to spatio-temporal events such as traffic congestion, viral contamination, and pedestrian accidents.
This reference text presents the usage of artificial intelligence in healthcare and discusses the challenges and solutions of using advanced techniques like wearable technologies and image processing in the sector. Features: Focuses on the use of artificial intelligence (AI) in healthcare with issues, applications, and prospects; Presents the application of artificial intelligence in medical imaging, including early lung tumour detection using a low-complexity approach; Discusses an artificial intelligence perspective on wearable technology; Analyses cardiac dynamics and assessment of arrhythmia by classifying heartbeats using the electrocardiogram (ECG); Elaborates machine learning models for early diagnosis of depressive mental affliction. This book serves as a reference for students and researchers analyzing healthcare data. It can also be used by graduate and postgraduate students as an elective course.
You may like...
High Performance Computing in Science… by Egon Krause, Willi Jager (Hardcover, R3,127)
Real-Time Systems Development with RTEMS… by Gedare Bloom, Joel Sherrill, … (Paperback, R1,857)
Scientific Computing on Supercomputers… by J. T Devreese, P. E Van Camp (Hardcover, R4,332)
Computational Science and High… by Egon Krause, Yurii I Shokin, … (Hardcover, R5,586)
Distributed Artificial Intelligence - A… by Satya Prakash Yadav, Dharmendra Prasad Mahato, … (Hardcover, R4,322)
Smart Urban Computing Applications by M.A. Jabbar, Sanju Tiwari, … (Hardcover, R3,050)
Smart Buildings Digitalization - IoT and… by O.V. Gnana Swathika, K. Karthikeyan, … (Hardcover, R3,897)
Knowledge Guided Machine Learning… by Anuj Karpatne, Ramakrishnan Kannan, … (Hardcover, R2,850)
Machine Learning for Edge Computing… by Amitoj Singh, Vinay Kukreja, … (Hardcover, R2,472)