Using HPC for Computational Fluid Dynamics: A Guide to High Performance Computing for CFD Engineers offers one of the first self-contained guides on the use of high performance computing for computational work in fluid dynamics. Beginning with an introduction to HPC, including its history and basic terminology, the book moves on to consider how modern supercomputers can be used to solve common CFD challenges, including the resolution of high density grids and dealing with the large file sizes generated when using commercial codes. Written to help early career engineers and post-graduate students compete in the fast-paced computational field where knowledge of CFD alone is no longer sufficient, the text provides a one-stop resource for all the technical information readers will need for successful HPC computation.
This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems of the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and from chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.
A smart building is the state of the art in building, with features that facilitate informed decision making based on data available through smart metering and IoT sensors. This set provides useful information for developing smart buildings, including significant improvement of energy efficiency, implementation of operational improvements, and targeting a sustainable environment to create an effective customer experience. It includes case studies with industrial results that provide cost-effective solutions and integrates the digital SCADE solution. Describes the complete implications of smart buildings via industrial, commercial and community platforms. Systematically defines energy-efficient buildings, employing power-consumption optimization techniques with inclusion of renewable energy sources. Covers data centre and cyber security with excellent data storage features for smart buildings. Includes systematic and detailed strategies for building air conditioning and lighting. Details smart building security provision. This set is aimed at graduate students, researchers and professionals in building systems, architectural, and electrical engineering.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need to not only understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
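The blurb above stresses performance models "good enough to guide decisions about code optimizations". As a purely illustrative sketch (not taken from the book; the function name and parameters are our own), the simplest such model is Amdahl's law, which bounds the speedup obtainable when only a fraction of a program can be parallelized:

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's law: upper bound on speedup when a fraction
    `parallel_fraction` of the serial runtime can be spread
    over `n_procs` processors; the rest stays sequential."""
    serial_part = 1.0 - parallel_fraction
    return 1.0 / (serial_part + parallel_fraction / n_procs)

# Even with 95% of the work parallelized, 1024 cores cannot
# exceed a 20x speedup, because the serial 5% dominates.
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64
```

Back-of-the-envelope models of this kind are exactly what guides decisions about which code transformations are worth applying on a given level of the system hierarchy.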
The book presents the state-of-the-art in high performance computing and simulation on modern supercomputer architectures. It covers trends in high performance application software development in general and specifically for parallel vector architectures. The contributions cover, among others, the fields of computational fluid dynamics, physics, chemistry, and meteorology. Innovative application fields like reactive flow simulations and nanotechnology are presented.
For the fourth time, the Leibniz Supercomputing Centre (LRZ) and the Competence Network for Technical, Scientific High Performance Computing in Bavaria (KONWIHR) publish the results from scientific projects conducted on the computer systems HLRB I and II (High Performance Computer in Bavaria). This book reports the research carried out on the HLRB systems within the last three years and compiles the proceedings of the Third Joint HLRB and KONWIHR Result and Reviewing Workshop (3rd and 4th December 2007) in Garching. In 2000, HLRB I was the first system in Europe that was capable of performing more than one teraflop/s, i.e. one trillion floating-point operations per second. In 2006 it was replaced by HLRB II. After a substantial upgrade it now achieves a peak performance of more than 62 teraflop/s. To install and operate this powerful system, LRZ had to move to its new facilities in Garching. However, the situation regarding the need for more computation cycles has not changed much since 2000. The demand for higher performance is still present, a trend that is likely to continue for the foreseeable future. Other resources like memory and disk space are currently in sufficient abundance on this new system.
This volume contains 27 contributions to the Second Russian-German Advanced Research Workshop on Computational Science and High Performance Computing presented in March 2005 at Stuttgart, Germany. Contributions range from computer science, mathematics and high performance computing to applications in mechanical and aerospace engineering.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2006. The reports cover all fields of computational science and engineering ranging from CFD via computational physics and chemistry to computer science with a special emphasis on industrially relevant applications. The book comes with illustrations and tables.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
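The case study mentioned above accelerates the Cooley-Tukey algorithm on a DataFlow machine. For readers unfamiliar with it, here is a minimal reference sketch of the radix-2 Cooley-Tukey FFT in plain Python (our own illustration; the book's DataFlow implementation is necessarily very different):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splits the input into even/odd halves, transforms each
    recursively, then combines them with twiddle factors."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

# A constant signal concentrates all energy in the DC bin.
print([round(abs(v), 6) for v in fft([1, 1, 1, 1])])  # [4.0, 0.0, 0.0, 0.0]
```

The butterfly structure of the final combine step is the part that maps naturally onto a streaming DataFlow pipeline.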
Proceedings of the International Symposium on High Performance Computational Science and Engineering 2004 (IFIP World Computer Congress) is an essential reference for both academic and professional researchers in the field of computational science and engineering. Computational Science and Engineering is increasingly becoming an emerging and promising discipline shaping future research and development activities in academia and industry, ranging from engineering, science, finance and economics to arts and humanitarian fields. New challenges lie in the modeling of complex systems, sophisticated algorithms, advanced scientific and engineering computing, and associated (multi-disciplinary) problem solving environments. The papers presented in this volume are specially selected to address the most up-to-date ideas, results, work-in-progress and research experience in the area of high performance computational techniques for science and engineering applications. This state-of-the-art volume presents the proceedings of the International Symposium on High Performance Computational Science and Engineering, held in conjunction with the IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computational science and engineering experts and researchers but for all teachers and administrators interested in high performance computational techniques.
The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60-year birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by year 2001 with a peak speed around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the result of the simulation of the formation of the Moon. The first talk of the second day was given by F-H. Hsu of the IBM T.J. Watson Research Center, on "Deep Blue", the special-purpose computer for chess, which, for the first time in history, won the match with the best human player, Mr. Garry Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Inst. of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy. They have built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 1999.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Stuttgart High Performance Computing Center in 2007. The reports cover all fields of computational science and engineering, with emphasis on industrially relevant applications. Presenting results for both vector-based and microprocessor-based systems, the book allows comparison between performance levels and usability of various architectures.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Gauss-Allianz, the association of High-Performance Computing centers in Germany. The reports cover all fields of computational science and engineering, ranging from CFD to Computational Physics and Biology to Computer Science, with a special emphasis on industrially relevant applications. Presenting results for large-scale parallel microprocessor-based systems and GPU and FPGA-supported systems, the book makes it possible to compare the performance levels and usability of various architectures. Its outstanding results in achieving the highest performance for production codes are of particular interest for both scientists and engineers. The book includes a wealth of color illustrations and tables.
Over the past decade high performance computing has demonstrated the ability to model and predict accurately a wide range of physical properties and phenomena. Many of these have had an important impact in contributing to wealth creation and improving the quality of life through the development of new products and processes with greater efficacy, efficiency or reduced harmful side effects, and in contributing to our ability to understand and describe the world around us. Following a survey of the U.K.'s urgent need for a supercomputing facility for academic research (see next chapter), a 256-processor T3D system from Cray Research Inc. went into operation at the University of Edinburgh in the summer of 1994. The High Performance Computing Initiative, HPCI, was established in November 1994 to support and ensure the efficient and effective exploitation of the T3D (and future generations of HPC systems) by a number of consortia working in the "frontier" areas of computational research. The Cray T3D, now comprising 512 processors and a total of 32 GB memory, represented a very significant increase in computing power, allowing simulations to move forward on a number of fronts. The three-fold aims of the HPCI may be summarised as follows: (1) to seek and maintain a world class position in computational science and engineering; (2) to support and promote exploitation of HPC in industry, commerce and business; and (3) to support education and training in HPC and its application.
This definitive new volume brings together scientists from government, industry, and the academic worlds to explore ways in which to capitalize on resources for new ventures into the next generation of supercomputers. The wealth of information on state-of-the-art scientific developments contained in this single volume makes Supercomputers an invaluable resource for management scholars and government policymakers interested in high technology companies and strategic planning.
Artificial Intelligence for Capital Market throws light on the application of AI/ML techniques in the financial capital markets. This book discusses the challenges posed by AI/ML techniques, as these are prone to "black box" syndrome. The complexity of understanding the underlying dynamics of the results generated by these methods is one of the major concerns highlighted in this book. Features: Showcases artificial intelligence in the financial services industry. Explains credit and risk analysis. Elaborates on cryptocurrencies and blockchain technology. Focuses on the optimal choice of asset pricing model. Introduces testing of market efficiency and forecasting in the Indian stock market. This book serves as a reference for academicians, industry professionals, traders, finance managers and stock brokers. It may also be used as a textbook for graduate-level courses in financial services and financial analytics.
The proliferation of multicore processors in the embedded market for Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) makes developing real-time embedded applications increasingly difficult. What is the underlying theory that makes multicore real-time possible? How does theory influence application design? When is a real-time operating system (RTOS) useful? What RTOS features do applications need? How does a mature RTOS help manage the complexity of multicore hardware? Real-Time Systems Development with RTEMS and Multicore Processors answers these questions and more, using the exemplar Real-Time Executive for Multiprocessor Systems (RTEMS) RTOS to provide concrete advice and examples for constructing useful, feature-rich applications. RTEMS is free, open-source software that supports multi-processor systems for over a dozen CPU architectures and over 150 specific system boards, in applications spanning the range of IoT and CPS domains such as satellites, particle accelerators, robots, racing motorcycles, building controls, medical devices, and more. The focus of this book is on enabling real-time embedded software engineering while providing sufficient theoretical foundations and hardware background to understand the rationale for key decisions in RTOS and application design and implementation.
The topics covered in this book include: Cross-compilation for embedded systems development Concurrent programming models used in real-time embedded software Real-time scheduling theory and algorithms used in wide practice Usage and comparison of two application programmer interfaces (APIs) in real-time embedded software: POSIX and the RTEMS Classic APIs Design and implementation in RTEMS of commonly found RTOS features for schedulers, task management, time-keeping, inter-task synchronization, inter-task communication, and networking The challenges introduced by multicore hardware, advances in multicore real-time theory, and software engineering multicore real-time systems with RTEMS All the authors of this book are experts in the academic field of real-time embedded systems. Two of the authors are primary open-source maintainers of the RTEMS software project.
The International Workshop on "The Use of Supercomputers in Theoretical Science" took place on January 24 and 25, 1991, at the University of Antwerp (UIA), Antwerpen, Belgium. It was the sixth in a series of workshops, the first of which took place in 1984. The principal aim of these workshops is to present the state of the art in scientific large-scale and high-speed computation. Computational science has developed into a third methodology, now equally important as its theoretical and experimental companions. Gradually academic researchers acquired access to a variety of supercomputers and as a consequence computational science has become a major tool for their work. It is a pleasure to thank the Belgian National Science Foundation (NFWO-FNRS) and the Ministry of Scientific Affairs for sponsoring the workshop. It was organized both in the framework of the Third Cycle "Vectorization, Parallel Processing and Supercomputers" and the "Governmental Program in Information Technology." We also very much would like to thank the University of Antwerp (Universitaire Instelling Antwerpen - UIA) for financial and material support. Special thanks are due to Mrs. H. Evans for the typing and editing of the manuscripts and for the preparation of the author and subject indexes. J.T. Devreese, P.E. Van Camp, University of Antwerp, July 1991.
Highlights developments, discoveries, and practical and advanced experiences related to responsive distributed computing and how it can support the deployment of trajectory-based applications in intelligent systems. Presents metamodeling with new trajectory patterns which are very useful for intelligent transportation systems. Examines the processing aspects of raw trajectories to develop other types of semantic, activity-type and space-time path type trajectories. Discusses Complex Event Processing (CEP), Internet of Things (IoT), Internet of Vehicles (IoV), V2X communication, Big Data Analytics, distributed processing frameworks, and Cloud Computing. Presents a number of case studies to demonstrate smart trajectories related to spatio-temporal events such as traffic congestion, viral contamination, and pedestrian accidents.
This volume is published as the proceedings of the third Russian-German Advanced Research Workshop on Computational Science and High Performance Computing in Novosibirsk, Russia, in July 2007. The contributions to these proceedings were provided and edited by the authors, chosen after a careful selection and reviewing. The workshop was organized by the High Performance Computing Center Stuttgart (Stuttgart, Germany) and the Institute of Computational Technologies SB RAS (Novosibirsk, Russia) in the framework of activities of the German-Russian Center for Computational Technologies and High Performance Computing. The event is held biannually and has already become a good tradition for German and Russian scientists. The first workshop took place in September 2003 in Novosibirsk and the second workshop was hosted by Stuttgart in March 2005. Both workshops gave the possibility of sharing and discussing the latest results and developing further scientific contacts in the field of computational science and high performance computing. The topics of the current workshop include software and hardware for high performance computation, numerical modelling in geophysics and computational fluid dynamics, mathematical modelling of tsunami waves, simulation of fuel cells and modern fibre optics devices, numerical modelling in cryptography problems and aeroacoustics, interval analysis, tools for Grid applications, and research on service-oriented architecture (SOA) and telemedicine technologies. The participation of representatives of major research organizations engaged in the solution of the most complex problems of mathematical modelling, development of new algorithms, programs and key elements of information technologies, and elaboration and implementation of software and hardware for high performance computing systems provided a high level of competence of the workshop.
Among the German participants were the heads and leading specialists of the High Performance Computing Center Stuttgart (HLRS) (University of Stuttgart), NEC High Performance Computing Europe GmbH, the Section of Applied Mathematics (University of Freiburg i.Br.), the Institute of Aerodynamics (RWTH Aachen), the Regional Computing Center Erlangen (RRZE) (University of Erlangen-Nuremberg), and the Center for High Performance Computing (ZHR) (Dresden University of Technology).
This reference text presents the usage of artificial intelligence in healthcare and discusses the challenges and solutions of using advanced techniques like wearable technologies and image processing in the sector. Features: Focuses on the use of artificial intelligence (AI) in healthcare, with issues, applications, and prospects. Presents the application of artificial intelligence in medical imaging, such as early lung tumour detection using a low-complexity approach. Discusses an artificial intelligence perspective on wearable technology. Analyses cardiac dynamics and assessment of arrhythmia by classifying heartbeats using the electrocardiogram (ECG). Elaborates machine learning models for early diagnosis of depressive mental affliction. This book serves as a reference for students and researchers analyzing healthcare data. It can also be used by graduate and postgraduate students for an elective course.
The book discusses the fundamentals of high-performance computing. The authors combine visualization, comprehensibility, and strictness in their material presentation, and thus influence the reader towards practical application and learning how to solve real computing problems. They address both key approaches to programming modern computing systems: multithreading-based parallelizing in shared memory systems, and applying message-passing technologies in distributed systems. The book is suitable for undergraduate and graduate students, and for researchers and practitioners engaged with high-performance computing systems. Each chapter begins with a theoretical part, where the relevant terminology is introduced along with the basic theoretical results and methods of parallel programming, and concludes with a list of test questions and problems of varying difficulty. The authors include many solutions and hints, and often sample code.
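The two programming approaches named above, multithreading in shared memory and message passing in distributed systems, can be contrasted with a toy shared-memory example (our own sketch, not from the book): several threads compute partial sums locally and only briefly synchronize on the shared result.

```python
import threading

total = 0                      # shared state, visible to all threads
lock = threading.Lock()

def partial_sum(chunk):
    """Each worker sums its own slice privately, then updates
    the shared total inside a short critical section."""
    global total
    local = sum(chunk)         # no synchronization needed here
    with lock:                 # guard the update to shared state
        total += local

data = list(range(100))
threads = [threading.Thread(target=partial_sum, args=(data[i::4],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 4950, same as sum(range(100))
```

In the message-passing style the workers would instead send their partial sums to a coordinator process (e.g. a reduction over MPI), with no shared variable and no lock at all.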
Blockchain Supply Chain Use Cases. Distributed Ledger Technology Supply Chain Use Cases. Blockchain-Enabled Digital Transformation Use Cases. Blockchain Supply Chain Diffusion/Innovation Use Cases.
Given their tremendous success in commercial applications, machine learning (ML) models are increasingly being considered as alternatives to science-based models in many disciplines. Yet, these "black-box" ML models have found limited success due to their inability to work well in the presence of limited training data and generalize to unseen scenarios. As a result, there is a growing interest in the scientific community on creating a new generation of methods that integrate scientific knowledge in ML frameworks. This emerging field, called scientific knowledge-guided ML (KGML), seeks a distinct departure from existing "data-only" or "scientific knowledge-only" methods to use knowledge and data at an equal footing. Indeed, KGML involves diverse scientific and ML communities, where researchers and practitioners from various backgrounds and application domains are continually adding richness to the problem formulations and research methods in this emerging field. Knowledge Guided Machine Learning: Accelerating Discovery using Scientific Knowledge and Data provides an introduction to this rapidly growing field by discussing some of the common themes of research in KGML using illustrative examples, case studies, and reviews from diverse application domains and research communities as book chapters by leading researchers. 
KEY FEATURES First-of-its-kind book in an emerging area of research that is gaining widespread attention in the scientific and data science fields Accessible to a broad audience in data science and scientific and engineering fields Provides a coherent organizational structure to the problem formulations and research methods in the emerging field of KGML using illustrative examples from diverse application domains Contains chapters by leading researchers, which illustrate the cutting-edge research trends, opportunities, and challenges in KGML research from multiple perspectives Enables cross-pollination of KGML problem formulations and research methods across disciplines Highlights critical gaps that require further investigation by the broader community of researchers and practitioners to realize the full potential of KGML