This book constitutes the proceedings of the 13th International Workshop on OpenMP, IWOMP 2017, held in Stony Brook, NY, USA, in September 2017. The 23 full papers presented in this volume were carefully reviewed and selected from 28 submissions. They were organized in topical sections named: Advanced Implementations and Extensions; OpenMP Application Studies; Analyzing and Extending Tasking; OpenMP 4 Application Evaluation; Extended Parallelism Models: Performance Analysis and Tools; and Advanced Data Management with OpenMP.
This book constitutes the proceedings of the 14th International Conference on Parallel Computing Technologies, PaCT 2017, held in Nizhny Novgorod, Russia, in September 2017. The 25 full papers and 24 short papers presented were carefully reviewed and selected from 93 submissions. The papers are organized in topical sections on mainstream parallel computing, parallel models and algorithms in numerical computation, cellular automata and discrete event systems, organization of parallel computation, and parallel computing applications.
This book constitutes the proceedings of the 23rd International Conference on Parallel and Distributed Computing, Euro-Par 2017, held in Santiago de Compostela, Spain, in August/September 2017. The 50 revised full papers presented together with 2 abstracts of invited talks and 1 invited paper were carefully reviewed and selected from 176 submissions. The papers are organized in the following topical sections: support tools and environments; performance and power modeling, prediction and evaluation; scheduling and load balancing; high performance architectures and compilers; parallel and distributed data management and analytics; cluster and cloud computing; distributed systems and algorithms; parallel and distributed programming, interfaces and languages; multicore and manycore parallelism; theory and algorithms for parallel computation and networking; parallel numerical methods and applications; and accelerator computing.
This four volume set LNCS 9528, 9529, 9530 and 9531 constitutes the refereed proceedings of the 15th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2015, held in Zhangjiajie, China, in November 2015. The 219 revised full papers presented together with 77 workshop papers in these four volumes were carefully reviewed and selected from 807 submissions (602 full papers and 205 workshop papers). The first volume comprises the following topics: parallel and distributed architectures; distributed and network-based computing and internet of things and cyber-physical-social computing. The second volume comprises topics such as big data and its applications and parallel and distributed algorithms. The topics of the third volume are: applications of parallel and distributed computing and service dependability and security in distributed and parallel systems. The covered topics of the fourth volume are: software systems and programming models and performance modeling and evaluation.
This book constitutes the refereed proceedings of the Workshops and Symposiums of the 15th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2015, held in Zhangjiajie, China, in November 2015. This year's program consists of 6 symposiums/workshops that cover a wide range of research topics on parallel processing technology: the Sixth International Workshop on Trust, Security and Privacy for Big Data, TrustData 2015; the Fifth International Symposium on Trust, Security and Privacy for Emerging Applications, TSP 2015; the Third International Workshop on Network Optimization and Performance Evaluation, NOPE 2015; the Second International Symposium on Sensor-Cloud Systems, SCS 2015; the Second International Workshop on Security and Privacy Protection in Computer and Network Systems, SPPCN 2015; and the First International Symposium on Dependability in Sensor, Cloud, and Big Data Systems and Applications, DependSys 2015. The aim of these symposiums/workshops is to provide a forum that brings together practitioners and researchers from academia and industry for discussion and presentations on current research and future directions in parallel processing technology. The themes and topics of these symposiums/workshops are a valuable complement to the overall scope of ICA3PP 2015 and add further value and interest.
This book constitutes the thoroughly refereed post-conference proceedings of 12 workshops held at the 21st International Conference on Parallel and Distributed Computing, Euro-Par 2015, in Vienna, Austria, in August 2015. The 67 revised full papers presented were carefully reviewed and selected from 121 submissions. The volume includes papers from the following workshops: BigDataCloud: 4th Workshop on Big Data Management in Clouds - Euro-EDUPAR: First European Workshop on Parallel and Distributed Computing Education for Undergraduate Students - HeteroPar: 13th International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms - LSDVE: Third Workshop on Large Scale Distributed Virtual Environments - OMHI: 4th International Workshop on On-chip Memory Hierarchies and Interconnects - PADABS: Third Workshop on Parallel and Distributed Agent-Based Simulations - PELGA: Workshop on Performance Engineering for Large-Scale Graph Analytics - REPPAR: Second International Workshop on Reproducibility in Parallel Computing - Resilience: 8th Workshop on Resiliency in High Performance Computing in Clusters, Clouds, and Grids - ROME: Third Workshop on Runtime and Operating Systems for the Many Core Era - UCHPC: 8th Workshop on UnConventional High Performance Computing - and VHPC: 10th Workshop on Virtualization in High-Performance Cloud Computing.
This book constitutes the refereed proceedings of the 15th International Conference on Coordination Models and Languages, COORDINATION 2013, held in Firenze, Italy, in June 2013, within the 8th International Federated Conference on Distributed Computing Techniques (DisCoTec 2013).
Quite soon, the world's information infrastructure is going to reach a level of scale and complexity that will force scientists and engineers to approach it in an entirely new way. The familiar notions of command and control are being thwarted by realities of a faster, denser world of communication where choice, variety, and indeterminism rule. The myth of the machine that does exactly what we tell it has come to an end. What makes us think we can rely on all this technology? What keeps it together today, and how might it work tomorrow? Will we know how to build the next generation, or will we be lulled into a stupor of dependence brought about by its conveniences?

In this book, Mark Burgess focuses on the impact of computers and information on our modern infrastructure by taking you from the roots of science to the principles behind system operation and design. To shape the future of technology, we need to understand how it works, or else what we don't understand will end up shaping us. This book explores this subject in three parts:

Part I, Stability: describes the fundamentals of predictability, and why we have to give up the idea of control in its classical meaning.
Part II, Certainty: describes the science of what we can know, when we don't control everything, and how we make the best of life with only imperfect information.
Part III, Promises: explains how the concepts of stability and certainty may be combined to approach information infrastructure as a new kind of virtual material, restoring a continuity to human-computer systems so that society can rely on them.
This book constitutes the thoroughly refereed joint post-proceedings of the three International Workshops on Grid Middleware, CoreGrid 2006, the UNICORE Summit 2006, and the Workshop on Petascale Computational Biology and Bioinformatics, held in Dresden, Germany, in August/September 2006, in conjunction with Euro-Par 2006, the 12th International Conference on Parallel Computing.
Ever since the invention of the computer, users have demanded more and more computational power to tackle increasingly complex problems. A common means of increasing the amount of computational power available for solving a problem is to use parallel computing. Unfortunately, however, creating efficient parallel programs is notoriously difficult. In addition to all of the well-known problems that are associated with constructing a good serial algorithm, there are a number of problems specifically associated with constructing a good parallel algorithm. These mainly revolve around ensuring that all processors are kept busy and that they have timely access to the data that they require. Unfortunately, however, controlling a number of processors operating in parallel can be exponentially more complicated than controlling one processor. Furthermore, unlike data placement in serial programs, where sophisticated compilation techniques that optimise cache behaviour and memory interleaving are common, optimising data placement throughout the vastly more complex memory hierarchy present in parallel computers is often left to the parallel application programmer. All of these problems are compounded by the large number of parallel computing architectures that exist, because they often exhibit vastly different performance characteristics, which makes writing well-optimised, portable code especially difficult. The primary weapon against these problems in a parallel programmer's or parallel computer architect's arsenal is -- or at least should be -- the art of performance prediction. This book provides a historical exposition of over four decades of research into techniques for modelling the performance of computer programs running on parallel computers.
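The kind of performance prediction this book surveys can be illustrated (my example, not one drawn from the book) with Amdahl's law, perhaps the best-known parallel performance model: if only a fraction of a program's work parallelizes, the serial remainder bounds the achievable speedup no matter how many processors are added.

```python
def amdahl_speedup(parallel_fraction, processors):
    """Predicted speedup when `parallel_fraction` of the work scales
    perfectly across `processors` and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# A program that is 95% parallel gains less than 6x on 8 processors,
# and no processor count can push it past 1 / 0.05 = 20x.
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

Simple as it is, the formula already captures why keeping every processor busy, as described above, matters so much: the serial term dominates as processor counts grow.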
The two volume set LNCS 7133 and LNCS 7134 constitutes the thoroughly refereed post-conference proceedings of the 10th International Conference on Applied Parallel and Scientific Computing, PARA 2010, held in Reykjavik, Iceland, in June 2010. These volumes contain three keynote lectures, 29 revised papers and 45 minisymposia presentations arranged on the following topics: cloud computing, HPC algorithms, HPC programming tools, HPC in meteorology, parallel numerical algorithms, parallel computing in physics, scientific computing tools, HPC software engineering, simulations of atomic scale systems, tools and environments for accelerator based computational biomedicine, GPU computing, high performance computing interval methods, real-time access and processing of large data sets, linear algebra algorithms and software for multicore and hybrid architectures in honor of Fred Gustavson on his 75th birthday, memory and multicore issues in scientific computing - theory and praxis, multicore algorithms and implementations for application problems, fast PDE solvers and a posteriori error estimates, and scalable tools for high performance computing.
A step-by-step guide to working with programs that exploit quantum computing principles with the help of IBM Quantum, Qiskit, and Python.

Key Features
* Understand the difference between classical computers and quantum computers
* Work with key quantum computational principles such as superposition and entanglement, and see how they are leveraged on IBM Quantum systems
* Run your own quantum experiments and applications by integrating with Qiskit and Python

Book Description
IBM Quantum Lab is a platform that enables developers to learn the basics of quantum computing by allowing them to run experiments on a quantum computing simulator and on several real quantum computers. Updated with new examples and changes to the platform, this edition begins with an introduction to the IBM Quantum dashboard and the Quantum Information Science Kit (Qiskit) SDK. You will become well versed in the IBM Quantum Composer interface as well as the IBM Quantum Lab, and learn the differences between the various available quantum computers and simulators. Along the way, you'll learn some of the fundamental principles of quantum mechanics, quantum circuits, qubits, and the gates that are used to perform operations on each qubit. As you build on your knowledge, you'll understand the functionality of IBM Quantum and the developer-focused resources it offers to address key concerns like noise, decoherence, and affinity within a quantum system. You'll learn how to monitor and optimize your quantum circuits. Lastly, you'll look at the fundamental quantum algorithms and understand how they can be applied effectively. By the end of this quantum computing book, you'll know how to build quantum programs on your own and will have gained a practical understanding of quantum computation skills that you can apply to your business.

What you will learn
* Get familiar with the contents and layout of IBM Quantum Lab
* Create and visualize quantum circuits
* Understand quantum gates and visualize how they operate on qubits using the IBM Quantum Composer
* Save, import, and leverage existing circuits with the IBM Quantum Lab
* Discover Qiskit and its latest modules for model, algorithm, and kernel developers
* Get to grips with fundamental quantum algorithms such as Deutsch-Jozsa, Grover's algorithm, and Shor's algorithm

Who This Book Is For
This book is for Python developers who are looking to learn quantum computing from the ground up and put their knowledge to use in practical situations with the help of the IBM Quantum platform and Qiskit. Some background in computer science and high-school-level physics and math is required.
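The superposition principle this book builds on can be sketched without any SDK. The snippet below is a toy state-vector model in plain Python, emphatically not the Qiskit API: a qubit is a pair of amplitudes, and the Hadamard gate puts |0> into an equal superposition of both measurement outcomes.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)                  # the |0> basis state
plus = hadamard(zero)              # equal superposition of |0> and |1>
p0, p1 = probabilities(plus)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Applying the gate a second time returns the qubit to |0>, which is the kind of reversible behaviour the Composer lets you explore visually.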
In the last few years, courses on parallel computation have been developed and offered in many institutions in the UK, Europe and US as a recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers and this book, based on the author's lecture at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers working up from a hardware instruction level, to shared memory machines, and finally to distributed memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, subjects covered include linear algebra, fast Fourier transform, and Monte-Carlo simulations, including examples in C and, in some cases, Fortran. This book is also ideal for practitioners and programmers.
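To give a flavour of the Monte Carlo material covered (the book's own examples are in C and Fortran; this is a Python re-sketch of the standard textbook exercise): estimating pi by sampling random points in the unit square and counting how many fall inside the quarter-circle.

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that land inside the quarter-circle, times 4."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(estimate_pi(100_000))  # close to 3.14 for large sample counts
```

Each sample is independent, which is exactly why such simulations parallelize so naturally across the shared- and distributed-memory machines the book works up to.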
This book constitutes the thoroughly refereed post-conference proceedings of the 27th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2014, held in Hillsboro, OR, USA, in September 2014. The 25 revised full papers were carefully reviewed and selected from 39 submissions. The papers are organized in topical sections on accelerator programming; algorithms for parallelism; compilers; debugging; vectorization.
DNA microarrays, or biochips, are small glass chips embedded with ordered rows of DNA and by providing a massive parallel platform for data gathering represent a fundamental technical advance in biomedical research. This volume is a comprehensive overview of DNA microarray technology and will be invaluable to any researcher interested in taking advantage of this powerful new technique.
This book constitutes the thoroughly refereed post-conference proceedings of the 26th International Workshop on Languages and Compilers for Parallel Computing, LCPC 2013, held in San Jose, CA, USA, in September 2013. The 20 revised full papers and two keynote papers presented were carefully reviewed and selected from 44 submissions. The papers focus on the following topics: parallel programming models, compiler analysis techniques, parallel data structures and parallel execution models, GPGPU and other heterogeneous execution models, code generation for power efficiency on mobile platforms, and debugging and fault tolerance for parallel systems.
Governments around the world have policies to promote links between industry and academic and government laboratories in order to foster economic growth and innovation in the technology-based industries. Knowledge Frontiers gives new insights into this process and offers an original framework for tracking these interactions. The book shows what 'knowledge' companies want from public sector research, and how they network to get this knowledge in three new and promising fields of advanced technology - biotechnology, engineering ceramics, and parallel computing. The authors first look at some of the background issues - policy issues about links between industry and public sector research; the ways in which science and technology interact in the innovation process; and general developments in each of the technologies examined. They look in more detail at public-private research links in the three areas. They find similarities which point to the general importance to innovation of frontier research in universities, and the need to encourage informal interaction/contact between industrial and public sector researchers. They also find differences between the fields which suggest that the policies to provide research links should be more effectively targeted, as an integral part of the broader objective of fostering 'strategic technologies'. Knowledge Frontiers advances our understanding of the various types of knowledge used in the course of research, design, and development leading to innovation. It is essential reading for those wanting to get to grips with the complex and dynamic realities of the innovation process - be they researchers, managers, or policy makers.
Monitor, log, and trace your cloud applications using the power of AWS' myriad observability tools to ensure the systems you build are resilient.

Key Features
* Implement observability in your cloud applications and systems with the power of AWS
* Ensure your customers' satisfaction by identifying and fixing bottlenecks quickly
* Learn from the experts to get the best possible insight into AWS' observability solutions

Book Description
Cloud observability is complex if you're new to the cloud, and even if you're an experienced cloud practitioner. Thankfully, the world's most popular cloud provider, AWS, provides multiple tools for identifying performance bottlenecks in modern distributed applications. An Insider's Guide to Observability on AWS will help you use these tools to provide the logging, monitoring, and tracing that your systems need to be as efficient as possible. This comprehensive guide to observability on AWS covers all the bases, taking you from basic observability with CloudWatch, through automated observability, to machine-learning-powered tools such as AWS DevOps Guru, and everything in between. You'll learn how to implement observability in containers, in serverless applications, and for user experience monitoring. This is truly an all-encompassing guide that leaves no stone unturned in its quest to give you the knowledge, skills, and practice to implement observability in your applications from end to end and visualize the results using the wide range of tools provided by AWS. You'll also see some of the guidelines and best practices, such as how the Well-Architected Framework relates to observability. By the end of the book, you will find it easy to implement observability in your applications using AWS' native and managed open source tools.

What you will learn
* Take metrics from an EC2 instance and visualize them in a dashboard
* Conduct distributed tracing using AWS X-Ray
* Derive operational metrics using CloudWatch Logs
* Achieve observability of containerized applications in ECS and EKS
* Use CloudWatch and Lambda Insights to monitor serverless applications
* Visualize your insights with Amazon Managed Grafana
* Harness the power of the ELK stack with OpenSearch
* Scale the observability of applications in complex organizations

Who This Book Is For
This book is intended for SREs, cloud developers, and DevOps engineers who use AWS-native services and tools, as well as open source managed services on AWS, to achieve their required observability targets. It also provides guidance to solutions architects on achieving operational excellence when adopting cloud observability solutions for their workloads. Readers need a basic understanding of AWS cloud fundamentals and the different services available on AWS for running applications, such as EC2, storage solutions like S3, and container solutions like ECS and EKS.
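"Deriving operational metrics from logs" can be shown in miniature. The sketch below uses a hypothetical log format and plain Python rather than CloudWatch Logs Insights, but the idea is the same: parse request latencies and status codes out of raw log lines and compute an error rate and a p95 latency.

```python
import math

# Hypothetical access-log lines; real CloudWatch Logs events differ.
logs = [
    "GET /orders 200 123ms",
    "GET /orders 200 98ms",
    "GET /orders 500 431ms",
    "GET /orders 200 87ms",
]

def error_rate(lines):
    """Fraction of requests whose status code is 5xx."""
    return sum(line.split()[2].startswith("5") for line in lines) / len(lines)

def p95_latency_ms(lines):
    """95th-percentile latency using the nearest-rank method."""
    latencies = sorted(int(line.split()[3].rstrip("ms")) for line in lines)
    rank = max(0, math.ceil(0.95 * len(latencies)) - 1)
    return latencies[rank]

print(error_rate(logs), p95_latency_ms(logs))  # 0.25 431
```

A percentile rather than an average is the usual choice here, because a single slow outlier (the 431 ms request) is exactly what a dashboard alarm should surface.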
Go beyond connecting services to understand the unique challenges encountered in industrial environments by building Industrial IoT architectures using AWS. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features
* Understand the key components of IoT architecture and how they apply to Industry 4.0
* Walk through extensive examples and solutions across multiple industries
* Learn how to collect, process, store, and analyse Industrial IoT data

Book Description
When it comes to using the core and managed services available on AWS for making decisions about architectural environments for an enterprise, there are as many challenges as there are advantages. This Industrial IoT book follows the journey of data from the shop floor to the boardroom, identifying goals and aiding in strong architectural decision-making. You'll begin from the ground up, analyzing environment needs and understanding what is required from the captured data, applying industry standards and conventions throughout the process. This will help you realize why digital integration is crucial and how to approach an Industrial IoT project from a holistic perspective. As you advance, you'll delve into the operational technology realm and consider integration patterns with common industrial protocols for data gathering and analysis, with direct connectivity to data through sensors or systems. The book will equip you with the essentials for designing industrial IoT architectures, while also covering intelligence at the edge and creating a greater awareness of the role of machine learning and artificial intelligence in overcoming architectural challenges. By the end of this book, you'll be ready to apply IoT directly to the industry while adapting the concepts covered to implement AWS IoT technologies.

What you will learn
* Discover Industrial IoT best practices and conventions
* Understand how to get started with edge computing
* Define and build IoT solution architectures from scratch
* Use AWS as the core of your solution platform
* Apply advanced analytics and machine learning to your data
* Deploy edge processing to react in near real time to events within your environment

Who this book is for
This book is for architects, engineers, developers, and technical professionals interested in building an edge and cloud-based Internet of Things ecosystem with a focus on industry solutions. Since the focus of this book is specifically on IoT, a solid understanding of core IoT technologies and how they work is necessary to get started. If you have no hands-on experience but are familiar with the subject, you'll find the use cases useful for learning how architectural decisions are made.
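"Deploy edge processing to react in near real time" usually means evaluating simple rules against a sensor stream on the device itself, before anything is sent to the cloud. A minimal, AWS-free sketch (the readings and threshold are hypothetical):

```python
def rolling_alerts(readings, limit, window=3):
    """Return the indices where the mean of the last `window` readings
    exceeds `limit` -- a stand-in for an edge-side rule that fires
    locally instead of waiting for a cloud round trip."""
    alerts = []
    for i in range(window, len(readings) + 1):
        mean = sum(readings[i - window:i]) / window
        if mean > limit:
            alerts.append(i - 1)  # index of the reading that triggered it
    return alerts

temperatures = [70, 71, 90, 95, 96]            # hypothetical sensor feed
print(rolling_alerts(temperatures, limit=85))  # [3, 4]
```

Averaging over a small window rather than alerting on single readings is a common way to filter out the sensor noise typical of shop-floor environments.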
Leverage OpenTelemetry's API, libraries, tools, and the collector to produce and collect telemetry, and use open-source tools to analyze distributed traces, check metrics and logs, and gain insights into application health.

Key Features
* Get to grips with OpenTelemetry, an open-source cloud-native software observability standard
* Use vendor-neutral tools to instrument applications to produce better telemetry and improve observability
* Understand how telemetry data can be correlated and interpreted to understand distributed systems

Book Description
Cloud-Native Observability with OpenTelemetry is a guide to helping you look for answers to questions about your applications. This book teaches you how to produce telemetry from your applications using an open standard to retain control of data. OpenTelemetry provides the tools necessary for you to gain visibility into the performance of your services. It allows you to instrument your application code through vendor-neutral APIs, libraries, and tools. By reading Cloud-Native Observability with OpenTelemetry, you'll learn about the concepts and signals of OpenTelemetry - traces, metrics, and logs. You'll practice producing telemetry for these signals by configuring and instrumenting a distributed cloud-native application using the OpenTelemetry API. The book also guides you through deploying the collector, as well as the telemetry backends necessary to help you understand what to do with the data once it's emitted. You'll look at various examples of how to identify application performance issues through telemetry. By analyzing telemetry, you'll also be able to better understand how an observable application can improve the software development life cycle. By the end of this book, you'll be well versed with OpenTelemetry and able to instrument services using the OpenTelemetry API to produce distributed traces, metrics, logs, and more.

What you will learn
* Understand the core concepts of OpenTelemetry
* Explore concepts in distributed tracing, metrics, and logging
* Discover the APIs and SDKs necessary to instrument an application using OpenTelemetry
* Explore what auto-instrumentation is and how it can help accelerate application instrumentation
* Configure and deploy the OpenTelemetry Collector
* Get to grips with how different open-source backends can be used to analyze telemetry data
* Understand how to correlate telemetry in common scenarios to get to the root cause of a problem

Who this book is for
This book is for software engineers, library authors, and systems operators looking to better understand their infrastructure, services, and applications by leveraging telemetry data like never before. Working knowledge of Python programming is assumed for the example applications that you'll be building and instrumenting using the OpenTelemetry API and SDK. Some familiarity with Go programming, Linux, and Docker is preferable to help you set up additional components in the various examples throughout the book.
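The span/trace vocabulary above can be demystified with a toy tracer. This is emphatically not the OpenTelemetry API, just a few lines showing what a span records: a name, a parent, and a duration, so that nested calls form a trace tree.

```python
import time
from contextlib import contextmanager

spans = []   # finished spans, in completion order
_stack = []  # currently open span names

@contextmanager
def span(name):
    """Toy span: record name, parent, and wall-clock duration."""
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        spans.append({"name": name, "parent": parent,
                      "duration_s": time.perf_counter() - start})

with span("handle_request"):
    with span("query_db"):
        time.sleep(0.01)  # simulated database call
```

After running, `spans` holds `query_db` (parent `handle_request`) and then `handle_request` (no parent) -- the same parent/child relationship a real tracing backend uses to draw a flame-graph of a request.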
Solve the complexity of running a business in a multi-cloud environment with practical guidelines backed by industry experience.

Key Features
* Explore the benefits of the major cloud providers to make better informed decisions
* Accelerate digital transformation with multi-cloud adoption, including the use of PaaS and SaaS concepts
* Get the best out of multi-cloud by exploring relevant use cases for data platforms and IoT

Book Description
Most enterprises adopt multi-cloud with the intention of accelerating digital transformation, but moving data and applications to public clouds and implementing Platform as a Service (PaaS) and Software as a Service (SaaS) solutions are challenging. One of the biggest challenges is deciding what parts of which services are the most useful to help the company thrive. Through this book, you'll learn how to choose the most apt cloud service and how to manage operations, cost, and security, all while learning how to overcome the complexities associated with multi-cloud adoption via use cases (IoT, data mining, Web3, financial management, and more). This new edition is focused on helping you stay in control of your cloud environments by using the concepts of BaseOps, FinOps, and DevSecOps. You'll learn how to develop, release, and manage products and services in the major public clouds: Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), whilst optimizing costs and maximizing security using the various tools and services that these cloud providers offer. By the end of this book, you will have become familiar with the complexities associated with running a business in a multi-cloud environment and identified ways to solve these complexities in the domains of operations, financial management, and security.

What you will learn
* Learn how to choose the right cloud platform via various use cases
* Understand the concepts associated with multi-cloud, including IaC, SaaS, PaaS, and CaC
* Use the techniques and tools offered by Azure, AWS, and GCP to integrate security
* Learn about enterprise architecture, value streams, and the well-architected frameworks of Azure, AWS, and GCP
* Use FinOps to define cost models and create transparency in cloud costs with showback and chargeback
* Improve security with the DevSecOps maturity model
* Explore the concepts of AIOps and GreenOps

Who This Book Is For
Cloud architects, solutions architects, enterprise architects, and cloud consultants will find this book valuable. Basic knowledge of any one of the major public clouds (Azure, AWS, or GCP) will be helpful.
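The showback idea from the FinOps material reduces to grouping cost line items by an owning-team tag so each team can see (but is not yet billed for) its share. A toy sketch with hypothetical billing records (real cost-and-usage exports carry far more fields):

```python
from collections import defaultdict

# Hypothetical, simplified billing line items.
line_items = [
    {"team": "payments", "service": "compute", "cost": 120.0},
    {"team": "payments", "service": "storage", "cost": 30.0},
    {"team": "search",   "service": "compute", "cost": 75.5},
]

def showback(items):
    """Aggregate cost per owning team -- the core of a showback report."""
    totals = defaultdict(float)
    for item in items:
        totals[item["team"]] += item["cost"]
    return dict(totals)

print(showback(line_items))  # {'payments': 150.0, 'search': 75.5}
```

Chargeback is the same aggregation with the totals actually invoiced back to each team, which is why consistent resource tagging is the prerequisite for both.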