This book covers the essential elements of parallel processing and parallel algorithms. It is unique in that it is a self-contained book covering the fundamentals of parallel processing, from computer architecture to parallel programming and parallel algorithms. It is designed to function as a text for an undergraduate course in parallel processing, but also works well as a comprehensive reference for professionals interested in all phases of parallel processing and parallel programming.
Computer vision falls short of human vision in two respects: execution time and intelligent interpretation. This book addresses the question of execution time. It is based on a workshop on specialized processors for real-time image analysis, held as part of the activities of an ESPRIT Basic Research Action, the Working Group on Vision. The aim of the book is to examine the state of the art in vision-oriented computers. Two approaches are distinguished: multiprocessor systems and fine-grain massively parallel computers. The development of fine-grain machines has become more important over the last decade, but one of the main conclusions of the workshop is that this does not imply the replacement of multiprocessor machines. The book is divided into four parts. Part 1 introduces different architectures for vision: associative and pyramid processors as examples of fine-grain machines and a workstation with bus-oriented network topology as an example of a multiprocessor system. Parts 2 and 3 deal with the design and development of dedicated and specialized architectures. Part 4 is mainly devoted to applications, including road segmentation, mobile robot guidance and navigation, reconstruction and identification of 3D objects, and motion estimation.
This book provides an introduction to decision making in a distributed computational framework. Classical detection theory assumes a centralized configuration. All observations are processed by a central processor to produce the decision. In the decentralized detection system, distributed detectors generate decisions based on locally available observations; these decisions are then conveyed to the fusion center that makes the global decision. Using numerous examples throughout the book, the author discusses such distributed detection processes under several different formulations and in a wide variety of network topologies.
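As a hedged illustration of the decentralized architecture described above (the thresholds, data, and names below are invented for the sketch, not taken from the book): each local detector thresholds only its own observation, and the fusion center combines the resulting one-bit decisions by majority vote to produce the global decision.

```cpp
#include <vector>
#include <iostream>

// Illustrative decentralized detection: each distributed detector sees
// only its own observation and emits a one-bit local decision.
bool local_decision(double observation, double threshold) {
    return observation > threshold;  // decide "signal present" above threshold
}

// The fusion center receives only the local decisions and majority-votes.
bool fusion_center(const std::vector<bool>& decisions) {
    int votes = 0;
    for (bool d : decisions) votes += d ? 1 : 0;
    return 2 * votes > static_cast<int>(decisions.size());
}

int main() {
    std::vector<double> observations = {0.4, 1.7, 2.1};  // made-up sensor data
    std::vector<bool> decisions;
    for (double x : observations)
        decisions.push_back(local_decision(x, /*threshold=*/1.0));
    std::cout << "global decision: " << fusion_center(decisions) << "\n";  // 1
}
```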
Distributed and Parallel Systems: Cluster and Grid Computing is the proceedings of the fourth Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by Johannes Kepler University, Linz, Austria and the MTA SZTAKI Computer and Automation Research Institute. The papers in this volume cover a broad range of research topics presented in four groups. The first one introduces cluster tools and techniques, especially the issues of load balancing and migration. Another six papers deal with grid and global computing including grid infrastructure, tools, applications and mobile computing. The next nine papers present general questions of distributed development and applications. The last four papers address a crucial issue in distributed computing: fault tolerance and dependable systems. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
The papers in this volume are based on lectures given at the IMA workshop on the Parallel Solution of PDEs during June 9-13, 1997. The numerical solution of partial differential equations has been of major importance to the development of many technologies and has been the target of much of the development of parallel computer hardware and software. Parallel computers offer the promise of greatly increased performance and the routine calculation of previously intractable problems. This volume contains papers on the development and assessment of new approximation and solution techniques that can take advantage of parallel computers. It will be of interest to applied mathematicians, computer scientists, and engineers concerned with investigating the state of the art and future directions in numerical computing. Topics include domain decomposition methods, parallel multi-grid methods, front tracking methods, sparse matrix techniques, adaptive methods, fictitious domain methods, and novel time and space discretizations. Applications discussed include fluid dynamics, radiative transfer, solid mechanics, and semiconductor simulation.
Dependence Analysis may be considered to be the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students, and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
With the evolution of technology and the sudden growth in the number of smart vehicles, traditional Vehicular Ad hoc NETworks (VANETs) face several technical challenges in deployment and management due to limited flexibility and scalability, poor connectivity, and inadequate intelligence. VANETs have attracted increasing attention from both academia and industry as a result of their important role in driver assistance systems. Vehicular Ad Hoc Networks focuses on recent advanced technologies and applications that address network protocol design, low latency networking, context-aware interaction, energy efficiency, resource management, security, human-robot interaction, assistive technology and robots, application development, and integration of multiple systems that support Vehicular Networks and smart interactions. Simulation is a key tool for the design and evaluation of Intelligent Transport Systems (ITS) that take advantage of communication-capable vehicles in order to provide valuable safety, traffic management, and infotainment services. It is widely recognized that simulation results are only significant when realistic models are considered within the simulation tool chain. However, quite often research works on the subject are based on simplistic models unable to capture the unique characteristics of vehicular communication networks. The support that different simulation tools offer for such models is discussed, as well as the steps that must be undertaken to fine-tune the model parameters in order to gather realistic results. Moreover, the book provides handy hints and references to help determine the most appropriate tools and models. This book will promote best simulation practices in order to obtain accurate results.
Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design." Subsequently he invented the concept of self-stabilization relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity. In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.
A comprehensive overview of the current evolution of research in algorithms, architectures and compilation for parallel systems is provided by this publication. The contributions focus specifically on domains where embedded systems are required, oriented either to application-specific or to programmable realisations. These are crucial in domains such as audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multimedia, radar and sonar. The book will be of particular interest to the academic community because of the detailed descriptions of research results presented. In addition, many contributions feature the "real-life" applications that are responsible for driving research, and the impact of their specific characteristics on the methodologies is assessed. The publication will also be of considerable value to senior design engineers and CAD managers in the industrial arena, who wish either to anticipate the evolution of commercially available design tools or to utilize the presented concepts in their own R&D programmes.
The Rust programming language is extremely well-suited for concurrency, and its ecosystem offers many libraries of concurrent data structures, locks, and more. But implementing those structures correctly can be very difficult. Even in the most well-used libraries, memory ordering bugs are not uncommon. In this practical book, Mara Bos, leader of the Rust library team, helps Rust programmers of all levels gain a clear understanding of low-level concurrency. You'll learn everything about atomics and memory ordering and how they're combined with basic operating system APIs to build common primitives like mutexes and condition variables. Once you're done, you'll have a firm grasp of how Rust's memory model, the processor, and the role of the operating system all fit together. With this guide, you'll learn:
- How Rust's type system works exceptionally well for programming concurrency correctly
- All about mutexes, condition variables, atomics, and memory ordering
- What happens in practice with atomic operations on Intel and ARM processors
- How locks are implemented with support from the operating system
- How to write correct code that includes concurrency, atomics, and locks
- How to build your own locking and synchronization primitives correctly
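The book's own examples are in Rust; purely as a hedged illustration of the same underlying idea (an atomic operation plus acquire/release memory ordering used to build a lock), here is a minimal C++ sketch of a spinlock, not code from the book:

```cpp
#include <atomic>
#include <thread>
#include <iostream>

// Minimal spinlock sketch (illustrative only, not production code).
// lock(): spin until we swap false -> true; acquire ordering ensures the
// critical section's reads happen-after the previous holder's writes.
// unlock(): release ordering publishes the critical section's writes.
class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        while (locked.exchange(true, std::memory_order_acquire)) {
            // busy-wait; a real lock would yield or park the thread via the OS
        }
    }
    void unlock() { locked.store(false, std::memory_order_release); }
};

int main() {
    SpinLock m;
    long counter = 0;
    auto work = [&] {
        for (int i = 0; i < 100000; ++i) { m.lock(); ++counter; m.unlock(); }
    };
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    std::cout << counter << "\n";  // prints 200000: no lost updates
}
```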
The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector-computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
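As a hedged illustration of what a skewing scheme is (the scheme below is the classic linear skew over a prime number of banks, not necessarily one analyzed in the book): storing element (i, j) in bank (i + j) mod M lets any row, column, or diagonal of a small matrix be fetched in parallel, since its elements land in distinct banks.

```cpp
#include <iostream>
#include <set>

// Classic linear skewing scheme: matrix element (i, j) is stored in
// memory bank (i + j) mod M. With M prime and M >= n, the n elements
// of any row, column, or main diagonal occupy n distinct banks.
constexpr int M = 5;  // number of memory banks (prime)
constexpr int n = 4;  // matrix dimension

int bank(int i, int j) { return (i + j) % M; }

// Walk n elements along direction (di, dj) from (0, 0) and check that
// every element maps to a different bank (i.e., the access is conflict-free).
bool conflict_free(int di, int dj) {
    std::set<int> banks;
    for (int k = 0; k < n; ++k) banks.insert(bank(k * di, k * dj));
    return banks.size() == n;
}

int main() {
    std::cout << "row conflict-free: "      << conflict_free(0, 1) << "\n";  // 1
    std::cout << "column conflict-free: "   << conflict_free(1, 0) << "\n";  // 1
    std::cout << "diagonal conflict-free: " << conflict_free(1, 1) << "\n";  // 1
}
```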
In view of the growing presence and popularity of multicore and manycore processors, accelerators, and coprocessors, as well as clusters using such computing devices, the development of efficient parallel applications has become a key challenge to be able to exploit the performance of such systems. This book covers the scope of parallel programming for modern high performance computing systems. It first discusses selected and popular state-of-the-art computing devices and systems available today. These include multicore CPUs, manycore (co)processors, such as Intel Xeon Phi, accelerators, such as GPUs, and clusters, as well as programming models supported on these platforms. It next introduces parallelization through important programming paradigms, such as master-slave, geometric Single Program Multiple Data (SPMD) and divide-and-conquer. The practical and useful elements of the most popular and important APIs for programming parallel HPC systems are discussed, including MPI, OpenMP, Pthreads, CUDA, OpenCL, and OpenACC. It also demonstrates, through selected code listings, how these APIs can be used to implement important programming paradigms. Furthermore, it shows how the codes can be compiled and executed in a Linux environment. The book also presents hybrid codes that integrate selected APIs for potentially multi-level parallelization and utilization of heterogeneous resources, and it shows how to use modern elements of these APIs. Selected optimization techniques are also included, such as overlapping communication and computations implemented using various APIs. Features:
- Discusses the popular and currently available computing devices and cluster systems
- Includes typical paradigms used in parallel programs
- Explores popular APIs for programming parallel applications
- Provides code templates that can be used for implementation of paradigms
- Provides hybrid code examples allowing multi-level parallelization
- Covers the optimization of parallel programs
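As a hedged sketch of the geometric SPMD paradigm described above (the decomposition and names are illustrative, not a listing from the book): every MPI rank runs the same program, works on its own slice of the data, and a reduction combines the partial results.

```cpp
#include <mpi.h>
#include <iostream>

// Geometric SPMD sketch: each rank owns a contiguous slice of the global
// index range, computes a partial sum, and MPI_Reduce combines the parts.
// Build and run (Linux): mpicxx spmd_sum.cpp -o spmd_sum && mpirun -np 4 ./spmd_sum
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1000000;        // global problem size
    int chunk = N / size;         // assume size divides N, for simplicity
    int begin = rank * chunk;

    double local = 0.0;
    for (int i = begin; i < begin + chunk; ++i)
        local += 1.0;             // stand-in for real per-element work

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::cout << "sum = " << global << "\n";  // 1e+06
    MPI_Finalize();
}
```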
GPU Parallel Program Development using CUDA teaches GPU programming by showing the differences among different families of GPUs. This approach prepares the reader for the next generation and future generations of GPUs. The book emphasizes concepts that will remain relevant for a long time, rather than concepts that are platform-specific. At the same time, the book also provides platform-dependent explanations that are as valuable as generalized GPU concepts. The book consists of three separate parts; it starts by explaining parallelism using CPU multi-threading in Part I. A few simple programs are used to demonstrate the concept of dividing a large task into multiple parallel sub-tasks and mapping them to CPU threads. Multiple ways of parallelizing the same task are analyzed and their pros/cons are studied in terms of both core and memory operation. Part II of the book introduces GPU massive parallelism. The same programs are parallelized on multiple Nvidia GPU platforms and the same performance analysis is repeated. Because the core and memory structures of CPUs and GPUs are different, the results differ in interesting ways. The end goal is to make programmers aware of all the good ideas, as well as the bad ideas, so readers can apply the good ideas and avoid the bad ideas in their own programs. Part III of the book provides pointers for readers who want to expand their horizons. It provides a brief introduction to popular CUDA libraries (such as cuBLAS, cuFFT, NPP, and Thrust), the OpenCL programming language, an overview of GPU programming using other programming languages and API libraries (such as Python, OpenCV, OpenGL, and Apple's Swift and Metal), and the deep learning library cuDNN.
How do you detangle a monolithic system and migrate it to a microservice architecture? How do you do it while maintaining business-as-usual? As a companion to Sam Newman's extremely popular Building Microservices, this new book details a proven method for transitioning an existing monolithic system to a microservice architecture. With many illustrative examples, insightful migration patterns, and a bevy of practical advice to transition your monolith enterprise into a microservice operation, this practical guide covers multiple scenarios and strategies for a successful migration, from initial planning all the way through application and database decomposition. You'll learn several tried and tested patterns and techniques that you can use as you migrate your existing architecture.
- Ideal for organizations looking to transition to microservices, rather than rebuild
- Helps companies determine whether to migrate, when to migrate, and where to begin
- Addresses communication, integration, and the migration of legacy systems
- Discusses multiple migration patterns and where they apply
- Provides database migration examples, along with synchronization strategies
- Explores application decomposition, including several architectural refactoring patterns
- Delves into details of database decomposition, including the impact of breaking referential and transactional integrity, new failure modes, and more
It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could address a large name space directly eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.
Shared Memory Application Programming presents the key concepts and applications of parallel programming, in an accessible and engaging style applicable to developers across many domains. Multithreaded programming is today a core technology, at the basis of software development projects in many branches of applied computer science. This book guides readers to develop insights about threaded programming and introduces two popular platforms for multicore development: OpenMP and Intel Threading Building Blocks (TBB). Author Victor Alessandrini leverages his rich experience to explain each platform's design strategies, analyzing the focus and strengths underlying their often complementary capabilities, as well as their interoperability. The book is divided into two parts: the first develops the essential concepts of thread management and synchronization, discussing the way they are implemented in native multithreading libraries (Windows threads, Pthreads) as well as in the modern C++11 threads standard. The second provides an in-depth discussion of TBB and OpenMP, including the latest features in OpenMP 4.0 extensions, to ensure readers' skills are fully up to date. The focus progressively shifts from traditional thread parallelism to the task parallelism deployed by modern programming environments. Several chapters include examples drawn from a variety of disciplines, including molecular dynamics and image processing, with full source code and a software library incorporating a number of utilities that readers can adapt into their own projects.
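As a hedged sketch of the shared-memory style one of the book's two platforms supports (illustrative only, not a listing from the book): an OpenMP parallel loop with a reduction clause, where the runtime manages the thread team and each thread's private partial sum.

```cpp
#include <iostream>
#include <vector>

// OpenMP shared-memory sketch: the pragma splits the loop across a team
// of threads; reduction(+:sum) gives each thread a private partial sum
// and combines them at the end of the parallel region.
// Build (Linux/GCC): g++ -fopenmp dot.cpp -o dot
int main() {
    const int N = 1 << 20;
    std::vector<double> a(N, 1.0), b(N, 2.0);

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; ++i)
        sum += a[i] * b[i];

    std::cout << "dot = " << sum << "\n";  // prints 2097152 (2 * 2^20)
}
```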
If you need to learn CUDA but don't have experience with parallel computing, "CUDA Programming: A Developer's Introduction" offers a detailed guide to CUDA with a grounding in parallel fundamentals. It starts by introducing CUDA and bringing you up to speed on GPU parallelism and hardware, then delves into CUDA installation. Chapters on core concepts, including threads, blocks, grids, and memory, focus on both parallel and CUDA-specific issues. Later, the book demonstrates CUDA in practice for optimizing applications, adjusting to new hardware, and solving common problems.
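As a hedged illustration of the threads/blocks/grids model such chapters teach (the standard vector-add sketch, not code from the book):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Each CUDA thread computes one element; blockIdx, blockDim, and
// threadIdx together map a thread onto a global index.
// Build (Linux): nvcc vadd.cu -o vadd
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];   // guard: the grid may overshoot n
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);    // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                         // threads per block
    int blocks = (n + threads - 1) / threads;  // blocks in the grid
    vadd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);     // prints 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
}
```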
Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of the most popular and cutting-edge programming models for parallel programming: Threading Building Blocks and Cilk Plus. These architecture-independent models enable easy integration into existing applications, preserve investments in existing code, and speed the development of parallel applications. Examples from realistic contexts illustrate patterns and themes in parallel algorithm design that are widely applicable regardless of implementation technology.
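As a hedged sketch of the map pattern in one of the two models the book uses, Threading Building Blocks (illustrative, not a listing from the book): the same elemental function is applied to every element, and the library partitions the index range across worker threads.

```cpp
#include <tbb/parallel_for.h>
#include <vector>
#include <iostream>

// The map pattern with TBB: no ordering or shared writes between
// iterations, so the library is free to run them in parallel.
int main() {
    const size_t n = 1 << 20;
    std::vector<float> x(n, 3.0f), y(n, 0.0f);

    tbb::parallel_for(size_t(0), n, [&](size_t i) {
        y[i] = 2.0f * x[i] + 1.0f;  // elemental function applied per index
    });

    std::cout << "y[0] = " << y[0] << "\n";  // prints 7
}
```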
NB-IoT is the Internet of Things (IoT) technology used for cellular communication. NB-IoT devices deliver much better capability and performance, such as increased area coverage of up to one kilometer, a massive number of devices (up to 200,000) per base-station area, battery lifetime of up to ten years, and better indoor and outdoor coverage for areas with weak signal, such as underground garages. Cellular NB-IoT is a challenging technology to use and understand. With more than 30 projects presented in this book, covering many use cases and scenarios, this book provides hands-on and practical experience of how to use cellular NB-IoT for smart applications using Arduino(TM), Amazon Cloud, Google Maps, and charts. The book starts by explaining AT commands used to configure the NB-IoT modem; data serialization and deserialization; how to set up the cloud for connecting NB-IoT devices; setting up rules, policy, security certificates, and a NoSQL database on the cloud; how to store and read data in the cloud; how to use Google Maps to visualize NB-IoT device geo-location; and how to use charts to visualize sensor datasets. Projects for Arduino are presented in four parts. The first part explains how to connect the device to the mobile operator and cellular network; perform communication using different network protocols, such as TCP, HTTP, SSL, or MQTT; how to use GPS for geo-location applications; and how to upgrade NB-IoT modem firmware over the air. The second part explains the microcontroller unit and how to build and run projects, such as a 7-segment display or a real-time clock. The third part explains how NB-IoT can be used with sensor devices, such as ultrasonic and environmental sensors. Finally, the fourth part explains how NB-IoT can be used to control actuators, such as stepper motors and relays. This book is a unique resource for understanding practical uses of the NB-IoT technology and serves as a handbook for technical and non-technical readers who want hands-on practice with the cellular NB-IoT technology. The book can be used by engineers, students, researchers, system integrators, mobile operators' technical staff, and electronics enthusiasts. To download the software which can be used with the book, go to: https://github.com/5ghub/NB-IoT

About the Author: Hossam Fattah is a technology expert in 4G/5G wireless systems and networking. He received his Ph.D. in Electrical and Computer Engineering from University of British Columbia, Vancouver, Canada in 2003. He received his Master of Applied Science in Electrical and Computer Engineering from University of Victoria, Victoria, Canada in 2000. He completed his B.Sc. degree in Computers and Systems Engineering from Al-Azhar University, Cairo, Egypt in 1995. Between 2003 and 2011, he was in academia and industry, including Texas A&M University. Between 2011 and 2013, he was with Spirent Communications, NJ, USA. Since 2013, he has been with Microsoft, USA. He is also an affiliate associate professor at University of Washington, Tacoma, WA, USA, teaching graduate courses on IoT and distributed systems and collaborating on 5G research and innovations. He holds many patents and has technical publications in conferences and journals. He is a registered Professional Engineer with the Association of Professional Engineers, British Columbia, Canada. He is the author of the recent book 5G LTE Narrowband Internet of Things (NB-IoT).
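As a hedged illustration of the AT-command workflow the book opens with (the serial wiring, delays, and board are assumptions for this sketch; the commands shown are standard 3GPP modem commands, not necessarily the book's exact sequence):

```cpp
// Arduino sketch (C++): send basic AT commands to an NB-IoT modem over a
// UART and echo the replies to the USB console. Assumes the modem is wired
// to Serial1 (boards with a second hardware UART); adjust for your setup.
void sendAT(const char* cmd) {
  Serial1.println(cmd);          // AT commands are terminated by CR/LF
  delay(500);                    // crude wait; real code would parse "OK"/"ERROR"
  while (Serial1.available()) Serial.write(Serial1.read());
}

void setup() {
  Serial.begin(115200);          // USB console
  Serial1.begin(9600);           // NB-IoT modem UART (baud rate is modem-specific)
  sendAT("AT");                  // attention: expect "OK"
  sendAT("AT+CSQ");              // signal quality
  sendAT("AT+CGATT?");           // packet-domain attach status
  sendAT("AT+CEREG?");           // network registration status
}

void loop() {}
```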
His research interests include wireless communications, radio networks and protocols, cellular quality of service, radio resource management, traffic and packet scheduling, network analytics, and mobility.
The book "Parallel Computing" deals with the topics of current interest in high performance computing, viz. pipeline and parallel processing architectures, and the whole book is based on treatment of these ideas. The present revised edition is updated with the addition of topics like processor performance and technology developments in chapter 1 and advanced pipeline processing on today's high performance processors in chapter 2. A new chapter on neurocomputing and two new sections on Branch prediction and scoreboard are the other major changes done to make the book more viable.
Focusing on algorithms for distributed-memory parallel architectures, Parallel Algorithms presents a rigorous yet accessible treatment of theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and essential notions of scheduling. The book extracts fundamental ideas and algorithmic principles from the mass of parallel algorithm expertise and practical implementations developed over the last few decades. In the first section of the text, the authors cover two classical theoretical models of parallel computation (PRAMs and sorting networks), describe network models for topology and performance, and define several classical communication primitives. The next part deals with parallel algorithms on ring and grid logical topologies as well as the issue of load balancing on heterogeneous computing platforms. The final section presents basic results and approaches for common scheduling problems that arise when developing parallel algorithms. It also discusses advanced scheduling topics, such as divisible load scheduling and steady-state scheduling. With numerous examples and exercises in each chapter, this text encompasses both the theoretical foundations of parallel algorithms and practical parallel algorithm design.
Build and use systems that safely automate software delivery from testing through release with this jargon-busting guide to Continuous Delivery pipelines. In Grokking Continuous Delivery you will learn how to:
- Design effective CD pipelines for new and legacy projects
- Keep your software projects release-ready
- Maintain effective tests
- Scale CD across multiple applications
- Ensure pipelines give the right signals at the right time
- Use version control as the source of truth
- Safely automate deployments with metrics
- Describe CD in a way that makes sense to your colleagues

Grokking Continuous Delivery teaches you the design and purpose of continuous delivery systems that you can use with any language or stack. You'll learn directly from your mentor Christie Wilson, Google engineer and co-creator of the Tekton CI/CD framework. Using crystal-clear, well-illustrated examples, Christie lays out the practical nuts and bolts of continuous delivery for developers and pipeline designers. In each chapter, you'll uncover the proper approaches to solve the real-world challenges of setting up a CD pipeline. With this book as your roadmap, you'll have a clear plan for bringing CD to your team without the need for costly trial-and-error experimentation. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology: Keep your codebase release-ready. A continuous delivery pipeline automates version control, testing, and deployment with minimal developer intervention. Master the tools and practices of continuous delivery, and you'll be able to add features and push updates quickly and consistently.

About the book: Grokking Continuous Delivery is a friendly guide to setting up and working with a continuous delivery pipeline. Each chapter takes on a different scenario you'll face when setting up a CD system, with real-world examples like automated scaling and testing legacy applications. Taking a tool-agnostic approach, author Christie Wilson guides you each step of the way with illustrations, crystal-clear explanations, and practical exercises to lock in what you're learning.

What's inside:
- Design effective CD pipelines for new and legacy projects
- Ensure your pipelines give the right signals at the right times
- Version control as the source of truth
- Safely automate deployments

About the reader: For software engineers who want to add CD to their development process.

About the author: Christie Wilson is a software engineer at Google, where she co-created Tekton, a cloud-native CI/CD platform built on Kubernetes.

Table of Contents:
PART 1 Introducing continuous delivery
1 Welcome to Grokking Continuous Delivery
2 A basic pipeline
PART 2 Keeping software in a deliverable state at all times
3 Version control is the only way to roll
4 Use linting effectively
5 Dealing with noisy tests
6 Speeding up slow test suites
7 Give the right signals at the right times
PART 3 Making delivery easy
8 Easy delivery starts with version control
9 Building securely and reliably
10 Deploying confidently
PART 4 CD design
11 Starter packs: From zero to CD
12 Scripts are code, too
13 Pipeline design