In view of the growing presence and popularity of multicore and manycore processors, accelerators, and coprocessors, as well as clusters built from such computing devices, the development of efficient parallel applications has become a key challenge for exploiting the performance of such systems. This book covers the scope of parallel programming for modern high performance computing systems. It first discusses selected and popular state-of-the-art computing devices and systems available today. These include multicore CPUs, manycore (co)processors such as Intel Xeon Phi, accelerators such as GPUs, and clusters, as well as the programming models supported on these platforms. It next introduces parallelization through important programming paradigms, such as master-slave, geometric Single Program Multiple Data (SPMD), and divide-and-conquer. The practical and useful elements of the most popular and important APIs for programming parallel HPC systems are discussed, including MPI, OpenMP, Pthreads, CUDA, OpenCL, and OpenACC. The book demonstrates, through selected code listings, how these APIs can be used to implement important programming paradigms, and it shows how the codes can be compiled and executed in a Linux environment. It also presents hybrid codes that integrate selected APIs for potentially multi-level parallelization and utilization of heterogeneous resources, and shows how to use modern elements of these APIs. Selected optimization techniques are included as well, such as overlapping communication and computation, implemented using various APIs. Features: discusses popular, currently available computing devices and cluster systems; includes typical paradigms used in parallel programs; explores popular APIs for programming parallel applications; provides code templates that can be used to implement these paradigms; provides hybrid code examples allowing multi-level parallelization; covers the optimization of parallel programs.
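As a flavor of the paradigms the book covers, here is a minimal master-slave sketch using the mpi4py bindings (an illustration of the general pattern with an invented toy workload, not a listing from the book):

```python
# Minimal master-slave pattern with mpi4py (illustrative sketch).
# Assumes more tasks than workers. Run with: mpiexec -n 4 python master_slave.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TASK_TAG, RESULT_TAG = 1, 2

if rank == 0:
    # Master: hand out tasks, collect results.
    tasks = list(range(20))           # toy work items
    results = []
    for worker in range(1, size):     # seed each worker with one task
        comm.send(tasks.pop(), dest=worker, tag=TASK_TAG)
    while tasks:                      # reassign work as results come back
        status = MPI.Status()
        res = comm.recv(source=MPI.ANY_SOURCE, tag=RESULT_TAG, status=status)
        results.append(res)
        comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK_TAG)
    for worker in range(1, size):     # drain the last outstanding results
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=RESULT_TAG))
    for worker in range(1, size):     # send the stop signal
        comm.send(None, dest=worker, tag=TASK_TAG)
    print(sorted(results))
else:
    # Worker: process tasks until the stop signal (None) arrives.
    while True:
        task = comm.recv(source=0, tag=TASK_TAG)
        if task is None:
            break
        comm.send(task * task, dest=0, tag=RESULT_TAG)   # toy computation
```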
From the Foreword: "The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems, will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere." - Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA "This comprehensive summary of applications targeting Exascale at the three DoE labs is a must-read." - Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan "Numerical simulation is now a need in many fields of science, technology, and industry. The complexity of the simulated systems coupled with the massive use of data makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far permitted scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink the applications by taking energy consumption into account. These profound modifications require not only anticipation and sharing but also a paradigm shift in application design to ensure the sustainability of developments by guaranteeing a certain independence of the applications from the profound modifications of the architectures: it is the passage from optimal performance to the portability of performance. It is the challenge of this book to demonstrate by example the approach that one can adopt for the development of applications offering performance portability in spite of the profound changes of the computing architectures." - Christophe Calvin, CEA, Fundamental Research Division, Saclay, France "Three editors, one from each of the High Performance Computer Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next-generation exascale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students, as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exascale architectures which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues, including actual coding examples." - Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA
This multi-contributed handbook focuses on the latest workings of IoT (Internet of Things) and Big Data. As such resources are limited, the authors' endeavor is to support readers by bringing the information together into one resource. The book is divided into four sections that cover IoT and technologies, the future of Big Data, algorithms, and case studies showing IoT and Big Data in various fields such as health care, manufacturing, and automation. Features: focuses on the latest workings of IoT and Big Data; discusses the emerging role of technologies and the fast-growing market of Big Data; covers the movement toward automation with hardware, software, and sensors, and the effort to conserve energy resources; offers the latest technology on IoT; presents the future horizons of Big Data.
The Future of Numerical Computing. Written by one of the foremost experts in high-performance computing and the inventor of Gustafson's Law, The End of Error: Unum Computing explains a new approach to computer arithmetic: the universal number (unum). The unum encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This new number type obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power. A Complete Revamp of Computer Arithmetic from the Ground Up. Richly illustrated in color, this groundbreaking book represents a fundamental change in how to perform calculations automatically. It illustrates how this novel approach can solve problems that have vexed engineers and scientists for decades, including problems that have been historically limited to serial processing. Suitable for Anyone Using Computers for Calculations. The book is accessible to anyone who uses computers for technical calculations, with much of the book requiring only high school math. The author makes the mathematics interesting through numerous analogies. He clearly defines jargon and uses color-coded boxes for mathematical formulas, computer code, important descriptions, and exercises.
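For context, the kind of floating-point surprise that motivates the unum proposal can be seen in a few lines of Python (a generic illustration of IEEE 754 rounding error, not an example from the book):

```python
# IEEE 754 doubles cannot represent 0.1 exactly, so accumulated
# rounding error makes a mathematically true equality fail.
total = sum(0.1 for _ in range(10))
print(total)            # 0.9999999999999999
print(total == 1.0)     # False

# Exact rational arithmetic avoids the problem at a cost in speed and
# memory, one of the trade-offs unum arithmetic aims to rebalance.
from fractions import Fraction
exact = sum(Fraction(1, 10) for _ in range(10))
print(exact == 1)       # True
```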
Combinatorial Scientific Computing explores the latest research on creating algorithms and software tools to solve key combinatorial problems on large-scale high-performance computing architectures. It includes contributions from international researchers who are pioneers in designing software and applications for high-performance computing systems. The book offers a state-of-the-art overview of the latest research, tool development, and applications. It focuses on load balancing and parallelization on high-performance computers, large-scale optimization, algorithmic differentiation of numerical simulation code, sparse matrix software tools, and combinatorial challenges and applications in large-scale social networks. The authors unify these seemingly disparate areas through a common set of abstractions and algorithms based on combinatorics, graphs, and hypergraphs. Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations and their importance continues to grow with the demands of new applications and advanced architectures. By addressing current challenges in the field, this volume sets the stage for the accelerated development and deployment of fundamental enabling technologies in high-performance scientific computing.
Exploring how concurrent programming can be assisted by language-level techniques, Introduction to Concurrency in Programming Languages presents high-level language techniques for dealing with concurrency in a general context. It provides an understanding of programming languages that offer concurrency features as part of the language definition. The book supplies a conceptual framework for different aspects of parallel algorithm design and implementation. It first addresses the limitations of traditional programming techniques and models when dealing with concurrency. The book then explores the current state of the art in concurrent programming and describes high-level language constructs for concurrency. It also discusses the historical evolution of hardware, corresponding high-level techniques that were developed, and the connection to modern systems, such as multicore and manycore processors. The remainder of the text focuses on common high-level programming techniques and their application to a range of algorithms. The authors offer case studies on genetic algorithms, fractal generation, cellular automata, game logic for solving Sudoku puzzles, pipelined algorithms, and more. Illustrating the effect of concurrency on programs written in familiar languages, this text focuses on novel language abstractions that truly bring concurrency into the language and aid analysis and compilation tools in generating efficient, correct programs. It also explains the complexity involved in taking advantage of concurrency with regard to program correctness and performance.
Scientific Simulations with Special-Purpose Computers: The GRAPE Systems, by J. Makino (University of Tokyo, Japan) and M. Taiji (Institute for Statistical Mathematics, Tokyo, Japan). Physics is full of complex many-body problems, i.e. problems in which a large number of bodies interact. This is particularly true in astrophysics, where stars or galaxies can be thought of as individual particles, but also in plasma physics, hydrodynamics, and molecular dynamics. Special-purpose computers have been developed to handle these highly complex problems. Scientific Simulations with Special-Purpose Computers gives an overview of these systems, then focuses on an extremely high-profile and successful project, the GRAPE computer at the University of Tokyo, and discusses its development, performance, and applications across a range of problems. The future development and applications of special-purpose computers are also discussed. Written by two of the leading developers of the GRAPE system, this unique volume will be of great interest to readers across a wide range of fields, including astrophysicists, astronomers, plasma physicists, researchers in molecular dynamics, and computer scientists.
A bold, visionary, and mind-bending exploration of how the geometry of chaos can explain our uncertain world - from weather and pandemics to quantum physics and free will Covering a breathtaking range of topics - from climate change to the foundations of quantum physics, from economic modelling to conflict prediction, from free will to consciousness and spirituality - The Primacy of Doubt takes us on a unique journey through the science of uncertainty. A key theme that unifies these seemingly unconnected topics is the geometry of chaos: the beautiful and profound fractal structures that lie at the heart of much of modern mathematics. Royal Society Research Professor Tim Palmer shows us how the geometry of chaos not only provides the means to predict the world around us but also suggests new insights into some of the most astonishing aspects of our universe and ourselves. This important and timely book helps the reader make sense of uncertainty in a rapidly changing world.
As more and more data is generated at a faster-than-ever rate, processing large volumes of data is becoming a challenge for data analysis software. Addressing performance issues, Cloud Computing: Data-Intensive Computing and Scheduling explores the evolution of classical techniques and describes completely new methods and innovative algorithms. The book delineates many concepts, models, methods, algorithms, and software used in cloud computing. After a general introduction to the field, the text covers resource management, including scheduling algorithms for real-time tasks and practical algorithms for user bidding and auctioneer pricing. It next explains approaches to data analytical query processing, including pre-computing, data indexing, and data partitioning. Applications of MapReduce, a new parallel programming model, are then presented. The authors also discuss how to optimize multiple group-by query processing and introduce a MapReduce real-time scheduling algorithm. A useful reference for studying and using MapReduce and cloud computing platforms, this book presents various technologies that demonstrate how cloud computing can meet business requirements and serve as the infrastructure of multidimensional data analysis applications.
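To make the MapReduce model concrete, here is a toy word count that mimics the map, shuffle, and reduce phases in a single process (real frameworks distribute these phases across a cluster; the sample documents are invented for the example):

```python
# Toy word count illustrating MapReduce's three phases in one process.
from collections import defaultdict

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map: emit (key, value) pairs from each input record.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group all values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: fold each key's values into a final result.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)   # {'the': 3, 'quick': 2, 'brown': 1, ...}
```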
This practical book presents fundamental concepts and issues in computer modeling and simulation (M&S) in a simple and practical way for engineers, scientists, and managers who wish to apply simulation successfully to their real-world problems. It offers a concise approach to the coverage of generic (tool-independent) M&S concepts and enables engineering practitioners to easily learn, evaluate, and apply various available simulation concepts. Worked-out examples are included to illustrate the concepts, and an example modeling application is continued throughout the chapters to demonstrate the techniques. The book discusses modeling purposes, scoping a model, levels of modeling abstraction, the benefits and cost of including randomness, types of simulation, and statistical techniques. It also includes a chapter on modeling and simulation projects and how to conduct them for customer and engineer benefit, and covers the stages of a modeling and simulation study, including process and system investigation, data collection, model scoping and production, model verification and validation, experimentation, and analysis of results.
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields, such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators. It also covers emerging 3D IC design principles for memory architectures and devices. The second section of the book illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering. Features: covers a wide range of Big Data architectures, including distributed systems like Hadoop/Spark; includes accelerator-based approaches for big data applications, such as GPU-based acceleration techniques and hardware acceleration with FPGAs/CGRAs/ASICs; presents emerging memory architectures and devices, such as NVM and STT-RAM, and 3D IC design principles; describes advanced algorithms for different big data application domains; illustrates novel analytics techniques for Big Data applications, along with scheduling, mapping, and partitioning methodologies. Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for big data applications. About the Editor: Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and the International Journal of Electronics. Dr. Chao Wang was a recipient of the Youth Innovation Promotion Association of CAS, the ACM China Rising Star Honorable Mention (2016), and a best IP nomination at DATE 2015. He serves on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, CCF, and ACM.
This book constitutes the refereed proceedings of the workshops held at the 37th International ISC High Performance 2022 Conference, in Hamburg, Germany, on June 2, 2022. The 27 full papers included in this book were carefully reviewed and selected from 43 submissions. ISC High Performance 2022 presented the following workshops: Compiler-assisted Correctness Checking and Performance Optimization for HPC; HPC on Heterogeneous Hardware (H3); Malleability Techniques Applications in High Performance Computing; Fifth Workshop on Interactive High Performance Computing; 3rd ISC HPC International Workshop on Monitoring & Operational Data Analytics; 6th International Workshop on In Situ Visualization; and the 17th Workshop on Virtualization in High Performance Cloud Computing.
Read this if you want to understand how to shape our technological future and reinvigorate democracy along the way. -- Reed Hastings, co-founder and CEO of Netflix __________ A forward-thinking manifesto from three Stanford professors which reveals how big tech's obsession with optimization and efficiency has sacrificed fundamental human values and outlines steps we can take to change course, renew our democracy, and save ourselves. __________ In no more than the blink of an eye, a naive optimism about technology's liberating potential has given way to a dystopian obsession with biased algorithms, surveillance capitalism, and job-displacing robots. Yet too few of us see any alternative to accepting the onward march of technology. We have simply accepted a technological future designed for us by technologists, the venture capitalists who fund them, and the politicians who give them free rein. It doesn't need to be this way. System Error exposes the root of our current predicament: how big tech's relentless focus on optimization is driving a future that reinforces discrimination, erodes privacy, displaces workers, and pollutes the information we get. Armed with an understanding of how technologists think and exercise their power, three Stanford professors - a philosopher working at the intersection of tech and ethics, a political scientist who served under Obama, and the director of the undergraduate Computer Science program at Stanford (also an early Google engineer) - reveal how we can hold that power to account. As the dominance of big tech becomes an explosive societal conundrum, they share their provocative insights and concrete solutions to help everyone understand what is happening, what is at stake, and what we can do to control technology instead of letting it control us.
Over the past several decades, applications permeated by advances in digital signal processing have undergone unprecedented growth in capabilities. The editors and authors of High Performance Embedded Computing Handbook: A Systems Perspective have been significant contributors to this field, and the principles and techniques presented in the handbook are reinforced by examples drawn from their work. The chapters cover system components found in today's HPEC systems by addressing design trade-offs, implementation options, and techniques of the trade, then solidifying the concepts with specific HPEC system examples. This approach provides a more valuable learning tool, because readers learn about these subject areas through factual implementation cases drawn from the contributing authors' own experiences. Discussions include: key subsystems and components; computational characteristics of high performance embedded algorithms and applications; front-end real-time processor technologies such as analog-to-digital conversion, application-specific integrated circuits, field programmable gate arrays, and intellectual property-based design; programmable HPEC systems technology, including interconnection fabrics, parallel and distributed processing, performance metrics and software architecture, and automatic code parallelization and optimization; examples of complex HPEC systems representative of actual prototype developments; and application examples, including radar, communications, electro-optical, and sonar applications. The handbook is organized around a canonical framework that helps readers navigate through the chapters, and it concludes with a discussion of future trends in HPEC systems. The material is covered at a level suitable for practicing engineers and HPEC computational practitioners and is easily adaptable to their own implementation requirements.
The book discusses the fundamentals of high-performance computing. The authors combine visual presentation, comprehensibility, and rigor in their exposition, steering the reader toward practical application and learning how to solve real computing problems. They address both key approaches to programming modern computing systems: multithreading-based parallelization in shared-memory systems, and message-passing technologies in distributed systems. The book is suitable for undergraduate and graduate students, and for researchers and practitioners engaged with high-performance computing systems. Each chapter begins with a theoretical part, where the relevant terminology is introduced along with the basic theoretical results and methods of parallel programming, and concludes with a list of test questions and problems of varying difficulty. The authors include many solutions and hints, and often sample code.
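As a taste of the data-parallel style such books teach, here is a minimal sketch in Python using a process pool (process-based rather than thread-based, since CPython threads do not execute bytecode in parallel; the workload function is an invented stand-in):

```python
# Parallel map over independent work items using a process pool.
from multiprocessing import Pool

def f(x):
    # Stand-in for an expensive, independent computation.
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(f, range(16))   # work split across 4 processes
    print(results)
```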
Intel Xeon Phi Processor High Performance Programming is an all-in-one source of information for programming the second-generation Intel Xeon Phi product family, also called Knights Landing. The authors provide detailed and timely Knights Landing-specific details, programming advice, and real-world examples. The authors distill their years of Xeon Phi programming experience coupled with insights from many expert customers - Intel Field Engineers, Application Engineers, and Technical Consulting Engineers - to create this authoritative book on the essentials of programming for Intel Xeon Phi products. Intel(R) Xeon Phi(TM) Processor High Performance Programming is useful even before you ever program a system with an Intel Xeon Phi processor. To help ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system, whether based on Intel Xeon processors, Intel Xeon Phi processors, or other high-performance microprocessors. Applying these techniques will generally increase your program performance on any system and better prepare you for Intel Xeon Phi processors.
This book constitutes the refereed proceedings of the 4th Russian Supercomputing Days, RuSCDays 2018, held in Moscow, Russia, in September 2018. The 59 revised full papers and one revised short paper presented were carefully reviewed and selected from 136 submissions. The papers are organized in topical sections on parallel algorithms; supercomputer simulation; and high performance architectures, tools, and technologies.
1) Provides a levelling approach, bringing students at all stages of programming experience to the same point 2) Focuses Python, a general-purpose language, on an engineering and scientific context 3) Uses a classroom-tested, practical approach to teaching programming 4) Teaches students and professionals how to use Python to solve engineering calculations such as differential and algebraic equations
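A small sketch of the kind of engineering calculation meant in point 4, solving an ordinary differential equation with SciPy (the equation dy/dt = -2y and its parameter values are illustrative choices, not taken from the book):

```python
# Solve dy/dt = -2y, y(0) = 1, on t in [0, 5] with SciPy,
# then compare against the exact solution y = exp(-2t).
import numpy as np
from scipy.integrate import solve_ivp

def decay(t, y):
    return -2.0 * y                   # right-hand side of the ODE

sol = solve_ivp(decay, t_span=(0.0, 5.0), y0=[1.0],
                t_eval=np.linspace(0.0, 5.0, 6))
for t, y in zip(sol.t, sol.y[0]):
    print(f"t={t:.1f}  y={y:.5f}  exact={np.exp(-2.0 * t):.5f}")
```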
'This colourful page-turner puts artificial intelligence into a human perspective . . . Metz explains this transformative technology and makes the quest thrilling.' Walter Isaacson, author of Steve Jobs ____________________________________________________ This is the inside story of a small group of mavericks, eccentrics and geniuses who turned Artificial Intelligence from a fringe enthusiasm into a transformative technology. It's the story of how that technology became big business, creating vast fortunes and sparking intense rivalries. And it's the story of breakneck advances that will shape our lives for many decades to come - both for good and for ill. ________________________________________________ 'One day soon, when computers are safely driving our roads and speaking to us in complete sentences, we'll look back at Cade Metz's elegant, sweeping Genius Makers as their birth story - the Genesis for an age of sentient machines.' Brad Stone, author of The Everything Store and The Upstarts 'A ringside seat at what may turn out to be the pivotal episode in human history . . . easy and fun to read . . . undeniably charming.' Forbes
A step-by-step guide that will enhance your skills in creating powerful systems to solve complex issues About This Book * Carlos R. Morrison from NASA will teach you to build a supercomputer with Raspberry Pi 3 * Deepen your understanding of setting up host nodes, configuring networks, and automating mountable drives * Learn various math, physics, and engineering applications to solve complex problems Who This Book Is For This book targets hobbyists and enthusiasts who want to explore building supercomputers with microcomputers. Researchers will also find this book useful. Prior programming knowledge is necessary; knowledge of supercomputers is not. What You Will Learn * Understand the concept of the Message Passing Interface (MPI) * Understand node networking * Configure nodes so that they can communicate with each other via the network switch * Build a Raspberry Pi 3 supercomputer * Test the supercluster * Use the supercomputer to calculate MPI pi codes * Learn various practical supercomputer applications In Detail Author Carlos R. Morrison (Staff Scientist, NASA) will empower the uninitiated reader to quickly assemble and operate a Pi 3 supercomputer in the shortest possible time. The lifeblood of a supercomputer, the MPI code, is introduced early, and sample MPI code provides additional practice opportunities for you to test the effectiveness of your creation. You will learn how to configure various nodes and switches so that they can effectively communicate with each other. By the end of this book, you will have successfully built a supercomputer and the various applications related to it. Style and approach A progressive guide that will start off with serial coding and MPI concepts, moving towards configuring a complete supercluster, and solving real-world problems
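The "MPI pi code" mentioned above is the classic first program for a home-built cluster. A minimal version using the mpi4py bindings might look like the following (an illustrative sketch of the standard midpoint-rule approach, not the book's own listing):

```python
# Estimate pi by midpoint-rule integration of 4/(1+x^2) over [0,1],
# with the interval strips divided round-robin among MPI ranks.
# Run with e.g.: mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                           # number of strips
h = 1.0 / n
local_sum = 0.0
for i in range(rank, n, size):          # each rank takes every size-th strip
    x = h * (i + 0.5)                   # midpoint of strip i
    local_sum += 4.0 / (1.0 + x * x)

pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~ {pi:.12f}")
```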
Fifteen original contributions from experts in high-speed computation on multi-processor architectures, concurrent programming, and parallel algorithms. Experts in high-speed computation agree that the rapidly growing demand for more powerful computers can only be met by a radical change in computer architecture: a change from a single serial processor to an aggregation of many processors working in parallel. At present, our knowledge about multi-processor architectures, concurrent programming, and parallel algorithms is very limited. This book discusses all three subjects in relation to the HEP supercomputer, which can handle multiple instruction streams and multiple data streams (MIMD). The HEP multiprocessor is an innovative general-purpose computer, easy to use for anybody familiar with FORTRAN. Following a preface by the editor, the book's fifteen original contributions are divided into four sections: The HEP Architecture and Systems Software; The HEP Performance; Programming and Languages; and Applications of the HEP Computer. An appendix describes the use of monitors in FORTRAN, providing a tutorial on the barrier, self-scheduling DO loop, and Askfor monitors. J. S. Kowalik, who has contributed a chapter with S. P. Kumar on "Parallel Algorithms for Recurrence and Tridiagonal Linear Equations," is a manager in Boeing Computer Services' Artificial Intelligence Center in Seattle. MIMD Computation is included in the Scientific Computation Series, edited by Dennis Gannon.
Ecosystems and Technology: Idea Generation and Content Model Processing presents important new innovations in the area of management and computing. Innovation is the generation and application of new ideas and skills to produce new products, processes, and services that improve economic and social prosperity. This includes management and design policy decisions and encompasses innovation research, analysis, and best practice in enterprises, public and private sector service organizations, government, regional societies, and economies. The book, the first volume in the Innovation Management and Computing book series, looks at technology that improves efficiency and idea generation, including systems for business, medical/health, education, and more. The book provides detailed examples that acquaint readers with current issues, including: venture planning for innovations; new technologies supporting innovation systems; competitive business modeling; context-driven innovation modeling; faster generation of ideas; the measurement of relevant data; virtual interfaces; business intelligence and content processing; predictive modeling; and haptic expression and emotion recognition innovations, with applications to neurocognitive medical science. This book provides a wealth of information that will be useful for IT and business professionals, educators, and students in many fields.