Pipelines can be challenging to manage, especially when your data has to flow through a collection of application components, servers, and cloud services. Airflow lets you schedule, restart, and backfill pipelines, and its easy-to-use UI and Python-scripted workflows have users praising its incredible flexibility. Data Pipelines with Apache Airflow takes you through best practices for creating pipelines for multiple tasks, including data lakes, cloud deployments, and data science. It teaches you the ins and outs of the Directed Acyclic Graphs (DAGs) that power Airflow, and how to write your own DAGs to meet the needs of your projects. With complete coverage of both foundational and lesser-known features, when you're done you'll be set to start using Airflow for seamless data pipeline development and management.
Key features:
* Framework foundation and best practices
* Airflow's execution and dependency system
* Testing Airflow DAGs
* Running Airflow in production
For data-savvy developers, DevOps and data engineers, and system administrators with intermediate Python skills.
About the technology: Data pipelines are used to extract, transform, and load data to and from multiple sources, routing it wherever it's needed -- whether that's visualisation tools, business intelligence dashboards, or machine learning models. Airflow streamlines the whole process, giving you one tool for programmatically developing and monitoring batch data pipelines, and integrating all the pieces you use in your data stack.
About the authors: Bas Harenslak and Julian de Ruiter are data engineers with extensive experience using Airflow to develop pipelines for major companies including Heineken, Unilever, and Booking.com. Bas is a committer, and both Bas and Julian are active contributors to Apache Airflow.
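As an illustration of the DAG concept (a plain-Python sketch, not Airflow's actual API), a pipeline's dependency structure can be expressed as a mapping from each task to its upstream tasks and resolved into a valid execution order with the standard library:

```python
from graphlib import TopologicalSorter

# A toy extract-transform-load pipeline: each task maps to the set of
# tasks it depends on. Task names here are invented for illustration.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_order(dag):
    """Return an execution order that respects every dependency edge."""
    return list(TopologicalSorter(dag).static_order())

order = run_order(pipeline)  # upstream tasks always come before downstream ones
```

A real Airflow DAG adds scheduling, retries, and operators on top, but the dependency-resolution idea is the same.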
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
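To give a flavour of the simulation-and-sampling material, here is a minimal percentile-bootstrap sketch in standard-library Python (the function name and data are invented for illustration, not drawn from the book):

```python
import random

def bootstrap_mean_ci(sample, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean:
    resample with replacement, collect the resampled means, and
    read off the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(sample, k=len(sample))) / len(sample)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Eight invented measurements with sample mean exactly 5.0
lo, hi = bootstrap_mean_ci([4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0])
```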
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which serves as a support to a wide scope of applications ranging from SVMs, through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on synthetic as well as real-world data, with MATLAB and Python code provided on the accompanying website.
Media markets are converging. Digitization and technical innovation are leading to growing interconnection and compatibility among the traditional media and communication platforms. Music, film, and TV content can be distributed over the internet or mobile telecommunications and, as digital data, is quickly available. "Triple play" and interactive offerings deliver mass and individual communication from a single source. As the markets grow together, the overall body of media-law regulations becomes increasingly important for industry participants. The book provides a structured overview of media law, the legal relationships among the parties involved, and the development of the markets. Alongside the legal aspects specific to convergence, it also addresses questions such as contract drafting and the delimitation of licence rights.
Chunyan Li is a course instructor with many years of experience in teaching time series analysis. His book is essential for students and researchers in oceanography and other Earth science subjects looking for complete coverage of the theory and practice of time series data analysis using MATLAB. This textbook covers the topic's core theory in depth and provides numerous instructional examples, many drawn directly from the author's own teaching experience, using data files, examples, and exercises. The book explores many concepts, including time; distance on Earth; wind, current, and wave data formats; finding a subset of ship-based data along planned or random transects; error propagation; Taylor series expansion for error estimates; the least squares method; base functions and linear independence of base functions; tidal harmonic analysis; Fourier series and the generalized Fourier transform; filtering techniques; sampling theorems; finite sampling effects; wavelet analysis; and EOF analysis.
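The least squares method listed above can be illustrated with a short closed-form sketch (plain Python rather than the book's MATLAB; the example data are invented):

```python
def least_squares_fit(x, y):
    """Ordinary least squares fit of y = a + b*x using the closed-form
    solution: b = Sxy / Sxx, a = mean(y) - b * mean(x)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# Noise-free points on y = 2 + 3x recover the coefficients exactly.
a, b = least_squares_fit([0, 1, 2, 3], [2, 5, 8, 11])
```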
An accessible primer on how to create effective graphics from data. This book provides students and researchers a hands-on introduction to the principles and practice of data visualization. It explains what makes some graphs succeed while others fail, how to make high-quality figures from data using powerful and reproducible methods, and how to think about data visualization in an honest and effective way. Data Visualization builds the reader's expertise in ggplot2, a versatile visualization library for the R programming language. Through a series of worked examples, this accessible primer then demonstrates how to create plots piece by piece, beginning with summaries of single variables and moving on to more complex graphics. Topics include plotting continuous and categorical variables; layering information on graphics; producing effective "small multiple" plots; grouping, summarizing, and transforming data for plotting; creating maps; working with the output of statistical models; and refining plots to make them more comprehensible. Effective graphics are essential to communicating ideas and a great way to better understand data. This book provides the practical skills students and practitioners need to visualize quantitative data and get the most out of their research findings.
* Provides hands-on instruction using R and ggplot2
* Shows how the "tidyverse" of data analysis tools makes working with R easier and more consistent
* Includes a library of data sets, code, and functions
All social and policy researchers need to synthesize data into a visual representation. Producing good visualizations combines creativity and technique. This book teaches the techniques and basics to produce a variety of visualizations, allowing readers to communicate data and analyses in a creative and effective way. Visuals for tables, time series, maps, text, and networks are carefully explained and organized, showing how to choose the right plot for the type of data being analysed and displayed. Examples are drawn from public policy, public safety, education, political tweets, and public health. The presentation proceeds step by step, starting from the basics, in the programming languages R and Python so that readers learn the coding skills while simultaneously becoming familiar with the advantages and disadvantages of each visualization. No prior knowledge of either Python or R is required. Code for all the visualizations are available from the book's website.
What is information design? Which design disciplines play a role in it? And where are the interfaces with other disciplines such as usability engineering and information architecture? This compendium offers a comprehensive introduction to the theoretical and design foundations, history, and practice of information design. The authors clearly and vividly describe the subdisciplines and fields of work of information design: from interaction design, exhibition design, and signage, through corporate design, text design, and sound design, to information didactics and information psychology. Definitions of terms, tips, and examples from practice make the Kompendium Informationsdesign a handbook for students, lecturers, and practitioners.
To lead a data science team, you need to expertly articulate technology roadmaps, support a data-driven culture, and plan a data strategy that drives a competitive business plan. In this practical guide, you'll learn leadership techniques the authors have developed building multiple high-performance data teams. In How to Lead in Data Science you'll master techniques for leading data science at every seniority level, from heading up a single project to overseeing a whole company's data strategy. You'll find advice on plotting your long-term career advancement, as well as quick wins you can put into practice right away. Throughout, carefully crafted assessments and interview scenarios encourage introspection, reveal personal blind spots, and show development areas to help advance your career. Leading a data science team takes more than the typical set of business management skills. You need specific know-how to articulate technology roadmaps, support a data-driven culture, and plan a data strategy that drives a competitive business plan. Whether you're looking to manage your team better or work towards a seat at your company's top leadership table, this book will show you how.
This is a book about how ecologists can integrate remote sensing and GIS in their research. It will allow readers to get started with the application of remote sensing and to understand its potential and limitations. Using practical examples, the book covers all necessary steps from planning field campaigns to deriving ecologically relevant information through remote sensing and modelling of species distributions. An Introduction to Spatial Data Analysis introduces spatial data handling using the open source software Quantum GIS (QGIS). In addition, readers will be guided through their first steps in the R programming language. The authors explain the fundamentals of spatial data handling and analysis, empowering the reader to turn data acquired in the field into actual spatial data. Readers will learn to process and analyse spatial data of different types and interpret the data and results. After finishing this book, readers will be able to address questions such as "What is the distance to the border of the protected area?", "Which points are located close to a road?", "Which fraction of land cover types exist in my study area?" using different software and techniques. This book is for novice spatial data users and does not assume any prior knowledge of spatial data itself or practical experience working with such data sets. Readers will likely include student and professional ecologists, geographers and any environmental scientists or practitioners who need to collect, visualize and analyse spatial data. The software used is the widely applied open source scientific programs QGIS and R. All scripts and data sets used in the book will be provided online at book.ecosens.org. 
This book covers specific methods, including:
* what to consider before collecting in situ data
* how to work with spatial data collected in situ
* the difference between raster and vector data
* how to acquire further vector and raster data
* how to create relevant environmental information
* how to combine and analyse in situ and remote sensing data
* how to create useful maps for field work and presentations
* how to use QGIS and R for spatial analysis
* how to develop analysis scripts
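As a hint of the kind of spatial computation involved in questions like "what is the distance to the border of the protected area?", a great-circle distance can be sketched in a few lines (plain Python for illustration; the book itself works in QGIS and R):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points,
    using the haversine formula on a spherical Earth."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator is roughly 111 km.
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```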
Anomaly detection is the detective work of machine learning: finding the unusual, catching the fraud, discovering strange activity in large and complex datasets. But, unlike Sherlock Holmes, you may not know what the puzzle is, much less what "suspects" you're looking for. This O'Reilly report uses practical examples to explain how the underlying concepts of anomaly detection work. From banking security to natural sciences, medicine, and marketing, anomaly detection has many useful applications in this age of big data. And the search for anomalies will intensify once the Internet of Things spawns even more new types of data. The concepts described in this report will help you tackle anomaly detection in your own project.
* Use probabilistic models to predict what's normal and contrast that to what you observe
* Set an adaptive threshold to determine which data falls outside of the normal range, using the t-digest algorithm
* Establish normal fluctuations in complex systems and signals (such as an EKG) with a more adaptive probabilistic model
* Use historical data to discover anomalies in sporadic event streams, such as web traffic
* Learn how to use deviations in expected behavior to trigger fraud alerts
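The thresholding idea can be sketched with a simplified Gaussian stand-in for the report's adaptive, quantile-based (t-digest) approach -- illustrative only; the function name and data are invented:

```python
import statistics

def find_anomalies(history, observed, k=3.0):
    """Flag observed values more than k standard deviations from the
    historical mean. A deliberately simplified probabilistic model of
    'normal'; the t-digest method adapts its threshold from quantiles
    instead of assuming a Gaussian."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return [x for x in observed if abs(x - mu) > k * sigma]

# Stable historical readings around 10.0; one new value is wildly off.
history = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1]
anomalies = find_anomalies(history, [10.0, 9.9, 42.0])
```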
Renowned DAX experts Alberto Ferrari and Marco Russo teach you how to design data models for maximum efficiency and effectiveness. How can you use Excel and Power BI to gain real insights into your information? As you examine your data, how do you write a formula that provides the numbers you need? The answers to both of these questions lie with the data model. This book introduces the basic techniques for shaping data models in Excel and Power BI. It's meant for readers who are new to data modeling as well as for experienced data modelers looking for tips from the experts. If you want to use Power BI or Excel to analyze data, the many real-world examples in this book will help you look at your reports in a different way -- like experienced data modelers do. As you'll soon see, with the right data model, the correct answer is always a simple one! By reading this book, you will:
* Gain an understanding of the basics of data modeling, including tables, relationships, and keys
* Familiarize yourself with star schemas, snowflakes, and common modeling techniques
* Learn the importance of granularity
* Discover how to use multiple fact tables, like sales and purchases, in a complex data model
* Manage calendar-related calculations by using date tables
* Track historical attributes, like previous addresses of customers or manager assignments
* Use snapshots to compute quantity on hand
* Work with multiple currencies in the most efficient way
* Analyze events that have durations, including overlapping durations
* Learn what data model you need to answer your specific business questions
About This Book
* For Excel and Power BI users who want to exploit the full power of their favorite tools
* For BI professionals seeking new ideas for modeling data
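The star-schema idea (a fact table aggregated through a dimension at a chosen granularity) can be sketched in plain Python -- illustrative only, with invented tables; the book itself works in Excel and Power BI:

```python
# A toy star schema: a Sales fact table whose rows reference a Date
# dimension by surrogate key.
date_dim = {
    1: {"date": "2024-01-15", "month": "2024-01"},
    2: {"date": "2024-01-20", "month": "2024-01"},
    3: {"date": "2024-02-03", "month": "2024-02"},
}
sales_fact = [
    {"date_key": 1, "amount": 100.0},
    {"date_key": 2, "amount": 250.0},
    {"date_key": 3, "amount": 75.0},
]

def sales_by_month(fact, dim):
    """Roll the fact table up to monthly granularity by following
    each row's key into the date dimension."""
    totals = {}
    for row in fact:
        month = dim[row["date_key"]]["month"]
        totals[month] = totals.get(month, 0.0) + row["amount"]
    return totals

monthly = sales_by_month(sales_fact, date_dim)
```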
Images play a crucial role in shaping and reflecting political life. Digitization has vastly increased the presence of such images in daily life, creating valuable new research opportunities for social scientists. We show how recent innovations in computer vision methods can substantially lower the costs of using images as data. We introduce readers to the deep learning algorithms commonly used for object recognition, facial recognition, and visual sentiment analysis. We then provide guidance and specific instructions for scholars interested in using these methods in their own research.
The term "smart city" defines the new urban environment, one that is designed for performance through information and communication technologies. Given that the majority of people across the world will live in urban environments within the next few decades, it's not surprising that massive effort and investment is being placed into efforts to develop strategies and plans for achieving "smart" urban growth. Building Smart Cities: Analytics, ICT, and Design Thinking explains the technology and a methodology known as design thinking for building smart cities. Information and communications technologies form the backbone of smart cities. A comprehensive and robust data analytics program enables the right choices to be made in building these cities. Design thinking helps to create smart cities that are both livable and able to evolve. This book examines all of these components in the context of smart city development and shows how to use them in an integrated manner. Using the principles of design thinking to reframe the problems of the smart city and capture the real needs of people living in a highly efficient urban environment, the book helps city planners and technologists through the following:
* Presentation of the relevant technologies required for coordinated, efficient cities
* Exploration of the latent needs of community stakeholders in a culturally appropriate context
* Discussion of the tested approaches to ideation, design, prototyping, and building or retrofitting smart cities
* Proposal of a model for a viable smart city project
The smart city vision that we can create an optimized society through technology is hypothetical at best and reflects the failed repetition through the ages of equating scientific progress with positive social change. Up until now, despite our best hopes and efforts, technology has yet to bring an end to scarcity or suffering.
Technical innovation, instead, can and should be directed in the service of our shared cultural values, especially within the rapidly growing urban milieu. In Building Smart Cities: Analytics, ICT, and Design Thinking, the author discusses the need to focus on creating human-centered approaches to our cities that integrate our human needs and technology to meet our economic, environmental, and existential needs. The book shows how this approach can lead to innovative, livable urban environments that are realizable, practical, and economically and environmentally sustainable.
Unique reference book covering the entire field of accounting information systems. Contributions from an international range of accounting and information systems experts. Includes coverage of contemporary themes such as big data, data security, cloud computing, IoT and blockchain.
What happens when a researcher and a practitioner spend hours crammed in a Fiat discussing data visualization? Beyond creating beautiful charts, they found greater richness in the craft as an integrated whole. Drawing from their unconventional backgrounds, these two women take readers on a journey through perception, semantics, and intent as the triad that influences visualization. This visually engaging book blends ideas from theory, academia, and practice to craft beautiful yet meaningful visualizations and dashboards. How do you take your visualization skills to the next level? The book is perfect for analysts, research and data scientists, journalists, and business professionals. Functional Aesthetics for Data Visualization is also an indispensable resource for just about anyone curious about seeing and understanding data. Think of it as a coffee-table book for the data geek in you. https://www.functionalaestheticsbook.com
CCTV for Wildlife Monitoring is a handbook on the use of CCTV in nature watching, conservation and ecological research. CCTV offers a unique ability to monitor wildlife in real time, stream video to the web, capture imagery of fast-moving species or cold animals such as wet otters or fish and maintain monitoring over long periods of time in a diverse array of habitats. Wildlife watchers can take advantage of a huge range of CCTV cameras, recording devices and accessories developed for use in non-wildlife applications. CCTV allows intimate study of animal behaviour not possible with other technologies. With expert experience in engineering, photography and wildlife, Susan Young describes CCTV equipment and techniques, giving readers the confidence to tackle what initially may seem technically challenging. The book enables the reader to navigate the technical aspects of recording: basic analogue, high definition HD-TVI and IP cameras, portable CCTV, digital video recorders (DVR) and video processing by focusing on practical applications. No prior knowledge of CCTV is required - step-by-step information is provided to get anyone started recording wildlife. In-depth methods for recording foxes, badger, deer, otters, small mammals and fish are also included, and the book makes comparisons with trail cameras where appropriate. Examples of recorded footage illustrate the book along with detailed diagrams on camera set-ups and links to accompanying videos on YouTube. Case-studies show real projects, both the equipment used and the results. This book will be of interest to amateur naturalists wishing to have a window into the private world of wildlife, ecological consultants monitoring protected species and research scientists studying animal behaviour.
Discover how graph databases can help you manage and query highly connected data. With this practical book, you'll learn how to design and implement a graph database that brings the power of graphs to bear on a broad range of problem domains. Whether you want to speed up your response to user queries or build a database that can adapt as your business evolves, this book shows you how to apply the schema-free graph model to real-world problems. This second edition includes new code samples and diagrams, using the latest Neo4j syntax, as well as information on new functionality. Learn how different organizations are using graph databases to outperform their competitors. With this book's data modeling, query, and code examples, you'll quickly be able to implement your own solution.
* Model data with the Cypher query language and property graph model
* Learn best practices and common pitfalls when modeling with graphs
* Plan and implement a graph database solution in test-driven fashion
* Explore real-world examples to learn how and why organizations use a graph database
* Understand common patterns and components of graph database architecture
* Use analytical techniques and algorithms to mine graph database information
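The property graph model itself can be sketched in a few lines of plain Python (a hand-rolled illustration with invented data, not Neo4j or Cypher):

```python
# A minimal in-memory property graph: nodes carry properties, and
# relationships are (source, type, target) triples.
nodes = {
    "alice": {"label": "Person", "name": "Alice"},
    "bob": {"label": "Person", "name": "Bob"},
    "neo4j": {"label": "Database", "name": "Neo4j"},
}
edges = [
    ("alice", "KNOWS", "bob"),
    ("alice", "USES", "neo4j"),
    ("bob", "USES", "neo4j"),
]

def match_incoming(rel, target):
    """Roughly what a Cypher pattern like
    MATCH (p)-[:REL]->(t) WHERE id(t) = target RETURN p.name
    expresses: find the names of nodes with an edge of this type
    pointing at the target."""
    return [nodes[src]["name"] for src, r, dst in edges
            if r == rel and dst == target]

users = match_incoming("USES", "neo4j")
```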
If you're a business team leader, CIO, business analyst, or developer interested in how Apache Hadoop and Apache HBase-related technologies can address problems involving large-scale data in cost-effective ways, this book is for you. Using real-world stories and situations, authors Ted Dunning and Ellen Friedman show Hadoop newcomers and seasoned users alike how NoSQL databases and Hadoop can solve a variety of business and research issues. You'll learn about early decisions and pre-planning that can make the process easier and more productive. If you're already using these technologies, you'll discover ways to gain the full range of benefits possible with Hadoop. While you don't need a deep technical background to get started, this book does provide expert guidance to help managers, architects, and practitioners succeed with their Hadoop projects.
* Examine a day in the life of big data: India's ambitious Aadhaar project
* Review tools in the Hadoop ecosystem such as Apache's Spark, Storm, and Drill to learn how they can help you
* Pick up a collection of technical and strategic tips that have helped others succeed with Hadoop
* Learn from several prototypical Hadoop use cases, based on how organizations have actually applied the technology
You can explore real-world stories that reveal how MapR customers combine use cases when putting Hadoop and NoSQL to work, including in production.
Richly illustrated in color, Statistics and Data Analysis for Microarrays Using R and Bioconductor, Second Edition provides a clear and rigorous description of powerful analysis techniques and algorithms for mining and interpreting biological information. Omitting tedious details, heavy formalisms, and cryptic notations, the text takes a hands-on, example-based approach that teaches students the basics of R and microarray technology as well as how to choose and apply the proper data analysis tool to specific problems. New to the Second Edition: Completely updated and double the size of its predecessor, this timely second edition replaces the commercial software with the open source R and Bioconductor environments. Fourteen new chapters cover such topics as the basic mechanisms of the cell, reliability and reproducibility issues in DNA microarrays, basic statistics and linear models in R, experiment design, multiple comparisons, quality control, data pre-processing and normalization, Gene Ontology analysis, pathway analysis, and machine learning techniques. Methods are illustrated with toy examples and real data, and the R code for all routines is available on an accompanying downloadable resource. With all the necessary prerequisites included, this best-selling book guides students from very basic notions to advanced analysis techniques in R and Bioconductor. The first half of the text presents an overview of microarrays and the statistical elements that form the building blocks of any data analysis. The second half introduces the techniques most commonly used in the analysis of microarray data.
Manufacturing execution systems (MES) are the tool for making production processes transparent and for controlling workflows in real time against defined targets. This book aims to help companies introduce an MES in a goal-oriented way. It offers advice not only on conceptual design but also supports the internal "marketing" of the MES initiative with recommendations and cost-effectiveness considerations. It then provides guidance on writing a requirements specification and on tendering and vendor selection. Alongside tips covering everything from project kick-off to system go-live, topics such as staff training and support are addressed, and the book shows how external MES consultants can assist the roll-out. Two case studies illustrate how implementations played out in practice and what benefits the MES delivered. To make better use of the system, organizational measures are also described, such as involving employees through target agreements and bonus pay, which newer wage frameworks such as the German ERA (Entgeltrahmenabkommen) also provide for. A chapter of checklists, reading suggestions, and web links rounds off the book.
At the intersection of computer science and healthcare, data analytics has emerged as a promising tool for solving problems across many healthcare-related disciplines. Supplying a comprehensive overview of recent healthcare analytics research, Healthcare Data Analytics provides a clear understanding of the analytical techniques currently available to solve healthcare problems. The book details novel techniques for acquiring, handling, retrieving, and making best use of healthcare data. It analyzes recent developments in healthcare computing and discusses emerging technologies that can help improve the health and well-being of patients. Written by prominent researchers and experts working in the healthcare domain, the book sheds light on many of the computational challenges in the field of medical informatics. Each chapter in the book is structured as a "survey-style" article discussing the prominent research issues and the advances made on that research topic. The book is divided into three major categories:
* Healthcare Data Sources and Basic Analytics - details the various healthcare data sources and analytical techniques used in the processing and analysis of such data
* Advanced Data Analytics for Healthcare - covers advanced analytical methods, including clinical prediction models, temporal pattern mining methods, and visual analytics
* Applications and Practical Systems for Healthcare - covers the applications of data analytics to pervasive healthcare, fraud detection, and drug discovery, along with systems for medical imaging and decision support
Computer scientists are usually not trained in domain-specific medical concepts, whereas medical practitioners and researchers have limited exposure to the data analytics area. The contents of this book will help to bring together these diverse communities by carefully and comprehensively discussing the most relevant contributions from each domain.
The St. Gallen model for process-centered customer relationship management is based on practical experience documented in eight case studies of leading companies: holistic customer-retention marketing at Direkt Anlage Bank; the contact center at Swisscom; campaign and customer management at cooperative banks; customer-centric processes and systems at Credit Suisse, LGT Bank in Liechtenstein, and the Neue Zürcher Zeitung; and the management of project and customer knowledge at SAP. The overall model describes customer, channel, process, and knowledge management as the essential instruments for a radical alignment with customer processes. An overview of the eighteen most important implementation methods from the literature, consulting firms, and system vendors supports successful project execution.
PRAISE FOR THE ANALYTICS LIFECYCLE TOOLKIT
"Full of wisdom and experience about analytics, this book's greatest strength is its lifecycle approach. From framing the question to getting results, you'll learn how analytics can really have an impact on organizations." Thomas H. Davenport, Ph.D., Author of Competing on Analytics and Only Humans Need Apply
"This book condenses a lot of deep thinking on the wide field of analytics strategy. Analytics is not easy: there are no quickie AI/BI/ML shortcuts to understanding your data, your business, or your processes. You have to build a diverse team of talent. You have to respect the hazards of 'fishing expeditions' that may need false-discovery-rate adjustments. You should consider designed experiments to get the true behavior of a process, something that observational data may hint at, but not provide complete understanding. There are dimensions of data wrangling, feature engineering, and data sense-making that all call for different skills. But with deep investment in analytics comes deep insight into processes and tremendous opportunity for improvements. This book puts analytics in the context of a strategic business system, with all its dimensions." John Sall, Ph.D., SAS co-founder and chief architect of JMP
"The Analytics Lifecycle Toolkit provides a clear prescription for organizations aiming to develop a high-performing and scalable analytics capability. Greg organizes and develops with unusual clarity some of the critical nontechnical aspects of the analytics value-chain, and links them with the technical as building blocks in a comprehensive practice. Studying this map of how to negotiate the challenges to effectiveness and efficiency in analytics could save organizations months, or even years of painful trial and error on the road to proficiency." Scott Radcliffe, Executive Director, Data Analytics at Cox Communications
"Many books exist that answer the question 'what is the right tool to solve a problem?' This is one of the few books I've read that answers the much more difficult question 'how do we make analytics become transformative throughout our organization?' Incorporating elements of data science, design thinking, and organizational theory, this book is a valuable resource for executives looking to build analytics into their organizational DNA, data scientists looking to expand their organizational reach, and analytics programs that teach students not just how to do data science, but how to use data science to effect tangible change." Jeremy Petranka, Ph.D., Assistant Dean Master of Quantitative Management at Duke University's Fuqua School of Business
"This book is the 'thinking person's guide to analytics.' Greg has gone deep on some topics and provided considerable references across the analytics lifecycle. This is one of the best books on analytics I have read...and I think I have read them all!" Bob Gladden, Vice President, Enterprise Analytics, Highmark Health
Can computers do everything? If they could, this book would not exist. With compelling logic it proves that even the largest, fastest, most intelligent, and most expensive computers in the world have only limited capabilities. No matter how much money, time, and know-how we invest, there are computing problems that will never be solved. An unsettling, provocative message -- and yet, didn't we really know this all along, without ever wanting to believe it? The well-known computer scientist David Harel conveys the mathematical facts in a gripping, entertaining, and generally accessible way. The limits of the computer lead us to the limits of all knowledge: limits that spur people on to keep improving what is possible, and even to draw benefit from the impossible. A brilliant tour de force with surprising aspects that grips the reader, whether informed layperson or expert, from the first page to the last.