This book presents the statistical analysis of compositional data sets, i.e., data in percentages, proportions, concentrations, etc. The subject is covered from its grounding principles to practical use in descriptive exploratory analysis, robust linear models, and advanced multivariate statistical methods, including the treatment of zeros and missing values, and paying special attention to data visualization and model display issues. Many illustrated examples and code chunks guide the reader through modeling and interpretation. Although the book primarily serves as a reference guide for the R package "compositions," it is also a general introductory text on Compositional Data Analysis. Awareness of the special characteristics of compositional data spread through the geosciences in the early 1960s, but a strategy for dealing with them properly was not available until the work of Aitchison in the 1980s. Since then, research has expanded our understanding of their theoretical principles and the potentials and limitations of their interpretation. This is the first comprehensive textbook addressing these issues, as well as their practical implications with regard to software. The book is intended for scientists interested in statistically analyzing their compositional data. The subject enjoys relatively broad awareness in the geosciences and environmental sciences, but the spectrum of recent applications also covers areas such as medicine, official statistics, and economics. Readers should be familiar with basic univariate and multivariate statistics. Knowledge of R is recommended but not required, as the book is self-contained.
This is the fourth edition of the training manual for the Data Modeling Master Class that Steve Hoberman teaches onsite and through public classes. The text can be purchased prior to attending the Master Class; the latest course schedule and detailed description can be found on Steve Hoberman's website, stevehoberman.com. The Master Class is a complete course on requirements elicitation and data modeling, containing three days of practical techniques for producing solid relational and dimensional data models. After learning the styles and steps in capturing and modeling requirements, you will apply a best-practices approach to building and validating data models through the Data Model Scorecard®. You will know not just how to build a data model, but also how to build a data model well. Two case studies and many exercises reinforce the material and enable you to apply these techniques in your current projects. By the end of the course, you will know how to: explain data modeling building blocks and identify these constructs by following a question-driven approach to ensure model precision; read a data model of any size and complexity with the same confidence as reading a book; validate any data model with key "settings" (scope, abstraction, timeframe, function, and format) as well as through the Data Model Scorecard; apply requirements elicitation techniques, including interviewing and prototyping; build relational and dimensional conceptual, logical, and physical data models through two case studies; find structural soundness issues and standards violations; recognize situations where abstraction would be most valuable and situations where it would be most dangerous; use a series of templates for capturing and validating requirements and for data profiling; write clear, complete, and correct definitions; and leverage the Grain Matrix, enterprise data model, and available industry data models for a successful enterprise architecture.
Achieve best-in-class metrics and get more from your data with JMP. JMP Connections is the small- and medium-sized business owner's guide to exceeding customer expectations by getting more out of your data using JMP. Uniquely bifunctional, this book is divided into two parts: the first half shows you what JMP can do for you. You'll discover how to wring every last drop of insight out of your data, and let JMP parse reams of raw numbers into actionable insight that leads to better strategic decisions. You'll also discover why it works so well; clear explanations break down the Connectivity platform and metrics in business terms to demystify data analysis and JMP while giving you a macro view of the benefits that come from optimal implementation. The second half of the book is for your technical team, demonstrating how to implement specific solutions relating to data set development and data virtualization. In the end, your organization reduces full-time equivalents while increasing productivity and competitiveness. JMP is a powerful tool for business, but many organizations aren't even scratching the surface of what their data can do for them. This book provides the information and technical guidance your business needs to achieve more: learn what a JMP Connectivity Platform can do for your business; understand Metrics-on-Demand, Real-Time Metrics, and their implementation; delve into technical implementation with information on configuration and management, version control, data visualization, and more; and make better business decisions by getting more and better information from your data. Business leadership relies on good information to make good business decisions, but what if you could increase the quality of the information you receive, while getting more of what you want to know and less of what you don't need to know? How would that affect strategy, operations, customer experience, and other critical areas? JMP can help with that, and JMP Connections provides real, actionable guidance on getting more out of JMP.
If you're a business team leader, CIO, business analyst, or developer interested in how Apache Hadoop and Apache HBase-related technologies can address problems involving large-scale data in cost-effective ways, this book is for you. Using real-world stories and situations, authors Ted Dunning and Ellen Friedman show Hadoop newcomers and seasoned users alike how NoSQL databases and Hadoop can solve a variety of business and research issues. You'll learn about early decisions and pre-planning that can make the process easier and more productive. If you're already using these technologies, you'll discover ways to gain the full range of benefits possible with Hadoop. While you don't need a deep technical background to get started, this book does provide expert guidance to help managers, architects, and practitioners succeed with their Hadoop projects. Examine a day in the life of big data: India's ambitious Aadhaar project; review tools in the Hadoop ecosystem such as Apache Spark, Storm, and Drill to learn how they can help you; pick up a collection of technical and strategic tips that have helped others succeed with Hadoop; and learn from several prototypical Hadoop use cases, based on how organizations have actually applied the technology. You can explore real-world stories that reveal how MapR customers combine use cases when putting Hadoop and NoSQL to work, including in production.
Richly illustrated in color, Statistics and Data Analysis for Microarrays Using R and Bioconductor, Second Edition provides a clear and rigorous description of powerful analysis techniques and algorithms for mining and interpreting biological information. Omitting tedious details, heavy formalisms, and cryptic notations, the text takes a hands-on, example-based approach that teaches students the basics of R and microarray technology as well as how to choose and apply the proper data analysis tool to specific problems. New to the Second Edition: Completely updated and double the size of its predecessor, this timely second edition replaces the commercial software with the open source R and Bioconductor environments. Fourteen new chapters cover such topics as the basic mechanisms of the cell, reliability and reproducibility issues in DNA microarrays, basic statistics and linear models in R, experiment design, multiple comparisons, quality control, data pre-processing and normalization, Gene Ontology analysis, pathway analysis, and machine learning techniques. Methods are illustrated with toy examples and real data, and the R code for all routines is available on an accompanying downloadable resource. With all the necessary prerequisites included, this best-selling book guides students from very basic notions to advanced analysis techniques in R and Bioconductor. The first half of the text presents an overview of microarrays and the statistical elements that form the building blocks of any data analysis. The second half introduces the techniques most commonly used in the analysis of microarray data.
SQL is full of difficulties and traps for the unwary. You can avoid them if you understand relational theory, but only if you know how to put that theory into practice. In this book, Chris Date explains relational theory in depth, and demonstrates through numerous examples and exercises how you can apply it to your use of SQL. This third edition has been revised, extended, and improved throughout. Topics whose treatment has been expanded include data types and domains, table comparisons, image relations, aggregate operators and summarization, view updating, and subqueries. A special feature of this edition is a new appendix on NoSQL and relational theory. Could you write an SQL query to find employees who have worked at least once in every programming department in the company? And be sure it's correct? Why is proper column naming so important? Nulls in the database cause wrong answers. Why? What can you do about it? How can image relations help you formulate complex SQL queries? SQL supports "quantified comparisons," but they're better avoided. Why? And how? Database theory and practice have evolved considerably since Codd first defined the relational model, back in 1969. This book draws on decades of experience to present the most up-to-date treatment of the material available anywhere. Anyone with a modest to advanced background in SQL can benefit from the insights it contains. The book is product independent.
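To make the first of those challenges concrete, here is a minimal sketch (not taken from the book, and with an invented schema) of the "employees who have worked at least once in every programming department" query, written as a doubly nested NOT EXISTS and run through Python's built-in sqlite3 module:

```python
# Hypothetical schema and data, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE assignment (emp_name TEXT, dept_id INTEGER);

INSERT INTO dept VALUES (1, 'Compilers', 'programming'),
                        (2, 'Databases', 'programming'),
                        (3, 'Payroll',   'admin');
INSERT INTO assignment VALUES ('Ada', 1), ('Ada', 2),
                              ('Bob', 1),
                              ('Eve', 2), ('Eve', 3);
""")

# Relational division via double NOT EXISTS: keep an employee only if there is
# no programming department in which that employee has never worked.
rows = conn.execute("""
SELECT DISTINCT a.emp_name
FROM assignment AS a
WHERE NOT EXISTS (
    SELECT 1 FROM dept AS d
    WHERE d.kind = 'programming'
      AND NOT EXISTS (
          SELECT 1 FROM assignment AS a2
          WHERE a2.emp_name = a.emp_name
            AND a2.dept_id  = d.dept_id))
""").fetchall()

print(rows)  # [('Ada',)]: only Ada has worked in both programming departments
```

The inner NOT EXISTS looks for a programming department the employee has missed; an employee qualifies only when no such department can be found, which is one standard SQL rendering of relational division, the kind of formulation the book teaches readers to reason about from relational theory.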
Manufacturing Execution Systems (MES) are the tool with which production processes are made transparent and with which operations can be controlled in real time against defined targets. This book is intended to help companies introduce an MES in a goal-oriented way. It offers not only advice on the conceptual design, but also support for the "internal marketing" of the MES initiative in the form of recommendations and cost-benefit considerations. It then provides guidance on writing a requirements specification as well as on tendering and vendor selection. In addition to tips covering the period from project start to go-live, topics such as staff training and support are addressed. The book also shows how the introduction process can be supported by external MES consultants. Two case studies illustrate how implementations proceeded in practice and what benefits the MES delivered. To improve the use of the system, organizational measures are described, such as involving employees through target agreements and bonus-based pay, which also take account of new pay-scale models such as ERA (Entgeltrahmenabkommen, the German framework pay agreement). A chapter with checklists, reading recommendations, and web links concludes the book.
This book gathers a collection of high-quality peer-reviewed research papers presented at the International Conference on Big Data, IoT and Machine Learning (BIM 2021), held in Cox's Bazar, Bangladesh, during 23-25 September 2021. The book covers research papers in the field of big data, IoT and machine learning. The book will be helpful for active researchers and practitioners in the field.
This book includes original unpublished contributions presented at the International Conference on Data Analytics and Management (ICDAM 2021), held at Jan Wyzykowski University, Poland, in June 2021. The book covers topics in data analytics, data management, big data, computational intelligence, and communication networks. It presents innovative work by leading academics, researchers, and experts from industry, which is useful for young researchers and students.
At the intersection of computer science and healthcare, data analytics has emerged as a promising tool for solving problems across many healthcare-related disciplines. Supplying a comprehensive overview of recent healthcare analytics research, Healthcare Data Analytics provides a clear understanding of the analytical techniques currently available to solve healthcare problems. The book details novel techniques for acquiring, handling, retrieving, and making best use of healthcare data. It analyzes recent developments in healthcare computing and discusses emerging technologies that can help improve the health and well-being of patients. Written by prominent researchers and experts working in the healthcare domain, the book sheds light on many of the computational challenges in the field of medical informatics. Each chapter in the book is structured as a "survey-style" article discussing the prominent research issues and the advances made on that research topic. The book is divided into three major categories: Healthcare Data Sources and Basic Analytics, which details the various healthcare data sources and analytical techniques used in the processing and analysis of such data; Advanced Data Analytics for Healthcare, which covers advanced analytical methods, including clinical prediction models, temporal pattern mining methods, and visual analytics; and Applications and Practical Systems for Healthcare, which covers the applications of data analytics to pervasive healthcare, fraud detection, and drug discovery, along with systems for medical imaging and decision support. Computer scientists are usually not trained in domain-specific medical concepts, whereas medical practitioners and researchers have limited exposure to the data analytics area. The contents of this book will help to bring together these diverse communities by carefully and comprehensively discussing the most relevant contributions from each domain.
It is not lost on commercial organisations that where we live colours how we view ourselves and others. That is why so many now place us into social groups on the basis of the type of postcode in which we live. Social scientists call this practice "commercial sociology". Richard Webber originated Acorn and Mosaic, the two most successful geodemographic classifications. Roger Burrows is a critical interdisciplinary social scientist. Together they chart the origins of this practice and explain the challenges it poses to long-established social scientific beliefs such as: the role of the questionnaire in an era of "big data"; the primacy of theory; the relationship between qualitative and quantitative modes of understanding; and the relevance of visual clues to lay understanding. To help readers evaluate the validity of this form of classification, the book assesses how well geodemographic categories track the emergence of new types of residential neighbourhood and subjects a number of key contemporary issues to geodemographic modes of analysis.
This is a book about how ecologists can integrate remote sensing and GIS in their research. It will allow readers to get started with the application of remote sensing and to understand its potential and limitations. Using practical examples, the book covers all necessary steps from planning field campaigns to deriving ecologically relevant information through remote sensing and modelling of species distributions. An Introduction to Spatial Data Analysis introduces spatial data handling using the open source software Quantum GIS (QGIS). In addition, readers will be guided through their first steps in the R programming language. The authors explain the fundamentals of spatial data handling and analysis, empowering the reader to turn data acquired in the field into actual spatial data. Readers will learn to process and analyse spatial data of different types and interpret the data and results. After finishing this book, readers will be able to address questions such as "What is the distance to the border of the protected area?", "Which points are located close to a road?", and "Which fraction of land cover types exist in my study area?" using different software and techniques. This book is for novice spatial data users and does not assume any prior knowledge of spatial data itself or practical experience working with such data sets. Readers will likely include students and professional ecologists, geographers, and any environmental scientists or practitioners who need to collect, visualize, and analyse spatial data. The software used comprises the widely applied open source scientific programs QGIS and R. All scripts and data sets used in the book will be provided online at book.ecosens.org. This book covers specific methods including: what to consider before collecting in situ data; how to work with spatial data collected in situ; the difference between raster and vector data; how to acquire further vector and raster data; how to create relevant environmental information; how to combine and analyse in situ and remote sensing data; how to create useful maps for field work and presentations; how to use QGIS and R for spatial analysis; and how to develop analysis scripts.
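To make one of those questions concrete, here is a minimal sketch, written in Python with the shapely library rather than in the book's QGIS/R workflow, of answering "What is the distance to the border of the protected area?"; the polygon, point coordinates, and names are hypothetical and assume a projected coordinate system measured in metres:

```python
# Hypothetical protected area and sample plots, in projected coordinates (metres).
from shapely.geometry import Point, Polygon

protected_area = Polygon([(0, 0), (1000, 0), (1000, 800), (0, 800)])
plots = {"plot_A": Point(200, 300), "plot_B": Point(1500, 400)}

for name, pt in plots.items():
    dist = pt.distance(protected_area.exterior)   # distance to the area's border
    inside = protected_area.contains(pt)          # is the plot inside the area?
    print(f"{name}: {dist:.0f} m from the border, inside={inside}")
```

The same vector-data logic carries over to the R and QGIS tools the book actually teaches; only the syntax differs.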
PRAISE FOR THE ANALYTICS LIFECYCLE TOOLKIT "Full of wisdom and experience about analytics, this book's greatest strength is its lifecycle approach. From framing the question to getting results, you'll learn how analytics can really have an impact on organizations." - Thomas H. Davenport, Ph.D., author of Competing on Analytics and Only Humans Need Apply "This book condenses a lot of deep thinking on the wide field of analytics strategy. Analytics is not easy; there are no quickie AI/BI/ML shortcuts to understanding your data, your business, or your processes. You have to build a diverse team of talent. You have to respect the hazards of 'fishing expeditions' that may need false-discovery-rate adjustments. You should consider designed experiments to get the true behavior of a process, something that observational data may hint at, but not provide complete understanding of. There are dimensions of data wrangling, feature engineering, and data sense-making that all call for different skills. But with deep investment in analytics comes deep insight into processes and tremendous opportunity for improvements. This book puts analytics in the context of a strategic business system, with all its dimensions." - John Sall, Ph.D., SAS co-founder and chief architect of JMP "The Analytics Lifecycle Toolkit provides a clear prescription for organizations aiming to develop a high-performing and scalable analytics capability. Greg organizes and develops with unusual clarity some of the critical nontechnical aspects of the analytics value chain, and links them with the technical as building blocks in a comprehensive practice. Studying this map of how to negotiate the challenges to effectiveness and efficiency in analytics could save organizations months, or even years, of painful trial and error on the road to proficiency." - Scott Radcliffe, Executive Director, Data Analytics at Cox Communications "Many books exist that answer the question 'What is the right tool to solve a problem?' This is one of the few books I've read that answers the much more difficult question 'How do we make analytics become transformative throughout our organization?' Incorporating elements of data science, design thinking, and organizational theory, this book is a valuable resource for executives looking to build analytics into their organizational DNA, data scientists looking to expand their organizational reach, and analytics programs that teach students not just how to do data science, but how to use data science to effect tangible change." - Jeremy Petranka, Ph.D., Assistant Dean, Master of Quantitative Management at Duke University's Fuqua School of Business "This book is the 'thinking person's guide to analytics.' Greg has gone deep on some topics and provided considerable references across the analytics lifecycle. This is one of the best books on analytics I have read...and I think I have read them all!" - Bob Gladden, Vice President, Enterprise Analytics, Highmark Health
This open access book presents the foundations of the Big Data research and innovation ecosystem and the associated enablers that facilitate delivering value from data for business and society. It provides insights into the key elements for research and innovation, technical architectures, business models, skills, and best practices to support the creation of data-driven solutions and organizations. The book is a compilation of selected high-quality chapters covering best practices, technologies, experiences, and practical recommendations on research and innovation for big data. The contributions are grouped into four parts: * Part I: Ecosystem Elements of Big Data Value focuses on establishing the big data value ecosystem using a holistic approach to make it attractive and valuable to all stakeholders. * Part II: Research and Innovation Elements of Big Data Value details the key technical and capability challenges to be addressed for delivering big data value. * Part III: Business, Policy, and Societal Elements of Big Data Value investigates the need to make more efficient use of big data and understanding that data is an asset that has significant potential for the economy and society. * Part IV: Emerging Elements of Big Data Value explores the critical elements to maximizing the future potential of big data value. Overall, readers are provided with insights which can support them in creating data-driven solutions, organizations, and productive data ecosystems. The material represents the results of a collective effort undertaken by the European data community as part of the Big Data Value Public-Private Partnership (PPP) between the European Commission and the Big Data Value Association (BDVA) to boost data-driven digital transformation.
The St. Gallen model for process-centered customer relationship management is based on practical experience documented in eight case studies of leading companies: holistic customer-retention marketing at Direkt Anlage Bank; the contact center at Swisscom; campaign and customer management at cooperative banks; customer-centered processes and systems at Credit Suisse, LGT Bank in Liechtenstein, and Neue Zürcher Zeitung; and the management of project and customer knowledge at SAP. The overall model describes customer, channel, process, and knowledge management as the essential instruments for a radical orientation toward customer processes. An overview of the eighteen most important implementation methods from the literature, consulting firms, and system vendors supports successful project execution.
This book highlights advanced applications of geospatial data analytics to address real-world issues in urban society. In a connected world, we are generating spatial data at unprecedented rates, which can be harnessed for insightful analytics that shape how we analyze past events and chart future directions. This book is an anthology of applications of spatial data, and of the analytics performed on them, for gaining insights that can be used for problem solving in an urban setting. Each chapter is contributed by spatially aware data scientists in the making, who present spatial perspectives drawn from spatial big data. The book will benefit mature researchers and students alike by discussing a variety of urban applications that demonstrate the use of machine learning algorithms on spatial big data for real-world problem solving.
With a growing ecosystem of tools and libraries available, and the flexibility to run on many platforms (web, desktop, and mobile), JavaScript is a terrific all-round environment for all data wrangling needs! Data Wrangling with JavaScript teaches readers core data munging techniques in JavaScript, along with many libraries and tools that will make their data tasks even easier. Key features: how to handle unusual data sets; cleaning and preparing raw data; and visualizing your results. Audience: written for developers with experience using JavaScript; no prior knowledge of data analytics is needed. About the author: Ashley Davis is a software developer, entrepreneur, writer, and stock trader. He is the creator of Data-Forge, a data transformation and analysis toolkit for JavaScript inspired by Pandas and Microsoft LINQ.
Educational Data Analytics (EDA) has been credited with significant benefits for enhancing on-demand personalized educational support of individual learners, as well as for reflective course (re)design aimed at more authentic teaching, learning, and assessment experiences integrated into real work-oriented tasks. This open access textbook is a tutorial for developing, practicing, and self-assessing core competences in educational data analytics for digital teaching and learning. It combines theoretical knowledge on core issues related to collecting, analyzing, interpreting, and using educational data, including ethics and privacy concerns, with practical activities. After each section, the textbook provides teaching materials and learning activities in the form of quizzes with multiple question types, related to the topic studied or the video(s) referenced. These activities reproduce real-life contexts through a suitable use-case scenario (storytelling), encouraging learners to link theory with practice, and self-assessed assignments enable learners to apply their attained knowledge and acquired competences on EDL. By studying this book, you will: know where to locate useful educational data in different sources and understand their limitations; know the basics for managing educational data to make them useful, understand relevant methods, and be able to use relevant tools; know the basics for organising, analysing, interpreting, and presenting learner-generated data within their learning context, understand relevant learning analytics methods, and be able to use relevant learning analytics tools; know the basics for analysing and interpreting educational data to facilitate educational decision making, including course and curricula design, understand relevant teaching analytics methods, and be able to use relevant teaching analytics tools; and understand issues related to educational data ethics and privacy. This book is intended for school leaders and teachers engaged in blended (using the flipped classroom model) and online (during the COVID-19 crisis and beyond) teaching and learning; e-learning professionals (such as instructional designers and e-tutors) of online and blended courses; instructional technologists; researchers; as well as undergraduate and postgraduate university students studying education, educational technology, and relevant fields.
Can computers do everything? If they could, this book would not exist. It proves, with compelling logic, that even the largest, fastest, most intelligent, and most expensive computers in the world have only limited capabilities. No matter how much money, time, and know-how we invest, there are computational problems that will never be solved. An unsettling, provocative message, and yet: didn't we really know this all along, without ever wanting to believe it? The well-known computer scientist David Harel conveys the mathematical facts in an exciting, entertaining, and generally accessible way. The limits of the computer lead us to the limits of all knowledge: limits that spur people on to keep improving what is possible and even to draw benefit from the impossible. A brilliant tour de force with surprising aspects that grips the reader, whether informed layperson or expert, from the first page to the last.
The different facets of the sharing economy offer numerous opportunities for businesses, particularly those that distinguish themselves through creative ideas and their ability to easily connect buyers and senders of goods and services via digital platforms. From the beginning of this economy's growth, advanced digital technologies have generated billions of bytes of data that constitute what we call Big Data. This book underlines the facilitating role of Big Data analytics, explaining why and how data analysis algorithms can be integrated operationally in order to extract value and to improve the practices of the sharing economy. It examines the reasons why these new techniques are necessary for businesses in this economy and proposes a series of useful applications that illustrate the use of data in the sharing ecosystem.
Data Science and Analytics explores solutions to problems in society, the environment, and industry. With the increase in the availability of data, analytics has become a major element in both the top line and the bottom line of any organization. This book explores perspectives on how big data and business analytics are increasingly essential for better decision making. This edited work presents applications of big data and business analytics by academics, researchers, industry experts, policy makers, and practitioners, helping the reader to understand how big data can be efficiently utilized in better managerial applications. Data Science and Analytics brings together researchers, engineers, and practitioners and encompasses a diverse range of topics across many fields. The book will provide unique insights to researchers, academics, and data scientists from a variety of disciplines interested in the analysis and application of big data analytics, as well as to data analysts, students, and scholars pursuing advanced study in big data.
There is an easier way to build Hadoop applications. With this hands-on book, you'll learn how to use Cascading, the open source abstraction framework for Hadoop that lets you easily create and manage powerful enterprise-grade data processing applications, without having to learn the intricacies of MapReduce. Working with sample apps based on Java and other JVM languages, you'll quickly learn Cascading's streamlined approach to data processing, data filtering, and workflow optimization. This book demonstrates how this framework can help your business extract meaningful information from large amounts of distributed data. You will: start working on Cascading example projects right away; model and analyze unstructured data in any format, from any source; build and test applications with familiar constructs and reusable components; work with the Scalding and Cascalog domain-specific languages; easily deploy applications to Hadoop, regardless of cluster location or data size; build workflows that integrate several big data frameworks and processes; explore common use cases for Cascading, including features and tools that support them; and examine a case study that uses a dataset from the Open Data Initiative.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
Recreating the beauty of nature with the computer has always fascinated computer graphics. This book describes methods for generating artificial plant models and their application in areas such as simulation, virtual reality, botany, landscape planning, and architecture. The models are combined into gardens, parks, and entire landscapes. The range of representations extends from deceptively realistic images to abstract depictions. Similar algorithms can be used to create, modify, and animate organic bodies. The accompanying programs (Windows) allow the reader to do this as well.