Among the various multi-level formulations of mathematical models in decision-making processes, this book focuses on the bi-level model, the most frequently used, which addresses the conflicts that exist in multi-level decision-making processes. From the perspective of bi-level structure and uncertainty, the book takes real-life problems as its background, focuses on so-called random-like uncertainty, and develops a general framework for random-like bi-level decision-making problems. The random-like uncertainty considered in this book includes random phenomena, random-overlapped random (Ra-Ra) phenomena and fuzzy-overlapped random (Ra-Fu) phenomena. Basic theory, models, algorithms and practical applications for the different types of random-like bi-level decision-making problems are also presented.
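For readers new to the area, the deterministic bi-level program underlying these models can be written as a nested optimization problem. The formulation below is the generic textbook form (with hypothetical upper-level functions F, G and lower-level functions f, g), not a model taken from the book; the random-like variants replace the coefficients with random, Ra-Ra or Ra-Fu parameters.

```latex
\begin{aligned}
\min_{x \in X}\quad & F\bigl(x,\, y^{*}(x)\bigr) \\
\text{s.t.}\quad & G\bigl(x,\, y^{*}(x)\bigr) \le 0, \\
& y^{*}(x) \in \arg\min_{y \in Y} \bigl\{\, f(x, y) : g(x, y) \le 0 \,\bigr\},
\end{aligned}
```

Here the upper-level decision maker (leader) chooses x while anticipating the lower-level decision maker's (follower's) optimal response y*(x).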
This book looks at ways of visualizing large datasets, whether large in the number of cases, the number of variables, or both. All ideas are illustrated with displays from analyses of real datasets, and the importance of interpreting displays effectively is emphasized. Graphics should be drawn to convey information, and the book includes many insightful examples. New approaches to graphics are needed to visualize the information in large datasets, and most of the innovations described in this book are developments of standard graphics. The book is accessible to readers with some experience of drawing statistical graphics.
This book gathers a selection of invited and contributed lectures from the European Conference on Numerical Mathematics and Advanced Applications (ENUMATH) held in Lausanne, Switzerland, August 26-30, 2013. It provides an overview of recent developments in numerical analysis, computational mathematics and applications from leading experts in the field. New results on finite element methods, multiscale methods, numerical linear algebra and discretization techniques for fluid mechanics and optics are presented. As such, the book offers a valuable resource for a wide range of readers looking for a state-of-the-art overview of advanced techniques, algorithms and results in numerical mathematics and scientific computing.
This book presents four mathematical essays which explore the foundations of mathematics and related topics ranging from philosophy and logic to modern computer mathematics. While connected to the historical evolution of these concepts, the essays place strong emphasis on developments still to come. The book originated in a 2002 symposium celebrating the work of Bruno Buchberger, Professor of Computer Mathematics at Johannes Kepler University, Linz, Austria, on the occasion of his 60th birthday. Among many other accomplishments, Professor Buchberger in 1985 was the founding editor of the Journal of Symbolic Computation; the founder of the Research Institute for Symbolic Computation (RISC) and its chairman from 1987-2000; the founder in 1990 of the Softwarepark Hagenberg, Austria, and since then its director. More than a decade in the making, Mathematics, Computer Science and Logic - A Never Ending Story includes essays by leading authorities, on such topics as mathematical foundations from the perspective of computer verification; a symbolic-computational philosophy and methodology for mathematics; the role of logic and algebra in software engineering; and new directions in the foundations of mathematics. These inspiring essays invite general, mathematically interested readers to share state-of-the-art ideas which advance the never ending story of mathematics, computer science and logic. Mathematics, Computer Science and Logic - A Never Ending Story is edited by Professor Peter Paule, Bruno Buchberger's successor as director of the Research Institute for Symbolic Computation.
Thirty years ago mathematical, as opposed to applied numerical, computation was difficult to perform and so relatively little used. Three threads changed that: the emergence of the personal computer; the development of fiber optics and the consequent rise of the modern internet; and the building of the three "M's": Maple, Mathematica and Matlab. We intend to persuade the reader that Mathematica and other similar tools are worth knowing, assuming only that one wishes to be a mathematician, a mathematics educator, a computer scientist, an engineer or scientist, or anyone else who wishes or needs to use mathematics better. We also hope to explain how to become an "experimental mathematician" while learning to be better at proving things. To accomplish this our material is divided into three main chapters followed by a postscript. These cover elementary number theory, calculus of one and several variables, introductory linear algebra, and visualization and interactive geometric computation.
This book offers a snapshot of the state-of-the-art in classification at the interface between statistics, computer science and application fields. The contributions span a broad spectrum, from theoretical developments to practical applications; they all share a strong computational component. The topics addressed are from the following fields: Statistics and Data Analysis; Machine Learning and Knowledge Discovery; Data Analysis in Marketing; Data Analysis in Finance and Economics; Data Analysis in Medicine and the Life Sciences; Data Analysis in the Social, Behavioural, and Health Care Sciences; Data Analysis in Interdisciplinary Domains; Classification and Subject Indexing in Library and Information Science. The book presents selected papers from the Second European Conference on Data Analysis, held at Jacobs University Bremen in July 2014. This conference unites diverse researchers in the pursuit of a common topic, creating truly unique synergies in the process.
This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.
This book shows SQL Server database administrators and developers how to develop powerful data analytics applications quickly. Organizations will be able to sift data and derive the business intelligence needed to drive business decisions and profit. The addition of R to SQL Server 2016 places a powerful analytical processor into an environment most developers are already comfortable with: Visual Studio. This book walks even the newest of users through the creation process of a powerful R-language tool set for use in analyzing and reporting on your data. As a SQL Server database administrator or developer, it is sometimes difficult to stay on the bleeding edge of technology. Microsoft's addition of R to SQL Server 2016 is sure to be a game-changer, and the language will certainly become an integral part of future releases. R is in fact widely used today in statistical and related applications, and its use is only growing. Beginning SQL Server R Services helps you jump on board this important trend by providing good examples with detailed explanations of the WHY and not just the HOW. It walks you through setup and installation of SQL Server R Services, explains the basics of working with R Tools for Visual Studio, and provides a road map to successfully creating custom R code.
What You Will Learn:
- Discover R's role in the SQL Server 2016 hierarchy
- Manage the components needed to run SQL Server R Services code
- Run R-language analytics and queries inside the database
- Create analytic solutions that run across multiple datasets
- Gain in-depth knowledge of the R language itself
- Implement custom SQL Server R Services solutions
Who This Book Is For: Database administrators and developers at any level, but specifically those who need to develop powerful data analytics applications quickly. Seasoned R developers will appreciate the book for its robust learning pattern, which uses visual aids in combination with clear explanations and scenarios. Beginning SQL Server R Services is the perfect "new hire" gift for new database developers in any organization.
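To give a feel for in-database analytics, here is a minimal sketch of the kind of R script body that SQL Server R Services runs through the sp_execute_external_script stored procedure, where the input query is exposed by default as a data frame called InputDataSet and the returned data frame is taken from OutputDataSet. The table and column names are hypothetical, and this is a generic illustration, not an example from the book.

```r
# Minimal sketch of an R script body for SQL Server R Services.
# In SQL Server, @input_data_1 might be: SELECT Amount, Region FROM dbo.Sales
# (dbo.Sales, Amount and Region are hypothetical names).
# When run outside SQL Server, fall back to a small simulated data frame.
if (!exists("InputDataSet")) {
  InputDataSet <- data.frame(Amount = c(120, 340, 210, 90),
                             Region = c("North", "South", "North", "South"))
}

# Mean amount per region.
summary_by_region <- aggregate(Amount ~ Region, data = InputDataSet, FUN = mean)

# The data frame assigned to OutputDataSet is returned to SQL Server
# as the procedure's result set.
OutputDataSet <- summary_by_region
print(OutputDataSet)
```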
"A well-written and -illustrated work, recommended for all college libraries. Lower-division undergraduates through faculty." Doing Statistics With SPSS is derived from the authors' many years of experience teaching undergraduates data handling using SPSS. It assumes no prior understanding beyond that of basic mathematical operations and is therefore suitable for anyone undertaking an introductory statistics course as part of a science based undergraduate programme. The text will: enable the reader to make informed choices about what statistical tests to employ; what assumptions are made in using a particular test; demonstrate how to execute the analysis using SPSS; and guide the reader in his//her interpretation of its output. Each chapter ends with an exercise and provides detailed instructions on how to run the analysis using SPSS release 10. Learning is further guided by pointing the reader to particular aspects of the SPSS output and by having the reader engage with specified items of information from the SPSS results.This text is more complete than the alternatives that usually fall into one of two camps. They either provide an explanation of the concepts but no instructions on how to execute the analysis with SPSS, or they are a manual which instructs the reader on how to drive the software but with minimal explanation of what it all means. This book offers the best elements of both in a style that is economical and accessible. Doing Statistics with SPSS will be essential reading for undergraduates in psychology and health-related disciplines, and likely to be of invaluable use to many other students in the social sciences taking a course in statistics.
Introduction to Real World Statistics provides students with the basic concepts and practices of applied statistics, including data management and preparation; an introduction to the concept of probability; data screening and descriptive statistics; various inferential analysis techniques; and a series of exercises that are designed to integrate core statistical concepts. The author's systematic approach, which assumes no prior knowledge of the subject, equips student practitioners with a fundamental understanding of applied statistics that can be deployed across a wide variety of disciplines and professions. Notable features include:
- short, digestible chapters that build and integrate statistical skills with real-world applications, demonstrating the flexible usage of statistics for evidence-based decision-making
- statistical procedures presented in a practical context with less emphasis on technical jargon
- early chapters that build a foundation before presenting statistical procedures
- SPSS step-by-step detailed instructions designed to reinforce student understanding
- real world exercises complete with answers
- chapter PowerPoints and test banks for instructors
This book presents recent results on the positivity and optimization of polynomials in non-commuting variables. Researchers in non-commutative algebraic geometry, control theory, systems engineering, optimization, quantum physics and information science will find the unified notation and mixture of algebraic geometry and mathematical programming useful. Theoretical results are matched with algorithmic considerations, and several examples, together with information on how to use the NCSOStools open source package to obtain the results, are provided. Results are presented on detecting eigenvalue and trace positivity of polynomials in non-commuting variables using the Newton chip method and the Newton cyclic chip method, on relaxations for constrained and unconstrained optimization problems, on semidefinite programming formulations of the relaxations and the finite convergence of the hierarchies of these relaxations, and on the practical efficiency of algorithms.
This textbook on computational statistics presents tools and concepts of univariate and multivariate statistical data analysis with a strong focus on applications and implementations in the statistical software R. It covers mathematical, statistical as well as programming problems in computational statistics and contains a wide variety of practical examples. In addition to the numerous R snippets presented in the text, all computer programs (quantlets) and data sets for the book are available on GitHub and referred to in the book. This enables the reader to fully reproduce, as well as modify and adjust, all examples to their needs. The book is intended for advanced undergraduate and first-year graduate students as well as for data analysts new to the job who would like a tour of the various statistical tools in a data analysis workshop. The experienced reader with a good knowledge of statistics and programming might skip some sections on univariate models and enjoy the various mathematical roots of multivariate techniques. The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book.
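As a flavour of the univariate-to-multivariate progression the book follows, the short base-R snippet below (a generic illustration, not one of the book's quantlets) computes descriptive statistics for one variable and then a principal component analysis of several.

```r
# Univariate descriptive statistics for one variable of the built-in iris data.
summary(iris$Sepal.Length)
sd(iris$Sepal.Length)

# A basic multivariate technique: principal component analysis of the four
# numeric measurements, scaled to unit variance before decomposition.
pca <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pca)   # proportion of variance explained by each component
head(pca$x)    # component scores for the first few observations
```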
Contingency tables arise in diverse fields, including the life sciences, education, the social and political sciences and, notably, market research and opinion surveys. Their analysis plays an essential role in gaining insight into the structures of the quantities under consideration and in supporting decision making. Combining both theory and applications, this book presents models and methods for the analysis of two- and multi-dimensional contingency tables. An excellent reference for advanced undergraduates, graduate students, and practitioners in statistics as well as the biosciences, social sciences, education, and economics, the work may also be used as a textbook for a course on categorical data analysis. Prerequisites include a basic background in statistical inference and knowledge of statistical software packages.
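For a concrete starting point, the base-R snippet below (a generic illustration using hypothetical data, not code from the book) tests independence in a small two-way contingency table and fits the corresponding log-linear independence model.

```r
# A hypothetical 2 x 3 contingency table: treatment group vs. response category.
tab <- matrix(c(30, 45, 25,
                20, 40, 40),
              nrow = 2, byrow = TRUE,
              dimnames = list(group    = c("A", "B"),
                              response = c("low", "medium", "high")))

# Pearson chi-squared test of independence between rows and columns.
chisq.test(tab)

# The log-linear independence model (row and column main effects only),
# fitted by iterative proportional fitting.
loglin(tab, margin = list(1, 2), fit = TRUE)
```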
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the `big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
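For readers unfamiliar with resampling, the sketch below shows the ordinary i.i.d. bootstrap used to approximate a prediction interval for a future observation on simulated data; this is only the classical setting that the Model-Free Bootstrap generalizes, and deliberately not the book's own procedure.

```r
set.seed(1)

# Simulated i.i.d. sample standing in for observed data.
y <- rnorm(100, mean = 5, sd = 2)

# Ordinary bootstrap: resample the data, then draw one "future" value
# from each resample to approximate the predictive distribution.
B <- 2000
future_draws <- replicate(B, sample(sample(y, replace = TRUE), 1))

# Rough 90% prediction interval for a new observation.
quantile(future_draws, c(0.05, 0.95))
```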
R for Business Analytics looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The use of Graphical User Interfaces (GUIs) is emphasized in this book to further flatten R's famously steep learning curve. The book aims to help you kick-start analytics, with chapters on data visualization and code examples on web analytics and social media analytics, clustering, regression models, text mining, data mining models and forecasting. It tries to expose the reader to a breadth of business analytics topics without burying the user in needless depth, and the included references and links allow the reader to pursue business analytics topics further. The book is aimed at business analysts with basic programming skills who want to use R for business analytics. Note that the scope of the book is neither statistical theory nor graduate-level research in statistics; rather, it is for business analytics practitioners. Business analytics (BA) refers to the field of exploration and investigation of data generated by businesses. Business Intelligence (BI) is the seamless dissemination of information through the organization, which primarily involves business metrics, both past and current, for decision support in businesses. Data Mining (DM) is the process of discovering new patterns from large data using algorithms and statistical methods. To differentiate between the three: BI is mostly current reports, BA is models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest-growing analytics platform in the world, and is established in both academia and corporations for its robustness, reliability and accuracy. The book follows Albert Einstein's famous remark about making things as simple as possible, but no simpler. It will dispel the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov was a better writer in spreading science than any textbook or journal author.
Visualizing the data is an essential part of any data analysis. Modern computing developments have led to big improvements in graphic capabilities, and there are many new possibilities for data displays. This book gives an overview of modern data visualization methods, both in theory and practice. It details modern graphical tools such as mosaic plots, parallel coordinate plots, and linked views. Coverage also examines graphical methodology for particular areas of statistics, for example Bayesian analysis, genomic data and cluster analysis, as well as software for graphics.
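Two of the displays covered, mosaic plots and parallel coordinate plots, can be tried immediately with base R and the MASS package; the snippet below is a generic illustration, not code supplied with the book.

```r
library(MASS)  # provides parcoord()

# Mosaic plot of the built-in three-way table of hair colour, eye colour and sex.
mosaicplot(HairEyeColor, main = "Hair and eye colour by sex", color = TRUE)

# Parallel coordinate plot of the four numeric iris measurements,
# with lines coloured by species.
parcoord(iris[, 1:4], col = as.integer(iris$Species))
```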
Beginning R, Second Edition is a hands-on book showing how to use the R language, write and save R scripts, read in data files, and write custom statistical functions as well as use built-in functions. This book shows the use of R in specific cases such as one-way ANOVA analysis, linear and logistic regression, data visualization, parallel processing, bootstrapping, and more. It takes a hands-on, example-based approach incorporating best practices with clear explanations of the statistics being done. It has been completely rewritten since the first edition to make use of the latest packages and features in R version 3. R is a powerful open-source language and programming environment for statistics and has become the de facto standard for doing, teaching, and learning computational statistics. R is both an object-oriented language and a functional language that is easy to learn, easy to use, and completely free. A large community of dedicated R users and programmers provides an excellent source of R code, functions, and data sets, with a constantly evolving ecosystem of packages providing new functionality for data analysis. R has also become popular in commercial use at companies such as Microsoft, Google, and Oracle. Your investment in learning R is sure to pay off in the long term as R continues to grow into the go-to language for data analysis and research.
What You Will Learn:
- How to acquire and install R
- How to import and export data and scripts
- How to analyze data and generate graphics
- How to program in R to write custom functions
- How to use R for interactive statistical explorations
- How to conduct bootstrapping and other advanced techniques
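As a small taste of the tasks the book walks through, the sketch below defines a custom function and runs a one-way ANOVA on a built-in dataset; it is a generic illustration rather than an example reproduced from the book.

```r
# A custom helper function: standard error of the mean.
sem <- function(x) sd(x) / sqrt(length(x))

# Built-in data: plant weights under a control and two treatment conditions.
data(PlantGrowth)

# Group means and standard errors computed with the custom function.
tapply(PlantGrowth$weight, PlantGrowth$group, mean)
tapply(PlantGrowth$weight, PlantGrowth$group, sem)

# One-way ANOVA of weight by group.
fit <- aov(weight ~ group, data = PlantGrowth)
summary(fit)
```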
This textbook on statistical modeling and statistical inference will assist advanced undergraduate and graduate students. Statistical Modeling and Computation provides a unique introduction to modern Statistics from both classical and Bayesian perspectives. It also offers an integrated treatment of Mathematical Statistics and modern statistical computation, emphasizing statistical modeling, computational techniques, and applications. Each of the three parts will cover topics essential to university courses. Part I covers the fundamentals of probability theory. In Part II, the authors introduce a wide variety of classical models that include, among others, linear regression and ANOVA models. In Part III, the authors address the statistical analysis and computation of various advanced models, such as generalized linear, state-space and Gaussian models. Particular attention is paid to fast Monte Carlo techniques for Bayesian inference on these models. Throughout the book the authors include a large number of illustrative examples and solved problems. The book also features a section with solutions, an appendix that serves as a MATLAB primer, and a mathematical supplement.
This is the first book to show the capabilities of Microsoft Excel to teach engineering statistics effectively. It is a step-by-step, exercise-driven guide for students and practitioners who need to master Excel to solve practical engineering problems. If understanding statistics isn't your strongest suit, you are not especially mathematically inclined, or you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in engineering courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Engineering Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand engineering problems. Practice problems are provided at the end of each chapter with their solutions in an Appendix. Separately, there is a full Practice Test (with answers in an Appendix) that allows readers to test what they have learned.
The development of a software system with an acceptable level of reliability and quality within the available time frame and budget is a challenging objective. This objective can be achieved to some extent through early prediction of the number of faults present in the software, which reduces the cost of development as it provides an opportunity to make early corrections during the development process. The book presents an early software reliability prediction model that helps grow the reliability of software systems by monitoring them in each development phase, i.e. from the requirements phase to the testing phase. Different approaches are discussed in this book to tackle this challenging issue. An important approach presented in this book is a model to classify modules into two categories: (a) fault-prone and (b) not fault-prone. The methods presented in this book for assessing the expected number of faults present in the software, assessing the expected number of faults present at the end of each phase, and classifying software modules into the fault-prone or not fault-prone category are easy to understand, develop and use for any practitioner. Practitioners can expect to gain more information about their development process and product reliability, which can help to optimize the resources used.
This volume contains pioneering contributions to both the theory and practice of optimal experimental design. Topics include the optimality of designs in linear and nonlinear models, as well as designs for correlated observations and for sequential experimentation. There is an emphasis on applications to medicine, in particular, to the design of clinical trials. Scientists from Europe, the US, Asia, Australia and Africa contributed to this volume of papers from the 11th Workshop on Model Oriented Design and Analysis.
Through this book, researchers and students will learn to use R for analysis of large-scale genomic data and how to create routines to automate analytical steps. The philosophy behind the book is to start with real-world raw datasets and perform all the analytical steps needed to reach final results. Though theory plays an important role, this is a practical book for graduate and undergraduate courses in bioinformatics and genomic analysis, or for use in lab sessions. How to handle and manage high-throughput genomic data, create automated workflows and speed up analyses in R is also taught. A wide range of R packages useful for working with genomic data are illustrated with practical examples. The key topics covered are association studies, genomic prediction, estimation of population genetic parameters and diversity, gene expression analysis, functional annotation of results using publicly available databases, and how to work efficiently in R with large genomic datasets. Important principles are demonstrated and illustrated through engaging examples which invite the reader to work with the provided datasets. Some methods that are discussed in this volume include: signatures of selection; population parameters (LD, FST, FIS, etc.); use of a genomic relationship matrix for population diversity studies; use of SNP data for parentage testing; and snpBLUP and gBLUP for genomic prediction. Step by step, all the R code required for a genome-wide association study is shown: starting from raw SNP data, how to build databases to handle and manage the data, quality control and filtering measures, association testing and evaluation of results, through to identification and functional annotation of candidate genes. Similarly, gene expression analyses are shown using microarray and RNAseq data. At a time when genomic data is decidedly big, the skills from this book are critical. In recent years R has become the de facto tool for analysis of gene expression data, in addition to its prominent role in analysis of genomic data. Benefits of using R include the integrated development environment for analysis, and flexibility and control of the analytic workflow. Included topics are core components of advanced undergraduate and graduate classes in bioinformatics, genomics and statistical genetics. This book is also designed to be used by students in computer science and statistics who want to learn the practical aspects of genomic analysis without delving into algorithmic details. The datasets used throughout the book may be downloaded from the publisher's website.
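To illustrate one of the listed topics, the base-R sketch below builds a genomic relationship matrix from a simulated SNP genotype matrix using VanRaden's first method; the data are simulated here, and this is not code or data from the book.

```r
set.seed(42)

# Simulated genotypes: 20 individuals x 100 SNPs, coded as 0/1/2 copies
# of the reference allele (purely illustrative data).
n_ind <- 20
n_snp <- 100
M <- matrix(rbinom(n_ind * n_snp, size = 2, prob = 0.3),
            nrow = n_ind, ncol = n_snp)

# Allele frequencies per SNP and the centred genotype matrix Z = M - 2p.
p <- colMeans(M) / 2
Z <- sweep(M, 2, 2 * p)

# Genomic relationship matrix, G = ZZ' / (2 * sum(p * (1 - p)))
# (VanRaden 2008, method 1).
G <- tcrossprod(Z) / (2 * sum(p * (1 - p)))

round(G[1:5, 1:5], 2)  # relationships among the first five individuals
```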
This book focuses on the applications of convex optimization and highlights several topics, including support vector machines, parameter estimation, norm approximation and regularization, semi-definite programming problems, convex relaxation, and geometric problems. All derivation processes are presented in detail to aid in comprehension. The book offers concrete guidance, helping readers recognize and formulate convex optimization problems they might encounter in practice.
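For orientation, a convex optimization problem in the standard form used in such treatments is reproduced below; this is the generic textbook formulation rather than notation specific to this book.

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n}\quad & f_0(x) \\
\text{s.t.}\quad & f_i(x) \le 0, \qquad i = 1, \dots, m, \\
& a_j^{\top} x = b_j, \qquad j = 1, \dots, p,
\end{aligned}
```

Here f_0, ..., f_m are convex functions and the equality constraints are affine; support vector machines, norm approximation and semi-definite programs are recognizable as instances or close relatives of this form.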
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and questions. The statistical theory and methods in this volume aim at the analysis of data which deviate from classical stringent model assumptions, which contain outlying values and/or have a complex structure. The volume is written for researchers as well as master's and PhD students with a good knowledge of statistics.
Up-to-Date Guidance from One of the Foremost Members of the R Core Team. Written by John M. Chambers, the leading developer of the original S software, Extending R covers key concepts and techniques in R to support analysis and research projects. It presents the core ideas of R, provides programming guidance for projects of all scales, and introduces new, valuable techniques that extend R. The book first describes the fundamental characteristics and background of R, giving readers a foundation for the remainder of the text. It next discusses topics relevant to programming with R, including the apparatus that supports extensions. The book then extends R's data structures through object-oriented programming, which is the key technique for coping with complexity. The book also incorporates a new structure for interfaces applicable to a variety of languages. A reflection of what R is today, this guide explains how to design and organize extensions to R by correctly using objects, functions, and interfaces. It enables current and future users to add their own contributions and packages to R. A 2017 Choice Outstanding Academic Title.
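The object-oriented extension of R's data structures that the book discusses is typically done with R's formal (S4) class system; the fragment below is a minimal generic illustration, not an example drawn from the book.

```r
# A formal (S4) class with two numeric slots and a validity check.
setClass("Interval",
         representation(lower = "numeric", upper = "numeric"),
         validity = function(object) {
           if (object@lower <= object@upper) TRUE else "lower must not exceed upper"
         })

# A generic function plus a method specialised for the new class.
setGeneric("width", function(x) standardGeneric("width"))
setMethod("width", "Interval", function(x) x@upper - x@lower)

iv <- new("Interval", lower = 1, upper = 4)
width(iv)  # 3
```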
You may like...

An Introduction to Creating Standardized… by Todd Case, Yuting Tian (Hardcover): R1,623 (Discovery Miles 16 230)
Portfolio and Investment Analysis with… by John B. Guerard, Ziwei Wang, … (Hardcover): R2,491 (Discovery Miles 24 910)
Essential Java for Scientists and… by Brian Hahn, Katherine Malan (Paperback): R1,341 (Discovery Miles 13 410)