This book offers a concise and gentle introduction to finite element programming in Python based on the popular FEniCS software library. Using a series of examples, including the Poisson equation, the equations of linear elasticity, the incompressible Navier-Stokes equations, and systems of nonlinear advection-diffusion-reaction equations, it guides readers through the essential steps to quickly solve a PDE in FEniCS: how to define a finite element variational problem, how to set boundary conditions, how to solve linear and nonlinear systems, and how to visualize solutions and structure finite element Python programs. This book is open access under a CC BY license.
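As a taste of the workflow described above, here is a minimal sketch of a Poisson solve using the legacy FEniCS (dolfin) Python API; the mesh resolution, boundary expression, and source term are illustrative choices, not anything prescribed here.

```python
# Minimal Poisson solve in legacy FEniCS (dolfin): -Laplace(u) = f on the
# unit square with u = u_D on the boundary.
from fenics import *

mesh = UnitSquareMesh(8, 8)                       # discretize the domain
V = FunctionSpace(mesh, "P", 1)                   # piecewise-linear elements

u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
bc = DirichletBC(V, u_D, "on_boundary")           # Dirichlet boundary condition

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v)) * dx                    # bilinear form of the variational problem
L = f * v * dx                                    # linear form (right-hand side)

u = Function(V)
solve(a == L, u, bc)                              # assemble and solve the linear system
```

The call `solve(a == L, u, bc)` assembles the linear system and applies the boundary condition in one step, which is the pattern the blurb's "essential steps" refer to.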
This guide for practicing statisticians, data scientists, and R users and programmers teaches the essentials of preprocessing data, leveraging the R programming language to easily and quickly turn noisy data into usable pieces of information. Data wrangling, which is also commonly referred to as data munging, transformation, manipulation, or janitor work, can be a painstakingly laborious process. Roughly 80% of data analysis time is spent on cleaning and preparing data; since wrangling is a prerequisite to the rest of the data analysis workflow (visualization, analysis, reporting), it is essential to become fluent and efficient in data wrangling techniques. This book guides the user through the data wrangling process via a step-by-step tutorial approach and provides a solid foundation for working with data in R. The author's goal is to teach the user how to easily wrangle data in order to spend more time on understanding the content of the data. By the end of the book, the user will have learned:
- How to work with different types of data such as numerics, characters, regular expressions, factors, and dates
- The difference between different data structures and how to create, add components to, and subset each data structure
- How to acquire and parse data from locations previously inaccessible
- How to develop functions and use loop control structures to reduce code redundancy
- How to use pipe operators to simplify code and make it more readable
- How to reshape the layout of data and manipulate, summarize, and join data sets
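The book works in R; purely as an analogue of the wrangle-reshape-summarize pattern it teaches, here is a short Python/pandas sketch (the data frame and column names are hypothetical, and method chaining stands in for R's pipe operator):

```python
# Hypothetical raw data: one temperature reading per city per year,
# with a missing value to clean up.
import pandas as pd

raw = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Lima", "Lima"],
    "year": [2015, 2016, 2015, 2016],
    "temp": [5.1, None, 19.3, 19.8],
})

summary = (
    raw
    .dropna(subset=["temp"])                               # clean: drop missing readings
    .assign(temp_f=lambda d: d.temp * 9 / 5 + 32)          # derive a new column
    .pivot(index="city", columns="year", values="temp_f")  # reshape from long to wide
)
print(summary)
```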
This book constitutes the refereed proceedings of the 19th International Conference on Distributed Computer and Communication Networks, DCCN 2016, held in Moscow, Russia, in November 2016. The 50 revised full papers and 6 revised short papers presented were carefully reviewed and selected from 141 submissions. The papers cover the following topics: computer and communication networks architecture optimization; control in computer and communication networks; performance and QoS/QoE evaluation in wireless networks; analytical modeling and simulation of next-generation communications systems; queuing theory and reliability theory applications in computer networks; wireless 4G/5G networks, cm- and mm-wave radio technologies; RFID technology and its application in intellectual transportation networks; internet of things, wearables, and applications of distributed information systems; probabilistic and statistical models in information systems; mathematical modeling of high-tech systems; mathematical modeling and control problems; distributed and cloud computing systems, big data analytics.
This edited volume lays the groundwork for Social Data Science, addressing epistemological issues, methods, technologies, software and applications of data science in the social sciences. It presents data science techniques for the collection, analysis and use of both online and offline new (big) data in social research and related applications. Among others, the individual contributions cover topics like social media, learning analytics, clustering, statistical literacy, recurrence analysis and network analysis. Data science is a multidisciplinary approach based mainly on the methods of statistics and computer science, and its aim is to develop appropriate methodologies for forecasting and decision-making in response to an increasingly complex reality often characterized by large amounts of data (big data) of various types (numeric, ordinal and nominal variables, symbolic data, texts, images, data streams, multi-way data, social networks etc.) and from diverse sources. This book presents selected papers from the international conference on Data Science & Social Research, held in Naples, Italy in February 2016, and will appeal to researchers in the social sciences working in academia as well as in statistical institutes and offices.
Following the volume on fundamentals and the volume on advanced techniques, this volume focuses on R applications in quantitative investment. Quantitative investment has attracted growing interest in recent years, with more and more startups working in the area alongside related internet communities and business models. R is widely used in this field and can be a very powerful tool. The author introduces R applications with cases from his own startup, covering topics such as portfolio optimization and risk management.
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. Statistics is studied and used to learn how to analyze and understand a data set of particular interest. Among the more popular statistical programs developed to analyze data sets are SAS, SPSS, and Minitab; of those, this textbook covers Minitab and SAS. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel for statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of the statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which is also helpful for analytical methods not possible or practical to do in Excel.
Intuitive Probability and Random Processes using MATLAB® is an introduction to probability and random processes that merges theory with practice. Based on the author's belief that only "hands-on" experience with the material can promote intuitive understanding, the approach is to motivate the need for theory using MATLAB examples, followed by theory and analysis, and finally descriptions of "real-world" examples to acquaint the reader with a wide variety of applications. The latter is intended to answer the usual question "Why do we have to study this?" Other salient features are:
- heavy reliance on computer simulation for illustration and student exercises
- the incorporation of MATLAB programs and code segments
- discussion of discrete random variables followed by continuous random variables to minimize confusion
- summary sections at the beginning of each chapter
- in-line equation explanations
- warnings on common errors and pitfalls
- over 750 problems designed to help the reader assimilate and extend the concepts
Intuitive Probability and Random Processes using MATLAB® is intended for undergraduate and first-year graduate students in engineering. The practicing engineer as well as others having the appropriate mathematical background will also benefit from this book. About the Author: Steven M. Kay is a Professor of Electrical Engineering at the University of Rhode Island and a leading expert in signal processing. He has received the Education Award "for outstanding contributions in education and in writing scholarly books and texts..." from the IEEE Signal Processing Society and has been listed among the 250 most cited researchers in the world in engineering.
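The book's code is MATLAB; the following NumPy sketch only mirrors its simulation-first style: estimate a probability by Monte Carlo and compare it with the known exact answer.

```python
# Estimate P(X + Y > 1) for independent X, Y ~ Uniform(0, 1) by simulation.
# By symmetry the exact value is 1/2, so the estimate can be checked.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.random(n)
y = rng.random(n)
estimate = np.mean(x + y > 1)   # fraction of trials in which the event occurs
print(estimate, "vs exact 0.5")
```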
This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
Familiarize yourself with MATLAB using this concise, practical tutorial that is focused on writing code to learn concepts. Starting from the basics, this book covers array-based computing, plotting and working with files, numerical computation formalism, and the primary concepts of approximations. Introduction to MATLAB is useful for industry engineers, researchers, and students who are looking for open-source solutions for numerical computation. In this book you will learn by doing, avoiding technical jargon, which makes the concepts easy to learn. First you'll see how to run basic calculations, absorbing technical complexities incrementally as you progress toward advanced topics. Throughout, the language is kept simple to ensure that readers at all levels can grasp the concepts.
What You'll Learn:
- Apply sample code to your engineering or science problems
- Work with MATLAB arrays, functions, and loops
- Use MATLAB's plotting functions for data visualization
- Solve numerical computing and computational engineering problems with a MATLAB case study
Who This Book Is For: Engineers, scientists, researchers, and students who are new to MATLAB. Some prior programming experience would be helpful but not required.
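Again, the book teaches these basics in MATLAB; as a rough Python analogue of the array-and-plot workflow it describes (NumPy and Matplotlib standing in for MATLAB's built-ins):

```python
# Array-based computing: build a vector of points, evaluate a function on it
# without an explicit loop, and save a plot to a file.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)   # 200 evenly spaced sample points
y = np.sin(x)                        # vectorized evaluation

plt.plot(x, y, label="sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("sine.png")              # write the figure to disk
```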
A unique feature of this book is its low threshold: the text is simple and at the same time full of self-assessment opportunities. Other unique features are the succinctness of the chapters (3 to 6 pages each), the presence of complete command texts for the statistical methodologies reviewed, and the omission of dull scientific texts that impose an unnecessary burden on busy and jaded professionals. For readers requesting more background, theoretical, and mathematical information, a note section with references is included in each chapter. The first edition in 2010 was the first publication of a complete overview of SPSS methodologies for medical and health statistics; well over 100,000 copies of various chapters were sold within the first year of publication. There were four reasons for a rewrite. First, many important comments from readers urged a rewrite. Second, SPSS has produced many updates and upgrades, with relevant novel and improved methodologies. Third, the authors felt that the chapter texts needed some improvements for better readability: chapters have now been classified according to the outcome data, helpful for choosing your analysis rapidly, and a schematic overview of data and explanatory graphs have been added. Fourth, current data are increasingly complex, and many important methods for analysis were missing in the first edition. For that purpose some more advanced methods seemed unavoidable, such as hierarchical loglinear methods, gamma and Tweedie regressions, and random intercept analyses. In order for the contents of the book to remain covered by the title, the authors renamed the book SPSS for Starters and 2nd Levelers. Special care was, nonetheless, taken to keep things as simple as possible: simple menu commands are given, and the arithmetic is still of a no-more-than high-school level. Step-by-step analyses of the different statistical methodologies are given with the help of 60 SPSS data files available through the internet. Given the lack of time of this busy group of readers, the authors have made every effort to produce a text as succinct as possible.
The objective of Kai Zhang's research is to assess existing process monitoring and fault detection (PM-FD) methods and to provide suggestions and guidance for choosing appropriate ones, as the performance assessment of PM-FD methods has become an area of interest in both academia and industry. The author first compares basic FD statistics, and then assesses different PM-FD methods for monitoring the key performance indicators of static processes, steady-state dynamic processes, and general dynamic processes including transient states. He validates the theoretical developments using both benchmark and real industrial processes.
This book provides new insights into the study of global environmental changes using ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. Its main advantage is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed, and results are provided on modeling the dynamics of the system's characteristics under scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
This book introduces advanced undergraduates, graduate students, and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented toward the use of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. The book deals with statistical methods for analyzing such data and provides a novel and unifying approach to hypothesis testing. The techniques described are illustrated with examples, and the statistical software is provided on the authors' website.
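As a hedged illustration of the distance-function view of rank correlation mentioned above (not the book's code): Kendall's tau can be read off from the number of discordant pairs between two rankings.

```python
# Kendall distance between two rankings = number of item pairs they order
# differently; tau = 1 - 2 * distance / (number of pairs).
from itertools import combinations
from scipy.stats import kendalltau

r1 = [1, 2, 3, 4]   # ranking of four items by judge 1
r2 = [1, 3, 2, 4]   # ranking of the same items by judge 2

# Count discordant pairs: the two rankings disagree on the pair's order.
d = sum((r1[i] - r1[j]) * (r2[i] - r2[j]) < 0
        for i, j in combinations(range(4), 2))

tau, _ = kendalltau(r1, r2)
print(d, tau)       # d = 1 discordant pair of 6; tau = 1 - 2*1/6 = 2/3
```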
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares, and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at the advanced undergraduate and graduate levels. A large bibliography is provided, which includes historical and review papers as well as recent research papers. This makes the book useful as a reference and guide to further study and research.
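As a small illustration of one topic the book treats (not taken from the book itself): a least squares problem can be solved via the QR factorization rather than the normal equations.

```python
# Solve min ||Ax - b|| via the thin QR factorization A = QR, then
# back-substitute R x = Q^T b. This avoids the squared condition number
# incurred by forming A^T A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))   # overdetermined system: 100 equations, 3 unknowns
b = rng.standard_normal(100)

Q, R = np.linalg.qr(A)              # thin QR: Q is 100x3, R is 3x3 upper triangular
x = np.linalg.solve(R, Q.T @ b)     # solve the small triangular system

# sanity check against the library's least squares solver
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```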
The R Companion to Elementary Applied Statistics includes traditional applications covered in elementary statistics courses as well as some additional methods that address questions that might arise during or after the application of commonly used methods. Beginning with basic tasks and computations with R, readers are then guided through ways to bring data into R, manipulate the data as needed, perform common statistical computations and elementary exploratory data analysis tasks, prepare customized graphics, and take advantage of R for a wide range of methods that find use in many elementary applications of statistics.
Features:
- Requires no familiarity with R or programming to begin using this book.
- Can be used as a resource for a project-based elementary applied statistics course, or for researchers and professionals who wish to delve more deeply into R.
- Contains an extensive array of examples that illustrate ideas on various ways to use pre-packaged routines, as well as on developing individualized code.
- Presents quite a few methods that may be considered non-traditional or advanced.
- Includes accompanying, carefully documented script files that contain code for all examples presented, and more.
R is a powerful and free product that is gaining popularity across the scientific community in both the professional and academic arenas. The statistical methods discussed in this book are used to introduce the fundamentals of using R functions and to provide ideas for developing further skills in writing R code. These ideas are illustrated through an extensive collection of examples. About the Author: Christopher Hay-Jahans received his Doctor of Arts in mathematics from Idaho State University in 1999. After spending three years at the University of South Dakota, he moved to Juneau, Alaska, in 2002, where he has taught a wide range of undergraduate courses at the University of Alaska Southeast.
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimates of both the ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the designs. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs, using Hadamard matrices, the Kronecker product, the Rao-Khatri product, and mixed orthogonal arrays, to name a few.
Describing novel mathematical concepts for recommendation engines, Realtime Data Mining: Self-Learning Techniques for Recommendation Engines features a sound mathematical framework unifying approaches based on control and learning theories, tensor factorization, and hierarchical methods. Furthermore, it presents promising results of numerous experiments on real-world data. The area of realtime data mining is currently developing at an exceptionally dynamic pace, and realtime data mining systems are the counterpart of today's "classic" data mining systems. Whereas the latter learn from historical data and then use it to deduce necessary actions, realtime analytics systems learn and act continuously and autonomously. In the vanguard of these new analytics systems are recommendation engines. They are principally found on the Internet, where all information is available in realtime and an immediate feedback is guaranteed. This monograph appeals to computer scientists and specialists in machine learning, especially from the area of recommender systems, because it conveys a new way of realtime thinking by considering recommendation tasks as control-theoretic problems. Realtime Data Mining: Self-Learning Techniques for Recommendation Engines will also interest application-oriented mathematicians because it consistently combines some of the most promising mathematical areas, namely control theory, multilevel approximation, and tensor factorization.
This volume compiles the major results of conference participants from the Third International Conference in Network Analysis, held at the Higher School of Economics, Nizhny Novgorod, in May 2013, with the aim of initiating further joint research among different groups. The contributions in this book cover a broad range of topics relevant to the theory and practice of network analysis, including the reliability of complex networks, software, theory, methodology, and applications. Network analysis has become a major research topic over the last several years. The broad range of applications that can be described and analyzed by means of a network has brought together researchers and practitioners from numerous fields such as operations research, computer science, transportation, energy, biomedicine, computational neuroscience, and the social sciences. In addition, new approaches and computing environments such as parallel computing, grid computing, cloud computing, and quantum computing have helped to solve large-scale network optimization problems.
This pioneering book teaches readers to use R within four core analytical areas applicable to the Humanities: networks, text, geospatial data, and images. This book is also designed to be a bridge: between quantitative and qualitative methods, individual and collaborative work, and the humanities and social sciences. Humanities Data with R does not presuppose background programming experience. Early chapters take readers from R set-up to exploratory data analysis (continuous and categorical data, multivariate analysis, and advanced graphics with emphasis on aesthetics and facility). Following this, networks, geospatial data, image data, natural language processing and text analysis each have a dedicated chapter. Each chapter is grounded in examples to move readers beyond the intimidation of adding new tools to their research. Everything is hands-on: networks are explained using U.S. Supreme Court opinions, and low-level NLP methods are applied to short stories by Sir Arthur Conan Doyle. After working through these examples with the provided data, code and book website, readers are prepared to apply new methods to their own work. The open source R programming language, with its myriad packages and popularity within the sciences and social sciences, is particularly well-suited to working with humanities data. R packages are also highlighted in an appendix. This book uses an expanded conception of the forms data may take and the information it represents. The methodology will have wide application in classrooms and self-study for the humanities, but also for use in linguistics, anthropology, and political science. Outside the classroom, this intersection of humanities and computing is particularly relevant for research and new modes of dissemination across archives, museums and libraries.
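The book's examples are in R; this pure-Python sketch merely illustrates the kind of low-level NLP step it applies to Conan Doyle's stories: tokenization followed by a term-frequency count (the sample sentence is made up).

```python
# Crude tokenizer plus term-frequency count: the first step of many
# low-level text analysis workflows.
from collections import Counter
import re

text = "It was a dark night. The night was silent, and the street was dark."
tokens = re.findall(r"[a-z']+", text.lower())   # lowercase word tokens
print(Counter(tokens).most_common(3))           # e.g. [('was', 3), ('night', 2), ...]
```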
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, the philosophy of natural computing, and educational technology. It presents extended versions of the best papers selected from the 7th International Workshop on Natural Computing (IWNC7), held in Tokyo, Japan, in 2013. The target audience is not limited to researchers working in natural computing but also includes those active in biological engineering, fine/media art design, aesthetics, and philosophy.
Economists can use computer algebra systems to manipulate symbolic models, derive numerical computations, and analyze empirical relationships among variables. Maxima is an open-source multi-platform computer algebra system that rivals proprietary software. Maxima's symbolic and computational capabilities enable economists and financial analysts to develop a deeper understanding of models by allowing them to explore the implications of differences in parameter values, providing numerical solutions to problems that would be otherwise intractable, and by providing graphical representations that can guide analysis. This book provides a step-by-step tutorial for using this program to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques. Readers learn how to phrase the relevant analysis and how symbolic expressions, numerical computations, and graphical representations can be used to learn from microeconomic models. In particular, comparative statics analysis is facilitated. Little has been published on Maxima and its applications in economics and finance, and this volume will appeal to advanced undergraduates, graduate-level students studying microeconomics, academic researchers in economics and finance, economists, and financial analysts.
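The book works in Maxima; as a stand-in, the same kind of comparative statics can be sketched in Python with SymPy on a hypothetical linear market model (the demand and supply equations here are illustrative, not from the book):

```python
# Comparative statics on a linear market: demand q = a - b*p, supply
# q = c + d*p. Solve for the equilibrium price symbolically, then
# differentiate it with respect to the demand shifter a.
import sympy as sp

p, a, b, c, d = sp.symbols("p a b c d", positive=True)

eq_price = sp.solve(sp.Eq(a - b * p, c + d * p), p)[0]
print(eq_price)                # p* = (a - c)/(b + d)
print(sp.diff(eq_price, a))    # dp*/da = 1/(b + d) > 0: demand shifts raise price
```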
The book opens with a short introduction to Indian music, in particular classical Hindustani music, followed by a chapter on the role of statistics in computational musicology. The authors then show how to analyze musical structure using Rubato, the music software package for statistical analysis, in particular addressing modeling, melodic similarity and lengths, and entropy analysis; they then show how to analyze musical performance. Finally, they explain how the concept of seminatural composition can help a music composer to obtain the opening line of a raga-based song using Monte Carlo simulation. The book will be of interest to musicians and musicologists, particularly those engaged with Indian music.
Among the various multi-level formulations of mathematical models in decision making processes, this book focuses on the bi-level model, which, as the most frequently used, addresses the conflicts that exist in multi-level decision making processes. From the perspective of bi-level structure and uncertainty, this book takes real-life problems as the background, focuses on so-called random-like uncertainty, and develops a general framework for random-like bi-level decision making problems. The random-like uncertainty considered in this book includes the random phenomenon, the random-overlapped random (Ra-Ra) phenomenon, and the fuzzy-overlapped random (Ra-Fu) phenomenon. Basic theory, models, algorithms, and practical applications for the different types of random-like bi-level decision making problems are also presented.
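For reference (this general form is standard and not quoted from the book), the deterministic bi-level program underlying these random-like extensions can be written as:

```latex
\[
\begin{aligned}
\min_{x}\quad & F\bigl(x,\, y^{*}(x)\bigr) \\
\text{s.t.}\quad & G\bigl(x,\, y^{*}(x)\bigr) \le 0, \\
& y^{*}(x) \in \operatorname*{arg\,min}_{y} \bigl\{\, f(x, y) : g(x, y) \le 0 \,\bigr\}.
\end{aligned}
\]
```

Here the upper-level decision maker chooses x while anticipating the lower level's optimal response y*(x); the random-like variants replace coefficients in F, f, G, and g with random, Ra-Ra, or Ra-Fu quantities.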
This book shows ways of visualizing large datasets, whether large in the number of cases, the number of variables, or both. All ideas are illustrated with displays from analyses of real datasets, and the importance of interpreting displays effectively is emphasized. Graphics should be drawn to convey information, and the book includes many insightful examples. New approaches are needed to visualize the information in large datasets, and most of the innovations described in this book are developments of standard graphics. The book is accessible to readers with some experience of drawing statistical graphics.
This book gathers a selection of invited and contributed lectures from the European Conference on Numerical Mathematics and Advanced Applications (ENUMATH) held in Lausanne, Switzerland, August 26-30, 2013. It provides an overview of recent developments in numerical analysis, computational mathematics and applications from leading experts in the field. New results on finite element methods, multiscale methods, numerical linear algebra and discretization techniques for fluid mechanics and optics are presented. As such, the book offers a valuable resource for a wide range of readers looking for a state-of-the-art overview of advanced techniques, algorithms and results in numerical mathematics and scientific computing.