This beginner's introduction to MATLAB teaches a sufficient subset of the functionality and gives the reader practical experience in finding further information. A forty-page appendix contains user-friendly summaries and tables of MATLAB functions, enabling the reader to find appropriate functions, understand their syntax, and get a good overview. The large number of exercises, tips, and solutions means that the course can be followed with or without a computer. Recent developments in MATLAB for advanced programming are described using realistic examples, in order to prepare students for larger programming projects. A step-by-step 'guided tour' eases the steep learning curve encountered in learning a new programming language. Each chapter corresponds to an actual engineering course, with MATLAB examples illustrating the typical theory and providing a practical understanding of those courses. A companion homepage contains exercises, a take-home examination, and an automatic marking system that grades solutions. End-of-chapter exercises come with selected solutions in an appendix. The development of MATLAB programming and the rapid increase in the use of MATLAB in engineering courses make this a valuable self-study guide for both engineering students and practising engineers. Readers will find that this timeless material can be used throughout their education and into their careers.
Over the past 80 years, the way that citation frequency is counted and analyzed has changed dramatically, from early manual transcription and statistical computation of citation data to computer-based creation and manipulation of citation data. "Author Cocitation Analysis: Quantitative Methods for Mapping the Intellectual Structure of an Academic Discipline" provides a blueprint for researchers to follow in a wide variety of investigations. Pertinent to faculty, researchers, and graduate students in any academic field, this book introduces an alternative approach to conducting author cocitation analysis (ACA) without relying on commercial citation databases.
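At its core, ACA starts from a cocitation matrix: a count, for each pair of authors, of how many documents cite both. A minimal sketch of that counting step, using hypothetical citation data and plain Python (the book itself does not prescribe this code):

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: each document's reference list, reduced to the
# set of authors it cites.
docs = [
    {"White", "Griffith", "Small"},
    {"White", "Small"},
    {"Griffith", "Small", "McCain"},
]

# Cocitation count for a pair = number of documents citing both authors.
cocitation = Counter()
for cited in docs:
    for a, b in combinations(sorted(cited), 2):
        cocitation[(a, b)] += 1

print(cocitation[("Small", "White")])  # -> 2
```

The resulting matrix is what ACA then feeds into clustering, multidimensional scaling, or factor analysis to map the intellectual structure of a field.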
Given the explosion of interest in mathematical methods for solving problems in finance and trading, a great deal of research and development is taking place in universities, large brokerage firms, and the supporting trading-software industry. Mathematical advances have been made both analytically and numerically in finding practical solutions. This book provides a comprehensive overview of existing and original material about what mathematics, when allied with Mathematica, can do for finance. Sophisticated theories are presented systematically in a user-friendly style, with a powerful combination of mathematical rigor and Mathematica programming. Three kinds of solution methods are emphasized: symbolic, numerical, and Monte Carlo. Nowadays, a good personal computer is all that is required to handle the symbolic and numerical methods developed in this book. Key features:
* No previous knowledge of Mathematica programming is required
* The symbolic, numeric, data management and graphic capabilities of Mathematica are fully utilized
* Monte Carlo solutions of scalar and multivariable SDEs are developed and used heavily in discussing trading issues such as Black-Scholes hedging
* Black-Scholes and Dupire PDEs are solved symbolically and numerically
* Fast numerical solutions to free boundary problems are provided, with details of their Mathematica realizations
* A comprehensive study of optimal portfolio diversification is presented, including an original theory of optimal portfolio hedging under non-log-normal asset price dynamics
The book is designed for the academic community of instructors and students and, most importantly, will meet the everyday trading needs of quantitatively inclined professional and individual investors.
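As a rough illustration of the symbolic/numerical/Monte Carlo division of labour, here is what the Monte Carlo leg looks like for a European call under geometric Brownian motion, checked against the closed-form Black-Scholes price. This is a generic sketch in Python with NumPy, not the book's Mathematica code:

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(S0, K, r, sigma, T):
    # Closed-form Black-Scholes price of a European call.
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=0):
    # Monte Carlo price: simulate terminal prices of geometric
    # Brownian motion and discount the average payoff.
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(bs_call(100, 100, 0.05, 0.2, 1.0))  # ~10.45
print(mc_call(100, 100, 0.05, 0.2, 1.0))  # close to the analytic value
```

With 200,000 paths the Monte Carlo estimate typically lands within a few cents of the analytic value of about 10.45.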
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the 'big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
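To convey the flavour (not the monograph's actual constructions): in the simplest i.i.d. setting, a frequentist prediction interval for a future observation can be read directly off the observed data, with no parametric model or normality assumption in between. A minimal Python sketch with simulated, deliberately non-Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=1.5, size=200)  # skewed, clearly non-normal sample

# Predict a future observation from observable data alone:
# resample the sample itself and take empirical quantiles.
B = 5000
future = rng.choice(y, size=B, replace=True)
lo, hi = np.quantile(future, [0.05, 0.95])
print(f"90% prediction interval for a new draw: [{lo:.2f}, {hi:.2f}]")
```

The monograph's contribution is to extend this observable-quantities viewpoint to much harder settings, regression and time series, via explicit transformations toward i.i.d.-ness.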
The first edition was published in 1995. This new edition updates the text to Mathematica 5.0 and offers a more extensive treatment of linear algebra; it has been thoroughly revised and corrected throughout.
This book explores inductive inference using the minimum message length (MML) principle, a Bayesian method which is a realisation of Ockham's Razor based on information theory. Accompanied by a library of software, the book can assist an applications programmer, student or researcher in the fields of data analysis and machine learning to write computer programs based upon this principle. MML inference has been around for 50 years, and yet only one highly technical book has been written about the subject. The majority of research in the field has been backed by specialised one-off programs, but this book includes a library of general MML-based software, written in Java. The Java source code is available under the GNU GPL open-source license. The software library is documented using Javadoc, which produces extensive cross-referenced HTML manual pages. Every probability distribution and statistical model described in the book is implemented and documented in the software library. The library may contain a component that directly solves a reader's inference problem, contain components that can be put together to solve the problem, or provide a standard interface under which a new component can be written to solve the problem. This book will be of interest to application developers in the fields of machine learning and statistics, as well as academics, postdocs, programmers and data scientists. It could also be used by third- or fourth-year undergraduate or postgraduate students.
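The principle itself fits in one line: prefer the hypothesis H that minimises the total length of a two-part message, len(H) + len(data | H). A toy Python illustration with two discrete hypotheses given equal prior weight (the book's library handles the continuous-parameter case properly; this sketch does not):

```python
from math import log2

# Two-part message length (in bits) for data D under hypothesis H:
#   len(H) + len(D | H)  ~  -log2 P(H) - log2 P(D | H)
# Toy setup (hypothetical): 100 coin flips with 70 heads, and two
# candidate models, each costing 1 bit to state (equal priors).
n, k = 100, 70

def data_bits(p):
    # -log2 likelihood of k heads in n flips under bias p
    return -(k * log2(p) + (n - k) * log2(1 - p))

for name, p in [("fair coin, p=0.5", 0.5), ("biased coin, p=0.7", 0.7)]:
    total = 1.0 + data_bits(p)  # 1 bit to name the hypothesis
    print(f"{name}: {total:.1f} bits")
# MML favours the hypothesis with the shorter total message
# (here the biased coin: ~89 bits versus 101).
```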
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was to the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students to learn data science topics using cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills but do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of the essential data science techniques needed to turn mere transactional information into social, economic, and business insights.
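As a taste of the data model involved, here is a hypothetical, heavily simplified transaction list and two of the most basic explorations such a book builds toward, degree counts and value flows per address (generic Python, not code from the book):

```python
from collections import Counter

# Hypothetical simplified transactions: (sender, receiver, amount).
txs = [
    ("addr_A", "addr_B", 0.5),
    ("addr_A", "addr_C", 1.2),
    ("addr_B", "addr_C", 0.3),
    ("addr_D", "addr_A", 2.0),
]

# Out-degree per address: how many payments it sent.
out_degree = Counter(sender for sender, _, _ in txs)

# Total value received per address.
received = Counter()
for _, receiver, amount in txs:
    received[receiver] += amount

print(out_degree.most_common())  # network-science view of activity
print(received.most_common())    # economic view of value flow
```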
This comprehensive text covers the use of SAS for epidemiology and public health research. Developed with students in mind and shaped by their feedback, the text addresses this material in a straightforward manner with a multitude of examples. It is directly applicable to students and researchers in the fields of public health, biostatistics and epidemiology. Through a hands-on approach to the use of SAS for a broad number of epidemiologic analyses, readers learn techniques for data entry and cleaning, categorical analysis, ANOVA, linear regression, and much more. Exercises utilizing real-world data sets are featured throughout the book, and SAS screen shots demonstrate the steps for successful programming. SAS (Statistical Analysis System) is an integrated system of software products provided by the SAS Institute, which is headquartered in Cary, North Carolina. It gives programmers and statisticians the ability to engage in many sophisticated statistical analyses, as well as data retrieval and mining exercises. SAS is widely used in the fields of epidemiology and public health research, predominantly due to its ability to reliably analyze very large administrative data sets, as well as the more commonly encountered clinical trial and observational research data.
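For a sense of the categorical-analysis material: the workhorse of epidemiologic categorical analysis is the 2x2 table. A hypothetical example of the arithmetic behind it, sketched in Python rather than SAS (in SAS this is typically PROC FREQ territory):

```python
# Hypothetical cohort data as a 2x2 table:
#                 disease   no disease
#   exposed            30           70
#   unexposed          15           85
a, b, c, d = 30, 70, 15, 85

risk_exposed   = a / (a + b)          # 0.30
risk_unexposed = c / (c + d)          # 0.15
print("risk ratio:", risk_exposed / risk_unexposed)   # -> 2.0
print("odds ratio:", (a * d) / (b * c))               # -> ~2.43
```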
This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23-24, 2016. This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems.
This Festschrift in honour of Paul Deheuvels' 65th birthday compiles recent research results in the area between mathematical statistics and probability theory with a special emphasis on limit theorems. The book brings together contributions from invited international experts to provide an up-to-date survey of the field. Written in textbook style, this collection of original material addresses researchers, PhD and advanced Master students with a solid grasp of mathematical statistics and probability theory.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems such as option pricing, risk management, and portfolio selection. Code is provided in the R programming language to execute the methods, and tables and figures, often with real data, illustrate it. References to related work help the reader pursue areas of specific interest in further detail. A comprehensive background in economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, though the book should also be accessible to advanced undergraduates; practitioners working in the finance industry will also benefit.
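As one concrete example of the portfolio-selection computations such a text covers, the global minimum-variance portfolio has the closed form w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). A sketch with a made-up covariance matrix, in Python rather than the book's R:

```python
import numpy as np

# Hypothetical covariance matrix for three assets.
Sigma = np.array([
    [0.040, 0.006, 0.010],
    [0.006, 0.090, 0.012],
    [0.010, 0.012, 0.160],
])

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()
print(np.round(w, 3), "portfolio variance:", float(w @ Sigma @ w))
```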
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
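The basic recipe behind Monte Carlo permutation tests is easy to state: compute the observed measure of association, repeatedly shuffle one variable to destroy any association, and report the fraction of shuffles that match or exceed the observed value. A minimal sketch for a correlation between two interval-level variables (simulated data, generic Python):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)   # hypothetical paired data

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

obs = pearson(x, y)
# Monte Carlo permutation test: shuffle y to break the association,
# then count how often the permuted |r| reaches the observed |r|.
B = 10_000
count = sum(abs(pearson(x, rng.permutation(y))) >= abs(obs) for _ in range(B))
p_value = (count + 1) / (B + 1)
print(f"r = {obs:.3f}, permutation p = {p_value:.4f}")
```

Exact permutation tests enumerate all possible arrangements instead of sampling them, which is feasible only for small samples; that is precisely where the Monte Carlo variants come in.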
This book contains a rich set of tools for nonparametric analyses. The purpose of this text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences:
* To introduce when nonparametric approaches to data analysis are appropriate
* To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test
* To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set
The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approach.
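A representative example of such a test is the Wilcoxon-Mann-Whitney rank-sum test for two independent groups, which needs no normality assumption. The book works in R (where this is wilcox.test); the same test in a Python sketch with hypothetical data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical measurements from two independent groups.
control   = [4.1, 5.0, 3.8, 4.4, 4.9, 5.2]
treatment = [5.6, 6.1, 5.8, 5.3, 6.4, 5.9]

# Rank-based two-sample test: compares distributions, not means,
# and makes no normality assumption about either group.
stat, p = mannwhitneyu(control, treatment, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```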
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference held in his honour.
This unique resource provides engineers and students with a practical approach to quickly learning the software-defined radio concepts they need to know for their work in the field. By prototyping and evaluating actual digital communication systems capable of performing "over-the-air" wireless data transmission and reception, readers attain a first-hand understanding of critical design trade-offs and issues, and professionals gain a sense of the actual real-world operational behavior of these systems. With the purchase of the book, readers gain access to several ready-made Simulink experiments on the publisher's website. This collection of laboratory experiments, along with several examples, enables engineers to implement the designs discussed in the book in a short period of time. These files can be executed using MATLAB version R2011b or later.
Since the beginning of the seventies, computer hardware has been available for applying programmable computers to various tasks. During the nineties the hardware developed from big mainframes to personal workstations. Nowadays it is not only the hardware that is much more powerful; compared to the seventies, a workstation can do far more work than a mainframe could. In parallel we find a specialization in software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. The introduction of personal computers in the eighties gave new impulses for even further development; already at the beginning of the seventies, special languages like SAS or SPSS were available to statisticians. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
"R for Business Analytics" looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The use of Graphical User Interfaces (GUI) is emphasized in this book to further cut downand bend the famous learning curve in learning R. This book is aimed to help you kick-start with analytics including chapters on data visualization, code examples on web analytics and social media analytics, clustering, regression models, text mining, data mining models and forecasting. The book tries to expose the reader to a breadth of business analytics topics without burying the user in needless depth. The included references and links allow the reader to pursue business analytics topics. This book is aimed at business analysts with basic programming skills for using R for Business Analytics. Note the scope of the book is neither statistical theory nor graduate level research for statistics, but rather it is for business analytics practitioners. Business analytics (BA) refers to the field ofexploration and investigation of data generated by businesses. Business Intelligence (BI) is the seamless dissemination of information through the organization, which primarily involves business metrics both past and current for the use of decision support in businesses. Data Mining (DM) is the process of discovering new patterns from large data using algorithms and statistical methods. To differentiate between the three, BI is mostly current reports, BA is models to predict and strategizeand DM matches patterns in big data. The R statistical software is the fastest growing analytics platform in the world, and is established in both academia and corporations for robustness, reliability and accuracy. The book utilizes Albert Einstein s famous remarks on making things as simple as possible, but no simpler. This book will blow the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimovwas a better writer in spreading science than any textbook or journal author."
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting new applications.
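A small illustration of why robustness matters: a single gross outlier can drag the sample mean and standard deviation arbitrarily far, while the median and the (consistency-scaled) MAD barely move. A generic Python sketch, not taken from the book:

```python
import numpy as np

x = np.array([9.8, 10.1, 9.9, 10.2, 10.0, 55.0])  # one gross outlier

# Classical estimates: badly distorted by the outlier.
print("mean:", x.mean(), " std:", x.std(ddof=1))

# Robust counterparts: median, and MAD scaled by 1.4826 so that it
# estimates the standard deviation under normality.
med = np.median(x)
mad = 1.4826 * np.median(np.abs(x - med))
print("median:", med, " scaled MAD:", mad)
```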
You may like...
* The Data Science Framework - A View from… by Juan J. Cuadrado-Gallego, Yuri Demchenko (Hardcover), R4,578 (Discovery Miles 45 780)
* Developing Cross-Cultural Relational… by Gerrard Mugford (Hardcover)
* Power Amplifiers for the S-, C-, X- and… by Mladen Bozanic, Saurabh Sinha (Hardcover)
* English Language Teacher Education in… by Liz England, Georgios Kormpas, … (Paperback), R1,229 (Discovery Miles 12 290)
* Automotive Embedded Systems - Key… by M. Kathiresh, R. Neelaveni (Hardcover), R3,896 (Discovery Miles 38 960)
* 3D Imaging for Safety and Security by Andreas Koschan, Marc Pollefeys, … (Hardcover), R1,573 (Discovery Miles 15 730)
* Advanced Introduction to Artificial… by Tom Davenport, John Glaser, … (Paperback), R652 (Discovery Miles 6 520)
* Trends, Applications, and Challenges of… by Mohammad Amin Kuhail, Bayan Abu Shawar, … (Hardcover), R7,249 (Discovery Miles 72 490)