The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has traditionally been approached via a model-based paradigm: (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th-century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the 'big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems inadvertently also to achieve the main goal of modeling, i.e., describing how Y depends on X. Hence, just as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas to additionally obtain point estimates and confidence intervals for relevant parameters, leading to an alternative, transformation-based approach to statistical inference.
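To make the transformation idea concrete, here is a minimal sketch in Python (not from the monograph; the function names, the kernel-smoothing choices, and the toy data are all illustrative assumptions). It maps regression data to approximately i.i.d. studentized residuals and then inverts that transformation at a new design point to produce a prediction interval, using empirical residual quantiles in place of the full Model-Free Bootstrap:

```python
import numpy as np

def kernel_smooth(x, y, x0, h):
    # Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def model_free_interval(x, y, x_new, h=0.5, alpha=0.1):
    # Step 1: transform the data to approximately i.i.d. residuals
    mu = np.array([kernel_smooth(x, y, xi, h) for xi in x])
    s2 = np.array([kernel_smooth(x, (y - mu) ** 2, xi, h) for xi in x])
    eps = (y - mu) / np.sqrt(s2)          # studentized, roughly i.i.d.
    # Step 2: invert the transformation at the new design point
    mu_new = kernel_smooth(x, y, x_new, h)
    s_new = np.sqrt(kernel_smooth(x, (y - mu) ** 2, x_new, h))
    lo, hi = np.quantile(eps, [alpha / 2, 1 - alpha / 2])
    return mu_new + lo * s_new, mu_new + hi * s_new

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 200)
y = np.sin(x) + 0.2 * rng.standard_normal(200)
print(model_free_interval(x, y, x_new=1.5))
```

In the book's fuller treatment, the approximately i.i.d. residuals would be resampled (the Model-Free Bootstrap) rather than used directly through their empirical quantiles.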
Confidently shepherd your organization's implementation of Microsoft Dynamics 365 to a successful conclusion. In Mastering Microsoft Dynamics 365 Implementations, accomplished executive, project manager, and author Eric Newell delivers a holistic, step-by-step reference to implementing Microsoft's cloud-based ERP and CRM business applications. You'll find the detailed and concrete instructions you need to take your implementation project all the way to the finish line, on time and on budget. You'll learn:

- The precise steps to take, in the correct order, to bring your Dynamics 365 implementation to life
- What to do before you begin the project, including identifying stakeholders and building your business case
- How to deal with change management throughout the lifecycle of your project
- How to manage conference room pilots (CRPs) and what to expect during the sessions

Perfect for CIOs, technology VPs, CFOs, operations leaders, application directors, business analysts, ERP/CRM specialists, and project managers, Mastering Microsoft Dynamics 365 Implementations is an indispensable and practical reference for guiding your real-world Dynamics 365 implementation from planning to completion.
The first edition (94301-3) was published in 1995 in TIMS and had 2264 regular US sales, 928 IC, and 679 bulk. This new edition updates the text to Mathematica 5.0 and offers a more extensive treatment of linear algebra. It has been thoroughly revised and corrected throughout.
This book explores inductive inference using the minimum message length (MML) principle, a Bayesian method which is a realisation of Ockham's Razor based on information theory. Accompanied by a library of software, the book can assist an applications programmer, student, or researcher in the fields of data analysis and machine learning to write computer programs based upon this principle. MML inference has been around for 50 years, and yet only one highly technical book has been written about the subject. The majority of research in the field has been backed by specialised one-off programs, but this book includes a library of general MML-based software, in Java. The Java source code is available under the GNU GPL open-source license. The software library is documented using Javadoc, which produces extensively cross-referenced HTML manual pages. Every probability distribution and statistical model that is described in the book is implemented and documented in the software library. The library may contain a component that directly solves a reader's inference problem, or contain components that can be put together to solve the problem, or provide a standard interface under which a new component can be written to solve the problem. This book will be of interest to application developers in the fields of machine learning and statistics, as well as academics, postdocs, programmers, and data scientists. It could also be used by third- or fourth-year undergraduate or postgraduate students.
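The core of MML is easy to state: prefer the hypothesis that minimises the combined length, in bits, of a message stating the hypothesis and then the data encoded under it. Here is a toy sketch of that two-part comparison, in Python rather than the book's Java for brevity; the fixed 6-bit parameter cost is a placeholder assumption, whereas the actual MML formulas derive the optimal precision for stating parameters:

```python
import math

def two_part_message_length(k, n, p, param_cost_bits):
    """Two-part MML score (in bits): cost of stating the hypothesis
    plus cost of encoding the data under it."""
    # Part 1: bits to state the hypothesis (assumed fixed cost here)
    part1 = param_cost_bits
    # Part 2: bits to encode k successes in n Bernoulli trials given p
    part2 = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return part1 + part2

# Compare "fair coin" (no parameter to state) with "biased coin"
# (parameter stated to some precision, assumed here to cost 6 bits).
k, n = 70, 100
fair = two_part_message_length(k, n, 0.5, param_cost_bits=0.0)
biased = two_part_message_length(k, n, k / n, param_cost_bits=6.0)
print(f"fair: {fair:.1f} bits, biased: {biased:.1f} bits")
```

With these counts the biased-coin hypothesis yields the shorter total message, so MML selects it despite the extra cost of stating its parameter.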
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on a par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students to learn data science topics using cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills, but who do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will strengthen their command of essential data science techniques for turning mere transactional information into social, economic, and business insights.
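As a flavour of the kind of exploration such chapters build up to, here is a minimal Python sketch (not from the book; the column names and toy values are invented) that aggregates a small table of transactions and then views the same data as a directed transaction graph:

```python
import pandas as pd
import networkx as nx

# Hypothetical column layout; real blockchain extracts differ by chain and tool.
tx = pd.DataFrame({
    "sender":   ["a1", "a1", "b2", "c3", "b2"],
    "receiver": ["b2", "c3", "c3", "a1", "a1"],
    "value":    [0.5, 1.2, 0.3, 2.0, 0.7],
})

# Relational view: total value sent per address
print(tx.groupby("sender")["value"].sum())

# Network-science view: addresses as nodes, transactions as weighted edges
g = nx.from_pandas_edgelist(tx, "sender", "receiver",
                            edge_attr="value", create_using=nx.DiGraph)
print(nx.degree_centrality(g))
```

The same two views, queries over the transaction table and network measures over the induced address graph, carry over to real chain extracts.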
This comprehensive text covers the use of SAS for epidemiology and public health research. Developed with students in mind and shaped by their feedback, the text addresses this material in a straightforward manner with a multitude of examples. It is directly applicable to students and researchers in the fields of public health, biostatistics, and epidemiology. Through a hands-on approach to the use of SAS for a broad number of epidemiologic analyses, readers learn techniques for data entry and cleaning, categorical analysis, ANOVA, linear regression, and much more. Exercises utilizing real-world data sets are featured throughout the book, and SAS screen shots demonstrate the steps for successful programming. SAS (Statistical Analysis System) is an integrated system of software products provided by the SAS Institute, which is headquartered in Cary, North Carolina. It provides programmers and statisticians the ability to engage in many sophisticated statistical analyses and data retrieval and mining exercises. SAS is widely used in the fields of epidemiology and public health research, predominantly due to its ability to reliably analyze very large administrative data sets, as well as the more commonly encountered clinical trial and observational research data.
Autopoietic systems show a remarkable property in the way they interact with their environment: on the one hand, building blocks and energy (including information) are exchanged with the environment, which characterizes them as open systems; on the other hand, all functional mechanisms (the way the system processes and incorporates building blocks and responds to information) are totally self-determined and cannot be controlled by interventions from the environment. Information systems in an organization appear to develop in this autopoietic manner, and the concept can help managers understand the operations of their organizations better. Autopoiesis and Self-Sustaining Processes for Organizational Success is an innovative reference book that presents the meaning of autopoietic organizations for social and information science, examines how autopoietic organizations are information self-producing and self-controlled, and provides a framework for their development in modern organizations. The book focuses on analyzing autopoiesis features such as self-managing, self-sustaining, self-producing, and self-regulating. Moreover, as these characteristics receive a new interpretation in IT environments, the book also includes an exploration of IT solutions that enable their development. This book is ideal for professionals, academicians, researchers, and students working in the field of information economics and management in various disciplines such as information and communication sciences, administrative sciences and management, education, computer science, and information technology.
This is the fifth volume in a series dealing with such topics as information systems practice and theory, information systems and the accounting/auditing environment, and differing perspectives on information systems research.
This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23-24, 2016. This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems.
This Festschrift in honour of Paul Deheuvels' 65th birthday compiles recent research results in the area between mathematical statistics and probability theory, with a special emphasis on limit theorems. The book brings together contributions from invited international experts to provide an up-to-date survey of the field. Written in textbook style, this collection of original material addresses researchers as well as PhD and advanced Master's students with a solid grasp of mathematical statistics and probability theory.
This series is dedicated to developments in accounting information systems. Each volume is structured into three sections: information systems practice and theory; information systems and the accounting/auditing environment; and perspectives on information systems research. This volume includes evidence from three experiments relating to the effect of socioeconomic background on computer anxiety and performance. Other areas covered include audit expert system development, users' affective responses to information systems (through an empirical comparison of four operationalizations), articulating accounting database queries, audit decision aids, and integrating group support systems into the accounting environment.
This is the sixth volume in a series dealing with such topics as information systems practice and theory, information systems and the accounting/auditing environment, and differing perspectives on information systems research.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems such as option pricing, risk management, and portfolio selection. Code is provided in the R programming language to execute the methods, and tables and figures, often with real data, illustrate the code. References to related work are intended to help the reader pursue areas of specific interest in further detail. Comprehensive background in economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, though the book should also be accessible to advanced undergraduates. Practitioners working in the finance industry will also benefit.
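For a taste of the kind of computational method involved, here is a minimal Monte Carlo pricer for a European call under geometric Brownian motion, sketched in Python (the book's own code is in R, and every parameter value below is illustrative):

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics:
    simulate terminal prices, then average the discounted payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

print(mc_european_call(s0=100, k=105, r=0.03, sigma=0.2, t=1.0))
```

A standard sanity check for such an estimator is to compare the simulated price against the closed-form Black-Scholes value for the same inputs.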
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
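The basic mechanism is simple to demonstrate. Below is a minimal Monte Carlo permutation test for the Pearson correlation between two interval-level variables, sketched in Python (not from the monograph; the data are invented). The reference distribution is generated by shuffling one variable, so no theoretical distribution or normality assumption is invoked:

```python
import numpy as np

def permutation_corr_test(x, y, n_perm=10_000, seed=0):
    """Two-sided Monte Carlo permutation test for Pearson correlation.
    The null distribution comes from permuting y, not from theory."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(n_perm):
        r = np.corrcoef(x, rng.permutation(y))[0, 1]
        if abs(r) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)   # add-one avoids p = 0

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.4, 3.9, 5.2, 5.8])
print(permutation_corr_test(x, y))
```

An exact test would enumerate all permutations instead of sampling them; the Monte Carlo version above is the practical choice once the sample size makes enumeration infeasible.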
This book contains a rich set of tools for nonparametric analyses, and the purpose of this text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences:

- To introduce when nonparametric approaches to data analysis are appropriate
- To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test
- To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set

The book focuses on how R is used to distinguish between data that could be classified as nonparametric and data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approaches.
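As an illustration of the parametric-versus-nonparametric decision the book centres on, here is a minimal sketch using Python's scipy (the book itself works in R; the sample values are invented): check each group for normality, and fall back to a rank-based test when normality is doubtful.

```python
import numpy as np
from scipy import stats

# Two small samples whose normality we do not want to assume
control = np.array([4.1, 5.0, 6.2, 5.5, 4.8, 7.0])
treated = np.array([6.5, 7.2, 8.1, 6.9, 7.7, 8.4])

# A common first check: Shapiro-Wilk normality test on each group
print(stats.shapiro(control).pvalue, stats.shapiro(treated).pvalue)

# If normality is doubtful, use a rank-based (nonparametric) test
u, p = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
```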
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference in
This unique resource provides engineers and students with a practical approach to quickly learning the software-defined radio concepts they need to know for their work in the field. By prototyping and evaluating actual digital communication systems capable of performing "over-the-air" wireless data transmission and reception, this volume helps readers attain a first-hand understanding of critical design trade-offs and issues. Moreover, professionals gain a sense of the actual "real-world" operational behavior of these systems. With the purchase of the book, readers gain access to several ready-made Simulink experiments at the publisher's website. This collection of laboratory experiments, along with several examples, enables engineers to successfully implement the designs discussed in the book in a short period of time. These files can be executed using MATLAB version R2011b or later.
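The book's experiments are Simulink models, but the baseline exercise in any such lab sequence, measuring the bit error rate of a simple modulation over a noisy channel, can be sketched in a few lines of Python (a stand-in for the same concept; all parameters here are illustrative):

```python
import numpy as np

def bpsk_awgn_ber(ebn0_db, n_bits=100_000, seed=0):
    """Simulate a BPSK link over an AWGN channel and measure
    the bit error rate with hard-decision demodulation."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                 # map {0, 1} -> {-1, +1}
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.standard_normal(n_bits) / np.sqrt(2 * ebn0)
    decisions = (symbols + noise > 0).astype(int)
    return np.mean(decisions != bits)

for snr in (0, 4, 8):
    print(snr, "dB ->", bpsk_awgn_ber(snr))
```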
Since the beginning of the seventies, computer hardware has been available that allows programmable computers to be used for a wide variety of tasks. During the nineties, the hardware developed from big mainframes to personal workstations, and today it is not only the hardware that is much more powerful: a workstation can do far more work than a mainframe of the seventies could. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning, and already at the beginning of the seventies some special languages such as SAS and SPSS were available for statisticians. The introduction of personal computers in the eighties gave new impulses for even further development. Now that personal computers have become very popular, the number of programs has started to explode, and today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
You may like...

- Winning Reviews - A Guide for Evaluating… (Y. Baruch, S. Sullivan, …), Hardcover, R3,971, Discovery Miles 39 710
- Writing Centers and Writing Across the… (Robert W. Barnett, Jacob S. Blumner), Hardcover, R2,907, Discovery Miles 29 070
- Linear Transformation - Examples and… (Nita H. Shah, Urmila B. Chaudhari), Hardcover, R5,160, Discovery Miles 51 600
- Pacific Pidgins and Creoles - Origins… (Darrell T. Tryon, Jean-Michel Charpentier), Hardcover