This beginner's introduction to MATLAB teaches a sufficient subset of the functionality and gives the reader practical experience in finding further information. A forty-page appendix contains unique, user-friendly summaries and tables of MATLAB functions, enabling the reader to find appropriate functions, understand their syntax, and get a good overview. The large number of exercises, tips, and solutions means that the course can be followed with or without a computer. Recent developments in MATLAB that support more advanced programming are described using realistic examples, in order to prepare students for larger programming projects. A revolutionary step-by-step 'guided tour' eliminates the steep learning curve encountered when learning a new programming language. Each chapter corresponds to an actual engineering course, with examples in MATLAB illustrating the typical theory and providing a practical understanding of these courses. A complementary homepage contains exercises, a take-home examination, and an automatic marking system that grades the solutions. End-of-chapter exercises come with selected solutions in an appendix. The development of MATLAB programming and the rapid increase in the use of MATLAB in engineering courses make this a valuable self-study guide for both engineering students and practising engineers. Readers will find that this timeless material can be used throughout their education and into their careers.
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the 'big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters, leading to an alternative, transformation-based approach to statistical inference.
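To make the transformation idea concrete, here is a minimal Python sketch (not taken from the monograph) of the Model-Free Bootstrap in its simplest setting: the data are mapped to approximately i.i.d. uniforms via the empirical CDF, resampled in that simple domain, and mapped back through the empirical quantile function to yield a frequentist prediction interval. The sample and the nominal 90% level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_exponential(200)   # toy stand-in for observed data

# Step 1: transform to (approximately) i.i.d. Uniform(0,1) via the empirical CDF.
def ecdf(sample, x):
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

u = ecdf(data, data)  # roughly uniform if the data are i.i.d.

# Step 2: resample in the transformed, "easy" domain.
B = 5000
u_future = rng.choice(u, size=B, replace=True)

# Step 3: map back through the empirical quantile function to the data scale.
y_future = np.quantile(data, np.clip(u_future, 0.0, 1.0))

# 90% model-free prediction interval for a new observation.
lo, hi = np.quantile(y_future, [0.05, 0.95])
print(f"90% prediction interval: [{lo:.2f}, {hi:.2f}]")
```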
Given the explosion of interest in mathematical methods for solving problems in finance and trading, a great deal of research and development is taking place in universities, large brokerage firms, and the supporting trading-software industry. Mathematical advances have been made, both analytically and numerically, in finding practical solutions. This book provides a comprehensive overview of existing and original material on what mathematics, when allied with Mathematica, can do for finance. Sophisticated theories are presented systematically in a user-friendly style that combines mathematical rigour with Mathematica programming. Three kinds of solution methods are emphasized: symbolic, numerical, and Monte Carlo. Nowadays, a good personal computer is all that is required to handle the symbolic and numerical methods developed in this book. Key features:
* No previous knowledge of Mathematica programming is required
* The symbolic, numeric, data-management, and graphic capabilities of Mathematica are fully utilized
* Monte Carlo solutions of scalar and multivariable SDEs are developed and used heavily in discussing trading issues such as Black-Scholes hedging
* Black-Scholes and Dupire PDEs are solved symbolically and numerically
* Fast numerical solutions to free-boundary problems, with details of their Mathematica realizations, are provided
* A comprehensive study of optimal portfolio diversification, including an original theory of optimal portfolio hedging under non-log-normal asset price dynamics, is presented
The book is designed for the academic community of instructors and students and, most importantly, will meet the everyday trading needs of quantitatively inclined professional and individual investors.
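As an illustration of the Monte Carlo approach emphasized above (the book itself works in Mathematica; the sketch below uses Python), one can price a European call by simulating terminal prices under risk-neutral geometric Brownian motion and compare against the closed-form Black-Scholes value. All parameters are assumed for the example.

```python
import numpy as np
from math import log, sqrt, exp, erf

# Assumed toy parameters: spot, strike, rate, volatility, maturity.
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0

# Monte Carlo: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0,1).
rng = np.random.default_rng(42)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes price for comparison.
N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

print(f"Monte Carlo: {mc_price:.4f}  Black-Scholes: {bs_price:.4f}")
```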
Over the past 80 years, the way that citation frequency is counted and analyzed has changed dramatically, from the early manual transcribing and statistical computation of citation data to computer-based creation and manipulation of citation data. "Author Cocitation Analysis: Quantitative Methods for Mapping the Intellectual Structure of an Academic Discipline" provides a blueprint for researchers to follow in a wide variety of investigations. Pertinent to faculty, researchers, and graduate students in any academic field, this book introduces an alternative approach to conducting author cocitation analysis (ACA) without relying on commercial citation databases.
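A hedged sketch of the core ACA computation, assuming only raw reference lists rather than a commercial citation database: count, for every pair of authors, how many papers cite both. The author names and reference lists below are invented.

```python
from itertools import combinations
from collections import Counter

# Hypothetical reference lists: the set of cited authors per citing paper.
papers = [
    {"Garfield", "Small", "White"},
    {"Small", "White", "McCain"},
    {"Garfield", "Small"},
]

# Cocitation count: number of papers that cite both authors together.
cocitation = Counter()
for cited in papers:
    for a, b in combinations(sorted(cited), 2):
        cocitation[(a, b)] += 1

for pair, count in cocitation.most_common():
    print(pair, count)
```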
This book prepares students to meet the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail, with examples drawn from real financial problems such as option pricing, risk management, and portfolio selection. Code in the R programming language is provided to execute the methods, and tables and figures, often with real data, illustrate the code. References to related work help the reader pursue areas of specific interest in further detail. A comprehensive background in economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, although the book should also be accessible to advanced undergraduates, and practitioners working in the finance industry will benefit as well.
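To give a flavour of the kind of computation involved (the book's own code is in R; a Python sketch is shown here for illustration), the following computes minimum-variance portfolio weights, i.e. the inverse covariance matrix applied to a vector of ones and normalized to sum to one, together with a one-day 99% historical Value-at-Risk. The returns are simulated stand-ins for real data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily returns for three assets (stand-in for real data).
returns = rng.normal(0.0005, 0.01, size=(1000, 3))

# Minimum-variance weights: solve cov * w = 1, then normalize to sum to 1.
cov = np.cov(returns, rowvar=False)
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w.sum()

# One-day 99% historical Value-at-Risk of the resulting portfolio.
port = returns @ w
var_99 = -np.quantile(port, 0.01)
print("weights:", np.round(w, 3), " 99% one-day VaR:", round(var_99, 4))
```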
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding the blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students learning data science topics through cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills but do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of the essential data science techniques needed to turn mere transactional information into social, economic, and business insights.
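A minimal sketch of the network-science view of blockchain data described above, using Python and networkx rather than the book's own materials: transactions become directed, amount-weighted edges between addresses, and a simple centrality measure ranks the most connected addresses. The addresses and amounts are hypothetical.

```python
import networkx as nx

# Hypothetical transactions: (sender address, receiver address, amount).
txs = [
    ("addr_A", "addr_B", 1.5),
    ("addr_A", "addr_C", 0.7),
    ("addr_B", "addr_C", 2.0),
    ("addr_C", "addr_D", 0.3),
]

G = nx.DiGraph()
for src, dst, amount in txs:
    # Accumulate amounts when the same pair transacts more than once.
    if G.has_edge(src, dst):
        G[src][dst]["amount"] += amount
    else:
        G.add_edge(src, dst, amount=amount)

# Simple network-science question: which addresses are most connected?
for addr, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(addr, round(c, 2))
```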
This book explores inductive inference using the minimum message length (MML) principle, a Bayesian method which is a realisation of Ockham's Razor based on information theory. Accompanied by a library of software, the book can assist an applications programmer, student, or researcher in the fields of data analysis and machine learning to write computer programs based upon this principle. MML inference has been around for 50 years, and yet only one highly technical book has been written about the subject. The majority of research in the field has been backed by specialised one-off programs, but this book includes a library of general MML-based software, in Java. The Java source code is available under the GNU GPL open-source license. The software library is documented using Javadoc, which produces extensive cross-referenced HTML manual pages. Every probability distribution and statistical model that is described in the book is implemented and documented in the software library. The library may contain a component that directly solves a reader's inference problem, or contain components that can be put together to solve the problem, or provide a standard interface under which a new component can be written to solve the problem. This book will be of interest to application developers in the fields of machine learning and statistics as well as academics, postdocs, programmers, and data scientists. It could also be used by third- or fourth-year undergraduate or postgraduate students.
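A toy sketch of the two-part message idea behind MML, in Python rather than the book's Java library: each candidate model costs some bits to state, plus the bits needed to encode the data under it, and the shortest total message is preferred. Real MML also charges for the precision with which parameters are stated; the flat 32-bit model-description cost below is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, size=100)  # toy data

# Crude two-part message length (in bits): L(model) + L(data | model).
MODEL_COST_BITS = 32.0  # assumed flat cost of stating each candidate model

def gaussian_bits(x, mu, sigma):
    # Negative log2-likelihood of x under N(mu, sigma^2), summed over the data.
    return np.sum(0.5 * np.log2(2 * np.pi * sigma**2)
                  + (x - mu) ** 2 / (2 * sigma**2 * np.log(2)))

candidates = {"N(0,1)": (0.0, 1.0), "N(2,1)": (2.0, 1.0)}
for name, (mu, sigma) in candidates.items():
    total = MODEL_COST_BITS + gaussian_bits(data, mu, sigma)
    print(f"{name}: {total:.1f} bits")
# The shorter total message points to the better explanation of the data.
```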
The first edition was published in 1995. This new edition updates the text to Mathematica 5.0 and offers a more extensive treatment of linear algebra. It has been thoroughly revised and corrected throughout.
Computational finance deals with the mathematics of computer programs that realize financial models or systems. This book outlines the epistemic risks associated with the current valuations of different financial instruments and discusses the corresponding risk management strategies. It covers most of the research and practical areas in computational finance. Starting from traditional fundamental analysis and using algebraic and geometric tools, it is guided by the logic of science to explore information from financial data without prejudice. In fact, this book has the unique feature that it is structured around the simple requirement of objective science: the geometric structure of the data = the information contained in the data.
This comprehensive text covers the use of SAS for epidemiology and public health research. Developed with students in mind and shaped by their feedback, the text addresses this material in a straightforward manner with a multitude of examples. It is directly applicable to students and researchers in the fields of public health, biostatistics, and epidemiology. Through a hands-on approach to the use of SAS for a broad number of epidemiologic analyses, readers learn techniques for data entry and cleaning, categorical analysis, ANOVA, linear regression, and much more. Exercises utilizing real-world data sets are featured throughout the book, and SAS screenshots demonstrate the steps for successful programming. SAS (Statistical Analysis System) is an integrated system of software products provided by the SAS Institute, headquartered in Cary, North Carolina. It gives programmers and statisticians the ability to engage in many sophisticated statistical analyses as well as data retrieval and mining exercises. SAS is widely used in the fields of epidemiology and public health research, predominantly due to its ability to reliably analyze very large administrative data sets, as well as the more commonly encountered clinical trial and observational research data.
This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23-24, 2016. This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems.
Autopoietic systems show a remarkable property in the way they interact with their environment: on the one hand, building blocks and energy (including information) are exchanged with the environment, which characterizes them as open systems; on the other hand, all functional mechanisms (the way the system processes and incorporates building blocks and responds to information) are totally self-determined and cannot be controlled by interventions from the environment. Information systems in an organization appear to follow this autopoietic mode of development, and the concept can help managers better understand the operations of their organizations. Autopoiesis and Self-Sustaining Processes for Organizational Success is an innovative reference book that presents the meaning of autopoietic organizations for social and information science, examines how autopoietic organizations are information self-producing and self-controlled, and provides a framework for their development in modern organizations. The book focuses on analyzing such autopoietic features as self-managing, self-sustaining, self-producing, and self-regulating. Moreover, as these characteristics take on a new interpretation in IT environments, the book also explores IT solutions that enable their development. This book is ideal for professionals, academicians, researchers, and students working in the field of information economics and management in various disciplines such as information and communication sciences, administrative sciences and management, education, computer science, and information technology.
Highly recommended by JASA, Technometrics, and other leading statistical journals, the first two editions of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Third Edition continues to lead readers step by step through the process of fitting LMMs. The third edition provides a comprehensive update of the available tools for fitting linear mixed-effects models in the newest versions of SAS, SPSS, R, Stata, and HLM. All examples have been updated, with a focus on new tools for visualization of results and interpretation. New conceptual and theoretical developments in mixed-effects modeling have been included, and there is a new chapter on power analysis for mixed-effects models. Features:
* Dedicates an entire chapter to the key theories underlying LMMs for clustered, longitudinal, and repeated-measures data
* Provides descriptions, explanations, and examples of software code necessary to fit LMMs in SAS, SPSS, R, Stata, and HLM
* Contains detailed tables of estimates and results, allowing for easy comparisons across software procedures
* Presents step-by-step analyses of real-world data sets that arise from a variety of research settings and study designs, including hypothesis testing, interpretation of results, and model diagnostics
* Integrates software code in each chapter to compare the relative advantages and disadvantages of each package
* Supplemented by a website with software code, datasets, additional documents, and updates
Ideal for anyone who uses software for statistical modeling, this book eliminates the need to read multiple software-specific texts by covering the most popular software programs for fitting LMMs in one handy guide. The authors illustrate the models and methods through real-world examples that enable comparisons of model-fitting options and results across the software procedures.
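For readers who want a quick feel for what fitting a random-intercept LMM looks like in code, here is a minimal sketch in Python's statsmodels (which is not one of the five packages the book covers), on simulated students-in-schools data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clustered data: students nested within schools (toy stand-in).
rng = np.random.default_rng(3)
n_schools, n_per = 20, 30
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0, 2, n_schools)            # random school intercepts
x = rng.normal(size=n_schools * n_per)     # student-level covariate
y = 1.0 + 0.5 * x + u[school] + rng.normal(size=n_schools * n_per)

df = pd.DataFrame({"y": y, "x": x, "school": school})

# Random-intercept LMM: y ~ x with a random intercept for each school.
model = smf.mixedlm("y ~ x", df, groups=df["school"])
result = model.fit()
print(result.summary())
```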
* Immediately implementable code, with extensive and varied illustrations of graph variants and layouts
* Examples and exercises across a variety of real-life contexts, including business, politics, education, social media, and crime investigation
* A dedicated chapter on graph visualization methods
* Practical walkthroughs of common methodological uses: finding influential actors in groups, discovering hidden community structures, facilitating diverse interaction in organizations, detecting political alignment, and determining what influences connection and attachment (see the sketch after this list)
* Various downloadable data sets for use both in class and in individual learning projects
* A final chapter dedicated to individual or group project examples
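A minimal sketch of two of these methodological uses, assuming Python and networkx (the book's own code and data sets are separate): betweenness centrality flags the actor bridging two groups, and greedy modularity maximization recovers the hidden community structure. The toy network is invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy social network (hypothetical): two loosely connected friend groups.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ben", "Cat"), ("Ana", "Cat"),   # group 1
    ("Dan", "Eve"), ("Eve", "Fay"), ("Dan", "Fay"),   # group 2
    ("Cat", "Dan"),                                   # bridge between groups
])

# Influential actors: betweenness centrality highlights the bridge nodes.
print("betweenness:", nx.betweenness_centrality(G))

# Hidden community structure via modularity maximization.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}:", sorted(community))
```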
This Festschrift in honour of Paul Deheuvels' 65th birthday compiles recent research results in the area between mathematical statistics and probability theory, with a special emphasis on limit theorems. The book brings together contributions from invited international experts to provide an up-to-date survey of the field. Written in textbook style, this collection of original material addresses researchers, PhD students, and advanced Master's students with a solid grasp of mathematical statistics and probability theory.
This series is dedicated to developments in accounting information systems. Each volume is structured into three sections: information systems practice and theory; information systems and the accounting/auditing environment; and perspectives on information systems research. This volume includes evidence from three experiments on the effect of socioeconomic background on computer anxiety and performance. Other areas covered include audit expert system development, users' affective responses to information systems (through an empirical comparison of four operationalizations), articulating accounting database queries, audit decision aids, and integrating group support systems into the accounting environment.
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
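As a minimal illustration of the Monte Carlo permutation approach (a generic sketch, not the monograph's code), the following tests the correlation between two variables by repeatedly shuffling one of them to break any association and comparing the permuted correlations with the observed one; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
# Toy paired data with a mild association.
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(size=50)

r_obs = np.corrcoef(x, y)[0, 1]

# Monte Carlo permutation test: shuffle y, recompute the correlation,
# and count how often the permuted value is at least as extreme.
B = 10_000
r_perm = np.array([np.corrcoef(x, rng.permutation(y))[0, 1] for _ in range(B)])
p_value = (np.sum(np.abs(r_perm) >= abs(r_obs)) + 1) / (B + 1)

print(f"observed r = {r_obs:.3f}, permutation p-value = {p_value:.4f}")
```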
This is the sixth volume in a series dealing with such topics as information systems practice and theory, information systems and the accounting/auditing environment, and differing perspectives on information systems research.