Welcome to Loot.co.za!
Mathematics is undoubtedly the key to state-of-the-art high technology. It is an international technical language and proves to be an eternally young science to those who have learned its ways. Long an indispensable part of research thanks to modeling and simulation, mathematics is enjoying particular vitality now more than ever. Nevertheless, this stormy development is resulting in increasingly high requirements for students in technical disciplines, while general interest in mathematics continues to wane at the same time. This book and its appendices on the Internet seek to deal with this issue, helping students master the difficult transition from the receptive to the productive phase of their education. The author has repeatedly held a three-semester introductory course titled Higher Mathematics at the University of Stuttgart and used a series of "handouts" to show further aspects, make the course contents more motivating, and connect with the mechanics lectures taking place at the same time. One part of the book has more or less evolved from this on its own. True to the original objective, this part treats a variety of separate topics of varying degrees of difficulty; nevertheless, all these topics are oriented to mechanics. Another part of this book seeks to offer a selection of understandable realistic models that can be implemented directly from the multitude of mathematical resources. The author does not attempt to hide his preference for Numerical Mathematics and thus places importance on careful theoretical preparation.
This beginner's introduction to MATLAB teaches a sufficient subset of the functionality and gives the reader practical experience in how to find more information. A forty-page appendix contains unique user-friendly summaries and tables of MATLAB functions, enabling the reader to find appropriate functions, understand their syntax and get a good overview. The large number of exercises, tips, and solutions means that the course can be followed with or without a computer. Recent developments in MATLAB that advance programming are described using realistic examples in order to prepare students for larger programming projects. A step-by-step 'guided tour' eliminates the steep learning curve encountered in learning new programming languages. Each chapter corresponds to an actual engineering course, where examples in MATLAB illustrate the typical theory, providing a practical understanding of these courses. A complementary homepage contains exercises, a take-home examination, and automatic marking that grades the solutions. End-of-chapter exercises with selected solutions are given in an appendix. The development of MATLAB programming and the rapid increase in the use of MATLAB in engineering courses make this a valuable self-study guide for both engineering students and practising engineers. Readers will find that this timeless material can be used throughout their education and into their career.
Cognitive Intelligence with Neutrosophic Statistics in Bioinformatics investigates and presents the many applications that have arisen in the last ten years using neutrosophic statistics in bioinformatics, medicine, agriculture and cognitive science. This book will be very useful to the scientific community, appealing to audiences interested in fuzzy, vague concepts from which uncertain data are collected, including academic researchers, practicing engineers and graduate students. Neutrosophic statistics is a generalization of classical statistics. In classical statistics, the data is known, formed by crisp numbers. In comparison, data in neutrosophic statistics has some indeterminacy. This data may be ambiguous, vague, imprecise, incomplete, and even unknown. Neutrosophic statistics refers to a set of data, such that the data or a part of it are indeterminate in some degree, and to methods used to analyze the data.
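The interval view of indeterminate data described above can be sketched in a few lines. The fragment below is an illustration only, in Python rather than anything from the book: it treats each observation as a [low, high] interval and averages the endpoints, with crisp classical data as the special case where the endpoints coincide. The sample values and the interval representation are assumptions for demonstration; neutrosophic statistics proper is far more general.

```python
def interval_mean(data):
    """Mean of interval-valued observations: average the lower and
    upper endpoints separately, giving an interval-valued mean."""
    lows = [lo for lo, hi in data]
    highs = [hi for lo, hi in data]
    n = len(data)
    return (sum(lows) / n, sum(highs) / n)

# Illustrative indeterminate measurements; (5.0, 5.0) is a crisp value.
readings = [(4.9, 5.1), (5.0, 5.0), (4.7, 5.3)]
print(interval_mean(readings))  # an interval around 5, reflecting the indeterminacy
```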
Given the explosion of interest in mathematical methods for solving problems in finance and trading, a great deal of research and development is taking place in universities, large brokerage firms, and in the supporting trading software industry. Mathematical advances have been made both analytically and numerically in finding practical solutions. This book provides a comprehensive overview of existing and original material about what mathematics, when allied with Mathematica, can do for finance. Sophisticated theories are presented systematically in a user-friendly style, through a powerful combination of mathematical rigor and Mathematica programming. Three kinds of solution methods are emphasized: symbolic, numerical, and Monte Carlo. Nowadays, a good personal computer is all that is required to handle the symbolic and numerical methods developed in this book. Key features:
* No previous knowledge of Mathematica programming is required
* The symbolic, numeric, data management and graphic capabilities of Mathematica are fully utilized
* Monte Carlo solutions of scalar and multivariable SDEs are developed and used heavily in discussing trading issues such as Black-Scholes hedging
* Black-Scholes and Dupire PDEs are solved symbolically and numerically
* Fast numerical solutions to free boundary problems, with details of their Mathematica realizations, are provided
* A comprehensive study of optimal portfolio diversification, including an original theory of optimal portfolio hedging under non-log-normal asset price dynamics, is presented
The book is designed for the academic community of instructors and students and, most importantly, will meet the everyday trading needs of quantitatively inclined professional and individual investors.
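The pairing of Monte Carlo and closed-form methods mentioned in this blurb can be illustrated outside Mathematica as well. The sketch below, written in Python rather than the book's Mathematica and not taken from the book, prices a European call both ways: once with the Black-Scholes formula and once by simulating terminal prices under geometric Brownian motion. The parameter values are illustrative assumptions.

```python
import math
import random

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, r, sigma, T, n=100_000, seed=0):
    """Monte Carlo price: simulate terminal prices under GBM under the
    risk-neutral measure, then discount the average payoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

print(bs_call(100, 100, 0.05, 0.2, 1.0))  # -> about 10.45
print(mc_call(100, 100, 0.05, 0.2, 1.0))  # close to the closed-form value
```

As the number of simulated paths grows, the Monte Carlo estimate converges to the closed-form price at the usual 1/sqrt(n) rate.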
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the `big data' era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful. Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., trying to describe how Y depends on X. Hence, as prediction can be treated as a by-product of model-fitting, key estimation problems can be addressed as a by-product of being able to perform prediction. 
In other words, a practitioner can use Model-Free Prediction ideas in order to additionally obtain point estimates and confidence intervals for relevant parameters leading to an alternative, transformation-based approach to statistical inference.
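For i.i.d. data, the simplest distribution-free prediction interval in the spirit discussed above is the empirical quantile interval. The book's Model-Free Bootstrap is considerably more refined, so the Python sketch below (an illustration, not the author's method) shows only this base case: no normality assumption, just the sample's own quantiles.

```python
def empirical_prediction_interval(data, alpha=0.1):
    """Distribution-free (1 - alpha) prediction interval for one future i.i.d.
    observation: the empirical alpha/2 and 1 - alpha/2 sample quantiles."""
    xs = sorted(data)
    n = len(xs)
    k = int((alpha / 2) * n)  # number of points trimmed from each tail
    return xs[k], xs[n - 1 - k]

# With 100 equally spaced points, a 90% interval trims 5 points per tail.
print(empirical_prediction_interval(range(100)))  # -> (5, 94)
```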
Over the past 80 years, the way that citation frequency is counted and analyzed has changed dramatically, from the early manual transcribing and statistical computation of citation data to computer-based citation data creation and its manipulation. "Author Cocitation Analysis: Quantitative Methods for Mapping the Intellectual Structure of an Academic Discipline" provides a blueprint for researchers to follow in a wide variety of investigations. Pertinent to faculty, researchers, and graduate students in any academic field, this book introduces an alternative approach to conducting author cocitation analysis (ACA) without relying on commercial citation databases.
Computational finance deals with the mathematics of computer programs that realize financial models or systems. This book outlines the epistemic risks associated with the current valuations of different financial instruments and discusses the corresponding risk management strategies. It covers most of the research and practical areas in computational finance. Starting from traditional fundamental analysis and using algebraic and geometric tools, it is guided by the logic of science to explore information from financial data without prejudice. In fact, this book has the unique feature that it is structured around the simple requirement of objective science: the geometric structure of the data = the information contained in the data.
This up-to-date quick reference guides the reader through the most popular SAP module (myERP Financial 6.0). It thoroughly covers all of the submodules of ERP Financials, including FICO, FSCM, New GL functionality, SAP integration points, and Report Painter. Unlike other books that only provide questions and answers for certification preparation, this book covers both configurations and end-user transactions for validating the implementation methods. A companion CD-ROM with FICO templates, shortcuts, and color figures is included. Features:
* Includes both configurations and end-user transactions for validation
* Uses a quick-reference style for finding information quickly
* Covers the latest account configurations for New GL
* Includes a CD-ROM with FICO templates, shortcuts, and color figures
This textbook on computational statistics presents tools and concepts of univariate and multivariate statistical data analysis with a strong focus on applications and implementations in the statistical software R. It covers mathematical, statistical as well as programming problems in computational statistics and contains a wide variety of practical examples. In addition to the numerous R snippets presented in the text, all computer programs (quantlets) and data sets for the book are available on GitHub and referred to in the book. This enables the reader to fully reproduce as well as modify and adjust all examples to their needs. The book is intended for advanced undergraduate and first-year graduate students as well as for data analysts new to the job who would like a tour of the various statistical tools in a data analysis workshop. The experienced reader with a good knowledge of statistics and programming might skip some sections on univariate models and enjoy the various mathematical roots of multivariate techniques. The Quantlet platform quantlet.de, quantlet.com, quantlet.org is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book.
This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23-24, 2016. This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems.
This series is dedicated to developments in accounting information systems. Each volume is structured into three sections: information systems practice and theory; information systems and the accounting/auditing environment; and perspectives on information systems research. This volume includes evidence from three experiments relating to the effect of socioeconomic background on computer anxiety and performance. Other areas covered include audit expert system development, users affective responses to information systems through an empirical comparison of four operationalizations, articulating accounting database queries, audit decision aids and integrating group support systems into the accounting environment.
This book contains a rich set of tools for nonparametric analyses, and the purpose of this text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences:
* To introduce when nonparametric approaches to data analysis are appropriate
* To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test
* To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set
The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approaches.
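The book itself works in R; purely to illustrate the flavor of one leading nonparametric test used in biostatistics, here is a small Python sketch (not from the book) of the Mann-Whitney U statistic, a rank-based alternative to the two-sample t-test that requires no normality assumption. The toy measurements are invented for the example.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y: the number
    of (x, y) pairs with x > y, counting ties as 0.5. Rank-based, so it
    makes no assumption about the shape of the underlying distributions."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Toy measurements from two groups (illustrative values, not from the book):
control = [3.1, 2.8, 3.4, 3.0]
treated = [3.9, 4.2, 3.8, 4.1]
print(mann_whitney_u(treated, control))  # -> 16.0 (every treated value exceeds every control value)
```

In practice one would compare U against its null distribution (or use a library routine) to obtain a p-value; the statistic itself is all this sketch computes.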
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference in
Since the beginning of the seventies, programmable computers have been available for various tasks. During the nineties the hardware developed from the big mainframes to personal workstations. Nowadays it is not only the hardware that is much more powerful: workstations can do much more work than a mainframe of the seventies could. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. The introduction of personal computers in the eighties gave new impulses for even further development; already at the beginning of the seventies some special languages like SAS or SPSS were available for statisticians. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
As businesses, researchers, and practitioners look to devise new and innovative technologies in the realm of e-commerce, the human side in contemporary organizations remains a challenge for the industry. "Utilizing and Managing Commerce and Services Online" broadens the overall body of knowledge regarding the human aspects of electronic commerce technologies and utilization in modern organizations. It provides comprehensive coverage and understanding of the social, cultural, organizational, and cognitive impacts of e-commerce technologies and advances in organizations around the world. E-commerce strategic management, leadership, organizational behavior, development, and employee ethical issues are only a few of the challenges presented in this all-inclusive work.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
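A proximity-based method of the kind this blurb lists can be sketched very briefly. The Python fragment below is an illustration, not code from the book: it scores each point by its distance to its k-th nearest neighbor, so isolated points receive large scores. The toy data set is an assumption chosen to make the outlier obvious.

```python
import math

def knn_outlier_scores(points, k=2):
    """Proximity-based outlier score: each point's distance to its k-th
    nearest neighbor. Larger score = more isolated = more outlying."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

data = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]  # one obvious outlier
scores = knn_outlier_scores(data, k=2)
print(max(range(len(data)), key=lambda i: scores[i]))  # -> 4, the index of (10, 10)
```

Real implementations use spatial indexes to avoid the quadratic distance computation, but the scoring idea is exactly this.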
Artificial Intelligence and Industry 4.0 explores recent advancements in blockchain technology and artificial intelligence (AI) as well as their crucial impacts on realizing Industry 4.0 goals. The book explores AI applications in industry, including Internet of Things (IoT) and Industrial Internet of Things (IIoT) technology. Chapters explore how AI (in machine learning, smart cities, healthcare, Society 5.0, etc.) has numerous potential applications in the Industry 4.0 era. This book is a useful resource for researchers and graduate students in computer science researching and developing AI and the IIoT.
This book covers the MATLAB syntax and the environment and is suitable for someone with no programming background. The first four chapters present information on basic MATLAB programming, including computing terminology, MATLAB-specific syntax and control structures, operators, arrays and matrices. The next cluster covers grouping data, working with files, making images, creating graphical user interfaces, experimenting with sound, and the debugging environment. The final three chapters contain case studies on using MATLAB and other tools and devices (e.g., Arduino, Linux, Git, Mex, etc.) important for basic programming knowledge. Companion files with code and 4-color figures are on the disc or available from the publisher. Features:
* Covers the MATLAB syntax and the environment, suitable for someone with no programming background
* Numerous examples, projects, and practical applications enhance understanding of the subjects under discussion, with over 100 MATLAB scripts and functions
* Includes companion files with code and 4-color figures from the text (on the disc or available from the publisher)