Computational Finance Using C and C#: Derivatives and Valuation, Second Edition provides derivatives pricing information for equity derivatives, interest rate derivatives, foreign exchange derivatives, and credit derivatives. By providing free access to code in a variety of languages, such as Visual Basic/Excel, C++, C, and C#, it gives readers stand-alone examples that they can explore before creating their own applications. It is written for readers with backgrounds in basic calculus, linear algebra, and probability. Strong on mathematical theory, this second edition empowers readers to solve their own problems.
*Features new programming problems, examples, and exercises for each chapter.
*Includes freely accessible source code in languages such as C, C++, VBA, C#, and Excel.
*Includes a new chapter on the history of finance, covering the 2008 credit crisis and the use of mortgage-backed securities, CDSs, and CDOs.
*Emphasizes mathematical theory.
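The book's own examples are in C, C++, VBA, and C#; purely as a language-neutral illustration of the kind of closed-form valuation it covers (and not code from the book), here is the standard Black-Scholes price of a European call option, sketched in R:

    # Black-Scholes price of a European call (standard closed form).
    # S: spot, K: strike, r: risk-free rate, sigma: volatility, tau: years to expiry.
    bs_call <- function(S, K, r, sigma, tau) {
      d1 <- (log(S / K) + (r + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
      d2 <- d1 - sigma * sqrt(tau)
      S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
    }
    bs_call(S = 100, K = 100, r = 0.05, sigma = 0.2, tau = 1)  # about 10.45

With spot and strike at 100, a 5% rate, 20% volatility, and one year to expiry, the formula gives roughly 10.45.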
The ability to preserve electronic evidence is critical to presenting a solid case in civil litigation, as well as in criminal and regulatory investigations. Preserving Electronic Evidence for Trial gives everyone connected with digital forensics investigation and litigation a clear, practical, hands-on guide to best practices in preserving electronic evidence. Corporate management personnel (legal and IT) and outside counsel need reliable processes for the litigation hold: identifying, locating, and preserving electronic evidence. The book provides the road map, showing you how to organize the digital evidence team before the crisis, not in the middle of litigation. This practice handbook by an internationally known digital forensics expert and an experienced litigator focuses on what corporate and litigation counsel, as well as IT managers and forensic consultants, need to know to communicate effectively about electronic evidence. You will find tips on how all your team members can get up to speed on each other's areas of specialization before a crisis arises. The result is a plan to identify and pre-train the critical electronic-evidence team members, so that when a triggering event indicates that litigation is likely, you know what to ask and can coordinate effectively with litigation counsel and forensic consultants throughout the litigation process. Your team will also be ready for action in other business situations, such as merger evaluation and non-litigation conflict resolution.
The fun and friendly guide to mastering IBM's Statistical Package for the Social Sciences. Written by an author team with a combined 55 years of experience using SPSS, this updated guide takes the guesswork out of the subject and helps you get the most out of the leader in predictive analysis. Covering the latest release and updates to SPSS 27.0, and including more than 150 pages of basic statistical theory, it helps you understand the mechanics behind the calculations, perform predictive analysis, produce informative graphs, and more. You'll even dabble in programming as you expand SPSS functionality to suit your specific needs.
*Master the fundamental mechanics of SPSS
*Learn how to get data into and out of the program
*Graph and analyze your data more accurately and efficiently
*Program SPSS with Command Syntax
Get ready to start handling data like a pro, with step-by-step instruction and expert advice!
This book is a valuable read for a diverse group of researchers and practitioners who analyze assessment data and construct test instruments. It focuses on the use of classical test theory (CTT) and item response theory (IRT), which are often required in the fields of psychology (e.g., for measuring psychological traits), health (e.g., for measuring the severity of disorders), and education (e.g., for measuring student performance), and makes these analytical tools accessible to a broader audience. Having taught assessment subjects to students from diverse backgrounds for a number of years, the three authors have a wealth of experience in presenting educational measurement topics, in-depth concepts, and applications in an accessible format. As such, the book addresses the needs of readers who use CTT and IRT in their work but do not necessarily have an extensive mathematical background. The book also sheds light on common misconceptions in applying measurement models, and presents an integrated approach to different measurement methods, such as contrasting CTT with IRT and multidimensional IRT models with unidimensional IRT models. Wherever possible, comparisons between models are made explicit. In addition, the book discusses concepts for test equating and differential item functioning, as well as Bayesian IRT models and plausible values, using simple examples. It can serve as a textbook for introductory courses on educational measurement, as supplementary reading for advanced courses, or as a valuable reference guide for researchers interested in analyzing student assessment data.
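For readers new to IRT, the two-parameter logistic (2PL) model that such books build on gives the probability of a correct response as P(theta) = 1 / (1 + exp(-a(theta - b))), where a is item discrimination and b is item difficulty. The base R sketch below plots one item characteristic curve; it is illustrative only, not an example from the book:

    # 2PL item response function: probability of a correct response
    # given ability theta, discrimination a, and difficulty b.
    icc <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))
    theta <- seq(-4, 4, by = 0.1)
    plot(theta, icc(theta, a = 1.5, b = 0), type = "l",
         xlab = "Ability (theta)", ylab = "P(correct)",
         main = "Item characteristic curve (2PL)")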
Applied Computing in Medicine and Health is a comprehensive presentation of ongoing investigations into current applied computing challenges and advances, with a focus on a particular class of applications: primarily, artificial intelligence methods and techniques in medicine and health. Applied computing is the use of practical computer science knowledge to enable the latest technology and techniques in a variety of fields, ranging from business to scientific research. One of the most important and relevant areas of applied computing is the use of artificial intelligence (AI) in health and medicine. Artificial intelligence in health and medicine (AIHM) takes on the challenge of creating and distributing tools that can support medical doctors and specialists in new endeavors. The material included covers a wide variety of interdisciplinary perspectives on the theory and practice of applied computing in medicine, human biology, and health care. Particular attention is given to AI-based clinical decision-making, medical knowledge engineering, knowledge-based systems in medical education and research, intelligent medical information systems, intelligent databases, intelligent devices and instruments, medical AI tools, reasoning and metareasoning in medicine, and methodological, philosophical, ethical, and intelligent medical data analysis.
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, it is now used increasingly in other research areas as well. In this volume, readers begin working with text immediately, and each chapter examines a new technique or process, giving readers broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
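As a taste of the new sentiment analysis material, the sketch below scores sentences with the syuzhet package (assuming it is installed); the text and code are illustrative, not excerpted from the book:

    # Minimal sentence-level sentiment trajectory with syuzhet.
    library(syuzhet)
    txt <- "I loved the opening. The middle dragged terribly. The ending was wonderful."
    sentences <- get_sentences(txt)                    # split text into sentences
    scores <- get_sentiment(sentences, method = "syuzhet")
    scores                                             # one sentiment value per sentence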
Improve Your Analytical Skills
Incorporating the latest R packages as well as new case studies and applications, Using R and RStudio for Data Management, Statistical Analysis, and Graphics, Second Edition covers the aspects of R most often used by statistical analysts. New users of R will find the book's simple approach easy to understand, while more sophisticated users will appreciate the invaluable source of task-oriented information.
New to the Second Edition
*The use of RStudio, which increases the productivity of R users and helps users avoid error-prone cut-and-paste workflows
*A new chapter of case studies illustrating useful data management tasks: reading complex files, making and annotating maps, "scraping" data from the web, mining text files, and generating dynamic graphics
*A new chapter on special topics that describes key features, such as processing by group, and explores important areas of statistics, including Bayesian methods, propensity scores, and bootstrapping
*A new chapter on simulation that includes examples of data generated from complex models and distributions
*A detailed discussion of the philosophy and use of the knitr and markdown packages for R
*New packages that extend the functionality of R and facilitate sophisticated analyses
*Reorganized and enhanced chapters on data input and output, data management, statistical and mathematical functions, programming, high-level graphics plots, and the customization of plots
Easily Find Your Desired Task
Conveniently organized by short, clear descriptive entries, this edition continues to show users how to easily perform an analytical task in R. Users can quickly find and implement the material they need through the extensive indexing, cross-referencing, and worked examples in the text. Datasets and code are available for download on a supplementary website.
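As one tiny illustration of the special-topics material, bootstrapping a mean needs only base R; this is a generic sketch written for this summary, not the book's code:

    # Simple nonparametric bootstrap of the sample mean (base R only).
    set.seed(42)
    x <- rnorm(100, mean = 5, sd = 2)                       # toy data
    boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))                   # percentile 95% CI for the mean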
R Visualizations: Derive Meaning from Data focuses on one of the two major topics of data analytics: data visualization, also known as computer graphics. The major R systems for visualization are discussed, organized by topic rather than by system. Anyone doing data analysis is shown how to use R to generate any of the basic visualizations with the R visualization systems. The book also introduces the author's lessR system, which can always accomplish a visualization with less coding than other systems require, sometimes dramatically so, and which provides accompanying statistical analyses.
Key Features
*Presents thorough coverage of the leading R visualization system, ggplot2 (a short ggplot2 sketch follows this blurb).
*Gives specific guidance on using base R graphics to attain visualizations of the same quality as those provided by ggplot2.
*Shows how to create a wide range of data visualizations: distributions of categorical and continuous variables, many types of scatterplots (including with a third variable), time series, and maps.
*Organizes the various approaches to R graphics by topic instead of by system.
*Presents recent work on interactive visualization in R.
David W. Gerbing received his PhD from Michigan State University in 1979 in quantitative analysis and is currently a professor of quantitative analysis in the School of Business at Portland State University. He has published extensively in the social and behavioral sciences with a focus on quantitative methods. His lessR package has been in development since 2009.
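The ggplot2 sketch promised above: a scatterplot with a third variable mapped to color, one of the visualization types the book covers, takes a few lines (a generic example on a built-in dataset, not an excerpt from the book):

    # Scatterplot with a third variable mapped to color (ggplot2).
    library(ggplot2)
    ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
      geom_point(size = 2) +
      labs(x = "Weight (1000 lbs)", y = "Miles per gallon", color = "Cylinders")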
Behavior Analysis with Machine Learning Using R introduces machine learning and deep learning concepts and algorithms applied to a diverse set of behavior analysis problems. It focuses on the practical aspects of solving such problems based on data collected from sensors or stored in electronic records. The included examples demonstrate how to perform common data analysis tasks: data exploration, visualization, preprocessing, data representation, and model training and evaluation, all using the R programming language and real-life behavioral data. Even though the examples focus on behavior analysis tasks, the underlying concepts and methods can be applied in any other domain. No prior knowledge of machine learning is assumed; basic experience with R and basic knowledge of statistics and high-school-level mathematics are beneficial.
Features:
*Build supervised machine learning models to predict indoor locations based on WiFi signals, recognize physical activities from smartphone sensors and 3D skeleton data, detect hand gestures from accelerometer signals, and so on (see the sketch after this blurb).
*Program your own ensemble learning methods and use Multi-View Stacking to fuse signals from heterogeneous data sources.
*Use unsupervised learning algorithms to discover criminal behavioral patterns.
*Build deep learning neural networks with TensorFlow and Keras to classify muscle activity from electromyography signals, and Convolutional Neural Networks to detect smiles in images.
*Evaluate the performance of your models in traditional and multi-user settings.
*Build anomaly detection models such as Isolation Forests and autoencoders to detect abnormal fish behaviors.
This book is intended for undergraduate and graduate students and researchers from ubiquitous computing, behavioral ecology, psychology, e-health, and other disciplines who want to learn the basics of machine learning and deep learning, and for more experienced individuals who want to apply machine learning to analyze behavioral data.
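The supervised train-and-evaluate workflow described above looks roughly like the following sketch, shown here with the rpart package (which ships with R) on a built-in dataset rather than the book's sensor data:

    # Train/test split and a decision tree classifier.
    library(rpart)
    set.seed(1)
    idx   <- sample(nrow(iris), 0.7 * nrow(iris))    # 70% training split
    train <- iris[idx, ]
    test  <- iris[-idx, ]
    fit   <- rpart(Species ~ ., data = train)        # fit the tree
    pred  <- predict(fit, test, type = "class")
    mean(pred == test$Species)                       # simple accuracy estimate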
*Coherent treatment of a variety of approaches to multiple comparisons (see the sketch after this list)
*Broad coverage of topics, with contributions by internationally leading experts
*Detailed treatment of applications in medicine and life sciences
*Suitable for researchers, lecturers/students, and practitioners
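As a minimal illustration of the topic (not code from this volume), base R's p.adjust() applies several of the classical adjustment procedures to a vector of raw p-values:

    # Adjust raw p-values for multiple comparisons (base R).
    p <- c(0.001, 0.008, 0.039, 0.041, 0.20)
    p.adjust(p, method = "bonferroni")   # family-wise error rate control
    p.adjust(p, method = "BH")           # false discovery rate control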
Nearly every large corporation and governmental agency is taking a fresh look at its current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the "Big Data Era", and most see a critical need to revitalize their current capabilities. Some find the frustrating and business-impeding continuation of a long-standing "silos of data" problem; others see an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, or a lack of progress in achieving the long-sought-after enterprise-wide "single version of the truth"; many see all of the above. As a result, IT directors, strategists, and architects find that they need to go back to the drawing board and produce a brand-new BI/data warehousing roadmap to move their enterprises from the current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best-practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human-factors considerations in a manner that blends the visionary and the pragmatic.
The SPSS Survival Manual throws a lifeline to students and researchers grappling with this powerful data analysis software. In her bestselling guide, Julie Pallant takes you through the entire research process, helping you choose the right data analysis technique for your project. This edition has been updated to cover SPSS through version 26. From the formulation of research questions, to the design of the study and analysis of data, to reporting the results, Julie discusses basic and advanced statistical techniques. She outlines each technique clearly, with step-by-step procedures for performing the analysis, a detailed guide to interpreting data output, and an example of how to present the results in a report. For both beginners and experienced users in psychology, sociology, health sciences, medicine, education, business, and related disciplines, the SPSS Survival Manual is an essential text. It is illustrated throughout with screen grabs, examples of output, and tips, and is further supported by a website with sample data and guidelines on report writing. This seventh edition is fully revised and updated to accommodate changes to IBM SPSS procedures.
*Systematically introduces the major components of the spatial predictive modeling (SPM) process.
*Presents novel hybrid methods (228 hybrids plus numerous variants) combining modern statistical or machine learning methods with mathematical and/or univariate geostatistical methods (a toy geostatistical sketch follows this list).
*Develops novel predictive-accuracy-based variable selection techniques for spatial predictive methods.
*Covers predictive-accuracy-based parameter/model optimization.
*Provides reproducible examples for SPM of various data types in R.
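The toy sketch promised above: as a flavor of the univariate geostatistical ingredients such hybrids build on, here is a minimal inverse-distance-weighting (IDW) interpolator in base R. This is a generic illustration written for this summary, not one of the book's hybrid methods, and `idw_predict` is a hypothetical helper name:

    # Toy inverse-distance-weighted (IDW) prediction at one location (base R).
    # xy: matrix of sample coordinates; z: observed values; p: distance power.
    # (Assumes newpt does not coincide exactly with a sample point.)
    idw_predict <- function(xy, z, newpt, p = 2) {
      d <- sqrt(rowSums((xy - matrix(newpt, nrow(xy), 2, byrow = TRUE))^2))
      w <- 1 / d^p
      sum(w * z) / sum(w)
    }
    xy <- cbind(c(0, 1, 0, 1), c(0, 0, 1, 1))     # four sample locations
    z  <- c(2.0, 2.5, 3.0, 3.5)                   # observed values
    idw_predict(xy, z, newpt = c(0.5, 0.5))       # interpolated value at the center (2.75)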
Data Analytics for the Social Sciences is an introductory, graduate-level treatment of data analytics for social science. It features applications in the R language, arguably the fastest-growing and leading statistical tool for researchers. The book starts with an ethics chapter on the uses and potential abuses of data analytics. Chapters 2 and 3 show how to implement a broad range of statistical procedures in R. Chapters 4 and 5 deal with regression and classification trees and with random forests. Chapter 6 deals with machine learning models and the "caret" package, which makes hundreds of models available to the researcher. Chapter 7 deals with neural network analysis, and Chapter 8 deals with network analysis and visualization of network data. A final chapter treats text analysis, including web scraping, comparative word frequency tables, word clouds, word maps, sentiment analysis, topic analysis, and more. All empirical chapters have two "Quick Start" exercises designed to allow quick immersion in chapter topics, followed by "In Depth" coverage. Data are available for all examples, and runnable R code is provided in a "Command Summary". An appendix provides an extended tutorial on R and RStudio. Almost 30 online supplements provide additional information for the complete book, including "books within the book" on a variety of topics, such as agent-based modeling. Rather than focusing on equations, derivations, and proofs, this book emphasizes hands-on production of output for various social science models and how to interpret that output. It is suitable for advanced-level undergraduate and graduate students learning statistical data analysis.
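As an illustration of the unified interface that the caret chapter describes, the sketch below fits a cross-validated k-nearest-neighbors classifier with caret's train() on a built-in dataset. It assumes the caret package is installed and is not an example from the book:

    # Fit a k-nearest-neighbors classifier via caret's unified train() interface.
    library(caret)
    set.seed(123)
    fit <- train(Species ~ ., data = iris, method = "knn",
                 trControl = trainControl(method = "cv", number = 5))  # 5-fold CV
    fit$results                              # cross-validated accuracy per value of k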
A Strong Practical Focus on Applications and Algorithms
Computational Statistics Handbook with MATLAB(R), Third Edition covers today's most commonly used techniques in computational statistics while maintaining the same philosophy and writing style of the bestselling previous editions. The text keeps theoretical concepts to a minimum, emphasizing the implementation of the methods.
New to the Third Edition
This third edition is updated with the latest version of MATLAB and the corresponding version of the Statistics and Machine Learning Toolbox. It also incorporates new sections on the nearest neighbor classifier, support vector machines, model checking and regularization, partial least squares regression, and multivariate adaptive regression splines.
Web Resource
The authors include algorithmic descriptions of the procedures as well as examples that illustrate the use of the algorithms in data analysis. The MATLAB code, examples, and data sets are available online.
*Focused on practical matters: this book does not cover Shiny concepts, but rather the practical tools and methodologies to use for production (a minimal app is sketched after this list).
*Based on experience: this book is a formalization of several years of experience building Shiny applications.
*Original content: this book presents new methodology and tooling, not just a review of what already exists.
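To ground the discussion, here is the smallest useful Shiny app: a slider driving a reactive histogram. This is a generic example of the kind of application the book takes to production, not the book's own tooling:

    # Minimal Shiny app: a slider input driving a reactive histogram.
    library(shiny)
    ui <- fluidPage(
      sliderInput("n", "Sample size", min = 10, max = 500, value = 100),
      plotOutput("hist")
    )
    server <- function(input, output) {
      output$hist <- renderPlot(hist(rnorm(input$n), main = "Random sample"))
    }
    shinyApp(ui, server)   # launches the app locally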
*Provides a comprehensive review of methods and applications of Bayesian variable selection.
*Divided into four parts: spike-and-slab priors (a toy simulation follows this list); continuous shrinkage priors; extensions to various models; and other approaches to Bayesian variable selection.
*Covers theoretical and methodological aspects, as well as worked examples with R code provided in the online supplement.
*Includes contributions by experts in the field.
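To make the first topic concrete, the following base R snippet simulates coefficients from a simple spike-and-slab prior; it is an illustrative sketch written for this summary, not code from the handbook or its supplement:

    # Draws from a basic spike-and-slab prior: with probability `prob` a
    # coefficient is "active" (slab: normal draw), otherwise exactly zero (spike).
    set.seed(7)
    n_coef <- 10
    active <- rbinom(n_coef, size = 1, prob = 0.3)   # inclusion indicators
    beta   <- active * rnorm(n_coef, mean = 0, sd = 2)
    beta                                             # mostly zeros, a few nonzero effects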
The idea of the Gröbner basis first appeared in a 1927 paper by F. S. Macaulay, who succeeded in creating a combinatorial characterization of the Hilbert functions of homogeneous ideals of the polynomial ring. Later, the modern definition of the Gröbner basis was introduced independently by Heisuke Hironaka in 1964 and Bruno Buchberger in 1965. However, after its discovery by Hironaka and Buchberger, the notion was not actively pursued for 20 years. A breakthrough was made in the mid-1980s by David Bayer and Michael Stillman, who created the Macaulay computer algebra system with the help of the Gröbner basis. Since then, rapid development of the Gröbner basis has been achieved by many researchers, including Bernd Sturmfels. This book serves as a standard bible of the Gröbner basis, for which the harmony of theory, application, and computation is indispensable. It provides all the fundamentals for graduate students to learn the ABCs of the Gröbner basis, requiring no special knowledge to understand those basic points. Starting from the introductory performance of the Gröbner basis (Chapter 1), a trip around mathematical software follows (Chapter 2). Then comes a deep discussion of how to compute the Gröbner basis (Chapter 3). These three chapters may be regarded as the first act of a mathematical play. The second act opens with topics on algebraic statistics (Chapter 4), a fascinating research area where the Gröbner basis of a toric ideal is a fundamental tool of the Markov chain Monte Carlo method. Moreover, the Gröbner basis of a toric ideal has had a great influence on the study of convex polytopes (Chapter 5). In addition, the Gröbner basis of the ring of differential operators gives effective algorithms on holonomic functions (Chapter 6). The third act (Chapter 7) is a collection of concrete examples and problems for Chapters 4, 5, and 6, emphasizing computation using various software systems.
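For readers unfamiliar with the object itself, the standard definition (stated here for orientation, not quoted from the book) is short: fix a monomial order on the polynomial ring k[x_1, ..., x_n] and let LT(f) denote the leading term of f. A finite subset G of an ideal I is a Gröbner basis of I when the leading terms of G generate the same monomial ideal as the leading terms of all of I:

    \langle \operatorname{LT}(g) : g \in G \rangle \;=\; \langle \operatorname{LT}(f) : f \in I \rangle

Buchberger's algorithm, the subject of Chapter 3, computes such a basis from any finite generating set of I.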
*An introduction to the Central Dogma of molecular biology and information flow in biological systems.
*A systematic overview of the methods for generating gene expression data.
*Background knowledge on statistical modeling and machine learning techniques.
*Detailed methodology for analyzing gene expression data, with an example case study.
*Clustering methods for finding co-expression patterns in microarray, bulk RNA-seq, and scRNA-seq data (a toy clustering sketch follows this list).
*A large number of practical tools, systems, and repositories that are useful for computational biologists to create, analyze, and validate biologically relevant gene expression patterns.
*Suitable for multidisciplinary researchers and practitioners in computer science and the biological sciences.
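The toy sketch promised above: co-expression clustering can be illustrated with base R's hclust on a small simulated expression matrix (simulated values, not real expression data, and not code from the book):

    # Hierarchical clustering of genes by expression profile (base R).
    set.seed(10)
    expr <- matrix(rnorm(60), nrow = 10,                        # 10 genes x 6 samples
                   dimnames = list(paste0("gene", 1:10), paste0("s", 1:6)))
    hc <- hclust(dist(expr), method = "average")                # distances between gene profiles
    plot(hc, main = "Gene co-expression dendrogram")
    cutree(hc, k = 3)                                           # assign genes to 3 clusters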
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups, such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimates of both the ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs, using Hadamard matrices, the Kronecker product, the Khatri-Rao product, and mixed orthogonal arrays, to name a few.
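One of the construction tools mentioned, the Kronecker product, can be illustrated in a few lines of base R via Sylvester's classical Hadamard construction (a generic sketch, not from the monograph):

    # Sylvester's construction: Kronecker products of a 2x2 Hadamard matrix.
    H2 <- matrix(c(1, 1, 1, -1), nrow = 2)   # smallest nontrivial Hadamard matrix
    H4 <- kronecker(H2, H2)                  # 4 x 4 Hadamard matrix
    H4 %*% t(H4)                             # equals 4 * diag(4): rows are orthogonal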
"MATLAB By Example" guides the reader through each step of writing MATLAB programs. The book assumes no previous programming experience on the part of the reader, and uses multiple examples in clear language to introduce concepts and practical tools. Straightforward and detailed instructions allow beginners to learn and develop their MATLAB skills quickly. The book consists of ten chapters, discussing in detail the
integrated development environment (IDE), scalars, vectors, arrays,
adopting structured programming style using functions and recursive
functions, control flow, debugging, profiling, and structures. A
chapter also describes Symbolic Math Toolbox, teaching readers how
to solve algebraic equations, differentiation, integration,
differential equations, and Laplace and Fourier transforms.
Containing hundreds of examples illustrated using screen shots,
hundreds of exercises, and three projects, this book can be used to
complement coursework or as a self-study book, and can be used as a
textbook in universities, colleges and high schools. |
You may like...
Synthesis and Operability Strategies for… - Efstratios N. Pistikopoulos, Yuhe Tian (Paperback, R3,954)
14th International Symposium on Process… - Yoshiyuki Yamashita, Manabu Kano (Hardcover, R11,801)
Case Studies in Geospatial Applications… - Pravat Kumar Shit, Gouri Sankar Bhunia, … (Paperback, R3,438)
Essential Java for Scientists and… - Brian Hahn, Katherine Malan (Paperback, R1,341)
Intelligent Edge Computing for Cyber… - D. Jude Hemanth, Bb Gupta, … (Paperback, R3,137)