This book provides hands-on guidance for researchers and practitioners in criminal justice and criminology to perform statistical analyses and data visualization in the free and open-source software R. It offers a step-by-step guide for beginners to become familiar with the RStudio platform and the tidyverse set of packages. This volume will help users master the fundamentals of the R programming language, providing tutorials in each chapter that lay out research questions and hypotheses centered on a real criminal justice dataset, such as data from the National Survey on Drug Use and Health, the National Crime Victimization Survey, the Youth Risk Behavior Surveillance System, the Monitoring the Future Study, and the National Youth Survey. Users will also learn how to manipulate common sources of agency data, such as calls-for-service (CFS) data. The end of each chapter includes exercises that reinforce the R tutorial examples, designed to help readers master the software as well as to provide practice on statistical concepts, data analysis, and interpretation of results. The text can be used as a stand-alone guide to learning R or as a companion guide to an introductory statistics textbook, such as Basic Statistics in Criminal Justice (2020).
This book contains the proceedings of the 12th International Conference on Theorem Proving in Higher Order Logics (TPHOLs 99), which was held in Nice at the University of Nice-Sophia Antipolis, September 14-17, 1999. Thirty-five papers were submitted as completed research, and each of them was refereed by at least three reviewers appointed by the program committee. Twenty papers were selected for publication in this volume. Following a well-established tradition in this series of conferences, a number of researchers also came to discuss work in progress, using short talks and displays at a poster session. These papers are included in a supplementary proceedings volume. These supplementary proceedings take the form of a book published by INRIA in its series of research reports, under the following title: Theorem Proving in Higher Order Logics: Emerging Trends 1999. The organizers were pleased that Dominique Bolignano, Arjeh Cohen, and Thomas Kropf accepted invitations to be guest speakers for TPHOLs 99. For several years, D. Bolignano has been the leader of the VIP team in the Dyade consortium between INRIA and Bull and is now at the head of the company Trusted Logic. His team has been concentrating on the use of formal methods for the effective verification of security properties for protocols used in electronic commerce. A. Cohen has had a key influence on the development of computer algebra in The Netherlands, and his contribution has been of particular importance to researchers interested in combining the several known methods of using computers to perform mathematical investigations. T. Kropf is an important actor in the Europe-wide project PROSPER, which aims to deliver the benefits of mechanized formal analysis to system builders in industry.
Increasing the designer's confidence that a piece of software or hardware is compliant with its specification has become a key objective in the design process for software and hardware systems. Many approaches to reaching this goal have been developed, including rigorous specification, formal verification, automated validation, and testing. Finite-state model checking, as it is supported by the explicit-state model checker SPIN, is enjoying a constantly increasing popularity in automated property validation of concurrent, message-based systems. SPIN has been in large parts implemented and is being maintained by Gerard Holzmann, and is freely available via ftp from netlib.bell-labs.com or from URL http://cm.bell-labs.com/cm/cs/what/spin/Man/README.html. The beauty of finite-state model checking lies in the possibility of building "push-button" validation tools. When the state space is finite, the state-space traversal will eventually terminate with a definite verdict on the property that is being validated. Equally helpful is the fact that in case the property is invalidated the model checker will return a counterexample, a feature that greatly facilitates fault identification. On the downside, the time it takes to obtain a verdict may be very long if the state space is large, and the type of properties that can be validated is restricted to a logic of rather limited expressiveness.
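To make the idea of explicit-state checking described above concrete, the following is a minimal sketch in Python, not SPIN itself and not its actual algorithms; the function names and the toy model are illustrative only. It performs a breadth-first traversal of a finite state space and either reports that a safety property holds on every reachable state or returns a counterexample path.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Explicit-state breadth-first traversal of a finite state space.

    Returns None if no reachable state violates the property, otherwise a
    counterexample path (list of states) from the initial state to a bad one.
    """
    parent = {initial: None}           # visited set doubling as predecessor map
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            # Reconstruct the counterexample by walking the parent links back.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                        # property holds on every reachable state

# Toy model: a counter whose steps may advance it by 1 or 2 (mod 5); the
# safety property "the counter never reaches 4" is violated, so a
# counterexample such as [0, 2, 4] is returned.
if __name__ == "__main__":
    trace = check_safety(0,
                         successors=lambda s: [(s + 1) % 5, (s + 2) % 5],
                         is_bad=lambda s: s == 4)
    print("counterexample:", trace)
```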
This compact introduction to Mathematica, accessible to beginners at all levels, presents the basic elements of the latest version 3 (front end, kernel, standard packages). Using examples and exercises not specific to a scientific area, it teaches readers how to effectively solve problems in their own field. The cross-platform CD-ROM contains the entire book in the form of Mathematica notebooks, including color graphics, animations, and hyperlinks, plus the program MathReader.
This unusual introduction to Maple shows readers how Maple or any other computer algebra system fits naturally into a mathematically oriented work environment. Designed for mathematicians, engineers, econometricians, and other scientists, this book shows how computer algebra can enhance their theoretical work. A CD-ROM contains all the Maple worksheets presented in the book.
This book brings together some of the finest minds in the statistical and neural network research communities. It provides a broad overview of important current developments and highlights the likely future trends in the area of neural networks.
Newcomers to R are often intimidated by the command-line interface, the vast number of functions and packages, or the processes of importing data and performing a simple statistical analysis. The R Primer provides a collection of concise examples and solutions to R problems frequently encountered by new users of this statistical software. This new edition adds coverage of RStudio and reproducible research.
The interaction between computers and mathematics is becoming more and more important at all levels as computers become more sophisticated. This book shows how simple programs can be used to do significant mathematics. The purpose of this book is to give those with some mathematical background a wealth of material with which to appreciate both the power of the microcomputer and its relevance to the study of mathematics. The authors cover topics such as number theory, approximate solutions, differential equations and iterative processes, with each chapter self-contained. Many exercises and projects are included, giving ready-made material for demonstrating mathematical ideas. Only fundamental knowledge of mathematics is assumed, and programming is restricted to "basic BASIC", which will be understood by any microcomputer. The book may be used as a textbook for algorithmic mathematics at several levels, since all the topics covered appear in any undergraduate mathematics course.
This volume contains the Keynote, Invited and Full Contributed papers presented at COMPSTAT'98. A companion volume (Payne & Lane, 1998) contains papers describing the Short Communications and Posters. COMPSTAT is a one-week conference held every two years under the auspices of the International Association for Statistical Computing, a section of the International Statistical Institute. COMPSTAT'98 is organised by IACR-Rothamsted, IACR-Long Ashton, the University of Bristol Department of Mathematics and the University of Bath Department of Mathematical Sciences. It is taking place from 24-28 August 1998 at the University of Bristol. Previous COMPSTATs (from 1974-1996) were in Vienna, Berlin, Leiden, Edinburgh, Toulouse, Prague, Rome, Copenhagen, Dubrovnik, Neuchatel, Vienna and Barcelona. The conference is the main European forum for developments at the interface between statistics and computing. This was encapsulated as follows in the COMPSTAT'98 Call for Papers: Statistical computing provides the link between statistical theory and applied statistics. The scientific programme of COMPSTAT ranges over all aspects of this link, from the development and implementation of new computer-based statistical methodology through to innovative applications and software evaluation. The programme should appeal to anyone working in statistics and using computers, whether in universities, industrial companies, research institutes or as software developers.
This companion to The New Statistical Analysis of Data by Anderson and Finn provides a hands-on guide to data analysis using SPSS. Included with this guide are instructions for obtaining the data sets to be analysed via the World Wide Web. First, the authors provide a brief review of using SPSS; then, corresponding to the organisation of The New Statistical Analysis of Data, readers participate in analysing many of the data sets discussed in the book. In so doing, students learn how to conduct reasonably sophisticated statistical analyses using SPSS while gaining insight into the nature and purpose of statistical investigation.
Maple V Mathematics Programming Guide is the fully updated language and programming reference for Maple V Release 5. It presents a detailed description of Maple V Release 5 - the latest release of the powerful, interactive computer algebra system used worldwide as a tool for problem-solving in mathematics, the sciences, engineering, and education. This manual describes the use of both numeric and symbolic expressions, the data types available, and the programming language statements in Maple. It shows how the system can be extended or customized through user defined routines and gives complete descriptions of the system's user interface and 2D and 3D graphics capabilities.
This upper-division laboratory supplement for courses in abstract algebra consists of several Mathematica packages programmed as a foundation for group and ring theory. Additionally, the "user's guide" illustrates the functionality of the underlying code, while the lab portion of the book reflects the contents of the Mathematica-based electronic notebooks. Students interact with both the printed and electronic versions of the material in the laboratory, and can look up details and reference information in the user's guide. Exercises occur in the stream of the text of the lab, which provides a context within which to answer, and the questions are designed to be answered either in the electronic notebook or on paper. The notebooks are available in both 2.2 and 3.0 versions of Mathematica, and run across all platforms for which Mathematica exists. A very timely and unique addition to the undergraduate abstract algebra curriculum, filling a tremendous void in the literature.
S+SPATIALSTATS is the first comprehensive, object-oriented package for the analysis of spatial data. Providing a whole new set of analysis tools, S+SPATIALSTATS was created specifically for the exploration and modeling of spatially correlated data. It can be used to analyze data arising in areas such as environmental, mining, and petroleum engineering, natural resources, geography, epidemiology, demography, and others where data is sampled spatially. This user's manual provides the documentation for the S+SPATIALSTATS module.
COMPSTAT symposia have been held regularly since 1974, when they started in Vienna. This tradition has made COMPSTAT a major forum for the interplay of statistics and computer sciences, with contributions from many well-known scientists all over the world. The scientific programme of COMPSTAT '96 covers all aspects of this interplay, from user experiences and evaluation of software through to the development and implementation of new statistical ideas. All papers presented belong to one of the three following categories: statistical methods (preferably new ones) that require a substantial use of computing; computer environments, tools and software useful in statistics; applications of computational statistics in areas of substantial interest (environment, health, industry, biometrics, etc.).
This book deals with selected problems of machine perception, using various 2D and 3D imaging sensors. It proposes several new original methods, and also provides a detailed state-of-the-art overview of existing techniques for automated, multi-level interpretation of the observed static or dynamic environment. To ensure a sound theoretical basis of the new models, the surveys and algorithmic developments are performed in well-established Bayesian frameworks. Low level scene understanding functions are formulated as various image segmentation problems, where the advantages of probabilistic inference techniques such as Markov Random Fields (MRF) or Mixed Markov Models are considered. For the object level scene analysis, the book mainly relies on the literature of Marked Point Process (MPP) approaches, which consider strong geometric and prior interaction constraints in object population modeling. In particular, key developments are introduced in the spatial hierarchical decomposition of the observed scenarios, and in the temporal extension of complex MRF and MPP models. Apart from utilizing conventional optical sensors, case studies are provided on passive radar (ISAR) and Lidar-based Bayesian environment perception tasks. It is shown, via several experiments, that the proposed contributions embedded into a strict mathematical toolkit can significantly improve the results in real world 2D/3D test images and videos, for applications in video surveillance, smart city monitoring, autonomous driving, remote sensing, and optical industrial inspection.
The book, which contains over two hundred illustrations, is designed for use in school computer labs or with home computers, running the computer algebra system Maple, or its student version. It supports the interactive Maple worksheets, which the authors have developed and which are available free of charge via anonymous ftp (ftp.utirc.utoronto.ca (/pub/ednet/maths/maple)). The book addresses readers who are learning calculus at a pre-university level.
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Maple is a computer algebra system with a fast-growing number of users in universities, schools and other institutions. Werner Burkhardt provides a detailed step-by-step introduction for all first-time users, enabling you to become familiar with the way Maple works as quickly and easily as possible. Using as examples problems from many different aspects of mathematics, problem solving using Maple is fully described in this easy-to-follow tutorial text. Each chapter is self-contained, so you can easily select areas of your own special interest. There are some 'test yourself' problems at the end of each chapter to check your progress, with solutions provided at the end of the book.
Mathematica combines symbolic and numerical calculations, plots, graphics programming, list calculations and structured documentation into an interactive environment. This book covers the program and shows with practical examples how even more complex problems can be solved with just a few commands. From the reviews: "A valuable introductory textbook on Mathematica and is very useful to scientists and engineers who use Mathematica in their work." -- ZENTRALBLATT MATH
This book assembles papers which were presented at the biennial symposium in Computational Statistics held under the auspices of the International Association for Statistical Computing (IASC), a section of ISI, the International Statistical Institute. This symposium, named COMPSTAT '94, was organized by the Statistical Institutes of the University of Vienna and the University of Technology of Vienna, Austria. The series of COMPSTAT symposia started in 1974 in Vienna. Meanwhile they have taken place every other year in Berlin (Germany, 1976), Leiden (The Netherlands, 1978), Edinburgh (Great Britain, 1980), Toulouse (France, 1982), Prague (Czechoslovakia, 1984), Rome (Italy, 1986), Copenhagen (Denmark, 1988), Dubrovnik (Yugoslavia, 1990) and Neuchatel (Switzerland, 1992). This year we are celebrating the 20th anniversary in Vienna, Austria. A movement has clearly been observed from "traditional" computational statistics, with emphasis on methods which produce results quickly and reliably, to computationally intensive methods like resampling procedures, Bayesian methods and dynamic graphics, and on to very recent areas like neural networks, an accentuation on spatial statistics, huge data sets, analysis strategies, etc. For the organization of the symposium, new guidelines worked out by the IASC in written form were in effect this time. The goal was to refresh somewhat the spirit of the start of COMPSTAT '74, keep the tradition of the series and ensure a certain continuity in the sequence of biennial meetings.
The analysis of time series data is an important aspect of data analysis across a wide range of disciplines, including statistics, mathematics, business, engineering, and the natural and social sciences. This package provides both an introduction to time series analysis and an easy-to-use version of a well-known time series computing package called Interactive Time Series Modelling. The programs in the package are intended as a supplement to the text Time Series: Theory and Methods, 2nd edition, also by Peter J. Brockwell and Richard A. Davis. Many researchers and professionals will appreciate this straightforward approach enabling them to run desk-top analyses of their time series data. Amongst the many facilities available are tools for: ARIMA modelling, smoothing, spectral estimation, multivariate autoregressive modelling, transfer-function modelling, forecasting, and long-memory modelling. This version is designed to run under Microsoft Windows 3.1 or later. It comes with two diskettes: one suitable for less powerful machines (IBM PC 286 or later with 540K available RAM and 1.1 MB of hard disk space) and one for more powerful machines (IBM PC 386 or later with 8MB of RAM and 2.6 MB of hard disk space available).
This book is a collection of thirty invited papers, covering the important parts of a rapidly developing area like "computational statistics." All contributions supply information about a specialized topic in a tutorial and comprehensive style. Newest results and developments are discussed. Starting with the foundations of computational statistics, i.e. numerical reliability of software packages or construction principles for pseudorandom number generators, the volume includes design considerations on statistical programming languages and the basic issues of resampling techniques. Also covered are areas like design of experiments, graphical techniques, modelling and testing problems, a review of clustering algorithms, and concise discussions of regression trees or cognitive aspects of authoring systems.
Recommended by Bill Gates. A thought-provoking and wide-ranging exploration of machine learning and the race to build computer intelligences as flexible as our own. In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible.
The emphasis of the book is on how to construct different types of solutions (exact, approximate analytical, numerical, graphical) of numerous nonlinear PDEs correctly, easily, and quickly. The reader can learn a wide variety of techniques and solve the numerous nonlinear PDEs included, as well as many other differential equations, by simplifying and transforming the equations, solutions, arbitrary functions and parameters presented in the book. Numerous comparisons and relationships between various types of solutions and between different methods and approaches are provided, and comparing the results obtained in Maple and Mathematica facilitates a deeper understanding of the subject. Among the large number of CAS available, we choose the two systems, Maple and Mathematica, that are used worldwide by students, research mathematicians, scientists, and engineers. As in our previous books, we propose the idea of using both systems, Maple and Mathematica, in parallel, since in many research problems it is frequently required to compare independent results obtained by using different computer algebra systems, Maple and/or Mathematica, at all stages of the solution process. One of the main points (related to CAS) is based on the implementation of a whole solution method, e.g. starting from an analytical derivation of the exact governing equations, constructing discretizations and analytical formulas of a numerical method, performing the numerical procedure, obtaining various visualizations, and comparing the numerical solution obtained with other types of solutions considered in the book, e.g. with an asymptotic solution.
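As a worked illustration of the kind of exact-solution construction described above (a standard textbook example included here only for illustration, not reproduced from the book), the viscous Burgers equation admits a traveling-wave solution that can be derived by hand and then verified and plotted in Maple or Mathematica:

```latex
% Illustrative example: traveling-wave solution of the viscous Burgers equation.
\[
u_t + u\,u_x = \nu\,u_{xx}, \qquad u(x,t) = U(\xi), \quad \xi = x - ct .
\]
% Substituting the ansatz and integrating once in xi gives a first-order ODE:
\[
-cU + \tfrac{1}{2}U^2 = \nu\,U' + A .
\]
% Its bounded solution connecting the constant states u_1 > u_2 is the
% well-known viscous shock profile:
\[
U(\xi) = \frac{u_1+u_2}{2} - \frac{u_1-u_2}{2}\,
         \tanh\!\left(\frac{(u_1-u_2)\,\xi}{4\nu}\right),
\qquad c = \frac{u_1+u_2}{2}.
\]
```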
This volume presents the published Proceedings of the joint meeting of GLIM92 and the 7th International Workshop on Statistical Modelling, held in Munich, Germany from 13 to 17 July 1992. The meeting aimed to bring together researchers interested in the development and applications of generalized linear modelling in GLIM and those interested in statistical modelling in its widest sense. This joint meeting built upon the success of previous workshops and GLIM conferences. Previous GLIM conferences were held in London and Lancaster, and a joint GLIM Conference/4th Modelling Workshop was held in Trento. (The Proceedings of previous GLIM conferences/Statistical Modelling Workshops are available as numbers 14, 32 and 57 of the Springer-Verlag series of Lecture Notes in Statistics.) Workshops have been organized in Innsbruck, Perugia, Vienna, Toulouse and Utrecht. (Proceedings of the Toulouse Workshop appear as numbers 3 and 4 of volume 13 of the journal Computational Statistics and Data Analysis.) Much statistical modelling is carried out using GLIM, as is apparent from many of the papers in these Proceedings. Thus the Programme Committee were also keen on encouraging papers which addressed problems that are not only of practical importance but are also relevant to GLIM or other software development. The Programme Committee requested both theoretical and applied papers. Thus there are papers in a wide range of practical areas, such as ecology, breast cancer remission and diabetes mortality, banking and insurance, quality control, social mobility, and organizational behaviour.