Welcome to Loot.co.za!
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensometrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
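The corpus-processing and frequency-data topics mentioned above can be given a flavor with a minimal sketch. The example below is illustrative only and is written in Python rather than the R used by the book; the tokenizer pattern and toy corpus are assumptions for the demo:

```python
from collections import Counter
import re

def token_frequencies(text):
    """Tokenize a text into lowercase word tokens and count them."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

corpus = "the cat sat on the mat and the cat slept"
freqs = token_frequencies(corpus)
print(freqs.most_common(2))  # [('the', 3), ('cat', 2)]
```

Real corpus work would add Unicode-aware tokenization and normalization, but a frequency table like this is the basic object on which the more advanced clustering methods build.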
Most books on linear systems for undergraduates cover discrete and continuous systems material together in a single volume. Such books also include topics in discrete and continuous filter design, and discrete and continuous state-space representations. However, with this magnitude of coverage, the student typically gets a little of both discrete and continuous linear systems but not enough of either. Minimal coverage of discrete linear systems material is acceptable provided that there is ample coverage of continuous linear systems. On the other hand, minimal coverage of continuous linear systems does no justice to either of the two areas. Under the best of circumstances, a student needs a solid background in both these subjects. Continuous linear systems and discrete linear systems are broad topics, and each merits a book devoted to its respective subject matter. The objective of this set of two volumes is to present the needed material for each at the undergraduate level, using MATLAB® (The MathWorks, Inc.).
This book is a comprehensive guide to qualitative comparative analysis (QCA) using R. Using Boolean algebra to implement principles of comparison used by scholars engaged in the qualitative study of macro social phenomena, QCA acts as a bridge between the quantitative and the qualitative traditions. The QCA package for R, created by the author, facilitates QCA within a graphical user interface. This book provides the most current information on the latest version of the QCA package, which combines written commands with a cross-platform interface. Beginning with a brief introduction to the concept of QCA, this book moves from theory to calibration, from analysis to factorization, and hits on all the key areas of QCA in between. Chapters one through three are introductory, familiarizing the reader with R, the QCA package, and elementary set theory. The next few chapters introduce important applications of the package beginning with calibration, analysis of necessity, analysis of sufficiency, parameters of fit, negation and factorization, and the construction of Venn diagrams. The book concludes with extensions to the classical package, including temporal applications and panel data. Providing a practical introduction to an increasingly important research tool for the social sciences, this book will be indispensable for students, scholars, and practitioners interested in conducting qualitative research in political science, sociology, business and management, and evaluation studies.
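Among the parameters of fit mentioned above, consistency is central to QCA. As a hedged illustration (not code from the QCA package itself), here is the standard fuzzy-set sufficiency-consistency formula in Python, with hypothetical membership scores:

```python
def sufficiency_consistency(x, y):
    """Consistency of 'X is sufficient for Y' for fuzzy-set memberships:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Hypothetical set-membership scores for four cases
x = [0.8, 0.6, 0.2, 0.9]  # membership in the condition
y = [0.9, 0.7, 0.5, 0.8]  # membership in the outcome
print(sufficiency_consistency(x, y))  # approximately 0.96
```

A score near 1.0 means the condition's membership is (almost) always contained within the outcome's membership, the set-theoretic analogue of sufficiency.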
This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz, Spain, during June 11-16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics, and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers from around the globe, and contribute to the further development of the field.
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
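For readers new to the paradigm, the basic nearest neighbor rule that the book analyzes can be sketched in a few lines. This is an illustrative Python toy (the data and choice of k are arbitrary), not material from the book:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (point, label) pairs; distance is Euclidean."""
    neighbors = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # a
print(knn_predict(train, (5.5, 5.0)))  # b
```

The statistical questions the book addresses start here: how the error of this rule behaves as the sample grows, and how k should scale with it.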
Pulsar timing is a promising method for detecting gravitational waves in the nano-Hertz band. In his prize-winning Ph.D. thesis, Rutger van Haasteren deals with how one takes thousands of seemingly random timing residuals measured by pulsar observers and extracts information about the presence and character of the gravitational waves in the nano-Hertz band that are washing over our Galaxy. The author presents a sophisticated mathematical algorithm that deals with this issue. His algorithm is probably the most well-developed of those currently in use in the Pulsar Timing Array community. In chapter 3, the gravitational-wave memory effect is described. This is one of the first descriptions of this interesting effect in relation to pulsar timing, which may become observable in future Pulsar Timing Array projects. The last part of the work is dedicated to an effort to combine the European pulsar timing data sets in order to search for gravitational waves. This study has placed the most stringent limit to date on the intensity of gravitational waves that are produced by pairs of supermassive black holes dancing around each other in distant galaxies, as well as those that may be produced by vibrating cosmic strings. Rutger van Haasteren won the 2011 GWIC Thesis Prize of the Gravitational Wave International Community for his innovative work in various directions of the search for gravitational waves by pulsar timing. That work is presented in this Ph.D. thesis.
This book discusses recent developments in mathematical programming and game theory, and the application of several mathematical models to problems in finance, games, economics and graph theory. All contributing authors are eminent researchers in their respective fields, from across the world. This book contains a collection of selected papers presented at the 2017 Symposium on Mathematical Programming and Game Theory, held in New Delhi during 9-11 January 2017. Researchers, professionals and graduate students will find the book an essential resource for current work in mathematical programming, game theory and their applications in finance, economics and graph theory. The symposium provides a forum for new developments and applications of mathematical programming and game theory as well as an excellent opportunity to disseminate the latest major achievements and to explore new directions and perspectives.
R is open source statistical computing software. Since the R core group was formed in 1997, R has been extended by a very large number of packages with extensive documentation along with examples freely available on the internet. It offers a large number of statistical and numerical methods, graphical tools, and visualization of extraordinarily high quality. R was recently ranked in 14th place by the Transparent Language Popularity Index and 6th as a scripting language, after PHP, Python, and Perl. The book is designed so that it can be used right away by novices while appealing to experienced users as well. Each article begins with a data example that can be downloaded directly from the R website. Data analysis questions are articulated following the presentation of the data. The necessary R commands are spelled out and executed and the output is presented and discussed. Other examples of data sets with a different flavor and different set of commands but following the theme of the article are presented as well. Each chapter presents a hands-on experience. R has superb graphical capabilities and the book brings out the essentials in this arena. The end user can benefit immensely by applying the graphics to enhance research findings. The core statistical methodologies such as regression, survival analysis, and discrete data are all covered.
Designed to help readers analyze and interpret research data using IBM SPSS, this user-friendly book shows them how to choose the appropriate statistic based on the design; perform intermediate statistics, including multivariate statistics; interpret output; and write about the results. The book reviews research designs and how to assess the accuracy and reliability of data; how to determine whether data meet the assumptions of statistical tests; how to calculate and interpret effect sizes for intermediate statistics, including odds ratios for logistic and discriminant analyses; and how to compute and interpret post-hoc power. It also provides an overview of basic statistics for those who need a review. Unique chapters on multilevel linear modeling; multivariate analysis of variance (MANOVA); assessing reliability of data; multiple imputation; mediation, moderation, and canonical correlation; and factor analysis are provided. SPSS syntax with output is included for those who prefer this format. The new edition features IBM SPSS version 22, although the book can be used with most older and newer versions, and adds:
* New discussion of intraclass correlations (Ch. 3)
* Expanded discussion of effect sizes that includes confidence intervals of effect sizes (Ch. 5)
* New information on part and partial correlations and how they are interpreted, and a new discussion of backward elimination, another useful multiple regression method (Ch. 6)
* New chapter on how to use a variable as a mediator or a moderator (Ch. 7)
* Revised chapter on multilevel and hierarchical linear modeling (Ch. 12)
* New chapter (Ch. 13) on multiple imputation that demonstrates how to deal with missing data
* Updated web resources for instructors, including PowerPoint slides, answers to interpretation questions, and extra SPSS problems, and for students, data sets, chapter outlines, and study guides
" IBM SPSS for Intermediate Statistics, Fifth Edition "provides helpful teaching tools: all of the key SPSS windows needed to perform the analyses outputs with call-out boxes to highlight key points interpretation sections and questions to help students better understand and interpret the output extra problems with realistic data sets for practice using intermediate statistics Appendices on how to get started with SPSS, write research questions, and basic statistics. An ideal supplement for courses in either intermediate/advanced statistics or research methods taught in departments of psychology, education, and other social, behavioral, and health sciences. This book is also appreciated by researchers in these areas looking for a handy reference for SPSS"
This volume conveys some of the surprises, puzzles and success stories in high-dimensional and complex data analysis and related fields. Its peer-reviewed contributions showcase recent advances in variable selection, estimation and prediction strategies for a host of useful models, as well as essential new developments in the field. The continued and rapid advancement of modern technology now allows scientists to collect data of unprecedented size and complexity. Examples include epigenomic data, genomic data, proteomic data, high-resolution image data, high-frequency financial data, functional and longitudinal data, and network data. Simultaneous variable selection and estimation is one of the key statistical problems involved in analyzing such big and complex data. The purpose of this book is to stimulate research and foster interaction between researchers in the area of high-dimensional data analysis. More concretely, its goals are to: 1) highlight and expand the breadth of existing methods in big data and high-dimensional data analysis and their potential for the advancement of both the mathematical and statistical sciences; 2) identify important directions for future research in the theory of regularization methods, in algorithmic development, and in methodologies for different application areas; and 3) facilitate collaboration between theoretical and subject-specific researchers.
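As one concrete example of the regularization machinery behind simultaneous variable selection and estimation, the lasso's coordinate-wise update relies on the soft-thresholding operator, sketched here in Python (an illustration of the standard operator, not code taken from the volume):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator, the proximal map of lam * |.|.
    Shrinks z toward zero by lam, and sets it exactly to zero if |z| <= lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print([soft_threshold(z, 1.0) for z in (-3.0, -0.5, 0.2, 2.5)])  # [-2.0, 0.0, 0.0, 1.5]
```

Coefficients falling below the penalty level are set exactly to zero, which is what lets a single regularized fit both select variables and estimate the retained ones.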
Linear mixed-effects models (LMMs) are an important class of statistical models that can be used to analyze correlated data. Such data are encountered in a variety of fields including biostatistics, public health, psychometrics, educational measurement, and sociology. This book aims to support a wide range of uses for the models by applied researchers in those and other fields by providing state-of-the-art descriptions of the implementation of LMMs in R. To help readers become familiar with the features of the models and the details of carrying them out in R, the book includes a review of the most important theoretical concepts of the models. The presentation connects theory, software and applications. It is built up incrementally, starting with a summary of the concepts underlying simpler classes of linear models like the classical regression model, and carrying them forward to LMMs. A similar step-by-step approach is used to describe the R tools for LMMs. All the classes of linear models presented in the book are illustrated using real-life data. The book also introduces several novel R tools for LMMs, including a new class of variance-covariance structures for random effects, and methods for influence diagnostics and for power calculations. They are included in an R package that should assist readers in applying these and other methods presented in this text.
This book is designed as a gentle introduction to the fascinating field of choice modeling and its practical implementation using the R language. Discrete choice analysis is a family of methods useful to study individual decision-making. With strong theoretical foundations in consumer behavior, discrete choice models are used in the analysis of health policy, transportation systems, marketing, economics, public policy, political science, urban planning, and criminology, to mention just a few fields of application. The book does not assume prior knowledge of discrete choice analysis or R, but instead strives to introduce both in an intuitive way, starting from simple concepts and progressing to more sophisticated ideas. Loaded with a wealth of examples and code, the book covers the fundamentals of data and analysis in a progressive way. Readers begin with simple data operations and the underlying theory of choice analysis and conclude by working with sophisticated models including latent class logit models, mixed logit models, and ordinal logit models with taste heterogeneity. Data visualization is emphasized to explore both the input data as well as the results of models. This book should be of interest to graduate students, faculty, and researchers conducting empirical work using individual level choice data who are approaching the field of discrete choice analysis for the first time. In addition, it should interest more advanced modelers wishing to learn about the potential of R for discrete choice analysis. By embedding the treatment of choice modeling within the R ecosystem, readers benefit from learning about the larger R family of packages for data exploration, analysis, and visualization.
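The workhorse underlying the models named above is the multinomial logit, which turns the utilities of the alternatives into choice probabilities. As a minimal illustration (in Python rather than the R the book uses, with made-up utilities for three hypothetical travel modes):

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                      # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for three alternatives: car, bus, bike
probs = logit_probabilities([1.2, 0.4, -0.3])
print([round(p, 3) for p in probs])  # [0.598, 0.269, 0.133]
```

Latent class and mixed logit models, covered late in the book, extend this kernel by letting the utility coefficients vary across decision-makers.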
The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and also by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including from North America, Asia, and Europe. It offered 3 keynote speeches, 7 short courses, 76 parallel scientific sessions, student paper sessions, and social events.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters will typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
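At the simplest statistical end of the spectrum just described, a z-score rule flags points far from the mean; the proximity-based and subspace methods the book covers refine this idea for complex data. The data and threshold below are hypothetical:

```python
from statistics import mean, pstdev

def zscore_outliers(data, threshold=3.0):
    """Flag points whose absolute z-score exceeds `threshold`."""
    mu, sigma = mean(data), pstdev(data)
    return [x for x in data if abs((x - mu) / sigma) > threshold]

readings = [10, 12, 11, 9, 10, 11, 10, 12, 9, 11, 45]
print(zscore_outliers(readings, threshold=2.5))  # [45]
```

This sketch assumes roughly unimodal numeric data; the book's point is precisely that real outlier analysis must cope with data (text, streams, graphs) where no such assumption holds.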
"This volume provides essential guidance for transforming
mathematics learning in schools through the use of innovative
technology, pedagogy, and curriculum. It presents clear, rigorous
evidence of the impact technology can have in improving students
learning of important yet complex mathematical concepts -- and goes
beyond a focus on technology alone to clearly explain how teacher
professional development, pedagogy, curriculum, and student
participation and identity each play an essential role in
transforming mathematics classrooms with technology. Further,
evidence of effectiveness is complemented by insightful case
studies of how key factors lead to enhancing learning, including
the contributions of design research, classroom discourse, and
meaningful assessment. "* Engaging students in deeply learning the important concepts
in mathematics "* Engaging students in deeply learning the important concepts
in mathematics
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and questions. The statistical theory and methods in this volume aim at the analysis of data which deviate from classical stringent model assumptions, which contain outlying values and/or have a complex structure. Written for researchers as well as master's and PhD students with a good knowledge of statistics.
This book provides new insights on the study of global environmental changes using the ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. The main advantage of this book is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed and results are provided on modeling the dynamics of the system's characteristics with regard to scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.