Linear mixed-effects models (LMMs) are an important class of statistical models that can be used to analyze correlated data. Such data are encountered in a variety of fields including biostatistics, public health, psychometrics, educational measurement, and sociology. This book aims to support a wide range of uses for the models by applied researchers in those and other fields by providing state-of-the-art descriptions of the implementation of LMMs in R. To help readers become familiar with the features of the models and the details of carrying them out in R, the book includes a review of the most important theoretical concepts of the models. The presentation connects theory, software, and applications. It is built up incrementally, starting with a summary of the concepts underlying simpler classes of linear models like the classical regression model, and carrying them forward to LMMs. A similar step-by-step approach is used to describe the R tools for LMMs. All the classes of linear models presented in the book are illustrated using real-life data. The book also introduces several novel R tools for LMMs, including a new class of variance-covariance structures for random effects, methods for influence diagnostics, and methods for power calculations. These are collected in an R package that should assist readers in applying these and other methods presented in the text.
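The random-intercept model that such books build on can be illustrated with a short simulation. This is a minimal sketch, not the book's own code: it uses Python/NumPy rather than R, invented parameter values, and the classical ANOVA (method-of-moments) estimators for the two variance components of a balanced one-way design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a balanced one-way random-intercept model:
#   y_ij = mu + b_i + e_ij,  b_i ~ N(0, tau2),  e_ij ~ N(0, sigma2)
n_groups, n_per = 200, 10
mu, tau2, sigma2 = 5.0, 4.0, 1.0
b = rng.normal(0.0, np.sqrt(tau2), size=n_groups)          # random intercepts
y = mu + b[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(n_groups, n_per))

# Method-of-moments (ANOVA) estimators of the variance components
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))  # within
msb = n_per * ((group_means - y.mean()) ** 2).sum() / (n_groups - 1)      # between
sigma2_hat = msw                     # residual variance estimate
tau2_hat = (msb - msw) / n_per       # random-intercept variance estimate

print(f"sigma2_hat={sigma2_hat:.2f}  tau2_hat={tau2_hat:.2f}")
```

In R, the same model would typically be fitted by maximum likelihood with a call such as `lme4::lmer(y ~ 1 + (1 | group))`.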
This book illustrates the potential for computer simulation in the study of modern slavery and worker abuse, and by extension in all social issues. It lays out a philosophy of how agent-based modelling can be used in the social sciences. In addressing modern slavery, Chesney considers precarious work that is vulnerable to abuse, like sweatshop labour and prostitution, and shows how agent modelling can be used to study, understand, and fight abuse in these areas. He explores the philosophy, application, and practice of agent modelling through the popular and free software NetLogo. This topical book is grounded in the technology needed to address the messy, chaotic, real-world problems that humanity faces (in this case, the serious problem of abuse at work), but equally in the social sciences, which are needed to avoid the unintended consequences inherent in human responses. It includes a concise yet comprehensive NetLogo guide which readers can use to quickly learn this software and go on to develop complex models. This is an important book for students and researchers of computational social science and others interested in agent-based modelling.
Mathematica is the most widely used system for doing mathematical calculations by computer, including symbolic and numeric calculations and graphics. It is used in physics and other branches of science, in mathematics, in education, and in many other areas; many important results in physics would never have been obtained without the wide use of computer algebra. This book describes the basics of computer algebra and the language of Mathematica, leading toward an understanding of Mathematica that allows the reader to solve problems in physics, mathematics, and chemistry.
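The kind of symbolic calculation the blurb describes can be sketched in any computer algebra system; as a hedged stand-in for Mathematica, the open-source SymPy library is used here, with an arbitrarily chosen function:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic differentiation and integration, the core operations of a CAS
f = sp.sin(x) * sp.exp(x)
df = sp.diff(f, x)                    # derivative, computed exactly
F = sp.integrate(f, x)                # antiderivative, computed exactly

# A definite integral with an exact closed form: the Gaussian integral
val = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))   # sqrt(pi)
print(df, F, val)
```

A CAS verifies its own work: differentiating the antiderivative `F` recovers `f` exactly, something floating-point numerics can only approximate.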
This book is designed as a gentle introduction to the fascinating field of choice modeling and its practical implementation using the R language. Discrete choice analysis is a family of methods useful to study individual decision-making. With strong theoretical foundations in consumer behavior, discrete choice models are used in the analysis of health policy, transportation systems, marketing, economics, public policy, political science, urban planning, and criminology, to mention just a few fields of application. The book does not assume prior knowledge of discrete choice analysis or R, but instead strives to introduce both in an intuitive way, starting from simple concepts and progressing to more sophisticated ideas. Loaded with a wealth of examples and code, the book covers the fundamentals of data and analysis in a progressive way. Readers begin with simple data operations and the underlying theory of choice analysis and conclude by working with sophisticated models including latent class logit models, mixed logit models, and ordinal logit models with taste heterogeneity. Data visualization is emphasized to explore both the input data and the results of models. This book should be of interest to graduate students, faculty, and researchers conducting empirical work using individual-level choice data who are approaching the field of discrete choice analysis for the first time. In addition, it should interest more advanced modelers wishing to learn about the potential of R for discrete choice analysis. By embedding the treatment of choice modeling within the R ecosystem, readers benefit from learning about the larger R family of packages for data exploration, analysis, and visualization.
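At the core of the models described above is the multinomial logit: each alternative gets a systematic utility, and choice probabilities follow from a softmax over those utilities. The sketch below uses invented alternatives, attributes, and taste parameters purely for illustration (NumPy rather than R):

```python
import numpy as np

# Hypothetical alternatives described by two attributes: cost and travel time
X = np.array([[2.0, 30.0],   # car
              [1.0, 45.0],   # bus
              [3.0, 20.0]])  # train
beta = np.array([-0.8, -0.05])   # illustrative (negative) taste parameters

v = X @ beta                 # systematic utilities v_j = x_j' beta
p = np.exp(v - v.max())      # softmax, shifted by max(v) for numerical stability
p /= p.sum()                 # multinomial logit choice probabilities

print(p)                     # probabilities over {car, bus, train}, sum to 1
```

Mixed and latent-class logit models extend this by letting `beta` vary across decision-makers, which is where taste heterogeneity enters.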
The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and by an editor. The conference was attended by over 400 participants from academia, industry, and government agencies around the world, including North America, Asia, and Europe. It offered 3 keynote speeches, 7 short courses, 76 parallel scientific sessions, student paper sessions, and social events.
This Festschrift in honour of Ursula Gather's 60th birthday deals with modern topics in the field of robust statistical methods, especially for time series and regression analysis, and with statistical methods for complex data structures. The individual contributions of leading experts provide a textbook-style overview of the topic, supplemented by current research results and questions. The statistical theory and methods in this volume aim at the analysis of data which deviate from classical stringent model assumptions, which contain outlying values, and/or which have a complex structure. Written for researchers as well as master's and PhD students with a good knowledge of statistics.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experiences in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition of the field as understood by data mining experts, statisticians, and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial, and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
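The proximity-based methods mentioned above include one of the simplest and most widely used detectors: score each point by its distance to its k-th nearest neighbour, so that points far from any dense region score highest. A minimal sketch on synthetic data (all names and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))   # inliers: a standard Gaussian cloud
X[0] = [8.0, 8.0]                         # plant one obvious outlier

# k-NN distance outlier score: distance to the k-th nearest neighbour
k = 5
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
np.fill_diagonal(D, np.inf)               # exclude each point's self-distance
scores = np.sort(D, axis=1)[:, k - 1]     # k-th smallest neighbour distance

print(int(np.argmax(scores)))             # index of the most outlying point
```

The brute-force distance matrix is O(n^2); for the "extremely large" data the blurb mentions, spatial indexes or approximate nearest-neighbour search replace it.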
This volume provides essential guidance for transforming mathematics learning in schools through the use of innovative technology, pedagogy, and curriculum. It presents clear, rigorous evidence of the impact technology can have in improving students' learning of important yet complex mathematical concepts, and goes beyond a focus on technology alone to clearly explain how teacher professional development, pedagogy, curriculum, and student participation and identity each play an essential role in transforming mathematics classrooms with technology. Further, evidence of effectiveness is complemented by insightful case studies of how key factors lead to enhanced learning, including the contributions of design research, classroom discourse, and meaningful assessment. Topics include engaging students in deeply learning the important concepts in mathematics.
This book features selected papers presented at the 2nd International Conference on Advanced Computing Technologies and Applications, held at SVKM's Dwarkadas J. Sanghvi College of Engineering, Mumbai, India, from 28 to 29 February 2020. Covering recent advances in next-generation computing, the book focuses on recent developments in intelligent computing, such as linguistic computing, statistical computing, data computing and ambient applications.
"Managing Data in Motion" describes techniques that have been developed for significantly reducing the complexity of managing system interfaces and enabling scalable architectures. Author April Reeve brings over two decades of experience to present a vendor-neutral approach to moving data between computing environments and systems. Readers will learn the techniques, technologies, and best practices for managing the passage of data between computer systems and integrating disparate data together in an enterprise environment. The average enterprise's computing environment comprises hundreds to thousands of computer systems that have been built, purchased, and acquired over time. The data from these various systems needs to be integrated for reporting and analysis, shared for business transaction processing, and converted from one format to another when old systems are replaced and new systems are acquired. The management of this "data in motion" in organizations is rapidly becoming one of the biggest concerns for business and IT management. Data warehousing and conversion, real-time data integration, and cloud and "big data" applications are just a few of the challenges facing organizations and businesses today. "Managing Data in Motion" tackles these and other topics in a style easily understood by business and IT managers as well as programmers and architects.
This book discusses a variety of methods for outlier ensembles and organizes them by the specific principles with which accuracy improvements are achieved. In addition, it covers the techniques with which such methods can be made more effective. A formal classification of these methods is provided, and the circumstances in which they work well are examined. The authors cover how outlier ensembles relate (both theoretically and practically) to the ensemble techniques used commonly for other data mining problems like classification. The similarities and (subtle) differences in the ensemble techniques for the classification and outlier detection problems are explored. These subtle differences do impact the design of ensemble algorithms for the latter problem. This book can be used for courses in data mining and related curricula. Many illustrative examples and exercises are provided in order to facilitate classroom teaching. Familiarity with the outlier detection problem, and with the generic problem of ensemble analysis in classification, is assumed, because many of the ensemble methods discussed in this book are adaptations from their counterparts in the classification domain. Some techniques explained in this book, such as wagging, randomized feature weighting, and geometric subsampling, provide new insights that are not available elsewhere. Also included is an analysis of the performance of various types of base detectors and their relative effectiveness. The book is valuable for researchers and practitioners seeking to leverage ensemble methods in optimal algorithmic design.
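The ensembling idea described above (run many cheap base detectors on perturbed views of the data, normalize each member's scores, and average) can be sketched as a simple feature-bagging ensemble over k-NN distance detectors. Everything here is an illustrative assumption (synthetic data, subspace size, number of members), not the book's own algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 6))
X[0] += 6.0                                   # planted outlier in all features

def knn_scores(X, k=5):
    """k-NN distance outlier score for every row of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return np.sort(D, axis=1)[:, k - 1]

# Feature bagging: score random 3-feature subspaces, z-normalize each
# member so scales are comparable, and average to reduce variance
ensemble = np.zeros(len(X))
n_members = 20
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=3, replace=False)
    s = knn_scores(X[:, feats])
    ensemble += (s - s.mean()) / s.std()
ensemble /= n_members

print(int(np.argmax(ensemble)))               # most outlying point
```

The per-member z-normalization is the step that makes averaging meaningful: raw k-NN distances from different subspaces live on different scales.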
This Bayesian modeling book provides a self-contained entry to computational Bayesian statistics. Focusing on the most standard statistical models and backed up by real datasets and an all-inclusive R (CRAN) package called bayess, the book provides an operational methodology for conducting Bayesian inference, rather than focusing on its theoretical and philosophical justifications. Readers are empowered to participate in the real-life data analysis situations depicted here from the beginning. The stakes are high and the reader determines the outcome. Special attention is paid to the derivation of prior distributions in each case, and specific reference solutions are given for each of the models. Similarly, computational details are worked out to lead the reader towards an effective programming of the methods given in the book. In particular, all R codes are discussed with enough detail to make them readily understandable and expandable, working in conjunction with the bayess package. Bayesian Essentials with R can be used as a textbook at both undergraduate and graduate levels, as exemplified by courses given at Universite Paris Dauphine (France), University of Canterbury (New Zealand), and University of British Columbia (Canada). It is particularly useful for students in professional degree programs and for scientists wishing to analyze data the Bayesian way. The text will also enhance introductory courses on Bayesian statistics. Prerequisites for the book are an undergraduate background in probability and statistics, if not in Bayesian statistics. A strength of the text is the noteworthy emphasis on the role of models in statistical analysis. This is the new, fully revised edition of the book Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Jean-Michel Marin is Professor of Statistics at Universite Montpellier 2, France, and Head of the Mathematics and Modelling research unit.
He has written over 40 papers on Bayesian methodology and computing, and has worked closely with population geneticists over the past ten years. Christian Robert is Professor of Statistics at Universite Paris-Dauphine, France. He has written over 150 papers on Bayesian statistics and computational methods and is the author or co-author of seven books on those topics, including The Bayesian Choice (Springer, 2001), winner of the ISBA DeGroot Prize in 2004. He is a Fellow of the Institute of Mathematical Statistics, the Royal Statistical Society, and the American Statistical Association. He has been co-editor of the Journal of the Royal Statistical Society, Series B, and has served on the editorial boards of the Journal of the American Statistical Association, the Annals of Statistics, Statistical Science, and Bayesian Analysis. He is also a recipient of an Erskine Fellowship from the University of Canterbury (NZ) in 2006 and a senior member of the Institut Universitaire de France (2010-2015).
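The "operational methodology" such a book teaches starts from conjugate updating, where the posterior has a closed form. A minimal sketch for the normal-mean model with known variance (simulated data and all parameter values are invented for illustration; Python rather than the book's R):

```python
import numpy as np

rng = np.random.default_rng(7)

# Normal-mean model with known variance and a conjugate normal prior:
#   y_i ~ N(theta, sigma2),   theta ~ N(mu0, tau0_2)
sigma2 = 4.0
mu0, tau0_2 = 0.0, 10.0
y = rng.normal(3.0, np.sqrt(sigma2), size=50)

# Closed-form posterior: precisions add, means combine precision-weighted
n = len(y)
post_var = 1.0 / (1.0 / tau0_2 + n / sigma2)
post_mean = post_var * (mu0 / tau0_2 + y.sum() / sigma2)

print(f"posterior: N({post_mean:.2f}, {post_var:.3f})")
```

The posterior mean is a precision-weighted compromise between the prior mean and the sample mean, and the posterior variance is always smaller than the prior's; non-conjugate models trade this closed form for MCMC.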
This book provides new insights on the study of global environmental changes using the ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. The main advantage of this book is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed and results are provided on modeling the dynamics of the system's characteristics with regard to scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for its partners. SCAI develops numerical techniques, parallel algorithms, and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art methods from applied mathematics and information technology.
This book provides a groundbreaking introduction to likelihood inference for correlated survival data via the hierarchical (or h-) likelihood, used to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data, such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, while real-world data examples, together with the recently developed R package frailtyHL (available on CRAN), provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to researchers in the medical and genetics fields, graduate students, and PhD (bio)statisticians.
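The clustering these frailty models capture can be made concrete with a small simulation: a shared multiplicative random effect (the frailty) induces positive dependence among event times within a cluster. This is an illustrative Python/NumPy sketch with invented parameters, not the book's frailtyHL methodology:

```python
import numpy as np

rng = np.random.default_rng(5)

# Shared gamma frailty: members of cluster i share u_i ~ Gamma(1/theta, theta),
# so E[u] = 1 and Var[u] = theta; conditional hazard is lam * u_i.
n_clusters, cluster_size, theta, lam = 500, 4, 1.0, 0.1
u = rng.gamma(1.0 / theta, theta, size=n_clusters)
t = rng.exponential(1.0 / (lam * u[:, None]), size=(n_clusters, cluster_size))

# The shared frailty induces positive within-cluster association of event
# times (measured here on the log scale, where all moments are finite)
within_corr = np.corrcoef(np.log(t[:, 0]), np.log(t[:, 1]))[0, 1]
print(f"{within_corr:.2f}")
```

Integrating the frailty out of the likelihood is exactly the intractable step that motivates the h-likelihood approach described above.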
In today's world, most global companies face enormous challenges in dealing with an inflexible budget climate when complex changes are required. "Secrets to a Successful Commercial Software Implementation" will help guide business leaders to an understanding of how commercial, off-the-shelf (COTS) software like SAP, Siebel, and PeopleSoft should be applied in order to ultimately achieve significant cost savings. Project management professional Nick Berg utilizes his strong background in domestic and international Systems Applications and Products to teach others the potential benefits of implementing COTS products, such as faster deployment time, enhanced quality and reliability, reduced development risk, periodic upgrades and improvements, and an already established support system. He introduces a unique process for COTS development, presents best-practice processes for COTS projects, and defines the architecture procedures within the COTS environment. Finally, he walks through each phase of a COTS-based project by introducing the objectives, road map, roles, activities, artifacts, and milestones of the phase. The cultural impact on an organization facing this decision is profound, but if implemented with forethought, planning, and dedicated guidance and execution, the benefits to an organization will be far-reaching and significant.
Organizations amid Changing Markets: A.-W. Scheer, R. Borowsky, U. Markus: New Markets, New Media, New Methods - a Roadmap to the Agile Organization; B. Anderer, K. Knue: Secure Transactions in Electronic Banking and Electronic Commerce; D. Budaus: Public-Private Partnership as an Innovative Organizational Form; U. Dalkmann, F. Karbenn: Energy Billing in Transition - Reaching the Customer through High-Performance 'Customer Service'; E. Frese: From Planned Economy to Market Economy - within the Company Too?; P. Neef, M. Moeller: Success on the Net; E. Rauch: Bank Mergers. - Methods of Organizational Development: P. Hintermann, W. Hoffmann, C.-P. Koch: Process-Oriented Information Systems as a Prerequisite for Successful Company Integration; M. Lapp: Intranet - the Internal Internet; R. Minz: Understanding IT as a Management Task; A. Muller: From Business Process Design to a Process-Oriented Management System; S. Neumann, G. Fenk, B. Fluegge, J.T. Finerty: Knowledge Management Systems - Making Optimal Use of the 'Production Factor Knowledge'; M. Pastowsky, F. Hausen-Mabilon: Designing Communication Processes in the Development Department: Framework Conditions, Procedures, and Examples; A. Poscay: Using New Media to Build an Efficient Consulting Network. - M. Reiss: Change in the Management of Change; J. Hagemeyer, R. Rolles, Y. Schmidt, J. Bachmann, A. Haas: Dynamic Processes through Workflow-Centered Business Process Optimization: Challenges in Practice; J. Schweitzer, H. Baltes, K. Merschjahn, G. Schneider: Professional Telecooperation for Team Management in Virtual Enterprises; H.-G. Servatius: From Reengineering to Knowledge Management. - Requirements for Controlling: J. Fiedler, G. Barzel, K. Vernau: Cost and Performance Accounting as a Management Instrument - Comprehensive and Rapid Introduction in a Large German City; H. Frei: Quality Controlling on the Way to the European Quality Award (EQA); O. Froehling: Controlling Goes Multimedia; P. Hirschmann: Activity-Based Cost Assessment of Services to Improve Internal Cost Allocation; P. Horvath: Implementing Strategies Successfully with the Balanced Scorecard; R. Mahnkopf: New Municipal Accounting - a New Method for a Demanding Administrative Information System; R. Moser: New Perspectives through Actual-Cost Accounting; H. Neukam: New Controlling through Integrated Standard Software; K. Vikas: Trends and New Developments in Controlling; A. Hoffmann, K. Wolf: Value-Oriented Management for IT Investments Too?; J. Zinkernagel: Process Controlling in the Development Process of an Automotive Supplier; W. Kraemer, F. Milius, V. Zimmermann: Electronic Education Markets for Integrated Knowledge and Quality Management.
This book introduces the ade4 package for R, which provides multivariate methods for the analysis of ecological data. It is implemented around the mathematical concept of the duality diagram, and provides a unified framework for multivariate analysis. The authors offer a detailed presentation of the theoretical framework of the duality diagram and also of its application to real-world ecological problems. These two goals may seem contradictory, as they concern two separate groups of scientists, namely statisticians and ecologists. However, statistical ecology has become a scientific discipline of its own, and the good use of multivariate data analysis methods by ecologists implies a fair knowledge of the mathematical properties of these methods. The organization of the book is based on ecological questions, but these questions correspond to particular classes of data analysis methods. The first chapters present both standard and multiway data analysis methods. Further chapters are dedicated, for example, to the analysis of spatial data, of phylogenetic structures, and of biodiversity patterns. One chapter deals with multivariate data analysis graphs. In each chapter, the basic mathematical definitions of the methods and the outputs of the R functions available in ade4 are detailed in two different boxes. The text of the book itself can be read independently from these boxes. Thus the book offers the opportunity to find information about the ecological situation from which a question arises alongside the mathematical properties of methods that can be applied to answer this question, as well as the details of software outputs. Each example and all the graphs in this book come with executable R code.
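The simplest instance of the duality-diagram framework the book describes is centred principal component analysis. As a hedged sketch (Python/NumPy instead of ade4's `dudi.pca`, with an invented toy abundance table):

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy "sites x species" abundance table (hypothetical data)
X = rng.poisson(5, size=(30, 8)).astype(float)

# Centred PCA via the SVD of the centred table
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                      # site coordinates on the principal axes
explained = s**2 / (s**2).sum()     # proportion of variance per axis

print(explained[:2])                # share carried by the first two axes
```

The duality diagram generalizes this by attaching row and column weight metrics to the table, which is how ade4 expresses correspondence analysis and its relatives in the same formalism.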
Biological and biomedical studies have entered a new era over the past two decades thanks to the wide use of mathematical models and computational approaches. Computational biology, which twenty years ago was merely a theoretician's fantasy, has become a booming reality. Enthusiasm for computational biology and theoretical approaches is evidenced in articles hailing the arrival of what are variously called quantitative biology, bioinformatics, theoretical biology, and systems biology. New technologies and data resources in genetics, such as the International HapMap project, enable large-scale studies, such as genome-wide association studies, which could potentially identify most common genetic variants, as well as rare variants of the human DNA, that may alter an individual's susceptibility to disease and the response to medical treatment. Meanwhile, multi-electrode recording from behaving animals makes it feasible to control the animal's mental activity, which could potentially lead to the development of useful brain-machine interfaces. Embracing the sheer volume of genetic, genomic, and other types of data, an essential first step is to avoid drowning the true signal in the data. The theoretical approach to biology has emerged as a powerful and stimulating research paradigm in biological studies, which in turn leads to a new research paradigm in mathematics, physics, and computer science and moves forward through the interplay among experimental studies and outcomes, simulation studies, and theoretical investigations.