Growth curve models (GCM) in longitudinal studies are widely used to
model population size, body height, biomass, fungal growth, and
other variables in the biological sciences, but these statistical
methods for modeling growth curves and analyzing longitudinal data
also extend to general statistics, economics, public health,
demographics, epidemiology, statistical quality control (SQC), sociology, nano-biotechnology,
fluid mechanics, and other applied areas. There is no
one-size-fits-all approach to growth measurement. The selected
papers in this volume build on presentations from the GCM workshop
held at the Indian Statistical Institute, Giridih, on March 28-29,
2016. They represent recent trends in GCM research on different
subject areas, both theoretical and applied. The book offers tools and avenues for further work, through new techniques and modifications of existing ones. The volume includes original
studies, theoretical findings and case studies from a wide range of
applied work, and these contributions have been externally refereed
to the high quality standards of leading journals in the field.
This uniquely accessible book helps readers use CABology to solve
real-world business problems and drive real competitive advantage.
It provides reliable, concise information on the real benefits,
usage and operationalization aspects of utilizing the "Trio Wave"
of cloud, analytic and big data. Anyone who thinks that the game
changing technology is slow paced needs to think again. This book
opens readers' eyes to the fact that the dynamics of global
technology and business are changing. Moreover, it argues that
businesses must transform themselves in alignment with the Trio
Wave if they want to survive and excel in the future. CABology
focuses on the art and science of optimizing business goals to deliver true value and benefits to the customer through cloud, analytics and big data. It offers businesses of all sizes a structured
and comprehensive way of discovering the real benefits, usage and
operationalization aspects of utilizing the Trio Wave.
This book features research contributions from The Abel Symposium
on Statistical Analysis for High Dimensional Data, held in Nyvagar,
Lofoten, Norway, in May 2014. The focus of the symposium was on
statistical and machine learning methodologies specifically
developed for inference in "big data" situations, with particular
reference to genomic applications. The contributors, who are among
the most prominent researchers on the theory of statistics for high-dimensional inference, present new theories and methods, as well as
challenging applications and computational solutions. Specific
themes include, among others, variable selection and screening,
penalised regression, sparsity, thresholding, low-dimensional structures, computational challenges, non-convex situations,
learning graphical models, sparse covariance and precision
matrices, semi- and non-parametric formulations, multiple testing,
classification, factor models, clustering, and preselection.
Highlighting cutting-edge research and casting light on future
research directions, the contributions will benefit graduate
students and researchers in computational biology, statistics and
the machine learning community.
This book presents a comprehensive study of multivariate time
series with linear state space structure. The emphasis is placed both on the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it
investigates the relationship between VARMA and state space models,
including canonical forms. It also highlights the relationship
between Wiener-Kolmogorov and Kalman filtering with both an infinite and a finite sample. The strength of the book also lies in
the numerous algorithms included for state space models that take
advantage of the recursive nature of the models. Many of these
algorithms can be made robust, fast, reliable and efficient. The
book is accompanied by a MATLAB package called SSMMATLAB and a
webpage presenting implemented algorithms with many examples and
case studies. Though it lays a solid theoretical foundation, the
book also focuses on practical application, and includes exercises
in each chapter. It is intended for researchers and students
working with linear state space models, and who are familiar with
linear algebra and possess some knowledge of statistics.
This book is a selection of peer-reviewed contributions presented
at the third Bayesian Young Statisticians Meeting, BAYSM 2016,
Florence, Italy, June 19-21. The meeting provided a unique
opportunity for young researchers, M.S. students, Ph.D. students,
and postdocs dealing with Bayesian statistics to connect with the
Bayesian community at large, to exchange ideas, and to network with
others working in the same field. The contributions develop and
apply Bayesian methods in a variety of fields, ranging from the
traditional (e.g., biostatistics and reliability) to the most
innovative ones (e.g., big data and networks).
This book offers an original and broad exploration of the
fundamental methods in Clustering and Combinatorial Data Analysis,
presenting new formulations and ideas within this very active
field. With extensive introductions, formal and mathematical
developments and real case studies, this book provides readers with
a deeper understanding of the mutual relationships between these
methods, which are clearly expressed with respect to three facets:
logical, combinatorial and statistical. Using relational
mathematical representation, all types of data structures can be
handled in precise and unified ways, which the author highlights in three stages: clustering a set of descriptive attributes; clustering a set of objects or a set of object categories; and establishing correspondence between these two dual clusterings. Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical
Data Analysis and Clustering will be a valuable resource for
students and researchers who are interested in the areas of Data
Analysis, Clustering, Data Mining and Knowledge Discovery.
Computer Mathematics - 9th Asian Symposium (ASCM2009), Fukuoka, December 2009; 10th Asian Symposium (ASCM2012), Beijing, October 2012; Contributed Papers and Invited Talks (Hardcover, 2014 ed.)
Ruyong Feng, Wen-shin Lee, Yosuke Sato
This book covers original research and the latest advances in
symbolic, algebraic and geometric computation; computational
methods for differential and difference equations,
symbolic-numerical computation; mathematics software design and
implementation; and scientific and engineering applications based
on features, invited talks, special sessions and contributed papers
presented at the 9th (in Fukuoka, Japan, in 2009) and 10th (in Beijing, China, in 2012) Asian Symposium on Computer Mathematics
(ASCM). Thirty selected and refereed articles in the book present
the conference participants' ideas and views on researching
mathematics using computers.
The advancement of computing and communication technologies has profoundly accelerated the development and deployment of complex enterprise systems, making their implementation increasingly important across corporate and industrial organizations worldwide. "The Handbook of Research on Enterprise Systems" addresses the field of
enterprise systems with more breadth and depth than any other
resource, covering progressive technologies, leading theories, and
advanced applications. Comprising over 25 articles from 47 expert
authors from around the globe, this exhaustive collection of highly
developed research extends the field of enterprise systems to offer libraries an unrivaled reference. This title features: 27
authoritative contributions by over 45 of the world's leading
experts on enterprise systems from 16 countries; comprehensive
coverage of each specific topic, highlighting recent trends and
describing the latest advances in the field; more than 800
references to existing literature and research on enterprise
systems; and, a compendium of over 200 key terms with detailed
definitions. It is organized by topic and indexed, making it a convenient reference for all IT/IS scholars and
professionals. It also features cross-referencing of key terms,
figures, and information pertinent to enterprise systems.
This textbook examines empirical linguistics from a theoretical
linguist's perspective. It provides both a theoretical discussion
of what quantitative corpus linguistics entails and detailed,
hands-on, step-by-step instructions to implement the techniques in
the field. The statistical methodology and R-based coding from this
book teach readers basic and then more advanced skills for working with large data sets in their linguistics research and studies.
Massive data sets are now more than ever the basis for work that
ranges from usage-based linguistics to the far reaches of applied
linguistics. This book presents much of the methodology in a
corpus-based approach. However, the corpus-based methods in this
book are also essential components of recent developments in
sociolinguistics, historical linguistics, computational
linguistics, and psycholinguistics. Material from the book will
also be appealing to researchers in digital humanities and the many
non-linguistic fields that use textual data analysis and text-based
sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate
each chapter with accompanying data sets, R code, and exercises for
use by readers. This book may be used in advanced undergraduate
courses, graduate courses, and self-study.
Most books on linear systems for undergraduates cover discrete and
continuous systems material together in a single volume. Such books
also include topics in discrete and continuous filter design, and
discrete and continuous state-space representations. However, with
this magnitude of coverage, the student typically gets a little of
both discrete and continuous linear systems but not enough of
either. Minimal coverage of discrete linear systems material is
acceptable provided that there is ample coverage of continuous
linear systems. On the other hand, minimal coverage of continuous
linear systems does no justice to either of the two areas. Under
the best of circumstances, a student needs a solid background in
both these subjects. Continuous linear systems and discrete linear systems are broad topics, and each merits a book devoted to its respective subject matter. The objective of this two-volume set is to present the needed material for each at the undergraduate level, using MATLAB (R) (The MathWorks Inc.).
This book is a comprehensive guide to qualitative comparative
analysis (QCA) using R. Using Boolean algebra to implement
principles of comparison used by scholars engaged in the
qualitative study of macro social phenomena, QCA acts as a bridge
between the quantitative and the qualitative traditions. The QCA
package for R, created by the author, facilitates QCA within a
graphical user interface. This book provides the most current
information on the latest version of the QCA package, which
combines written commands with a cross-platform interface.
Beginning with a brief introduction to the concept of QCA, this
book moves from theory to calibration, from analysis to factorization, and covers all the key areas of QCA in between.
Chapters one through three are introductory, familiarizing the
reader with R, the QCA package, and elementary set theory. The next
few chapters introduce important applications of the package
beginning with calibration, analysis of necessity, analysis of
sufficiency, parameters of fit, negation and factorization, and the
construction of Venn diagrams. The book concludes with extensions
to the classical package, including temporal applications and panel
data. Providing a practical introduction to an increasingly
important research tool for the social sciences, this book will be
indispensable for students, scholars, and practitioners interested
in conducting qualitative research in political science, sociology,
business and management, and evaluation studies.
This volume collects selected, peer-reviewed contributions from the
2nd Conference of the International Society for Nonparametric
Statistics (ISNPS), held in Cadiz, Spain, on June 11-16, 2014,
and sponsored by the American Statistical Association, the
Institute of Mathematical Statistics, the Bernoulli Society for
Mathematical Statistics and Probability, the Journal of
Nonparametric Statistics and Universidad Carlos III de Madrid. The
15 articles are a representative sample of the 336 contributed
papers presented at the conference. They cover topics such as
high-dimensional data modelling, inference for stochastic processes
and for dependent data, nonparametric and goodness-of-fit testing,
nonparametric curve estimation, object-oriented data analysis, and
semiparametric inference. The aim of the ISNPS 2014 conference was
to bring together recent advances and trends in several areas of
nonparametric statistics in order to facilitate the exchange of
research ideas, promote collaboration among researchers from around
the globe, and contribute to the further development of the field.
This text presents a wide-ranging and rigorous overview of nearest
neighbor methods, one of the most important paradigms in machine
learning. Now in one self-contained volume, this book
systematically covers key statistical, probabilistic, combinatorial
and geometric ideas for understanding, analyzing and developing
nearest neighbor methods. Gerard Biau is a professor at Universite
Pierre et Marie Curie (Paris). Luc Devroye is a professor at the
School of Computer Science at McGill University (Montreal).
Corporations and governmental agencies of all sizes are embracing a
new generation of enterprise-scale business intelligence (BI) and
data warehousing (DW), and very often appoint a single senior-level
individual to serve as the Enterprise BI/DW Program Manager. This
book is the essential guide to the incremental and iterative
build-out of a successful enterprise-scale BI/DW program comprising multiple underlying projects, and to what the Enterprise Program
Manager must successfully accomplish to orchestrate the many moving
parts in the quest for true enterprise-scale business intelligence
and data warehousing. Author Alan Simon has served as an enterprise
business intelligence and data warehousing program management
advisor to many of his clients, and spent an entire year with a
single client as the adjunct consulting director for a $10 million
enterprise data warehousing (EDW) initiative. He brings a wealth of
knowledge about best practices, risk management, organizational
culture alignment, and other Critical Success Factors (CSFs) to the
discipline of enterprise-scale business intelligence and data
warehousing.