This book is a selection of peer-reviewed contributions presented
at the third Bayesian Young Statisticians Meeting, BAYSM 2016,
Florence, Italy, June 19-21. The meeting provided a unique
opportunity for young researchers, M.S. students, Ph.D. students,
and postdocs dealing with Bayesian statistics to connect with the
Bayesian community at large, to exchange ideas, and to network with
others working in the same field. The contributions develop and
apply Bayesian methods in a variety of fields, ranging from the
traditional (e.g., biostatistics and reliability) to the most
innovative ones (e.g., big data and networks).
This book offers an original and broad exploration of the
fundamental methods in Clustering and Combinatorial Data Analysis,
presenting new formulations and ideas within this very active
field. With extensive introductions, formal and mathematical
developments and real case studies, this book provides readers with
a deeper understanding of the mutual relationships between these
methods, which are clearly expressed with respect to three facets:
logical, combinatorial and statistical. Using relational
mathematical representation, all types of data structures can be
handled in precise and unified ways, which the author highlights in
three stages: clustering a set of descriptive attributes; clustering
a set of objects or a set of object categories; and establishing
correspondence between these two dual clusterings. Tools for
interpreting the reasons for a given cluster or clustering are also
included. Foundations and Methods in Combinatorial and Statistical
Data Analysis and Clustering will be a valuable resource for
students and researchers who are interested in the areas of Data
Analysis, Clustering, Data Mining and Knowledge Discovery.
 |
Computer Mathematics
- 9th Asian Symposium (ASCM2009), Fukuoka, December 2009, 10th Asian Symposium (ASCM2012), Beijing, October 2012, Contributed Papers and Invited Talks
(Hardcover, 2014 ed.)
Ruyong Feng, Wen-shin Lee, Yosuke Sato
This book covers original research and the latest advances in
symbolic, algebraic and geometric computation; computational
methods for differential and difference equations,
symbolic-numerical computation; mathematics software design and
implementation; and scientific and engineering applications. It is
based on invited talks, special sessions, and contributed papers
presented at the 9th (Fukuoka, Japan, 2009) and 10th (Beijing,
China, 2012) Asian Symposium on Computer Mathematics
(ASCM). Thirty selected and refereed articles in the book present
the conference participants' ideas and views on researching
mathematics using computers.
The advancement of computing and communication technologies has
profoundly accelerated the development and deployment of complex
enterprise systems, making their implementation important
across corporate and industrial organizations worldwide. "The
Handbook of Research on Enterprise Systems" addresses the field of
enterprise systems with more breadth and depth than any other
resource, covering progressive technologies, leading theories, and
advanced applications. Comprising over 25 articles from 47 expert
authors from around the globe, this exhaustive collection of highly
developed research extends the field of enterprise systems to offer
libraries an unrivaled reference. This title features: 27
authoritative contributions by over 45 of the world's leading
experts on enterprise systems from 16 countries; comprehensive
coverage of each specific topic, highlighting recent trends and
describing the latest advances in the field; more than 800
references to existing literature and research on enterprise
systems; and, a compendium of over 200 key terms with detailed
definitions. It is organized by topic and indexed, making it a
convenient method of reference for all IT/IS scholars and
professionals. It also features cross-referencing of key terms,
figures, and information pertinent to enterprise systems.
This textbook examines empirical linguistics from a theoretical
linguist's perspective. It provides both a theoretical discussion
of what quantitative corpus linguistics entails and detailed,
hands-on, step-by-step instructions to implement the techniques in
the field. The statistical methodology and R-based coding from this
book teach readers the basic and then more advanced skills to work
with large data sets in their linguistics research and studies.
Massive data sets are now more than ever the basis for work that
ranges from usage-based linguistics to the far reaches of applied
linguistics. This book presents much of the methodology in a
corpus-based approach. However, the corpus-based methods in this
book are also essential components of recent developments in
sociolinguistics, historical linguistics, computational
linguistics, and psycholinguistics. Material from the book will
also be appealing to researchers in digital humanities and the many
non-linguistic fields that use textual data analysis and text-based
sensorimetrics. Chapters cover topics including corpus processing,
frequency data, and clustering methods. Case studies illustrate
each chapter with accompanying data sets, R code, and exercises for
use by readers. This book may be used in advanced undergraduate
courses, graduate courses, and self-study.
This book is a comprehensive guide to qualitative comparative
analysis (QCA) using R. Using Boolean algebra to implement
principles of comparison used by scholars engaged in the
qualitative study of macro social phenomena, QCA acts as a bridge
between the quantitative and the qualitative traditions. The QCA
package for R, created by the author, facilitates QCA within a
graphical user interface. This book provides the most current
information on the latest version of the QCA package, which
combines written commands with a cross-platform interface.
Beginning with a brief introduction to the concept of QCA, this
book moves from theory to calibration, from analysis to
factorization, covering all the key areas of QCA in between.
Chapters one through three are introductory, familiarizing the
reader with R, the QCA package, and elementary set theory. The next
few chapters introduce important applications of the package
beginning with calibration, analysis of necessity, analysis of
sufficiency, parameters of fit, negation and factorization, and the
construction of Venn diagrams. The book concludes with extensions
to the classical package, including temporal applications and panel
data. Providing a practical introduction to an increasingly
important research tool for the social sciences, this book will be
indispensable for students, scholars, and practitioners interested
in conducting qualitative research in political science, sociology,
business and management, and evaluation studies.
This volume collects selected, peer-reviewed contributions from the
2nd Conference of the International Society for Nonparametric
Statistics (ISNPS), held in Cadiz, Spain, on June 11-16, 2014,
and sponsored by the American Statistical Association, the
Institute of Mathematical Statistics, the Bernoulli Society for
Mathematical Statistics and Probability, the Journal of
Nonparametric Statistics and Universidad Carlos III de Madrid. The
15 articles are a representative sample of the 336 contributed
papers presented at the conference. They cover topics such as
high-dimensional data modelling, inference for stochastic processes
and for dependent data, nonparametric and goodness-of-fit testing,
nonparametric curve estimation, object-oriented data analysis, and
semiparametric inference. The aim of the ISNPS 2014 conference was
to bring together recent advances and trends in several areas of
nonparametric statistics in order to facilitate the exchange of
research ideas, promote collaboration among researchers from around
the globe, and contribute to the further development of the field.
This text presents a wide-ranging and rigorous overview of nearest
neighbor methods, one of the most important paradigms in machine
learning. Now in one self-contained volume, this book
systematically covers key statistical, probabilistic, combinatorial
and geometric ideas for understanding, analyzing and developing
nearest neighbor methods. Gerard Biau is a professor at Universite
Pierre et Marie Curie (Paris). Luc Devroye is a professor at the
School of Computer Science at McGill University (Montreal).
Pulsar timing is a promising method for detecting gravitational
waves in the nano-Hertz band. In his prize-winning Ph.D. thesis,
Rutger van Haasteren shows how to take thousands of seemingly
random timing residuals measured by pulsar observers and extract
information about the presence and character of the gravitational
waves in the nano-Hertz band that are washing over our Galaxy.
The author presents a sophisticated
mathematical algorithm that deals with this issue. His algorithm is
probably the most well-developed of those that are currently in use
in the Pulsar Timing Array community. In chapter 3, the
gravitational-wave memory effect is described. This is one of the
first descriptions of this interesting effect in relation to
pulsar timing, which may become observable in future Pulsar Timing
Array projects. The last part of the work is dedicated to an effort
to combine the European pulsar timing data sets in order to search
for gravitational waves. This study has placed the most stringent
limit to date on the intensity of gravitational waves that are
produced by pairs of supermassive black holes dancing around each
other in distant galaxies, as well as those that may be produced by
vibrating cosmic strings. For the innovative work presented in this
thesis, spanning several directions of the search for gravitational
waves by pulsar timing, Rutger van Haasteren won the 2011 GWIC
Thesis Prize of the Gravitational Wave International Community.
This book discusses recent developments in mathematical programming
and game theory, and the application of several mathematical models
to problems in finance, games, economics and graph theory. All
contributing authors are eminent researchers in their respective
fields, from across the world. This book contains a collection of
selected papers presented at the 2017 Symposium on Mathematical
Programming and Game Theory, held in New Delhi on 9-11 January 2017.
Researchers, professionals and graduate students will find the book
an essential resource for current work in mathematical programming,
game theory and their applications in finance, economics and graph
theory. The symposium provided a forum for new developments and
applications of mathematical programming and game theory, as well
as an excellent opportunity to disseminate the latest major
achievements and to explore new directions and perspectives.
Tourism is one of the leading industries worldwide. The magnitude
of growth in tourism will bring both opportunities and problems to
source and destination markets in years to come, especially in the
internal and external exchange of information in the industry.
"Information and Communication Technologies in Support of the
Tourism Industry" examines the process of transformation as it
relates to the tourism industry, and the changes brought to that
industry by modern electronic communications. The book covers not
only geographically supportive communication technologies, but also
cultural, economic, marketing, social, and regional issues.
In-depth analyses range from the use
of the Internet to supply information to the emerging patterns of
tourist decision making and investments.
This volume conveys some of the surprises, puzzles and success
stories in high-dimensional and complex data analysis and related
fields. Its peer-reviewed contributions showcase recent advances in
variable selection, estimation and prediction strategies for a host
of useful models, as well as essential new developments in the
field. The continued and rapid advancement of modern technology now
allows scientists to collect data of unprecedented size and
complexity. Examples include epigenomic data, genomic
data, proteomic data, high-resolution image data, high-frequency
financial data, functional and longitudinal data, and network data.
Simultaneous variable selection and estimation is one of the key
statistical problems involved in analyzing such big and complex
data. The purpose of this book is to stimulate research and foster
interaction between researchers in the area of high-dimensional
data analysis. More concretely, its goals are to: 1) highlight and
expand the breadth of existing methods in big data and
high-dimensional data analysis and their potential for the
advancement of both the mathematical and statistical sciences; 2)
identify important directions for future research in the theory of
regularization methods, in algorithmic development, and in
methodologies for different application areas; and 3) facilitate
collaboration between theoretical and subject-specific researchers.
Corporations and governmental agencies of all sizes are embracing a
new generation of enterprise-scale business intelligence (BI) and
data warehousing (DW), and very often appoint a single senior-level
individual to serve as the Enterprise BI/DW Program Manager. This
book is the essential guide to the incremental and iterative
build-out of a successful enterprise-scale BI/DW program comprising
multiple underlying projects, and what the Enterprise Program
Manager must successfully accomplish to orchestrate the many moving
parts in the quest for true enterprise-scale business intelligence
and data warehousing. Author Alan Simon has served as an enterprise
business intelligence and data warehousing program management
advisor to many of his clients, and spent an entire year with a
single client as the adjunct consulting director for a $10 million
enterprise data warehousing (EDW) initiative. He brings a wealth of
knowledge about best practices, risk management, organizational
culture alignment, and other Critical Success Factors (CSFs) to the
discipline of enterprise-scale business intelligence and data
warehousing.