This easy-to-follow applied book on semiparametric regression methods using R is intended to close the gap between the available methodology and its use in practice. Semiparametric regression has a large literature, but much of it is geared towards data analysts who have advanced knowledge of statistical methods. While R now has a great deal of semiparametric regression functionality, many of these developments have not trickled down to rank-and-file statistical analysts. The authors assemble a broad range of semiparametric regression R analyses and put them in a form that is useful for applied researchers. There are chapters devoted to penalized splines, generalized additive models, grouped data, bivariate extensions of penalized splines, and spatial semiparametric regression models. Where feasible, the R code is provided in the text; the book is also accompanied by an external website complete with datasets and R code. Because of its flexibility, semiparametric regression has proven to be of great value, with many applications in fields as diverse as astronomy, biology, medicine, economics, and finance. This book is intended for applied statistical analysts who have some familiarity with R.
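Not taken from the book, but as a rough sketch of the kind of analysis it describes, a penalized-spline smooth can be fit in R as a generalized additive model with the mgcv package (assumed installed); the simulated data and variable names below are purely illustrative.

```r
# Minimal sketch (hypothetical data): a penalized-spline smooth fit as a GAM in R.
library(mgcv)

set.seed(1)
df <- data.frame(x = runif(200))
df$y <- sin(2 * pi * df$x) + rnorm(200, sd = 0.3)

fit <- gam(y ~ s(x), data = df)   # s(x) is a penalized regression spline
summary(fit)
plot(fit, shade = TRUE)           # estimated smooth with a confidence band
```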
This volume conveys some of the surprises, puzzles and success stories in high-dimensional and complex data analysis and related fields. Its peer-reviewed contributions showcase recent advances in variable selection, estimation and prediction strategies for a host of useful models, as well as essential new developments in the field. The continued and rapid advancement of modern technology now allows scientists to collect data of unprecedented size and complexity. Examples include epigenomic data, genomic data, proteomic data, high-resolution image data, high-frequency financial data, functional and longitudinal data, and network data. Simultaneous variable selection and estimation is one of the key statistical problems involved in analyzing such big and complex data. The purpose of this book is to stimulate research and foster interaction between researchers in the area of high-dimensional data analysis. More concretely, its goals are to: 1) highlight and expand the breadth of existing methods in big data and high-dimensional data analysis and their potential for the advancement of both the mathematical and statistical sciences; 2) identify important directions for future research in the theory of regularization methods, in algorithmic development, and in methodologies for different application areas; and 3) facilitate collaboration between theoretical and subject-specific researchers.
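The book itself is a research volume, but as a hedged illustration of the regularization and variable-selection methods it discusses, a lasso fit in R with the glmnet package (assumed installed) might look like the sketch below; the data are simulated.

```r
# Minimal sketch (simulated data): lasso variable selection with glmnet.
library(glmnet)

set.seed(1)
n <- 100; p <- 50
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] - 2 * x[, 2] + rnorm(n)   # only the first two predictors matter

cv <- cv.glmnet(x, y, alpha = 1)      # alpha = 1 gives the lasso penalty
coef(cv, s = "lambda.min")            # sparse coefficients: most entries are zero
```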
This book presents a theoretical and practical overview of computational modeling in bioengineering, focusing on a range of applications including electrical stimulation of neural and cardiac tissue, implantable drug delivery, cancer therapy, biomechanics, cardiovascular dynamics, as well as fluid-structure interaction for modeling of organs, tissues, cells and devices. It covers the basic principles of modeling and simulation with ordinary and partial differential equations using MATLAB and COMSOL Multiphysics numerical software. The target audience primarily comprises postgraduate students and researchers, but the book may also be beneficial for practitioners in the medical device industry.
Get an introduction to functional data structures using R, write more effective code, and gain performance for your programs. This book teaches you workarounds because data in functional languages is not mutable: for example, you'll learn how to change variable-value bindings by modifying environments, which can be exploited to emulate pointers and implement traditional data structures. You'll also see how, by abandoning traditional data structures, you can manipulate structures by building new versions rather than modifying them. You'll discover how these so-called functional data structures are different from the traditional data structures you might know, but are worth understanding to do serious algorithmic programming in a functional language such as R. By the end of Functional Data Structures in R, you'll understand the choices to make in order to most effectively work with data structures when you cannot modify the data itself. These techniques are especially applicable for algorithmic development important in big data, finance, and other data science applications. What you'll learn: carry out algorithmic programming in R; use abstract data structures; work with both immutable and persistent data; emulate pointers and implement traditional data structures in R; and build new versions of traditional data structures. Who this book is for: experienced or advanced programmers with at least a comfort level with R; some experience with data structures is recommended.
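As a minimal sketch of the two ideas mentioned above (not the book's own code), the snippet below uses an R environment as a mutable cell to emulate a pointer, and builds a small persistent list whose "updates" create new versions rather than modifying the original.

```r
# Minimal sketch: an R environment used as a mutable cell (emulating a pointer),
# contrasted with a persistent list that is "updated" by building a new version.
make_cell <- function(value) { e <- new.env(); e$value <- value; e }
set_cell  <- function(cell, value) cell$value <- value
get_cell  <- function(cell) cell$value

cell <- make_cell(1)
set_cell(cell, 42)        # modifies in place, unlike ordinary copied R values
get_cell(cell)            # 42

cons <- function(head, tail = NULL) list(head = head, tail = tail)
xs <- cons(1, cons(2, cons(3)))   # 1 -> 2 -> 3
ys <- cons(0, xs)                 # new list that shares xs; xs itself is unchanged
```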
This textbook presents the basic concepts and methods of fluid mechanics, including Lagrangian and Eulerian descriptions, tensors of stresses and strains, continuity, momentum, energy, thermodynamics laws, and similarity theory. The models and their solutions are presented within a context of the mechanics of multiphase media. The treatment fully utilizes the computer algebra and software system Mathematica (R) to both develop concepts and help the reader to master modern methods of solving problems in fluid mechanics. Topics and features: a glossary of over thirty Mathematica (R) computer programs; an extensive, self-contained appendix of Mathematica (R) functions and their use; chapter coverage of the mechanics of multiphase heterogeneous media; detailed coverage of the theory of shock waves in gas dynamics; a thorough discussion of the aerohydrodynamics of ideal and viscous fluids and gases; complete worked examples with detailed solutions; and a problem-solving approach. Foundations of Fluid Mechanics with Applications is a complete and accessible text or reference for graduates and professionals in mechanics, applied mathematics, physical sciences, materials science, and engineering. It is an essential resource for the study and use of modern solution methods for problems in fluid mechanics and the underlying mathematical models. The present, softcover reprint is designed to make this classic textbook available to a wider audience.
This book provides a practical approach to designing and implementing a Knowledge Management (KM) strategy. The book explains how to design a KM strategy so as to align business goals with KM objectives. It also presents an approach for implementing the strategy so as to make it sustainable. It covers all basic KM concepts, the components of KM and the steps required for designing a KM strategy. As a result, the book can be used by beginners as well as practitioners. Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers. Knowledge is considered to be the learning that results from experience and is embedded within individuals. Sometimes knowledge is gained through critical thinking, watching others, and observing the results of others. These observations then form a pattern which is converted into a generic form as knowledge. This implies that knowledge can be formed only after data (which is generated through experience or observation) is grouped into information, and this information pattern is then generalized into knowledge. However, dissemination and acceptance of this knowledge becomes a key factor in knowledge management. The knowledge pyramid represents the usual concept of knowledge transformations, where data is transformed into information, and information is transformed into knowledge. Many organizations have struggled to manage knowledge and translate it into business benefits. This book is an attempt to show them how it can be done.
This book focuses on the developing field of building probability models with the power of symbolic algebra systems. The book combines the use of symbolic algebra with probabilistic/stochastic applications and highlights the applications in a variety of contexts. The research explored in each chapter is unified by the use of A Probability Programming Language (APPL) to achieve the modeling objectives. APPL, as a research tool, enables a probabilist or statistician to explore new ideas, methods, and models. Furthermore, as an open-source language, it sets the foundation for future algorithms to augment the original code. Computational Probability Applications comprises fifteen chapters, each presenting a specific application of computational probability using the APPL modeling and computer language. The chapter topics include using the inverse gamma as a survival distribution, linear approximations of probability density functions, and moment-ratio diagrams for univariate distributions. These works highlight interesting examples, often done by undergraduate and graduate students, that can serve as templates for future work. In addition, this book should appeal to researchers and practitioners in a range of fields including probability, statistics, engineering, finance, neuroscience, and economics.
This book provides comprehensive coverage of the field of outlier analysis from a computer science point of view. It integrates methods from data mining, machine learning, and statistics within the computational framework and therefore appeals to multiple communities. The chapters of this book can be organized into three categories: Basic algorithms: Chapters 1 through 7 discuss the fundamental algorithms for outlier analysis, including probabilistic and statistical methods, linear methods, proximity-based methods, high-dimensional (subspace) methods, ensemble methods, and supervised methods. Domain-specific methods: Chapters 8 through 12 discuss outlier detection algorithms for various domains of data, such as text, categorical data, time-series data, discrete sequence data, spatial data, and network data. Applications: Chapter 13 is devoted to various applications of outlier analysis. Some guidance is also provided for the practitioner. The second edition of this book is more detailed and is written to appeal to both researchers and practitioners. Significant new material has been added on topics such as kernel methods, one-class support-vector machines, matrix factorization, neural networks, outlier ensembles, time-series methods, and subspace methods. It is written as a textbook and can be used for classroom teaching.
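The book spans many families of methods; as a hedged, generic illustration of a probabilistic/statistical detector of the kind covered in its early chapters (not an example from the book), the base-R snippet below scores observations by Mahalanobis distance on simulated data.

```r
# Minimal sketch: a simple statistical outlier score using the Mahalanobis
# distance (base R only); observations with large distances are flagged.
set.seed(1)
x <- matrix(rnorm(200 * 2), ncol = 2)
x[1, ] <- c(6, 6)                          # an artificial outlier

d2 <- mahalanobis(x, center = colMeans(x), cov = cov(x))
threshold <- qchisq(0.999, df = ncol(x))   # chi-squared cutoff
which(d2 > threshold)                      # indices of flagged points
```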
Provides all the tools needed to begin solving optimization problems using MATLAB(R). The Second Edition of Applied Optimization with MATLAB(R) Programming enables readers to harness all the features of MATLAB(R) to solve optimization problems using a variety of linear and nonlinear design optimization techniques. By breaking down complex mathematical concepts into simple ideas and offering plenty of easy-to-follow examples, this text is an ideal introduction to the field. Examples come from all engineering disciplines as well as science, economics, operations research, and mathematics, helping readers understand how to apply optimization techniques to solve actual problems. This Second Edition has been thoroughly revised, incorporating current optimization techniques as well as the improved MATLAB(R) tools. Two important new features of the text are: an introduction to the scan and zoom method, providing a simple, effective technique that works for unconstrained, constrained, and global optimization problems; and a new chapter, Hybrid Mathematics: An Application, using examples to illustrate how optimization can develop analytical or explicit solutions to differential systems and data-fitting problems. Each chapter ends with a set of problems that give readers an opportunity to put their new skills into practice. Almost all of the numerical techniques covered in the text are supported by MATLAB(R) code, which readers can download on the text's companion Web site www.wiley.com/go/venkat2e and use to begin solving problems on their own. This text is recommended for upper-level undergraduate and graduate students in all areas of engineering as well as other disciplines that use optimization techniques to solve design problems.
The statistical analyses that students of the life-sciences are being expected to perform are becoming increasingly advanced. Whether at the undergraduate, graduate, or post-graduate level, this book provides the tools needed to properly analyze your data in an efficient, accessible, plainspoken, frank, and occasionally humorous manner, ensuring that readers come away with the knowledge of which analyses they should use and when they should use them. The book uses the statistical language R, which is the choice of ecologists worldwide and is rapidly becoming the 'go-to' stats program throughout the life-sciences. Furthermore, by using a single, real-world dataset throughout the book, readers are encouraged to become deeply familiar with an imperfect but realistic set of data. Indeed, early chapters are specifically designed to teach basic data manipulation skills and build good habits in preparation for learning more advanced analyses. This approach also demonstrates the importance of viewing data through different lenses, facilitating an easy and natural progression from linear and generalized linear models through to mixed effects versions of those same analyses. Readers will also learn advanced plotting and data-wrangling techniques, and gain an introduction to writing their own functions. Applied Statistics with R is suitable for senior undergraduate and graduate students, professional researchers, and practitioners throughout the life-sciences, whether in the fields of ecology, evolution, environmental studies, or computational biology.
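Not from the book, but as a minimal sketch of the progression described above, the following R code fits a linear model, a generalized linear model, and a mixed-effects version with the lme4 package (assumed installed); the data frame and variable names are simulated for illustration.

```r
# Minimal sketch (hypothetical data frame 'dat'): from a linear model to a GLM
# to a mixed-effects model with a random intercept per group.
library(lme4)

set.seed(1)
dat <- data.frame(
  x     = rnorm(120),
  group = rep(letters[1:6], each = 20)
)
dat$y  <- 2 + 0.5 * dat$x + rnorm(120)
dat$yb <- rbinom(120, 1, plogis(0.5 * dat$x))

m1 <- lm(y ~ x, data = dat)                       # linear model
m2 <- glm(yb ~ x, family = binomial, data = dat)  # generalized linear model
m3 <- lmer(y ~ x + (1 | group), data = dat)       # mixed-effects model
```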
Key features: unique in its combination of serving as an introduction to spatial statistics and to modeling agricultural and ecological data using R; provides exercises in each chapter to facilitate the book's use as a course textbook or for self-study; adds new material on generalized additive models, point pattern analysis, and new methods of Bayesian analysis of spatial data; includes a completely revised chapter on the analysis of spatiotemporal data featuring recently introduced software and methods; updates its coverage of R software, including newly introduced packages. Spatial Data Analysis in Ecology and Agriculture Using R, 2nd Edition provides practical instruction on the use of the R programming language to analyze spatial data arising from research in ecology, agriculture, and environmental science. Readers have praised the book's practical coverage of spatial statistics, real-world examples, and user-friendly approach in presenting and explaining R code, aspects maintained in this update. Using data sets from cultivated and uncultivated ecosystems, the book guides the reader through the analysis of each data set, including setting research objectives, designing the sampling plan, data quality control, exploratory and confirmatory data analysis, and drawing scientific conclusions. Additional material to accompany the book, on both analyzing satellite data and on multivariate analysis, can be accessed at https://www.plantsciences.ucdavis.edu/plant/additionaltopics.htm.
This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is put on both the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering both with an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students working with linear state space models, and who are familiar with linear algebra and possess some knowledge of statistics.
This book is a collection of articles written by Big Data experts describing some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data, such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.
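As a hedged, generic illustration of the clustering methods mentioned above (not code from the volume), the base-R snippet below applies k-means to simulated higher-dimensional data.

```r
# Minimal sketch: k-means clustering of simulated 20-dimensional data in base R.
set.seed(1)
x <- rbind(matrix(rnorm(50 * 20, mean = 0), ncol = 20),
           matrix(rnorm(50 * 20, mean = 3), ncol = 20))

km <- kmeans(scale(x), centers = 2, nstart = 25)
table(km$cluster)   # sizes of the two recovered clusters
```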
This book traces the theory and methodology of multivariate statistical analysis and shows how it can be conducted in practice using the LISREL computer program. It presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis. It provides numerous examples from several disciplines and discusses and interprets the results, illustrated with sections of output from the LISREL program, in the context of the example. The book is intended for masters and PhD students and researchers in the social, behavioral, economic and many other sciences who require a basic understanding of multivariate statistical theory and methods for their analysis of multivariate data. It can also be used as a textbook on various topics of multivariate statistical analysis.
Teaches you to use Zoho CRM effectively to benefit your business. This book takes you through a number of real-life scenarios and teaches you how to use Zoho CRM to create solutions for your business, with no technical background needed and with little to no coding required. Sound too good to be true? Technology makes our lives easier, and there are a large number of resources on offer to help with various tasks, including managing business information. With all the tools, apps, and services to choose from, it is still a daunting and often expensive undertaking for businesses to create solutions that fit their specific requirements. That's where Zoho CRM comes in. Using this book you can create a fully functional cloud-based app that manages your company information, is elegant to use, and cost-effective to maintain. Basic computer and internet skills are all you need to successfully launch your very own CRM with the help of this book. Get started today with Mastering Zoho CRM. What you'll learn: set up Zoho CRM properly from the ground up; model your business processes and implement them on Zoho CRM; centralize and manage your entire marketing, sales, and customer service processes; integrate CRM with other Zoho tools to streamline day-to-day business operations; create powerful dashboards and reports to provide relevant, actionable information to concerned people; use advanced CRM features such as workflow automation, role-based security, territories, etc.; and connect Zoho CRM to external tools and services to extend features, and let CRM scale up with your business needs. Who this book is for: small business owners and solopreneurs who want to take control of the beating heart of their business - their marketing, sales, and customer-service efforts - without spending tens of thousands of dollars on customized solutions; and solution providers and consultants who want to learn the ins and outs of one of the hottest CRM tools in the market and provide winning related services to their clients by adding Zoho to their list of offerings.
This book is a valuable read for a diverse group of researchers and practitioners who analyze assessment data and construct test instruments. It focuses on the use of classical test theory (CTT) and item response theory (IRT), which are often required in the fields of psychology (e.g. for measuring psychological traits), health (e.g. for measuring the severity of disorders), and education (e.g. for measuring student performance), and makes these analytical tools accessible to a broader audience. Having taught assessment subjects to students from diverse backgrounds for a number of years, the three authors have a wealth of experience in presenting educational measurement topics, in-depth concepts and applications in an accessible format. As such, the book addresses the needs of readers who use CTT and IRT in their work but do not necessarily have an extensive mathematical background. The book also sheds light on common misconceptions in applying measurement models, and presents an integrated approach to different measurement methods, such as contrasting CTT with IRT and multidimensional IRT models with unidimensional IRT models. Wherever possible, comparisons between models are explicitly made. In addition, the book discusses concepts for test equating and differential item functioning, as well as Bayesian IRT models and plausible values using simple examples. This book can serve as a textbook for introductory courses on educational measurement, as supplementary reading for advanced courses, or as a valuable reference guide for researchers interested in analyzing student assessment data.
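Not drawn from the book, but as a small worked example of a classical-test-theory quantity it covers, the base-R code below computes Cronbach's alpha for a simulated five-item test.

```r
# Minimal sketch: Cronbach's alpha, a classical-test-theory reliability index,
# computed from scratch for a simulated 5-item test (200 examinees).
set.seed(1)
ability <- rnorm(200)
items   <- sapply(1:5, function(i) ability + rnorm(200))  # 200 x 5 score matrix

k         <- ncol(items)
item_vars <- apply(items, 2, var)
total_var <- var(rowSums(items))
alpha     <- (k / (k - 1)) * (1 - sum(item_vars) / total_var)
alpha
```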
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response transformations for multiple linear regression or experimental design models. This text is for graduates and undergraduates with a strong mathematical background. The prerequisites for this text are linear algebra and a calculus based course in statistics.
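The snippet below is not the book's method; it is a minimal base-R sketch of the classical normal-theory prediction interval for a new observation, which the text's distribution-free intervals generalize to unknown error distributions.

```r
# Minimal sketch (simulated data): a classical 95% prediction interval for a
# new observation from a multiple linear regression fit, using base R.
set.seed(1)
dat <- data.frame(x = runif(100))
dat$y <- 1 + 2 * dat$x + rnorm(100)

fit <- lm(y ~ x, data = dat)
predict(fit, newdata = data.frame(x = 0.5),
        interval = "prediction", level = 0.95)
```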
This book provides a unified view of a new methodology for Machine Translation (MT). This methodology extracts information from widely available resources (extensive monolingual corpora) while only assuming the existence of a very limited parallel corpus, and thus has a starting point unique among Statistical Machine Translation (SMT) approaches. In this book, a detailed presentation of the methodology's principles and system architecture is followed by a series of experiments, where the proposed system is compared to other MT systems using a set of established metrics including BLEU, NIST, Meteor and TER. Additionally, free-to-use code is available that allows the creation of new MT systems. The volume is addressed to both language professionals and researchers. Prerequisites for the readers are very limited and include a basic understanding of machine translation as well as of the basic tools of natural language processing.
This new edition of Electronic Commerce is a complete update of the leading graduate level/advanced undergraduate level textbook on the subject. Electronic commerce (EC) describes the manner in which transactions take place over electronic networks, mostly the Internet. It is the process of electronically buying and selling goods, services, and information. Certain EC applications, such as buying and selling stocks and airline tickets online, are reaching maturity, some even exceeding non-Internet trades. However, EC is not just about buying and selling; it also is about electronically communicating, collaborating, and discovering information. It is about e-learning, e-government, social networks, and much more. EC is having an impact on a significant portion of the world, affecting businesses, professions, trade, and of course, people. The most important developments in EC since 2014 are the continuous phenomenal growth of social networks, especially Facebook, LinkedIn and Instagram, and the trend toward conducting EC with mobile devices. Other major developments are the expansion of EC globally, especially in China where you can find the world's largest EC company. Much attention is lately being given to smart commerce and the use of AI-based analytics and big data to enhance the field. Finally, some emerging EC business models are changing industries (e.g., the shared economy models of Uber and Airbnb). The 2018 (9th) edition brings forth the latest trends in e-commerce, including smart commerce, social commerce, social collaboration, shared economy, innovations, and mobility.
This textbook on practical data analytics unites fundamental principles, algorithms, and data. Algorithms are the keystone of data analytics and the focal point of this textbook. Clear and intuitive explanations of the mathematical and statistical foundations make the algorithms transparent. But practical data analytics requires more than just the foundations. Problems and data are enormously variable and only the most elementary of algorithms can be used without modification. Programming fluency and experience with real and challenging data is indispensable, and so the reader is immersed in Python and R and real data analysis. By the end of the book, the reader will have gained the ability to adapt algorithms to new problems and carry out innovative analyses. This book has three parts: (a) Data Reduction: Begins with the concepts of data reduction, data maps, and information extraction. The second chapter introduces associative statistics, the mathematical foundation of scalable algorithms and distributed computing. Practical aspects of distributed computing are the subject of the Hadoop and MapReduce chapter. (b) Extracting Information from Data: Linear regression and data visualization are the principal topics of Part II. The authors dedicate a chapter to the critical domain of Healthcare Analytics for an extended example of practical data analytics. The algorithms and analytics will be of much interest to practitioners interested in utilizing the large and unwieldy data sets of the Centers for Disease Control and Prevention's Behavioral Risk Factor Surveillance System. (c) Predictive Analytics: Two foundational and widely used algorithms, k-nearest neighbors and naive Bayes, are developed in detail. A chapter is dedicated to forecasting. The last chapter focuses on streaming data and uses publicly accessible data streams originating from the Twitter API and the NASDAQ stock market in the tutorials. This book is intended for a one- or two-semester course in data analytics for upper-division undergraduate and graduate students in mathematics, statistics, and computer science. The prerequisites are kept low, and students with one or two courses in probability or statistics, an exposure to vectors and matrices, and a programming course will have no difficulty. The core material of every chapter is accessible to all with these prerequisites. The chapters often expand at the close with innovations of interest to practitioners of data science. Each chapter includes exercises of varying levels of difficulty. The text is eminently suitable for self-study and an exceptional resource for practitioners.
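As a minimal, hedged sketch of one of the algorithms named above (not an example from the book), the R code below runs k-nearest-neighbour classification with the class package on the built-in iris data.

```r
# Minimal sketch: k-nearest-neighbour classification with the 'class' package,
# using the built-in iris data split into training and test sets.
library(class)

set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, 1:4];  test <- iris[-idx, 1:4]
cl    <- iris$Species[idx]

pred <- knn(train, test, cl, k = 5)
mean(pred == iris$Species[-idx])   # test-set accuracy
```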
This book introduces multidimensional scaling (MDS) and unfolding as data analysis techniques for applied researchers. MDS is used for the analysis of proximity data on a set of objects, representing the data as distances between points in a geometric space (usually of two dimensions). Unfolding is a related method that maps preference data (typically evaluative ratings of different persons on a set of objects) as distances between two sets of points (representing the persons and the objects, resp.). This second edition has been completely revised to reflect new developments and the coverage of unfolding has also been substantially expanded. Intended for applied researchers whose main interests are in using these methods as tools for building substantive theories, it discusses numerous applications (classical and recent), highlights practical issues (such as evaluating model fit), presents ways to enforce theoretical expectations for the scaling solutions, and addresses the typical mistakes that MDS/unfolding users tend to make. Further, it shows how MDS and unfolding can be used in practical research work, primarily by using the smacof package in the R environment but also Proxscal in SPSS. It is a valuable resource for psychologists, social scientists, and market researchers, with a basic understanding of multivariate statistics (such as multiple regression and factor analysis).
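Not from the book (which works mainly with the smacof package), but as a quick illustration of the basic MDS idea, the base-R snippet below applies classical metric scaling to the built-in eurodist road distances and plots the resulting two-dimensional configuration.

```r
# Minimal sketch: classical metric MDS in base R on the eurodist road distances;
# the smacof package provides the majorization approach discussed in the book.
mds <- cmdscale(eurodist, k = 2)                 # 2-dimensional configuration
plot(mds[, 1], -mds[, 2], type = "n", xlab = "", ylab = "", asp = 1)
text(mds[, 1], -mds[, 2], labels = rownames(mds), cex = 0.7)  # cities as points
```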
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are explained. Each section includes a set of exercises on the respective topics. Various functions and tools for the analysis of discrete survival data are collected in the R package discSurv that accompanies the book.
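The accompanying discSurv package is not shown here; as a hedged, generic illustration of discrete hazard regression, the base-R sketch below fits a discrete-time hazard model as a binomial GLM with a complementary log-log link on simulated person-period data.

```r
# Minimal sketch (simulated person-period data): discrete-time hazard regression
# as a binomial GLM with a complementary log-log link, base R only.
set.seed(1)
pp <- data.frame(
  period = factor(rep(1:4, times = 50)),   # discrete time interval
  x      = rnorm(200),                      # covariate
  event  = rbinom(200, 1, 0.2)              # 1 = failure in this interval
)

fit <- glm(event ~ period + x, family = binomial(link = "cloglog"), data = pp)
summary(fit)
```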
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
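As a toy, hedged illustration of the corpus processing, frequency counting, and clustering topics listed above (not taken from the book), the base-R snippet below builds a tiny document-term matrix and clusters three short documents.

```r
# Minimal sketch: word frequencies and hierarchical clustering in base R.
docs <- c("the cat sat on the mat",
          "the dog sat on the log",
          "statistics with r for corpus data")

tokens <- strsplit(tolower(docs), "\\s+")
vocab  <- sort(unique(unlist(tokens)))
dtm    <- t(sapply(tokens, function(tok) table(factor(tok, levels = vocab))))

sort(colSums(dtm), decreasing = TRUE)   # corpus-wide word frequencies
plot(hclust(dist(dtm)))                 # cluster the three documents
```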
"Integrated Business Processes with ERP Systems" covers the key processes supported by modern ERP systems. This textbook and the WileyPLUS online course is designed for use as both a reference guide and a conceptual resource for students taking ERP-focused courses using SAP. It examines in depth the core concepts applicable to all ERP environments, and it explains how those concepts can be utilized to implement business processes in SAP systems. Hallmark Features: Integrated Business Processes with ERP Systems approaches topics using an integrated process perspective of the firm. Each process is discussed within the context of its execution across functional areas in the company, with special emphasis on the role of data in managing the coordination between activities and groups. Students will gain a deep appreciation for the role of enterprise systems in efficiently managing processes from multiple functional perspectives.Running Case Study - Many key examples, demonstrations, and assignments incorporated throughout the book are based on a fictional company, Global Bike Incorporated (GBI). GBI exists virtually in the GBI ERP system, which will be used to provide hands-on experience with executing the various processes in SAP ERP.Real-World Examples - In addition to the integrated approach and the GBI case study, the text includes multiple scenarios that demonstrate how businesses actually utilize ERP capabilities. Examples of both positive and negative issues associated with enterprise systems are integrated throughout the chapters to illustrate the concepts with real-world experiences. |