This book discusses the problem of model choice when the statistical models are separate, also called nonnested. Chapter 1 provides an introduction, motivating examples and a general overview of the problem. Chapter 2 presents the classical or frequentist approach to the problem as well as several alternative procedures and their properties. Chapter 3 explores the Bayesian approach, the limitations of the classical Bayes factors and the proposed alternative Bayes factors to overcome these limitations. It also discusses a significance Bayesian procedure. Lastly, Chapter 4 examines the pure likelihood approach. Various real-data examples and computer simulations are provided throughout the text.
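For orientation, the classical Bayes factor discussed in Chapter 3 is, in its standard textbook form, the ratio of the two models' marginal likelihoods; the notation below is generic rather than taken from the book itself:

```latex
% Bayes factor for comparing two separate (nonnested) models M_1 and M_2
% given data x, with f_i the likelihood and \pi_i the prior under model M_i.
\[
  B_{12}(x) \;=\; \frac{m_1(x)}{m_2(x)}
           \;=\; \frac{\int f_1(x \mid \theta_1)\,\pi_1(\theta_1)\,d\theta_1}
                      {\int f_2(x \mid \theta_2)\,\pi_2(\theta_2)\,d\theta_2},
\]
% Large values of B_{12}(x) favor M_1 over M_2.
```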
A distinguishing feature of this book is its low threshold: the text is simple yet full of self-assessment opportunities. Other distinguishing features are the succinct chapters of three to six pages, the complete command sequences given for each statistical methodology reviewed, and the omission of dense scientific prose that imposes an unnecessary burden on busy professionals. For readers who want more background, theoretical, and mathematical information, each chapter includes a note section with references. The first edition in 2010 was the first complete overview of SPSS methodologies for medical and health statistics; well over 100,000 copies of various chapters were sold within the first year of publication. There were four reasons for a rewrite. First, many important comments from readers urged one. Second, SPSS has released many updates and upgrades with relevant new and improved methodologies. Third, the authors felt the chapter texts needed improvement for better readability: chapters are now classified according to the outcome data, which helps readers choose an analysis rapidly, and a schematic overview of data types and explanatory graphs have been added. Fourth, current data are increasingly complex, and many important methods for their analysis were missing from the first edition. For that purpose some more advanced methods proved unavoidable, such as hierarchical loglinear methods, gamma and Tweedie regressions, and random intercept analyses. So that the contents remain covered by the title, the authors renamed the book SPSS for Starters and 2nd Levelers. Special care was nonetheless taken to keep things as simple as possible: simple menu commands are given, and the arithmetic remains at no more than high-school level. Step-by-step analyses of the different statistical methodologies are given with the help of 60 SPSS data files available through the internet. Mindful of the limited time of this busy readership, the authors have made every effort to produce as succinct a text as possible.
The objective of Kai Zhang's research is to assess existing process monitoring and fault detection (PM-FD) methods. His aim is to provide suggestions and guidance for choosing appropriate PM-FD methods, because the performance assessment of PM-FD methods has become an area of interest in both academia and industry. The author first compares basic FD statistics, and then assesses different PM-FD methods for monitoring the key performance indicators of static processes, steady-state dynamic processes and general dynamic processes including transient states. He validates the theoretical developments using both benchmark and real industrial processes.
This book provides new insights on the study of global environmental changes using the ecoinformatics tools and the adaptive-evolutionary technology of geoinformation monitoring. The main advantage of this book is that it gathers and presents extensive interdisciplinary expertise in the parameterization of global biogeochemical cycles and other environmental processes in the context of globalization and sustainable development. In this regard, the crucial global problems concerning the dynamics of the nature-society system are considered and the key problems of ensuring the system's sustainable development are studied. A new approach to the numerical modeling of the nature-society system is proposed and results are provided on modeling the dynamics of the system's characteristics with regard to scenarios of anthropogenic impacts on biogeochemical cycles, land ecosystems and oceans. The main purpose of this book is to develop a universal guide to information-modeling technologies for assessing the function of environmental subsystems under various climatic and anthropogenic conditions.
This book introduces advanced undergraduate students, graduate students and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm and factor analysis. The book deals with statistical methods used for analyzing such data and provides a novel and unifying approach to hypothesis testing. The techniques described in the book are illustrated with examples, and the statistical software is provided on the authors' website.
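To make the distance-based notion of rank correlation concrete, here is a minimal sketch in Python (generic code, not taken from the book or the authors' website; the function names are ours), computing the Kendall distance between two complete rankings and the correlation derived from it:

```python
from itertools import combinations

def kendall_distance(r1, r2):
    """Number of item pairs ordered differently by the two rankings.

    r1, r2 map each item to its rank (1 = best); both rank the same items.
    """
    items = list(r1)
    return sum(
        (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
        for a, b in combinations(items, 2)
    )

def kendall_tau(r1, r2):
    """Rank correlation derived from the distance: tau = 1 - 4*D / (n*(n-1))."""
    n = len(r1)
    return 1 - 4 * kendall_distance(r1, r2) / (n * (n - 1))

# Two judges ranking four items A-D; one discordant pair (B, C) out of six.
judge1 = {"A": 1, "B": 2, "C": 3, "D": 4}
judge2 = {"A": 1, "B": 3, "C": 2, "D": 4}
print(kendall_tau(judge1, judge2))  # 0.666...
```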
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at the advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This also makes the book useful as a reference and guide to further study and research work.
The R Companion to Elementary Applied Statistics includes traditional applications covered in elementary statistics courses as well as some additional methods that address questions that might arise during or after the application of commonly used methods. Beginning with basic tasks and computations with R, readers are then guided through ways to bring data into R, manipulate the data as needed, perform common statistical computations and elementary exploratory data analysis tasks, prepare customized graphics, and take advantage of R for a wide range of methods that find use in many elementary applications of statistics. Features: Requires no familiarity with R or programming to begin using this book. Can be used as a resource for a project-based elementary applied statistics course, or for researchers and professionals who wish to delve more deeply into R. Contains an extensive array of examples that illustrate ideas on various ways to use pre-packaged routines, as well as on developing individualized code. Presents quite a few methods that may be considered non-traditional, or advanced. Includes accompanying, carefully documented script files that contain code for all examples presented, and more. R is a powerful and free product that is gaining popularity across the scientific community in both the professional and academic arenas. Statistical methods discussed in this book are used to introduce the fundamentals of using R functions and provide ideas for developing further skills in writing R code. These ideas are illustrated through an extensive collection of examples. About the Author: Christopher Hay-Jahans received his Doctor of Arts in mathematics from Idaho State University in 1999. After spending three years at the University of South Dakota, he moved to Juneau, Alaska, in 2002, where he has taught a wide range of undergraduate courses at the University of Alaska Southeast.
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance, as such choices allow experimenters to extract maximum information about the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc., and to discuss the nature and availability of optimal covariate designs. In some situations, optimal estimates of both the ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the designs. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for the construction of optimum designs using Hadamard matrices, the Kronecker product, the Rao-Khatri product and mixed orthogonal arrays, to name a few.
Leverage the flexibility and power of SAP MII to integrate your business operations with your manufacturing processes. You'll explore important new features of the product and see how to apply best practices to connect all the stakeholders in your business. This book starts with an overview of SAP's manufacturing integration and intelligence application and explains why it is so important. You'll then see how it is applied in various manufacturing sectors. The biggest challenge in manufacturing industries is to reduce manual work and human intervention so that processes become automated. SAP MII explains how to bridge the gap between management and production and bring sound, vital information to the shop floor in real time. With this book you'll see how to ensure existing manufacturing and information systems share a common interface for all users in your enterprise.
What You'll Learn:
- Understand the functional aspects of SAP MII
- Implement SAP MII in different manufacturing sectors
- Explore new technical features of SAP MII 12.x
- Integrate scenarios with SAP MII
- Discover practice guidelines
Who This Book Is For: All levels of SAP manufacturing professionals.
Describing novel mathematical concepts for recommendation engines, Realtime Data Mining: Self-Learning Techniques for Recommendation Engines features a sound mathematical framework unifying approaches based on control and learning theories, tensor factorization, and hierarchical methods. Furthermore, it presents promising results of numerous experiments on real-world data. The area of realtime data mining is currently developing at an exceptionally dynamic pace, and realtime data mining systems are the counterpart of today's "classic" data mining systems. Whereas the latter learn from historical data and then use it to deduce necessary actions, realtime analytics systems learn and act continuously and autonomously. In the vanguard of these new analytics systems are recommendation engines. They are principally found on the Internet, where all information is available in realtime and an immediate feedback is guaranteed. This monograph appeals to computer scientists and specialists in machine learning, especially from the area of recommender systems, because it conveys a new way of realtime thinking by considering recommendation tasks as control-theoretic problems. Realtime Data Mining: Self-Learning Techniques for Recommendation Engines will also interest application-oriented mathematicians because it consistently combines some of the most promising mathematical areas, namely control theory, multilevel approximation, and tensor factorization.
This volume compiles the major results of conference participants from the "Third International Conference in Network Analysis", held at the Higher School of Economics, Nizhny Novgorod, in May 2013, with the aim of initiating further joint research among different groups. The contributions in this book cover a broad range of topics relevant to the theory and practice of network analysis, including the reliability of complex networks, software, theory, methodology, and applications. Network analysis has become a major research topic over the last several years. The broad range of applications that can be described and analyzed by means of a network has brought together researchers and practitioners from numerous fields such as operations research, computer science, transportation, energy, biomedicine, computational neuroscience and the social sciences. In addition, new approaches and computing environments such as parallel computing, grid computing, cloud computing, and quantum computing have helped to solve large-scale network optimization problems.
This pioneering book teaches readers to use R within four core analytical areas applicable to the Humanities: networks, text, geospatial data, and images. This book is also designed to be a bridge: between quantitative and qualitative methods, individual and collaborative work, and the humanities and social sciences. Humanities Data with R does not presuppose background programming experience. Early chapters take readers from R set-up to exploratory data analysis (continuous and categorical data, multivariate analysis, and advanced graphics with emphasis on aesthetics and facility). Following this, networks, geospatial data, image data, natural language processing and text analysis each have a dedicated chapter. Each chapter is grounded in examples to move readers beyond the intimidation of adding new tools to their research. Everything is hands-on: networks are explained using U.S. Supreme Court opinions, and low-level NLP methods are applied to short stories by Sir Arthur Conan Doyle. After working through these examples with the provided data, code and book website, readers are prepared to apply new methods to their own work. The open source R programming language, with its myriad packages and popularity within the sciences and social sciences, is particularly well-suited to working with humanities data. R packages are also highlighted in an appendix. This book uses an expanded conception of the forms data may take and the information it represents. The methodology will have wide application in classrooms and self-study for the humanities, but also for use in linguistics, anthropology, and political science. Outside the classroom, this intersection of humanities and computing is particularly relevant for research and new modes of dissemination across archives, museums and libraries.
This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing and educational technology. It presents extended versions of the best papers selected from the "7th International Workshop on Natural Computing" (IWNC7), held in Tokyo, Japan, in 2013. The target audience is not limited to researchers working in natural computing; it also includes those active in biological engineering, fine/media art design, aesthetics and philosophy.
Economists can use computer algebra systems to manipulate symbolic models, derive numerical computations, and analyze empirical relationships among variables. Maxima is an open-source, multi-platform computer algebra system that rivals proprietary software. Maxima's symbolic and computational capabilities enable economists and financial analysts to develop a deeper understanding of models by allowing them to explore the implications of differences in parameter values, by providing numerical solutions to problems that would otherwise be intractable, and by providing graphical representations that can guide analysis. This book provides a step-by-step tutorial for using this program to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques. Readers learn how to phrase the relevant analysis and how symbolic expressions, numerical computations, and graphical representations can be used to learn from microeconomic models. In particular, comparative statics analysis is facilitated. Little has been published on Maxima and its applications in economics and finance, and this volume will appeal to advanced undergraduates, graduate students studying microeconomics, academic researchers in economics and finance, economists, and financial analysts.
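As a rough flavor of the kind of symbolic comparative statics the book carries out in Maxima, here is an analogous sketch in Python using SymPy (an assumed stand-in, not Maxima code and not an example from the book; the linear demand and supply model is invented for illustration):

```python
import sympy as sp

p, a, b, c, d, t = sp.symbols("p a b c d t", positive=True)

# Illustrative linear demand and supply with a per-unit tax t paid by sellers.
demand = a - b * p           # quantity demanded at consumer price p
supply = c + d * (p - t)     # quantity supplied at producer price p - t

# Equilibrium price: solve demand = supply for p.
p_star = sp.solve(sp.Eq(demand, supply), p)[0]
print(sp.simplify(p_star))              # (a - c + d*t)/(b + d)

# Comparative statics: response of the equilibrium price to the tax.
print(sp.simplify(sp.diff(p_star, t)))  # d/(b + d), strictly between 0 and 1
```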
The book opens with a short introduction to Indian music, in particular classical Hindustani music, followed by a chapter on the role of statistics in computational musicology. The authors then show how to analyze musical structure using Rubato, the music software package for statistical analysis, in particular addressing modeling, melodic similarity and lengths, and entropy analysis; they then show how to analyze musical performance. Finally, they explain how the concept of seminatural composition can help a music composer to obtain the opening line of a raga-based song using Monte Carlo simulation. The book will be of interest to musicians and musicologists, particularly those engaged with Indian music.
Among the various multi-level formulations of mathematical models in decision making processes, this book focuses on the bi-level model. Being the most frequently used, the bi-level model addresses conflicts which exist in multi-level decision making processes. From the perspective of bi-level structure and uncertainty, this book takes real-life problems as the background, focuses on the so-called random-like uncertainty, and develops the general framework of random-like bi-level decision making problems. The random-like uncertainty considered in this book includes random phenomenon, random-overlapped random (Ra-Ra) phenomenon and fuzzy-overlapped random (Ra-Fu) phenomenon. Basic theory, models, algorithms and practical applications for different types of random-like bi-level decision making problems are also presented in this book.
This book shows ways of visualizing large datasets, whether large in the number of cases, large in the number of variables, or large in both. All ideas are illustrated with displays from analyses of real datasets, and the importance of interpreting displays effectively is emphasized. Graphics should be drawn to convey information, and the book includes many insightful examples. New approaches to graphics are needed to visualize the information in large datasets, and most of the innovations described in this book are developments of standard graphics. The book is accessible to readers with some experience of drawing statistical graphics.
This book gathers a selection of invited and contributed lectures from the European Conference on Numerical Mathematics and Advanced Applications (ENUMATH) held in Lausanne, Switzerland, August 26-30, 2013. It provides an overview of recent developments in numerical analysis, computational mathematics and applications from leading experts in the field. New results on finite element methods, multiscale methods, numerical linear algebra and discretization techniques for fluid mechanics and optics are presented. As such, the book offers a valuable resource for a wide range of readers looking for a state-of-the-art overview of advanced techniques, algorithms and results in numerical mathematics and scientific computing.
This book presents four mathematical essays which explore the foundations of mathematics and related topics ranging from philosophy and logic to modern computer mathematics. While connected to the historical evolution of these concepts, the essays place strong emphasis on developments still to come. The book originated in a 2002 symposium celebrating the work of Bruno Buchberger, Professor of Computer Mathematics at Johannes Kepler University, Linz, Austria, on the occasion of his 60th birthday. Among many other accomplishments, Professor Buchberger in 1985 was the founding editor of the Journal of Symbolic Computation; the founder of the Research Institute for Symbolic Computation (RISC) and its chairman from 1987-2000; the founder in 1990 of the Softwarepark Hagenberg, Austria, and since then its director. More than a decade in the making, Mathematics, Computer Science and Logic - A Never Ending Story includes essays by leading authorities, on such topics as mathematical foundations from the perspective of computer verification; a symbolic-computational philosophy and methodology for mathematics; the role of logic and algebra in software engineering; and new directions in the foundations of mathematics. These inspiring essays invite general, mathematically interested readers to share state-of-the-art ideas which advance the never ending story of mathematics, computer science and logic. Mathematics, Computer Science and Logic - A Never Ending Story is edited by Professor Peter Paule, Bruno Buchberger's successor as director of the Research Institute for Symbolic Computation.
Thirty years ago mathematical computation, as opposed to applied numerical computation, was difficult to perform and so relatively little used. Three threads changed that: the emergence of the personal computer; the discovery of fiber optics and the consequent development of the modern internet; and the building of the three "M's": Maple, Mathematica and Matlab. We intend to persuade readers that Mathematica and other similar tools are worth knowing, assuming only that one wishes to be a mathematician, a mathematics educator, a computer scientist, an engineer or scientist, or anyone else who wishes or needs to use mathematics better. We also hope to explain how to become an "experimental mathematician" while learning to be better at proving things. To accomplish this, our material is divided into three main chapters followed by a postscript. These cover elementary number theory, calculus of one and several variables, introductory linear algebra, and visualization and interactive geometric computation.
This book offers a snapshot of the state-of-the-art in classification at the interface between statistics, computer science and application fields. The contributions span a broad spectrum, from theoretical developments to practical applications; they all share a strong computational component. The topics addressed are from the following fields: Statistics and Data Analysis; Machine Learning and Knowledge Discovery; Data Analysis in Marketing; Data Analysis in Finance and Economics; Data Analysis in Medicine and the Life Sciences; Data Analysis in the Social, Behavioural, and Health Care Sciences; Data Analysis in Interdisciplinary Domains; Classification and Subject Indexing in Library and Information Science. The book presents selected papers from the Second European Conference on Data Analysis, held at Jacobs University Bremen in July 2014. This conference unites diverse researchers in the pursuit of a common topic, creating truly unique synergies in the process.
Learn how to configure, implement, enhance, and customize SAP OEE to address manufacturing performance management. Manufacturing Performance Management using SAP OEE will show you how to connect your business processes with your plant systems, and how to integrate SAP OEE with ERP through standard workflows and with shop floor systems for automated data collection. Manufacturing Performance Management using SAP OEE is a must-have comprehensive guide to implementing SAP OEE. It will ensure that SAP consultants and users understand how SAP OEE can offer solutions for manufacturing performance management in process industries. With this book in hand, managing shop floor execution effectively will become easier than ever. Authors Dipankar Saha and Mahalakshmi Symsunder, both SAP manufacturing solution experts, and Sumanta Chakraborty, product owner of SAP OEE, explain execution and processing-related concepts, manual and automatic data collection through the OEE Worker UI, and how to enhance and customize interfaces and dashboards for your specific purposes. You'll learn how to capture and categorize production and loss data and use it effectively for root-cause analysis. In addition, this book will show you: various down-time handling scenarios; how to monitor, calculate, and define standard as well as industry-specific KPIs; how to carry out standard operational analytics for continuous improvement on the shop floor, at the local plant level using MII and SAP Lumira and at the corporate level through global consolidated analytics using SAP HANA; and steps to benchmark performance across similar manufacturing plants, leading to a more efficient and effective shop floor. Manufacturing Performance Management using SAP OEE provides in-depth coverage of SAP OEE and how to effectively leverage its features, allowing you to manage the manufacturing process efficiently and enhance the shop floor's overall performance, making you the sought-after SAP OEE expert in the organization.
What You Will Learn:
- Configure your ERP OEE add-on to build your plant and global hierarchy and the relevant master data and KPIs
- Use the SAP OEE standard integration (SAP OEEINT) to integrate your ECC and OEE systems and establish bi-directional integration between the enterprise and the shop floor
- Enable your shop floor operators to handle production execution through the OEE Worker UI
- Use SAP OEE as a tool for measuring manufacturing performance
- Enhance and customize SAP OEE to suit your specific requirements
- Create local plant-based reporting using SAP Lumira and MII
- Use standard SAP OEE HANA analytics
Who This Book Is For: SAP MII, ME, and OEE consultants and users who will implement and use the solution.
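For readers new to the metric itself, OEE is conventionally the product of availability, performance, and quality. The minimal sketch below in Python is a vendor-neutral illustration of that arithmetic, not SAP OEE code, and all parameter names are invented:

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Classic OEE = availability x performance x quality.

    planned_time and run_time in minutes; ideal_cycle_time in minutes per unit.
    """
    availability = run_time / planned_time                    # uptime share
    performance = (ideal_cycle_time * total_count) / run_time  # speed share
    quality = good_count / total_count                         # good-part share
    return availability * performance * quality

# One shift: 480 min planned, 432 min actually running,
# ideal cycle 0.9 min/unit, 430 units produced, 420 of them good.
print(oee(480, 432, 0.9, 430, 420))  # about 0.7875, i.e. roughly 79% OEE
```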
This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.
This book teaches SQL Server database administrators and developers how to develop powerful data analytics applications quickly. Organizations will be able to sift data and derive the business intelligence needed to drive business decisions and profit. The addition of R to SQL Server 2016 places a powerful analytical processor into an environment most developers are already comfortable with - Visual Studio. This book walks even the newest of users through the creation of a powerful R-language tool set for use in analyzing and reporting on your data. As a SQL Server database administrator or developer, it is sometimes difficult to stay on the bleeding edge of technology. Microsoft's addition of R to SQL Server 2016 is sure to be a game-changer, and the language will certainly become an integral part of future releases. R is in fact widely used today in statistical and related applications, and its use is only growing. Beginning SQL Server R Services helps you jump on board this important trend by providing good examples with detailed explanations of the WHY and not just the HOW. The book walks you through setup and installation of SQL Server R Services, explains the basics of working with R Tools for Visual Studio, and provides a road map to successfully creating custom R code.
What You Will Learn:
- Discover R's role in the SQL Server 2016 hierarchy
- Manage the components needed to run SQL Server R Services code
- Run R-language analytics and queries inside the database
- Create analytic solutions that run across multiple datasets
- Gain in-depth knowledge of the R language itself
- Implement custom SQL Server R Services solutions
Who This Book Is For: Any level of database administrator or developer, but specifically developers who need to build powerful data analytics applications quickly. Seasoned R developers will appreciate the book for its robust learning pattern, using visual aids in combination with explanations and scenarios. Beginning SQL Server R Services is the perfect "new hire" gift for new database developers in any organization.
Calling all SAP Business One users! Your must-have handbook is here. Now updated for SAP Business One 10.0, this bestselling guide has the expertise you need to keep your business running smoothly. Whether you're a new hire or a super user, get step-by-step instructions for your core processes, from purchasing and manufacturing to sales and financials. Master the tools and transactions that keep you focused on business outcomes and improved KPIs. This book is what you've been waiting for: the key to doing your job better in SAP Business One.
Highlights include:
1) Administration
2) Financials and banking
3) Sales and purchasing
4) Inventory management
5) Resource management
6) Production and MRP
7) Human resources
8) Project management
9) Reporting and analytics
10) Mobile
11) SAP HANA and SQL versions
12) Cloud and on-premise systems