This book presents strategies for analyzing qualitative and mixed methods data with MAXQDA software, and provides guidance on implementing a variety of research methods and approaches, e.g. grounded theory, discourse analysis and qualitative content analysis, using the software. In addition, it explains specific topics, such as transcription, building a coding frame, visualization, analysis of videos, concept maps, group comparisons and the creation of literature reviews. The book is intended for master's and PhD students as well as researchers and practitioners dealing with qualitative data in various disciplines, including the educational and social sciences, psychology, public health, business or economics.
The book is aimed at Project Management Professionals who are casual or new users and understand the software basics but require a short and snappy guide. It is the sort of book that may be read without a computer on the bus, train or plane. This book quickly gets down to the issues that many people grapple with when trying to use some of the more advanced features of the software, and enlightens readers on the traps that some users fall into and how to avoid them. It demonstrates how the software ticks and explains some tricks that may be used to become more productive with the software and generate better schedules. It is suitable for people who understand the basics of Microsoft Project but want a short guide to give them insight into the less intuitive features of the software. It is packed with screen shots and constructive tips, and is written in plain English. The book is based on Microsoft Project 365 and 2021 but may be used with earlier versions of Microsoft Project, as the book points out the differences where appropriate. The book picks out many of the key aspects from the author's existing books and adds a substantial amount of new and original text to produce a pocket guide that omits describing the intuitive and obvious functions and concentrates on the issues that many users get stuck on or find hard to understand.
This book introduces readers to statistical methodologies used to analyze doubly truncated data. The first book exclusively dedicated to the topic, it provides likelihood-based methods, Bayesian methods, non-parametric methods, and linear regression methods. These procedures can be used to effectively analyze continuous data, especially survival data arising in biostatistics and economics. Because truncation is a phenomenon that is often encountered in non-experimental studies, the methods presented here can be applied to many branches of science. The book provides R codes for most of the statistical methods, to help readers analyze their data. Given its scope, the book is ideally suited as a textbook for students of statistics, mathematics, econometrics, and other fields.
Mathematical Statistics with Applications in R, Third Edition, offers a modern calculus-based theoretical introduction to mathematical statistics and applications. The book covers many modern statistical computational and simulation concepts that are not covered in other texts, such as the Jackknife, bootstrap methods, the EM algorithm, and Markov chain Monte Carlo (MCMC) methods, including the Metropolis algorithm, the Metropolis-Hastings algorithm and the Gibbs sampler. By combining discussion of the theory of statistics with a wealth of real-world applications, the book helps students to approach statistical problem-solving in a logical manner. Step-by-step procedures for solving real problems make the topics very accessible.
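Since the blurb above names several MCMC methods, a minimal illustrative sketch of the random-walk Metropolis algorithm may help readers place them; it is not taken from the book, and the function name, step size and standard-normal target are assumptions chosen only for this example.

```python
# Minimal random-walk Metropolis sketch (illustrative only, not from the book).
# The target is a standard normal distribution, supplied as a log-density.
import math
import random

def metropolis(log_target, n_samples, x0=0.0, step=1.0):
    """Draw n_samples from a 1-D target given its log-density."""
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)            # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)  # log acceptance ratio
        if random.random() < math.exp(min(0.0, log_alpha)):
            x = proposal                                  # accept the move
        samples.append(x)                                 # record current state
    return samples

# Example: sample from N(0, 1); the mean of the draws should be near 0.
draws = metropolis(lambda v: -0.5 * v * v, n_samples=10000)
print(sum(draws) / len(draws))
```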
Collecting, analyzing, and extracting valuable information from a large amount of data requires easily accessible, robust computational and analytical tools. Data Mining and Business Analytics with R utilizes the open source software R for the analysis, exploration, and simplification of large high-dimensional data sets. As a result, readers are provided with the needed guidance to model and interpret complicated data and become adept at building powerful models for prediction and classification. Highlighting both underlying concepts and practical computational skills, Data Mining and Business Analytics with R begins with coverage of standard linear regression and the importance of parsimony in statistical modeling. The book includes important topics such as penalty-based variable selection (LASSO); logistic regression; regression and classification trees; clustering; principal components and partial least squares; and the analysis of text and network data. In addition, the book presents: a thorough discussion and extensive demonstration of the theory behind the most useful data mining tools; illustrations of how to use the outlined concepts in real-world situations; readily available additional data sets and related R code, allowing readers to apply their own analyses to the discussed materials; and numerous exercises to help readers build computing skills and deepen their understanding of the material. Data Mining and Business Analytics with R is an excellent graduate-level textbook for courses on data mining and business analytics. The book is also a valuable reference for practitioners who collect and analyze data in the fields of finance, operations management, marketing, and the information sciences.
Get up to speed on Microsoft Project 2013 and learn how to manage projects large and small. This crystal-clear book not only guides you step-by-step through Project 2013's new features, it also gives you real-world guidance: how to prep a project before touching your PC, and which Project tools will keep you on target. With this Missing Manual, you'll go from project manager to Project master. The important stuff you need to know: Learn Project 2013 inside out. Get hands-on instructions for the Standard and Professional editions. Start with a project management primer. Discover what it takes to handle a project successfully. Build and refine your plan. Put together your team, schedule, and budget. Achieve the results you want. Build realistic schedules with Project, and learn how to keep costs under control. Track your progress. Measure your performance, make course corrections, and manage changes. Create attractive reports. Communicate clearly to stakeholders and team members using charts, tables, and dashboards. Use Project's power tools. Customize Project's features and views, and transfer info via the cloud, using Microsoft SkyDrive.
This text provides a practical, hands-on introduction to data conceptualization, measurement, and association through active learning. Students get step-by-step instruction on data analysis using the latest version of SPSS and the most current General Social Survey data. The text starts with an introduction to computerized data analysis and the social research process, then walks users through univariate, bivariate, and multivariate analysis using SPSS. The book contains applications from across the social sciences (sociology, political science, social work, criminal justice, health), so it can be used in courses offered in any of these departments. The Eleventh Edition uses the latest General Social Survey (GSS) data and the latest available version of SPSS. The GSS datasets now offer additional variables for more possibilities in the demonstrations and exercises within each chapter.
This book introduces the main theoretical findings related to copulas and shows how statistical modeling of multivariate continuous distributions using copulas can be carried out in the R statistical environment with the package copula (among others). Copulas are multivariate distribution functions with standard uniform univariate margins. They are increasingly applied to modeling dependence among random variables in fields such as risk management, actuarial science, insurance, finance, engineering, hydrology, climatology, and meteorology, to name a few. In the spirit of the Use R! series, each chapter combines key theoretical definitions or results with illustrations in R. Aimed at statisticians, actuaries, risk managers, engineers and environmental scientists wanting to learn about the theory and practice of copula modeling using R without an overwhelming amount of mathematics, the book can also be used for teaching a course on copula modeling.
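As a point of reference for the definition above, Sklar's theorem (a standard result, not specific to this book) expresses any joint distribution function H with margins F_1, ..., F_d through a copula C with standard uniform margins:

```latex
H(x_1, \dots, x_d) = C\bigl(F_1(x_1), \dots, F_d(x_d)\bigr),
\qquad (x_1, \dots, x_d) \in \mathbb{R}^d,
```

and C is unique on the range of the margins when the margins are continuous.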
Written at a readily accessible level, "Basic Data Analysis for Time Series with R" emphasizes the mathematical importance of collaborative analysis of data used to collect increments of time or space. Balancing a theoretical and practical approach to analyzing data within the context of serial correlation, the book presents a coherent and systematic regression-based approach to model selection. The book illustrates these principles of model selection and model building through the use of information criteria, cross validation, hypothesis tests, and confidence intervals. Focusing on frequency- and time-domain and trigonometric regression as the primary themes, the book also includes modern topical coverage of Fourier series and Akaike's Information Criterion (AIC). In addition, "Basic Data Analysis for Time Series with R" also features: real-world examples to provide readers with practical hands-on experience; multiple R software subroutines employed with graphical displays; numerous exercise sets intended to support readers' understanding of the core concepts; and specific chapters devoted to the analysis of the Wolf sunspot number data and the Vostok ice core data sets.
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
This book is the first modern treatment of experimental designs, providing a comprehensive introduction to the interrelationship between the theory of optimal designs and the theory of cubature formulas in numerical analysis. It also offers original new ideas for constructing optimal designs. The book opens with some basics on reproducing kernels, and builds up to more advanced topics, including bounds for the number of cubature formula points, equivalence theorems for statistical optimalities, and the Sobolev Theorem for the cubature formula. It concludes with a functional analytic generalization of the above classical results. Although it is intended for readers who are interested in recent advances in the construction theory of optimal experimental designs, the book is also useful for researchers seeking rich interactions between optimal experimental designs and various mathematical subjects such as spherical designs in combinatorics and cubature formulas in numerical analysis, both closely related to embeddings of classical finite-dimensional Banach spaces in functional analysis and Hilbert identities in elementary number theory. Moreover, it provides a novel communication platform for "design theorists" in a wide variety of research fields.
SAP ABAP (Advanced Business Application Programming) offers a detailed tutorial on the numerous features of the core programming platform used for development across the entire SAP software suite. SAP ABAP uses hands-on, business-oriented use cases and a valuable dedicated e-resource to demonstrate the underlying advanced concepts of the OO ABAP environment and the SAP UI. SAP ABAP covers the latest version (NetWeaver 7.3 and SAP application programming release 6.0) of the platform for demonstrating the customization and implementation phases of SAP software implementation. Avoiding theoretical treatments and preoccupation with language syntax, SAP ABAP is a comprehensive, practical one-stop solution, which demonstrates and conveys the language's commands and features through hands-on examples. The accompanying e-resource is a take-off point to the book. SAP ABAP works in tandem with the accompanying e-resource to create an interactive learning environment where the book provides a brief description and an overview of a specified feature/command, showing and discussing the corresponding code. At the reader's option, the user can utilize the accompanying e-resource, where a step-by-step guide to creating and running the feature's object is available. The presentation of the features is scenario oriented, i.e. most of the features are demonstrated in terms of small business scenarios. The e-resource contains the scenario descriptions, screen shots, detailed screen cams and ABAP program source to enable the reader to create all objects related to the scenario and run/execute them. The underlying concepts of a feature/command are conveyed through execution of these hands-on programs. Further exercises to be performed independently by the reader are also proposed. The demonstration/illustration objects, including the programs, rely on some of the SAP application tables being populated, for example an IDES system, which is now a de facto system for all SAP training related activities.
The 2nd edition of R for Marketing Research and Analytics continues to be the best place to learn R for marketing research. This book is a complete introduction to the power of R for marketing research practitioners. The text describes statistical models from a conceptual point of view with a minimal amount of mathematics, presuming only an introductory knowledge of statistics. Hands-on chapters accelerate the learning curve by asking readers to interact with R from the beginning. Core topics include the R language, basic statistics, linear modeling, and data visualization, which is presented throughout as an integral part of analysis. Later chapters cover more advanced topics yet are intended to be approachable for all analysts. These sections examine logistic regression, customer segmentation, hierarchical linear modeling, market basket analysis, structural equation modeling, and conjoint analysis in R. The text uniquely presents Bayesian models with a minimally complex approach, demonstrating and explaining Bayesian methods alongside traditional analyses for analysis of variance, linear models, and metric and choice-based conjoint analysis. With its emphasis on data visualization, model assessment, and development of statistical intuition, this book provides guidance for any analyst looking to develop or improve skills in R for marketing applications. The 2nd edition increases the book's utility for students and instructors with the inclusion of exercises and classroom slides. At the same time, it retains all of the features that make it a vital resource for practitioners: non-mathematical exposition, examples modeled on real world marketing problems, intuitive guidance on research methods, and immediately applicable code.
This book deals with problems related to the evaluation of customer satisfaction in very different contexts and ways. Often satisfaction with a product or service is investigated through suitable surveys which try to capture satisfaction with the several partial aspects that characterize the perceived quality of that product or service. This book presents a series of statistical techniques adopted to analyze data from real situations where customer satisfaction surveys were performed. The aim is to give a simple guide to the variety of analyses that can be performed when analyzing data from sample surveys, ranging from latent variable models to heterogeneity in satisfaction, and also introducing some testing methods for comparing different customers. The book also discusses the construction of composite indicators, including different benchmarks of satisfaction. Finally, some rank-based procedures for analyzing survey data are also shown.
This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, the likelihood approach, pseudo-likelihoods, regression, time series, and data analysis, as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.
This book presents a detailed description of the development of statistical theory. In the mid-twentieth century, the development of mathematical statistics underwent an enduring change, due to the advent of more refined mathematical tools. New concepts like sufficiency, superefficiency and adaptivity motivated scholars to reflect upon the interpretation of mathematical concepts in terms of their real-world relevance. Questions concerning the optimality of estimators, for instance, had remained unanswered for decades, because a meaningful concept of optimality (based on the regularity of the estimators, the representation of their limit distribution and assertions about their concentration by means of Anderson's Theorem) was not yet available. The rapidly developing asymptotic theory provided approximate answers to questions for which non-asymptotic theory had found no satisfying solutions. In four engaging essays, this book presents a detailed description of how the use of mathematical methods stimulated the development of a statistical theory. Primarily focused on methodology, questionable proofs and neglected questions of priority, the book offers an intriguing resource for researchers in theoretical statistics, and can also serve as a textbook for advanced courses in statistics.
The subject of this book stands at the crossroads of ergodic theory and measurable dynamics. With an emphasis on irreversible systems, the text presents a framework of multi-resolutions tailored for the study of endomorphisms, beginning with a systematic look at the latter. This entails a whole new set of tools, often quite different from those used for the "easier" and well-documented case of automorphisms. Among them is the construction of a family of positive operators (transfer operators), arising naturally as a dual picture to that of endomorphisms. The setting (close to one initiated by S. Karlin in the context of stochastic processes) is motivated by a number of recent applications, including wavelets, multi-resolution analyses, dissipative dynamical systems, and quantum theory. The automorphism-endomorphism relationship has parallels in operator theory, where the distinction is between unitary operators in Hilbert space and more general classes of operators such as contractions. There is also a non-commutative version: While the study of automorphisms of von Neumann algebras dates back to von Neumann, the systematic study of their endomorphisms is more recent; together with the results in the main text, the book includes a review of recent related research papers, some by the co-authors and their collaborators.
The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, and analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce the theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end-of-chapter exercises, downloadable SAS code and data sets, and advanced material suitable for a second course in applied statistics, with every method explained using SAS analysis to illustrate a real-world problem. New to this edition: * Covers SAS v9.2 and incorporates new commands * Uses SAS ODS (output delivery system) for reproduction of tables and graphics output * Presents new commands needed to produce ODS output * All chapters rewritten for clarity * New and updated examples throughout * All SAS outputs are new and updated, including graphics * More exercises and problems * Completely new chapter on analysis of nonlinear and generalized linear models * Completely new appendix. Mervyn G. Marasinghe, PhD, is Associate Professor Emeritus of Statistics at Iowa State University, where he has taught courses in statistical methods and statistical computing. Kenneth J. Koehler, PhD, is University Professor of Statistics at Iowa State University, where he teaches courses in statistical methodology at both graduate and undergraduate levels and primarily uses SAS to supplement his teaching.
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the links and interplay between ostensibly diverse techniques.
The quantity, diversity and availability of transport data is increasing rapidly, requiring new skills in the management and interrogation of data and databases. Recent years have seen a new wave of 'big data', 'Data Science', and 'smart cities' changing the world, with the Harvard Business Review describing Data Science as the "sexiest job of the 21st century". Transportation professionals and researchers need to be able to use data and databases in order to establish quantitative, empirical facts, and to validate and challenge their mathematical models, whose axioms have traditionally often been assumed rather than rigorously tested against data. This book takes a highly practical approach to learning about Data Science tools and their application to investigating transport issues. The focus is principally on practical, professional work with real data and tools, including business and ethical issues.
"Transport modeling practice was developed in a data poor world, and many of our current techniques and skills are building on that sparsity. In a new data rich world, the required tools are different and the ethical questions around data and privacy are definitely different. I am not sure whether current professionals have these skills; and I am certainly not convinced that our current transport modeling tools will survive in a data rich environment. This is an exciting time to be a data scientist in the transport field. We are trying to get to grips with the opportunities that big data sources offer; but at the same time such data skills need to be fused with an understanding of transport, and of transport modeling. Those with these combined skills can be instrumental in providing better, faster, cheaper data for transport decision-making, and ultimately contribute to innovative, efficient, data driven modeling techniques of the future. It is not surprising that this course, this book, has been authored by the Institute for Transport Studies. To do this well, you need a blend of academic rigor and practical pragmatism. There are few educational or research establishments better equipped to do that than ITS Leeds." - Tom van Vuren, Divisional Director, Mott MacDonald
"WSP is proud to be a thought leader in the world of transport modelling, planning and economics, and has a wide range of opportunities for people with skills in these areas. The evidence base and forecasts we deliver to effectively implement strategies and schemes are ever more data and technology focused, a trend we have helped shape since the 1970s, but with particular disruption and opportunity in recent years. As a result of these trends, and to suitably skill the next generation of transport modellers, we asked the world-leading Institute for Transport Studies to boost skills in these areas, and they have responded with a new MSc programme which you too can now study via this book." - Leighton Cardwell, Technical Director, WSP
"From processing and analysing large datasets, to automation of modelling tasks sometimes requiring different software packages to "talk" to each other, to data visualization, SYSTRA employs a range of techniques and tools to provide our clients with deeper insights and effective solutions. This book does an excellent job in giving you the skills to manage, interrogate and analyse databases, and develop powerful presentations. Another important publication from ITS Leeds." - Fitsum Teklu, Associate Director (Modelling & Appraisal), SYSTRA Ltd
"Urban planning has relied for decades on statistical and computational practices that have little to do with mainstream data science. Information is still often used as evidence on the impact of new infrastructure even when it hardly contains any valid evidence. This book is an extremely welcome effort to provide young professionals with the skills needed to analyse how cities and transport networks actually work. The book is also highly relevant to anyone who will later want to build digital solutions to optimise urban travel based on emerging data sources." - Yaron Hollander, author of "Transport Modelling for a Complete Beginner"
Improve your audit results and extend your capabilities with Mastering IDEAScript: The Definitive Guide. Run audit programs and data analysis projects with ease. Create a local automated audit system. Meet current audit standards. Do it all with Mastering IDEAScript, the official guide from CaseWare IDEA(R). Designed to help the complete novice develop the skills required to write applications using IDEAScript, this resource starts with simple topics, progressively working up to more complex areas. It helps you understand what automation can help you do and set reasonable goals for working with IDEAScript. Although basic familiarity with the IDEA(R) program is helpful in the use of this book's concepts, neither programming skills nor special equipment is required. Here, you'll find plain-English, easy-to-follow explanations for: creating your first IDEAScript application; using the IDEAScript Editor; writing code quickly and efficiently; building complete applications without any code at all, using the Macro Recorder; troubleshooting errors that occur in your application; performing basic tasks, such as indexing, sorting, and closing your database; finding information easily within databases; and much more. Along with a companion website containing all of the scripts found in this book, Mastering IDEAScript: The Definitive Guide is packed with practical techniques and rules of thumb to help you understand the workings of IDEAScript. The days of filling in forms and answering countless questions when running audit programs are over. From now on, let IDEAScript do the work. IDEA(R) is a leading provider of data analysis software targeted to auditors to use as a tool for fraud detection and internal control assessment. IDEA(R) software is used in sixteen languages in more than ninety countries, by major accounting firms, governments, and corporations in all industry sectors, as well as by universities as a teaching tool. IDEA(R) is a registered trademark of CaseWare International Inc.
This book provides practical applications of doubly classified models by using R syntax to generate the models. It also presents these models in symbolic tables so as to cater to those who are not mathematically inclined, while numerous examples throughout the book illustrate the concepts and their applications. For those who are not aware of this modeling approach, it serves as a good starting point to acquire a basic understanding of doubly classified models. It is also a valuable resource for academics, postgraduate students, undergraduates, data analysts and researchers who are interested in examining square contingency tables.