In honor of professor and renowned statistician R. Dennis Cook, this festschrift explores his influential contributions to an array of statistical disciplines, ranging from experimental design and population genetics to statistical diagnostics and all areas of regression-related inference and analysis. Since the early 1990s, Prof. Cook has led the development of dimension reduction methodology in three distinct but related regression contexts: envelopes, sufficient dimension reduction (SDR), and regression graphics. In particular, he has made fundamental and pioneering contributions to SDR, inventing or co-inventing many popular dimension reduction methods, such as sliced average variance estimation, the minimum discrepancy approach, model-free variable selection, and sufficient dimension reduction subspaces. A prolific researcher and mentor, Prof. Cook is known for his ability to identify research problems in statistics that are both challenging and important, as well as for his deep appreciation of the applied side of statistics. This collection of contributions from Prof. Cook's collaborators, colleagues, friends, and former students reflects the breadth of his impact on the research and instructional arenas of statistics.
Notable author Katsuhiko Ogata presents the only new book available to discuss, in sufficient detail, the MATLAB(R) materials needed to solve many analysis and design problems associated with control systems. Complements a large number of examples with in-depth explanations, encouraging complete understanding of the MATLAB approach to solving problems. Distills the large volume of MATLAB information available to focus on those materials needed to study analysis and design problems of deterministic, continuous-time control systems. Covers conventional control topics such as transient-response, root-locus, and frequency-response analyses and designs; analysis and design problems associated with the state-space formulation of control systems; and useful MATLAB approaches to solving optimization problems. A useful self-study guide for practicing control engineers.
This Brief provides a roadmap for the R language and programming environment with signposts to further resources and documentation.
This highly anticipated second edition features new chapters and sections, 225 new references, and comprehensive R software. In keeping with the previous edition, this book is about the art and science of data analysis and predictive modelling, which entails choosing and using multiple tools. Instead of presenting isolated techniques, this text emphasises problem-solving strategies that address the many issues arising when developing multivariable models using real data and not standard textbook examples. Regression Modelling Strategies presents full-scale case studies of non-trivial datasets instead of over-simplified illustrations of each method. These case studies use freely available R functions that make the multiple imputation, model building, validation and interpretation tasks described in the book relatively easy to do. Most of the methods in this text apply to all regression models, but special emphasis is given to multiple regression using generalised least squares for longitudinal data, the binary logistic model, models for ordinal responses, parametric survival regression models and the Cox semiparametric survival model. A new emphasis is given to the robust analysis of continuous dependent variables using ordinal regression. As in the first edition, this text is intended for Master's- or PhD-level graduate students who have had a general introductory probability and statistics course and who are well versed in ordinary multiple regression and intermediate algebra. The book will also serve as a reference for data analysts and statistical methodologists, as it contains an up-to-date survey and bibliography of modern statistical modelling techniques.
Adjustment Models in 3D Geomatics and Computational Geophysics: With MATLAB Examples, Volume Four introduces a complete package of theoretical and practical subjects in adjustment computations relating to Geomatics and geophysical applications, particularly photogrammetry, surveying, remote sensing, GIS, cartography, and geodesy. Supported by illustrating figures and solved examples with MATLAB codes, the book provides clear methods for processing 3D data for accurate and reliable results. Problems cover free net adjustment, adjustment with constraints, blunder detection, RANSAC, robust estimation, error propagation, 3D co-registration, image pose determination, and more.
SPSS syntax is the command language used by SPSS to carry out all of its commands and functions. In this book, Jacqueline Collier introduces the use of syntax to those who have not used it before, or who are taking their first steps in using syntax. Without requiring any knowledge of programming, the text outlines how to become familiar with the syntax commands; how to create and manage the SPSS journal and syntax files; and how to use them throughout the data entry, management and analysis process. Collier covers all aspects of data management, from data entry through to data analysis, including managing errors and the error messages created by SPSS. Syntax commands are clearly explained and the value of syntax is demonstrated through examples. The book also supports the use of SPSS syntax alongside the usual button- and menu-driven graphical user interface (GUI), using the two methods together in a complementary way. It is written in such a way as to enable you to pick and choose how much you rely on one method over the other, encouraging you to use them side by side, with a gradual increase in the use of syntax as your knowledge, skills and confidence develop. This book is ideal for all those carrying out quantitative research in the health and social sciences who can benefit from SPSS syntax's capacity to save time, reduce errors and provide a data audit trail.
In "Large-Scale Scrum," Craig Larman and Bas Vodde offer the most direct, concise, actionable guide to reaping the full benefits of agile in distributed, global enterprises. Larman and Vodde have distilled their immense experience helping geographically distributed development organizations move to agile. Going beyond their previous books, they offer today's fastest, most focused guidance: "brass tacks" advice and field-proven best practices for achieving value fast, and achieving even more value as you move forward. Targeted to enterprise project participants and stakeholders, "Large-Scale Scrum" offers straight-to-the-point insights for scaling Scrum across the entire project lifecycle, from sprint planning to retrospective. Larman and Vodde help you:
This book chronicles a 10-year introduction of blended learning into course delivery at a leading technological university with a longstanding tradition of technology-enabled teaching and learning and state-of-the-art infrastructure; both teachers and students were therefore already familiar with the idea of online courses. Despite this, the longitudinal experiment did not proceed as expected. Although there were few technical problems, it required behavioural changes from teachers and learners, unearthing a host of socio-technical issues, challenges, and conundrums. Given the undercurrent of design ideals such as "tech for good", any industrial sector must examine whether digital platforms are credible substitutes or, at best, complements. In this era of Industry 4.0, higher education, like any other industry, should not be about the creative destruction of what we value in universities, but about their digital transformation. The book concludes with an agenda for large, repeatable Randomised Controlled Trials (RCTs) to validate digital platforms that could fulfil the aspirations of the key stakeholder groups - students, faculty, and regulators - as well as delving into the role of Massive Open Online Courses (MOOCs) as surrogates for "fees-free" higher education and asking whether the design of such a HiEd 4.0 platform is even a credible proposition. Specifically, the book examines data-driven evidence within a design-based research methodology to present the outcomes of two alternative instructional designs evaluated: traditional lecturing and blended learning. Based on the research findings and statistical analysis, it concludes that the inexorable shift to online delivery of education must be guided by informed educational management and innovation.
Because of its large command structure and intricate syntax, Mathematica can be difficult to learn. Wolfram's Mathematica manual, while certainly comprehensive, is so large and complex that when trying to learn the software from scratch -- or find answers to specific questions -- one can be quickly overwhelmed. A Beginner's Guide to Mathematica offers a simple, step-by-step approach to help math-savvy newcomers build the skills needed to use the software in practice. Concise and easy to use, this book teaches by example and points out potential pitfalls along the way. The presentation starts with simple problems and discusses multiple solution paths, ranging from basic to elegant, to gradually introduce the Mathematica toolkit. More challenging and eventually cutting-edge problems follow. The authors place high value on notebook and file system organization, cross-platform capabilities, and data reading and writing. The text features an array of error messages you will likely encounter and clearly describes how to deal with those situations. While it is by no means exhaustive, this book offers a non-threatening introduction to Mathematica that will teach you the aspects needed for many practical applications, get you started on performing specific, relatively simple tasks, and enable you to build on this experience and move on to more real-world problems.
Group method of data handling (GMDH) is a typical inductive modelling method built on the principles of self-organization. Since its introduction, inductive modelling has been developed to support complex systems in prediction, clusterization, system identification, as well as data mining and knowledge extraction technologies in social science, science, engineering, and medicine. This is the first book to explore GMDH using the MATLAB (matrix laboratory) language. Readers will learn how to implement GMDH in MATLAB as a method of dealing with big data analytics. Error-free source codes in MATLAB have been included in the supplementary material (accessible online) to assist users in their understanding of GMDH and to make it easy for them to further develop variations of GMDH algorithms.
The investigation of the role of mechanical and mechano-chemical interactions in cellular processes and tissue development is a rapidly growing research field in the life sciences and in biomedical engineering. Quantitative understanding of this important area in the study of biological systems requires the development of adequate mathematical models for the simulation of the evolution of these systems in space and time. Since expertise in various fields is necessary, this calls for a multidisciplinary approach. This edited volume connects basic physical, biological, and physiological concepts to methods for the mathematical modeling of various materials by pursuing a multiscale approach, from subcellular to organ and system level. Written by active researchers, each chapter provides a detailed introduction to a given field, illustrates various approaches to creating models, and explores recent advances and future research perspectives. Topics covered include molecular dynamics simulations of lipid membranes, phenomenological continuum mechanics of tissue growth, and translational cardiovascular modeling. Modeling Biomaterials will be a valuable resource for both non-specialists and experienced researchers from various domains of science, such as applied mathematics, biophysics, computational physiology, and medicine.
This open access book examines the implications of internal crowdsourcing (IC) in companies. Presenting an employee-oriented, cross-sector reference model for good IC practice, it discusses the core theoretical foundations, and offers guidelines for process-management and blueprints for the implementation of IC. Furthermore, it examines solutions for employee training and competence development based on crowdsourcing. As such, the book will appeal to scholars of management science, work studies, organizational and participation research and to readers interested in inclusive approaches for cooperative change management and the IT implications for IC platforms.
Cyber-solutions to real-world business problems. Artificial Intelligence in Practice is a fascinating look into how companies use AI and machine learning to solve problems. Presenting 50 case studies of actual situations, this book demonstrates practical applications to issues faced by businesses around the globe. The rapidly evolving field of artificial intelligence has expanded beyond research labs and computer science departments and made its way into the mainstream business environment. Artificial intelligence and machine learning are cited as the most important modern business trends driving success, and they are used in areas ranging from banking and finance to social media and marketing. This technology continues to provide innovative solutions to businesses of all sizes, sectors and industries. This engaging and topical book explores a wide range of cases illustrating how businesses use AI to boost performance, drive efficiency, analyse market preferences and more. Best-selling author and renowned AI expert Bernard Marr reveals how machine learning technology is transforming the way companies conduct business. This detailed examination provides an overview of each company, describes the specific problem and explains how AI facilitates its resolution. Each case study provides a comprehensive overview, including some technical details as well as key learning summaries: understand how specific business problems are addressed by innovative machine learning methods; explore how current artificial intelligence applications improve performance and increase efficiency in various situations; expand your knowledge of recent AI advancements in technology; and gain insight into the future of AI and its increasing role in business and industry. Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence to Solve Problems is an insightful and informative exploration of the transformative power of technology in 21st-century commerce.
The Distributed and Unified Numerics Environment (Dune) is a set of open-source C++ libraries for the implementation of finite element and finite volume methods. Over the last 15 years it has become one of the most commonly used libraries for the implementation of new, efficient simulation methods in science and engineering. Describing the main Dune libraries in detail, this book covers access to core features like grids, shape functions, and linear algebra, but also higher-level topics like function space bases and assemblers. It includes extensive information on programmer interfaces, together with a wealth of complete examples that illustrate how these interfaces are used in practice. After having read the book, readers will be prepared to write their own advanced finite element simulators, tapping the power of Dune to do so.
There's a lot more to the blockchain than mining Bitcoin. This secure system for registering and verifying ownership and identity is perfect for supply chain logistics, health records, and other sensitive data management tasks. Blockchain in Action unlocks the full potential of this revolutionary technology, showing you how to build your own decentralized apps for secure applications including digital democracy, private auctions, and electronic record management. Key Features:
* How blockchain differs from other distributed systems
* Smart contract development with Ethereum and the Solidity language
* Web UI for decentralized apps
* Identity, privacy and security techniques
* On-chain and off-chain data storage
For intermediate programmers who know the basics of object-oriented languages and have a working knowledge of JavaScript. About the technology: A blockchain is a decentralized record, stored across numerous devices with no central control or authority. Copies of this shared database are constantly reconciled with one another, and records are cryptographically encoded to make them unchangeable. The result is a type of database that is at once transparent and publicly accessible, and where it is impossible to falsify or alter the historic data record. Bina Ramamurthy holds a Ph.D. in fault-tolerant distributed systems, and has thirty years of experience teaching cryptography, peer-to-peer networking, and distributed systems. She is the instructor and content creator for the University at Buffalo's four-course specialization on blockchain technology on the Coursera MOOC platform, and the recipient of the 2019 SUNY Chancellor's Award for Teaching Excellence.
Arguably the strongest addition to numerical finance of the past decade, Algorithmic Adjoint Differentiation (AAD) is the technology implemented in modern financial software to produce thousands of accurate risk sensitivities, within seconds, on light hardware. AAD recently became a centerpiece of modern financial systems and a key skill for all quantitative analysts, developers, risk professionals or anyone involved with derivatives. It is increasingly taught in Master's and PhD programs in finance. Danske Bank's wide-scale implementation of AAD in its production and regulatory systems won the In-House System of the Year 2015 Risk award. The Modern Computational Finance books, written by three of the very people who designed Danske Bank's systems, offer a unique insight into the modern implementation of financial models. The volumes combine financial modelling, mathematics and programming to resolve real-life financial problems and produce effective derivatives software. This volume is a complete, self-contained learning reference for AAD and its application in finance. AAD is explained in deep detail throughout chapters that gently lead readers from the theoretical foundations to the most delicate areas of an efficient implementation, such as memory management, parallel implementation and acceleration with expression templates. The book comes with professional source code in C++, including an efficient, up-to-date implementation of AAD and a generic parallel simulation library. Modern C++, high-performance parallel programming and interfacing C++ with Excel are also covered. The book builds the code step by step, while the code illustrates the concepts and notions developed in the book.
Highlighting the latest advances in nonparametric and semiparametric statistics, this book gathers selected peer-reviewed contributions presented at the 4th Conference of the International Society for Nonparametric Statistics (ISNPS), held in Salerno, Italy, on June 11-15, 2018. It covers theory, methodology, applications and computational aspects, addressing topics such as nonparametric curve estimation, regression smoothing, models for time series and more generally dependent data, varying coefficient models, symmetry testing, robust estimation, and rank-based methods for factorial design. It also discusses nonparametric and permutation solutions for several different types of data, including ordinal data, spatial data, survival data and the joint modeling of both longitudinal and time-to-event data, permutation and resampling techniques, and practical applications of nonparametric statistics. The International Society for Nonparametric Statistics is a unique global organization, and its international conferences are intended to foster the exchange of ideas and the latest advances and trends among researchers from around the world and to develop and disseminate nonparametric statistics knowledge. The ISNPS 2018 conference in Salerno was organized with the support of the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and the University of Salerno.
Starting from a basic knowledge of mathematics and mechanics gained in standard foundation classes, "Theory of Lift: Introductory Computational Aerodynamics in MATLAB/Octave" takes the reader conceptually from the fundamental mechanics of lift to the stage of actually being able to make practical calculations and predictions of the coefficient of lift for realistic wing profile and planform geometries. The classical framework and methods of aerodynamics are covered in detail, and the reader is shown how they may be used to develop simple yet powerful MATLAB or Octave programs that accurately predict and visualise the dynamics of real wing shapes, using lumped vortex, panel, and vortex lattice methods. This book contains all the mathematical development and formulae required in standard incompressible aerodynamics as well as dozens of small but complete working programs which can be put to use immediately using either the popular MATLAB or the free Octave computational modelling packages. Key features: Synthesizes the classical foundations of aerodynamics with hands-on computation, emphasizing interactivity and visualization. Includes complete source code for all programs, all listings having been tested for compatibility with both MATLAB and Octave. Companion website (www.wiley.com/go/mcbain) hosting codes and solutions. "Theory of Lift: Introductory Computational Aerodynamics in MATLAB/Octave" is an introductory text for graduate and senior undergraduate students on aeronautical and aerospace engineering courses and also forms a valuable reference for engineers and designers.
Construct, analyze, and visualize networks with networkx, a Python language module. Network analysis is a powerful tool you can apply to a multitude of datasets and situations. Discover how to work with all kinds of networks, including social, product, temporal, spatial, and semantic networks. Convert almost any real-world data into a complex network--such as recommendations on co-using cosmetic products, muddy hedge fund connections, and online friendships. Analyze and visualize the network, and make business decisions based on your analysis. If you're a curious Python programmer, a data scientist, or a CNA specialist interested in mechanizing mundane tasks, you'll increase your productivity exponentially. Complex network analysis used to be done by hand or with non-programmable network analysis tools, but not anymore! You can now automate and program these tasks in Python. Complex networks are collections of connected items, words, concepts, or people. By exploring their structure and individual elements, we can learn about their meaning, evolution, and resilience. Starting with simple networks, convert real-life and synthetic network graphs into networkx data structures. Look at more sophisticated networks and learn more powerful machinery to handle centrality calculation, blockmodeling, and clique and community detection. Get familiar with presentation-quality network visualization tools, both programmable and interactive--such as Gephi, a CNA explorer. Adapt the patterns from the case studies to your problems. Explore big networks with NetworKit, a high-performance networkx substitute. Each part in the book gives you an overview of a class of networks, includes a practical study of networkx functions and techniques, and concludes with case studies from various fields, including social networking, anthropology, marketing, and sports analytics. Combine your CNA and Python programming skills to become a better network analyst, a more accomplished data scientist, and a more versatile programmer. What You Need: You will need a Python 3.x installation with the following additional modules: Pandas (>=0.18), NumPy (>=1.10), matplotlib (>=1.5), networkx (>=1.11), python-louvain (>=0.5), NetworKit (>=3.6), and generalizedsimilarity. We recommend using the Anaconda distribution that comes with all these modules, except for python-louvain, NetworKit, and generalizedsimilarity, and works on all major modern operating systems.
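As a flavour of the workflow described above, the following minimal sketch (not code from the book; the product names and the particular centrality and community-detection calls are illustrative assumptions) builds a small networkx graph, ranks nodes by centrality, and detects communities:

```python
# Hypothetical sketch: build a small co-purchase network, rank nodes by a
# centrality measure, and detect communities. Not code from the book.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "co-using cosmetic products" network: an edge means two products
# were used together by the same customers (invented data).
G = nx.Graph()
G.add_edges_from([
    ("shampoo", "conditioner"),
    ("shampoo", "soap"),
    ("conditioner", "hair oil"),
    ("soap", "hand cream"),
    ("hand cream", "lotion"),
])

# Which product bridges the most shortest paths between the others?
betweenness = nx.betweenness_centrality(G)
print(sorted(betweenness.items(), key=lambda kv: -kv[1])[:3])

# Community detection; python-louvain (listed among the book's
# requirements) offers an alternative Louvain-based approach.
for community in greedy_modularity_communities(G):
    print(sorted(community))
```

The same pattern - construct or load a graph, compute node-level measures, then look for group structure - scales up to the book's case studies, with NetworKit available as a high-performance substitute when graphs grow large.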
This book brings together two major trends: data science and blockchains. It is one of the first books to systematically cover the analytics aspects of blockchains, with the goal of linking traditional data mining research communities with novel data sources. Data science and big data technologies can be considered cornerstones of the data-driven digital transformation of organizations and society. The concept of blockchain is predicted to enable and spark transformation on par with that associated with the invention of the Internet. Cryptocurrencies are the first successful use case of highly distributed blockchains, much as the World Wide Web was for the Internet. The book takes the reader through basic data exploration topics, proceeding systematically, method by method, through supervised and unsupervised learning approaches and information visualization techniques, all the way to understanding blockchain data from the network science perspective. Chapters introduce the cryptocurrency blockchain data model and methods to explore it using structured query language, association rules, clustering, classification, visualization, and network science. Each chapter introduces basic concepts, presents examples with real cryptocurrency blockchain data, and offers exercises and questions for further discussion. This approach is intended to serve as a good starting point for undergraduate and graduate students to learn data science topics using cryptocurrency blockchain examples. It is also aimed at researchers and analysts who already possess good analytical and data skills, but who do not yet have the specific knowledge to tackle analytic questions about blockchain transactions. Readers will improve their knowledge of essential data science techniques for turning mere transactional information into social, economic, and business insights.
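To give a concrete sense of the kind of exploration described above, here is a minimal, hypothetical Python sketch (not taken from the book); a toy transaction table with invented column names and amounts is aggregated SQL-style with pandas and then viewed as a directed graph with networkx:

```python
# Hypothetical sketch: explore toy "blockchain transaction" records with
# pandas, then treat addresses as nodes in a directed transaction graph.
import pandas as pd
import networkx as nx

# Invented transaction table: sender address, receiver address, amount.
tx = pd.DataFrame({
    "from_addr": ["a1", "a1", "a2", "a3"],
    "to_addr":   ["a2", "a3", "a3", "a1"],
    "amount":    [0.5, 1.2, 0.3, 2.0],
})

# Basic exploration (the SQL-style view): total volume sent per address.
print(tx.groupby("from_addr")["amount"].sum())

# Network-science view: addresses as nodes, transfers as weighted edges.
G = nx.from_pandas_edgelist(tx, source="from_addr", target="to_addr",
                            edge_attr="amount", create_using=nx.DiGraph)
print(nx.degree_centrality(G))
```

Features derived from such tables and graphs are then the natural inputs for the clustering, classification, and visualization methods the description mentions.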
Review of the First Edition: "The authors strive to reduce theory to a minimum, which makes it a self-learning text that is comprehensible for biologists, physicians, etc. who lack an advanced mathematics background. Unlike in many other textbooks, R is not introduced with meaningless toy examples; instead the reader is taken by the hand and shown around some analyses, graphics, and simulations directly relating to meta-analysis... A useful hands-on guide for practitioners who want to familiarize themselves with the fundamentals of meta-analysis and get started without having to plough through theorems and proofs." - Journal of Applied Statistics. Statistical Meta-Analysis with R and Stata, Second Edition provides a thorough presentation of statistical meta-analyses (MA) with step-by-step implementations using R/Stata. The authors develop analysis step by step using appropriate R/Stata functions, which enables readers to gain an understanding of meta-analysis methods and R/Stata implementation so that they can use these two popular software packages to analyze their own meta-data. Each chapter gives examples of real studies compiled from the literature. After presenting the data and necessary background for understanding the applications, various methods for analyzing meta-data are introduced. The authors then develop analysis code using the appropriate R/Stata packages and functions. What's New in the Second Edition: Adds Stata programs along with the R programs for meta-analysis. Updates all the statistical meta-analyses with R/Stata programs. Covers fixed-effects and random-effects MA, meta-regression, MA with rare events, and MA-IPD vs MA-SS. Adds five new chapters on multivariate MA, publication bias, missing data in MA, MA in evaluating diagnostic accuracy, and network MA. Suitable as a graduate-level text for a meta-data analysis course, the book is also a valuable reference for practitioners and biostatisticians (even those with little or no experience in using R or Stata) in public health, medical research, governmental agencies, and the pharmaceutical industry.
This book examines current topics and trends in strategic auditing, accounting and finance in digital transformation, from both a theoretical and a practical perspective. It covers areas such as internal control, corporate governance, enterprise risk management, sustainability and competition. The contributors to this volume emphasize how strategic approaches in this area help companies achieve their targets. The contributions illustrate how, by providing good governance, reliable financial reporting, and accountability, businesses can win a competitive advantage. The book further discusses how new technological developments like artificial intelligence (AI), cybersystems, network technologies, financial mobility and smart applications will shape the future of accounting and auditing for firms.
An accessible primer on how to create effective graphics from data. This book provides students and researchers with a hands-on introduction to the principles and practice of data visualization. It explains what makes some graphs succeed while others fail, how to make high-quality figures from data using powerful and reproducible methods, and how to think about data visualization in an honest and effective way. Data Visualization builds the reader's expertise in ggplot2, a versatile visualization library for the R programming language. Through a series of worked examples, this accessible primer then demonstrates how to create plots piece by piece, beginning with summaries of single variables and moving on to more complex graphics. Topics include plotting continuous and categorical variables; layering information on graphics; producing effective "small multiple" plots; grouping, summarizing, and transforming data for plotting; creating maps; working with the output of statistical models; and refining plots to make them more comprehensible. Effective graphics are essential to communicating ideas and a great way to better understand data. This book provides the practical skills students and practitioners need to visualize quantitative data and get the most out of their research findings. Provides hands-on instruction using R and ggplot2. Shows how the "tidyverse" of data analysis tools makes working with R easier and more consistent. Includes a library of data sets, code, and functions.
You may like...

Exploring Christian Song
M. Jennifer Bloxam, Andrew Shenton
Hardcover

From Medieval Pilgrimage to Religious…
William H. Swatos, Luigi Tomasi
Hardcover
R2,766 (Discovery Miles 27 660)

The Seraphic Order - A Traditional…
Marion A Habig, Aquinas Barth
Hardcover
R1,237 (Discovery Miles 12 370)