Building the Agile Enterprise with Capabilities, Collaborations and Values, Second Edition covers advances that make technology more powerful and pervasive while, at the same time, improving the alignment of technology with business. Using numerous examples, illustrations, and case studies, Fred Cummins, an industry expert, author, and former fellow with EDS and Hewlett Packard, updates his first edition to incorporate the following industry developments:
* The ubiquitous use of the Internet, along with intelligent mobile devices, which have enabled everyone and everything to be connected anytime, anywhere
* The emergence of a "business architecture" discipline that has driven improvements in business design and transformation practices
* The development of CMMN (Case Management Model and Notation), which will provide automation to support the collaboration of knowledge workers and managers
* The development of VDML (Value Delivery Modeling Language), which supports modeling of business design from a management perspective
* The importance of "big data" management and analysis as a new source of insight into the evolution of the business and its ecosystem
* How the architecture of the agile enterprise and business modeling change enterprise governance, management, and innovation
Building the Agile Enterprise with Capabilities, Collaborations and Values, Second Edition is a must-have reference for business leaders, CTOs, business architects, information systems architects, and business process modeling professionals who wish to close the gap between strategic planning and business operations, as well as the gap between business and IT, and enhance the creation and delivery of business value.
Since 1984, Geophysical Data Analysis has filled the need for a short, concise reference on inverse theory for individuals who have an intermediate background in science and mathematics. The new edition maintains the accessible and succinct manner for which it is known, with the addition of:
* MATLAB examples and problem sets
* Advanced color graphics
* Coverage of new topics, including adjoint methods; inversion by steepest descent, Monte Carlo, and simulated annealing methods; and the bootstrap algorithm for determining empirical confidence intervals
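For readers new to inverse theory, the core problem the book treats can be stated compactly in the field's conventional notation (a standard formulation, not quoted from the book): the data d are related to the model parameters m through a linear forward problem, and the simplest estimate of m is the least-squares solution:

\[ \mathbf{d} = \mathbf{G}\mathbf{m}, \qquad \mathbf{m}^{\mathrm{est}} = (\mathbf{G}^{\mathsf{T}}\mathbf{G})^{-1}\mathbf{G}^{\mathsf{T}}\mathbf{d} \]

Methods such as steepest descent, Monte Carlo, and simulated annealing, covered in the new edition, come into play when the problem is too large or too nonlinear for this direct solution.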
Highly recommended by JASA, Technometrics, and other leading statistical journals, the first two editions of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Third Edition continues to lead readers step-by-step through the process of fitting LMMs. The third edition provides a comprehensive update of the available tools for fitting linear mixed-effects models in the newest versions of SAS, SPSS, R, Stata, and HLM. All examples have been updated, with a focus on new tools for visualization of results and interpretation. New conceptual and theoretical developments in mixed-effects modeling have been included, and there is a new chapter on power analysis for mixed-effects models. Features:
* Dedicates an entire chapter to the key theories underlying LMMs for clustered, longitudinal, and repeated measures data
* Provides descriptions, explanations, and examples of software code necessary to fit LMMs in SAS, SPSS, R, Stata, and HLM
* Contains detailed tables of estimates and results, allowing for easy comparisons across software procedures
* Presents step-by-step analyses of real-world data sets that arise from a variety of research settings and study designs, including hypothesis testing, interpretation of results, and model diagnostics
* Integrates software code in each chapter to compare the relative advantages and disadvantages of each package
* Supplemented by a website with software code, datasets, additional documents, and updates
Ideal for anyone who uses software for statistical modeling, this book eliminates the need to read multiple software-specific texts by covering the most popular software programs for fitting LMMs in one handy guide. The authors illustrate the models and methods through real-world examples that enable comparisons of model-fitting options and results across the software procedures.
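As a flavor of the kind of model fitting the book walks through, here is a minimal R sketch using the lme4 package and its bundled sleepstudy data; it is an independent illustration, not code from the book:

# Fit a linear mixed model with a random intercept and slope per subject (illustrative)
library(lme4)

fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

summary(fit)  # fixed effects and variance components
ranef(fit)    # subject-level random effects

The book's contribution is showing how this same kind of analysis is specified, and how its output is read, across SAS, SPSS, R, Stata, and HLM.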
OCEB 2 Certification Guide, Second Edition has been updated to cover version 2 of the BPMN standard and delivers expert insight into BPM from one of the developers of the OCEB Fundamental exam, offering full coverage of the Fundamental exam material for both the business and technical tracks toward further certification. The first study guide prepares candidates to take, and pass, the OCEB Fundamental exam, explaining and building on basic concepts, focusing on key areas, and testing knowledge of all critical topics with sample questions and detailed answers. Suitable for practitioners and those newer to the field, this book provides a solid grounding in business process management based on the authors' own extensive BPM consulting experience.
Computational Finance Using C and C#: Derivatives and Valuation, Second Edition provides derivatives pricing information for equity derivatives, interest rate derivatives, foreign exchange derivatives, and credit derivatives. By providing free access to code in a variety of computer languages, such as Visual Basic/Excel, C++, C, and C#, it gives readers stand-alone examples that they can explore before delving into creating their own applications. It is written for readers with backgrounds in basic calculus, linear algebra, and probability. Strong on mathematical theory, this second edition helps empower readers to solve their own problems.
* Features new programming problems, examples, and exercises for each chapter
* Includes freely accessible source code in languages such as C, C++, VBA, C#, and Excel
* Includes a new chapter on the history of finance, which also covers the 2008 credit crisis and the use of mortgage-backed securities, CDSs, and CDOs
* Emphasizes mathematical theory
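To give a sense of the pricing formulas the book implements, here is a compact Black-Scholes call-price function, written in R for consistency with the other examples on this page (the book itself works in C, C#, and VBA); the function name and parameters are illustrative:

# Black-Scholes price of a European call option (illustrative sketch, not from the book)
# S: spot price, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry
bs_call <- function(S, K, r, sigma, T) {
  d1 <- (log(S / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  S * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
}

bs_call(S = 100, K = 100, r = 0.05, sigma = 0.2, T = 1)  # approximately 10.45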
This book is a valuable read for a diverse group of researchers and practitioners who analyze assessment data and construct test instruments. It focuses on the use of classical test theory (CTT) and item response theory (IRT), which are often required in the fields of psychology (e.g., for measuring psychological traits), health (e.g., for measuring the severity of disorders), and education (e.g., for measuring student performance), and makes these analytical tools accessible to a broader audience. Having taught assessment subjects to students from diverse backgrounds for a number of years, the three authors have a wealth of experience in presenting educational measurement topics, in-depth concepts, and applications in an accessible format. As such, the book addresses the needs of readers who use CTT and IRT in their work but do not necessarily have an extensive mathematical background. The book also sheds light on common misconceptions in applying measurement models, and presents an integrated approach to different measurement methods, such as contrasting CTT with IRT and multidimensional IRT models with unidimensional IRT models. Wherever possible, comparisons between models are explicitly made. In addition, the book discusses concepts for test equating and differential item functioning, as well as Bayesian IRT models and plausible values, using simple examples. This book can serve as a textbook for introductory courses on educational measurement, as supplementary reading for advanced courses, or as a valuable reference guide for researchers interested in analyzing student assessment data.
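As a concrete point of contrast between the two frameworks: where CTT models a total test score, the simplest IRT model (the one-parameter, or Rasch, model) gives the probability that person i answers item j correctly in terms of the person's ability \theta_i and the item's difficulty b_j. In conventional notation (not quoted from the book):

\[ P(X_{ij} = 1 \mid \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)} \]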
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, it is now increasingly used in other research areas as well. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to gain broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small-scale "microanalysis" of single texts to large-scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable, useful, and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It also includes updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
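The sentiment analysis chapter centers on the syuzhet package; the following minimal R sketch shows the kind of call involved (the sample text is invented, not drawn from the book):

# Sentence-level sentiment scoring with syuzhet (illustrative)
library(syuzhet)

text <- "I loved the opening chapter. The ending, however, was a bitter disappointment."
sentences <- get_sentences(text)           # split the text into sentences
get_sentiment(sentences, method = "bing")  # one score per sentence: positive, then negative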
Matrix Algorithms in MATLAB focuses on MATLAB implementations of matrix algorithms. The MATLAB code presented in the book is tested with thousands of runs on randomly generated matrices, and the notation in the book follows MATLAB style to ensure a smooth transition from formulation to code; each MATLAB program discussed in the book is kept to within 100 lines for the sake of clarity. The book provides an overview and classification of the interrelations of various algorithms, as well as numerous examples demonstrating code usage and the properties of the presented algorithms. Despite the wide availability of computer programs for matrix computations, the field continues to be an active area of research and development, with new applications, new algorithms, and improvements to old algorithms constantly emerging.
The ability to preserve electronic evidence is critical to presenting a solid case for civil litigation, as well as in criminal and regulatory investigations. Preserving Electronic Evidence for Trial provides everyone connected with digital forensics investigation and litigation with a clear, practical, hands-on guide to best practices in preserving electronic evidence. Corporate management personnel (legal and IT) and outside counsel need reliable processes for the litigation hold: identifying, locating, and preserving electronic evidence. Preserving Electronic Evidence for Trial provides the road map, showing you how to organize the digital evidence team before the crisis, not in the middle of litigation. This practice handbook, by an internationally known digital forensics expert and an experienced litigator, focuses on what corporate and litigation counsel, as well as IT managers and forensic consultants, need to know to communicate effectively about electronic evidence. You will find tips on how all your team members can get up to speed on each other's areas of specialization before a crisis arises. The result is a plan to effectively identify and pre-train the critical electronic-evidence team members. You will be ready to lead the team to success when a triggering event indicates that litigation is likely, knowing what to ask in order to coordinate effectively with litigation counsel and forensic consultants throughout the litigation process. Your team can also be ready for action in various business strategies, such as merger evaluation and non-litigation conflict resolution.
* A systematic introduction to the major components of the spatial predictive modeling (SPM) process
* Novel hybrid methods (228 hybrids plus numerous variants) that combine modern statistical or machine learning methods with mathematical and/or univariate geostatistical methods
* Novel predictive-accuracy-based variable selection techniques for spatial predictive methods
* Predictive-accuracy-based parameter/model optimization
* Reproducible examples for SPM of various data types in R
Applied Computing in Medicine and Health is a comprehensive presentation of ongoing investigations into current applied computing challenges and advances, with a focus on a particular class of applications: artificial intelligence methods and techniques in medicine and health. Applied computing is the use of practical computer science knowledge to enable the use of the latest technology and techniques in fields ranging from business to scientific research. One of the most important and relevant areas in applied computing is the use of artificial intelligence (AI) in health and medicine. Artificial intelligence in health and medicine (AIHM) takes on the challenge of creating and distributing tools that can support medical doctors and specialists in new endeavors. The material included covers a wide variety of interdisciplinary perspectives on the theory and practice of applied computing in medicine, human biology, and health care. Particular attention is given to AI-based clinical decision-making; medical knowledge engineering; knowledge-based systems in medical education and research; intelligent medical information systems; intelligent databases; intelligent devices and instruments; medical AI tools; reasoning and metareasoning in medicine; and methodological, philosophical, and ethical aspects of intelligent medical data analysis.
Improve Your Analytical Skills
Incorporating the latest R packages as well as new case studies and applications, Using R and RStudio for Data Management, Statistical Analysis, and Graphics, Second Edition covers the aspects of R most often used by statistical analysts. New users of R will find the book's simple approach easy to understand, while more sophisticated users will appreciate the invaluable source of task-oriented information.
New to the Second Edition
* The use of RStudio, which increases the productivity of R users and helps users avoid error-prone cut-and-paste workflows
* A new chapter of case studies illustrating examples of useful data management tasks, reading complex files, making and annotating maps, "scraping" data from the web, mining text files, and generating dynamic graphics
* A new chapter on special topics that describes key features, such as processing by group (see the example following this description), and explores important areas of statistics, including Bayesian methods, propensity scores, and bootstrapping
* A new chapter on simulation that includes examples of data generated from complex models and distributions
* A detailed discussion of the philosophy and use of the knitr and markdown packages for R
* New packages that extend the functionality of R and facilitate sophisticated analyses
* Reorganized and enhanced chapters on data input and output, data management, statistical and mathematical functions, programming, high-level graphics plots, and the customization of plots
Easily Find Your Desired Task
Conveniently organized by short, clear descriptive entries, this edition continues to show users how to easily perform an analytical task in R. Users can quickly find and implement the material they need through the extensive indexing, cross-referencing, and worked examples in the text. Datasets and code are available for download on a supplementary website.
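As one small illustration of the task-oriented style, a grouped summary (the "processing by group" feature mentioned above) takes a single line in base R; this sketch uses the built-in mtcars data and is not taken from the book:

# Mean mpg for each cylinder count, two equivalent base-R idioms (illustrative)
aggregate(mpg ~ cyl, data = mtcars, FUN = mean)
tapply(mtcars$mpg, mtcars$cyl, mean)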
* Provides a comprehensive review of methods and applications of Bayesian variable selection
* Divided into four parts: spike-and-slab priors; continuous shrinkage priors; extensions to various modeling frameworks; and other approaches to Bayesian variable selection
* Covers theoretical and methodological aspects, as well as worked-out examples with R code provided in the online supplement
* Includes contributions by experts in the field
R Visualizations: Derive Meaning from Data focuses on one of the two major topics of data analytics: data visualization, also known as computer graphics. The book discusses the major R systems for visualization, organized by topic rather than by system. Anyone doing data analysis is shown how to use R to generate any of the basic visualizations with the R visualization systems. Further, the book introduces the author's lessR system, which can always accomplish a visualization with less coding than other systems require, sometimes dramatically so, and also provides accompanying statistical analyses.
Key Features
* Presents thorough coverage of the leading R visualization system, ggplot2
* Gives specific guidance on using base R graphics to attain visualizations of the same quality as those provided by ggplot2
* Shows how to create a wide range of data visualizations: distributions of categorical and continuous variables, many types of scatterplots including those with a third variable (see the example below), time series, and maps
* Organizes the various approaches to R graphics by topic instead of by system
* Presents recent work on interactive visualization in R
David W. Gerbing received his PhD from Michigan State University in 1979 in quantitative analysis and is currently a professor of quantitative analysis in the School of Business at Portland State University. He has published extensively in the social and behavioral sciences with a focus on quantitative methods. His lessR package has been in development since 2009.
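For instance, the feature list's scatterplot with a third variable takes one expression in ggplot2; this is a generic illustration with the built-in mtcars data, not an example from the book:

# Scatterplot with a third variable encoded as point color (illustrative)
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point() +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", color = "Cylinders")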
* An introduction to the Central Dogma of molecular biology and information flow in biological systems
* A systematic overview of the methods for generating gene expression data
* Background knowledge on statistical modeling and machine learning techniques
* Detailed methodology for analyzing gene expression data, with an example case study
* Clustering methods for finding co-expression patterns in microarray, bulk RNA, and scRNA data (see the sketch below)
* A large number of practical tools, systems, and repositories that are useful for computational biologists to create, analyze, and validate biologically relevant gene expression patterns
* Suitable for multidisciplinary researchers and practitioners in computer science and the biological sciences
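A hedged sketch of the simplest co-expression workflow alluded to above: hierarchical clustering of genes by the similarity of their expression profiles. The data here are simulated and the gene/sample names invented:

# Hierarchical clustering of simulated gene expression profiles (illustrative)
set.seed(1)
expr <- matrix(rnorm(50 * 10), nrow = 50,
               dimnames = list(paste0("gene", 1:50), paste0("sample", 1:10)))

hc <- hclust(dist(expr), method = "average")  # cluster genes by profile distance
clusters <- cutree(hc, k = 5)                 # cut the tree into 5 co-expression groups
table(clusters)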
Behavior Analysis with Machine Learning Using R introduces machine learning and deep learning concepts and algorithms applied to a diverse set of behavior analysis problems. It focuses on the practical aspects of solving such problems based on data collected from sensors or stored in electronic records. The included examples demonstrate how to perform common data analysis tasks such as data exploration, visualization, preprocessing, data representation, model training, and evaluation, all using the R programming language and real-life behavioral data. Even though the examples focus on behavior analysis tasks, the underlying concepts and methods can be applied in any other domain. No prior knowledge of machine learning is assumed; basic experience with R and basic knowledge of statistics and high-school-level mathematics are beneficial. Features:
* Build supervised machine learning models to predict indoor locations based on WiFi signals, recognize physical activities from smartphone sensors and 3D skeleton data, detect hand gestures from accelerometer signals, and so on (see the sketch following this description)
* Program your own ensemble learning methods and use Multi-View Stacking to fuse signals from heterogeneous data sources
* Use unsupervised learning algorithms to discover criminal behavioral patterns
* Build deep learning neural networks with TensorFlow and Keras to classify muscle activity from electromyography signals, and Convolutional Neural Networks to detect smiles in images
* Evaluate the performance of your models in traditional and multi-user settings
* Build anomaly detection models such as Isolation Forests and autoencoders to detect abnormal fish behaviors
This book is intended for undergraduate/graduate students and researchers from ubiquitous computing, behavioral ecology, psychology, e-health, and other disciplines who want to learn the basics of machine learning and deep learning, and for more experienced individuals who want to apply machine learning to analyze behavioral data.
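As a taste of the supervised workflow described in the feature list, here is a minimal R sketch using the randomForest package on the built-in iris data; the book's own examples use behavioral sensor data instead:

# Train and evaluate a simple supervised classifier (illustrative, not the book's data)
library(randomForest)

set.seed(42)
train_idx <- sample(nrow(iris), 0.7 * nrow(iris))  # 70/30 train-test split
model <- randomForest(Species ~ ., data = iris[train_idx, ])

preds <- predict(model, iris[-train_idx, ])
mean(preds == iris$Species[-train_idx])            # test-set accuracy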
* Coherent treatment of a variety of approaches to multiple comparisons
* Broad coverage of topics, with contributions by internationally leading experts
* Detailed treatment of applications in medicine and life sciences
* Suitable for researchers, lecturers/students, and practitioners
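The most basic version of the problem this volume treats can be shown in one base-R call: adjusting a set of p-values for multiple testing. The values below are invented, and the book's coverage extends well beyond these built-in corrections:

# Adjusting p-values for multiple comparisons (illustrative)
p <- c(0.001, 0.012, 0.035, 0.040, 0.200)
p.adjust(p, method = "bonferroni")  # conservative single-step correction
p.adjust(p, method = "holm")        # uniformly more powerful step-down correction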
Nearly every large corporation and governmental agency is taking a fresh look at its current enterprise-scale business intelligence (BI) and data warehousing implementations at the dawn of the "Big Data Era", and most see a critical need to revitalize their current capabilities. Whether they find the frustrating and business-impeding continuation of a long-standing "silos of data" problem, an over-reliance on static production reports at the expense of predictive analytics and other true business intelligence capabilities, a lack of progress in achieving the long-sought-after enterprise-wide "single version of the truth", or all of the above, IT directors, strategists, and architects find that they need to go back to the drawing board and produce a brand-new BI/data warehousing roadmap. The goal is to move their enterprises from their current state to one where the promises of emerging technologies and a generation's worth of best practices can finally deliver high-impact, architecturally evolvable enterprise-scale business intelligence and data warehousing. Author Alan Simon, whose BI and data warehousing experience dates back to the late 1970s and who has personally delivered or led more than thirty enterprise-wide BI/data warehousing roadmap engagements since the mid-1990s, details a comprehensive step-by-step approach to building a best-practices-driven, multi-year roadmap in the quest for architecturally evolvable BI and data warehousing at the enterprise scale. Simon addresses the triad of technology, work processes, and organizational/human-factors considerations in a manner that blends the visionary and the pragmatic.
Data Analytics for the Social Sciences is an introductory, graduate-level treatment of data analytics for social science. It features applications in the R language, arguably the fastest-growing and leading statistical tool for researchers. The book starts with an ethics chapter on the uses and potential abuses of data analytics. Chapters 2 and 3 show how to implement a broad range of statistical procedures in R. Chapters 4 and 5 deal with regression and classification trees and with random forests. Chapter 6 deals with machine learning models and the "caret" package, which makes hundreds of models available to the researcher (a brief caret sketch follows this description). Chapter 7 deals with neural network analysis, and Chapter 8 deals with network analysis and visualization of network data. A final chapter treats text analysis, including web scraping, comparative word frequency tables, word clouds, word maps, sentiment analysis, topic analysis, and more. All empirical chapters have two "Quick Start" exercises designed to allow quick immersion in chapter topics, followed by "In Depth" coverage. Data are available for all examples, and runnable R code is provided in a "Command Summary". An appendix provides an extended tutorial on R and RStudio. Almost 30 online supplements extend the book, providing "books within the book" on a variety of topics, such as agent-based modeling. Rather than focusing on equations, derivations, and proofs, this book emphasizes hands-on production of output for various social science models and how to interpret that output. It is suitable for advanced undergraduate and graduate students learning statistical data analysis.
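A hedged sketch of the caret workflow that Chapter 6 centers on, shown here with the built-in iris data rather than the book's social science examples:

# Fitting a model through caret's unified train() interface (illustrative)
library(caret)

set.seed(7)
ctrl <- trainControl(method = "cv", number = 5)  # 5-fold cross-validation
fit <- train(Species ~ ., data = iris,
             method = "rpart",                   # a classification tree
             trControl = ctrl)
fit  # prints cross-validated accuracy across tuning values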
Mathematica by Example, Sixth Edition is an essential resource for the Mathematica user, providing step-by-step instructions on achieving results from this powerful software tool. The book fully accounts for changes to functionality and visualization capabilities and accommodates the full array of new extensions in the types of data and problems that Mathematica can immediately handle, including cloud services and systems, geographic and geometric computation, dynamic visualization, interactive applications, and other improvements. It is an ideal text for science students, researchers, and aspiring programmers seeking further understanding of Mathematica. Written by seasoned practitioners with a view to practical implementation and problem-solving, the book's pedagogy is delivered clearly and without jargon using representative biological, physical, and engineering problems. Code is provided on an ancillary website to support the use of Mathematica across diverse applications and subject areas.
The idea of the Gröbner basis first appeared in a 1927 paper by F. S. Macaulay, who succeeded in creating a combinatorial characterization of the Hilbert functions of homogeneous ideals of the polynomial ring. Later, the modern definition of the Gröbner basis was introduced independently by Heisuke Hironaka in 1964 and Bruno Buchberger in 1965. For roughly 20 years after its discovery by Hironaka and Buchberger, however, the notion was not actively pursued. A breakthrough came in the mid-1980s, when David Bayer and Michael Stillman created the Macaulay computer algebra system with the help of the Gröbner basis. Since then, rapid development of the Gröbner basis has been achieved by many researchers, including Bernd Sturmfels. This book serves as a standard bible of the Gröbner basis, for which the harmony of theory, application, and computation is indispensable. It provides all the fundamentals for graduate students to learn the ABCs of the Gröbner basis, requiring no special knowledge to understand those basic points. Starting from an introductory presentation of the Gröbner basis (Chapter 1), a trip around mathematical software follows (Chapter 2). Then comes a deep discussion of how to compute the Gröbner basis (Chapter 3). These three chapters may be regarded as the first act of a mathematical play. The second act opens with topics on algebraic statistics (Chapter 4), a fascinating research area where the Gröbner basis of a toric ideal is a fundamental tool of the Markov chain Monte Carlo method. Moreover, the Gröbner basis of a toric ideal has had a great influence on the study of convex polytopes (Chapter 5). In addition, the Gröbner basis of the ring of differential operators gives effective algorithms on holonomic functions (Chapter 6). The third act (Chapter 7) is a collection of concrete examples and problems for Chapters 4, 5, and 6, emphasizing computation by using various software systems.
A Strong Practical Focus on Applications and Algorithms
Computational Statistics Handbook with MATLAB (R), Third Edition covers today's most commonly used techniques in computational statistics while maintaining the same philosophy and writing style of the bestselling previous editions. The text keeps theoretical concepts to a minimum, emphasizing the implementation of the methods.
New to the Third Edition
This third edition is updated with the latest version of MATLAB and the corresponding version of the Statistics and Machine Learning Toolbox. It also incorporates new sections on the nearest neighbor classifier, support vector machines, model checking and regularization, partial least squares regression, and multivariate adaptive regression splines.
Web Resource
The authors include algorithmic descriptions of the procedures as well as examples that illustrate the use of algorithms in data analysis. The MATLAB code, examples, and data sets are available online.
You may like...
Hoe Voel dit om aan `n Walvis te Wikkel?
Malgorzata Detner
Board book