This new edition includes the latest advances and developments in computational probability involving A Probability Programming Language (APPL). The book examines and presents, in a systematic manner, computational probability methods that encompass data structures and algorithms. The developed techniques address problems that require exact probability calculations, many of which have been considered intractable in the past. The book addresses the plight of the probabilist by providing algorithms to perform calculations associated with random variables. Computational Probability: Algorithms and Applications in the Mathematical Sciences, 2nd Edition begins with an introductory chapter that contains short examples involving the elementary use of APPL. Chapter 2 reviews the Maple data structures and functions necessary to implement APPL. This is followed by a discussion of the development of the data structures and algorithms (Chapters 3-6 for continuous random variables and Chapters 7-9 for discrete random variables) used in APPL. The book concludes with Chapters 10-15 introducing a sampling of various applications in the mathematical sciences. This book should appeal to researchers in the mathematical sciences with an interest in applied probability and instructors using the book for a special topics course in computational probability taught in a mathematics, statistics, operations research, management science, or industrial engineering department.
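APPL itself is a Maple-based language, and the book's examples are written in it. Purely as a rough analogue of the kind of exact, symbolic probability calculation APPL automates, here is a hypothetical sketch in Python using sympy.stats (not APPL's actual syntax); the random variables chosen are illustrative only, and every printed result is an exact symbolic quantity rather than a floating-point approximation.

```python
from sympy import Eq, symbols, simplify
from sympy.stats import Die, Uniform, E, P, variance, density

# Exact calculations with discrete random variables:
# the sum of two independent fair dice.
X, Y = Die('X', 6), Die('Y', 6)
print(P(Eq(X + Y, 7)))         # 1/6, an exact rational number
print(E(X + Y), variance(X))   # 7 and 35/12

# Exact calculation with a transformed continuous random variable.
z = symbols('z', positive=True)
U = Uniform('U', 0, 1)
print(simplify(density(U**2)(z)))  # 1/(2*sqrt(z)) on (0, 1)
```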
This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory in several application areas. This area has developed rapidly over recent years, propelled by major theoretical developments in information geometry, efficient data and image acquisition, and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and its efficient computational implementation.
This book is about the role and potential of using digital technology in designing teaching and learning tasks in the mathematics classroom. Digital technology has opened up new educational spaces for the mathematics classroom in the past few decades and, as technology is constantly evolving, novel ideas and approaches are brewing to enrich these spaces with diverse didactical flavors. A key issue is always how technology can, or cannot, play epistemic and pedagogic roles in the mathematics classroom. The main purpose of this book is to explore mathematics task design when digital technology is part of the teaching and learning environment. What features of the technology used can be capitalized upon to design tasks that transform learners' experiential knowledge, gained from using the technology, into conceptual mathematical knowledge? When do digital environments actually bring an essential (educationally speaking) new dimension to classroom activities? What are some pragmatic and semiotic values of the technology used? These are some of the concerns addressed in the book by expert scholars in this area of research in mathematics education. This volume is the first devoted entirely to issues of designing mathematical tasks in digital teaching and learning environments, outlining different current research scenarios.
This comprehensive and richly illustrated volume provides up-to-date material on Singular Spectrum Analysis (SSA). SSA is a well-known methodology for the analysis and forecasting of time series. More recently, SSA has also been used to analyze digital images and other objects that are not necessarily of planar or rectangular form and may contain gaps. SSA is multi-purpose and naturally combines both model-free and parametric techniques, which makes it a very special and attractive methodology for solving a wide range of problems arising in diverse areas, most notably those associated with time series and digital images. An effective, convenient and accessible implementation of SSA is provided by the R package Rssa, which is available from CRAN and reviewed in this book. Written by prominent statisticians who have extensive experience with SSA, the book (a) presents the up-to-date SSA methodology, including multidimensional extensions, in language accessible to a large circle of users, (b) combines different versions of SSA into a single tool, (c) shows the diverse tasks that SSA can be used for, (d) formally describes the main SSA methods and algorithms, and (e) provides tutorials on the Rssa package and the use of SSA. The book offers a valuable resource for a very wide readership, including professional statisticians, specialists in signal and image processing, as well as specialists in numerous applied disciplines interested in using statistical methods for time series analysis, forecasting, signal and image processing. The book is written at a level accessible to a broad audience and includes a wealth of examples; hence it can also be used as a textbook for undergraduate and postgraduate courses on time series analysis and signal processing.
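Rssa implements SSA in R; purely to illustrate the core algorithm the book describes (embedding, decomposition, grouping, diagonal averaging), here is a minimal basic-SSA sketch in Python/NumPy. It is a toy version, not the Rssa implementation, and the function name and parameters are mine.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Basic single-channel SSA: embed, decompose, group, reconstruct."""
    N, L = len(series), window
    K = N - L + 1
    # 1) Embedding: build the L x K Hankel trajectory matrix.
    X = np.column_stack([series[i:i + L] for i in range(K)])
    # 2) Decomposition: singular value decomposition.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3) Grouping: keep only the leading singular triples.
    X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # 4) Diagonal averaging (Hankelization) back to a series.
    rec = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += X_hat[:, j]
        counts[j:j + L] += 1
    return rec / counts

rng = np.random.default_rng(0)
t = np.arange(200)
noisy = np.sin(2 * np.pi * t / 20) + 0.3 * rng.normal(size=200)
# A pure sinusoid has a rank-2 trajectory matrix, so two components
# suffice to recover the oscillation from the noise.
smooth = ssa_reconstruct(noisy, window=50, n_components=2)
```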
This textbook addresses postgraduate students in applied mathematics, probability, and statistics, as well as computer scientists, biologists, physicists and economists, who are seeking a rigorous introduction to applied stochastic processes. Pursuing a pedagogic approach, the content follows a path of increasing complexity, from the simplest random sequences to more advanced stochastic processes. Illustrations are provided from many applied fields, together with connections to ergodic theory, information theory, reliability and insurance. The main content is complemented by a wealth of examples and exercises with solutions.
This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is intended for practitioners at statistical agencies and other national and international organizations that deal with confidential data. It will also be of interest to researchers working in statistical disclosure control and the health sciences.
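sdcMicro and simPop are R packages, so the snippet below does not use their APIs. It is only a toy Python sketch of one classical perturbation idea from statistical disclosure control, univariate microaggregation; the function, parameters and data are all hypothetical.

```python
import numpy as np

def microaggregate(values, k=3):
    """Toy univariate microaggregation: sort the records, partition them
    into groups of at least k, and release each group's mean instead of
    the individual values, so no single record can be singled out."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    out = np.empty_like(values)
    n = len(values)
    starts = list(range(0, n, k))
    # If the last group would have fewer than k members, merge it
    # into the previous group.
    if len(starts) > 1 and n - starts[-1] < k:
        starts.pop()
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else n
        idx = order[start:end]
        out[idx] = values[idx].mean()
    return out

incomes = np.array([21000, 22000, 25000, 90000, 95000, 99000, 104000])
print(microaggregate(incomes, k=3))  # group means replace raw incomes
```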
This book reports on the results of an interdisciplinary and multidisciplinary workshop on provenance that brought together researchers and practitioners from different areas, such as archival science, law, information science, computing, forensics and visual analytics, who work at the frontiers of new knowledge on provenance. Each of these fields understands the meaning and purpose of representing provenance in subtly different ways. The aim of this book is to create cross-disciplinary bridges of understanding with a view to arriving at a deeper and clearer perspective on the different facets of provenance and how traditional definitions and applications may be enriched and expanded via an interdisciplinary and multidisciplinary synthesis. This volume brings together all of these developments, setting out an encompassing vision of provenance to establish a robust framework for expanded provenance theory, standards and technologies that can be used to build trust in financial and other types of information.
This book focuses on the methodological treatment of UML/P and addresses three core topics of model-based software development: code generation, the systematic testing of programs using a model-based definition of test cases, and the evolutionary refactoring and transformation of models. For each of these topics, it first details the foundational concepts and techniques, and then presents their application with UML/P. This separation between basic principles and applications makes the content more accessible and allows the reader to transfer this knowledge directly to other model-based approaches and languages. After an introduction to the book and its primary goals in Chapter 1, Chapter 2 outlines an agile UML-based approach using UML/P as the primary development language for creating executable models, generating code from the models, designing test cases, and planning iterative evolution through refactoring. In the interest of completeness, Chapter 3 provides a brief summary of UML/P, which is used throughout the book. Next, Chapters 4 and 5 discuss core techniques for code generation, addressing the architecture of a code generator and methods for controlling it, as well as the suitability of UML/P notations for test or product code. Chapters 6 and 7 then discuss general concepts for testing software as well as the special features which arise due to the use of UML/P. Chapter 8 details test patterns to show how to use UML/P diagrams to define test cases and emphasizes in particular the use of functional tests for distributed and concurrent software systems. In closing, Chapters 9 and 10 examine techniques for transforming models and code and thus provide a solid foundation for refactoring as a type of transformation that preserves semantics. Overall, this book will be of great benefit for practical software development, for academic training in the field of software engineering, and for research in the area of model-based software development. Practitioners will learn how to use modern model-based techniques to improve the production of code and thus significantly increase quality. Students will find both important scientific foundations and direct applications of the techniques presented. And last but not least, the book will offer scientists a comprehensive overview of the current state of development in the three core topics it covers.
This book discusses a variety of methods for outlier ensembles and organizes them by the specific principles with which accuracy improvements are achieved. In addition, it covers the techniques with which such methods can be made more effective. A formal classification of these methods is provided, and the circumstances in which they work well are examined. The authors cover how outlier ensembles relate (both theoretically and practically) to the ensemble techniques used commonly for other data mining problems like classification. The similarities and (subtle) differences in the ensemble techniques for the classification and outlier detection problems are explored, and these subtle differences do impact the design of ensemble algorithms for the latter problem. This book can be used for courses in data mining and related curricula, and many illustrative examples and exercises are provided to facilitate classroom teaching. Familiarity with the outlier detection problem, and with the generic problem of ensemble analysis in classification, is assumed, because many of the ensemble methods discussed in this book are adaptations of their counterparts in the classification domain. Some techniques explained in this book, such as wagging, randomized feature weighting, and geometric subsampling, provide new insights that are not available elsewhere. Also included is an analysis of the performance of various types of base detectors and their relative effectiveness. The book is valuable for researchers and practitioners seeking to leverage ensemble methods for optimal algorithmic design.
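As a small illustration of the general score-combination idea behind outlier ensembles (not of the book's specific wagging or geometric-subsampling algorithms), here is a hypothetical feature-bagged kNN outlier detector in Python; all names, parameters and data are mine.

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Score each point by its distance to its k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the zero distance to the point itself

def feature_bagged_scores(X, n_rounds=20, k=5, seed=0):
    """Ensemble by feature bagging: average z-normalized kNN outlier
    scores computed on random feature subsets (a base-detector ensemble)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    total = np.zeros(n)
    for _ in range(n_rounds):
        m = rng.integers(max(1, p // 2), p + 1)       # subset size
        cols = rng.choice(p, size=m, replace=False)   # random features
        s = knn_outlier_scores(X[:, cols], k=k)
        total += (s - s.mean()) / (s.std() + 1e-12)   # normalize, combine
    return total / n_rounds

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(100, 6)), [[6.0] * 6]])  # one planted outlier
print(np.argmax(feature_bagged_scores(X)))               # expect index 100
```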
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, held in Florence, Italy, June 19-21, 2016. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
This book presents a variant of UML that is especially suitable for agile development of high-quality software. It adjusts UML into a language profile called UML/P, optimized to support design, implementation, and agile evolution, and thus to facilitate its use in agile yet model-based development methods for data-intensive or control-driven systems. After a general introduction to UML and the choices made in the development of UML/P in Chapter 1, Chapter 2 includes a definition of the language elements of class diagrams and their forms of use as views and representations. Next, Chapter 3 introduces the design and semantic facets of the Object Constraint Language (OCL), which is conceptually improved and syntactically adjusted to Java for better usability. Subsequently, Chapter 4 introduces object diagrams as an independent, exemplary notation in UML/P, and Chapter 5 offers a detailed introduction to UML/P Statecharts. Lastly, Chapter 6 presents a simplified form of sequence diagrams for exemplary descriptions of object interactions. For completeness, Appendixes A-C describe the full syntax of UML/P, and Appendix D explains a sample application from the e-commerce domain, which is used in all chapters. This book is ideal for introductory courses for students and practitioners alike.
This book contains a rich set of tools for nonparametric analyses, and its purpose is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences:
- to introduce when nonparametric approaches to data analysis are appropriate;
- to introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test;
- to introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set.
The book focuses on how R is used to distinguish between data that could be classified as nonparametric and data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests, using R to broadly compare differences between data sets across statistical approaches.
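The book's examples are written in R; as a rough analogue in Python, scipy.stats offers the same classical nonparametric tests. A minimal sketch with made-up data, using the Mann-Whitney U test as one leading example:

```python
from scipy import stats

# Two small independent samples, e.g. a measured response under
# control and treatment conditions (illustrative data only).
control = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]
treatment = [6.3, 5.9, 7.1, 6.8, 5.7, 6.5]

# Mann-Whitney U: a rank-based alternative to the two-sample t-test
# that does not assume normally distributed data.
u_stat, p_value = stats.mannwhitneyu(control, treatment,
                                     alternative='two-sided')
print(u_stat, p_value)
```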
This book presents a proposal for designing business process management (BPM) systems that comprise much more than just process modelling. Based on a purified Business Process Model and Notation (BPMN) variant, the authors present proposals for several important issues in BPM that have not been adequately considered in the BPMN 2.0 standard. The book focuses on modality as well as actor and user interaction modelling, and offers an enhanced communication concept. In order to render models executable, the semantics of the modelling language needs to be described rigorously enough to prevent deviating interpretations by different tools. For this reason, the semantics of the necessary concepts introduced in this book are defined using the Abstract State Machine (ASM) method. Finally, the authors show how the different parts of the model fit together using a simple example process, and introduce the enhanced Process Platform (eP2) architecture, which binds all the different components together. The resulting method is named Hagenberg Business Process Modelling (H-BPM) after the Austrian village where it was designed. The motivation for the development of the H-BPM method stems from several industrial projects in which business analysts and software developers struggled with redundancies and inconsistencies in system documentation due to missing integration. The book is aimed at researchers in business process management and Industry 4.0, as well as advanced professionals in these areas.
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
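In the spirit the blurb describes (generic algorithms, clean functions, and automatic tests for verification), here is a minimal Python sketch of my own, not an excerpt from the book:

```python
import numpy as np

def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with the trapezoidal rule."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def test_trapezoid():
    # The trapezoidal rule is exact for linear functions, so the result
    # must match the analytic integral of 2x + 1 over [0, 2], which is 6.
    assert abs(trapezoid(lambda x: 2 * x + 1, 0, 2) - 6.0) < 1e-12

test_trapezoid()
print(trapezoid(np.sin, 0, np.pi))  # close to the exact value 2
```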
This book identifies, analyzes and discusses the current trends of digitalized, decentralized, and networked physical value creation by focusing on the particular example of 3D printing. In addition to evaluating 3D printing's disruptive potentials against a broader economic background, it also addresses the technology's potential impacts on sustainability and emerging modes of bottom-up and community-based innovation. Emphasizing these topics from economic, technical, social and environmental perspectives, the book offers a multifaceted overview that scrutinizes the scenario of a fundamental transition: from a centralized to a far more decentralized system of value creation.
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This book analyses critical managerial aspects of doing business with information. Information-based business is spreading through the service market, and efficient management of informational processes is now crucial to running such a business successfully. Moreover, economic/business, technological and other kinds of information, organized in a variety of forms, can be considered 'informational products'. Creating business value out of information is thus challenging but vital, especially in the modern digital age. Accordingly, the book covers the methods and technologies to capture, integrate, analyze, mine, interpret and visualize information from distributed data, which in turn can help to manage information competently. This volume explores the challenges being faced and the opportunities to look out for in this research area, while discussing different aspects of the subject. The book will be of interest to those working in, or interested in joining, interdisciplinary and transdisciplinary work in the areas of information management, service management, and service business. It will also be of use to early-career researchers by giving them an overview of different aspects of doing business with information, and, by introducing both technical and non-technical details as well as economic aspects, it will be informative for professionals who want to understand and realize the potential of using cutting-edge managerial technologies for doing successful business with information and services.
This book offers a collection of recent contributions and emerging ideas in robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata, 12-16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including new techniques such as skew and mixtures of skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistical methods. The aim of the ICORS conference, which has been organized annually since 2001, is to bring together researchers interested in robust statistics, data analysis and related areas. The conference is meant for theoretical and applied statisticians, data analysts from other fields, leading experts, junior researchers and graduate students. The ICORS meetings offer a forum for discussing recent advances and emerging ideas in statistics with a focus on robustness, and encourage informal contacts and discussions among all the participants. They also play an important role in maintaining a cohesive group of international researchers interested in robust statistics and related topics, whose interactions transcend the meetings and endure year round.
This book examines trends and challenges in research on IT governance in public organizations, reporting innovative research and new insights into the theories, models and practices in the area. IT governance plays an important role in generating value from an organization's IT investments, but researchers studying it in public organizations face particular challenges owing to the distinct political and administrative structures and practices of these organizations. The first section of the book looks at Management issues, including an introduction to IT governance in public organizations; a systematic review of IT alignment research in public organizations; the role of middle managers in aligning strategy and IT in public service organizations; and an analysis of alignment and governance with regard to IT-related policy decisions. The second section examines Modelling, including a consideration of the challenges faced by public administration; a discussion of a framework for IT governance implementation suitable to improve alignment and communication between stakeholders of IT services; the design and implementation of IT architecture; and the adoption of enterprise architecture in public organizations. Finally, section three presents Case Studies, including IT governance in the context of e-government strategy implementation in the Caribbean; the relationship of IT organizational structure and IT governance performance in the IT department of a public research and education organization in a developing country; the relationship between organizational ambidexterity and IT governance, through a study of the Swedish Tax Authorities; and the role of institutional logics in IT project activities and interactions in a large Swedish hospital.
Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM), Business Intelligence (BI) and Big Data Analytics (BDA) are business-related tasks and processes supported by standardized software solutions. The book explains that this requires business-oriented thinking and acting from IT specialists and data scientists, and that it is a good idea to let students experience this directly from the business perspective, for example as executives of a virtual company. The accompanying course simulates the stepwise integration of the linked business process chain ERP-SCM-CRM-BI-Big Data across four competing groups of companies. The course participants become board members with full P&L responsibility for business units of one of four beer brewery groups, managing supply chains from production to retailer.
This book presents the latest findings and ongoing research in the field of green information systems as well as green information and communication technology (ICT). It provides insights into a whole range of cross-cutting concerns in ICT and environmental sciences and showcases how information and communication technologies allow environmental and energy efficiency issues to be handled effectively. Offering a selection of extended and reworked contributions to the 30th International Conference EnviroInfo 2016, it is essential reading for anyone wanting to extend their expertise in the area.
This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cadiz, Spain, June 11-16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers from around the globe, and contribute to the further development of the field.
This second edition is an intensively revised and updated version of the book MATLAB(R) and Design Recipes for Earth Sciences. It aims to introduce students to the typical course followed by a data analysis project in earth sciences. A project usually involves searching relevant literature, reviewing and ranking published books and journal articles, extracting relevant information from the literature in the form of text, data, or graphs, searching and processing the relevant original data using MATLAB, and compiling and presenting the results as posters, abstracts, and oral presentations using graphics design software. The text includes numerous examples on the use of internet resources, on the visualization of data with MATLAB, and on preparing scientific presentations. As with the book MATLAB Recipes for Earth Sciences, 4th Edition (2015), which demonstrates the use of statistical and numerical methods on earth science data, this book uses state-of-the-art software packages, including MATLAB and the Adobe Creative Suite, to process and present geoscientific information collected during the course of an earth science project. The book's supplementary electronic material (available online through the publisher's website) includes color versions of all figures, recipes with all the MATLAB commands featured in the book, the example data, exported MATLAB graphics, and screenshots of the most important steps involved in processing the graphics.
This book identifies and discusses the main challenges facing digital business innovation and the emerging trends and practices that will define its future. The book is divided into three sections covering trends in digital systems, digital management, and digital innovation. The opening chapters consider the issues associated with machine intelligence, wearable technology, digital currencies, and distributed ledgers as their relevance for business grows. Furthermore, the strategic role of data visualization and trends in digital security are extensively discussed. The subsequent section on digital management focuses on the impact of neuroscience on the management of information systems, the role of IT ambidexterity in managing digital transformation, and the way in which IT alignment is being reconfigured by digital business. Finally, examples of digital innovation in practice at the global level are presented and reviewed. The book will appeal to both practitioners and academics. The text is supported by informative illustrations and case studies, so that practitioners can use the book as a toolbox that enables easy understanding and assists in exploiting business opportunities involving digital business innovation.
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software and modelling, the book introduces several methods of reliability assessment for OSS, including component-oriented reliability analysis based on the analytic hierarchy process (AHP) and analytic network process (ANP), as well as non-homogeneous Poisson process (NHPP) models, stochastic differential equation models, and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
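As one concrete illustration of the NHPP approach mentioned above, here is a sketch of fitting the classical Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative fault counts in Python. The data are made up, and the use of least squares rather than maximum likelihood is a simplification of mine.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP model: the expected
    cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative cumulative fault counts, e.g. from OSS bug-tracking data
# (made-up numbers: weeks since release vs. faults found so far).
weeks = np.array([1, 2, 4, 6, 8, 12, 16, 20, 26], dtype=float)
faults = np.array([8, 15, 26, 34, 40, 48, 52, 55, 57], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, faults, p0=(60, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print("expected residual faults:", a_hat - goel_okumoto(weeks[-1], a_hat, b_hat))
```

Under the model, a_hat estimates the total fault content, so the gap between a_hat and the faults found so far gives a rough residual-fault estimate; the book's models refine exactly this kind of reasoning.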