This book reports on the results of an interdisciplinary and multidisciplinary workshop on provenance that brought together researchers and practitioners from different areas such as archival science, law, information science, computing, forensics and visual analytics who work at the frontiers of new knowledge on provenance. Each of these fields understands the meaning and purpose of representing provenance in subtly different ways. The aim of this book is to create cross-disciplinary bridges of understanding with a view to arriving at a deeper and clearer perspective on the different facets of provenance, and on how traditional definitions and applications may be enriched and expanded via an interdisciplinary and multidisciplinary synthesis. This volume brings together all of these developments, setting out an encompassing vision of provenance to establish a robust framework for expanded provenance theory, standards and technologies that can be used to build trust in financial and other types of information.
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
A complete guide to Pentaho Kettle, the Pentaho Data Integration toolset for ETL. This practical book is a complete guide to installing, configuring, and managing Pentaho Kettle. If you're a database administrator or developer, you'll first get up to speed on Kettle basics and how to apply Kettle to create ETL solutions--before progressing to specialized concepts such as clustering, extensibility, and data vault models. Learn how to design and build every phase of an ETL solution.
* Shows developers and database administrators how to use the open-source Pentaho Kettle for enterprise-level ETL processes (Extracting, Transforming, and Loading data)
* Assumes no prior knowledge of Kettle or ETL, and brings beginners thoroughly up to speed at their own pace
* Explains how to get Kettle solutions up and running, then follows the 34 ETL subsystems model, as created by the Kimball Group, to explore the entire ETL lifecycle, including all aspects of data warehousing with Kettle
* Goes beyond routine tasks to explore how to extend Kettle and scale Kettle solutions using a distributed "cloud"
Get the most out of Pentaho Kettle and your data warehousing with this detailed guide--from simple single-table data migration to complex multisystem clustered data integration tasks.
Marking the 30th anniversary of the European Conference on Modelling and Simulation (ECMS), this inspirational text/reference reviews significant advances in the field of modelling and simulation, as well as key applications of simulation in other disciplines. The broad-ranging volume presents contributions from a varied selection of distinguished experts chosen from high-impact keynote speakers and best paper winners from the conference, including a Nobel Prize recipient and the first president of the European Council for Modelling and Simulation. This authoritative book will be of great value to all researchers working in the field of modelling and simulation, in addition to scientists from other disciplines who make use of modelling and simulation approaches in their work.
This book presents a variant of UML that is especially suitable for agile development of high-quality software. It defines a UML profile, called UML/P, tailored to support design, implementation, and agile evolution, facilitating its use in agile yet model-based development methods for data-intensive or control-driven systems. After a general introduction to UML and the choices made in the development of UML/P in Chapter 1, Chapter 2 includes a definition of the language elements of class diagrams and their forms of use as views and representations. Next, Chapter 3 introduces the design and semantic facets of the Object Constraint Language (OCL), which is conceptually improved and syntactically adjusted to Java for better comfort. Subsequently, Chapter 4 introduces object diagrams as an independent, exemplary notation in UML/P, and Chapter 5 offers a detailed introduction to UML/P Statecharts. Lastly, Chapter 6 presents a simplified form of sequence diagrams for exemplary descriptions of object interactions. For completeness, appendixes A-C describe the full syntax of UML/P, and appendix D explains a sample application from the e-commerce domain, which is used in all chapters. This book is ideal for introductory courses for students and practitioners alike.
This book contains a rich set of tools for nonparametric analyses. Its purpose is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences:
* To introduce when nonparametric approaches to data analysis are appropriate
* To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test
* To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set
The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approaches.
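As a hedged illustration of the kind of workflow this book describes (a minimal base-R sketch with hypothetical data, not code from the book itself), one might check normality and then fall back on a standard nonparametric test:

```r
# Minimal sketch: two hypothetical samples of biological measurements.
set.seed(42)
control   <- rlnorm(20, meanlog = 1.0)   # skewed data, unlikely to be normal
treatment <- rlnorm(20, meanlog = 1.4)

# Shapiro-Wilk test as a quick normality check for each group
shapiro.test(control)
shapiro.test(treatment)

# If normality is doubtful, use the Wilcoxon rank-sum (Mann-Whitney) test
wilcox.test(control, treatment)

# A boxplot is a typical figure accompanying a nonparametric comparison
boxplot(list(control = control, treatment = treatment),
        ylab = "Measured response")
```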
This textbook on computational statistics presents tools and concepts of univariate and multivariate statistical data analysis with a strong focus on applications and implementations in the statistical software R. It covers mathematical, statistical as well as programming problems in computational statistics and contains a wide variety of practical examples. In addition to the numerous R snippets presented in the text, all computer programs (quantlets) and data sets for the book are available on GitHub and referred to in the book. This enables the reader to fully reproduce, as well as modify and adjust, all examples to their needs. The book is intended for advanced undergraduate and first-year graduate students, as well as for data analysts new to the job who would like a tour of the various statistical tools in a data analysis workshop. The experienced reader with a good knowledge of statistics and programming might skip some sections on univariate models and enjoy the various mathematical roots of multivariate techniques. The Quantlet platform quantlet.de, quantlet.com, quantlet.org is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book.
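As a hedged sketch of the univariate-to-multivariate progression such a text covers (a base-R toy example, not a quantlet from the book), consider the built-in iris data:

```r
# Univariate: summary statistics and a histogram for one variable
summary(iris$Sepal.Length)
hist(iris$Sepal.Length, main = "Sepal length", xlab = "cm")

# Multivariate: principal component analysis on the four numeric columns
pca <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pca)   # proportion of variance explained per component
biplot(pca)    # joint display of observations and variables
```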
This book presents a proposal for designing business process management (BPM) systems that comprise much more than just process modelling. Based on a purified Business Process Model and Notation (BPMN) variant, the authors present proposals for several important issues in BPM that have not been adequately considered in the BPMN 2.0 standard. The book focuses on modality as well as actor and user interaction modelling, and offers an enhanced communication concept. In order to render models executable, the semantics of the modelling language needs to be described rigorously enough to prevent deviating interpretations by different tools. For this reason, the semantics of the necessary concepts introduced in this book are defined using the Abstract State Machine (ASM) method. Finally, the authors show how the different parts of the model fit together using a simple example process, and introduce the enhanced Process Platform (eP2) architecture, which binds all the different components together. The resulting method is named Hagenberg Business Process Modelling (H-BPM) after the Austrian village where it was designed. The motivation for the development of the H-BPM method stems from several industrial projects in which business analysts and software developers struggled with redundancies and inconsistencies in system documentation due to missing integration. The book is aimed at researchers in business process management and Industry 4.0, as well as advanced professionals in these areas.
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular demography, econometrics, epidemiology and clinical research. Although there is a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the-art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real-life applications, and relationships to survival analysis in continuous time are explained. Each section includes a set of exercises on the respective topics. Various functions and tools for the analysis of discrete survival data are collected in the R package discSurv that accompanies the book.
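To illustrate the basic life-table quantity such methods build on (a hand-rolled base-R sketch with hypothetical counts, not code from the discSurv package), the discrete hazard in interval t is the number of failures in t divided by the number still at risk at the start of t:

```r
# Hypothetical grouped failure-time data:
# failures[t] = events in interval t; censored[t] = withdrawals in t.
failures <- c(5, 8, 6, 4, 2)
censored <- c(1, 2, 1, 3, 0)

n       <- sum(failures) + sum(censored)   # subjects at risk initially
at_risk <- n - cumsum(c(0, head(failures + censored, -1)))

hazard   <- failures / at_risk             # discrete hazard lambda(t)
survival <- cumprod(1 - hazard)            # life-table survival estimate

data.frame(interval = seq_along(failures), at_risk, hazard, survival)
```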
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
This book identifies, analyzes and discusses the current trends of digitalized, decentralized, and networked physical value creation by focusing on the particular example of 3D printing. In addition to evaluating 3D printing's disruptive potentials against a broader economic background, it also addresses the technology's potential impacts on sustainability and emerging modes of bottom-up and community-based innovation. Emphasizing these topics from economic, technical, social and environmental perspectives, the book offers a multifaceted overview that scrutinizes the scenario of a fundamental transition: from a centralized to a far more decentralized system of value creation.
This three-volume edited collection brings together significant papers previously published in the Journal of Information Technology (JIT) over its 30-year publication history. The three volumes of Enacting Research Methods in Information Systems celebrate the methodological pluralism used to advance our understanding of information technology's role in the world today. In addition to quantitative methods from the positivist tradition, JIT also values methodological articles from critical research perspectives, interpretive traditions, historical perspectives, grounded theory, and action research and design science approaches. Volume 1 covers Critical Research, Grounded Theory, and Historical Approaches. Volume 2 deals with Interpretive Approaches and also explores Action Research. Volume 3 focuses on Design Science Approaches and discusses Alternative Approaches including Semiotics Research, Complexity Theory and Gender in IS Research. The Journal of Information Technology was started in 1986 by Professors Frank Land and Igor Aleksander with the aim of bringing technology and management together and bridging the 'great divide' between the two disciplines. The Journal was created with the vision of making the impact of complex interactions and developments in technology more accessible to a wider audience. Retaining this initial focus, JIT has gone on to extend into new and innovative areas of research, such as the launch of JITTC in 2010. A high-impact journal, JIT will continue to publish significant research on leading trends in the field.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequencing data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
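As a toy, hedged illustration of elementary corpus processing of the kind such a textbook introduces (not code from the book), one can tokenize a small text and tabulate word frequencies in base R:

```r
# Tiny hypothetical corpus
text <- "the cat sat on the mat and the dog sat on the rug"

# Tokenize: lower-case, then split on whitespace
tokens <- unlist(strsplit(tolower(text), "\\s+"))

# Frequency table, sorted from most to least frequent
freqs <- sort(table(tokens), decreasing = TRUE)
print(freqs)

# Relative frequencies, a common first statistic in corpus work
round(freqs / sum(freqs), 3)
```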
This book is a valuable read for a diverse group of researchers and practitioners who analyze assessment data and construct test instruments. It focuses on the use of classical test theory (CTT) and item response theory (IRT), which are often required in the fields of psychology (e.g. for measuring psychological traits), health (e.g. for measuring the severity of disorders), and education (e.g. for measuring student performance), and makes these analytical tools accessible to a broader audience. Having taught assessment subjects to students from diverse backgrounds for a number of years, the three authors have a wealth of experience in presenting educational measurement topics, in-depth concepts and applications in an accessible format. As such, the book addresses the needs of readers who use CTT and IRT in their work but do not necessarily have an extensive mathematical background. The book also sheds light on common misconceptions in applying measurement models, and presents an integrated approach to different measurement methods, such as contrasting CTT with IRT and multidimensional IRT models with unidimensional IRT models. Wherever possible, comparisons between models are explicitly made. In addition, the book discusses concepts for test equating and differential item functioning, as well as Bayesian IRT models and plausible values using simple examples. This book can serve as a textbook for introductory courses on educational measurement, as supplementary reading for advanced courses, or as a valuable reference guide for researchers interested in analyzing student assessment data.
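As a small, hedged illustration of a classical test theory statistic treated in texts like this (a hand-rolled base-R sketch on simulated scores, not the authors' code), Cronbach's alpha can be computed directly from its definition:

```r
# Hypothetical item scores: rows are examinees, columns are items (0-5).
set.seed(7)
items <- matrix(sample(0:5, 100 * 4, replace = TRUE), nrow = 100)

k         <- ncol(items)
item_vars <- apply(items, 2, var)   # variance of each item
total_var <- var(rowSums(items))    # variance of the total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
# Independent random items give alpha near zero; real scales correlate.
alpha <- (k / (k - 1)) * (1 - sum(item_vars) / total_var)
alpha
```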
This book focuses on the developing field of building probability models with the power of symbolic algebra systems. The book combines the use of symbolic algebra with probabilistic/stochastic applications and highlights those applications in a variety of contexts. The research explored in each chapter is unified by the use of A Probability Programming Language (APPL) to achieve the modeling objectives. APPL, as a research tool, enables a probabilist or statistician to explore new ideas, methods, and models. Furthermore, as an open-source language, it sets the foundation for future algorithms to augment the original code. Computational Probability Applications comprises fifteen chapters, each presenting a specific application of computational probability using the APPL modeling and computer language. The chapter topics include using the inverse gamma as a survival distribution, linear approximations of probability density functions, and moment-ratio diagrams for univariate distributions. These works highlight interesting examples, often done by undergraduate and graduate students, that can serve as templates for future work. In addition, this book should appeal to researchers and practitioners in a range of fields including probability, statistics, engineering, finance, neuroscience, and economics.
This book is a collection of articles written by Big Data experts describing some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data, such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.
This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata during 12-16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixture-of-skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of the statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistical methods. The aim of the ICORS conference, which has been organized annually since 2001, is to bring together researchers interested in robust statistics, data analysis and related areas. The conference is meant for theoretical and applied statisticians, data analysts from other fields, leading experts, junior researchers and graduate students. The ICORS meetings offer a forum for discussing recent advances and emerging ideas in statistics with a focus on robustness, and encourage informal contacts and discussions among all the participants. They also play an important role in maintaining a cohesive group of international researchers interested in robust statistics and related topics, whose interactions transcend the meetings and endure year round.
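As a hedged base-R illustration of the core idea behind robustness (a toy sketch, not drawn from the conference contributions), compare classical and robust estimates of location and scale on contaminated data:

```r
# 95 well-behaved observations plus 5% gross outliers
set.seed(3)
x <- c(rnorm(95), rnorm(5, mean = 20))

mean(x); median(x)   # the mean is dragged toward the outliers; the median is not
sd(x);   mad(x)      # the sd inflates badly; the MAD stays near 1
```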
This book analyzes critical aspects of the managerial implications of doing business with information. Business dealing with information is spreading in the service market, and efficient management of informational processes is crucial to performing successful business with them. Moreover, economic/business, technological or any other kind of information, organized in a variety of forms, can be considered an 'informational product'. Thus, creating business value out of information is challenging but vital, especially in the modern digital age. Accordingly, the book covers the methods and technologies to capture, integrate, analyze, mine, interpret and visualize information out of distributed data, which in turn can help to manage information competently. This volume explores the challenges being faced and the opportunities to look out for in this research area, while discussing different aspects of the subject. The book will be of interest to those working in, or interested in joining, interdisciplinary and transdisciplinary work in the areas of information management, service management, and service business. It will also be of use to younger researchers by giving them an overview of different aspects of doing business with information. While introducing both technical and non-technical details, as well as economic aspects, the book will also be extremely informative for professionals who want to understand and realize the potential of using cutting-edge managerial technologies for doing successful business with information and services.
Modeling spatial and spatio-temporal continuous processes is an important and challenging problem in spatial statistics. Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA describes in detail the stochastic partial differential equations (SPDE) approach for modeling continuous spatial processes with a Matern covariance, which has been implemented using the integrated nested Laplace approximation (INLA) in the R-INLA package. Key concepts about modeling spatial processes and the SPDE approach are explained with examples using simulated data and real applications. This book has been authored by leading experts in spatial statistics, including the main developers of the INLA and SPDE methodologies and the R-INLA package. It also includes a wide range of applications:
* Spatial and spatio-temporal models for continuous outcomes
* Analysis of spatial and spatio-temporal point patterns
* Coregionalization spatial and spatio-temporal models
* Measurement error spatial models
* Modeling preferential sampling
* Spatial and spatio-temporal models with physical barriers
* Survival analysis with spatial effects
* Dynamic space-time regression
* Spatial and spatio-temporal models for extremes
* Hurdle models with spatial effects
* Penalized Complexity priors for spatial models
All the examples in the book are fully reproducible. Further information about this book, as well as the R code and datasets used, is available from the book website at http://www.r-inla.org/spde-book. The tools described in this book will be useful to researchers in many fields such as biostatistics, spatial statistics, environmental sciences, epidemiology, ecology and others. Graduate and Ph.D. students will also find this book and associated files a valuable resource to learn INLA and the SPDE approach for spatial modeling.
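As a hedged sketch of the first steps of the SPDE workflow the book describes (the functions below are from the R-INLA package, but the locations and prior values are hypothetical, and the full model fit is omitted):

```r
# Minimal sketch, assuming R-INLA is installed (see http://www.r-inla.org).
library(INLA)

set.seed(1)
coords <- matrix(runif(200), ncol = 2)   # hypothetical observation locations

# Triangulated mesh over the study region; max.edge controls resolution
mesh <- inla.mesh.2d(loc = coords, max.edge = c(0.1, 0.3))

# Matern SPDE model with Penalized Complexity priors on range and sd
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(0.3, 0.5),  # P(range < 0.3) = 0.5
                            prior.sigma = c(1.0, 0.01)) # P(sigma > 1.0) = 0.01

# Projector matrix linking observation locations to mesh nodes
A <- inla.spde.make.A(mesh, loc = coords)
```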
This book examines trends and challenges in research on IT governance in public organizations, reporting innovative research and new insights into the theories, models and practices in the area. IT governance plays an important role in generating value from an organization's IT investments. However, researchers face particular challenges in studying IT governance in public organizations, owing to differences in the political and administrative contexts and practices of these organizations. The first section of the book looks at Management issues, including an introduction to IT governance in public organizations; a systematic review of IT alignment research in public organizations; the role of middle managers in aligning strategy and IT in public service organizations; and an analysis of alignment and governance with regard to IT-related policy decisions. The second section examines Modelling, including a consideration of the challenges faced by public administration; a discussion of a framework for IT governance implementation suitable to improve alignment and communication between stakeholders of IT services; the design and implementation of IT architecture; and the adoption of enterprise architecture in public organizations. Finally, section three presents Case Studies, including IT governance in the context of e-government strategy implementation in the Caribbean; the relationship of IT organizational structure and IT governance performance in the IT department of a public research and education organization in a developing country; the relationship between organizational ambidexterity and IT governance through a study of the Swedish Tax Authorities; and the role of institutional logics in IT project activities and interactions in a large Swedish hospital.
Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM), Business Intelligence (BI) and Big Data Analytics (BDA) are business-related tasks and processes which are supported by standardized software solutions. The book explains that this requires business-oriented thinking and acting from IT specialists and data scientists. It is a good idea to let students experience this directly from the business perspective, for example as executives of a virtual company. The course simulates the stepwise integration of the linked business process chain ERP-SCM-CRM-BI-Big Data across four competing groups of companies. The course participants become board members with full P&L responsibility for business units of one of four beer brewery groups, managing supply chains from production to retailer.
This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cadiz, Spain, June 11-16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers from around the globe, and contribute to the further development of the field.
This book presents the latest findings and ongoing research in the field of green information systems as well as green information and communication technology (ICT). It provides insights into a whole range of cross-cutting concerns in ICT and environmental sciences and showcases how information and communication technologies allow environmental and energy efficiency issues to be handled effectively. Offering a selection of extended and reworked contributions to the 30th International Conference EnviroInfo 2016, it is essential reading for anyone wanting to extend their expertise in the area.
This book provides a practical approach to designing and implementing a Knowledge Management (KM) strategy. It explains how to design a KM strategy so as to align business goals with KM objectives, and presents an approach for implementing the strategy so as to make it sustainable. It covers all basic KM concepts, the components of KM and the steps required for designing a KM strategy. As a result, the book can be used by beginners as well as practitioners. Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously uncaptured expertise and experience in individual workers. Knowledge is considered to be the learning that results from experience and is embedded within individuals. Sometimes knowledge is gained through critical thinking, watching others, and observing the results of others. These observations then form a pattern which is converted into a 'generic form' to become knowledge. This implies that knowledge can be formed only after data (which is generated through experience or observation) is grouped into information, and this information pattern is then made generic. However, dissemination and acceptance of this knowledge becomes a key factor in knowledge management. The knowledge pyramid represents the usual concept of knowledge transformations, where data is transformed into information, and information is transformed into knowledge. Many organizations have struggled to manage knowledge and translate it into business benefits. This book is an attempt to show them how it can be done.