Gain the R programming language fundamentals for doing the applied statistics useful for data exploration and analysis in data science and data mining. This book covers topics ranging from R syntax basics, descriptive statistics, and data visualizations to inferential statistics and regression. After learning R's syntax, you will work through data visualizations such as histograms and boxplots, descriptive statistics, and inferential statistics such as t-tests, chi-square tests, ANOVA, non-parametric tests, and linear regression. Learn R for Applied Statistics is a timely skills-migration book that equips you with the R programming fundamentals and introduces you to applied statistics for data exploration. What You Will Learn Discover R, statistics, data science, data mining, and big data Master the fundamentals of R programming, including variables and arithmetic, vectors, lists, data frames, conditional statements, loops, and functions Work with descriptive statistics Create data visualizations, including bar charts, line charts, scatter plots, boxplots, and histograms Use inferential statistics, including t-tests, chi-square tests, ANOVA, non-parametric tests, linear regression, and multiple linear regression Who This Book Is For Those who are interested in data science, in particular data exploration using applied statistics, and the use of R programming for data visualizations.
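The inferential tests this description lists (t-test, chi-square test, ANOVA) each have one-line counterparts in common statistics libraries. A minimal sketch, in Python with scipy.stats rather than the book's R, using made-up sample data:

```python
from scipy import stats

# Hypothetical measurements from two small groups
group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.4]

# Independent two-sample t-test: do the group means differ?
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# One-way ANOVA generalizes the t-test to three or more groups
group_c = [5.0, 5.3, 5.1, 4.9, 5.2, 5.0]
f_stat, anova_p = stats.f_oneway(group_a, group_b, group_c)

# Chi-square test of independence on a 2x2 contingency table
table = [[30, 10], [20, 40]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
```

The same analyses in R would use t.test(), aov(), and chisq.test() respectively; the workflow (data in, test statistic and p-value out) is identical.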
The updated guide to the newest graphing calculator from Texas Instruments. The TI-Nspire graphing calculator is popular among high school and college students as a valuable tool for calculus, AP calculus, and college-level algebra courses. Its use is allowed on the major college entrance exams. This book is a nuts-and-bolts guide to working with the TI-Nspire, providing everything you need to get up and running and helping you get the most out of this high-powered math tool. Texas Instruments' TI-Nspire graphing calculator is perfect for high school and college students in advanced algebra and calculus classes as well as students taking the SAT, PSAT, and ACT exams. This fully updated guide covers all enhancements to the TI-Nspire, including the touchpad and the updated software that can be purchased along with the device, and shows how to get maximum value from this versatile math tool. With updated screenshots and examples, "TI-Nspire For Dummies" provides practical, hands-on instruction to help students make the most of this revolutionary graphing calculator.
This book traces the theory and methodology of multivariate statistical analysis and shows how it can be conducted in practice using the LISREL computer program. It presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis. It provides numerous examples from several disciplines and discusses and interprets the results, illustrated with sections of output from the LISREL program, in the context of the example. The book is intended for masters and PhD students and researchers in the social, behavioral, economic and many other sciences who require a basic understanding of multivariate statistical theory and methods for their analysis of multivariate data. It can also be used as a textbook on various topics of multivariate statistical analysis.
This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.
This book offers a collection of recent contributions and emerging ideas in the areas of robust statistics presented at the International Conference on Robust Statistics 2015 (ICORS 2015), held in Kolkata during 12-16 January 2015. The book explores the applicability of robust methods in non-traditional areas, including the use of new techniques such as skew and mixture-of-skew distributions, scaled Bregman divergences, and multilevel functional data methods; application areas include circular data models and the prediction of mortality and life expectancy. The contributions are both theoretical and applied in nature. Robust statistics is a relatively young branch of statistical sciences that is rapidly emerging as the bedrock of statistical analysis in the 21st century due to its flexible nature and wide scope. Robust statistics supports the application of parametric and other inference techniques over a broader domain than the strictly interpreted model scenarios employed in classical statistical methods. The aim of the ICORS conference, which has been organized annually since 2001, is to bring together researchers interested in robust statistics, data analysis and related areas. The conference is meant for theoretical and applied statisticians, data analysts from other fields, leading experts, junior researchers and graduate students. The ICORS meetings offer a forum for discussing recent advances and emerging ideas in statistics with a focus on robustness, and encourage informal contacts and discussions among all the participants. They also play an important role in maintaining a cohesive group of international researchers interested in robust statistics and related topics, whose interactions transcend the meetings and endure year round.
The subject of this book stands at the crossroads of ergodic theory and measurable dynamics. With an emphasis on irreversible systems, the text presents a framework of multi-resolutions tailored for the study of endomorphisms, beginning with a systematic look at the latter. This entails a whole new set of tools, often quite different from those used for the "easier" and well-documented case of automorphisms. Among them is the construction of a family of positive operators (transfer operators), arising naturally as a dual picture to that of endomorphisms. The setting (close to one initiated by S. Karlin in the context of stochastic processes) is motivated by a number of recent applications, including wavelets, multi-resolution analyses, dissipative dynamical systems, and quantum theory. The automorphism-endomorphism relationship has parallels in operator theory, where the distinction is between unitary operators in Hilbert space and more general classes of operators such as contractions. There is also a non-commutative version: While the study of automorphisms of von Neumann algebras dates back to von Neumann, the systematic study of their endomorphisms is more recent; together with the results in the main text, the book includes a review of recent related research papers, some by the co-authors and their collaborators.
This book is about the role and potential of using digital technology in designing teaching and learning tasks in the mathematics classroom. Digital technology has opened up different new educational spaces for the mathematics classroom in the past few decades and, as technology is constantly evolving, novel ideas and approaches are brewing to enrich these spaces with diverse didactical flavors. A key issue is always how technology can, or cannot, play epistemic and pedagogic roles in the mathematics classroom. The main purpose of this book is to explore mathematics task design when digital technology is part of the teaching and learning environment. What features of the technology used can be capitalized upon to design tasks that transform learners' experiential knowledge, gained from using the technology, into conceptual mathematical knowledge? When do digital environments actually bring an essential (educationally speaking) new dimension to classroom activities? What are some pragmatic and semiotic values of the technology used? These are some of the concerns addressed in the book by expert scholars in this area of research in mathematics education. This volume is the first devoted entirely to issues on designing mathematical tasks in digital teaching and learning environments, outlining different current research scenarios.
This book presents the latest findings and ongoing research in the field of green information systems as well as green information and communication technology (ICT). It provides insights into a whole range of cross-cutting concerns in ICT and environmental sciences and showcases how information and communication technologies allow environmental and energy efficiency issues to be handled effectively. Offering a selection of extended and reworked contributions to the 30th International Conference EnviroInfo 2016, it is essential reading for anyone wanting to extend their expertise in the area.
This book contains a rich set of tools for nonparametric analyses, and its purpose is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: when nonparametric approaches to data analysis are appropriate; the leading nonparametric tests commonly used in biostatistics, and how R is used to generate appropriate statistics for each test; and the common figures typically associated with nonparametric data analysis, and how R is used to generate appropriate figures in support of each data set. The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approaches.
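The nonparametric tests such a book covers have widely available library implementations. A minimal sketch (in Python with scipy.stats rather than the book's R, using hypothetical skewed measurements where parametric assumptions would be doubtful):

```python
from scipy import stats

# Hypothetical biological measurements from two treatments; the
# outliers make normality-based tests hard to justify
control = [2.1, 2.4, 2.2, 8.9, 2.3, 2.5, 2.2]
treated = [3.8, 4.1, 3.9, 4.4, 9.7, 4.0, 4.2]

# Mann-Whitney U: nonparametric alternative to the two-sample t-test,
# comparing ranks rather than means
u_stat, u_p = stats.mannwhitneyu(control, treated, alternative="two-sided")

# Kruskal-Wallis: nonparametric alternative to one-way ANOVA
third = [5.9, 6.2, 6.0, 6.4, 6.1, 5.8, 6.3]
h_stat, h_p = stats.kruskal(control, treated, third)
```

In R the equivalents are wilcox.test() and kruskal.test(); either way, the tests operate on ranks, so a single extreme value carries far less weight than in their parametric counterparts.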
Marking the 30th anniversary of the European Conference on Modelling and Simulation (ECMS), this inspirational text/reference reviews significant advances in the field of modelling and simulation, as well as key applications of simulation in other disciplines. The broad-ranging volume presents contributions from a varied selection of distinguished experts chosen from high-impact keynote speakers and best paper winners from the conference, including a Nobel Prize recipient, and the first president of the European Council for Modelling and Simulation (also abbreviated to ECMS). This authoritative book will be of great value to all researchers working in the field of modelling and simulation, in addition to scientists from other disciplines who make use of modelling and simulation approaches in their work.
This book provides a practical approach to designing and implementing a Knowledge Management (KM) Strategy. The book explains how to design a KM strategy so as to align business goals with KM objectives. The book also presents an approach for implementing a KM strategy so as to make it sustainable. It covers all basic KM concepts, components of KM and the steps that are required for designing a KM strategy. As a result, the book can be used by beginners as well as practitioners. Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers. Knowledge is considered to be the learning that results from experience and is embedded within individuals. Sometimes the knowledge is gained through critical thinking, watching others, and observing results of others. These observations then form a pattern which is converted into a 'generic form' as knowledge. This implies that knowledge can be formed only after data (which is generated through experience or observation) is grouped into information, and this information pattern is then generalized into knowledge. However, dissemination and acceptance of this knowledge becomes a key factor in knowledge management. The knowledge pyramid represents the usual concept of knowledge transformations, where data is transformed into information, and information is transformed into knowledge. Many organizations have struggled to manage knowledge and translate it into business benefits. This book is an attempt to show them how it can be done.
This book presents a variant of UML that is especially suitable for agile development of high-quality software. It tailors UML, via a language profile called UML/P, to best support design, implementation, and agile evolution, facilitating its use in agile, yet model-based, development methods for data-intensive or control-driven systems. After a general introduction to UML and the choices made in the development of UML/P in Chapter 1, Chapter 2 includes a definition of the language elements of class diagrams and their forms of use as views and representations. Next, Chapter 3 introduces the design and semantic facets of the Object Constraint Language (OCL), which is conceptually improved and syntactically adjusted to Java for better comfort. Subsequently, Chapter 4 introduces object diagrams as an independent, exemplary notation in UML/P, and Chapter 5 offers a detailed introduction to UML/P Statecharts. Lastly, Chapter 6 presents a simplified form of sequence diagrams for exemplary descriptions of object interactions. For completeness, appendixes A-C describe the full syntax of UML/P, and appendix D explains a sample application from the E-commerce domain, which is used in all chapters. This book is ideal for introductory courses for students and practitioners alike.
The book presents a conceptually novel oscillation-based paradigm, the Oscillation-Based Multi-Agent System (OSIMAS), aimed at the modelling of agents and their systems as coherent, stylized, neurodynamic processes. This paradigm links emerging research domains via coherent neurodynamic oscillation-based representations of the states of the individual human mind and of society (as a coherent collective mind). Thus, this multidisciplinary paradigm delivers an empirical and simulation research framework that provides a new way of modelling the complex dynamics of individual and collective mind states. This book addresses a conceptual problem - the lack of a multidisciplinary, connecting paradigm, which could link fragmented research in the fields of neuroscience, artificial intelligence (AI), multi-agent systems (MAS) and the social network domains. The need for a common multidisciplinary research framework essentially arises because these fields share a common object of investigation and simulation, i.e., individual and collective human behavior. Although the fields of research mentioned above all approach this from different perspectives, their common object of investigation unites them. By putting into perspective the various interrelated pathways of research, this book provides a philosophical underpinning, experimental background and modelling tools that the author anticipates will reveal new frontiers in multidisciplinary research. Fundamental investigation of the implicit oscillatory nature of agents' mind states and social mediums in general can reveal some new ways of understanding the periodic and nonperiodic fluctuations taking place in real life. For example, via agent states-related diffusion properties, we could investigate complex economic phenomena like the spread of stock market crashes, currency crises, speculative oscillations (bubbles and crashes), social unrest, recessionary effects, sovereign defaults, etc.
All these effects are closely associated with social fragility, which follows and is affected by cycles such as production, political, business and financial. Thus, the multidisciplinary OSIMAS paradigm can yield new knowledge and research perspectives, allowing for a better understanding of social agents and their social organization principles.
This book reports on the results of an interdisciplinary and multidisciplinary workshop on provenance that brought together researchers and practitioners from different areas such as archival science, law, information science, computing, forensics and visual analytics that work at the frontiers of new knowledge on provenance. Each of these fields understands the meaning and purpose of representing provenance in subtly different ways. The aim of this book is to create cross-disciplinary bridges of understanding with a view to arriving at a deeper and clearer perspective on the different facets of provenance and how traditional definitions and applications may be enriched and expanded via an interdisciplinary and multidisciplinary synthesis. This volume brings together all of these developments, setting out an encompassing vision of provenance to establish a robust framework for expanded provenance theory, standards and technologies that can be used to build trust in financial and other types of information.
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimators are discussed. Chapter 6 discusses Bayes estimation, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory course in probability, will greatly benefit from this book. Students are expected to know matrix algebra, calculus, probability and distribution theory before beginning this course. Presenting a wealth of relevant solved and unsolved problems, the book offers an excellent tool for teachers and instructors who can assign homework problems from the exercises, and students will find the solved examples hugely beneficial in solving the exercise problems.
This textbook provides an introduction to the free software Python and its use for statistical data analysis. It covers common statistical tests for continuous, discrete and categorical data, as well as linear regression analysis and topics from survival analysis and Bayesian statistics. Working code and data for Python solutions for each test, together with easy-to-follow Python examples, can be reproduced by the reader to reinforce their immediate understanding of the topic. With recent advances in the Python ecosystem, Python has become a popular language for scientific computing, offering a powerful environment for statistical data analysis and an interesting alternative to R. The book is intended for masters and PhD students, mainly from the life and medical sciences, with a basic knowledge of statistics. As it also provides some statistics background, the book can be used by anyone who wants to perform a statistical data analysis.
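The linear regression analysis mentioned above is a one-call affair in Python's scientific stack. A minimal sketch with hypothetical dose-response data (not from the book):

```python
from scipy import stats

# Hypothetical data: dose vs. response with a clear linear trend
dose = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
response = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]

# Ordinary least-squares fit: slope, intercept, correlation, p-value
result = stats.linregress(dose, response)

fitted_line = [result.intercept + result.slope * x for x in dose]
```

result.rvalue squared gives R^2, the fraction of variance explained; result.pvalue tests the null hypothesis of zero slope. The same fit in R would be lm(response ~ dose).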
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This book presents and discusses the state of the art and future trends in software engineering education, with a focus on agile methods and their budgetary implications. It introduces new and innovative methods, models and frameworks to focus the training towards the industry's requirements. The range of topics covered includes education models for software engineering, development of the software engineering discipline, innovation and evaluation of software engineering education, curricula for software engineering education, requirements and cultivation of outstanding software engineers for the future and cooperation models for industry and software engineering education.
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
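The emphasis on generic algorithms, functions, and automatic tests for verification that the description names can be illustrated with a classic numerical method. A short sketch (in Python; the function name and test are illustrative, not taken from the book):

```python
import math

def bisection(f, a, b, tol=1e-10):
    """Find a root of f in [a, b] by repeated interval halving.

    Assumes f(a) and f(b) have opposite signs, so a root lies between."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        fmid = f(mid)
        if fa * fmid <= 0:   # root is in the left half
            b = mid
        else:                # root is in the right half
            a, fa = mid, fmid
    return (a + b) / 2

def test_bisection():
    # Automatic verification against a known equation: cos(x) = x
    root = bisection(lambda x: math.cos(x) - x, 0.0, 1.0)
    assert abs(math.cos(root) - root) < 1e-9

test_bisection()
```

The pattern mirrors the book's pedagogy: a generic function taking another function as argument, plus a small test that verifies the result automatically rather than by eye.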
There are two different, interdependent components of IT that are important to a CIO: strategy, which is long-term; and tactical and operational concerns, which are short-term. Based on this distinction and its repercussions, this book clearly separates strategy from day-to-day operations and projects from operations - the two most important functions of a CIO. It starts by discussing the ideal organization of an IT department and the rationale behind it, and then goes on to debate the most pressing need - managing operations. It also explains some best industry standards and their practical implementation, and discusses project management, again highlighting the differences between the methodologies used in projects and those used in operations. A special chapter is devoted to the cutover of projects into operations, a critical aspect seldom discussed in detail. Other chapters touch on the management of IT portfolios, project governance, as well as agile project methodology, how it differs from the waterfall methodology, and when it is convenient to apply each. Taking the fundamental principles of IT service management and best practices in project management, the book offers a single, seamless reference for IT managers and professionals. It is highly practical, explaining how to apply these principles based on the author's extensive experience in industry.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequencing data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
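The corpus processing and frequency work described above starts, in any language, with tokenizing text and counting. A minimal sketch (in Python rather than the book's R, with a toy corpus standing in for the large data sets the book targets):

```python
from collections import Counter
import re

# A toy corpus standing in for the large data sets used in corpus linguistics
corpus = """The cat sat on the mat. The dog sat on the log.
The cat and the dog sat together."""

# Tokenize: lowercase the text and keep alphabetic word tokens only
tokens = re.findall(r"[a-z]+", corpus.lower())

# Frequency list: the starting point of most corpus analyses
freq = Counter(tokens)
top_three = freq.most_common(3)
```

In the book's R-based workflow the analogue would be table() over a tokenized character vector; either way, the frequency table is the raw material for the clustering and other quantitative methods the chapters build up to.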
This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is put on both the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering both with an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students working with linear state space models, and who are familiar with linear algebra and possess some knowledge of statistics.
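The Kalman filtering recursions central to such state space treatments can be illustrated for the simplest case, a scalar local-level model. A sketch in Python rather than the book's MATLAB/SSMMATLAB; the function and parameter values are illustrative only:

```python
def kalman_filter_1d(ys, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Kalman filter for the scalar local-level model:
        state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns the filtered state estimates, one per observation."""
    x, p = x0, p0
    filtered = []
    for y in ys:
        # Predict: state stays put, uncertainty grows by q
        p = p + q
        # Update: blend prediction and observation by the Kalman gain
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1 - k) * p
        filtered.append(x)
    return filtered
```

Fed a noisy constant signal, the estimate converges toward the true level while the gain settles to its steady-state value, the scalar analogue of the Wiener-Kolmogorov / Kalman correspondence the book develops in full generality.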
This edited three-volume edition brings together significant papers previously published in the Journal of Information Technology (JIT) over its 30-year publication history. The three volumes of Enacting Research Methods in Information Systems celebrate the methodological pluralism used to advance our understanding of information technology's role in the world today. In addition to quantitative methods from the positivist tradition, JIT also values methodological articles from critical research perspectives, interpretive traditions, historical perspectives, grounded theory, and action research and design science approaches. Volume 1 covers Critical Research, Grounded Theory, and Historical Approaches. Volume 2 deals with Interpretive Approaches and also explores Action Research. Volume 3 focuses on Design Science Approaches and discusses Alternative Approaches including Semiotics Research, Complexity Theory and Gender in IS Research. The Journal of Information Technology (JIT) was started in 1986 by Professors Frank Land and Igor Aleksander with the aim of bringing technology and management together and bridging the 'great divide' between the two disciplines. The Journal was created with the vision of making the impact of complex interactions and developments in technology more accessible to a wider audience. Retaining this initial focus, the JIT has gone on to extend into new and innovative areas of research such as the launch of JITTC in 2010. A high-impact journal, JIT will continue to publish leading trends based on significant research in the field.