This book analyzes critical managerial aspects of doing business with information. Information-based businesses are spreading in the service market, and efficient management of informational processes is now crucial to their success. Moreover, economic, business, technological, and other kinds of information, organized in a variety of forms, can be regarded as 'informational products'. Creating business value out of information is thus challenging but vital, especially in the modern digital age. Accordingly, the book covers methods and technologies to capture, integrate, analyze, mine, interpret, and visualize information from distributed data, which in turn can help to manage information competently. This volume explores the challenges faced and the opportunities ahead in this research area, while discussing different aspects of the subject. The book will be of interest to those working in, or interested in joining, interdisciplinary and transdisciplinary work in the areas of information management, service management, and service business. It will also be useful to early-career researchers by giving them an overview of different aspects of doing business with information. Introducing both technical and non-technical details as well as economic aspects, the book will also be highly informative for professionals who want to understand and realize the potential of cutting-edge managerial technologies for doing successful business with information and services.
This book identifies, analyzes and discusses the current trends of digitalized, decentralized, and networked physical value creation by focusing on the particular example of 3D printing. In addition to evaluating 3D printing's disruptive potentials against a broader economic background, it also addresses the technology's potential impacts on sustainability and emerging modes of bottom-up and community-based innovation. Emphasizing these topics from economic, technical, social and environmental perspectives, the book offers a multifaceted overview that scrutinizes the scenario of a fundamental transition: from a centralized to a far more decentralized system of value creation.
This book presents a proposal for designing business process management (BPM) systems that comprise much more than just process modelling. Based on a purified Business Process Model and Notation (BPMN) variant, the authors present proposals for several important issues in BPM that have not been adequately considered in the BPMN 2.0 standard. It focuses on modality as well as actor and user interaction modelling, and offers an enhanced communication concept. In order to render models executable, the semantics of the modelling language needs to be described rigorously enough to prevent deviating interpretations by different tools. For this reason, the semantics of the necessary concepts introduced in this book are defined using the Abstract State Machine (ASM) method. Finally, the authors show how the different parts of the model fit together using a simple example process, and introduce the enhanced Process Platform (eP2) architecture, which binds all the different components together. The resulting method is named Hagenberg Business Process Modelling (H-BPM) after the Austrian village where it was designed. The motivation for the development of the H-BPM method stems from several industrial projects in which business analysts and software developers struggled with redundancies and inconsistencies in system documentation due to missing integration. The book is aimed at researchers in business process management and Industry 4.0 as well as advanced professionals in these areas.
This edited three-volume collection brings together significant papers previously published in the Journal of Information Technology (JIT) over its 30-year publication history. The three volumes of Enacting Research Methods in Information Systems celebrate the methodological pluralism used to advance our understanding of information technology's role in the world today. In addition to quantitative methods from the positivist tradition, JIT also values methodological articles from critical research perspectives, interpretive traditions, historical perspectives, grounded theory, and action research and design science approaches. Volume 1 covers Critical Research, Grounded Theory, and Historical Approaches. Volume 2 deals with Interpretive Approaches and also explores Action Research. Volume 3 focuses on Design Science Approaches and discusses Alternative Approaches including Semiotics Research, Complexity Theory and Gender in IS Research. The Journal of Information Technology (JIT) was started in 1986 by Professors Frank Land and Igor Aleksander with the aim of bringing technology and management together and bridging the 'great divide' between the two disciplines. The Journal was created with the vision of making the impact of complex interactions and developments in technology more accessible to a wider audience. Retaining this initial focus, JIT has gone on to extend into new and innovative areas of research, such as the launch of JITTC in 2010. A high-impact journal, JIT will continue to publish leading trends based on significant research in the field.
This book presents a comprehensive study of multivariate time series with linear state space structure. The emphasis is put on both the clarity of the theoretical concepts and on efficient algorithms for implementing the theory. In particular, it investigates the relationship between VARMA and state space models, including canonical forms. It also highlights the relationship between Wiener-Kolmogorov and Kalman filtering both with an infinite and a finite sample. The strength of the book also lies in the numerous algorithms included for state space models that take advantage of the recursive nature of the models. Many of these algorithms can be made robust, fast, reliable and efficient. The book is accompanied by a MATLAB package called SSMMATLAB and a webpage presenting implemented algorithms with many examples and case studies. Though it lays a solid theoretical foundation, the book also focuses on practical application, and includes exercises in each chapter. It is intended for researchers and students working with linear state space models, and who are familiar with linear algebra and possess some knowledge of statistics.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
This book is published under a CC BY-NC 4.0 license. The editors present essential methods and tools to support a holistic approach to the challenge of system upgrades and innovation in the context of high-value products and services. The approach presented here is based on three main pillars: an adaptation mechanism based on a broad understanding of system dependencies; efficient use of system knowledge through involvement of actors throughout the process; and technological solutions to enable efficient actor communication and information handling. The book provides readers with a better understanding of the factors that influence decisions, and puts forward solutions to facilitate rapid adaptation to changes in the business environment and customer needs through intelligent upgrade interventions. Further, it examines a number of sample cases from various contexts including car manufacturing, utilities, shipping and the furniture industry. The book offers a valuable resource for both academics and practitioners interested in the upgrading of capital-intensive products and services. "The work performed in the project "Use-It-Wisely (UiW)" significantly contributes towards a collaborative way of working. Moreover, it offers comprehensive system modelling to identify business opportunities and develop technical solutions within industrial value networks. The developed UiW-framework fills a void and offers a great opportunity. The naval construction sector of small passenger vessels, for instance, is one industry that can benefit." Nikitas Nikitakos, Professor at University of the Aegean, Department of Shipping, Trade, and Transport, Greece. "Long-life assets are crucial for both the future competitiveness and sustainability of society. Make wrong choices now and you are locked into a wrong system for a long time. Make the right choices now and society can prosper. This book gives important information about how manufacturers can make right choices."
Arnold Tukker, Scientific director, Institute of Environmental Sciences (CML), Leiden University, and senior scientist, TNO.
This book examines trends and challenges in research on IT governance in public organizations, reporting innovative research and new insights into the theories, models and practices within the area. IT governance plays an important role in generating value from an organization's IT investments. However, researchers studying IT governance in public organizations face particular challenges due to differences in the political, administrative, and practical contexts of these organizations. The first section of the book looks at Management issues, including an introduction to IT governance in public organizations; a systematic review of IT alignment research in public organizations; the role of middle managers in aligning strategy and IT in public service organizations; and an analysis of alignment and governance with regard to IT-related policy decisions. The second section examines Modelling, including a consideration of the challenges faced by public administration; a discussion of a framework for IT governance implementation suitable to improve alignment and communication between stakeholders of IT services; the design and implementation of IT architecture; and the adoption of enterprise architecture in public organizations. Finally, section three presents Case Studies, including IT governance in the context of e-government strategy implementation in the Caribbean; the relationship of IT organizational structure and IT governance performance in the IT department of a public research and education organization in a developing country; the relationship between organizational ambidexterity and IT governance through a study of the Swedish Tax Authorities; and the role of institutional logics in IT project activities and interactions in a large Swedish hospital.
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
This book reports on the latest advances and applications of chaotic systems. It consists of 25 contributed chapters by experts who are specialized in the various topics addressed in this book. The chapters cover a broad range of topics of chaotic systems such as chaos, hyperchaos, jerk systems, hyperjerk systems, conservative and dissipative systems, circulant chaotic systems, multi-scroll chaotic systems, finance chaotic system, highly chaotic systems, chaos control, chaos synchronization, circuit realization and applications of chaos theory in secure communications, mobile robot, memristors, cellular neural networks, etc. Special importance was given to chapters offering practical solutions, modeling and novel control methods for the recent research problems in chaos theory. This book will serve as a reference book for graduate students and researchers with a basic knowledge of chaos theory and control systems. The resulting design procedures on the chaotic systems are emphasized using MATLAB software.
This book identifies and discusses the main challenges facing digital business innovation and the emerging trends and practices that will define its future. The book is divided into three sections covering trends in digital systems, digital management, and digital innovation. The opening chapters consider the issues associated with machine intelligence, wearable technology, digital currencies, and distributed ledgers as their relevance for business grows. Furthermore, the strategic role of data visualization and trends in digital security are extensively discussed. The subsequent section on digital management focuses on the impact of neuroscience on the management of information systems, the role of IT ambidexterity in managing digital transformation, and the way in which IT alignment is being reconfigured by digital business. Finally, examples of digital innovation in practice at the global level are presented and reviewed. The book will appeal to both practitioners and academics. The text is supported by informative illustrations and case studies, so that practitioners can use the book as a toolbox that enables easy understanding and assists in exploiting business opportunities involving digital business innovation.
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from a basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis on the Java Platform is a great choice for those who want to learn how statistical data analysis can be done using popular programming languages, who want to integrate data analysis algorithms in full-scale applications, and who want to deploy such calculations on web pages or computational servers regardless of their operating system. It is an excellent reference for scientific computations to solve real-world problems using a comprehensive stack of open-source Java libraries included in the DataMelt (DMelt) project and will be appreciated by many data-analysis scientists, engineers and students.
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
The revised Fourth Edition of this popular textbook is redesigned with Excel 2016 to encourage business students to develop competitive advantages for use in their future careers as decision makers. Students learn to build models using logic and experience, produce statistics using Excel 2016 with shortcuts, and translate results into implications for decision makers. The textbook features new examples and assignments on global markets, including cases featuring Chipotle and Costco. A number of examples focus on business in emerging global markets with particular emphasis on emerging markets in Latin America, China, and India. Results are linked to implications for decision making with sensitivity analyses to illustrate how alternate scenarios can be compared. The author emphasises communicating results effectively in plain English and with screenshots and compelling graphics in the form of memos and PowerPoints. Chapters include screenshots to make it easy to conduct analyses in Excel 2016. PivotTables and PivotCharts, used frequently in business, are introduced from the start. The Fourth Edition features Monte Carlo simulation in four chapters, as a tool to illustrate the range of possible outcomes from decision makers' assumptions and underlying uncertainties. Model building with regression is presented as a process, adding levels of sophistication, with chapters on multicollinearity and remedies, forecasting and model validation, auto-correlation and remedies, indicator variables to represent segment differences, and seasonality, structural shifts or shocks in time series models. Special applications in market segmentation and portfolio analysis are offered, and an introduction to conjoint analysis is included. Nonlinear models are motivated with arguments of diminishing or increasing marginal response.
This book investigates organizational learning from a variety of information processing perspectives. Continuous change and complexity in regulatory, social and economic environments are increasingly forcing organizations and their employees to acquire the necessary job-specific knowledge at the right time and in the right format. Though many regulatory documents are now available in digital form, their complexity and diversity make identifying the relevant elements for a particular context a challenging task. In such scenarios, business processes tend to be important sources of knowledge, containing rich but in many cases embedded, hidden knowledge. This book discusses the possible connection between business process models and corporate knowledge assets; knowledge extraction approaches based on organizational processes; developing and maintaining corporate knowledge bases; and semantic business process management and its relation to organizational learning approaches. The individual chapters reveal the different elements of a knowledge management solution designed to extract, organize and preserve the knowledge embedded in business processes so as to: enrich organizational knowledge bases in a systematic and controlled way, support employees in acquiring job role-specific knowledge, promote organizational learning, and steer human capital investment. All of these topics are analyzed on the basis of real-world cases from the domains of insurance, food safety, innovation, and funding.
This book enables readers who may not be familiar with matrices to understand a variety of multivariate analysis procedures in matrix forms. Another feature of the book is that it emphasizes what model underlies a procedure and what objective function is optimized for fitting the model to data. The author believes that the matrix-based learning of such models and objective functions is the fastest way to comprehend multivariate data analysis. The text is arranged so that readers can intuitively capture the purposes for which multivariate analysis procedures are utilized: plain explanations of the purposes with numerical examples precede mathematical descriptions in almost every chapter. This volume is appropriate for undergraduate students who already have studied introductory statistics. Graduate students and researchers who are not familiar with matrix-intensive formulations of multivariate data analysis will also find the book useful, as it is based on modern matrix formulations with a special emphasis on singular value decomposition among theorems in matrix algebra. The book begins with an explanation of fundamental matrix operations and the matrix expressions of elementary statistics, followed by the introduction of popular multivariate procedures with advancing levels of matrix algebra chapter by chapter. This organization of the book allows readers without knowledge of matrices to deepen their understanding of multivariate data analysis.
This book offers an original and broad exploration of the fundamental methods in Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, this book provides readers with a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author highlights in three stages: clustering a set of descriptive attributes; clustering a set of objects or a set of object categories; and establishing correspondence between these two dual clusterings. Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers who are interested in the areas of Data Analysis, Clustering, Data Mining and Knowledge Discovery.
This book explores models and concepts of trust in a digitized world. Trust is a core concept that comes into play in multiple social and economic relations of our modern life. The book provides insights into the current state of research while presenting the viewpoints of a variety of disciplines such as communication studies, information systems, educational and organizational psychology, sports psychology and economics. Focusing on an investigation of how the Internet is changing the relationship between trust and communication, and the impact this change has on trust research, this volume facilitates a greater understanding of these topics, thus enabling their employment in social relations.
This book presents recent research on the vulnerabilities of national systems and assets, which have gained special attention in connection with Critical Infrastructures over the last two decades. The book concentrates on R&D activities relating to Critical Infrastructures, focusing on enhancing both the performance of services and the level of security. The objectives of the book are based on a project entitled "Critical Infrastructure Protection Researches" (TAMOP-4.2.1.B-11/2/KMR-2011-0001), which concentrated on innovative UAV solutions, robotics, cybersecurity, surface engineering, and mechatronics, as well as technologies providing safe operation of essential assets. The book summarizes the methodologies and efforts undertaken to fulfill the goals defined. The project was performed by a consortium of Obuda University and the National University of Public Service.
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvagar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in "big data" situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on future research directions, the contributions will benefit graduate students and researchers in computational biology, statistics and the machine learning community.
This book reviews the theoretical concepts, leading-edge techniques and practical tools involved in the latest multi-disciplinary approaches addressing the challenges of big data. Illuminating perspectives from both academia and industry are presented by an international selection of experts in big data science. Topics and features: describes the innovative advances in theoretical aspects of big data, predictive analytics and cloud-based architectures; examines the applications and implementations that utilize big data in cloud architectures; surveys the state of the art in architectural approaches to the provision of cloud-based big data analytics functions; identifies potential research directions and technologies to facilitate the realization of emerging business models through big data approaches; provides relevant theoretical frameworks, empirical research findings, and numerous case studies; discusses real-world applications of algorithms and techniques to address the challenges of big datasets.
This book presents new concepts as well as practical applications and experiences in the field of information technology for environmental engineering. The book has three main focus areas: firstly, it shows how information technologies can be employed to support natural resource management and conservation, environmental engineering, scientific simulation and integrated assessment studies. Secondly, it demonstrates the application of computing in the everyday practices of environmental engineers, natural scientists, economists and social scientists. And thirdly, it demonstrates how the complexity of natural phenomena can be approached using interdisciplinary methods, where computer science offers the infrastructure needed for environmental data collection and management, scientific simulations, decision support documentation and reporting. The book collects selected papers presented at the 7th International Symposium on Environmental Engineering, held in Port Elizabeth, South Africa in July 2015. It discusses recent success stories in eco-informatics, promising ideas and new challenges from the interdisciplinary viewpoints of computer scientists, environmental engineers, economists and social scientists, demonstrating new paradigms for problem-solving and decision-making.
This book presents the latest findings and ongoing research in the field of green information systems and green information and communication technology (ICT). It provides insights into a whole range of cross-cutting topics in ICT and environmental sciences as well as showcases how information and communication technologies allow environmental and energy efficiency issues to be handled effectively. The papers presented in this book are a selection of extended and improved contributions to the 28th International Conference on Informatics for Environmental Protection dedicated to ICT for energy efficiency. This book is essential and particularly worth reading for those who already gained basic knowledge and want to deepen and extend their expertise in the subjects mentioned above.
This book covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students' knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical analysis through interactive examples and is suitable for undergraduate and graduate students taking their first statistics courses, as well as for undergraduate students in non-mathematical fields, e.g. economics, the social sciences etc.