This book presents a proposal for designing business process management (BPM) systems that comprise much more than just process modelling. Based on a purified Business Process Model and Notation (BPMN) variant, the authors present proposals for several important issues in BPM that have not been adequately considered in the BPMN 2.0 standard. The book focuses on modality as well as actor and user interaction modelling and offers an enhanced communication concept. In order to render models executable, the semantics of the modelling language needs to be described rigorously enough to prevent deviating interpretations by different tools. For this reason, the semantics of the necessary concepts introduced in this book are defined using the Abstract State Machine (ASM) method. Finally, the authors show how the different parts of the model fit together using a simple example process, and introduce the enhanced Process Platform (eP2) architecture, which binds all the different components together. The resulting method is named Hagenberg Business Process Modelling (H-BPM) after the Austrian village where it was designed. The motivation for the development of the H-BPM method stems from several industrial projects in which business analysts and software developers struggled with redundancies and inconsistencies in system documentation due to missing integration. The book is aimed at researchers in business process management and Industry 4.0 as well as advanced professionals in these areas.
This book demonstrates the use of a wide range of strategic engineering concepts, theories and applied case studies to improve the safety, security and sustainability of complex and large-scale engineering and computer systems. It first details the concepts of system design, life cycle, impact assessment and security to show how these ideas can be brought to bear on the modeling, analysis and design of information systems with a focused view on cloud-computing systems and big data analytics. This informative book is a valuable resource for graduate students, researchers and industry-based practitioners working in engineering, information and business systems as well as strategy.
This book analyses the role of Enterprise Resource Planning (ERP) and Business Intelligence (BI) systems in improving information quality through an empirical analysis carried out in Italy. The study begins with a detailed examination of ERP features that highlights the advantages and disadvantages of ERP adoption. Critical success factors for ERP implementation and post-implementation are then discussed, along with the capabilities of ERP in driving the alignment between management accounting and financial accounting information. The study goes on to illustrate the features of BI systems and to summarize companies' needs for BI. Critical success factors for BI implementation are then presented, along with the BI maturity model and lifecycle. The focus of the research entails a detailed empirical analysis in the Italian setting designed to investigate the role played by ERP and BI systems in reducing information overload/underload and improving information quality by influencing the features of information flow. The practical and theoretical implications of the study are discussed and future avenues of research are suggested. This book will be of value for all those who have an interest in the capacities of ERP and BI systems to enhance business information quality.
Technological advances in the last five years have allowed organizations to use Business Analytics to provide insights, increase understanding and, it is hoped, gain the elusive 'competitive edge'. The rapid development of Business Analytics is profoundly impacting all enterprise competences, and classical business professions are being redefined by a much deeper interplay between business and information systems. As computing capabilities for analysis have moved outside the IT glass-house and into the sphere of individual workers, they are no longer the exclusive domain of IT professionals but rather accessible to all employees. Complex open-source data analytics packages and client-level visualization tools deployed on desktops and laptops equip virtually any end-user with the instruments to carry out significant analytical tasks. All the while, the drive to improve 'customer experience' has heightened the demand for data involving customers, providers and entire ecosystems. In response to the proliferation of Business Analytics, a new Center and Master of Science program were introduced at the National University of Singapore (NUS). The Center collaborates with over 40 external partner organizations in Asia-Pacific, with which all MSBA students undertake individual projects. Business Analytics: Progress on Applications in Asia Pacific provides a useful picture of the maturity of the Business Analytics domain in Asia Pacific. For more information about the Business Analytics Center at NUS, visit the website at: msba.nus.edu/
This book addresses a broad range of problems commonly encountered in the fields of financial analysis, logistics and supply chain management, such as the use of big data analytics in the banking sector. Divided into twenty chapters, the book discusses contemporary topics including co-operative/non-cooperative supply chain models for imperfect quality items with trade-credit financing; a non-dominated sorting water cycle algorithm for the cardinality constrained portfolio problem; and determining initial, basic and feasible solutions for transportation problems by means of the "supply demand reparation method" and "continuous allocation method." In addition, the book delves into a comparison study on exponential smoothing and the ARIMA model for fuel prices; an optimal policy for Weibull distributed deteriorating items varying with ramp type demand rate and shortages; an inventory model with shortages and deterioration for three different demand rates; outlier labeling methods for medical data; a garbage disposal plant as a validated model of a fault-tolerant system; the design of a "least cost ration formulation application for cattle"; a preservation technology model for deteriorating items with advertisement dependent demand and trade credit; a time series model for stock price forecasting in India; and asset pricing using capital market curves. The book is a valuable asset for all researchers and industry practitioners working in these areas, giving them a feel for the latest developments and encouraging them to pursue further research in this direction.
Public Information Technology: Policy and Management Issues constitutes a survey of many of the most important dimensions of managing information technology in the public sector. Written by noted academics and public administration practitioners, this book addresses general policy and administrative issues in this arena as well as the information technology skills needed by public managers.
Hands-on tools for implementing, designing, and managing a successful architectural process within a corporation. Enterprise architecture holds the key to an organization's success, yet for many companies the promised benefits of IT architecture still elude them. Dynamic Enterprise Architecture provides readers with a better understanding of the processes involved in successfully employing architectural thinking and empowers them with the instruments to analyze their own situations and identify points of improvement. Based on the authors' decades of practical experience in the field, Dynamic Enterprise Architecture:
* Provides step-by-step guidance to help organizations successfully introduce an enterprise architecture system
* Focuses on the processes to make enterprise architecture work
* Discusses ways to avoid bottlenecking in enterprise architecture
* Provides tips and best practices for implementing the processes involved
* Shows readers the most effective ways to use IT in their organizations
Make the most of enterprise architecture and achieve your business goals with the step-by-step guidelines found in Dynamic Enterprise Architecture.
How do we define the nature of our business, gather everything that we know about it, and then centralize our information in one, easily accessed place within the organization? Breslin and McGann call such knowledge our ways of working and the place where it will be found a business knowledge repository. All of a company's accumulated operations data, its manuals and procedures, its records of compliance with myriad regulations, its audits, and its disaster recovery plans are essential information that today's management needs at its fingertips, and information that tomorrow's management must be sure can easily be found. Breslin and McGann show clearly and comprehensively how business knowledge repositories can be established and maintained, what should go into them and how to get it out, who should have access, and all of the other details that management needs to make the most of this valuable resource and means of doing business. An essential study and guide for management at upper levels in all types of organizations, both public and private. Breslin and McGann show that once an organization's knowledge of itself is formulated into its ways of working, its so-called object orientation makes it easily maintained. The repository approach to organizing and consolidating knowledge makes it possible for all of its potential users to access it easily, without having to go to one source for one thing they need and to another for another thing, a tedious and costly procedure in many organizations that have allowed their information and knowledge resources to not only grow but become duplicated as well. The repository approach also makes it possible for management to organize and access information by job functions, and to make it available to employees more easily in training situations. Regulators and auditors are also more easily served. As a result, CFOs will find their annual audit and various compliance fees considerably reduced.
Breslin and McGann's book is thus a blueprint for the creation of knowledge repositories and a discussion of how graphical communication between information systems creators and their client end users can be made to flow smoothly and efficiently.
This textbook addresses the conceptual and practical aspects of the various phases of the lifecycle of service systems, ranging from service ideation, design, implementation, analysis, improvement and trading associated with service systems engineering. Written by leading experts in the field, this indispensable textbook will enable a new wave of future professionals to think in a service-focused way with the right balance of competencies in computer science, engineering, and management. Fundamentals of Service Systems is a centerpiece for a course syllabus on service systems. Each chapter includes a summary, a list of learning objectives, an opening case, a review section with questions, a project description, a list of key terms, and a further-reading bibliography. All these elements enable students to learn at a faster and more comfortable pace. For researchers, teachers, and students who want to learn about this new emerging science, Fundamentals of Service Systems provides an overview of the core disciplines underlying the study of service systems. It is aimed at students of information systems, information technology, and business and economics. It also targets business and IT practitioners, especially those who are looking for better ways of innovating, designing, modeling, analyzing, and optimizing service systems.
This is the first book to explore how Semantic Web technologies (SWTs) can be used to create intelligent engineering applications (IEAs). Technology-specific chapters reflect the state of the art in relevant SWTs and offer guidelines on how they can be applied in multi-disciplinary engineering settings characteristic of engineering production systems. In addition, a selection of case studies from various engineering domains demonstrate how SWTs can be used to create IEAs that enable, for example, defect detection or constraint checking. Part I "Background and Requirements of Industrie 4.0 for Semantic Web Solutions" provides the background information needed to understand the book and addresses questions concerning the semantic challenges and requirements of Industrie 4.0, and which key SWT capabilities may be suitable for implementing engineering applications. In turn, Part II "Semantic Web-Enabled Data Integration in Multi-Disciplinary Engineering" focuses on how SWTs can be used for data integration in heterogeneous, multi-disciplinary engineering settings typically encountered in the creation of flexible production systems. Part III "Creating Intelligent Applications for Multi-Disciplinary Engineering" demonstrates how the integrated engineering data can be used to support the creation of IEAs, while Part IV "Related and Emerging Trends in the Use of Semantic Web in Engineering" presents an overview of the broader spectrum of approaches that make use of SWTs to support engineering settings. A final chapter then rounds out the book with an assessment of the strengths, weaknesses and compatibilities of SWTs and an outlook on future opportunities for applying SWTs to create IEAs in flexible industrial production systems. This book seeks to build a bridge between two communities: industrial production on one hand and Semantic Web on the other. Accordingly, stakeholders from both communities should find this book useful in their work. 
Semantic Web researchers will gain a better understanding of the challenges and requirements of the industrial production domain, offering them guidance in the development of new technologies and solutions for this important application area. In turn, engineers and managers from engineering domains will arrive at a firmer grasp of the benefits and limitations of using SWTs, helping them to select and adopt appropriate SWTs more effectively. In addition, researchers and students interested in industrial production-related issues will gain valuable insights into how and to what extent SWTs can help to address those issues.
This book discusses action-oriented, concise and easy-to-communicate goals and challenges related to quality, reliability, infocomm technology and business operations. It brings together groundbreaking research in the area of software reliability, e-maintenance and big data analytics, highlighting the importance of maintaining the current growth in information technology (IT) adoption in businesses, while at the same time proposing process innovations to ensure sustainable development in the immediate future. In its thirty-seven chapters, it covers various areas of e-maintenance solutions, software architectures, patching problems in software reliability, preventive maintenance, industrial big data and reliability applications in electric power systems. The book reviews the ways in which countries currently attempt to resolve the conflicts and opportunities related to quality, reliability, IT and business operations, and proposes that internationally coordinated research plans are essential for effective and sustainable development, with research being most effective when it uses evidence-based decision-making frameworks resulting in clear management objectives, and is organized within adaptive management frameworks. Written by leading experts, the book is of interest to researchers, academicians, practitioners and policy makers alike who are working towards the common goal of making business operations more effective and sustainable.
This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to understand the phenomenon better, starting from a clearly stated definition. This book aims to be a starting read for executives who want (and need) to keep pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data are explained, and a series of different strategic approaches is provided. By browsing the book, it is possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data are discussed: some are general - such as ethics, privacy, and ownership - while others concern more specific business situations (e.g., initial public offerings, growth strategies, etc.). The important matter of selecting the right skills and people for an effective team is explained extensively, along with practical ways to recognize them and understand their personalities. Finally, a few relevant future technological trends are acknowledged (e.g., IoT, artificial intelligence, blockchain, etc.), especially for their close relation to the increasing amount of data and our ability to analyse it faster and more effectively.
This book provides a state-of-the-art perspective on intelligent process-aware information systems and presents chapters on specific facets and approaches applicable to such systems. Further, it highlights novel advances and developments in various aspects of intelligent process-aware information systems and business process management systems. Intelligence capabilities are increasingly being integrated into or created in many of today's software products and services. Process-aware information systems provide critical computing infrastructure to support the various processes involved in the creation and delivery of business products and services. Yet the integration of intelligence capabilities into process-aware information systems is a non-trivial yet necessary evolution of these complex systems. The book's individual chapters address adaptive process management, case management processes, autonomically-capable processes, process-oriented information logistics, process recommendations, reasoning over process models, process portability, and business process intelligence. The primary target groups are researchers and PhD/Master students in the field of information systems.
The rate of failure of IT projects has remained little changed in survey after survey over the past 15-20 years: over 40-50%. This has happened in spite of new technology, innovative methods and tools, and different management methods. Why does this happen? Why can't the situation be better? One reason is that many think of each IT effort as unique. In reality, many IT projects are very similar at a high, strategic level; where they differ is in the people and exact events - the detail. If you read the literature or have been in information systems or IT for some time, you have seen the same reasons for failure and the same problems and issues recur again and again. In this book, IT management experts Ben Lientz and Lee Larssen show you how to identify and track the recurring issues leading to failure in IT projects and provide a proven, modern method for addressing them. By following the recommendations in this book, readers can significantly reduce the risk of IT failures and increase the rate of success. Benefits of using this approach:
* Issues are identified earlier, giving more time for solution and action.
* Issues are resolved more consistently, since the approach tracks their repetition.
* You get an early warning of problems in IT work - before the budget or schedule falls apart.
* Management tends to have more realistic expectations with an awareness of issues.
* Users and managers have greater confidence in IT due to the improved handling of issues.
* Since the number of issues tends to stabilize in an organization, the IT organization and management get better at detecting, preventing, and dealing with issues over time - cumulative improvement.
* Giving attention to issues makes users more realistic in their requests and acts to deter requirement changes and scope creep.
This informative book goes beyond the technical aspects of data management to provide detailed analyses of quality problems and their impacts, potential solutions and how they are combined to form an overall data quality program, senior management's role, methods used to make improvements, and the life-cycle of data quality. It concludes with case studies, summaries of main points, roles and responsibilities for each individual, and a helpful listing of "dos and don'ts".
Seeking to define a new approach to data management at the enterprise level, this work takes the reader beyond information management to information control, where the methods of data capture and manipulation supersede data quantity. Using the metadata approach ensures long-term, universal control of all data characteristics and improves the effectiveness of IT as a corporate function by minimizing the potential for errors and improving communication and understanding between IT and other disciplines. By describing how to establish metadata management within an organization, this volume provides examples of data structure architectures, and reviews issues associated with metadata management in relation to the Internet and data warehousing. It helps the reader to control the factors that make data usable throughout an organization and to manage data so that it becomes a valuable corporate asset. The book examines real-world business departments that can benefit from this approach and ways in which sets of metadata can be both autonomous and overlapping.
This edited book presents the state-of-the-art of applying fuzzy logic to managerial decision-making processes in areas such as fuzzy-based portfolio management, recommender systems, performance assessment and risk analysis, among others. Presenting the latest research, with a strong focus on applications and case studies, it is a valuable resource for researchers, practitioners, project leaders and managers wanting to apply or improve their fuzzy-based skills.
Responsible Management of Information Systems discusses how information systems can be used and managed in a responsible manner. It does so by first defining the central concept of information systems as the business use of information technology, together with the underlying concepts of ethics and morality. The term responsibility is introduced as a mediation of ethics and morality and a promising approach to normative questions. After demonstrating that the traditional notion of responsibility runs into many problems when applied to information systems, the book develops a new, reflective theory of responsibility. This theory, which emphasizes the central characteristics of responsibility - namely openness, consequentialism, and teleology - is then applied to normative problems in information systems. It is shown that with this theory the central moral and legal problems of information systems, such as privacy and intellectual property, can be successfully addressed.
This is the second of a two-part guide to quantitative analysis using the IBM SPSS Statistics software package; this volume focuses on multivariate statistical methods and advanced forecasting techniques. More often than not, regression models involve more than one independent variable. For example, forecasting methods are commonly applied to aggregates such as inflation rates, unemployment, exchange rates, etc., that have complex relationships with determining variables. This book introduces multivariate regression models and provides examples to help readers understand the theory underpinning the model. The book presents the fundamentals of multivariate regression and then moves on to examine several related techniques that have application in business-oriented fields, such as logistic and multinomial regression. Forecasting tools such as the Box-Jenkins approach to time series modeling are introduced, as well as exponential smoothing and naive techniques. The book also covers topics such as factor analysis, discriminant analysis and multidimensional scaling (MDS).
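Simple exponential smoothing, one of the forecasting tools mentioned above, can be sketched in a few lines. This is a minimal illustration, not taken from the book or from SPSS; the series and smoothing constant are invented for the example:

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast via simple exponential smoothing:
    each new level blends the latest observation with the previous
    level, weighted by the smoothing constant alpha (0 < alpha <= 1)."""
    level = series[0]  # initialize the level with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # the final level is the forecast for the next period

# Hypothetical series; alpha = 0.5 weights old and new information equally.
forecast = simple_exponential_smoothing([10.0, 12.0, 14.0], alpha=0.5)
print(forecast)  # 12.5
```

A larger alpha makes the forecast react faster to recent observations; a smaller alpha smooths more heavily.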
In many disciplines of science it is vital to know the effect of a 'treatment' on a response variable of interest; the effect is known as the 'treatment effect'. Here, the treatment can be a drug, an education program or an economic policy, and the response variable can be an illness, academic achievement or GDP. Once the effect is found, it is possible to intervene to adjust the treatment and attain a desired level of the response variable. A basic way to measure the treatment effect is to compare two groups, one of which received the treatment and the other did not. If the two groups are homogeneous in all aspects other than their treatment status, then the difference between their response outcomes is the desired treatment effect. But if they differ in some aspects in addition to the treatment status, the difference in the response outcomes may be due to the combined influence of more than one factor. In non-experimental data, where the treatment is not randomly assigned but self-selected, the subjects tend to differ in observed or unobserved characteristics. It is therefore imperative that the comparison be carried out with subjects similar in their characteristics. This book explains how this problem can be overcome so the attributable effect of the treatment can be found. It brings to the fore recent advances in econometrics for treatment effects. The purpose of this book is to put together various economic treatment effect models in a coherent fashion, make clear which parameters can be of interest, and show how they can be identified and estimated under weak assumptions. The emphasis throughout the book is on semi- and non-parametric estimation methods, but traditional parametric approaches are also discussed. This book is ideally suited to researchers and graduate students with a basic knowledge of econometrics.
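The basic two-group comparison described above can be written as a naive difference in mean outcomes. The sketch below is only an illustration of that starting point (the outcome data are invented), and, as the blurb stresses, it identifies the treatment effect only when the groups are comparable apart from treatment status:

```python
def naive_treatment_effect(treated, control):
    """Difference in mean response between treated and control groups.
    Valid as a treatment-effect estimate only if the two groups are
    homogeneous in all characteristics other than treatment status."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

# Invented outcomes: treated mean is 7.0, control mean is 5.0.
effect = naive_treatment_effect([5.0, 7.0, 9.0], [4.0, 6.0])
print(effect)  # 2.0
```

With self-selected (non-experimental) data, this simple difference confounds the treatment with pre-existing group differences, which is exactly the problem the book's matching and semi-/non-parametric methods address.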
The Media Convergence Handbook sheds new light on the complexity of media convergence and the related business challenges. Approaching the topic from a managerial, technological as well as end-consumer perspective, it acts as a reference book and educational resource in the field. Media convergence at business level may imply transforming business models and using multiplatform content production and distribution tools. However, it is shown that the implementation of convergence strategies can only succeed when expectations and aspirations of every actor involved are taken into account. Media consumers, content producers and managers face different challenges in the process of media convergence. Volume II of the Media Convergence Handbook tackles these challenges by discussing media business models, production, and users' experience and perspectives from a technological convergence viewpoint.
This handbook provides a unique and in-depth survey of the current state-of-the-art in software engineering, covering its major topics, the conceptual genealogy of each subfield, and discussing future research directions. Subjects include foundational areas of software engineering (e.g. software processes, requirements engineering, software architecture, software testing, formal methods, software maintenance) as well as emerging areas (e.g., self-adaptive systems, software engineering in the cloud, coordination technology). Each chapter includes an introduction to central concepts and principles, a guided tour of seminal papers and key contributions, and promising future research directions. The authors of the individual chapters are all acknowledged experts in their field and include many who have pioneered the techniques and technologies discussed. Readers will find an authoritative and concise review of each subject, and will also learn how software engineering technologies have evolved and are likely to develop in the years to come. This book will be especially useful for researchers who are new to software engineering, and for practitioners seeking to enhance their skills and knowledge.
This book discusses the fusion of mobile and WiFi network data with semantic technologies and diverse context sources for offering semantically enriched context-aware services in the telecommunications domain. It presents the OpenMobileNetwork as a platform for providing estimated and semantically enriched mobile and WiFi network topology data using the principles of Linked Data. This platform is based on the OpenMobileNetwork Ontology consisting of a set of network context ontology facets that describe mobile network cells as well as WiFi access points from a topological perspective and geographically relate their coverage areas to other context sources. The book also introduces Linked Crowdsourced Data and its corresponding Context Data Cloud Ontology, which is a crowdsourced dataset combining static location data with dynamic context information. Linked Crowdsourced Data supports the OpenMobileNetwork by providing the necessary context data richness for more sophisticated semantically enriched context-aware services. Various application scenarios and proof of concept services as well as two separate evaluations are part of the book. As the usability of the provided services closely depends on the quality of the approximated network topologies, it compares the estimated positions for mobile network cells within the OpenMobileNetwork to a small set of real-world cell positions. The results prove that context-aware services based on the OpenMobileNetwork rely on a solid and accurate network topology dataset. The book also evaluates the performance of the exemplary Semantic Tracking as well as Semantic Geocoding services, verifying the applicability and added value of semantically enriched mobile and WiFi network data.
The book gives a systematic and detailed description of a new integrated product and process development approach for sheet metal manufacturing. Special attention is given to manufacturing that unites multidisciplinary competences of product design, material science, and production engineering, as well as mathematical optimization and computer based information technology. The case study of integral sheet metal structures is used by the authors to introduce the results related to the recent manufacturing technologies of linear flow splitting, bend splitting, and corresponding integrated process chains for sheet metal structures.
This book identifies, analyzes and discusses the current trends of digitalized, decentralized, and networked physical value creation by focusing on the particular example of 3D printing. In addition to evaluating 3D printing's disruptive potentials against a broader economic background, it also addresses the technology's potential impacts on sustainability and emerging modes of bottom-up and community-based innovation. Emphasizing these topics from economic, technical, social and environmental perspectives, the book offers a multifaceted overview that scrutinizes the scenario of a fundamental transition: from a centralized to a far more decentralized system of value creation.