Few software projects are completed on time, on budget, and to their original specifications. Focusing on what practitioners need to know about risk in the pursuit of delivering software projects, Applied Software Risk Management: A Guide for Software Project Managers covers key components of the risk management process and the software development process, as well as best practices for software risk identification, risk planning, and risk analysis. Written in a clear and concise manner, this resource presents concepts and practical insight into managing risk. It first covers risk-driven project management, risk management processes, risk attributes, risk identification, and risk analysis. The book continues by examining responses to risk, the tracking and modeling of risks, intelligence gathering, and integrated risk management. It concludes with details on drafting and implementing procedures. A diary of a risk manager provides insight in implementing risk management processes. Bringing together concepts across software engineering with a project management perspective, Applied Software Risk Management: A Guide for Software Project Managers presents a rigorous, scientific method for identifying, analyzing, and resolving risk.
Do you spend a lot of time during the design process wondering what users really need? Do you hate those endless meetings where you argue about how the interface should work? Have you ever developed something that later had to be completely redesigned?
At the foundation of today's leading-edge manufacturing companies is a common vision of a virtual, distributed enterprise in an agile environment, where organizations can swiftly and cost-effectively bring products from concept to production and respond dynamically to changes in customer and market requirements. Totally Integrated Enterprises: A Framework and Methodology for Business and Technology Improvement provides a framework and methodology for understanding and mapping current enterprise configurations, as well as for designing the revised architecture needed for the totally integrated enterprise. It also helps you select the MRPII, MES, APS, SCM, or ERP software most appropriate for your needs and for achieving total enterprise integration.
Firms are investing considerable resources to create large information infrastructures able to fulfil their varied information-processing and communication needs. The greater the drive towards globalization, the more crucial such infrastructures become. The "wiring" of the corporation should be done in a way that is aligned with its corporate strategy, is global, and generates value. This book presents six in-depth case studies of large corporations - AstraZeneca, IBM, Norsk Hydro, Roche, SKF, and Statoil - which together offer a picture of the main issues involved in information infrastructure implementation and management. Far from being a linear process, the use of the information infrastructure is in fact an open-ended process, in many cases out of control. Current management models and consulting advice do not seem able to cope with such a business landscape. This book provides the reader with interpretations and theories that can foster a different understanding and approach.
Throughout successive generations of information technology, the importance of the performance evaluation of software, computer architectures, and computer networks endures. For example, the performance issues of transaction processing systems and redundant arrays of independent disks have replaced the virtual memory and input-output problems of the 70s. ATM performance issues supersede those associated with the electronic telephony of the 70s. As performance issues evolve with the technologies, so must our approach to evaluation. In System Performance Evaluation: Methodologies and Applications, top academic and industrial experts review the major issues now faced in this arena. In a series of structured, focused chapters, they present the state-of-the-art in performance methodologies and applications. They address developments in analytical modeling and its interaction with detailed analysis of measurement data. They also discuss performance evaluation methodologies for large-scale software systems - in general and in the context of critical applications, such as nuclear reactor control and air transportation systems. With its particular emphasis on network performance for wireless networks, the Internet, and ATM networking, System Performance Evaluation becomes the ideal vehicle for professionals in computer architecture, networking, and software engineering to stay up-to-date and proficient in this essential aspect of information technology.
After the Y2K Fireworks focuses on the business and technical aspects of surviving the year 2000 problem - from an author conversant with both business (particularly financial) and computer professionals.
Advanced Antenna Systems for 5G Network Deployments: Bridging the Gap between Theory and Practice provides a comprehensive understanding of the field of advanced antenna systems (AAS) and how they can be deployed in 5G networks. The book gives a thorough understanding of the basic technology components, the state-of-the-art multi-antenna solutions, what 3GPP has standardized together with the reasoning behind it, AAS performance in real networks, and how AAS can be used to enhance network deployments.
This book defines what IoT Systems manageability looks like and the resources and costs associated with that manageability. It identifies IoT Systems performance expectations and addresses the difficult challenge of determining the actual costs of IoT Systems implementation, operation, and management across multiple institutional organizations. It details the unique challenges that cities and institutions face in implementing and operating IoT Systems.
"Vivek Kale has written a great book on performance management that focuses on decision-making; on continuous, incremental improvement; and on identifying common patterns in becoming a more intelligent organization." -James Taylor, CEO of Decision Management Solutions and author of Real-World Decision Modeling with DMN "Introducing the concepts of decision patterns and performance intelligence, Vivek Kale has written another important book on the issues faced by contemporary organizations."-Gary Cokins, author of Predictive Business Analytics and Performance Management: Integrating Strategy Execution, Methodologies, Risk, and Analytics Enterprise Performance Intelligence and Decision Patterns unravels the mystery of enterprise performance intelligence (EPI) and explains how it can transform the operating context of business enterprises. It provides a clear understanding of what EPI means, what it can do, and application areas where it is practical to use. The need to be responsive to evolving customer needs and desires creates organizational structures where business intelligence (BI) and decision making is pushed out to operating units that are closest to the scene of the action. Closed-loop decision making resulting from a combination of on-going performance management with on-going BI can lead to an effective responsive enterprise; hence, the need for performance intelligence (PI). This pragmatic book: Introduces the technologies such as data warehousing, data mining, analytics, and business intelligence systems that are a first step toward enabling data-driven enterprises. Details decision patterns and performance decision patterns that pave the road for performance intelligence applications. Introduces the concepts, principles, and technologies related to performance measurement systems. Describes the concepts and principles related to balance scorecard systems (BCS). Introduces aspects of performance intelligence for the real-time enterprises. 
Enterprise Performance Intelligence and Decision Patterns shows how a company can design and implement instruments ranging from decision patterns to PI systems that can enable continuous correction of business unit behavior so companies can enhance levels of productivity and profitability.
PgMP (R) Exam Practice Test and Study Guide, Fourth Edition is the book you need to pass the Program Management Professional (PgMP (R)) exam the first time around. It reflects recent revisions based on PMI (R)'s Standard for Program Management - Third Edition (2013). Based on best practices that complement PMI (R)'s standards, this is the most comprehensive and up-to-date resource available to help you prepare for the exam with new and changed terminology. It includes a list of the major topics covered on the exam, organized by the five performance domains - strategic program management, program life cycle, benefits management, stakeholder management, and governance - as presented in the Program Management Professional Examination Content Outline. It also includes helpful tips on how to make the most of the time you have available to prepare for the exam. Just like its bestselling predecessors, this indispensable study guide includes 20 multiple-choice practice questions for each domain, along with a comprehensive answer key. The program life cycle domain includes 20 questions for each of its five phases. Each question has a plainly written rationale for the correct answer, with bibliographic references for further study. Two challenging, 170-question practice tests that simulate the actual exam are included in the book and online, so you can retake them as many times as necessary; they also include rationales and references. Scores for the online tests are presented as if each question were rated similarly, but this edition also includes a new component: the authors' own weighting system for the level of difficulty of each question. This system shows what they feel meets the exam's criteria for Proficient, Moderately Proficient, and below Proficient. You then will see your scores by domain in both approaches.
Supplying an insider's look at the questions, terminology, and sentence construction you will encounter on the day of the exam, this indispensable study tool is designed to help you pass the exam and achieve the highly sought-after PgMP (R) certification.
CISO's Guide to Penetration Testing: A Framework to Plan, Manage, and Maximize Benefits details the methodologies, framework, and unwritten conventions penetration tests should cover to provide the most value to your organization and your customers. Discussing the process from both a consultative and technical perspective, it provides an overview of the common tools and exploits used by attackers along with the rationale for why they are used. From the first meeting to accepting the deliverables and knowing what to do with the results, James Tiller explains what to expect from all phases of the testing life cycle. He describes how to set test expectations and how to identify a good test from a bad one. He introduces the business characteristics of testing, the imposed and inherent limitations, and describes how to deal with those limitations. The book outlines a framework for protecting confidential information and security professionals during testing. It covers social engineering and explains how to tune the plethora of options to best use this investigative tool within your own environment. Ideal for senior security management and anyone else responsible for ensuring a sound security posture, this reference depicts a wide range of possible attack scenarios. It illustrates the complete cycle of attack from the hacker's perspective and presents a comprehensive framework to help you meet the objectives of penetration testing-including deliverables and the final report.
All organizations, whether for profit, not for profit, or government, face issues of information technology management. While the concerns involved may differ from organization to organization, the principles of good information technology management remain the same. Using a compilation of articles on various topics relating to technology management, Handbook of Technology Management in Public Administration addresses the management, implementation, and integration of technology across a wide variety of disciplines. The book highlights lessons learned to assist you in solving contemporary problems and avoiding pitfalls. It discusses the creation of innovative paradigms, new boundaries, diversity frameworks, and operational breakthroughs emanating from technology. It also raises questions about the productivity, violence, and intrusions of technology into the personal, organizational, and social environments as we move forward. This book identifies the potential ethical, legal, and social implications of technology from electronic signatures to genetic screenings to privacy interventions to industrial applications. It raises issues, problems, and concerns arising from technology and its effects on nurturing or nullifying the foundations of life and liberty in a constitutional democracy. With the development of new tools and techniques, technology promises to make organizations more productive and efficient. Handbook of Technology Management in Public Administration identifies effective technology management approaches while balancing the repercussions of technological growth.
Economics and technology have dramatically re-shaped the landscape of software development. It is no longer uncommon to find a software development team dispersed across countries or continents. Geographically distributed development challenges the ability to clearly communicate, enforce standards, ensure quality levels, and coordinate tasks. Global Software Development Handbook explores techniques that can bridge distances, create cohesion, promote quality, and strengthen lines of communication. The book introduces techniques proven successful at international electronics and software giant Siemens AG. It shows how this multinational uses a high-level process framework that balances agility and discipline for globally distributed software development. The authors delineate an organizational structure that not only fosters team building, but also achieves effective collaboration among the central and satellite teams. The handbook explores the issues surrounding quality and the processes required to realize quality in a distributed environment. Communication is a tremendous challenge, especially for teams separated by several time zones, and the authors elucidate how to uncover patterns of communication among these teams to determine effective strategies for managing communication. The authors analyze successful and failed projects and apply this information to how a project can be successful with distributed teams. They also provide lightweight processes that can be dynamically adapted to the demands of any project.
UML for Developing Knowledge Management Systems provides knowledge engineers the framework in which to identify types of knowledge and where this knowledge exists in an organization. It also shows ways in which to use a standard recognized notation to capture, or model, knowledge to be used in a knowledge management system (KMS). This volume enables knowledge engineers, systems analysts, designers, developers, and researchers to understand the concept of knowledge modeling with Unified Modeling Language (UML). It offers a guide to quantifying, qualifying, understanding, and modeling knowledge by providing a reusable framework that can be adopted for KMS implementation. Following a brief history of knowledge management, the book discusses knowledge acquisition and the types of knowledge that can be discovered within a domain. It offers an overview of types of models and the concepts behind them. It then reviews UML and how to apply UML to model knowledge. The book concludes by defining and applying the Knowledge Acquisition framework via a real-world case study.
Defining and Deploying Software Processes enables you to create efficient and effective processes that let you better manage project schedules and software quality. The author's organized approach details how to deploy processes into your company's culture that are enthusiastically embraced by employees, and explains how to implement a Web-based process architecture that is completely flexible and extensible. Divided into four sections, the book defines the software process architectural model, then explores how this model is implemented. It addresses both the importance of the Web in deploying processes and the importance of a version-controlled repository tool for process management. The third section examines the use of the software process model. The author focuses on classes of process users, metrics collection and presentation, schedule creation and management, earned value, project estimation, time-card charging, subcontract management, and integrated teaming. The final section discusses deployment of the model into an organization, outlining how to rapidly confront pain issues, create and charter a process group, develop process champions, pilot and measure the model, and prepare for an external model appraisal such as SCAMPI.
The Hands-On Project Office: Guaranteeing ROI and On-Time Delivery provides thoroughly tested processes and techniques that harried IT managers can immediately apply to improve IT deliverables. It is a practitioner's handbook, providing simple, deployable frameworks, practical tools, and proven best practices for successful IT service and project delivery management. It helps IT executives obtain reasonable levels of economy, consistency, and reliability in the execution of projects. After reading this text, IT managers will be able to coordinate their work efforts, hold their own teams accountable, and communicate the impact of effective IT delivery.
In modern business, the availability of up-to-date and secure information is critical to a company's competitive edge and marketing drive. Unfortunately, traditional business studies and classical economics are unable to provide the necessary analysis of such contemporary issues as information technology and knowledge management. The Efficient Enterprise: Increased Corporate Success with Industry-Specific Information Technology and Knowledge Management details an economic business model and visualization system that includes today's business needs and demonstrates how industry-specific information technology blazes the trail towards increased corporate success. Throughout this book, the author explains how his revolutionary theories are put into practice in the industry-specific ERP software CSB-System. Following a systematic visualization of economic principles, this text examines IT business organization and control systems, focusing on software choices and integration, industry-specific software, and workflow management. The third and largest section of the book explores facets of integrated industry-specific IT for corporate management. Concepts include:
* Business organization
* Quality Control System
* Management and controlling
* Performance and time management
* Automated economy
* Commodity and product management
* Accounting and finance
The Efficient Enterprise provides a roadmap for the expert integration and application of software and technology in the pursuit of efficiency and profitability. Schimitzek's intention is to develop a unified formula for economics, in which any transaction can be modeled using four components: addresses, items, conditions, and procedures.
Numerous methods exist to model and analyze the different roles, responsibilities, and process levels of information technology (IT) personnel. However, most methods neglect to account for the rigorous application and evaluation of human errors and their associated risks. This book fills that need. Modeling, Evaluating, and Predicting IT Human Resources Performance explains why it is essential to account for the human factor when determining the various risks in the software engineering process. The book presents an IT human resources evaluation approach that is rooted in existing research and describes how to enhance existing approaches through strict use of software measurement and statistical principles and criteria. Discussing IT human factors from a risk assessment point of view, the book identifies, analyzes, and evaluates the basics of IT human performance. It details the IT human factors required to achieve desired levels of human performance prediction. It also provides a rigorous investigation of existing human factors evaluation methods, including IT expertise and the Big Five, in combination with powerful statistical methods, such as failure mode and effect analysis (FMEA) and design of experiments (DoE). The book:
* Supplies an overview of existing methods of human risk evaluation
* Provides a detailed analysis of IT role-based human factors using the well-known Big Five method for software engineering
* Models the human factor as a risk factor in the software engineering process
* Summarizes emerging trends and future directions
In addition to applying well-known human factors methods to software engineering, the book presents three models for analyzing psychological characteristics. It supplies a profound analysis of human resources within the various software processes, including development, maintenance, and application, under consideration of the Capability Maturity Model Integration (CMMI) process level five.
The book is written as a practical guide for researchers who want to know more about the role ontologies play in today's neuroscientific findings and who may want to develop ontologies for their specific research domain. It is geared as a reader for the graduate level and provides a guide to the "best practices" in neurobiological ontology development, culled from leading experts in the development and application of ontologies for representation and meta-analysis of neuroscientific data. The book is divided into four sections: Motivation, Theory, Practice, and Application. An appendix reviews current tools and choices for biomedical ontology development, sharing, and dissemination. The first section, "Motivations for Ontologies in Neurobiological Research," is an introduction to and motivation for ontologies for biomedical researchers and neuroscientists. Ontologies are defined, and examples regarding concepts, instances, classes, relationships, and reasoning are drawn from neuroscientific and clinical research wherever possible. The motivation for the application of ontologies to biomedical research is presented, drawing from the successes of the Gene Ontology (GO) and others, with some foreshadowing of the Applications found in Section 4. An overview of coordinated efforts in ontology sharing and re-use is included, so that readers can see what ontologies already exist and will know where to look for areas of ontology development and related tools for ontology-based representation in specific scientific domains. The second section, "Theory: An Ideal Ontology," focuses on the theory and formalisms underlying ontology development and application, presented with a minimum of mathematical symbols. It is expected that the readership is at most modestly familiar with first-order logic but not necessarily with more sophisticated mathematical logics or formalisms.
The goal for this section is to introduce the basics of ontology design and logic-based implementation, and to explain how ontology design may affect downstream applications (such as searching and reasoning over data that have been annotated with an ontology). In addition, this section broaches a few of the current issues and controversies in ontology design and implementation that could have a practical impact on choices in building new ontologies and ontology-based applications for science. While an entire book can be written about the choices that go into ontology design, we choose to focus on educating the reader regarding the basic issues, with indications for other sources with more detail. The third section, "Practice: Where Representation Meets Reality," focuses on the general issues that biomedical researchers face when using ontologies to represent their studies and data. The distinction between top-down (knowledge-driven) and bottom-up (data-driven) methods is a key challenge: researchers often come to ontology development with a specific problem they wish to solve or a particular type of data they wish to represent and reason about. Some start by defining the lowest level concepts that are closest to the actual instances of data, and others start at the top, modeling the structure of their research process. Each of these approaches has merit, and each has challenges. Ultimately, both top-down and bottom-up methods may be needed to form ontological bridges between data and the high-level knowledge that is linked to data in a particular domain. Similarly, in the case where several ontologies may already be applicable to different portions of the data or concepts the researcher is attempting to model, the role and advantages of ontology selection or harmonization needs to be understood. 
The final chapter of this section includes a discussion of where theoretical purity must interact with the complexity of actually linking to the data and to sometimes incomplete knowledge, and what the resultant hybrid of pure ontology and real-world data means for ontology application. The fourth section, "Applications: Case Studies in Neuro-Ontology Design and Use," elaborates on the design principles, issues, and challenges discussed in the first three sections and presents how these have been applied or addressed within certain biomedical domains of research. A chapter is dedicated to the issues of ontologies of neuroanatomy, including discussion of the methods their developers have chosen and the best practices they have adopted. This section additionally presents the vision and current efforts of several ontologies interacting with each other to represent the experimental concepts, methods, data, and interpretation of cognitive neuroimaging studies. The specific challenges of modeling space and time in physiological research are also included. The final chapter is a forward-looking piece, anticipating that in any subdomain, with sufficient effort, there will come a time when the bulk of the ontology-building endeavor subsides, and considering what a fully developed and applied ontology would mean for scientific discourse in that domain. The appendix is a practical compendium of tools and resources for the beginning ontology developer within biomedical research, to aid entry into the biomedical ontology community and to leverage existing efforts.
The interest in PCs and computer technology has spawned a host of courses in microcomputer technology (both traditional FE courses and more adult retraining). This title was written for the demands of such courses at advanced FE level, in particular the CG 7361. The emphasis is on practical aspects of hardware handling, and line diagrams and photographs help the reader to identify component parts. Suitable for structured study or interested enthusiasts, the theory and practice is covered in breadth, whilst workbook-type fill-in sections should benefit teachers when lesson-planning.
This book gives a practical view of why metrics and service reports are so important to the delivery of an effective service and to service improvements. It describes the types, design, target audiences, and documentation of metrics used in the service reporting process, as covered by the requirements of clause 4 (the Plan-Do-Check-Act cycle) and clause 6.2 (Service reporting) of ISO/IEC 20000-1. Useful tips, techniques, and example metrics are included.
This book will be of particular interest to those who have used BS 15000 for service improvements, audits or training and need to update their material to reflect the ISO/IEC 20000 standard. ISO/IEC 20000 was based on BS 15000, and this book provides a detailed comparison of ISO/IEC 20000 and BS 15000, for both Parts 1 and 2. It shows the differences in structure, clause numbering and references. The core of this book is a series of tables detailing the changes to the requirements and recommendations clause-by-clause, as well as any re-wording that has been provided to give clarification for an international audience. It includes an explanation of why the changes were made and the implications of each of the changes. This book is based on the material produced by the Project Editor during the drafting of both Parts 1 and 2 of ISO/IEC 20000.