Networked Control Systems: Cloud Control and Secure Control explores new technological developments in networked control systems (NCSs), including techniques such as event-triggered, secure and cloud control. It provides the fundamentals and underlying issues of networked control systems under normal operating environments and under cyber-physical attack. The book includes a critical examination of the principles of cloud computing and cloud control systems design, the available secure control design techniques for NCSs under cyber-physical attack, and strategies for resilient and secure control of cyber-physical systems. Smart grid infrastructures are also discussed, along with diagnosis methods to analyze and counteract impacts. Finally, a series of practical case studies covers a range of NCSs. This book is an essential resource for professionals and graduate students working in the fields of networked control systems, signal processing and distributed estimation.
One of the most difficult, yet important, questions regarding projects is "What advantages will this project create for the investors and key stakeholders?" Projects and programs should be treated as investments. This means that the focus of projects shifts from delivering within the triple constraints (time-cost-quality) towards some of the more fundamental questions: What is the purpose of this investment? What are the specific advantages expected? Are these benefits worth the investment? Implementing Project and Program Benefit Management is written for executives and practitioners within the portfolio, program, and project environment. It guides them through the important work that must be addressed as the investment progresses towards the realization of benefits. The processes discussed cover the strategic elements of benefits realization as well as the more detailed requirements, which are the domain of the program delivery teams and the operational users. Using real cases to explain complex situations, operational teams and wider groups of stakeholders, including communities affected by infrastructure projects, will be able to engage in the conversation with the sponsors and delivery teams. Covering an area of program and project management that is rapidly becoming more widely valued, this book blends theory with practical experience to present a clear process flow to managing the benefits life cycle. Best practices are defined, and pitfalls and traps are identified to enable practitioners to apply rigor and structure to this crucial discipline.
Software Security: Concepts & Practices is designed as a textbook and explores the fundamental security theories that govern common software security technical issues. It focuses on practical programming materials that teach readers how to implement security solutions using the most popular software packages. It is not limited to any specific cybersecurity subtopic, and the chapters touch upon a wide range of cybersecurity domains, ranging from malware to biometrics and more. Features: The book presents the implementation of a unique socio-technical solution for real-time cybersecurity awareness. It provides comprehensible knowledge about security, risk, protection, estimation, knowledge and governance. Various emerging standards, models, metrics, continuous updates and tools are described to explain security principles and mitigation mechanisms for higher security. The book also explores common vulnerabilities plaguing today's web applications. The book is aimed primarily at advanced undergraduates and graduates studying computer science, artificial intelligence and information technology. Researchers and professionals will also find this book useful.
Organizations spend large amounts of money to purchase, deploy, and optimize their Electronic Health Records (EHRs). These are not plug-and-play systems, so a commitment to an ongoing improvement cycle is necessary. When done well, this cycle responds to the people, the processes, and the technology. When not done well, the result can be complete failure of the system, costing the organization thousands of dollars. Based on the foundational premise that EHR governance done right speeds up change and leads to a positive user experience, this book draws upon more than a decade of work with government, academic, and nonprofit organizations using Epic, Allscripts, McKesson, Meditech, and Cerner. Designed to be practical and pragmatic, it outlines a strategic process that can scale to small and large organizations alike. It begins with how to articulate a clear vision to organizational leaders so they can champion strong EHR governance both theoretically and financially. It then walks through each step required for leading successful change, calling out critical lessons learned to help the reader avoid pitfalls and achieve measurable improvement more rapidly. It concludes with a commitment to ongoing growth and refinement through benchmarked metrics, innovation, and out-of-the-box thinking.
5G NR: Architecture, Technology, Implementation, and Operation of 3GPP New Radio Standards is an in-depth, systematic, technical reference on 3GPP's New Radio standards (Release 15 and beyond), covering the underlying theory, functional descriptions, practical considerations, and implementation of the 5G new radio access technology. The book describes the design and operation of the individual components and shows how they are integrated into the overall system and operate from a systems perspective. Uniquely, this book gives detailed information on RAN protocol layers, transports, network architectures, and services, as well as practical implementation and deployment issues, making it suitable for researchers and engineers who are designing and developing 5G systems. Reflecting the author's more than 30 years of experience in signal processing, microelectronics, and wireless communication system design, this book is ideal for professional engineers, researchers, and graduate students working and researching in cellular communication systems and protocols as well as mobile broadband wireless standards.
This book offers postgraduate and early career researchers in accounting and information systems a guide to choosing, executing and reporting appropriate data analysis methods to answer their research questions. It provides readers with a basic understanding of the steps that each method involves, and of the facets of the analysis that require special attention. Rather than presenting an exhaustive overview of the methods or explaining them in detail, the book serves as a starting point for developing data analysis skills: it provides hands-on guidelines for conducting the most common analyses and reporting results, and includes pointers to more extensive resources. Comprehensive yet succinct, the book is written in language that everyone can understand - from students to those employed by organizations wanting to study the context in which they work. It also serves as a refresher for researchers who have learned data analysis techniques previously but need a reminder for the specific study they are involved in.
This latest textbook from bestselling author Douglas E. Comer is a class-tested book providing a comprehensive introduction to cloud computing. Focusing on concepts and principles, rather than commercial offerings by cloud providers and vendors, The Cloud Computing Book: The Future of Computing Explained gives readers a complete picture of the advantages and growth of cloud computing, cloud infrastructure, virtualization, automation and orchestration, and cloud-native software design. The book explains real and virtual data center facilities, including computation (e.g., servers, hypervisors, virtual machines, and containers), networks (e.g., leaf-spine architecture, VLANs, and VxLAN), and storage mechanisms (e.g., SAN, NAS, and object storage). Chapters on automation and orchestration cover the conceptual organization of systems that automate software deployment and scaling. Chapters on cloud-native software cover parallelism, microservices, MapReduce, controller-based designs, and serverless computing. Although it focuses on concepts and principles, the book uses popular technologies in examples, including Docker containers and Kubernetes. Final chapters explain security in a cloud environment and the use of models to help control the complexity involved in designing software for the cloud. The text is suitable for a one-semester course for software engineers who want to understand the cloud, and for IT managers moving an organization's computing to the cloud.
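As a toy illustration of the MapReduce idea mentioned in this description (the Python sketch below is not taken from the book, and the input documents are made up), a minimal word count shows the map, shuffle, and reduce phases; in a real cloud deployment the map and reduce tasks would run in parallel across containers or servers.

```python
# Toy MapReduce-style word count (illustrative only; a real framework would
# distribute the map and reduce phases across many machines).
from collections import defaultdict

documents = ["the cloud scales out", "the cloud scales up", "scaling matters"]

def map_task(doc):
    # Map phase: emit (key, value) pairs independently for each document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle phase: group all emitted values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_task(key, values):
    # Reduce phase: combine the values collected for one key.
    return key, sum(values)

mapped = [pair for doc in documents for pair in map_task(doc)]
counts = dict(reduce_task(k, v) for k, v in shuffle(mapped).items())
print(counts)  # e.g. {'the': 2, 'cloud': 2, 'scales': 2, 'out': 1, 'up': 1, ...}
```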
This book explores the basic traits of inter-organizational networks, examining the interplay between structure, dynamics, and performance from a governance perspective. The book adopts a novel theoretical angle based on the interpretation of networks as multiple systems, and advances the theory in the realm of network effectiveness and failure. Composed of two parts, theoretical and empirical, The Network Organization clarifies the literature on networks, offering a systematic review, and provides a new perspective on their integration with other streams of research, focusing on under-studied issues such as agency, micro-dynamics, and network effectiveness. The second part proposes the analysis of the tourism destination of Venice, with a specific focus on the network between the Venice Film Festival, the hospitality system, and the local institutions. By exploring the pervasiveness of networks in modern social and economic life, this book will be valuable to students, researchers, practitioners and policy-makers.
Cloud functionality increases flexibility and capacity in IT systems, but it also adds complexity and requires a combination of business, financial and technical expertise to make it work effectively. Moreover, organizations often confuse availability with capacity, and assume incorrectly that using cloud services reduces the need to manage these factors. In Availability and Capacity Management in the Cloud: An ITSM narrative, Daniel McLean's fictional IT service management practitioner, Chris, faces the challenge of integrating cloud services into an ITSM structure. Based on the real-life experience of the author and other ITSM practitioners, this book tells the story of a cloud services implementation, exposing potential pitfalls and exploring how to handle issues that come with such projects. The end-of-chapter pointers give useful advice on dealing with the challenges organizations face when considering cloud services. Read this book and see how Chris meets the challenge of integrating cloud services with ITSM, and how you can do the same. Learn from the successes. Avoid the mistakes.
Internet of Things: Technologies and Applications for a New Age of Intelligence outlines the background and overall vision for the Internet of Things (IoT) and Cyber-Physical Systems (CPS), as well as associated emerging technologies. Key technologies are described, including device communication and interactions, connectivity of devices to cloud-based infrastructures, distributed and edge computing, data collection, and methods to derive information and knowledge from connected devices and systems using artificial intelligence and machine learning. Also included are system architectures and ways to integrate these with enterprise architectures, and considerations on potential business impacts and regulatory requirements. New to this edition: * Updated material on the current market situation and outlook. * A description of the latest developments in standards, alliances, and consortia; more specifically, the creation of the Industrial Internet Consortium (IIC) and its architecture and reference documents, the creation of the Reference Architectural Model for Industrie 4.0 (RAMI 4.0), the exponential growth in the number of working groups in the Internet Engineering Task Force (IETF), the transformation of the Open Mobile Alliance (OMA) to OMA SpecWorks and the introduction of the OMA LightweightM2M device management and service enablement protocol, the initial steps in the specification of the Web of Things (WoT) architecture by the World Wide Web Consortium (W3C), the GS1 architecture and standards, the transformation of ETSI M2M to oneM2M, and a few key facts about the Open Connectivity Foundation (OCF), IEEE, IEC/ISO, AIOTI, and NIST CPS. * The emergence of new technologies such as distributed ledgers, distributed cloud and edge computing, and the use of machine learning and artificial intelligence for IoT. * A chapter on security, outlining the basic principles for secure IoT installations. * New use case material on logistics, autonomous vehicles, and systems of CPS.
Smart Networks comprises the proceedings of Smartnet'2002, the seventh conference on Intelligence in Networks, which was sponsored by the International Federation for Information Processing (IFIP) and organized by Working Group 6.7. It was held in Saariselka, Finland, in April 2002.
This book presents the basics of both NAND flash storage and machine learning, detailing the storage problems the latter can help to solve. At first sight, machine learning and non-volatile memories seem very far from each other. Machine learning implies mathematics, algorithms and a lot of computation; non-volatile memories are solid-state devices used to store information, having the amazing capability of retaining the information even without a power supply. This book will help the reader understand how these two worlds can work together, bringing a lot of value to each other. In particular, the book covers two main fields of application: analog neural networks (NNs) and solid-state drives (SSDs). After reviewing the basics of machine learning in Chapter 1, Chapter 2 shows how neural networks can mimic the human brain; to accomplish this result, neural networks have to perform a specific computation called vector-by-matrix (VbM) multiplication, which is particularly power hungry. In the digital domain, VbM is implemented by means of logic gates, which dictate both the area occupation and the power consumption; the combination of the two poses serious challenges to hardware scalability, thus limiting the size of the neural network itself, especially in terms of the number of processable inputs and outputs. Non-volatile memories (phase change memories in Chapter 3, resistive memories in Chapter 4, and 3D flash memories in Chapters 5 and 6) enable the analog implementation of the VbM (also called "neuromorphic architecture"), which can easily beat the equivalent digital implementation in terms of both speed and energy consumption. SSDs and flash memories are tightly coupled; as 3D flash scales, there is a significant amount of work that has to be done in order to optimize the overall performance of SSDs. Machine learning has emerged as a viable solution in many stages of this process. After introducing the main flash reliability issues, Chapter 7 shows both supervised and unsupervised machine learning techniques that can be applied to NAND. In addition, Chapter 7 deals with algorithms and techniques for proactive reliability management of SSDs. Finally, the last section of Chapter 7 discusses the next challenge for machine learning in the context of so-called computational storage. There is no doubt that machine learning and non-volatile memories can help each other, but we are just at the beginning of the journey; this book helps researchers understand the basics of each field through real application examples, hopefully providing a good starting point for the next level of development.
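As a rough, hedged illustration of the vector-by-matrix (VbM) multiplication described above (the sizes and values in the Python sketch are arbitrary and not taken from the book), a single neural-network layer reduces to a VbM product followed by a nonlinearity; in a neuromorphic memory array the same product would be computed in the analog domain by summing cell currents on the bit lines.

```python
# Minimal sketch of the vector-by-matrix (VbM) multiplication at the heart of
# a neural-network layer (sizes and values are illustrative only).
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(64)          # input activation vector (64 inputs)
W = rng.standard_normal((64, 16))    # weight matrix mapping 64 inputs to 16 outputs

# Digital implementation: 64 x 16 = 1024 explicit multiply-accumulate operations.
y = x @ W

# In a non-volatile memory array, each weight would be stored as a cell
# conductance; driving the inputs as voltages and summing the resulting
# bit-line currents yields the same product in a single analog step.
out = np.maximum(y, 0.0)             # simple ReLU nonlinearity
print(out.shape)                     # (16,)
```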
This hands-on, laboratory-driven textbook helps readers understand the principles of digital signal processing (DSP) and the basics of software-based digital communication, particularly software-defined networks (SDN) and software-defined radio (SDR). Only the most important concepts are presented. Each chapter is an introduction to a computer laboratory and is accompanied by complete laboratory exercises and ready-to-go Matlab programs with figures and comments (available on the book webpage and also running in GNU Octave 5.2 with free software packages), showing all or most details of the relevant algorithms. Students are tasked to understand the programs, modify them, and apply the presented concepts to recorded real RF signals or simulated received signals with modelled transmission conditions and hardware imperfections. Teaching is done by showing examples and their modifications for different real-world telecommunication-like applications. The book consists of three parts: an introduction to DSP (spectral analysis and digital filtering), an introduction to advanced DSP topics (multi-rate, adaptive, model-based and multimedia - speech, audio, video - signal analysis and processing), and an introduction to software-defined modern telecommunication systems (SDR technology, analog and digital modulations, single- and multi-carrier systems, channel estimation and correction, as well as synchronization issues). Many real signals are processed in the book: in the first part, mainly speech and audio; in the second part, mainly RF recordings taken from an RTL-SDR USB stick and an ADALM-PLUTO module, for example captured IQ data of a VOR avionics signal, classical FM radio with RDS, digital DAB/DAB+ radio and 4G-LTE digital telephony. Additionally, modelling and simulation of some transmission scenarios, in particular TETRA, ADSL and 5G signals, are tested in software. Provides an introduction to digital signal processing and software-based digital communication; presents a transition from digital signal processing to software-defined telecommunication; features a suite of pedagogical materials including a laboratory test-bed and computer exercises/experiments.
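The book's laboratory programs are written for Matlab/GNU Octave; purely as an illustrative sketch under different assumptions (Python/NumPy, and a synthetic two-tone signal rather than the book's recordings), a basic spectral-analysis exercise of the kind described above might look as follows.

```python
# Illustrative spectral analysis of a synthetic two-tone signal in Python/NumPy
# (the book's own exercises use Matlab / GNU Octave).
import numpy as np

fs = 8000                                    # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)              # 1 second of samples
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

X = np.fft.rfft(x)                           # spectrum of the real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis in Hz

# Both tones fall exactly on FFT bins here, so the two largest magnitudes
# sit at 440 Hz and 1000 Hz.
peaks = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(peaks))                         # [440.0, 1000.0]
```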
This book describes the most frequently used high-speed serial buses in embedded systems, especially those used by FPGAs. These buses employ SerDes, JESD204, SRIO, PCIE, Aurora and SATA protocols for chip-to-chip and board-to-board communication, and CPCIE, VPX, FC and Infiniband protocols for inter-chassis communication. For each type, the book provides the bus history and version info, while also assessing its advantages and limitations. Furthermore, it offers a detailed guide to implementing these buses in FPGA design, from the physical layer and link synchronization to the frame format and application command. Given its scope, the book offers a valuable resource for researchers, R&D engineers and graduate students in computer science or electronics who wish to learn the protocol principles, structures and applications of high-speed serial buses.
This volume offers state-of-the-art research in service science and its related research, education and practice areas. It showcases recent developments in smart service systems, operations management and analytics, and their impact on complex service systems. The papers included in this volume highlight emerging technology and applications in fields including healthcare, energy, finance, information technology, transportation, sports, logistics, and public services. Regardless of size and service, a service organization is a service system. Because of the socio-technical nature of a service system, a systems approach must be adopted to design, develop, and deliver services aimed at meeting end users' utilitarian as well as socio-psychological needs. Effective understanding of service and service systems often requires combining multiple methods to consider how interactions of people, technology, organizations, and information create value under various conditions. The papers in this volume present methods to approach such technical challenges in service science and are based on top papers from the 2019 INFORMS International Conference on Service Science.
This guide to wireless LAN systems describes the current technologies, spells out the pros and cons of each, and offers implementation insights that can save your company valuable installation time, money, and effort. It provides in-depth analyses of all aspects of indoor LAN wireless transmission techniques via RF and infrared waves.
Benefits realization management (BRM) is a key part of governance, because it supports the strategic creation of value and provides the correct level of prioritization and executive support to the correct initiatives. Because of its relevance to the governance process, BRM has a strong influence over project success and is a link between strategic planning and strategy execution. This book guides portfolio, program, and project managers through the process of benefits realization management so they can maximize business value. It discusses why and how programs and projects are expected to enable value creation, and it explains the role of BRM in value creation. The book provides a flexible framework for: translating business strategy drivers into expected benefits and explaining the subsequent composition of a program and project portfolio that can realize the expected benefits; planning the benefits realization expected from programs and projects and then making it happen; keeping programs and projects on track; and reviewing and evaluating the benefits achieved or expected against the original baselines and the current expectations. To help project, program, and portfolio managers on their BRM journey, as well as to support business managers in executing business strategies, the book identifies key organizational responsibilities and roles involved in BRM practices, and it provides a simple reference that can be mapped against any organizational structure. A detailed and comprehensive case study illustrates each phase of the BRM framework as it links business strategy to project work, benefits, and business value. Each chapter ends with a series of questions that provide a BRM self-assessment. The book concludes with a set of templates and detailed instructions to ensure successful deployment of BRM.
Big data analytics (BDA) can be an important tool for public sector organizations, given that many analytic techniques within the big data world have been created specifically to deal with complexity and rapidly changing conditions. The important task for public sector organizations is to liberate analytics from narrow scientific silos and expand it internally to reap maximum benefit across their portfolios of programs. This book highlights contextual factors important to better situating the use of BDA within government organizations and demonstrates the wide range of applications of different BDA techniques. It emphasizes the importance of leadership and organizational practices that can improve performance. It explains that BDA initiatives should not be bolted on but should be integrated into the organization's performance management processes. Equally important, the book includes chapters that demonstrate the diversity of factors that need to be managed to launch and sustain BDA initiatives in public sector organizations.
Prepare for Microsoft Exam AZ-305 and help demonstrate your real-world expertise in designing cloud and hybrid solutions that run on Microsoft Azure, including identity, governance, monitoring, data storage, business continuity, and infrastructure. Designed for modern IT professionals, this Exam Ref focuses on the critical thinking and decision-making acumen needed for success at the Microsoft Certified Expert level. Focus on the expertise measured by these objectives: design identity, governance, and monitoring solutions; design data storage solutions; design business continuity solutions; and design infrastructure solutions. This Microsoft Exam Ref organizes its coverage by exam objectives; features strategic, what-if scenarios to challenge you; and assumes you have advanced experience and knowledge of IT operations, as well as experience in Azure administration, Azure development, and DevOps processes. About the Exam: Exam AZ-305 focuses on the knowledge needed to design logging, monitoring, authentication, and authorization solutions; design governance, identities, and application access; design relational and non-relational data storage solutions; design data integration; recommend data storage solutions; design backup and disaster recovery solutions; design for high availability; and design compute and network solutions, application architecture, and migration. About Microsoft Certification: If you hold the Microsoft Certified: Azure Administrator Associate certification, passing this exam fulfills your requirements for the Microsoft Certified: Azure Solutions Architect Expert credential. Passing this exam demonstrates your expert-level skills in advising stakeholders and translating business requirements into designs for secure, scalable, and reliable Azure solutions, and in partnering with others to implement these solutions. See full details at: microsoft.com/learn
Data Communications and Networking, 6th Edition, teaches the principles of networking using the TCP/IP protocol suite. It employs a bottom-up approach in which each layer in the TCP/IP protocol suite is built on the services provided by the layer below. This edition has undergone a major restructuring to reduce the number of chapters and focus on the organization of the TCP/IP protocol suite. It concludes with three chapters that explore multimedia, network management, and cryptography/network security. Technologies related to data communications and networking are among the fastest growing in our culture today, and there is no better guide to this rapidly expanding field than Data Communications and Networking.
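As a hedged illustration of the layered, bottom-up view described above (not an example from the book; the host name and port are placeholders), the short Python sketch below shows an application-layer HTTP exchange relying on the reliable byte-stream service that TCP provides, which in turn rides on IP and the link layer beneath it.

```python
# Minimal illustration of layering: the application speaks HTTP, while the
# socket API hands reliable delivery down to TCP, IP, and the link layer.
# The host name below is a placeholder for any reachable web server.
import socket

HOST, PORT = "example.com", 80          # placeholder application endpoint

with socket.create_connection((HOST, PORT), timeout=5) as sock:  # TCP service
    request = f"HEAD / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))        # application-layer message
    reply = sock.recv(1024)                      # bytes delivered reliably by TCP
    print(reply.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```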
This book constitutes the refereed proceedings of five International Workshops held as parallel events of the 18th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2022, virtually and in Hersonissos, Crete, Greece, in June 2022: the 11th Mining Humanistic Data Workshop (MHDW 2022); the 7th 5G-Putting Intelligence to the Network Edge Workshop (5G-PINE 2022); the 1st workshop on AI in Energy, Building and Micro-Grids (AIBMG 2022); the 1st Workshop/Special Session on Machine Learning and Big Data in Health Care (ML@HC 2022); and the 2nd Workshop on Artificial Intelligence in Biomedical Engineering and Informatics (AIBEI 2022). The 35 full papers presented at these workshops were carefully reviewed and selected from 74 submissions.
5G Physical Layer: Principles, Models and Technology Components explains fundamental physical layer design principles, models and components for the 5G new radio access technology - 5G New Radio (NR). The physical layer models include radio wave propagation and hardware impairments for the full range of frequencies considered for the 5G NR (up to 100 GHz). The physical layer technologies include flexible multi-carrier waveforms, advanced multi-antenna solutions, and channel coding schemes for a wide range of services, deployments, and frequencies envisioned for 5G and beyond. A MATLAB-based link level simulator is included to explore various design options. 5G Physical Layer is very suitable for wireless system designers and researchers: basic understanding of communication theory and signal processing is assumed, but familiarity with 4G and 5G standards is not required. With this book the reader will learn: The fundamentals of the 5G NR physical layer (waveform, modulation, numerology, channel codes, and multi-antenna schemes). Why certain PHY technologies have been adopted for the 5G NR. The fundamental physical limitations imposed by radio wave propagation and hardware impairments. How the fundamental 5G NR physical layer functionalities (e.g., parameters/methods/schemes) should be realized. The content includes: A global view of 5G development - concept, standardization, spectrum allocation, use cases and requirements, trials, and future commercial deployments. The fundamentals behind the 5G NR physical layer specification in 3GPP. Radio wave propagation and channel modeling for 5G and beyond. Modeling of hardware impairments for future base stations and devices. Flexible multi-carrier waveforms, multi-antenna solutions, and channel coding schemes for 5G and beyond. A simulator including hardware impairments, radio propagation, and various waveforms. Ali Zaidi is a strategic product manager at Ericsson, Sweden. Fredrik Athley is a senior researcher at Ericsson, Sweden. Jonas Medbo and Ulf Gustavsson are senior specialists at Ericsson, Sweden. Xiaoming Chen is a professor at Xi'an Jiaotong University, China. Giuseppe Durisi is a professor at Chalmers University of Technology, Sweden, and a guest researcher at Ericsson, Sweden.
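The book includes a MATLAB-based link-level simulator; as a language-neutral illustration only (the Python/NumPy sketch below uses arbitrary parameter values, not the 5G NR numerology), one multi-carrier (OFDM-style) symbol can be built by mapping QPSK symbols onto subcarriers, taking an IFFT, and prepending a cyclic prefix.

```python
# Illustrative construction of a single OFDM-style multi-carrier symbol
# (parameter values are arbitrary, not taken from the 5G NR specification).
import numpy as np

rng = np.random.default_rng(1)

n_subcarriers = 64
cp_len = 16                                   # cyclic prefix length in samples

# QPSK-modulate random bits onto the subcarriers.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# IFFT maps the frequency-domain symbols to a time-domain waveform.
time_symbol = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)

# Prepend the cyclic prefix to absorb multipath delay spread.
ofdm_symbol = np.concatenate([time_symbol[-cp_len:], time_symbol])
print(ofdm_symbol.shape)                      # (80,)
```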
The high failure rate of enterprise resource planning (ERP) projects is a pressing concern for both academic researchers and industrial practitioners. The challenges of an ERP implementation are particularly high when the project involves designing and developing a system from scratch. Organizations often turn to vendors and consultants to handle such projects, but every aspect of an ERP project is opaque to both customers and vendors. Unlocking the mysteries of building a large-scale ERP system, The Adventurous and Practical Journey to a Large-Scale Enterprise Solution tells the story of implementing an applied enterprise solution. The book covers the field of enterprise resource planning by examining state-of-the-art concepts in software project management methodology, design and development integration policy, and deployment framework, including: a hybrid project management methodology using waterfall as well as a customized Scrum-based approach; a novel multi-tiered software architecture featuring an enhanced flowable process engine; a unique platform for coding business processes efficiently; integration to embed ERP modules in physical devices; and a heuristic-based framework to successfully step into the Go-live period. Written to help ERP project professionals, the book charts the path that they should travel from project ideation to systems implementation. It presents a detailed, real-life case study of implementing a large-scale ERP and uses storytelling to demonstrate incorrect and correct decisions frequently made by vendors and customers. Filled with practical lessons learned, the book explains the ins and outs of adopting project methodologies. It weaves a tale that features both real-world and scholarly aspects of an ERP implementation.
Whether the source is more industry-based or academic research, there certainly appears to be a growing interest in the field of cryptocurrency. The New York Times ran a cover story on March 24, 2022, titled "Time to Enter the Crypto Zone?," which discussed institutional investors pouring billions into digital tokens, salaries being taken in Bitcoin, and even Bitcoin ATMs in grocery stores. Certainly, there have been ups and downs in crypto, but it has a kind of alluring presence that tempts one to include crypto as part of one’s portfolio. "Prime crypto-curious" investors are usually familiar with tech/pop culture and feel they want to diversify a bit in this fast-moving market. Even universities are beginning to offer more courses and create "Centers on Cryptocurrency." Some universities even require students who take a crypto course to pay the course tuition via cryptocurrency. In response to the growing interest in and fascination with the crypto industry and cryptocurrency in general, Cryptocurrency Concepts, Technology, and Applications brings together many leading worldwide contributors to discuss a broad range of issues associated with cryptocurrency. The book covers a wide array of crypto-related topics, including: blockchain; NFTs; data analytics and AI; crypto crime; the crypto industry and regulation; crypto and public choice; consumer confidence; and Bitcoin and other cryptocurrencies. Presenting various viewpoints on where the crypto industry is heading, this timely book points out both the advantages and limitations of this emerging field. It is an easy-to-read, yet comprehensive, overview of cryptocurrency in the U.S. and international markets.
Building a data-driven organization (DDO) is an enterprise-wide initiative that may consume and lock up resources for the long term. Understandably, any organization considering such an initiative would insist on a roadmap and business case being prepared and evaluated prior to approval. This book presents a step-by-step methodology for creating a roadmap and business case, and provides a narration of the constraints and experiences of managers who have attempted the setting up of DDOs. The emphasis is on the big decisions - the key decisions that influence 90% of business outcomes - starting with the decision first and re-engineering the data-to-decisions process chain and data governance, so as to ensure the right data are available at the right time, every time. Investing in artificial intelligence and data-driven decision making is now considered a survival necessity for organizations to stay competitive. While every enterprise aspires to become 100% data-driven and every Chief Information Officer (CIO) has a budget, Gartner estimates over 80% of all analytics projects fail to deliver intended value. Most CIOs think a data-driven organization is a distant dream, especially while they are still struggling to explain the value from analytics. They know a few isolated successes, or a one-time leveraging of big data for decision making, do not make an organization data-driven. As of now, there is no precise definition of a data-driven organization or of what qualifies an organization to call itself data-driven. Given the hype in the market for big data, analytics and AI, every CIO has a budget for analytics, but very little clarity on where to begin or how to choose and prioritize analytics projects. Most end up investing in a visualization platform like Tableau or QlikView, which in essence is an improved version of the BI dashboard the organization invested in not too long ago. The most important stakeholders, the decision-makers, are rarely kept in the loop when choosing analytics projects. This book provides a fail-safe methodology for assured success in deriving intended value from investments in analytics. It is a practitioners' handbook for creating a step-by-step transformational roadmap that prioritizes the big data for the big decisions - the 10% of decisions that influence 90% of business outcomes - and delivers material improvements in the quality of decisions, as well as measurable value from analytics investments. The acid test for a data-driven organization is when all the big decisions, especially top-level strategic decisions, are taken based on data and not on the collective gut feeling of the decision makers in the organization.