This book is a step-by-step tutorial filled with practical examples that show readers how to configure and manage IaaS and DBaaS with Oracle Enterprise Manager. If you are a cloud administrator or a user of the self-service provisioning systems offered by Enterprise Manager, this book is ideal for you. It will also help administrators who want to understand the chargeback mechanism offered by Enterprise Manager. An understanding of the basic building blocks of cloud computing, such as networking, virtualization, and storage, is needed by those interested in this book.
This book focuses on teaching you by example. It walks you through every aspect of Pentaho Data Integration, giving systematic instructions in a friendly style that lets you learn in front of your computer while playing with the tool. The extensive use of drawings and screenshots makes the process of learning Pentaho Data Integration easy. Throughout the book, numerous tips and helpful hints are provided that you will not find anywhere else. This book is a must-have for software developers, database administrators, IT students, and everyone involved or interested in developing ETL solutions or, more generally, in doing any kind of data manipulation. Those who have never used Pentaho Data Integration will benefit most from the book, but those who have will also find it useful. This book is also a good starting point for database administrators, data warehouse designers, architects, or anyone who is responsible for data warehouse projects and needs to load data into them.
This book is a step-by-step tutorial filled with practical examples that show you how to build and manage a Hadoop cluster, along with its intricacies. It is ideal for database administrators, data engineers, and system administrators, and it will serve as an invaluable reference if you are planning to use the Hadoop platform in your organization. Basic Linux skills are expected, since all the examples in this book use that operating system. Access to test hardware or virtual machines is also useful for following the examples in the book.
It is in the context of maturity that one needs to read the book Master Data Management and Enterprise Engineering by Dr. M. Naoulo. The world of technology has several related yet distinct approaches: Martin's and Finkelstein's information engineering, Ted Codd's relational technology, Inmon's data warehouse, and others. This book takes these approaches and others and does two important things: it blends them together, and it turns those bodies of thought into an engineering approach. As such, this book is another important step in the maturation of computer science. I recommend it to any serious student of computer science or to any serious practitioner. W.H. (Bill) Inmon, May 21, 2012.
The book establishes the fundamentals of the design, modeling, architecture, and management of Master, Transactional, and Process Data, and the principles of Enterprise Engineering. It comprises innovative techniques and an elegant approach to design that address:
> Master Data Management, through grouping and classifying the Master Data in an innovative way that can be implemented across the Enterprise Data Architecture: the Central Data Repository and the Business and Enterprise Intelligence (BI & EI) Data Marts. This data classification is based on the questions Why, How, Who, What, Where, and When.
> Separating the Master Data, Transactional Data, and Process Data. This separation makes MDM, BI, and EI easier to deal with, and it enormously facilitates the mapping and propagation between the Central Data Repository and the Business and Enterprise Data Marts.
> The basics and techniques of the design of the Enterprise Engineering Model. This model supports the Transactional Systems, Business Intelligence, Business Process Management, Enterprise Intelligence, and Enterprise Engineering.
> Synchronization and integration of Master, Transactional, and Process Data across the enterprise: the Legacy Systems, the Central Data Repository, and the Business and Intelligence Data Marts.
> Assessing the different Enterprise Data Architectures.
The first part encompasses the Enterprise Data Framework and its new modeling techniques. The Enterprise Data Framework depicts the Data Architecture across the enterprise, covering the integration and consolidation of the data of the Legacy Systems in a Central Data Repository and the propagation of this data into the Business and Enterprise Intelligence Data Marts. The Enterprise Engineering Model presents a clear and concise illustration of the operational aspects of the enterprise and their relations to the enterprise's needs. It includes:
> Master Data representing the main objects of an enterprise,
> Transactional Data detailing the results of transactions occurring in an enterprise, and
> Process Data capturing the data pertinent to the activities of the functioning of an enterprise.
The second part encompasses the Enterprise Engineering Framework, Methodology, Guidelines, Deliverables, and Techniques. It provides the blueprint of the functioning of enterprises. This part details the basics of Enterprise Engineering and its implementation through the processing of the Enterprise Engineering Model. It provides the cost data (material cost, labor cost, time) reflecting the functioning of the enterprise and points out the efficiency, performance, strengths, and weaknesses of its operation. Detailed case studies are presented supporting the theoretical aspects of Enterprise Engineering. These case studies provide clear and practical hands-on exercises reflecting the functioning of enterprises and illustrating the implementation of Enterprise Engineering.
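As an illustrative aside (not taken from the book, and with invented entity names): the Why/How/Who/What/Where/When classification described above can be pictured as a simple lookup from a master-data entity to its interrogative class.

```python
# Hypothetical sketch of classifying master-data entities by the six
# interrogative questions (Why, How, Who, What, Where, When).
# The entity names below are invented for illustration only.
MASTER_DATA_CLASSES = {
    "Customer": "Who",
    "Product": "What",
    "Warehouse": "Where",
    "FiscalPeriod": "When",
    "Procedure": "How",
    "BusinessGoal": "Why",
}

def classify(entity):
    """Return the interrogative class for a master-data entity,
    or None if the entity is not registered."""
    return MASTER_DATA_CLASSES.get(entity)

print(classify("Customer"))  # Who
```

In a real implementation, this mapping would be maintained as metadata in the Central Data Repository rather than hard-coded.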
This easy-to-understand tutorial covers Oracle Warehouse Builder from the ground up and taps into the author's wide experience as a software and database engineer. Written in a relaxed style with step-by-step explanations, it provides lots of screenshots throughout, along with numerous tips and helpful hints not found in the original documentation. By following this book, you can use Oracle Warehouse Builder in the best possible way and maximize your learning potential. This book is an update of Oracle Warehouse Builder 11g: Getting Started. It is a good starting point for database engineers, administrators, and architects who are responsible for data warehouse projects and need to design them and load data into them. If you want to learn Oracle Warehouse Builder and expand your knowledge of the tool and of data warehousing, this is an ideal book for you. No prior data warehouse or database experience is presumed; all new database and data warehouse technical terms and concepts are explained in clear, easy-to-understand language.
Are you struggling with a disparate data resource? Are there
multiple existences of the same business fact scattered throughout
the data resource? Are those multiple existences out of sync with
each other? Do you have difficulty finding the data you need to
support business activities? Do the data you find have poor
quality? If the answer to any of these questions is Yes, then you
need this book to guide you toward creating an integrated data
resource. Most public and private sector organisations have a
disparate data resource that was created over many years. That
disparate data resource contains multiple existences of business
facts that are out of sync with each other, are of poor quality,
and are difficult to locate. The traditional approach to dealing
with a disparate data resource is to perform periodic and temporary
data integration to support a specific application or business
activity. Those piecemeal data integration efforts may meet a
current need, but seldom solve the underlying problems with a
disparate data resource, and sometimes make the situation worse.
This book explains how to go about understanding and resolving a
disparate data resource and creating a comparate data resource that
fully meets an organisation's current and future business
information demand. It builds on "Data Resource Simplexity", which
described how to stop the burgeoning data disparity. It explains
the concepts, principles, and techniques for understanding a
disparate data resource within the context of a common data
architecture, and resolving that disparity with minimum impact on
the business. As in "Data Resource Simplexity", Michael Brackett draws on five decades of data management experience building and managing data resources and resolving disparate data resources in both public and private sector organisations. He leverages
theories, concepts, principles, and techniques from a wide variety
of disciplines, such as human dynamics, mathematics, physics,
chemistry, and biology, and applies them to the process of
understanding and resolving a disparate data resource. He shows you
how to approach and resolve a disparate data resource, and build a
comparate data resource that fully supports the business.
This book has step-by-step instructions to solve data manipulation
problems using PDI in the form of recipes. It has plenty of
well-organized tips, screenshots, tables, and examples to aid quick
and easy understanding. If you are a software developer or anyone involved or interested in developing ETL solutions or, more generally, in doing any kind of data manipulation, this book is for you. It does
not cover PDI basics, SQL basics, or database concepts. You are
expected to have a basic understanding of the PDI tool, SQL
language, and databases.
Business intelligence is a huge segment of the software world.
Gartner Group estimates that sales in this area surpassed $10
billion in 2010, with Oracle as the second largest vendor in the
category. At the heart of these analytically oriented applications are dimensional data models, with OLAP as a critical component for achieving high performance.
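As an illustrative aside (not from the book, with invented table data): a dimensional model organizes measures around dimensions, and the core OLAP operation is rolling a measure up along those dimensions. A minimal sketch in Python:

```python
from collections import defaultdict

# Hypothetical fact table: each row joins a measure (sales) to
# dimension keys (region, quarter), as in a star schema.
fact_sales = [
    {"region": "EMEA", "quarter": "Q1", "sales": 120},
    {"region": "EMEA", "quarter": "Q2", "sales": 80},
    {"region": "APAC", "quarter": "Q1", "sales": 200},
    {"region": "APAC", "quarter": "Q1", "sales": 50},
]

def rollup(rows, dims):
    """Aggregate the 'sales' measure along the given dimension keys,
    the basic operation behind an OLAP rollup."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row["sales"]
    return dict(totals)

# Roll up to region level (collapsing the quarter dimension).
by_region = rollup(fact_sales, ["region"])
print(by_region)  # {('EMEA',): 200, ('APAC',): 250}
```

OLAP engines such as Oracle's precompute and index such aggregations across many dimensions, which is where the performance advantage comes from.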
If you want to do development work in this area or understand
how to maximize its value, read this book. A professional in the
world of analytics and business intelligence needs an understanding
of OLAP's specific data modeling principles, its analysis
capabilities, and its relationship to other analytical approaches.
All are presented here in a systematic fashion written in an
easy-to-follow style.
"The Multidimensional Data Modeling Toolkit" takes you on an
instructional journey into the world of OLAP. It provides a
comprehensive examination of data modeling and analytical
techniques for the native multi-dimensional information storage
framework that Oracle OLAP provides. You will get an in-depth look
at OLAP's analytical possibilities as well as comparison with the
approaches used in data mining and statistics. You will learn the
design issues and get step-by-step programming instructions for
solving real-world problems. Written by an expert with over 15 years' experience using Oracle OLAP and its predecessors, the book explains critical techniques rarely taught in university or technical training programs.
"The Multidimensional Data Modeling Toolkit" takes you under the
covers and shows you what happens inside of Oracle's Analytic
Workspaces where the multidimensional magic occurs. Programming
instruction is based on the Oracle 10g database, but most of the
statements shown will work with other editions of the database,
such as Oracle 9i and 11g, and even earlier editions of the
technology found in stand-alone products such as Oracle Financial Analyzer and Oracle Sales Analyzer. The data analysis principles presented are universal and can be applied to any application that uses OLAP, for example OBIEE, Essbase, Cognos, Business Objects, and Microsoft Analysis Services (SSAS). Whether you are new to business intelligence or a seasoned practitioner, you should find that The Multidimensional Data Modeling Toolkit has plenty of valuable insights to offer.
According to leading analysts, Business Intelligence (BI) has been
a top priority for worldwide organizations within the past five
years, and it will continue to be a priority in the near future.
With a global user base of millions, Cognos is known as a leading
provider of Business Intelligence tools. Cognos' latest release is
Cognos 8 BI, a powerful suite of modules that share one common
infrastructure for the creation, management and deployment of
queries, reports, analyses, scorecards, dashboards and alerts--all
designed to support an organization's Business Intelligence
objectives. As a consultant with over 10 years of Cognos BI
experience, Juan A. Padilla is an expert in helping organizations
develop and implement Business Intelligence solutions. As the first
in a series of books on Cognos products, with Cognos 8 BI for
Consumers, Padilla provides a step-by-step introductory guide for
the key component of the Cognos infrastructure: Cognos Connection.
This book walks the reader through the fundamentals of Cognos
Connection, the powerful web portal that is the foundation for all
users of Cognos 8 BI, from end users to administrators. The guide
relies heavily on screen images that demonstrate product workflow
and available features; even readers who do not have "live" access to Cognos software can learn about it. This guide has been designed for: - companies evaluating Cognos as a potential BI solution that need to know its capabilities before purchasing; - those planning to use Cognos products professionally, e.g. job-seekers or consultants; - organizations using previous versions of Cognos that need to evaluate the latest version before upgrading or migrating; and - novice "hands-on" users of Cognos, for whom this guide will be an "anytime, anywhere" tutorial and reference source. In this
book, you will discover: - how Cognos Connection functions; - how
to work with reports, including scheduling, setting parameters and
changing formats; - how to customize the look and feel of the
interface to your preferences; - a simulation of Cognos' security
features; and - report samples that show the powerful reporting
capabilities of Cognos 8 BI.
Data Warehousing 101: Concepts and Implementation will appeal to those planning data warehouse projects, senior executives, project managers, and project implementation team members. It will also be useful to functional managers, business analysts, developers, power users, and end users. The book can serve as a textbook in an introductory data warehouse course or as a supplemental text in IT courses that cover the subject of data warehousing. It reviews the evolution of data warehousing and its growth drivers, process and architecture, data warehouse characteristics and design, data marts, multi-dimensionality, and OLAP. It also shows how to plan a data warehouse project and how to build and operate data warehouses. Finally, it covers common failure causes in depth and provides useful guidelines and tips for avoiding common mistakes.
Data warehousing has been a central topic in many industries for several years. However, the initial euphoria obscured the fact that proven methods and process models for practical implementation were lacking. This book is a contribution toward closing this gap between aspiration and reality. In its first part, it gives an overview of current results in the field of data warehousing, with a focus on methodological and business aspects. Among other topics, it includes contributions on cost-benefit analysis, the organizational embedding of data warehousing, data quality management, integrated metadata management, and data protection law, as well as a contribution on possible future directions for data warehousing. In the second part, leaders of large data warehousing projects report on their experiences and best practices.
Data Storage: Systems, Management and Security Issues begins with a
chapter comparing digital or electronic storage systems, such as
magnetic, optical, and flash, with biological data storage systems,
like DNA and human brain memory. In the main part of the chapter,
the following quantitative storage traits are discussed: data organisation, functionality, data density, capacity, power consumption, redundancy, integrity, access time, and data transfer rate. Afterwards, various facets of data warehouses, as well as the necessity of security measures, are reviewed. Because the significance of security tools is greater than ever before, the pertinent strategies and economics are discussed. The final chapter supplements this by discussing the reliability and confidentiality of media and storage systems in order to make a broader case for storage security. Confidentiality, integrity, and availability
are three aspects of security identified as ones that should be
preserved during data transmission, processing and storage.