The new edition of the classic bestseller that launched the data warehousing industry covers new approaches and technologies, many of which have been pioneered by Inmon himself. In addition to explaining the fundamentals of data warehouse systems, the book covers new topics such as methods for handling unstructured data in a data warehouse and storing data across multiple storage media, and discusses the pros and cons of relational versus multidimensional design and how to measure return on investment in planning data warehouse projects. It covers advanced topics, including data monitoring and testing. Although the book includes an extra 100 pages' worth of valuable content, the price has actually been reduced from $65 to $55.
Multi-Modal User Interactions in Controlled Environments investigates the capture and analysis of users' multimodal behavior (mainly eye gaze, eye fixation, eye blink, and body movements) within a real controlled environment (controlled supermarket, personal environment) in order to adapt the response of the computer/environment to the user. Such data is captured using non-intrusive sensors (for example, cameras in the stands of a supermarket) installed in the environment. This multimodal, video-based behavioral data is analyzed to infer user intentions while assisting users in their day-to-day tasks, adapting the system's response to their requirements seamlessly. This book also focuses on the presentation of information to the user. Multi-Modal User Interactions in Controlled Environments is designed for professionals in industry, including professionals in the domains of security and interactive web television. This book is also suitable for graduate-level students in computer science and electrical engineering.
Best practices and invaluable advice from world-renowned data warehouse experts. In this book, leading data warehouse experts from the Kimball Group share best practices for using the upcoming "Business Intelligence release" of SQL Server, referred to as SQL Server 2008 R2. In this new edition, the authors explain how SQL Server 2008 R2 provides a collection of powerful new tools that extend the power of its BI toolset to Excel and SharePoint users, and they show how to use SQL Server to build a successful data warehouse that supports the business intelligence requirements that are common to most organizations. Covering the complete suite of data warehousing and BI tools that are part of SQL Server 2008 R2, as well as Microsoft Office, the authors walk you through a full project lifecycle, including design, development, deployment, and maintenance.
- Features more than 50 percent new and revised material that covers the rich new feature set of the SQL Server 2008 R2 release, as well as the Office 2010 release
- Includes brand-new content that focuses on PowerPivot for Excel and SharePoint and Master Data Services, and discusses updated capabilities of SQL Server Analysis, Integration, and Reporting Services
- Shares detailed case examples that clearly illustrate how to best apply the techniques described in the book
- The accompanying Web site contains all code samples as well as the sample database used throughout the case studies
"The Microsoft Data Warehouse Toolkit, Second Edition" provides you with the knowledge of how and when to use BI tools such as Analysis Services and Integration Services to accomplish your most essential data warehousing tasks.
Discover how graph databases can help you manage and query highly connected data. With this practical book, you'll learn how to design and implement a graph database that brings the power of graphs to bear on a broad range of problem domains. Whether you want to speed up your response to user queries or build a database that can adapt as your business evolves, this book shows you how to apply the schema-free graph model to real-world problems. This second edition includes new code samples and diagrams, using the latest Neo4j syntax, as well as information on new functionality. Learn how different organizations are using graph databases to outperform their competitors. With this book's data modeling, query, and code examples, you'll quickly be able to implement your own solution.
- Model data with the Cypher query language and property graph model
- Learn best practices and common pitfalls when modeling with graphs
- Plan and implement a graph database solution in test-driven fashion
- Explore real-world examples to learn how and why organizations use a graph database
- Understand common patterns and components of graph database architecture
- Use analytical techniques and algorithms to mine graph database information
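The property graph model the blurb refers to can be sketched in a few lines of plain Python. The `Node` and `PropertyGraph` classes and the sample data below are hypothetical, for illustration only; a real graph database such as Neo4j is queried in Cypher, as the book describes.

```python
from collections import defaultdict

# A node in a property graph: a label plus arbitrary key/value properties.
class Node:
    def __init__(self, label, **props):
        self.label = label
        self.props = props

# A minimal in-memory property graph: labeled nodes connected by
# typed, directed relationships (this is the "property graph model").
class PropertyGraph:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(rel_type, node), ...]

    def relate(self, a, rel_type, b):
        self.edges[a].append((rel_type, b))

    def neighbours(self, node, rel_type):
        return [n for (t, n) in self.edges[node] if t == rel_type]

alice = Node("Person", name="Alice")
bob = Node("Person", name="Bob")
book = Node("Book", title="Graph Databases")

g = PropertyGraph()
g.relate(alice, "KNOWS", bob)
g.relate(alice, "READ", book)
g.relate(bob, "READ", book)

# Who has read the book? In Cypher this would be roughly:
#   MATCH (p:Person)-[:READ]->(:Book {title: "Graph Databases"})
#   RETURN p.name
# Here we scan the edge lists directly.
readers = sorted(
    n.props["name"]
    for n in [alice, bob]
    if book in g.neighbours(n, "READ")
)
print(readers)  # ['Alice', 'Bob']
```

A production graph store adds indexing, persistence, and a query planner on top of exactly this shape of data, which is why graph traversals stay fast even as the dataset grows.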
Organization of data warehouses is a vital, but often neglected, aspect of growing an enterprise. Unlike most books on the subject that focus on either the technical aspects of building data warehouses or on business strategies, this valuable reference synthesizes technological know-how with managerial best practices to show how improved alignment between data warehouse plans and business strategies can lead to successful data warehouse adoption capable of supporting an enterprise's entire infrastructure. Strategic Data Warehousing: Achieving Alignment with Business provides data warehouse developers, business managers, and IT professionals and administrators with an integrated approach to achieving successful and sustainable alignment of data warehouses and business goals. More complete than any other text in the field, this comprehensive reference details the joint roles and responsibilities of the data warehouse and business managers in achieving strategic alignment, business user satisfaction, technical integration, and improved flexibility, complete with case studies that depict real-world scenarios.
Achieving sustainable alignment between the data warehouse and business strategies is a continuous process. Armed with this valuable reference, readers will be able to gain the solid understanding of the organizational, technical, data, and user factors needed to promote a successful data warehouse adoption and become active partners in leveraging this powerful, but often overlooked, information resource.
Cutting-edge content and guidance from a data warehousing expert--now expanded to reflect field trends Data warehousing has revolutionized the way businesses in a wide variety of industries perform analysis and make strategic decisions. Since the first edition of "Data Warehousing Fundamentals," numerous enterprises have implemented data warehouse systems and reaped enormous benefits. Many more are in the process of doing so. Now, this new, revised edition covers the essential fundamentals of data warehousing and business intelligence as well as significant recent trends in the field. The author provides an enhanced, comprehensive overview of data warehousing together with in-depth explanations of critical issues in planning, design, deployment, and ongoing maintenance. IT professionals eager to get into the field will gain a clear understanding of techniques for data extraction from source systems, data cleansing, data transformations, data warehouse architecture and infrastructure, and the various methods for information delivery. This practical "Second Edition" highlights the areas of data warehousing and business intelligence where high-impact technological progress has been made. Discussions on developments include data marts, real-time information delivery, data visualization, requirements gathering methods, multi-tier architecture, OLAP applications, Web clickstream analysis, data warehouse appliances, and data mining techniques. The book also contains review questions and exercises for each chapter, appropriate for self-study or classroom work, industry examples of real-world situations, and several appendices with valuable information. Specifically written for professionals responsible for designing, implementing, or maintaining data warehousing systems, "Data Warehousing Fundamentals" presents agile, thorough, and systematic development principles for the IT professional and anyone working or researching in information management.
Did you know that there is a technology inside Excel, and Power BI, that allows you to create magic in your data, avoid repetitive manual work, and save time and money? Using Excel and Power BI, you can: Save time by eliminating the pain of copying and pasting data into workbooks and then manually cleaning that data. Gain productivity by properly preparing data yourself, rather than relying on others to do it. Gain efficiency by reducing the time it takes to prepare data for analysis, and make informed decisions more quickly. With the data connectivity and transformative technology found in Excel and Power BI, users with basic Excel skills can import data and then easily reshape and cleanse that data, using simple intuitive user interfaces. Known as "Get & Transform" in Excel 2016, as the separate "Power Query" add-in in Excel 2013 and 2010, and included in Power BI, you'll use this technology to tackle common data challenges, resolving them with simple mouse clicks and lightweight formula editing. With your new data transformation skills acquired through this book, you will be able to create an automated transformation of virtually any type of data set to mine its hidden insights.
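The import-reshape-cleanse workflow described above is driven in Power Query through the Excel/Power BI UI and its M formula language. As a rough analogue using only the Python standard library (the raw data and cleaning steps below are invented for illustration), the same kind of transformation looks like this:

```python
import csv
import io

# Hypothetical raw export: inconsistent casing, stray whitespace, and
# text-formatted amounts -- the kind of data that otherwise gets cleaned
# by hand after copying and pasting into a workbook.
raw = """region, amount
 north,  1200
NORTH,300
South , 450
"""

cleaned = []
for row in csv.DictReader(io.StringIO(raw), skipinitialspace=True):
    cleaned.append({
        "region": row["region"].strip().title(),  # normalise casing
        "amount": int(row["amount"].strip()),     # text -> number
    })

# Aggregate per region, analogous to a "Group By" transformation step.
totals = {}
for row in cleaned:
    totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]

print(totals)  # {'North': 1500, 'South': 450}
```

The point of Power Query (and of scripted cleaning generally) is that these steps are recorded once and replayed automatically whenever the source data is refreshed, instead of being repeated by hand.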
Questions of privacy, borders, and nationhood are increasingly shaping the way we think about all things digital. Data Centers brings together essays and photographic documentation that analyze recent and ongoing developments. Taking Switzerland as an example, the book takes a look at the country's data centers, law firms, corporations, and government institutions that are involved in the creation, maintenance, and regulation of digital infrastructures. Beneath the official storyline (Switzerland's moderate climate, political stability, and relatively clean energy mix), the book uncovers a much more varied and sometimes contradictory set of narratives.
Develop the must-have skills required for any data scientist to get the best results from Azure Databricks.
Key Features
- Learn to develop and productionize ML pipelines using the Databricks Unified Analytics platform
- See how to use AutoML, Feature Stores, and MLOps with Databricks
- Get a complete understanding of data governance and model deployment
Book Description
In this book, you'll get to grips with Databricks, enabling you to power up your organization's data science applications. We'll walk through applying the Databricks AI and ML stack to real-world use cases for natural language processing, computer vision, time series data, and more. We'll dive deep into the complete model development life cycle for data ingestion and analysis, and get familiar with the latest offerings of AutoML, Feature Store, and MLStudio on the Databricks platform. You'll get hands-on experience implementing repeatable ML operations (MLOps) pipelines using MLflow, tracking model training and key metrics, and exploring real-time ML, anomaly detection, and streaming analytics with Delta Lake and Spark Structured Streaming. Starting with an overview of data science use cases across different organizations and industries, you will then be introduced to feature stores, feature tables, and how to access them. You will see why AutoML is important and how to create a baseline model with AutoML within Databricks. Utilizing the MLflow model registry to manage model versioning and transition to production will be covered, along with detecting and protecting against model drift in production environments. By the end of the book, you will know how to set up your Databricks ML development and deployment as a CI/CD pipeline.
What you will learn
- Perform natural language processing, computer vision, and more
- Explore AutoML, Feature Store, and MLStudio on Databricks
- Dive deep into the complete model development life cycle
- Experience implementing repeatable MLOps pipelines using MLflow
- Track model training and key metrics
- Explore real-time ML, anomaly detection, and streaming analytics
- Learn how to handle model drift
Who This Book Is For
This book focuses specifically on the tools catering to the Data Scientist persona. Readers who want to learn how to successfully build and deploy end-to-end Data Science projects using the Databricks cloud-agnostic unified analytics platform will benefit from this book, along with AI and Machine Learning practitioners.
This two-volume set constitutes the refereed proceedings of the 17th International Conference on Collaborative Computing: Networking, Applications, and Worksharing, CollaborateCom 2021, held in October 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 62 full papers and 7 short papers presented were carefully reviewed and selected from 206 submissions. The papers reflect the conference sessions as follows: Optimization for Collaborate System; Optimization based on Collaborative Computing; UVA and Traffic system; Recommendation System; Recommendation System & Network and Security; Network and Security; Network and Security & IoT and Social Networks; IoT and Social Networks & Images handling and human recognition; Images handling and human recognition & Edge Computing; Edge Computing; Edge Computing & Collaborative working; Collaborative working & Deep Learning and application; Deep Learning and application; Deep Learning and application; Deep Learning and application & UVA.
This book constitutes the refereed proceedings of the 27th International Symposium on String Processing and Information Retrieval, SPIRE 2021, held in Lille, France, in October 2021.*The 14 full papers and 4 short papers presented together with 2 invited papers in this volume were carefully reviewed and selected from 30 submissions. They cover topics such as: data structures; algorithms; information retrieval; compression; combinatorics on words; and computational biology. *The symposium was held virtually.
This contributed volume discusses essential topics and the fundamentals for Big Data Emergency Management and primarily focuses on the application of Big Data for Emergency Management. It walks the reader through the state of the art in different facets of the big disaster data field. This includes many elements that are important for these technologies to have real-world impact. This book brings together different computational techniques from machine learning, communication network analysis, natural language processing, knowledge graphs, data mining, and information visualization, aiming at methods that are typically used for processing big emergency data. This book also provides authoritative insights and highlights valuable lessons by distinguished authors, who are leaders in this field. Emergencies are severe, large-scale, non-routine events that disrupt the normal functioning of a community or a society, causing widespread and overwhelming losses and impacts. Emergency Management is the process of planning and taking actions to minimize the social and physical impact of emergencies and reduce the community's vulnerability to their consequences. Information exchange before, during, and after the disaster periods can greatly reduce the losses caused by the emergency. This allows people to make better use of the available resources, such as relief materials and medical supplies. It also provides a channel through which reports on casualties and losses in each affected area can be delivered expeditiously. Big Data-Driven Emergency Management refers to applying advanced data collection and analysis technologies to achieve more effective and responsive decision-making during emergencies. Researchers, engineers, and computer scientists working in Big Data Emergency Management, who need to deal with large and complex sets of data, will want to purchase this book.
Advanced-level students interested in data-driven emergency/crisis/disaster management will also want to purchase this book as a study guide.
This book provides readers the "big picture" and a comprehensive survey of the domain of big data processing systems. For the past decade, the Hadoop framework has dominated the world of big data processing, yet recently academia and industry have started to recognize its limitations in several application domains and thus, it is now gradually being replaced by a collection of engines that are dedicated to specific verticals (e.g. structured data, graph data, and streaming data). The book explores this new wave of systems, which it refers to as Big Data 2.0 processing systems. After Chapter 1 presents the general background of the big data phenomena, Chapter 2 provides an overview of various general-purpose big data processing systems that allow their users to develop various big data processing jobs for different application domains. In turn, Chapter 3 examines various systems that have been introduced to support the SQL flavor on top of the Hadoop infrastructure and provide competing and scalable performance in the processing of large-scale structured data. Chapter 4 discusses several systems that have been designed to tackle the problem of large-scale graph processing, while the main focus of Chapter 5 is on several systems that have been designed to provide scalable solutions for processing big data streams, and on other sets of systems that have been introduced to support the development of data pipelines between various types of big data processing jobs and systems. Next, Chapter 6 focuses on covering the emerging frameworks and systems in the domain of scalable machine learning and deep learning processing. Lastly, Chapter 7 shares conclusions and an outlook on future research challenges. This new and considerably enlarged second edition not only contains the completely new Chapter 6, but also offers refreshed content covering the state of the art in all domains of big data processing over the last years.
Overall, the book offers a valuable reference guide for professionals, students, and researchers in the domain of big data processing systems. Further, its comprehensive content will hopefully encourage readers to pursue further research on the subject.
Do your business intelligence (BI) projects take too long to deliver? Is the value of the deliverables less than satisfactory? Do these projects propagate poor data management practices? If you screamed yes to any of these questions, read this book to master a proven approach to building your enterprise data warehouse and BI initiatives. "Extreme Scoping", based on the Business Intelligence Roadmap, will show you how to build analytics applications rapidly yet not sacrifice data management and enterprise architecture. In addition, all of the roles required to deliver all seven steps of this agile methodology are explained along with many real-world examples. From Wayne Eckerson's Foreword -- I've read many books about data warehousing and business intelligence (BI). This book by Larissa Moss is one of the best. I should not be surprised. Larissa has spent years refining the craft of designing, building, and delivering BI applications. Over the years, she has developed a keen insight about what works and doesn't work in BI. This book brings to light the wealth of that development experience. Best of all, this is not some dry text that laboriously steps readers through a technical methodology. Larissa expresses her ideas in a clear, concise, and persuasive manner. I highlighted so many beautifully written and insightful paragraphs in her manuscript that it became comical. I desperately wanted the final, published book rather than the manuscript so I could dog-ear it to death and place it front-and-center in my office bookshelf! From David Wells' Foreword: Extreme Scoping is rich with advice and guidance for virtually every aspect of BI projects, from planning and requirements to deployment, and from back-end data management to front-end information and analytics services. Larissa is both a pragmatist and an independent thinker. Those qualities come through in the style of this book. This is a well-written book that is easy to absorb. It is not full of surprises.
It is filled with a lot of common sense and lessons learned through experience.
Get started with Azure Synapse Analytics, Microsoft's modern data analytics platform. This book covers core components such as Synapse SQL, Synapse Spark, Synapse Pipelines, and many more, along with their architecture and implementation. The book begins with an introduction to core data and analytics concepts followed by an understanding of traditional/legacy data warehouses, the modern data warehouse, and the most modern data lakehouse. You will go through the introduction and background of Azure Synapse Analytics along with its main features and key service capabilities. Core architecture is discussed, along with Synapse SQL. You will learn its main features and how to create a dedicated Synapse SQL pool and analyze your big data using serverless Synapse SQL pools. You will also learn Synapse Spark and Synapse Pipelines, with examples, followed by Synapse Workspace, Synapse Studio, Synapse Link, and their features. You will go through use cases in Azure Synapse and understand the reference architecture for Synapse Analytics. After reading this book, you will be able to work with Azure Synapse Analytics and understand its architecture, main components, features, and capabilities.
What You Will Learn
- Understand core data and analytics concepts and data lakehouse concepts
- Be familiar with the overall Azure Synapse architecture and its main components
- Be familiar with Synapse SQL and Synapse Spark architecture components
- Work with integrated Apache Spark (aka Synapse Spark) and Synapse SQL engines
- Understand Synapse Workspace, Synapse Studio, and Synapse Pipelines
- Study reference architecture and use cases
Who This Book Is For
Azure data analysts, data engineers, data scientists, and solutions architects
This two-volume set of IFIP AICT 583 and 584 constitutes the refereed proceedings of the 16th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2020, held in Neos Marmaras, Greece, in June 2020.* The 70 full papers and 5 short papers presented were carefully reviewed and selected from 149 submissions. They cover a broad range of topics related to technical, legal, and ethical aspects of artificial intelligence systems and their applications and are organized in the following sections: Part I: classification; clustering - unsupervised learning - analytics; image processing; learning algorithms; neural network modeling; object tracking - object detection systems; ontologies - AI; and sentiment analysis - recommender systems. Part II: AI ethics - law; AI constraints; deep learning - LSTM; fuzzy algebra - fuzzy systems; machine learning; medical - health systems; and natural language. *The conference was held virtually due to the COVID-19 pandemic.
Are you struggling with the formal design of your organisation's data resource? Do you find yourself forced into generic data architectures and universal data models? Do you find yourself warping the business to fit a purchased application? Do you find yourself pushed into developing physical databases without formal logical design? Do you find disparate data throughout the organisation? If the answer to any of these questions is Yes, then you need to read Data Resource Design to help guide you through a formal design process that produces a high quality data resource within a single common data architecture. Most public and private sector organisations do not consistently follow a formal data resource design process that begins with the organisation's perception of the business world, proceeds through logical data design, through physical data design, and into implementation. Most organisations charge ahead with physical database implementation, physical package implementation, and other brute-force-physical approaches. The result is a data resource that becomes disparate and does not fully support the organisation in its business endeavours. This book describes how to formally design an organisation's data resource to meet its current and future business information demand. It builds on "Data Resource Simplexity", which described how to stop the burgeoning data disparity, and on "Data Resource Integration", which described how to understand and resolve an organisation's disparate data resource. It describes the concepts, principles, and techniques for building a high quality data resource based on an organisation's perception of the business world in which they operate. Like "Data Resource Simplexity" and "Data Resource Integration", Michael Brackett draws on five decades of data management experience building and managing data resources, and resolving disparate data in both public and private sector organisations. 
He leverages theories, concepts, principles, and techniques from a wide variety of disciplines, such as human dynamics, mathematics, physics, chemistry, philosophy, and biology, and applies them to properly designing data as a critical resource of an organisation. He shows how to understand the business environment where an organisation operates and design a data resource that supports the organisation in that business environment.
Every business intelligence application ultimately rests on a data warehouse. Data warehousing is therefore a very important area of applied computer science, particularly in the age of Big Data. This book examines the data warehouse from two perspectives: that of the developer and that of the user. Future developers learn to build a data warehouse themselves using suitable methods; for future users, the author covers reporting, online analytical processing, and data mining. The textbook is also suitable for self-study, although prior knowledge of database systems is assumed.
Create a data warehouse, complete with reporting and dashboards, using Google's BigQuery technology. This book takes you from the basic concepts of data warehousing through the design, build, load, and maintenance phases. You will build capabilities to capture data from the operational environment, and then mine and analyze that data for insight into making your business more successful. You will gain practical knowledge about how to use BigQuery to solve data challenges in your organization. BigQuery is a managed cloud platform from Google that provides enterprise data warehousing and reporting capabilities. Part I of this book shows you how to design and provision a data warehouse in the BigQuery platform. Part II teaches you how to load and stream your operational data into the warehouse to make it ready for analysis and reporting. Parts III and IV cover querying and maintaining, helping you keep your information relevant with other Google Cloud Platform services and advanced BigQuery. Part V takes reporting to the next level by showing you how to create dashboards to provide at-a-glance visual representations of your business situation. Part VI provides an introduction to data science with BigQuery, covering machine learning and Jupyter notebooks.
What You Will Learn
- Design a data warehouse for your project or organization
- Load data from a variety of external and internal sources
- Integrate other Google Cloud Platform services for more complex workflows
- Maintain and scale your data warehouse as your organization grows
- Analyze, report, and create dashboards on the information in the warehouse
- Become familiar with machine learning techniques using BigQuery ML
Who This Book Is For
Developers who want to provide business users with fast, reliable, and insightful analysis from operational data, and data analysts interested in a cloud-based solution that avoids the pain of provisioning their own servers.
The chapter "An Efficient Index for Reachability Queries in Public Transport Networks" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.