Data warehousing
Multi-Modal User Interactions in Controlled Environments investigates the capture and analysis of users' multimodal behavior (mainly eye gaze, eye fixation, eye blink, and body movements) within a real controlled environment (a controlled supermarket, a personal environment) in order to adapt the response of the computer/environment to the user. Such data is captured using non-intrusive sensors (for example, cameras in the stands of a supermarket) installed in the environment. This multi-modal, video-based behavioral data is analyzed to infer user intentions while assisting users in their day-to-day tasks by adapting the system's response to their requirements seamlessly. The book also focuses on the presentation of information to the user. Multi-Modal User Interactions in Controlled Environments is designed for professionals in industry, including professionals in the domains of security and interactive web television. It is also suitable for graduate-level students in computer science and electrical engineering.
Did you know that there is a technology inside Excel, and Power BI, that allows you to create magic in your data, avoid repetitive manual work, and save time and money? Using Excel and Power BI, you can: Save time by eliminating the pain of copying and pasting data into workbooks and then manually cleaning that data. Gain productivity by properly preparing data yourself, rather than relying on others to do it. Gain efficiency by reducing the time it takes to prepare data for analysis, and make informed decisions more quickly. With the data connectivity and transformative technology found in Excel and Power BI, users with basic Excel skills can import data and then easily reshape and cleanse it, using simple, intuitive user interfaces. Known as "Get & Transform" in Excel 2016, as the separate "Power Query" add-in in Excel 2013 and 2010, and included in Power BI, this technology lets you tackle common data challenges, resolving them with simple mouse clicks and lightweight formula editing. With the data transformation skills acquired through this book, you will be able to create an automated transformation of virtually any type of data set to mine its hidden insights.
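Power Query itself uses the M language, but the clean-and-reshape workflow the book teaches can be sketched in Python with pandas as a rough analogue (this is not Power Query's M; the file name, banner rows, and column layout below are hypothetical):

```python
# A pandas analogue of a typical Get & Transform cleanup: import a messy
# export, normalize headers, drop blank rows, and unpivot month columns
# into a tidy table. File and column names are invented for illustration.
import pandas as pd

raw = pd.read_csv("sales_export.csv", skiprows=2)  # skip report banner rows
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
raw = raw.dropna(how="all")                        # remove fully blank rows

# Unpivot month columns into one (region, month, sales) row per measure
tidy = raw.melt(id_vars=["region"], var_name="month", value_name="sales")
tidy["sales"] = pd.to_numeric(tidy["sales"], errors="coerce").fillna(0)

print(tidy.head())
```

Once such a script is saved, re-running it replaces the copy-paste-and-clean cycle the blurb describes, which is the same automation payoff Get & Transform delivers inside Excel.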
Discover how graph databases can help you manage and query highly connected data. With this practical book, you'll learn how to design and implement a graph database that brings the power of graphs to bear on a broad range of problem domains. Whether you want to speed up your response to user queries or build a database that can adapt as your business evolves, this book shows you how to apply the schema-free graph model to real-world problems. This second edition includes new code samples and diagrams, using the latest Neo4j syntax, as well as information on new functionality. Learn how different organizations are using graph databases to outperform their competitors. With this book's data modeling, query, and code examples, you'll quickly be able to implement your own solution.
- Model data with the Cypher query language and property graph model
- Learn best practices and common pitfalls when modeling with graphs
- Plan and implement a graph database solution in test-driven fashion
- Explore real-world examples to learn how and why organizations use a graph database
- Understand common patterns and components of graph database architecture
- Use analytical techniques and algorithms to mine graph database information
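As a taste of the property graph model and Cypher mentioned above, here is a minimal sketch using the official Neo4j Python driver; the connection URI, credentials, and toy data are placeholders, not examples from the book:

```python
# Create a tiny property graph and traverse it with Cypher.
# URI and credentials below are placeholders for a local Neo4j instance.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Two Person nodes joined by a FRIEND_OF relationship (the property graph model)
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:FRIEND_OF]->(b)",
        a="Alice", b="Bob",
    )
    # Variable-length pattern matching: friends and friends-of-friends
    result = session.run(
        "MATCH (p:Person {name: $name})-[:FRIEND_OF*1..2]->(f) "
        "RETURN DISTINCT f.name AS friend",
        name="Alice",
    )
    for record in result:
        print(record["friend"])

driver.close()
```

The variable-length pattern `[:FRIEND_OF*1..2]` is the kind of query that is awkward to express as SQL joins but natural in a graph database, which is the book's central argument.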
This two-volume set constitutes the refereed proceedings of the 17th International Conference on Collaborative Computing: Networking, Applications, and Worksharing, CollaborateCom 2021, held in October 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 62 full papers and 7 short papers presented were carefully reviewed and selected from 206 submissions. The papers reflect the conference sessions, covering: optimization for collaborative systems; optimization based on collaborative computing; UAV and traffic systems; recommendation systems; network and security; IoT and social networks; image handling and human recognition; edge computing; collaborative working; and deep learning and applications.
This book constitutes the refereed proceedings of the 28th International Symposium on String Processing and Information Retrieval, SPIRE 2021, held in Lille, France, in October 2021 (the symposium itself took place virtually). The 14 full papers and 4 short papers, presented together with 2 invited papers in this volume, were carefully reviewed and selected from 30 submissions. They cover topics such as data structures, algorithms, information retrieval, compression, combinatorics on words, and computational biology.
This book provides readers with the "big picture" and a comprehensive survey of the domain of big data processing systems. For the past decade, the Hadoop framework has dominated the world of big data processing, yet recently academia and industry have started to recognize its limitations in several application domains; it is now gradually being replaced by a collection of engines dedicated to specific verticals (e.g. structured data, graph data, and streaming data). The book explores this new wave of systems, which it refers to as Big Data 2.0 processing systems. After Chapter 1 presents the general background of the big data phenomenon, Chapter 2 provides an overview of various general-purpose big data processing systems that allow their users to develop big data processing jobs for different application domains. In turn, Chapter 3 examines various systems that have been introduced to support the SQL flavor on top of the Hadoop infrastructure and provide competitive and scalable performance in the processing of large-scale structured data. Chapter 4 discusses several systems that have been designed to tackle the problem of large-scale graph processing, while the main focus of Chapter 5 is on systems designed to provide scalable solutions for processing big data streams, and on systems that support the development of data pipelines between various types of big data processing jobs and systems. Next, Chapter 6 covers the emerging frameworks and systems in the domain of scalable machine learning and deep learning processing. Lastly, Chapter 7 shares conclusions and an outlook on future research challenges. This new and considerably enlarged second edition not only contains the completely new Chapter 6, but also offers refreshed coverage of the state of the art across all domains of big data processing. Overall, the book offers a valuable reference guide for professionals, students, and researchers in the domain of big data processing systems. Further, its comprehensive content will hopefully encourage readers to pursue further research on the subject.
Do your business intelligence (BI) projects take too long to deliver? Is the value of the deliverables less than satisfactory? Do these projects propagate poor data management practices? If you screamed yes to any of these questions, read this book to master a proven approach to building your enterprise data warehouse and BI initiatives. "Extreme Scoping", based on the Business Intelligence Roadmap, will show you how to build analytics applications rapidly without sacrificing data management and enterprise architecture. In addition, all of the roles required to deliver the seven steps of this agile methodology are explained, along with many real-world examples. From Wayne Eckerson's Foreword: I've read many books about data warehousing and business intelligence (BI). This book by Larissa Moss is one of the best. I should not be surprised. Larissa has spent years refining the craft of designing, building, and delivering BI applications. Over the years, she has developed a keen insight about what works and doesn't work in BI. This book brings to light the wealth of that development experience. Best of all, this is not some dry text that laboriously steps readers through a technical methodology. Larissa expresses her ideas in a clear, concise, and persuasive manner. I highlighted so many beautifully written and insightful paragraphs in her manuscript that it became comical. I desperately wanted the final, published book rather than the manuscript so I could dog-ear it to death and place it front-and-center on my office bookshelf! From David Wells' Foreword: Extreme Scoping is rich with advice and guidance for virtually every aspect of BI projects, from planning and requirements to deployment, and from back-end data management to front-end information and analytics services. Larissa is both a pragmatist and an independent thinker. Those qualities come through in the style of this book. This is a well-written book that is easy to absorb. It is not full of surprises. It is filled with a lot of common sense and lessons learned through experience.
This contributed volume discusses essential topics and the fundamentals of Big Data Emergency Management, focusing primarily on the application of Big Data to emergency management. It walks the reader through the state of the art in different facets of the big disaster data field, including many elements that are important for these technologies to have real-world impact. The book brings together computational techniques from machine learning, communication network analysis, natural language processing, knowledge graphs, data mining, and information visualization, aiming at methods that are typically used for processing big emergency data. It also provides authoritative insights and highlights valuable lessons from distinguished authors who are leaders in this field. Emergencies are severe, large-scale, non-routine events that disrupt the normal functioning of a community or a society, causing widespread and overwhelming losses and impacts. Emergency management is the process of planning and taking actions to minimize the social and physical impact of emergencies and to reduce the community's vulnerability to their consequences. Information exchange before, during, and after disaster periods can greatly reduce the losses caused by an emergency: it allows people to make better use of available resources, such as relief materials and medical supplies, and provides a channel through which reports on casualties and losses in each affected area can be delivered expeditiously. Big Data-Driven Emergency Management refers to applying advanced data collection and analysis technologies to achieve more effective and responsive decision-making during emergencies. Researchers, engineers, and computer scientists working in Big Data Emergency Management who need to deal with large and complex sets of data will want to purchase this book. Advanced-level students interested in data-driven emergency/crisis/disaster management will also find it a useful study guide.
Get started with Azure Synapse Analytics, Microsoft's modern data analytics platform. This book covers core components such as Synapse SQL, Synapse Spark, and Synapse Pipelines, along with their architecture and implementation. The book begins with an introduction to core data and analytics concepts, followed by an understanding of the traditional/legacy data warehouse, the modern data warehouse, and the most modern data lakehouse. You will go through the introduction and background of Azure Synapse Analytics along with its main features and key service capabilities. Core architecture is discussed, along with Synapse SQL: you will learn its main features, how to create a dedicated Synapse SQL pool, and how to analyze your big data using the serverless Synapse SQL pool. You will also learn Synapse Spark and Synapse Pipelines, with examples, then Synapse Workspace and Synapse Studio, followed by Synapse Link and its features. You will go through use cases in Azure Synapse and understand the reference architecture for Synapse Analytics. After reading this book, you will be able to work with Azure Synapse Analytics and understand its architecture, main components, features, and capabilities.
What You Will Learn:
- Understand core data and analytics concepts and data lakehouse concepts
- Be familiar with the overall Azure Synapse architecture and its main components
- Be familiar with Synapse SQL and Synapse Spark architecture components
- Work with the integrated Apache Spark (aka Synapse Spark) and Synapse SQL engines
- Understand Synapse Workspace, Synapse Studio, and Synapse Pipelines
- Study reference architecture and use cases
Who This Book Is For: Azure data analysts, data engineers, data scientists, and solutions architects
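As a rough illustration of the Synapse Spark workloads the book covers, here is a minimal PySpark sketch; in a Synapse notebook a `spark` session is pre-created, and the storage path and column names below are invented:

```python
# Aggregate a fact table stored in a data lake with Spark.
# The ADLS Gen2 path and schema are placeholders for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("synapse-sketch").getOrCreate()

sales = spark.read.parquet("abfss://data@myaccount.dfs.core.windows.net/sales/")

summary = (
    sales.groupBy("region")
         .agg(F.sum("amount").alias("total_sales"))
         .orderBy(F.desc("total_sales"))
)
summary.show()
```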
This two-volume set of IFIP AICT 583 and 584 constitutes the refereed proceedings of the 16th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2020, held in Neos Marmaras, Greece, in June 2020 (the conference was held virtually due to the COVID-19 pandemic). The 70 full papers and 5 short papers presented were carefully reviewed and selected from 149 submissions. They cover a broad range of topics related to technical, legal, and ethical aspects of artificial intelligence systems and their applications, and are organized in the following sections: Part I: classification; clustering - unsupervised learning - analytics; image processing; learning algorithms; neural network modeling; object tracking - object detection systems; ontologies - AI; and sentiment analysis - recommender systems. Part II: AI ethics - law; AI constraints; deep learning - LSTM; fuzzy algebra - fuzzy systems; machine learning; medical - health systems; and natural language.
Are you struggling with the formal design of your organisation's data resource? Do you find yourself forced into generic data architectures and universal data models? Do you find yourself warping the business to fit a purchased application? Do you find yourself pushed into developing physical databases without formal logical design? Do you find disparate data throughout the organisation? If the answer to any of these questions is yes, then you need to read Data Resource Design, which guides you through a formal design process that produces a high-quality data resource within a single common data architecture. Most public- and private-sector organisations do not consistently follow a formal data resource design process that begins with the organisation's perception of the business world, proceeds through logical data design and physical data design, and ends in implementation. Most organisations charge ahead with physical database implementation, physical package implementation, and other brute-force physical approaches. The result is a data resource that becomes disparate and does not fully support the organisation in its business endeavours. This book describes how to formally design an organisation's data resource to meet its current and future business information demand. It builds on "Data Resource Simplexity", which described how to stop the burgeoning data disparity, and on "Data Resource Integration", which described how to understand and resolve an organisation's disparate data resource. It describes the concepts, principles, and techniques for building a high-quality data resource based on an organisation's perception of the business world in which it operates. As in "Data Resource Simplexity" and "Data Resource Integration", Michael Brackett draws on five decades of data management experience building and managing data resources, and resolving disparate data, in both public- and private-sector organisations. He leverages theories, concepts, principles, and techniques from a wide variety of disciplines, such as human dynamics, mathematics, physics, chemistry, philosophy, and biology, and applies them to properly designing data as a critical resource of an organisation. He shows how to understand the business environment in which an organisation operates and how to design a data resource that supports the organisation in that environment.
Every business intelligence application ultimately rests on a data warehouse, which makes data warehousing a very important field of applied computer science, particularly in the age of big data. This book examines the data warehouse from two perspectives: that of the developer and that of the user. Future developers learn how to build a data warehouse themselves using suitable methods. For future users, the author covers reporting, online analytical processing, and data mining. The textbook is also suitable for self-study, although prior knowledge of database systems is assumed.
Create a data warehouse, complete with reporting and dashboards, using Google's BigQuery technology. This book takes you from the basic concepts of data warehousing through the design, build, load, and maintenance phases. You will build capabilities to capture data from the operational environment, and then mine and analyze that data for insight into making your business more successful. You will gain practical knowledge about how to use BigQuery to solve data challenges in your organization. BigQuery is a managed cloud platform from Google that provides enterprise data warehousing and reporting capabilities. Part I of this book shows you how to design and provision a data warehouse on the BigQuery platform. Part II teaches you how to load and stream your operational data into the warehouse to make it ready for analysis and reporting. Parts III and IV cover querying and maintenance, helping you keep your information relevant using other Google Cloud Platform services and advanced BigQuery features. Part V takes reporting to the next level by showing you how to create dashboards that provide at-a-glance visual representations of your business situation. Part VI provides an introduction to data science with BigQuery, covering machine learning and Jupyter notebooks.
What You Will Learn:
- Design a data warehouse for your project or organization
- Load data from a variety of external and internal sources
- Integrate other Google Cloud Platform services for more complex workflows
- Maintain and scale your data warehouse as your organization grows
- Analyze, report, and create dashboards on the information in the warehouse
- Become familiar with machine learning techniques using BigQuery ML
Who This Book Is For: Developers who want to provide business users with fast, reliable, and insightful analysis from operational data, and data analysts interested in a cloud-based solution that avoids the pain of provisioning their own servers.
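A minimal sketch of querying such a warehouse with the official google-cloud-bigquery client library; the project, dataset, and table names below are hypothetical:

```python
# Run an aggregate query against a BigQuery fact table and print the rows.
# Project, dataset, and table names are invented; authentication uses the
# default application credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT region, SUM(amount) AS total_sales
    FROM `my-analytics-project.warehouse.fact_sales`
    WHERE sale_date >= '2024-01-01'
    GROUP BY region
    ORDER BY total_sales DESC
"""

for row in client.query(sql).result():  # submits the job and waits for it
    print(row.region, row.total_sales)
```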
The chapter "An Efficient Index for Reachability Queries in Public Transport Networks" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
The new edition of the classic bestseller that launched the data warehousing industry covers new approaches and technologies, many of which have been pioneered by Inmon himself. In addition to explaining the fundamentals of data warehouse systems, the book covers new topics such as methods for handling unstructured data in a data warehouse and storing data across multiple storage media, and discusses the pros and cons of relational versus multidimensional design and how to measure return on investment in planning data warehouse projects. It covers advanced topics, including data monitoring and testing. Although the book includes an extra 100 pages' worth of valuable content, the price has actually been reduced from $65 to $55.
This book constitutes the thoroughly refereed post-conference proceedings of the Third COST Action IC1302 International KEYSTONE Conference on Semantic Keyword-Based Search on Structured Data Sources, IKC 2017, held in Gdansk, Poland, in September 2017. The 13 revised full papers and 5 short papers included in the first part of the book were carefully reviewed and selected from numerous submissions. The second part contains reports that summarize the major activities and achievements that have taken place in the context of the action: the short term scientific missions, the outcome of the summer schools, and the results achieved within the following four work packages: representation of structured data sources; keyword search; user interaction and keyword query interpretation; and research integration, showcases, benchmarks and evaluations. Also included is a short report generated by the chairs of the action. The papers cover a broad range of topics in the area of keyword search combining expertise from many different related fields such as information retrieval, natural language processing, ontology management, indexing, semantic web and linked data.
Dirty data is a problem that costs businesses thousands, if not millions, every year. In organisations large and small across the globe you will hear talk of data quality issues. What you will rarely hear about is the consequences, or how to fix them. Between the Spreadsheets: Classifying and Fixing Dirty Data draws on classification expert Susan Walsh's decade of experience in data classification to present a foolproof method for cleaning and classifying your data. The book covers everything from the very basics of data classification to normalisation and taxonomies, and presents the author's proven COAT methodology, helping ensure an organisation's data is Consistent, Organised, Accurate and Trustworthy. A series of data horror stories outlines what can go wrong in managing data, and how it can be fixed. After reading this book, regardless of your level of experience, not only will you be able to work with your data more efficiently, but you will also understand the impact of the work you do with it and how it affects the rest of the organisation. Written in an engaging and highly practical manner, Between the Spreadsheets gives readers of all levels a deep understanding of the dangers of dirty data and the confidence and skills to work with it more efficiently and effectively.
With this textbook, Vaisman and Zimanyi deliver excellent coverage of data warehousing and business intelligence technologies, ranging from the most basic principles to recent findings and applications. To this end, their work is structured into three parts. Part I describes "Fundamental Concepts", including multi-dimensional models; conceptual and logical data warehouse design; and MDX and SQL/OLAP. Subsequently, Part II details "Implementation and Deployment", which includes physical data warehouse design; data extraction, transformation, and loading (ETL); and data analytics. Lastly, Part III covers "Advanced Topics" such as spatial data warehouses; trajectory data warehouses; semantic technologies in data warehouses; and novel technologies like MapReduce, column-store databases, and in-memory databases. As a key characteristic of the book, most of the topics are presented and illustrated using application tools. Specifically, a case study based on the well-known Northwind database illustrates how the concepts presented in the book can be implemented using Microsoft Analysis Services and Pentaho Business Analytics. All chapters are summarized with review questions and exercises to support comprehensive student learning. Supplemental material to assist instructors using this book as a course text is available at http://cs.ulb.ac.be/DWSDIbook/, including electronic versions of the figures, solutions to all exercises, and a set of slides accompanying each chapter. Overall, students, practitioners, and researchers alike will find this book the most comprehensive reference work on data warehouses, with key topics described in a clear and educational style.
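The OLAP-style cross-tab analysis the book performs with MDX and SQL/OLAP can be approximated in pandas; this pivot-table analogue is not MDX, and the toy rows below merely stand in for Northwind-style facts:

```python
# A pivot table as a two-dimensional OLAP cross-tab: sales by year and
# category, with grand totals on both axes. All data is invented.
import pandas as pd

facts = pd.DataFrame({
    "year":     [2023, 2023, 2024, 2024],
    "category": ["Beverages", "Produce", "Beverages", "Produce"],
    "sales":    [1200.0, 800.0, 1500.0, 950.0],
})

cube = pd.pivot_table(facts, values="sales", index="year",
                      columns="category", aggfunc="sum", margins=True)
print(cube)
```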
How to build and maintain strong data organizations--the Dummies way. Data Governance For Dummies offers an accessible first step for decision makers into understanding how data governance works and how to apply it to an organization in a way that improves results and doesn't disrupt. Prep your organization to handle the data explosion (if you know, you know) and learn how to manage this valuable asset. Take full control of your organization's data with all the info and how-tos you need. This book walks you through making accurate data readily available and maintaining it in a secure environment. It serves as your step-by-step guide to extracting every ounce of value from your data.
- Identify the impact and value of data in your business
- Design governance programs that fit your organization
- Discover and adopt tools that measure performance and need
- Address data needs and build a more data-centric business culture
This is the perfect handbook for professionals in the world of data analysis and business intelligence, plus the people who interact with data on a daily basis. And, as always, Dummies explains things in terms anyone can understand, making it easy to learn everything you need to know.
Publisher's Note: Products purchased from third-party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product. The definitive guide to dimensional design for your data warehouse. Learn the best practices of dimensional design. Star Schema: The Complete Reference offers in-depth coverage of design principles and their underlying rationales. Organized around design concepts and illustrated with detailed examples, this is a step-by-step guidebook for beginners and a comprehensive resource for experts. This all-inclusive volume begins with dimensional design fundamentals and shows how they fit into diverse data warehouse architectures, including those of W.H. Inmon and Ralph Kimball. The book progresses through a series of advanced techniques that help you address real-world complexity, maximize performance, and adapt to the requirements of BI and ETL software products. You are furnished with design tasks and deliverables that can be incorporated into any project, regardless of architecture or methodology.
- Master the fundamentals of star schema design and slow change processing
- Identify situations that call for multiple stars or cubes
- Ensure compatibility across subject areas as your data warehouse grows
- Accommodate repeating attributes, recursive hierarchies, and poor data quality
- Support conflicting requirements for historic data
- Handle variation within a business process and correlation of disparate activities
- Boost performance using derived schemas and aggregates
- Learn when it's appropriate to adjust designs for BI and ETL tools
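To make "slow change processing" concrete, here is an illustrative Python sketch (not the book's code) of a type 2 slowly changing dimension, in which a change expires the current row and appends a new version so history is preserved:

```python
# Type 2 SCD: keep full history by versioning dimension rows.
# Plain dicts stand in for warehouse tables; all data is invented.
from datetime import date

dim_customer = [
    {"key": 1, "customer_id": "C001", "city": "Cape Town",
     "effective_from": date(2020, 1, 1), "effective_to": None, "current": True},
]

def apply_scd2(dim, customer_id, new_city, change_date):
    """Expire the current row for customer_id and append a new version."""
    for row in dim:
        if row["customer_id"] == customer_id and row["current"]:
            if row["city"] == new_city:
                return  # attribute unchanged: nothing to record
            row["effective_to"] = change_date
            row["current"] = False
    dim.append({
        "key": max(r["key"] for r in dim) + 1,
        "customer_id": customer_id, "city": new_city,
        "effective_from": change_date, "effective_to": None, "current": True,
    })

apply_scd2(dim_customer, "C001", "Johannesburg", date(2024, 6, 1))
for row in dim_customer:
    print(row)
```

Fact rows loaded after the change reference the new surrogate key (2 here), while older facts keep pointing at key 1, so historical reports remain exactly as they were.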
This book constitutes the refereed proceedings of the 6th International Conference on E-Technologies, MCETECH 2015, held in Montreal, Canada, in May 2015. The 18 papers presented in this volume were carefully reviewed and selected from 42 submissions. They have been organized in topical sections on process adaptation; legal issues; social computing; eHealth; and eBusiness, eEducation and eLogistics.