Questions of privacy, borders, and nationhood are increasingly shaping the way we think about all things digital. Data Centers brings together essays and photographic documentation that analyze recent and ongoing developments. Taking Switzerland as an example, the book looks at the country's data centers, law firms, corporations, and government institutions involved in the creation, maintenance, and regulation of digital infrastructures. Beneath the official storyline of Switzerland's moderate climate, political stability, and relatively clean energy mix, the book uncovers a much more varied and sometimes contradictory set of narratives.
It is in the context of maturity that one needs to read Master Data Management and Enterprise Engineering by Dr. M. Naoulo. The world of technology has several related yet distinct approaches: Martin's and Finkelstein's information engineering, Ted Codd's relational technology, Inmon's data warehouse, and others. This book takes these approaches and does two important things: it blends them together, and it turns these bodies of thought into an engineering approach. As such, this book is another important step in the evolution of computer science and a next step in its maturation. I recommend it to any serious student or practitioner of computer science. =========== W.H. (Bill) Inmon. May 21, 2012. =========== The book establishes the fundamentals of design, modeling, architecture, and management of Master, Transactional, and Process Data, and the principles of Enterprise Engineering. It comprises innovative techniques and an elegant approach to design that address:
> Master Data Management, through grouping and classifying Master Data in an innovative way that can be implemented across the Enterprise Data Architecture: the Central Data Repository and the Business and Enterprise Intelligence (BI & EI) Data Marts. This data classification is based on the questions Why, How, Who, What, Where, and When.
> Separating Master Data, Transactional Data, and Process Data. This separation makes MDM, BI, and EI easier to deal with, and enormously facilitates the mapping and propagation between the Central Data Repository and the Business and Enterprise Data Marts.
> The basics and techniques of designing the Enterprise Engineering Model. This model supports Transactional Systems, Business Intelligence, Business Process Management, Enterprise Intelligence, and Enterprise Engineering.
> Synchronization and integration of Master, Transactional, and Process Data across the enterprise: Legacy Systems, the Central Data Repository, and the Business and Enterprise Intelligence Data Marts.
> Assessing the different Enterprise Data Architectures.
The first part encompasses the Enterprise Data Framework and its new modeling techniques. The Enterprise Data Framework depicts the data architecture across the enterprise, covering the integration and consolidation of Legacy System data in a Central Data Repository and the propagation of this data into the Business and Enterprise Intelligence Data Marts. The Enterprise Engineering Model presents a clear and concise illustration of the operational aspects of the enterprise and their relations to the enterprise's needs. It includes:
> Master Data, representing the main objects of an enterprise,
> Transactional Data, detailing the results of transactions occurring in an enterprise, and
> Process Data, capturing the data pertinent to the activities of the functioning of an enterprise.
The second part encompasses the Enterprise Engineering Framework, Methodology, Guidelines, Deliverables, and Techniques. It provides the blueprint of the functioning of enterprises, detailing the basics of Enterprise Engineering and its implementation through the processing of the Enterprise Engineering Model. It provides the cost data (material cost, labor cost, time) reflecting the functioning of the enterprise and points out the efficiency, performance, strengths, and weaknesses of its operation. Detailed case studies are presented in support of the theoretical aspects of Enterprise Engineering; they provide clear and practical hands-on exercises reflecting the functioning of enterprises and illustrating the implementation of Enterprise Engineering.
Until recently, many people thought big data was a passing fad. "Data science" was an enigmatic term. Today, big data is taken seriously, and data science is considered downright sexy. With this anthology of reports from award-winning journalist Mike Barlow, you'll appreciate how data science is fundamentally altering our world, for better and for worse. Barlow paints a picture of the emerging data space in broad strokes. From new techniques and tools to the use of data for social good, you'll find out how far data science reaches. With this anthology, you'll learn how:
* Analysts can now get results from their data queries in near real time
* Indie manufacturers are blurring the lines between hardware and software
* Companies try to balance their desire for rapid innovation with the need to tighten data security
* Advanced analytics and low-cost sensors are transforming equipment maintenance from a cost center to a profit center
* CIOs have gradually evolved from order takers to business innovators
* New analytics tools let businesses go beyond data analysis and straight to decision-making
Mike Barlow is an award-winning journalist, author, and communications strategy consultant. Since launching his own firm, Cumulus Partners, he has represented major organizations in a number of industries.
Big Data Imperatives focuses on resolving the key questions on everyone's mind: Which data matters? Do you have enough data volume to justify the usage? How do you want to process this amount of data? How long do you really need to keep it active for your analysis, marketing, and BI applications? Big data is emerging from the realm of one-off projects to mainstream business adoption; however, the real value of big data is not in its overwhelming size, but in its effective use. Your goal may be to obtain insight from voluminous data, with billions of loosely structured bytes coming from different channels spread across different locations, which need to be processed until the needle in the haystack is found. This book addresses the following big data characteristics:
* Very large, distributed aggregations of loosely structured data -- often incomplete and inaccessible
* Petabytes/exabytes of data
* Millions/billions of people providing/contributing to the context behind the data
* Flat schemas with few complex interrelationships
* Involves time-stamped events
* Made up of incomplete data
* Includes connections between data elements that must be probabilistically inferred
Big Data Imperatives explains what big data can do: it can batch process millions and billions of records, both unstructured and structured, much faster and more cheaply. Big data analytics provide a platform to merge all analysis, which enables data analysis to be more accurate, well-rounded, reliable, and focused on a specific business capability. Big Data Imperatives describes the complementary nature of traditional data warehouses and big data analytics platforms and how they feed each other. This book aims to bring the big data and analytics realms together, with a greater focus on architectures that leverage the scale and power of big data and the ability to integrate and apply analytics principles to data that was not previously accessible.
This book can also be used as a handbook for practitioners, guiding them on methodology, technical architecture, analytics techniques, and best practices. At the same time, it aims to hold the interest of those new to big data and analytics by giving them a deep insight into the realm of big data. What you'll learn:
* The technology behind big data platforms, their implementation, and their usage for analytics
* Big data architectures
* Big data design patterns
* Implementation best practices
Who this book is for: This book is designed for IT professionals, data warehousing and business intelligence professionals, data analysis professionals, architects, developers, and business users.
Best practices and invaluable advice from world-renowned data warehouse experts. In this book, leading data warehouse experts from the Kimball Group share best practices for using the "Business Intelligence release" of SQL Server, referred to as SQL Server 2008 R2. In this new edition, the authors explain how SQL Server 2008 R2 provides a collection of powerful new tools that extend its BI toolset to Excel and SharePoint users, and they show how to use SQL Server to build a successful data warehouse that supports the business intelligence requirements common to most organizations. Covering the complete suite of data warehousing and BI tools that are part of SQL Server 2008 R2, as well as Microsoft Office, the authors walk you through a full project lifecycle, including design, development, deployment, and maintenance.
* Features more than 50 percent new and revised material covering the rich new feature set of the SQL Server 2008 R2 release, as well as the Office 2010 release
* Includes brand new content on PowerPivot for Excel and SharePoint and on Master Data Services, and discusses the updated capabilities of SQL Server Analysis, Integration, and Reporting Services
* Shares detailed case examples that clearly illustrate how best to apply the techniques described in the book
* The accompanying Web site contains all code samples as well as the sample database used throughout the case studies
"The Microsoft Data Warehouse Toolkit, Second Edition" provides you with the knowledge of how and when to use BI tools such as Analysis Services and Integration Services to accomplish your most essential data warehousing tasks.
The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
Data warehousing and knowledge discovery are increasingly becoming mission-critical technologies for most organizations, both commercial and public, as it becomes increasingly important to derive important knowledge from both internal and external data sources. With the ever growing amount and complexity of the data and information available for decision making, the process of data integration, analysis, and knowledge discovery continues to meet new challenges, leading to a wealth of new and exciting research challenges within the area. Over the last decade, the International Conference on Data Warehousing and Knowledge Discovery (DaWaK) has established itself as one of the most important international scientific events within data warehousing and knowledge discovery. DaWaK brings together a wide range of researchers and practitioners working on these topics. The DaWaK conference series thus serves as a leading forum for discussing novel research results and experiences within data warehousing and knowledge discovery. This year's conference, the 11th International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2009), continued the tradition by disseminating and discussing innovative models, methods, algorithms, and solutions to the challenges faced by data warehousing and knowledge discovery technologies.
Business intelligence (BI) used to be so simple -- in theory anyway. Integrate and copy data from your transactional systems into a specialised relational database, apply BI reporting and query tools, and add business users. Job done. No longer. Analytics, big data, and an array of diverse technologies have changed everything. More importantly, business is insisting on ever more, ever faster from information and from IT in general. An emerging biz-tech ecosystem demands that business and IT work together. This book reflects the new reality that in today's socially complex and rapidly changing world, business decisions must be based on a combination of rational and intuitive thinking. Integrating cues from diverse information sources and tacit knowledge, decision makers create unique meaning to innovate heuristically at the speed of thought. This book provides a wealth of new models that business and IT can use together to design support systems for tomorrow's successful organisations. Dr Barry Devlin, one of the earliest proponents of data warehousing, goes back to basics to explore how the modern trinity of information, process, and people must be reinvented and restructured to deliver the value, insight, and innovation required by modern businesses. From here, he develops a series of novel architectural models that provide a new foundation for holistic information use across the entire business. From discovery to analysis and from decision making to action taking, he defines a fully integrated, closed-loop business environment. Covering every aspect of business analytics, big data, collaborative working, and more, this book takes over where BI ends to deliver the definitive framework for information use in the coming years.
A practical guide to making good decisions in a world of missing data In the era of big data, it is easy to imagine that we have all the information we need to make good decisions. But in fact the data we have are never complete, and may be only the tip of the iceberg. Just as much of the universe is composed of dark matter, invisible to us but nonetheless present, the universe of information is full of dark data that we overlook at our peril. In Dark Data, data expert David Hand takes us on a fascinating and enlightening journey into the world of the data we don't see. Dark Data explores the many ways in which we can be blind to missing data and how that can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous. Examining a wealth of real-life examples, from the Challenger shuttle explosion to complex financial frauds, Hand gives us a practical taxonomy of the types of dark data that exist and the situations in which they can arise, so that we can learn to recognize and control for them. In doing so, he not only teaches us to be alert to the problems presented by the things we don't know, but also shows how dark data can be used to our advantage, leading to greater understanding and better decisions. Today, we all make decisions using data. Dark Data shows us all how to reduce the risk of making bad ones.
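The book's central warning, that data missing for value-related reasons silently biases conclusions, can be sketched in a few lines. The numbers below are invented for illustration and are not taken from the book:

```python
# Toy illustration of "dark data": when missingness correlates with the value
# itself, summaries of the observed data are biased. All numbers are invented.

def observed_mean(values, is_reported):
    """Mean of only the values that were actually reported."""
    reported = [v for v, seen in zip(values, is_reported) if seen]
    return sum(reported) / len(reported)

# True salaries (in thousands); suppose earners above 100 decline the survey.
salaries = [40, 55, 70, 90, 120, 200]
reported = [s <= 100 for s in salaries]

true_mean = sum(salaries) / len(salaries)        # mean of all six values
survey_mean = observed_mean(salaries, reported)  # mean of the four responses

print(f"true mean: {true_mean:.2f}, survey mean: {survey_mean:.2f}")
```

The survey mean (63.75) badly understates the true mean (95.83) even though every reported value is accurate; the damage is done entirely by which values are missing.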
Cutting-edge content and guidance from a data warehousing expert--now expanded to reflect field trends Data warehousing has revolutionized the way businesses in a wide variety of industries perform analysis and make strategic decisions. Since the first edition of "Data Warehousing Fundamentals," numerous enterprises have implemented data warehouse systems and reaped enormous benefits. Many more are in the process of doing so. Now, this new, revised edition covers the essential fundamentals of data warehousing and business intelligence as well as significant recent trends in the field. The author provides an enhanced, comprehensive overview of data warehousing together with in-depth explanations of critical issues in planning, design, deployment, and ongoing maintenance. IT professionals eager to get into the field will gain a clear understanding of techniques for data extraction from source systems, data cleansing, data transformations, data warehouse architecture and infrastructure, and the various methods for information delivery. This practical "Second Edition" highlights the areas of data warehousing and business intelligence where high-impact technological progress has been made. Discussions on developments include data marts, real-time information delivery, data visualization, requirements gathering methods, multi-tier architecture, OLAP applications, Web clickstream analysis, data warehouse appliances, and data mining techniques. The book also contains review questions and exercises for each chapter, appropriate for self-study or classroom work, industry examples of real-world situations, and several appendices with valuable information. Specifically written for professionals responsible for designing, implementing, or maintaining data warehousing systems, "Data Warehousing Fundamentals" presents agile, thorough, and systematic development principles for the IT professional and anyone working or researching in information management.
The continuing interest in the theory of monetary integration is due on the one hand to the process of European unification, and on the other to the instability of the world monetary system since the collapse of the Bretton Woods agreement. In light of recent developments in the theory of economic policy and in exchange rate theory, however, the prevailing theory of the optimum currency area has proven too narrow and methodologically questionable. The book provides a comprehensive overview of the state of research on monetary integration in general and on European monetary integration in particular, written to be accessible to members of other social science disciplines as well. It also offers suggestions for further research, for example on the role of labor market institutions or of fiscal and social policy.
This book offers comprehensive coverage of information retrieval by considering both Text Based Information Retrieval (TBIR) and Content Based Image Retrieval (CBIR), together with new research topics. The approach to TBIR is based on creating a thesaurus, as well as event classification and detection. N-gram thesaurus generation for query refinement offers a new method for improving the precision of retrieval, while event classification and detection approaches aid in the classification and organization of information using web documents for domain-specific retrieval applications. In turn, with regard to content based image retrieval (CBIR) the book presents a histogram construction method, which is based on human visual perceptions of color. The book's overarching goal is to introduce readers to new ideas in an easy-to-follow manner.
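The n-gram thesaurus generation mentioned above rests on a simple primitive: extracting overlapping n-grams from a token sequence. A minimal sketch follows; the function and example query are illustrative, not taken from the book:

```python
# Minimal n-gram extraction, the building block behind n-gram thesaurus
# generation for query refinement. The example query is invented.

def ngrams(tokens, n):
    """Return all overlapping n-grams of the token list as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

query = "content based image retrieval".split()
bigrams = ngrams(query, 2)
print(bigrams)
# [('content', 'based'), ('based', 'image'), ('image', 'retrieval')]
```

A refinement system would look these bigrams up in the generated thesaurus to suggest more precise query phrasings.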
Learn data architecture essentials and prepare for the Salesforce Certified Data Architect exam with the help of tips and mock test questions. Key Features:
* Leverage data modelling, Salesforce database design, and techniques for effective data design
* Learn master data management, Salesforce data management, and the design considerations to include
* Get to grips with large data volumes, performance tuning, and techniques for mitigating poor performance
Book Description: The Salesforce Data Architect is a prerequisite exam for the Application Architect half of the Salesforce Certified Technical Architect credential. This book offers complete, up-to-date coverage of the Salesforce Data Architect exam so you can take it with confidence. The book is written in a clear, succinct way, with self-assessment and practice exam questions covering all the topics necessary to help you pass the exam with ease. You'll understand the theory around Salesforce data modeling, database design, master data management (MDM), Salesforce data management (SDM), and data governance. Additionally, performance considerations associated with large data volumes will be covered. You'll also get to grips with data migration and understand the supporting theory needed to achieve Salesforce Data Architect certification. By the end of this Salesforce book, you'll have covered everything you need to pass the Salesforce Data Architect certification exam and have a handy, on-the-job desktop reference guide to revisit the concepts.
What you will learn:
* Understand the topics relevant to passing the Data Architect exam
* Explore specialist areas such as large data volumes
* Test your knowledge with the help of exam-like questions
* Pick up useful tips and tricks that can be referred to time and again
* Understand the reasons underlying the way Salesforce data management works
* Discover the techniques that are available for loading massive amounts of data
Who this book is for: This book is for both aspiring Salesforce data architects and those already familiar with Salesforce data architecture who want to pass the exam and have a reference guide to revisit the material as part of their day-to-day job. Working knowledge of the Salesforce platform is assumed, alongside a clear understanding of Salesforce architectural concepts.
In the future, competitive advantage will go only to those companies that succeed in turning information into knowledge. Against this background, the two worlds of business intelligence and knowledge management are converging. The editor, head of the Institute for Management Information Systems and the Institute for Knowledge Management, demonstrates in this book the increasing integration of the two fields. The book thus brings transparency to one of the largest IT growth markets. Several studies, including one by the Fraunhofer Institute, examine the relevant market and provide important guidance. A wealth of examples shows the benefits that the use of advanced analysis tools and the development of knowledge management solutions already deliver today. The extensive list of vendors is also very helpful for practitioners. Finally, the integrated glossary offers a quick overview of the most important KM and BI terms.
Solutions for top management: this book presents the possible uses of data warehouse concepts specifically for decision makers. In addition to the fundamentals, it describes above all the areas of application, the available solutions, and practical experience. Management, particularly in the consumer goods industry and retail, is thus given the means to make the optimal decision for its own company.
Supercharge and deploy Amazon Redshift Serverless, train and deploy machine learning models using Amazon Redshift ML, and run inference queries at scale. Key Features:
* Learn to build multi-class classification models
* Create a model, validate it, and draw conclusions from K-means clustering
* Learn to create a SageMaker endpoint and use it to create a Redshift ML model for remote inference
Book Description: Amazon Redshift Serverless enables organizations to run petabyte-scale cloud data warehouses in minutes and in the most cost-effective way. Developers, data analysts, and BI analysts can deploy cloud data warehouses and use easy-to-use tools to train models and run predictions. Developers working with Amazon Redshift data warehouses will be able to put their SQL knowledge to work with this practical guide to training and deploying machine learning models. The book provides a hands-on approach to implementation and associated methodologies that will have you up and running, and productive, in no time. Complete with step-by-step explanations of essential concepts, practical examples, and self-assessment questions, you will begin by deploying and using Amazon Redshift Serverless and then dive into learning and deploying various types of machine learning projects using familiar SQL code. You will learn how to configure and deploy Amazon Redshift Serverless and understand the foundations of data analytics and the types of machine learning. Then you will take a deep dive into Redshift ML. By the end of this book, you will be able to configure and deploy Amazon Redshift Serverless, train and deploy machine learning models using Amazon Redshift ML, and run inference queries at scale.
What you will learn:
* Implement an end-to-end serverless architecture for ingestion, analytics, and machine learning using Redshift Serverless and Redshift ML
* Create supervised and unsupervised models, and use various techniques to influence your model
* Run inference queries at scale in Redshift to solve a variety of business problems using models created with Redshift ML or natively in Amazon SageMaker
* Optimize your Redshift data warehouse for extreme performance
* Ensure you are following proper security guidelines with Redshift ML
* Use model explainability in Amazon Redshift ML to help understand how each attribute in your training data contributes to the predicted result
Who this book is for: Data scientists and machine learning developers who work with Amazon Redshift and want to explore its machine learning capabilities will find this definitive guide helpful. A basic understanding of machine learning techniques and working knowledge of Amazon Redshift are needed to get the best from this book.
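The "SQL knowledge to work" idea above refers to Redshift ML's CREATE MODEL statement, which trains a model from a query and exposes it as a SQL function. A sketch of the shape of that workflow follows; the table, columns, role ARN, bucket, and workgroup names are all hypothetical:

```python
# Sketch of training and querying a Redshift ML model with plain SQL.
# All identifiers (table, columns, IAM role, bucket) are invented examples.

create_model_sql = """
CREATE MODEL customer_churn_model
FROM (SELECT age, monthly_spend, support_calls, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_customer_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket')
"""

# Once trained, the generated function is called like any other SQL function:
inference_sql = """
SELECT customer_id,
       predict_customer_churn(age, monthly_spend, support_calls)
FROM customer_activity
"""

# Either statement could be submitted to a serverless workgroup via the
# Redshift Data API, e.g.:
#   import boto3
#   client = boto3.client("redshift-data")
#   client.execute_statement(WorkgroupName="my-workgroup",
#                            Database="dev", Sql=create_model_sql)

print(create_model_sql.strip().splitlines()[0])
```

Behind the scenes Redshift ML hands the training data to SageMaker Autopilot (or a named SageMaker endpoint, for the remote-inference case the book covers) and registers the resulting function in the database.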
Most modern enterprises, institutions, and organizations rely on knowledge-based management systems, in which knowledge is gained from data analysis. Nowadays, knowledge-based management systems include data warehouses as their core components. The purpose of building a data warehouse is twofold: first, to integrate multiple heterogeneous, autonomous, and distributed data sources within an enterprise; second, to provide a platform for advanced, complex, and efficient data analysis. Data integrated in a data warehouse are analyzed by so-called On-Line Analytical Processing (OLAP) applications, designed, among other things, for discovering trends, patterns of behavior, and anomalies, as well as for finding dependencies between data. The massive amounts and complexity of integrated data, which more and more often come from Web-based, XML-based, spatio-temporal, object, and multimedia systems, make data integration and processing challenging. The objective of NEW TRENDS IN DATA WAREHOUSING AND DATA ANALYSIS is fourfold: first, to bring together the most recent research and practical achievements in DW and OLAP technologies; second, to open and discuss new, just-emerging areas of further development; third, to provide an up-to-date bibliography of published works and a resource of research achievements for anyone interested in current data warehouse issues; and finally, to assist in the dissemination of knowledge in the field of advanced DW and OLAP.
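The OLAP-style analysis described above, facts joined to dimensions and then rolled up along a chosen dimension, can be sketched in miniature. The fact and dimension data below are invented for illustration:

```python
# Toy sketch of an OLAP roll-up: a fact table joined to a dimension table,
# then aggregated along that dimension. All data are invented.
from collections import defaultdict

# Fact table rows: (product_id, region_id, sale_amount)
sales = [(1, 10, 500.0), (2, 10, 300.0), (1, 20, 250.0), (2, 20, 450.0)]

# Dimension table: region_id -> region name
regions = {10: "EMEA", 20: "APAC"}

# Roll up sales by region (a one-dimension slice of the data cube).
totals = defaultdict(float)
for product_id, region_id, amount in sales:
    totals[regions[region_id]] += amount

print(dict(totals))  # {'EMEA': 800.0, 'APAC': 700.0}
```

A real OLAP engine generalizes this to many dimensions at once (region by product by time) and precomputes such aggregates so the roll-ups are answered without scanning the facts.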
This application-oriented textbook focuses on the architectures, methods, and tools of decision support systems. Examples and exercises enable readers to develop applications with the demonstration software on the accompanying CD-ROM. An interactive slide collection illustrates the text and points to additional learning material. The first part presents traditional decision support approaches such as utility analysis (AHP) and what-if analyses, and introduces knowledge-based systems by way of rule-based systems. The second and third parts cover the core topics of data warehousing and data mining. Data warehousing and OLAP prepare the contents of production databases for queries and analyses by end users. After an overview of the most important data mining methods, the third part concentrates on two of the most widespread, rule induction and neural networks.
Explore how Delta brings reliability, performance, and governance to your data lake and all the AI and BI use cases built on top of it. Key Features:
* Learn Delta's core concepts and features, as well as what makes it a perfect match for data engineering and analysis
* Solve business challenges of different industry verticals using a scenario-based approach
* Make optimal choices by understanding the various tradeoffs provided by Delta
Book Description: Delta helps you generate reliable insights at scale and simplifies architecture around data pipelines, allowing you to focus primarily on refining the use cases being worked on. This is especially important when you consider that existing architecture is frequently reused for new use cases. In this book, you'll learn about the principles of distributed computing, data modeling techniques, and big data design patterns and templates that help solve end-to-end data flow problems for common scenarios and are reusable across use cases and industry verticals. You'll also learn how to recover from errors and the best practices around handling structured, semi-structured, and unstructured data using Delta. After that, you'll get to grips with features such as ACID transactions on big data, disciplined schema evolution, time travel to help rewind a dataset to a different time or version, and unified batch and streaming capabilities that will help you build agile and robust data products. By the end of this Delta book, you'll be able to use Delta as the foundational block for creating analytics-ready data that fuels all AI/BI use cases.
What you will learn:
* Explore the key challenges of traditional data lakes
* Appreciate the unique features of Delta that come out of the box
* Address reliability, performance, and governance concerns using Delta
* Analyze the open data format for an extensible and pluggable architecture
* Handle multiple use cases to support BI, AI, streaming, and data discovery
* Discover how common data and machine learning design patterns are executed on Delta
* Build and deploy data and machine learning pipelines at scale using Delta
Who this book is for: Data engineers, data scientists, ML practitioners, BI analysts, or anyone in the data domain working with big data will be able to put their knowledge to work with this practical guide to executing pipelines and supporting diverse use cases using the Delta protocol. Basic knowledge of SQL, Python programming, and Spark is required to get the most out of this book.
Build an end-to-end business solution in the cognitive automation lifecycle and explore UiPath Document Understanding, UiPath AI Center, and Druid. Key Features:
* Explore out-of-the-box (OOTB) AI models in UiPath
* Learn how to deploy, manage, and continuously improve machine learning models using UiPath AI Center
* Deploy UiPath-integrated chatbots and master UiPath Document Understanding
Book Description: Artificial intelligence (AI) enables enterprises to optimize business processes that are probabilistic, highly variable, and require cognitive abilities with unstructured data. Many believe there is a steep learning curve with AI; however, the goal of this book is to lower the barrier to using it. This practical guide to AI with UiPath will help RPA developers and tech-savvy business users learn how to incorporate cognitive abilities into business process optimization. With the hands-on approach of this book, you'll quickly be on your way to implementing cognitive automation to solve everyday business problems. Complete with step-by-step explanations of essential concepts, practical examples, and self-assessment questions, this book will help you understand the power of AI and give you an overview of the relevant out-of-the-box models. You'll learn about cognitive AI in the context of RPA, the basics of machine learning, and how to apply cognitive automation within the development lifecycle. You'll then put your skills to the test by building three use cases with UiPath Document Understanding, UiPath AI Center, and Druid. By the end of this AI book, you'll be able to build UiPath automations with the cognitive capabilities of intelligent document processing, machine learning, and chatbots, while understanding the development lifecycle.
What you will learn * Discover how to bridge the gap between RPA and cognitive automation * Understand how to configure, deploy, and maintain ML models in UiPath * Explore OOTB models to manage documents, chats, emails, and more * Prepare test data and test cases for user acceptance testing (UAT) * Build a UiPath automation to act upon Druid responses * Find out how to connect custom models to RPA Who this book is for AI engineers and RPA developers who want to upskill and deploy out-of-the-box models using UiPath's AI capabilities will find this guide useful. A basic understanding of robotic process automation and machine learning will be beneficial but not mandatory to get started with this UiPath book.
Develop the must-have skills required for any data scientist to get the best results from Azure Databricks. Key Features * Learn to develop and productionize ML pipelines using the Databricks Unified Analytics platform * See how to use AutoML, Feature Stores, and MLOps with Databricks * Get a complete understanding of data governance and model deployment Book Description In this book, you'll get to grips with Databricks, enabling you to power up your organization's data science applications. We'll walk through applying the Databricks AI and ML stack to real-world use cases for natural language processing, computer vision, time series data, and more. We'll dive deep into the complete model development life cycle for data ingestion and analysis, and get familiar with the latest offerings of AutoML, Feature Store, and MLStudio on the Databricks platform. You'll get hands-on experience implementing repeatable ML operations (MLOps) pipelines using MLflow, track model training and key metrics, and explore real-time ML, anomaly detection, and streaming analytics with Delta Lake and Spark Structured Streaming. Starting with an overview of data science use cases across different organizations and industries, you will then be introduced to feature stores, feature tables, and how to access them. You will see why AutoML is important and how to create a baseline model with AutoML within Databricks. Utilizing the MLflow model registry to manage model versioning and transition to production will be covered, along with detecting and protecting against model drift in production environments. By the end of the book, you will know how to set up your Databricks ML development and deployment as a CI/CD pipeline.
What you will learn * Perform natural language processing, computer vision, and more * Explore AutoML, Feature Store, and MLStudio on Databricks * Dive deep into the complete model development life cycle * Experience implementing repeatable MLOps pipelines using MLflow * Track model training and key metrics * Explore real-time ML, anomaly detection, and streaming analytics * Learn how to handle model drift Who This Book Is For In this book we are going to specifically focus on the tools catering to the data scientist persona. Readers who want to learn how to successfully build and deploy end-to-end data science projects using the Databricks cloud-agnostic unified analytics platform will benefit from this book, along with AI and machine learning practitioners.
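The model registry workflow the blurb describes centers on versioning models and moving them through stages such as Staging and Production. A toy sketch of that idea in plain Python (the class and method names are invented for illustration; this is not the MLflow API, only the concept behind it):

```python
# Hypothetical toy registry illustrating versioning and stage transitions,
# the workflow a model registry such as MLflow's supports. Not a real API.
class ToyModelRegistry:
    STAGES = {"None", "Staging", "Production", "Archived"}

    def __init__(self):
        self.versions = {}  # version number -> current stage

    def register(self):
        """Register a new model version; it starts in stage 'None'."""
        version = len(self.versions) + 1
        self.versions[version] = "None"
        return version

    def transition(self, version, stage):
        """Move a version to a stage, archiving any prior Production version."""
        if stage not in self.STAGES:
            raise ValueError(f"unknown stage: {stage}")
        if stage == "Production":
            for v, s in self.versions.items():
                if s == "Production":
                    self.versions[v] = "Archived"
        self.versions[version] = stage

registry = ToyModelRegistry()
v1 = registry.register()
registry.transition(v1, "Production")
v2 = registry.register()
registry.transition(v2, "Production")  # v1 is archived automatically
print(registry.versions)
```

Promoting a new version while archiving the old one is what lets a CI/CD pipeline roll models forward without ever leaving two versions serving Production at once.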
Three books by the bestselling authors on Data Warehousing The most authoritative guides from the inventor of the technique, all for a value price. The Data Warehouse Toolkit, 3rd Edition (9781118530801) Ralph Kimball invented a data warehousing technique called "dimensional modeling" and popularized it in his first Wiley book, The Data Warehouse Toolkit. Since this book was first published in 1996, dimensional modeling has become the most widely accepted technique for data warehouse design. Over the past 10 years, Kimball has improved on his earlier techniques and created many new ones. In this 3rd edition, he provides a comprehensive collection of all of these techniques, from basic to advanced. The Data Warehouse Lifecycle Toolkit, 2nd Edition (9780470149775) Complete coverage of best practices from data warehouse project inception through ongoing program management. Updates industry best practices to be in sync with current recommendations of Kimball Group. Streamlines the lifecycle methodology to be more efficient and user-friendly. The Data Warehouse ETL Toolkit (9780764567575) shows data warehouse developers how to effectively manage the ETL (Extract, Transform, Load) phase of the data warehouse development lifecycle. The authors show developers the best methods for extracting data from scattered sources throughout the enterprise, removing obsolete, redundant, and inaccurate data, transforming the remaining data into correctly formatted data structures, and then physically loading them into the data warehouse. This book provides complete coverage of proven, time-saving ETL techniques. It begins with a quick overview of ETL fundamentals and the role of the ETL development team. It then quickly moves into an overview of the ETL data structures, both relational and dimensional. The authors show how to build useful dimensional structures, providing practical examples of beginning through advanced techniques.