Initiatives, such as INSPIRE and the US DHS Geospatial Data Model, are working to develop a rich set of standards that will create harmonized models and themes for the spatial information infrastructure. However, this is only the first step. Semantically meaningful models must still be developed in order to stimulate interoperability. Creating Spatial Information Infrastructures (SII) presents solutions to the problems preventing the launch of a truly effective SII. Leading experts in SII development present a complete overview of SII, including user and application needs, theoretical and technological foundations, and examples of realized working SIIs. The book includes semantic applications in each discussion and explains their importance to the future of geo-information standardization. Offering practical solutions to technical and nontechnical obstacles, this book provides the tools needed to take the next step toward a working semantic web, one that will revolutionize the way the world accesses and utilizes spatial information.
Learn the intricate workings of DAX and the mechanics that are necessary to solve advanced Power BI challenges. This book is all about DAX (Data Analysis Expressions), the formula language used in Power BI, Microsoft's leading self-service business intelligence application, and covers other products such as PowerPivot and SQL Server Analysis Services Tabular. You will learn how to leverage the advanced applications of DAX to solve complex tasks. Often a task seems complex due to a lack of understanding, or a misunderstanding, of core principles and of how certain components interact with each other. The authors of this book use solutions and examples to teach you how to solve complex problems. They explain the intricate workings of important concepts such as Filter Context and Context Transition. You will learn how Power BI, through combining DAX building blocks (such as measures, table filtering, and data lineage), can yield extraordinary analytical power. Throughout Pro DAX with Power BI these building blocks are used to create and compose solutions for advanced DAX problems, so you can independently build solutions to your own complex problems and gain valuable insight from your data. What You Will Learn Understand the intricate workings of DAX to solve advanced problems Deconstruct problems into manageable parts in order to create your own recipes Apply predefined solutions for addressing problems, and link back step-by-step to the mechanics of DAX, to know the foundation of this powerful query language Get fully on board with DAX, a new and evolving language, by learning best practices Who This Book Is For Anyone who wants to use Power BI to build advanced and complex models. Some experience writing DAX is helpful, but not essential if you have experience with other data query languages such as MDX or SQL.
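DAX itself runs only inside the tabular engine, but the filter-context idea this blurb centers on can be sketched in plain Python. The table, column names, and figures below are invented for illustration; real DAX filter-context semantics are considerably richer.

```python
# Illustrative sketch of DAX "filter context": a measure is re-evaluated
# over whichever rows the current filter context leaves visible.
# (Data and column names are invented; this is not DAX.)

sales = [
    {"region": "East", "year": 2023, "amount": 100},
    {"region": "East", "year": 2024, "amount": 150},
    {"region": "West", "year": 2024, "amount": 200},
]

def total_sales(filter_context):
    """A 'measure': sums amount over the rows visible in the filter context."""
    rows = [
        r for r in sales
        if all(r[col] in allowed for col, allowed in filter_context.items())
    ]
    return sum(r["amount"] for r in rows)

print(total_sales({}))                                    # no filters: 450
print(total_sales({"region": {"East"}}))                  # 250
print(total_sales({"region": {"East"}, "year": {2024}}))  # 150
```

The same measure definition yields different results purely because the surrounding filter context changes, which is the behavior the book's Filter Context and Context Transition chapters formalize.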
This textbook covers both fundamental and advanced Java database programming techniques for beginning and experienced students as well as programmers (courses related to database programming in Java with the Apache NetBeans IDE 12 environment). A sample SQL Server 2019 Express database, CSE_DEPT, is created and implemented in all example projects throughout this textbook. Over 40 real sample database programming projects are covered in this textbook with detailed illustrations and explanations to help students understand the key techniques and programming technologies. Chapters include homework and selected solutions to strengthen and improve students' learning and understanding of topics they study in the classroom. Both Java desktop and Web applications with SQL Server database programming techniques are discussed and analyzed. Some updated Java techniques, such as Java Server Pages (JSP), Java Server Faces (JSF), Java Web Service (JWS), JavaServer Pages Standard Tag Library (JSTL), JavaBeans and Java API for XML Web Services (JAX-WS) are also discussed and implemented in the real projects developed in this textbook. This textbook targets mainly advanced-level students in computer science, but it also targets entry-level students in computer science and information systems. Programmers, software engineers and researchers will also find this textbook useful as a reference for their projects.
The goal of this book is to provide a comprehensive and systematic introduction to the important and highly applicable method of data refinement and the simulation methods used for proving its correctness. The authors concentrate in the first part on the general principles needed to prove data refinement correct. They begin with an explanation of the fundamental notions, showing that data refinement proofs reduce to proving simulation. The topics of Hoare Logic and the Refinement Calculus are introduced and a general theory of simulations is developed and related to them. Accessibility and comprehension are emphasized in order to guide newcomers to the area. The book's second part contains a detailed survey of important methods in this field, such as VDM, and the methods due to Abadi & Lamport, Hehner, Lynch and Reynolds, Back's refinement calculus and Z. All these methods are carefully analyzed, and shown to be either incomplete, with counterexamples to their application, or to be always applicable whenever data refinement holds. This is shown by proving, for the first time, that all these methods can be described and analyzed in terms of two simple notions: forward and backward simulation. The book is self-contained, going from advanced undergraduate level and taking the reader to the state of the art in methods for proving simulation.
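The forward-simulation notion at the heart of the book's analysis can be illustrated with a small executable sketch. Python is used here rather than the book's formal notation, and the set/list refinement below is a standard textbook-style example, not one taken from the book itself.

```python
# A minimal sketch of forward simulation for data refinement.
# Abstract state: a set. Concrete state: a list that may hold duplicates.
# Abstraction function: the set of elements currently in the list.
import random

def abstract_insert(s, x):
    return s | {x}

def concrete_insert(l, x):
    return l + [x]  # duplicates allowed in the concrete representation

def abstraction(l):
    return set(l)

# Forward simulation for insert: executing a concrete step and then
# abstracting gives the same abstract state as abstracting first and
# executing the corresponding abstract step.
for _ in range(100):
    l = [random.randint(0, 5) for _ in range(random.randint(0, 8))]
    x = random.randint(0, 5)
    assert abstraction(concrete_insert(l, x)) == abstract_insert(abstraction(l), x)

print("forward simulation holds for insert on sampled states")
```

A randomized check like this is of course no substitute for the proofs the book develops; it only makes the commuting-diagram shape of the simulation condition concrete.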
This book provides a comprehensive overview of the research on anomaly detection with respect to context and situational awareness, aiming to provide a better understanding of how context information influences anomaly detection. Each chapter identifies advanced anomaly detection techniques and the key assumptions used by each model to differentiate between normal and anomalous behavior. When applying a given model to a particular application, these assumptions can be used as guidelines to assess the effectiveness of the model in that domain. Each chapter provides an advanced deep content understanding and anomaly detection algorithm, and then shows how the proposed approach deviates from the basic techniques. Further, each chapter describes the advantages and disadvantages of the algorithm. The final chapters provide a discussion of the computational complexity of the models and of graph computational frameworks such as Google TensorFlow and H2O, because this is an important issue in real application domains. This book provides a better understanding of the different directions in which research has been done on deep semantic analysis and situational assessment using deep learning for anomaly detection, and of how methods developed in one area can be applied in other domains. This book seeks to provide both cyber analytics practitioners and researchers with up-to-date and advanced knowledge of cloud-based frameworks for deep semantic analysis and advanced anomaly detection using cognitive and artificial intelligence (AI) models.
This book explores a broad cross section of research and actual case studies to draw out new insights that may be used to build a benchmark for IT security professionals. This research takes a deeper dive beneath the surface of the analysis to uncover novel ways to mitigate data security vulnerabilities, connect the dots and identify patterns in the data on breaches. This analysis will assist security professionals not only in benchmarking their risk management programs but also in identifying forward looking security measures to narrow the path of future vulnerabilities.
Apply the new Query Store feature to identify and fix poorly performing queries in SQL Server. Query Store is an important and recent feature in SQL Server that provides insight into the details of query execution and how that execution has changed over time. Query Store helps to identify queries that aren't performing well, or that have regressed in their performance. Query Store provides detailed information such as wait stats that you need to resolve root causes, and it allows you to force the use of a known good execution plan. With SQL Server 2017 and later you can automate the correction of regressions in performance. Query Store for SQL Server 2019 helps you protect your database's performance during upgrades of applications or versions of SQL Server. The book provides fundamental information on how Query Store works and best practices for implementation and use. You will learn to run and interpret built-in reports, configure automatic plan correction, and troubleshoot queries using Query Store when needed. Query Store for SQL Server 2019 helps you master Query Store and bring value to your organization through consistent query execution times and automated correction of regressions. What You'll Learn Apply best practices in implementing Query Store on production servers Detect and correct regressions in query performance Lower the risk of performance degradation following an upgrade Use tools and techniques to get the most from Query Store Automate regression correction and other uses of Query Store Who This Book Is For SQL Server developers and administrators responsible for query performance on SQL Server. Anyone responsible for identifying poorly performing queries will be able to use Query Store to find these queries and resolve the underlying issues.
This class-tested textbook is designed for a semester-long graduate or senior undergraduate course on Computational Health Informatics. The focus of the book is on computational techniques that are widely used in health data analysis and health informatics, and it integrates computer science and clinical perspectives. This book prepares computer science students for careers in computational health informatics and medical data analysis. Features Integrates computer science and clinical perspectives Describes various statistical and artificial intelligence techniques, including machine learning techniques such as clustering of temporal data, regression analysis, neural networks, HMM, decision trees, SVM, and data mining, all of which are techniques widely used in health-data analysis Describes computational techniques such as multidimensional and multimedia data representation and retrieval, ontology, patient-data deidentification, temporal data analysis, heterogeneous databases, medical image analysis and transmission, biosignal analysis, pervasive healthcare, automated text-analysis, health-vocabulary knowledgebases and medical information-exchange Includes bioinformatics and pharmacokinetics techniques and their applications to vaccine and drug development
In this provocative and ground-breaking book, Keith Devlin argues that in order to obtain a deeper understanding of the nature of intelligence and knowledge acquisition, we must broaden our concept of logic. Classical logic, beginning with the work of Aristotle, has developed into a powerful and rigorous mathematical theory with many applications in mathematics and computer science, but it has proved woefully inadequate in the search for artificial intelligence. The new kind of logic, also mathematically based, outlined by Professor Devlin is the culmination of collaborative research among some of the world's leading logicians, philosophers, linguists, psychologists, and computer scientists. It introduces the concepts of infons, quanta of information, and situations, a dynamic generalization of sets, and is capable of handling the issues involved in human communication, thought, speech, and machine information processing.
Data Quality: The Accuracy Dimension is about assessing the quality of corporate data and improving its accuracy using the data profiling method. Corporate data is increasingly important as companies continue to find new ways to use it. Likewise, improving the accuracy of data in information systems is fast becoming a major goal as companies realize how much it affects their bottom line. Data profiling is a new technology that supports and enhances the accuracy of databases throughout major IT shops. Jack Olson explains data profiling and shows how it fits into the larger picture of data quality.
What's the Return on Investment (ROI) on data management? Sounds like an impossible question to answer? Not if you read this book and learn the value-added approach to managing enterprise resources and assets. This book defines the five interrelated best practices that comprise data management, and shows you, by example, how to successfully communicate data management ROI to senior management. The 17 cases we share will help you to identify opportunities to introduce data management into the strategic conversations that occur in the C-suite. You will gain a new perspective regarding the stewardship of your data assets and insulate your operations from the chaos, losses and risks that result from traditional approaches to technological projects. And you will learn how to protect yourself from legal challenges resulting from out-sourced information technology projects gone badly due to incorrect project sequencing and focus. With the emerging acceptance and adoption of revised performance standards, your organisation will be better prepared to face the coming big data deluge! The book contains four chapters: Chapter 1 gives a somewhat unique perspective on the practice of leveraging data. We describe the motivations and delineate the specific challenges preventing most organisations from making substantial progress in this area; Chapter 2 presents 11 cases where leveraging data has produced positive financial results that can be presented in language of immediate interest to C-level executives. To the degree possible, we have quantified the effect that data management has had in terms that will be meaningful to them also; Chapter 3 describes five instances taken from the authors' experiences with various governmental defence departments.
The lessons in this section, however, can be equally applied to many non-profit and non-defence governmental organisations; Chapter 4 speaks specifically to the interaction of data management practices, in terms of both information technology projects and legal responsibilities. Reading it can help your organisation to avoid a number of perils, stay out of court and better vet contractors, experts and other helpers who play a role in an organisation's information technology development.
The book focuses on the power of business blockchain. It gives an overview of blockchain in traditional business, marketing, accounting and business intelligence. The book provides a detailed working knowledge of blockchain, use cases of blockchain in business, cryptocurrency and Initial Coin Offering (ICO), along with the risks associated with them. The book also covers the detailed study of decentralization, mining, consensus, smart contracts, concepts and workings of distributed ledgers and hyperledgers, as well as many other important concepts. It also details the security and privacy aspects of blockchain. The book is beneficial for readers who are preparing for their business careers, those who are working with small-scale businesses and startups, and helpful for business executives, managers, entrepreneurs, bankers, government officials and legal professionals who are looking to blockchain for secure financial transactions. The book will also be beneficial for researchers and students who want to study the latest developments of blockchain.
From Visual Surveillance to Internet of Things: Technology and Applications is an invaluable resource for students, academicians and researchers to explore the utilization of Internet of Things with visual surveillance and its underlying technologies in different application areas. Using a series of present and future applications - business insights, indoor-outdoor security, smart grids, human detection and tracking, intelligent traffic monitoring, e-health departments and many more - this book will help readers obtain a deeper knowledge of implementing IoT with visual surveillance. The book offers comprehensive coverage of the most essential topics, including: The rise of machines and communications to IoT (3G, 5G) Tools and technologies of IoT with visual surveillance IoT with visual surveillance for real-time applications IoT architectures Challenging issues and novel solutions for realistic applications Mining and tracking of motion-based object data Image processing and analysis in a unified framework to understand both IoT and computer vision applications This book will be an ideal resource for IT professionals, researchers, under- or post-graduate students, practitioners, and technology developers who are interested in gaining a deeper knowledge of implementing IoT with visual surveillance, critical application domains, technologies, and solutions to handle relevant challenges. Dr. Lavanya Sharma is an Assistant Professor in the Amity Institute of Information Technology at Amity University UP, Noida, India. She is a recipient of several prestigious awards during her academic career. She is an active nationally-recognized researcher who has published numerous papers in her field. She has contributed as an Organizing Committee member and session chair at Springer and IEEE conferences. Prof. Pradeep K. Garg worked as a Vice Chancellor, Uttarakhand Technical University, Dehradun.
Presently he is working in the department of Civil Engineering, IIT Roorkee as a professor. Prof. Garg has published more than 300 technical papers in national and international conferences and journals. He has completed 26 research projects funded by various government agencies, guided 27 PhD candidates, and provided technical services to 84 consultancy projects on various aspects of Civil Engineering.
This book offers practical advice on managing enterprise modeling (EM) projects and facilitating participatory EM sessions. Modeling activities often involve groups of people, and models are created in a participatory way. Ensuring that this is done efficiently requires dedicated individuals who know how to organize modeling projects and sessions, how to manage discussions during these sessions, and what aspects influence the success and efficiency of modeling in practice. The book also includes a summary of the theoretical background to EM, although participatory modeling can also be used in conjunction with other methods that are not made for EM, such as those made for goal-oriented requirements engineering and information systems analysis. The first four chapters present an overview of enterprise modeling from various viewpoints (including methods, processes and organizational challenges), providing a background for those that need to refresh their basic knowledge. The next six chapters form the core of the book and detail the roles and competences needed in an EM project, typical stakeholder behaviors and how to handle them, tools and methods for managing participatory modeling and facilitation, and how to train modeling experts for these social aspects of modeling. Lastly, a concluding chapter presents a summary and an outlook on current research in participatory EM. This book is intended for anybody who wants to learn more about how to facilitate participatory modeling in practice and how to set up and carry out EM projects. It does not require any in-depth knowledge about specific EM methods and tools, and can be used by students and lecturers for courses on participatory modeling, and by practitioners wanting to extend their knowledge of social and organizational topics to become an experienced facilitator and EM project manager.
This, the 38th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains extended and revised versions of six papers selected from the 68 contributions presented at the 27th International Conference on Database and Expert Systems Applications, DEXA 2016, held in Porto, Portugal, in September 2016. Topics covered include query personalization in databases, data anonymization, similarity search, computational methods for entity resolution, array-based computations in big data analysis, and pattern mining.
C# developers, here's your opportunity to learn the ins and outs of Entity Framework Core, Microsoft's recently redesigned object-relational mapper. Benefit from hands-on learning that will teach you how to tackle frustrating database challenges, such as workarounds to missing features in Entity Framework Core, and learn how to optimize the performance of your applications, head-on! Modern Data Access with Entity Framework Core teaches best practices, guidance, and shortcuts that will significantly reduce the amount of resources you internally dedicate to programming data access code. The proven methods and tools taught in this book, such as how to get better performance, and the ability to select the platform of your choice, will save you valuable time and allow you to create seamless data access. Dive into succinct guidance that covers the gamut, from installing Entity Framework Core, reverse engineering, forward engineering (including schema migrations), and data reading and modification with LINQ, Dynamic LINQ, SQL, Stored Procedures, and Table-Valued Functions, to using third-party products such as LINQPad, Entity Developer, Entity Framework Profiler, EFPlus, and AutoMapper. You'll also appreciate excerpts of conceptual software architecture discussion around Entity Framework Core that might otherwise take years to learn.
What You'll Learn Understand the core concepts of Entity Framework Core, as well as how to model existing databases (reverse engineering) and generate database schemas from object models (forward engineering) Study real-world case studies for hands-on EF Core instruction Get up to speed with valuable database access scenarios and code samples Discover workarounds to augment missing features in Entity Framework Core Use Entity Framework Core to write mobile apps Bonus online appendix covers Entity Framework Core 2.1 release updates Who This Book Is For Software developers who have basic experience with .NET and C#, as well as some understanding of relational databases. Knowledge of predecessor technologies such as ADO.NET and the classic ADO.NET Entity Framework is not necessary to learn from this book.
This book is the best way to make the leap from SQL-92 to SQL:1999, but it is much more than just a simple bridge between the two. The latest from celebrated SQL experts Jim Melton and Alan Simon, "SQL:1999" is a comprehensive, eminently practical account of SQL's latest incarnation and a potent distillation of the details required to put it to work. Written to accommodate both novice and experienced SQL users, "SQL:1999" focuses on the language's capabilities, from the basic to the advanced, and the way that real applications take advantage of them. Throughout, the authors illustrate features and techniques with clear and often entertaining references to their own custom database, which can be downloaded from the companion Web site.
Drawn from the US National Science Foundation's Symposium on Next Generation of Data Mining and Cyber-Enabled Discovery for Innovation (NGDM 07), Next Generation of Data Mining explores emerging technologies and applications in data mining as well as potential challenges faced by the field. Gathering perspectives from top experts across different disciplines, the book debates upcoming challenges and outlines computational methods. The contributors look at how ecology, astronomy, social science, medicine, finance, and more can benefit from the next generation of data mining techniques. They examine the algorithms, middleware, infrastructure, and privacy policies associated with ubiquitous, distributed, and high performance data mining. They also discuss the impact of new technologies, such as the semantic web, on data mining and provide recommendations for privacy-preserving mechanisms. The dramatic increase in the availability of massive, complex data from various sources is creating computing, storage, communication, and human-computer interaction challenges for data mining. Providing a framework to better understand these fundamental issues, this volume surveys promising approaches to data mining problems that span an array of disciplines.
The First Book to Describe the Technical and Practical Elements of Chemical Text Mining Explores the development of chemical structure extraction capabilities and how to incorporate these technologies in daily research work For scientific researchers, finding too much information on a subject, not finding enough information, or not being able to access full text documents often costs them time, money, and quality. Addressing these concerns, Chemical Information Mining: Facilitating Literature-Based Discovery presents strategic ideas for properly selecting and successfully using the best text mining tools for scientific research. Links chemical and biological entities at the heart of life science research The book focuses on information extraction issues, highlights available solutions, and underscores the value of these solutions to academic and commercial scientists. After introducing the drivers behind chemical text mining, it discusses chemical semantics. The contributors describe the tools that identify and convert chemical names and images to structure-searchable information. They also explain natural language processing, name entity recognition concepts, and semantic web technologies. Following a section on current trends in the field, the book looks at where information mining approaches fit into the research needs within the life sciences. Shaping the future of scientific information and knowledge management By building knowledge and competency in the growing area of literature-based discovery, this book shows how text mining of the chemical literature can increase drug discovery opportunities and enhance life science research.
Data Structures and Abstractions with Java is suitable for one- or two-semester courses in data structures (CS-2) in the departments of Computer Science, Computer Engineering, Business, and Management Information Systems. This book is also useful for programmers and software engineers interested in learning more about data structures and abstractions. This is the most student-friendly data structures text available that introduces ADTs in individual, brief chapters - each with pedagogical tools to help students master each concept. Using the latest features of Java, this unique object-oriented presentation makes a clear distinction between specification and implementation to simplify learning, while providing maximum classroom flexibility. Teaching and Learning Experience This book will provide a better teaching and learning experience for you and your students. It will help: Aid comprehension and facilitate teaching with an approachable format and content organization: Material is organized into small segments that focus a reader's attention and provide greater instructional flexibility. Support learning with student-friendly pedagogy: In-text and online features help students master the material.
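The specification/implementation distinction this text emphasizes can be sketched briefly. The book's own examples are in Java; the Python sketch below, with invented names, is only illustrative of the same idea: an ADT interface states what operations do, while a separate class commits to how.

```python
# Sketch of the specification vs. implementation split for an ADT.
# (Illustrative Python; the book itself uses Java interfaces and classes.)
from abc import ABC, abstractmethod

class BagInterface(ABC):
    """Specification: WHAT a bag does, with no commitment to HOW."""
    @abstractmethod
    def add(self, item): ...
    @abstractmethod
    def count(self, item): ...
    @abstractmethod
    def size(self): ...

class ListBag(BagInterface):
    """One implementation choice: back the bag with a Python list."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def count(self, item):
        return self._items.count(item)
    def size(self):
        return len(self._items)

bag = ListBag()
bag.add("apple"); bag.add("apple"); bag.add("pear")
print(bag.count("apple"), bag.size())  # 2 3
```

Client code written against BagInterface keeps working if ListBag is later swapped for, say, a dictionary-backed implementation, which is exactly the classroom flexibility the specification/implementation separation buys.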
Get SQL Server up and running on the Linux operating system and containers. No database professional managing or developing SQL Server on Linux will want to be without this deep and authoritative guide by one of the most respected experts on SQL Server in the industry. Get an inside look at how SQL Server for Linux works through the eyes of an engineer on the team that made it possible. Microsoft SQL Server is one of the leading database platforms in the industry, and SQL Server 2017 offers developers and administrators the ability to run a database management system on Linux, offering proven support for enterprise-level features and without onerous licensing terms. Organizations invested in Microsoft and open source technologies are now able to run a unified database platform across all their operating system investments. Organizations are further able to take full advantage of containerization through popular platforms such as Docker and Kubernetes. Pro SQL Server on Linux walks you through installing and configuring SQL Server on the Linux platform. The author is one of the principal architects of SQL Server for Linux, and brings a corresponding depth of knowledge that no database professional or developer on Linux will want to be without. Throughout this book are internals of how SQL Server on Linux works, including an in-depth look at its innovative architecture. The book covers day-to-day management and troubleshooting, including diagnostics and monitoring, the use of containers to manage deployments, and the use of the self-tuning and in-memory capabilities. Also covered are performance capabilities, high availability, and disaster recovery along with security and encryption. The book covers the product-specific knowledge to bring SQL Server and its powerful features to life on the Linux platform, including coverage of containerization through Docker and Kubernetes.
What You'll Learn Learn about the history and internals of the unique SQL Server on Linux architecture Install and configure Microsoft's flagship database product on the Linux platform Manage your deployments using container technology through Docker and Kubernetes Know the basics of building databases, the T-SQL language, and developing applications against SQL Server on Linux Use tools and features to diagnose, manage, and monitor SQL Server on Linux Scale your application by learning the performance capabilities of SQL Server Deliver high availability and disaster recovery to ensure business continuity Secure your database from attack, and protect sensitive data through encryption Take advantage of powerful features such as Failover Clusters, Availability Groups, In-Memory Support, and SQL Server's Self-Tuning Engine Learn how to migrate your database from older releases of SQL Server and other database platforms such as Oracle and PostgreSQL Build and maintain schemas, and perform management tasks from both GUI and command line Who This Book Is For Developers and IT professionals who are new to SQL Server and wish to configure it on the Linux operating system. This book is also useful to those familiar with SQL Server on Windows who want to learn the unique aspects of managing SQL Server on the Linux platform and Docker containers. Readers should have a grasp of relational database concepts and be comfortable with the SQL language.
"E-Health Care Information Systems" is a comprehensive collection written by leading experts from a range of disciplines including medicine, health sciences, engineering, business information systems, general science, and computing technology. This easily followed text provides a theoretical framework with sound methodological approaches and is filled with numerous case examples. Topics include e-health records, e-public information systems, e-networks and surveys, and general and specific applications of e-health such as e-rehabilitation, e-medicine, e-homecare, e-diagnosis support systems, and e-health intelligence. "E-Health Care Information Systems" also covers strategies in e-health care technology management, e-security issues, and the impacts of e-technologies. In addition, this book reviews new and emerging technologies such as mobile health, virtual reality, and nanotechnology, and shows how to harness the power of e-technologies for real-world applications.
Utilize this practical and easy-to-follow guide to modernize traditional enterprise data warehouse and business intelligence environments with next-generation big data technologies. Next-Generation Big Data takes a holistic approach, covering the most important aspects of modern enterprise big data. The book covers not only the main technology stack but also the next-generation tools and applications used for big data warehousing, data warehouse optimization, real-time and batch data ingestion and processing, real-time data visualization, big data governance, data wrangling, big data cloud deployments, and distributed in-memory big data computing. Finally, the book provides extensive and detailed coverage of big data case studies from Navistar, Cerner, British Telecom, Shopzilla, Thomson Reuters, and Mastercard. What You'll Learn Install Apache Kudu, Impala, and Spark to modernize enterprise data warehouse and business intelligence environments, complete with real-world, easy-to-follow examples and practical advice Integrate HBase, Solr, Oracle, SQL Server, MySQL, Flume, Kafka, HDFS, and Amazon S3 with Apache Kudu, Impala, and Spark Use StreamSets, Talend, Pentaho, and CDAP for real-time and batch data ingestion and processing Utilize Trifacta, Alteryx, and Datameer for data wrangling and interactive data processing Turbocharge Spark with Alluxio, a distributed in-memory storage platform Deploy big data in the cloud using Cloudera Director Perform real-time data visualization and time series analysis using Zoomdata, Apache Kudu, Impala, and Spark Understand enterprise big data topics such as big data governance, metadata management, data lineage, impact analysis, and policy enforcement, and how to use Cloudera Navigator to perform common data governance tasks Implement big data use cases such as big data warehousing, data warehouse optimization, Internet of Things, real-time data ingestion and analytics, complex event processing, and scalable predictive modeling
Study real-world big data case studies from innovative companies, including Navistar, Cerner, British Telecom, Shopzilla, Thomson Reuters, and Mastercard Who This Book Is For BI and big data warehouse professionals interested in gaining practical and real-world insight into next-generation big data processing and analytics using Apache Kudu, Impala, and Spark; and those who want to learn more about other advanced enterprise topics
You may like...
CompTIA Data+ DA0-001 Exam Cram | Akhil Behl, Sivasubramanian | Digital product license key | R1,062 | Discovery Miles 10 620
Machine Learning and Artificial… | Tawseef Ayoub Shaikh, Saqib Hakak, … | Hardcover | R4,355 | Discovery Miles 43 550
Handbook of Data Science with Semantic… | Archana Patel, Narayan C Debnath | Hardcover | R7,900 | Discovery Miles 79 000
BTEC Nationals Information Technology… | Jenny Phillips, Alan Jarvis, … | Paperback | R1,056 | Discovery Miles 10 560
ISE Database System Concepts | Abraham Silberschatz, Henry Korth, … | Paperback | R2,043 | Discovery Miles 20 430
Technological Prospects and Social… | Lavanya Sharma, Pradeep Kumar Garg | Hardcover | R2,742 | Discovery Miles 27 420