Market Basket Analysis (MBA) provides the ability to continually monitor the affinities of a business and can help an organization achieve a key competitive advantage. Time-variant data enables data warehouses to directly associate past events with the participants in each individual event. In the past, however, using these powerful tools in tandem led to performance degradation and resulted in unactionable and even damaging information. Data Warehouse Designs: Achieving ROI with Market Basket Analysis and Time Variance presents an innovative, soup-to-nuts approach that successfully combines what was previously incompatible, without degradation, using the relational architecture already in place. Built around two main chapters, Market Basket Solution Definition and Time Variant Solution Definition, it provides a tangible how-to design that can be used to facilitate MBA within the context of a data warehouse. The book:
- Presents a solution for creating home-grown MBA data marts
- Includes database design solutions for the Oracle, DB2, SQL Server, and Teradata relational database management systems (RDBMS)
- Explains how to extract, transform, and load data used in MBA and Time Variant solutions
The book uses standard RDBMS platforms, proven database structures, standard SQL, and hardware, software, and practices already accepted in the data warehousing community to fill the gaps left by most conceptual discussions of MBA. It employs a form and language intended for a data warehousing audience to explain how data is practically delivered, stored, and viewed. Offering a comprehensive explanation of the applications that provide, store, and use MBA data, Data Warehouse Designs gives you the language and concepts needed to request and receive information that is relevant and actionable.
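As a minimal illustration of the affinity computation MBA rests on (not the book's warehouse-scale design), the following Python/pandas sketch counts how often item pairs co-occur in the same basket; the data and column names are invented:

```python
import pandas as pd
from itertools import combinations
from collections import Counter

# Hypothetical transaction lines: one row per (basket, item).
lines = pd.DataFrame({
    "basket_id": [1, 1, 1, 2, 2, 3, 3],
    "item":      ["bread", "butter", "milk", "bread", "butter", "milk", "eggs"],
})

# Count co-occurring item pairs across baskets.
pair_counts = Counter()
for _, items in lines.groupby("basket_id")["item"]:
    for pair in combinations(sorted(items.unique()), 2):
        pair_counts[pair] += 1

# Support of a pair = co-occurrence count / total number of baskets.
n_baskets = lines["basket_id"].nunique()
for pair, count in pair_counts.most_common(3):
    print(pair, round(count / n_baskets, 2))
```

At warehouse scale the same counting would be pushed into SQL against the transaction fact table, which is the setting the book addresses.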
Building upon his earlier book that detailed agile data warehousing programming techniques for the Scrum master, Ralph's latest work illustrates the agile interpretations of the remaining software engineering disciplines:
- Requirements management benefits from streamlined templates that not only define projects quickly but also ensure nothing essential is overlooked.
- Data engineering receives two new "hyper modeling" techniques, yielding data warehouses that can be easily adapted when requirements change, without having to invest in ruinously expensive data-conversion programs.
- Quality assurance advances with not only a stereoscopic top-down and bottom-up planning method, but also the incorporation of the latest in automated test engines.
Use this step-by-step guide to deepen your own application development skills through self-study, show your teammates the world's fastest and most reliable techniques for creating business intelligence systems, or ensure that the IT department working for you is building your next decision support system the right way.
The Data Vault was invented by Dan Linstedt at the U.S. Department of Defense, and the standard has been successfully applied to data warehousing projects at organizations of all sizes, from small businesses to large corporations. Due to its simplified design, which is adapted from nature, the Data Vault 2.0 standard helps prevent typical data warehousing failures. Building a Scalable Data Warehouse covers everything one needs to know to create a scalable data warehouse end to end, including a presentation of the Data Vault modeling technique, which provides the foundations for creating a technical data warehouse layer. The book discusses how to build the data warehouse incrementally using the agile Data Vault 2.0 methodology. In addition, readers will learn how to create the input layer (the stage layer) and the presentation layer (data mart) of the Data Vault 2.0 architecture, including implementation best practices. Drawing upon years of practical experience and using numerous examples and an easy-to-understand framework, Dan Linstedt and Michael Olschimke discuss:
- How to load each layer using SQL Server Integration Services (SSIS), including automation of the Data Vault loading processes
- Important data warehouse technologies and practices
- Data Quality Services (DQS) and Master Data Services (MDS) in the context of the Data Vault architecture
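One well-known Data Vault 2.0 convention is deriving hub keys as hashes of business keys, which decouples loads across layers and lets them run in parallel. A minimal Python sketch of that idea follows; the hash choice and normalization rules are illustrative assumptions, not the book's prescribed implementation:

```python
import hashlib

def hub_hash_key(*business_key_parts: str) -> str:
    """Derive a deterministic hub key from one or more business key parts.
    Normalization (trim + uppercase + ';' delimiter) is an assumed convention."""
    normalized = ";".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same business key always yields the same hub key, so hubs, links,
# and satellites can be loaded independently without key lookups.
print(hub_hash_key("ACME Corp"))        # single-part business key
print(hub_hash_key("ACME Corp", "DE"))  # composite business key
```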
This book examines how cloud-based services challenge the current application of antitrust and privacy laws in the EU and the US. The author looks at the elements of data centers, the way information is organized, and how antitrust, competition, and privacy laws in the US and the EU regulate cloud-based services and their market practices. She discusses how platform interoperability can drive incremental innovation, and the consequences of failing to promote radical innovation. She evaluates applications of predictive analysis based on big data, as well as the privacy-invasive conduct that can derive from them. She looks at the way antitrust and privacy laws approach consumer protection and how lawmakers can reach more balanced outcomes by understanding the technical background of cloud-based services.
This guide shows how to combine data science with social science to gain unprecedented insight into customer behavior, so you can change it. Joanne Rodrigues-Craig bridges the gap between predictive data science and statistical techniques that reveal why important things happen (why customers buy more, or why they immediately leave your site) so you can get more of the behaviors you want and fewer of the ones you don't. Drawing on extensive enterprise experience and deep knowledge of demographics and sociology, Rodrigues-Craig shows how to create better theories and metrics, so you can accelerate the process of gaining insight, altering behavior, and earning business value. You'll learn how to:
- Develop complex, testable theories for understanding individual and social behavior in web products
- Think like a social scientist and contextualize individual behavior in today's social environments
- Build more effective metrics and KPIs for any web product or system
- Conduct more informative and actionable A/B tests
- Explore causal effects, reflecting a deeper understanding of the differences between correlation and causation
- Alter user behavior in a complex web product
- Understand how relevant human behaviors develop, and the prerequisites for changing them
- Choose the right statistical techniques for common tasks such as multistate and uplift modeling
- Use advanced statistical techniques to model multidimensional systems
- Do all of this in R (with sample code available in a separate code manual)
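As a toy illustration of the kind of A/B test the book treats in far more depth (the book's own examples are in R; this sketch uses Python and statsmodels, with made-up counts):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical experiment: conversions out of visitors, control vs. variant.
conversions = [412, 475]
visitors    = [5000, 5000]

# Two-sided z-test for a difference in conversion rates.
stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```

A significant p-value here only establishes a difference in rates; the book's larger point is that explaining and changing the underlying behavior requires social-scientific theory, not just the test.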
This book constitutes the refereed proceedings of the 28th International Symposium on String Processing and Information Retrieval, SPIRE 2021, held in Lille, France, in October 2021.* The 14 full papers and 4 short papers presented together with 2 invited papers in this volume were carefully reviewed and selected from 30 submissions. They cover topics such as data structures, algorithms, information retrieval, compression, combinatorics on words, and computational biology. *The symposium was held virtually.
"This text should be required reading for everyone in contemporary business." --Peter Woodhull, CEO, Modus21 "The one book that clearly describes and links Big Data concepts to business utility." --Dr. Christopher Starr, PhD "Simply, this is the best Big Data book on the market!" --Sam Rostam, Cascadian IT Group "...one of the most contemporary approaches I've seen to Big Data fundamentals..." --Joshua M. Davis, PhD The Definitive Plain-English Guide to Big Data for Business and Technology Professionals Big Data Fundamentals provides a pragmatic, no-nonsense introduction to Big Data. Best-selling IT author Thomas Erl and his team clearly explain key Big Data concepts, theory and terminology, as well as fundamental technologies and techniques. All coverage is supported with case study examples and numerous simple diagrams. The authors begin by explaining how Big Data can propel an organization forward by solving a spectrum of previously intractable business problems. Next, they demystify key analysis techniques and technologies and show how a Big Data solution environment can be built and integrated to offer competitive advantages. Discovering Big Data's fundamental concepts and what makes it different from previous forms of data analysis and data science Understanding the business motivations and drivers behind Big Data adoption, from operational improvements through innovation Planning strategic, business-driven Big Data initiatives Addressing considerations such as data management, governance, and security Recognizing the 5 "V" characteristics of datasets in Big Data environments: volume, velocity, variety, veracity, and value Clarifying Big Data's relationships with OLTP, OLAP, ETL, data warehouses, and data marts Working with Big Data in structured, unstructured, semi-structured, and metadata formats Increasing value by integrating Big Data resources with corporate performance monitoring Understanding how Big Data leverages distributed and parallel processing Using NoSQL and other technologies to meet Big Data's distinct data processing requirements Leveraging statistical approaches of quantitative and qualitative analysis Applying computational analysis methods, including machine learning
Multi-Modal User Interactions in Controlled Environments investigates the capture and analysis of users' multimodal behavior (mainly eye gaze, eye fixation, eye blink, and body movements) within a real controlled environment (a controlled supermarket, a personal environment) in order to adapt the response of the computer/environment to the user. Such data is captured using non-intrusive sensors (for example, cameras in the stands of a supermarket) installed in the environment. This multi-modal, video-based behavioral data is analyzed to infer user intentions while assisting users in their day-to-day tasks by adapting the system's response to their requirements seamlessly. The book also focuses on the presentation of information to the user. Multi-Modal User Interactions in Controlled Environments is designed for professionals in industry, including the domains of security and interactive web television, and is also suitable for graduate-level students in computer science and electrical engineering.
A practical guide to making good decisions in a world of missing data In the era of big data, it is easy to imagine that we have all the information we need to make good decisions. But in fact the data we have are never complete, and may be only the tip of the iceberg. Just as much of the universe is composed of dark matter, invisible to us but nonetheless present, the universe of information is full of dark data that we overlook at our peril. In Dark Data, data expert David Hand takes us on a fascinating and enlightening journey into the world of the data we don't see. Dark Data explores the many ways in which we can be blind to missing data and how that can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous. Examining a wealth of real-life examples, from the Challenger shuttle explosion to complex financial frauds, Hand gives us a practical taxonomy of the types of dark data that exist and the situations in which they can arise, so that we can learn to recognize and control for them. In doing so, he teaches us not only to be alert to the problems presented by the things we don't know, but also shows how dark data can be used to our advantage, leading to greater understanding and better decisions. Today, we all make decisions using data. Dark Data shows us all how to reduce the risk of making bad ones.
Manage and Automate Data Analysis with Pandas in Python. Today, analysts must manage data characterized by extraordinary variety, velocity, and volume. Using the open source Pandas library, you can use Python to rapidly automate and perform virtually any data analysis task, no matter how large or complex. Pandas can help you ensure the veracity of your data, visualize it for effective decision-making, and reliably reproduce analyses across multiple data sets. Pandas for Everyone, 2nd Edition, brings together practical knowledge and insight for solving real problems with Pandas, even if you're new to Python data analysis. Daniel Y. Chen introduces key concepts through simple but practical examples, incrementally building on them to solve more difficult, real-world data science problems, such as using regularization to prevent overfitting or using unsupervised machine learning methods to find the underlying structure in a data set. New features in the second edition include:
- Extended coverage of plotting and the seaborn data visualization library
- Expanded examples and resources
- Updated Python 3.9 code and package coverage, including the statsmodels and scikit-learn libraries
- Online bonus material on geopandas, Dask, and creating interactive graphics with Altair
Chen gives you a jumpstart on using Pandas with a realistic data set and covers combining data sets, handling missing data, and structuring data sets for easier analysis and visualization. He demonstrates powerful data cleaning techniques, from basic string manipulation to applying functions simultaneously across dataframes. Once your data is ready, Chen guides you through fitting models for prediction, clustering, inference, and exploration. He provides tips on performance and scalability and introduces you to the wider Python data analysis ecosystem. You will learn how to:
- Work with DataFrames and Series, and import or export data
- Create plots with matplotlib, seaborn, and pandas
- Combine data sets and handle missing data
- Reshape, tidy, and clean data sets so they're easier to work with
- Convert data types and manipulate text strings
- Apply functions to scale data manipulations
- Aggregate, transform, and filter large data sets with groupby
- Leverage Pandas' advanced date and time capabilities
- Fit linear models using statsmodels and scikit-learn libraries
- Use generalized linear modeling to fit models with different response variables
- Compare multiple models to select the "best" one
- Regularize to overcome overfitting and improve performance
- Use clustering in unsupervised machine learning
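For instance, the groupby aggregate-transform-filter pattern the book covers looks roughly like this (a minimal sketch with made-up data, not an excerpt from the book):

```python
import pandas as pd

# Made-up sales records.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west", "west"],
    "sales":  [100, 150, 80, 120, 90],
})

# Aggregate: total and mean sales per region.
summary = df.groupby("region")["sales"].agg(["sum", "mean"])

# Transform: center each sale against its region's mean.
df["sales_centered"] = df["sales"] - df.groupby("region")["sales"].transform("mean")

# Filter: keep only regions whose total sales exceed 250.
big_regions = df.groupby("region").filter(lambda g: g["sales"].sum() > 250)

print(summary, df, big_regions, sep="\n\n")
```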
This contributed volume discusses essential topics and fundamentals of Big Data emergency management, focusing primarily on the application of Big Data to emergency management. It walks the reader through the state of the art in different facets of the big disaster data field, including many elements that are important for these technologies to have real-world impact. The book brings together computational techniques from machine learning, communication network analysis, natural language processing, knowledge graphs, data mining, and information visualization, aiming at methods typically used for processing big emergency data. It also provides authoritative insights and highlights valuable lessons from distinguished authors who are leaders in this field. Emergencies are severe, large-scale, non-routine events that disrupt the normal functioning of a community or a society, causing widespread and overwhelming losses and impacts. Emergency management is the process of planning and taking actions to minimize the social and physical impact of emergencies and to reduce the community's vulnerability to their consequences. Information exchange before, during, and after disaster periods can greatly reduce the losses caused by an emergency: it allows people to make better use of available resources, such as relief materials and medical supplies, and it provides a channel through which reports on casualties and losses in each affected area can be delivered expeditiously. Big Data-driven emergency management refers to applying advanced data collection and analysis technologies to achieve more effective and responsive decision-making during emergencies. The book is intended for researchers, engineers, and computer scientists working in Big Data emergency management who need to deal with large and complex sets of data, as well as for advanced-level students interested in data-driven emergency/crisis/disaster management.
This two-volume set of IFIP AICT 583 and 584 constitutes the refereed proceedings of the 16th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2020, held in Neos Marmaras, Greece, in June 2020.* The 70 full papers and 5 short papers presented were carefully reviewed and selected from 149 submissions. They cover a broad range of topics related to technical, legal, and ethical aspects of artificial intelligence systems and their applications, and are organized in the following sections: Part I: classification; clustering - unsupervised learning - analytics; image processing; learning algorithms; neural network modeling; object tracking - object detection systems; ontologies - AI; and sentiment analysis - recommender systems. Part II: AI ethics - law; AI constraints; deep learning - LSTM; fuzzy algebra - fuzzy systems; machine learning; medical - health systems; and natural language. *The conference was held virtually due to the COVID-19 pandemic.
Publisher's Note: Products purchased from third-party sellers are not guaranteed by the publisher for quality, authenticity, or access to any online entitlements included with the product. Develop a custom, agile data warehousing and business intelligence architecture. Empower your users and drive better decision making across your enterprise with detailed instructions and best practices from an expert developer and trainer. The Data Warehouse Mentor: Practical Data Warehouse and Business Intelligence Insights shows how to plan, design, construct, and administer an integrated end-to-end DW/BI solution. Learn how to choose appropriate components, build an enterprise data model, configure data marts and data warehouses, establish data flow, and mitigate risk. Change management, data governance, and security are also covered in this comprehensive guide.
- Understand the components of BI and data warehouse systems
- Establish project goals and implement an effective deployment plan
- Build accurate logical and physical enterprise data models
- Gain insight into your company's transactions with data mining
- Input, cleanse, and normalize data using ETL (Extract, Transform, and Load) techniques (see the sketch after this list)
- Use structured input files to define data requirements
- Employ top-down, bottom-up, and hybrid design methodologies
- Handle security and optimize performance using data governance tools
Robert Laberge is the founder of several Internet ventures and a principal consultant for the IBM Industry Models and Assets Lab, which has a focus on data warehousing and business intelligence solutions.
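As a deliberately tiny Python sketch of the extract-cleanse-load pattern the book covers at enterprise scale (file, table, and column names are invented, and SQLite stands in for a real warehouse):

```python
import csv
import sqlite3

def etl(csv_path: str, db_path: str = "warehouse.db") -> None:
    """Extract rows from a CSV, cleanse and normalize them, load a staging table."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS stg_customers (email TEXT, country TEXT)")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            email = row["email"].strip().lower()          # cleanse
            country = row["country"].strip().upper()[:2]  # normalize to a 2-letter code
            if email:                                     # drop rows missing the key
                conn.execute("INSERT INTO stg_customers VALUES (?, ?)", (email, country))
    conn.commit()
    conn.close()
```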
This two-volume set, LNCS 11229-11230, constitutes the refereed proceedings of the Confederated International Conferences: Cooperative Information Systems, CoopIS 2018, Ontologies, Databases, and Applications of Semantics, ODBASE 2018, and Cloud and Trusted Computing, C&TC, held as part of OTM 2018 in October 2018 in Valletta, Malta. The 64 full papers presented together with 22 short papers were carefully reviewed and selected from 173 submissions. Every year, the OTM program covers data and Web semantics, distributed objects, Web services, databases, information systems, enterprise workflow and collaboration, ubiquity, interoperability, mobility, and grid and high-performance computing.
This two-volume set LNCS 11196 and LNCS 11197 constitutes the refereed proceedings of the 7th International Conference on Digital Heritage, EuroMed 2018, held in Nicosia, Cyprus, in October/November 2018. The 21 full papers, 47 project papers, and 29 short papers presented were carefully reviewed and selected from 537 submissions. The papers are organized in topical sections on 3D Digitalization, Reconstruction, Modeling, and HBIM; Innovative Technologies in Digital Cultural Heritage; Digital Cultural Heritage - Smart Technologies; The New Era of Museums and Exhibitions; Digital Cultural Heritage Infrastructure; Non-Destructive Techniques in Cultural Heritage Conservation; E-Humanities; Reconstructing the Past; Visualization, VR and AR Methods and Applications; Digital Applications for Materials Preservation in Cultural Heritage; and Digital Cultural Heritage Learning and Experiences.
Graph Databases in Action teaches readers everything they need to know to begin building and running applications powered by graph databases. Right off the bat, seasoned graph database experts introduce readers to just enough graph theory, the graph database ecosystem, and a variety of datastores. They also explore modelling basics in action with real-world examples, then go hands-on with querying, coding traversals, parsing results, and other essential tasks as readers build their own graph-backed social network app, complete with a recommendation engine. Key features:
- Graph database fundamentals
- An overview of the graph database ecosystem
- Relational vs. graph database modelling
- Querying graphs using Gremlin
- Real-world common graph use cases
The book is for readers with basic Java and application development skills in RDBMS systems such as Oracle, SQL Server, MySQL, and Postgres. No experience with graph databases is required. About the technology: graph databases store interconnected data in a more natural form, making them superior tools for representing data with rich relationships. Unlike in relational database management systems (RDBMS), where a more rigid view of data connections results in the loss of valuable insights, in graph databases data connections are first priority. Dave Bechberger has extensive experience using graph databases as a product architect and a consultant. He's spent his career leveraging cutting-edge technologies to build software in complex data domains such as bioinformatics, oil and gas, and supply chain management. He's an active member of the graph community and has presented on a wide variety of graph-related topics at national and international conferences. Josh Perryman is a technologist with over two decades of diverse experience building and maintaining complex systems, including high performance computing (HPC) environments. Since 2014 he has focused on graph databases, especially in distributed or big data environments, and he regularly blogs and speaks at conferences about graph databases.
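To make the recommendation-engine idea concrete: a friend-of-friend recommender ranks unconnected users by how many connections they share. The sketch below uses Python and networkx rather than the Gremlin stack the book teaches, with a made-up toy graph:

```python
from collections import Counter
import networkx as nx

# Toy social graph: edges are friendships.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "dave"), ("carol", "dave"), ("carol", "erin"),
])

def recommend_friends(g, user, k=3):
    """Rank non-friends by the number of friends they share with `user`."""
    candidates = Counter()
    for friend in g.neighbors(user):
        for fof in g.neighbors(friend):
            if fof != user and not g.has_edge(user, fof):
                candidates[fof] += 1
    return [name for name, _ in candidates.most_common(k)]

print(recommend_friends(G, "alice"))  # ['dave', 'erin']
```

In a graph database the same two-hop traversal is a single query; in an RDBMS it becomes a self-join that grows unwieldy as the hop count rises, which is the modeling contrast the book draws.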
This SpringerBrief reviews the knowledge engineering problem of engineering objectivity in top-k query answering; essentially, answers must be computed taking into account the user's preferences and a collection of (subjective) reports provided by other users. Each report is assumed to consist of a set of scores for a list of features, its author's preferences among the features, and other associated information. These pieces of information for every report are then combined, along with the querying user's preferences and their trust in each report, to rank the query results. Everyday examples of this setup are the online reviews found on sites like Amazon, TripAdvisor, and Yelp, among many others. Throughout this knowledge engineering effort the authors adopt the Datalog+/- family of ontology languages as the underlying knowledge representation and reasoning formalism, and investigate several alternative ways in which rankings can be derived, along with algorithms for top-k (atomic) query answering under these rankings. The brief also investigates assumptions under which these algorithms run in polynomial time in the data complexity. Since it contains a gentle introduction to the main building blocks (OBDA, Datalog+/-, and reasoning with preferences), it should be of value to students, researchers, and practitioners interested in the general problem of incorporating user preferences into related formalisms and tools, to practitioners who want to use ontology-based data access to leverage information contained in reviews of products and services for a better customer experience, and to researchers working in the areas of ontological languages, the Semantic Web, data provenance, and reasoning with preferences.
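As a toy, formalism-free illustration of that combination step (plain Python, not Datalog+/-; every weighting rule here is an assumption made for the example):

```python
def rank_top_k(reports, trust, prefs, k=2):
    """reports: item -> list of (reporter, {feature: score}) pairs.
    trust: reporter -> weight in [0, 1].
    prefs: feature -> importance weight of the querying user."""
    scores = {}
    for item, entries in reports.items():
        total, weight = 0.0, 0.0
        for reporter, feats in entries:
            t = trust.get(reporter, 0.0)
            s = sum(prefs.get(f, 0.0) * v for f, v in feats.items())
            total += t * s      # trust-weighted, preference-weighted score
            weight += t
        scores[item] = total / weight if weight else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:k]

reports = {
    "hotel_a": [("r1", {"clean": 0.9, "wifi": 0.4}), ("r2", {"clean": 0.6})],
    "hotel_b": [("r1", {"clean": 0.5, "wifi": 0.9})],
}
print(rank_top_k(reports, trust={"r1": 0.8, "r2": 0.3},
                 prefs={"clean": 0.7, "wifi": 0.3}))  # ['hotel_a', 'hotel_b']
```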
Hive makes life much easier for developers who work with the data stored and managed in Hadoop clusters, such as data warehouses. With this example-driven guide, you'll learn how to use the Hive infrastructure to provide data summarization, query, and analysis, particularly with HiveQL, Hive's dialect of SQL. You'll learn how to set up Hive in your environment and optimize its use, and how it interoperates with other tools, such as HBase. You'll also learn how to extend Hive with custom code written in Java or scripting languages. Ideal for developers with prior SQL experience, this book shows you how Hive simplifies many tasks that would be much harder to implement in the lower-level MapReduce API provided by Hadoop.
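For a sense of what HiveQL looks like in practice, here is a minimal sketch using the PyHive client; the host, port, username, table, and column names are all hypothetical:

```python
from pyhive import hive

# Connect to a (hypothetical) HiveServer2 endpoint.
conn = hive.Connection(host="localhost", port=10000, username="analyst")
cursor = conn.cursor()

# HiveQL is a dialect of SQL: summarize page views per day from a log table.
cursor.execute("""
    SELECT view_date, COUNT(*) AS views
    FROM web_logs
    GROUP BY view_date
    ORDER BY views DESC
    LIMIT 10
""")
for row in cursor.fetchall():
    print(row)
```

Behind the scenes, Hive compiles this query into MapReduce (or Tez) jobs, which is exactly the lower-level work the book notes you no longer have to write by hand.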
Dive into the world of SQL on Hadoop and get the most out of your Hive data warehouses. This book is your go-to resource for using Hive: authors Scott Shaw, Ankur Gupta, David Kjerrumgaard, and Andreas Francois Vermeulen take you through learning HiveQL, the SQL-like language specific to Hive, to analyze, export, and massage the data stored across your Hadoop environment. From deploying Hive on your hardware or virtual machine and setting up its initial configuration to learning how Hive interacts with Hadoop, MapReduce, Tez, and other big data technologies, Practical Hive gives you a detailed treatment of the software. In addition, the book discusses the value of open source software, Hive performance tuning, and how to leverage semi-structured and unstructured data. What you will learn:
- Install and configure Hive for new and existing datasets
- Perform DDL operations
- Execute efficient DML operations
- Use tables, partitions, buckets, and user-defined functions
- Discover performance tuning tips and Hive best practices
Who this book is for: developers, companies, and professionals who deal with large amounts of data and could use software that can efficiently manage large volumes of input. It is assumed that readers have the ability to work with SQL.
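Partitions and buckets, which the authors cover, shape how Hive lays data out on disk. A small, hedged example of the corresponding DDL follows (table and column names invented; the statements could be run through any Hive client, such as the PyHive cursor in the previous sketch):

```python
# HiveQL DDL for a partitioned, bucketed table, held as Python strings.
CREATE_STMT = """
CREATE TABLE IF NOT EXISTS page_views (
    user_id BIGINT,
    url     STRING
)
PARTITIONED BY (view_date STRING)       -- one directory per day
CLUSTERED BY (user_id) INTO 32 BUCKETS  -- hash users into 32 files
STORED AS ORC
"""

# Loading one day's partition keeps queries that filter on that day cheap.
LOAD_STMT = """
INSERT OVERWRITE TABLE page_views PARTITION (view_date = '2024-01-15')
SELECT user_id, url FROM staging_page_views WHERE view_date = '2024-01-15'
"""
```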
With this textbook, Vaisman and Zimanyi deliver excellent coverage of data warehousing and business intelligence technologies, ranging from the most basic principles to recent findings and applications. To this end, their work is structured into three parts. Part I describes "Fundamental Concepts," including multidimensional models; conceptual and logical data warehouse design; and MDX and SQL/OLAP. Part II details "Implementation and Deployment," including physical data warehouse design; data extraction, transformation, and loading (ETL); and data analytics. Part III covers "Advanced Topics" such as spatial data warehouses; trajectory data warehouses; semantic technologies in data warehouses; and novel technologies like MapReduce, column-store databases, and in-memory databases. As a key characteristic of the book, most of the topics are presented and illustrated using application tools. Specifically, a case study based on the well-known Northwind database illustrates how the concepts presented in the book can be implemented using Microsoft Analysis Services and Pentaho Business Analytics. All chapters are summarized using review questions and exercises to support comprehensive student learning. Supplemental material to assist instructors using this book as a course text is available at http://cs.ulb.ac.be/DWSDIbook/, including electronic versions of the figures, solutions to all exercises, and a set of slides accompanying each chapter. Overall, students, practitioners, and researchers alike will find this book the most comprehensive reference work on data warehouses, with key topics described in a clear and educational style.
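As a quick illustration of the multidimensional, OLAP-style aggregation at the heart of Part I (a pandas sketch with invented data, not the book's Microsoft Analysis Services or Pentaho tooling):

```python
import pandas as pd

# Invented fact table: sales by product category, region, and year.
facts = pd.DataFrame({
    "category": ["beverages", "beverages", "produce", "produce"],
    "region":   ["north", "south", "north", "south"],
    "year":     [2023, 2023, 2024, 2024],
    "amount":   [120.0, 80.0, 200.0, 150.0],
})

# A small cube slice: total amount by category x region, with roll-up totals
# (margins=True adds the 'All' row/column, akin to an OLAP roll-up).
cube = pd.pivot_table(facts, values="amount", index="category",
                      columns="region", aggfunc="sum", margins=True)
print(cube)
```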
This book presents Hyper-lattice, a new algebraic model for partially ordered sets and an alternative to the lattice. The authors analyze some of the shortcomings of the conventional lattice structure and propose the Hyper-lattice as a novel algebraic structure that overcomes them. They establish how the Hyper-lattice supports dynamic insertion of elements into a partially ordered set with a partial hierarchy between the set members, and they present its characteristics and properties, showing how propositions and lemmas formalize the Hyper-lattice as a new algebraic structure.
There is growing recognition of the need to address the fragility of digital information, on which our society heavily depends for smooth operation in all aspects of daily life. This has been discussed in many books and articles on digital preservation, so why is there a need for yet one more? Because, for the most part, those other publications focus on documents, images and webpages - objects that are normally rendered to be simply displayed by software to a human viewer. Yet there are clearly many more types of digital objects that may need to be preserved, such as databases, scientific data and software itself. David Giaretta, Director of the Alliance for Permanent Access, and his contributors explain why the tools and techniques used for preserving rendered objects are inadequate for all these other types of digital objects, and they provide the concepts, techniques and tools that are needed. The book is structured in three parts. The first part is on theory, i.e., the concepts and techniques that are essential for preserving digitally encoded information. The second part then shows practice, i.e., the use and validation of these tools and techniques. Finally, the third part concludes by addressing how to judge whether money is being well spent, in terms of effectiveness and cost sharing. Various examples of digital objects from many sources are used to explain the tools and techniques presented. The presentation style mainly aims at practitioners in libraries, archives and industry who are either directly responsible for preservation or who need to prepare for audits of their archives. Researchers in digital preservation and developers of preservation tools and techniques will also find valuable practical information here. Researchers creating digitally encoded information of all kinds will also need to be aware of these topics so that they can help to ensure that their data is usable and can be valued by others now and in the future. To further assist the reader, the book is supported by many hours of videos and presentations from the CASPAR project and by a set of open source software.
This book constitutes the refereed proceedings of the 6th International Conference on E-Technologies, MCETECH 2015, held in Montreal, Canada, in May 2015. The 18 papers presented in this volume were carefully reviewed and selected from 42 submissions. They have been organized in topical sections on process adaptation; legal issues; social computing; eHealth; and eBusiness, eEducation and eLogistics.
Questions of privacy, borders, and nationhood are increasingly shaping the way we think about all things digital. Data Centers brings together essays and photographic documentation that analyze recent and ongoing developments. Taking Switzerland as an example, the book looks at the country's data centers, law firms, corporations, and government institutions that are involved in the creation, maintenance, and regulation of digital infrastructures. Beneath the official storyline (Switzerland's moderate climate, political stability, and relatively clean energy mix), the book uncovers a much more varied and sometimes contradictory set of narratives.