
Covariances in Computer Vision and Machine Learning (Paperback)
Ha Quang Minh, Vittorio Murino
R1,566, ships in 10 - 15 working days

Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, with numerous practical applications. In this book, we begin by presenting an overview of the finite-dimensional covariance matrix representation approach of images, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially using the Log-Euclidean distance.

We then show some of the latest developments in the generalization of the finite-dimensional covariance matrix representation to the infinite-dimensional covariance operator representation via positive definite kernels. We present the generalization of the affine-invariant Riemannian metric and the Log-Hilbert-Schmidt metric, which generalizes the Log-Euclidean distance. Computationally, we focus on kernel methods on covariance operators, especially using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine, using the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, it should be preferred over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over the finite-dimensional counterpart.
Given the numerous applications of covariance matrices in mathematics, statistics, and machine learning, among other areas, we expect that the infinite-dimensional covariance operator formulation presented here will have many more applications beyond those in computer vision.
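As a rough illustration of the finite-dimensional setting the abstract describes, the sketch below computes covariance descriptors from per-pixel feature vectors and compares them with the Log-Euclidean distance, d(A, B) = ||log(A) - log(B)||_F, where log is the matrix logarithm of an SPD matrix. This is a minimal NumPy sketch, not the book's implementation; the regularization constant, feature dimensions, and function names are illustrative assumptions.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Covariance descriptor of an image region.

    features: (n_pixels, d) array of per-pixel feature vectors
    (e.g. intensity, gradients, coordinates). A small multiple of the
    identity (eps, an assumed choice) keeps the matrix strictly SPD.
    """
    d = features.shape[1]
    return np.cov(features, rowvar=False) + eps * np.eye(d)

def spd_log(A):
    """Matrix logarithm of an SPD matrix via eigendecomposition:
    log(A) = V diag(log w) V^T with A = V diag(w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: Frobenius norm of log(A) - log(B)."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

# Two synthetic "image regions" with different feature statistics.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
Y = 2.0 * rng.normal(size=(500, 5))
A, B = covariance_descriptor(X), covariance_descriptor(Y)

print(log_euclidean_distance(A, A))  # ~0: a descriptor is at distance 0 from itself
print(log_euclidean_distance(A, B))  # > 0 for regions with different statistics
```

Because log maps the SPD manifold to the vector space of symmetric matrices, this distance also yields a positive definite Gaussian kernel exp(-d(A, B)^2 / (2 sigma^2)), which is what makes the kernel methods mentioned in the abstract applicable to covariance descriptors.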
