Covariance Matrices and Covariance Operators: Theory and Applications
SLIDE 1

Covariance Matrices and Covariance Operators Theory and Applications

Hà Quang Minh

Functional Analytic Learning Unit RIKEN Center for Advanced Intelligence Project (AIP), Tokyo

February 2019

SLIDE 2

Main Research Directions

1. Vector-valued Reproducing Kernel Hilbert Spaces (RKHS) and Applications

2. Geometrical methods in Machine Learning and Applications

SLIDE 3

Geometrical methods in Machine Learning

Exploit the geometrical structures of data.

Current theoretical focus: infinite-dimensional generalizations of the geometrical structures of the set of Symmetric Positive Definite (SPD) matrices

Current computational focus: geometry of RKHS covariance operators

Current practical application focus: image representation by covariance matrices and covariance operators

SLIDE 4

Covariance Matrices and Covariance Operators

Motivations

Covariance matrices: many applications in computer vision, brain imaging, radar signal processing, etc.
• Powerful approach to data representation by encoding input correlations
• Rich mathematical theories and computational algorithms
• Very good practical performance

Covariance operators (infinite-dimensional setting):
• Nonlinear generalization of covariance matrices
• Can be much more powerful as a form of data representation
• Can achieve substantial gains in practical performance

SLIDE 5

Covariance matrices: Motivations

Symmetric Positive Definite (SPD) matrices: Sym++(n) = set of n × n SPD matrices
Studied extensively in mathematics; numerous practical applications:

• Brain imaging: Arsigny et al 2005, Dryden et al 2009, Qiu et al 2015
• Computer vision: object detection (Tuzel et al 2008, Tosato et al 2013), image retrieval (Cherian et al 2013), visual recognition (Jayasumana et al 2015), and many more
• Radar signal processing: Barbaresco 2013, Formont et al 2013
• Machine learning: kernel learning (Kulis et al 2009)

SLIDE 6

Example: Covariance matrix representation of images

Tuzel, Porikli, Meer (ECCV 2006, CVPR 2006): covariance matrices as region descriptors for images (covariance descriptors).

Given an image F (or a patch in F), extract a feature vector at each pixel (e.g. intensity, colors, filter responses, etc.). Each image then corresponds to a data matrix

X = [x_1, ..., x_m] = n × m matrix, where

m = number of pixels
n = number of features at each pixel

SLIDE 7

Example: Covariance matrix representation of images

X = [x_1, ..., x_m] = data matrix of size n × m, with m observations

Empirical mean vector:

μ_X = (1/m) Σ_{i=1}^m x_i = (1/m) X 1_m,  1_m = (1, ..., 1)^T ∈ R^m

Empirical covariance matrix:

C_X = (1/m) Σ_{i=1}^m (x_i − μ_X)(x_i − μ_X)^T = (1/m) X J_m X^T

J_m = I_m − (1/m) 1_m 1_m^T = centering matrix
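A minimal NumPy sketch of these two formulas (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def empirical_mean_and_covariance(X: np.ndarray):
    """X: n x m data matrix (m observations as columns).
    Returns (mu_X, C_X) as defined on this slide."""
    n, m = X.shape
    mu = X @ np.ones(m) / m                # mu_X = (1/m) X 1_m
    J = np.eye(m) - np.ones((m, m)) / m    # centering matrix J_m
    C = X @ J @ X.T / m                    # C_X = (1/m) X J_m X^T
    return mu, C

# Sanity check against np.cov (bias=True gives the 1/m normalization)
X = np.random.randn(5, 100)
mu, C = empirical_mean_and_covariance(X)
assert np.allclose(C, np.cov(X, bias=True))
```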

SLIDE 8

Example: Covariance matrix representation of images

Image F ⇒ Data matrix X ⇒ Covariance matrix C_X
Each image is represented by a covariance matrix.

Example of image features at pixel location (x, y):

f(x, y) = [I(x, y), R(x, y), G(x, y), B(x, y), |∂R/∂x|, |∂R/∂y|, |∂G/∂x|, |∂G/∂y|, |∂B/∂x|, |∂B/∂y|]
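A sketch of this descriptor pipeline, assuming the image is an H × W × 3 RGB float array; `covariance_descriptor` is an illustrative name and `np.gradient` is one simple choice of derivative filter:

```python
import numpy as np

def covariance_descriptor(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 RGB array. Returns the 10 x 10 covariance
    descriptor for the feature vector listed on this slide."""
    R, G, B = image[..., 0], image[..., 1], image[..., 2]
    I = image.mean(axis=2)                       # intensity
    feats = [I, R, G, B]
    for channel in (R, G, B):
        gy, gx = np.gradient(channel)            # d/dy, d/dx
        feats += [np.abs(gx), np.abs(gy)]
    X = np.stack([f.ravel() for f in feats])     # n=10 features x m=H*W pixels
    Xc = X - X.mean(axis=1, keepdims=True)       # subtract empirical mean
    return Xc @ Xc.T / X.shape[1]                # C_X = (1/m) X J_m X^T

desc = covariance_descriptor(np.random.rand(64, 64, 3))
print(desc.shape)  # (10, 10)
```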

SLIDE 9

Example

Figure: An example of the covariance descriptor. At each pixel (x, y), a feature vector is extracted; the covariance matrix of these feature vectors over the region serves as the descriptor.

SLIDE 10

Covariance matrix representation - Properties

Encode linear correlations (second-order statistics) between image features

Flexible, allowing the fusion of multiple and different features:
• Handcrafted features, e.g. colors and SIFT
• Convolutional features

Compact

Robust to noise

SLIDE 11

Covariance matrix representation - generalization

Covariance representation for video: e.g. Guo et al (AVSS 2010), Sanin et al (WACV 2013)

Employ features that capture temporal information, e.g. optical flow

Covariance representation for 3D point clouds and 3D shapes: e.g. Fehr et al (ICRA 2012, ICRA 2014), Tabia et al (CVPR 2014), Hariri et al (Pattern Recognition Letters 2016)

Employ geometric features, e.g. curvature, surface normal vectors

SLIDE 12

Statistical interpretation

Representing an image by a covariance matrix is essentially equivalent to representing the image by a Gaussian probability density ρ in R^n with mean zero. The extracted features are random observations of an n-dimensional random vector with probability density ρ.

SLIDE 13

Geometry of SPD Matrices

A, B ∈ Sym++(n) = set of n × n SPD matrices

Euclidean distance:

d_E(A, B) = ||A − B||_F

Riemannian manifold viewpoint:
• Affine-invariant Riemannian distance (e.g. Pennec et al 2006, Bhatia 2007):

d_aiE(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F

• Log-Euclidean distance (Arsigny et al 2007):

d_logE(A, B) = ||log(A) − log(B)||_F

Optimal transport viewpoint:
• Bures-Wasserstein-Fréchet distance (Dowson and Landau 1982, Olkin and Pukelsheim 1982, Givens and Shortt 1984, Gelbrich 1990):

d_BW(A, B) = (tr[A + B − 2(A^{1/2} B A^{1/2})^{1/2}])^{1/2}
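The four distances could be sketched with NumPy/SciPy as follows (a minimal implementation assuming exact SPD inputs; function names are illustrative):

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def d_euclidean(A, B):
    return np.linalg.norm(A - B, 'fro')

def d_affine_invariant(A, B):
    # d_aiE(A,B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    A_isqrt = np.linalg.inv(sqrtm(A))
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), 'fro')

def d_log_euclidean(A, B):
    # d_logE(A,B) = || log(A) - log(B) ||_F
    return np.linalg.norm(logm(A) - logm(B), 'fro')

def d_bures_wasserstein(A, B):
    # d_BW(A,B) = ( tr[A + B - 2 (A^{1/2} B A^{1/2})^{1/2}] )^{1/2}
    A_sqrt = sqrtm(A)
    cross = sqrtm(A_sqrt @ B @ A_sqrt)
    return np.sqrt(max(np.trace(A + B - 2 * cross).real, 0.0))

# Random SPD test matrices
rng = np.random.default_rng(0)
M1, M2 = rng.standard_normal((2, 4, 4))
A, B = M1 @ M1.T + np.eye(4), M2 @ M2.T + np.eye(4)
for d in (d_euclidean, d_affine_invariant, d_log_euclidean, d_bures_wasserstein):
    print(d.__name__, d(A, B))
```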

SLIDE 14

Statistical Interpretation

Affine-Invariant Metric
Close connection with the Fisher-Rao metric in information geometry (e.g. Amari 1985).
For two multivariate Gaussian probability densities ρ_1 ∼ N(μ, C_1), ρ_2 ∼ N(μ, C_2):

d_aiE(C_1, C_2) = 2 × (Fisher-Rao distance between ρ_1 and ρ_2)

SLIDE 15

Statistical Interpretation

Bures-Wasserstein Distance
μ_X ∼ N(m_1, A) and μ_Y ∼ N(m_2, B) = Gaussian probability distributions on R^n

L²-Wasserstein distance between μ_X and μ_Y:

d²_W(μ_X, μ_Y) = inf_{μ ∈ Γ(μ_X, μ_Y)} ∫_{R^n × R^n} ||x − y||² dμ(x, y)
= ||m_1 − m_2||² + tr[A + B − 2(A^{1/2} B A^{1/2})^{1/2}]
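Reusing the Bures-Wasserstein term from the previous sketch, the squared L²-Wasserstein distance between two Gaussians could look like this (`w2_gaussian_squared` is an illustrative name; SPD covariances assumed):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian_squared(m1, A, m2, B):
    """Squared L2-Wasserstein distance between N(m1, A) and N(m2, B)."""
    A_sqrt = sqrtm(A)
    bures = np.trace(A + B - 2 * sqrtm(A_sqrt @ B @ A_sqrt)).real
    return float(np.sum((np.asarray(m1) - np.asarray(m2)) ** 2) + bures)
```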

SLIDE 16

Geometry of SPD Matrices

Convex cone viewpoint
Alpha Log-Determinant divergences (Chebbi and Moakher, 2012):

d^α_logdet(A, B) = (4/(1 − α²)) log [ det((1−α)/2 A + (1+α)/2 B) / (det(A)^{(1−α)/2} det(B)^{(1+α)/2}) ],  −1 < α < 1

Limiting cases:

d^1_logdet(A, B) = lim_{α→1} d^α_logdet(A, B) = tr(B^{-1}A − I) − log det(B^{-1}A)

d^{-1}_logdet(A, B) = lim_{α→−1} d^α_logdet(A, B) = tr(A^{-1}B − I) − log det(A^{-1}B)

These divergences are generally not metrics.
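A sketch of d^α_logdet using numerically stable log-determinants via `numpy.linalg.slogdet` (illustrative names; assumes SPD inputs, with the two limiting cases handled separately):

```python
import numpy as np

def logdet(A):
    sign, val = np.linalg.slogdet(A)
    assert sign > 0, "determinant must be positive"
    return val

def alpha_logdet_div(A, B, alpha):
    """Alpha Log-Det divergence d^alpha_logdet(A, B) (Chebbi & Moakher 2012)."""
    if alpha == 1.0:    # tr(B^{-1}A - I) - log det(B^{-1}A)
        BinvA = np.linalg.solve(B, A)
        return np.trace(BinvA - np.eye(len(A))) - logdet(BinvA)
    if alpha == -1.0:   # tr(A^{-1}B - I) - log det(A^{-1}B)
        AinvB = np.linalg.solve(A, B)
        return np.trace(AinvB - np.eye(len(A))) - logdet(AinvB)
    w1, w2 = (1 - alpha) / 2, (1 + alpha) / 2
    return (4 / (1 - alpha**2)) * (
        logdet(w1 * A + w2 * B) - w1 * logdet(A) - w2 * logdet(B))
```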

SLIDE 17

Alpha Log-Determinant divergences

α = 0: Symmetric Stein divergence (also called the S-divergence):

d^0_logdet(A, B) = 4[log det((A + B)/2) − (1/2) log det(AB)] = 4 d²_stein(A, B)

Sra (NIPS 2012):

d_stein(A, B) = [log det((A + B)/2) − (1/2) log det(AB)]^{1/2}

is a metric (satisfying positivity, symmetry, and the triangle inequality).
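A small self-contained sketch of the Stein metric (`d_stein` and `_logdet` are illustrative names):

```python
import numpy as np

def _logdet(A):
    sign, val = np.linalg.slogdet(A)
    assert sign > 0
    return val

def d_stein(A, B):
    """Stein metric: sqrt(log det((A+B)/2) - (1/2) log det(AB))."""
    return np.sqrt(_logdet((A + B) / 2) - 0.5 * (_logdet(A) + _logdet(B)))
```

By the identity above, 4 * d_stein(A, B)**2 reproduces d^0_logdet(A, B).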

SLIDE 18

Statistical Interpretation

Alpha Log-Determinant Divergences
Close connection with the Kullback-Leibler and Rényi divergences.
For two multivariate Gaussian probability densities ρ_1 ∼ N(μ, C_1), ρ_2 ∼ N(μ, C_2):

d^α_logdet(C_1, C_2) = constant × (a Rényi divergence between ρ_1 and ρ_2)

d^1_logdet(C_1, C_2) = 2 × (Kullback-Leibler divergence between ρ_1 and ρ_2)

SLIDE 19

Kernel methods with Log-Euclidean metric

• S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi. Kernel methods on the Riemannian manifold of symmetric positive definite matrices. CVPR 2013
• S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi. Kernel methods on Riemannian manifolds with Gaussian RBF kernels. PAMI 2015
• P. Li, Q. Wang, W. Zuo, and L. Zhang. Log-Euclidean kernels for sparse representation and dictionary learning. ICCV 2013
• D. Tosato, M. Spera, M. Cristani, and V. Murino. Characterizing humans on Riemannian manifolds. PAMI 2013

SLIDE 20

Kernel methods with Log-Euclidean metric for image classification

SLIDE 21

Material classification

Example: KTH-TIPS2b data set. Feature vector at pixel (x, y):

f(x, y) = [R(x, y), G(x, y), B(x, y), |G_{0,0}(x, y)|, ..., |G_{3,4}(x, y)|]

where G_{o,s}(x, y) are Gabor filter responses at orientation o and scale s.
SLIDE 22

Object recognition

Example: ETH-80 data set. Feature vector at pixel (x, y):

f(x, y) = [x, y, I(x, y), |I_x|, |I_y|]

SLIDE 23

Numerical results

Better results with covariance operators (later)!

Method   KTH-TIPS2b       ETH-80
E        55.3% (±7.6%)    64.4% (±0.9%)
Stein    73.1% (±8.0%)    67.5% (±0.4%)
Log-E    74.1% (±7.4%)    71.1% (±1.0%)

SLIDE 24

Comparison of metrics

Results from Cherian et al (PAMI 2013) using the Nearest Neighbor method:

Method            Texture   Activity
Affine-invariant  85.5%     99.5%
Stein             85.5%     99.5%
Log-E             82.0%     96.5%

Texture: images from the Brodatz and CURET datasets
Activity: videos from the Weizmann, KTH, and UT Tower datasets

SLIDE 25

Covariance operator representation - Motivation

Covariance matrices encode linear correlations of input features.

Nonlinearization:

1. Map original input features into a high (generally infinite) dimensional feature space (via kernels)

2. Covariance operators: covariance matrices of infinite-dimensional features

3. Encode nonlinear correlations of input features

4. Provide a richer, more expressive representation of the data

SLIDE 26

Covariance operator representation

• S.K. Zhou and R. Chellappa. From sample similarity to ensemble similarity: Probabilistic distance measures in reproducing kernel Hilbert space. PAMI 2006
• M. Harandi, M. Salzmann, and F. Porikli. Bregman divergences for infinite-dimensional covariance matrices. CVPR 2014
• H.Q. Minh, M. San Biagio, V. Murino. Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces. NIPS 2014
• H.Q. Minh, M. San Biagio, L. Bazzani, V. Murino. Approximate Log-Hilbert-Schmidt distances between covariance operators for image classification. CVPR 2016

SLIDE 27

From covariance matrices

X = [x_1, ..., x_m] = data matrix with m observations, sampled according to some probability distribution ρ on the input space X = R^n

Empirical mean vector:

μ_X = (1/m) Σ_{i=1}^m x_i = (1/m) X 1_m,  1_m = (1, ..., 1)^T ∈ R^m

Empirical covariance matrix:

C_X = (1/m) Σ_{i=1}^m (x_i − μ_X)(x_i − μ_X)^T = (1/m) X J_m X^T

J_m = I_m − (1/m) 1_m 1_m^T = centering matrix

SLIDE 28

To RKHS covariance operators

X = [x_1, ..., x_m] = data matrix randomly sampled according to ρ on the input space X, with m observations

Positive definite kernel K, RKHS H_K, feature map Φ : X → H_K

Informally, Φ gives an infinite feature matrix in the feature space H_K, of size dim(H_K) × m:

Φ(X) = [Φ(x_1), ..., Φ(x_m)]

Formally, Φ(X) : R^m → H_K is the bounded linear operator

Φ(X)w = Σ_{i=1}^m w_i Φ(x_i),  w ∈ R^m
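Although Φ(x) may be infinite-dimensional, inner products against Φ(X)w reduce to kernel evaluations by the reproducing property: ⟨Φ(X)w, Φ(y)⟩ = Σ_i w_i K(x_i, y). A minimal sketch with a Gaussian kernel (all names illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

def apply_feature_operator(X, w, y, kernel=gaussian_kernel):
    """<Phi(X)w, Phi(y)>_{H_K} = sum_i w_i K(x_i, y).
    X: n x m data matrix (columns = observations), w in R^m."""
    return sum(w[i] * kernel(X[:, i], y) for i in range(X.shape[1]))

X = np.random.randn(3, 10)   # m = 10 observations in R^3
w = np.random.randn(10)
print(apply_feature_operator(X, w, np.zeros(3)))
```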

SLIDE 29

RKHS covariance operators

Empirical RKHS mean:

μ_Φ(X) = (1/m) Σ_{i=1}^m Φ(x_i) = (1/m) Φ(X) 1_m ∈ H_K

Empirical covariance operator C_Φ(X) : H_K → H_K:

C_Φ(X) = (1/m) Φ(X) J_m Φ(X)^*,  J_m = I_m − (1/m) 1_m 1_m^T = centering matrix
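In computation, C_Φ(X) is never formed explicitly; everything is carried through the m × m Gram matrix, and in particular through the centered Gram matrix J_m K[X] J_m that stands in for the covariance operator in the closed-form distances later. A sketch (names illustrative):

```python
import numpy as np

def gram_matrix(X, kernel):
    """K[X]_ij = K(x_i, x_j) for an n x m data matrix X."""
    m = X.shape[1]
    return np.array([[kernel(X[:, i], X[:, j]) for j in range(m)]
                     for i in range(m)])

def centered_gram(X, kernel):
    m = X.shape[1]
    J = np.eye(m) - np.ones((m, m)) / m   # centering matrix J_m
    return J @ gram_matrix(X, kernel) @ J
```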

SLIDE 30

RKHS covariance operators

Theoretical mean:

μ_Φ = ∫_X Φ(x) dρ(x) ∈ H_K

Theoretical covariance operator C_Φ : H_K → H_K:

C_Φ = ∫_X Φ(x) ⊗ Φ(x) dρ(x) − μ_Φ ⊗ μ_Φ

SLIDE 31

Geometry of Covariance Operators

H.Q. Minh et al. Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces. NIPS 2014
• Infinite-dimensional generalization of the Log-Euclidean Riemannian metric on the manifold of SPD matrices
• Closed-form formulas in the case of RKHS covariance operators

H.Q. Minh. Affine-invariant Riemannian distance between infinite-dimensional covariance operators. Geometric Science of Information 2015

H.Q. Minh, M. San Biagio, L. Bazzani, V. Murino. Approximate Log-Hilbert-Schmidt distances between covariance operators for image classification. CVPR 2016

SLIDE 32

Geometry of Covariance Operators

H.Q. Minh. Infinite-dimensional Log-Determinant divergences between positive definite trace class operators. Linear Algebra and its Applications 2017
• Infinite-dimensional generalization of the Alpha Log-Determinant divergences on the convex cone of SPD matrices
• Closed-form formulas in the case of RKHS covariance operators

H.Q. Minh. Infinite-Dimensional Log-Determinant Divergences II: Alpha-Beta divergences. Under review, Information Geometry. https://arxiv.org/abs/1610.08087

H.Q. Minh. Log-Determinant divergences between positive definite Hilbert-Schmidt operators. Geometric Science of Information 2017

H.Q. Minh. Infinite-Dimensional Log-Determinant Divergences III: Log-Euclidean and Log-Hilbert-Schmidt divergences. Information Geometry and Its Applications 2018

SLIDE 33

From finite to infinite-dimensional settings

SLIDE 34

Infinite-dimensional generalizations

Substantially different from the finite-dimensional formulations.

Problems, for A = strictly positive, self-adjoint compact operator (e.g. a covariance operator):

1. Eigenvalues λ_k → 0 as k → ∞

2. 1/λ_k → ∞ and log(λ_k) → −∞

3. A^{-1} is unbounded

4. log(A) is unbounded

5. det(A) is always zero

SLIDE 35

Infinite-dimensional generalization of Sym++(n)

SLIDE 36

Geometry of positive definite operators

Larotonda (Differential Geometry and Its Applications 2007): generalization of the manifold Sym++(n) of SPD matrices to the infinite-dimensional Hilbert manifold

Σ(H) = {A + γI > 0 : A^* = A, A ∈ HS(H), γ ∈ R}

HS(H) = Hilbert-Schmidt operators on the Hilbert space H:

HS(H) = {A : ||A||²_HS = tr(A^*A) = Σ_{k=1}^∞ ||Ae_k||² < ∞}

for any orthonormal basis {e_k}_{k=1}^∞. For A self-adjoint, ||A||²_HS = Σ_{k=1}^∞ λ_k².

Generalization of the affine-invariant Riemannian metric.

SLIDE 37

Log-Hilbert-Schmidt distance

Generalizing the Log-Euclidean distance d_logE(A, B) = ||log(A) − log(B)||:

Log-Hilbert-Schmidt distance:

d_logHS[(A + γI), (B + νI)] = ||log(A + γI) − log(B + νI)||_eHS

Extended Hilbert-Schmidt norm:

||A + γI||²_eHS = ||A||²_HS + γ²

Extended Hilbert-Schmidt inner product:

⟨A + γI, B + νI⟩_eHS = ⟨A, B⟩_HS + γν

SLIDE 38

Log-Hilbert-Schmidt distance

Why log(A + γI)? Why the extended Hilbert-Schmidt norm?

A ∈ Sym++(n), with eigenvalues {λ_k}_{k=1}^n and orthonormal eigenvectors {u_k}_{k=1}^n:

A = Σ_{k=1}^n λ_k u_k u_k^T,  log(A) = Σ_{k=1}^n log(λ_k) u_k u_k^T

A : H → H self-adjoint, positive, compact operator, with eigenvalues {λ_k}_{k=1}^∞, λ_k > 0, lim_{k→∞} λ_k = 0, and orthonormal eigenvectors {u_k}_{k=1}^∞:

A = Σ_{k=1}^∞ λ_k (u_k ⊗ u_k),  where (u_k ⊗ u_k)w = ⟨u_k, w⟩ u_k

log(A) = Σ_{k=1}^∞ log(λ_k) (u_k ⊗ u_k),  with lim_{k→∞} log(λ_k) = −∞

SLIDE 39

Log-Hilbert-Schmidt distance

Why log(A + γI)? Why the extended Hilbert-Schmidt norm?

log(A) is unbounded; log(A + γI) is bounded.

The Hilbert-Schmidt norm still diverges, since the terms tend to (log γ)²:

||log(A + γI)||²_HS = Σ_{k=1}^∞ [log(λ_k + γ)]² = ∞  if γ ≠ 1

The extended Hilbert-Schmidt norm is finite:

||log(A + γI)||²_eHS = ||log(A/γ + I)||²_HS + (log γ)² = Σ_{k=1}^∞ [log(λ_k/γ + 1)]² + (log γ)² < ∞
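A quick numeric illustration of this dichotomy with a synthetic trace-class spectrum λ_k = k^{-2} (purely illustrative values):

```python
import numpy as np

k = np.arange(1, 100001)
lam = k ** -2.0                   # synthetic eigenvalues, lambda_k -> 0
gamma = 0.5

hs_terms = np.log(lam + gamma) ** 2        # terms -> (log 0.5)^2 > 0: sum diverges
ehs_terms = np.log(lam / gamma + 1) ** 2   # terms -> 0 like k^-4: sum converges

print(hs_terms[-1])                        # ~ 0.48, so partial HS sums grow linearly
print(ehs_terms.sum() + np.log(gamma) ** 2)  # finite extended-HS norm squared
```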

SLIDE 40

Log-Hilbert-Schmidt distance between RKHS covariance operators

The distance

d_logHS[(C_Φ(X) + γI_{H_K}), (C_Φ(Y) + νI_{H_K})] = d_logHS[(1/m) Φ(X) J_m Φ(X)^* + γI_{H_K}, (1/m) Φ(Y) J_m Φ(Y)^* + νI_{H_K}]

has a closed form in terms of the m × m Gram matrices:

K[X] = Φ(X)^*Φ(X),  (K[X])_ij = K(x_i, x_j)
K[Y] = Φ(Y)^*Φ(Y),  (K[Y])_ij = K(y_i, y_j)
K[X, Y] = Φ(X)^*Φ(Y),  (K[X, Y])_ij = K(x_i, y_j)
K[Y, X] = Φ(Y)^*Φ(X),  (K[Y, X])_ij = K(y_i, x_j)

SLIDE 41

Log-Hilbert-Schmidt distance between RKHS covariance operators

With the spectral decompositions

(1/(γm)) J_m K[X] J_m = U_A Σ_A U_A^T,  (1/(νm)) J_m K[Y] J_m = U_B Σ_B U_B^T,

A^*B = (1/(√(γν) m)) J_m K[X, Y] J_m,

define

C_AB = 1_{N_A}^T log(I_{N_A} + Σ_A) Σ_A^{-1} (U_A^T A^*B U_B ∘ U_A^T A^*B U_B) Σ_B^{-1} log(I_{N_B} + Σ_B) 1_{N_B}

SLIDE 42

Example: Log-Hilbert-Schmidt distance between RKHS covariance operators

Closed-form expression

Theorem (H.Q.M. et al, NIPS 2014)
Assume that dim(H_K) = ∞. Let γ > 0, ν > 0. The Log-Hilbert-Schmidt distance between (C_Φ(X) + γI_{H_K}) and (C_Φ(Y) + νI_{H_K}) is

d²_logHS[(C_Φ(X) + γI_{H_K}), (C_Φ(Y) + νI_{H_K})] = tr[log(I_{N_A} + Σ_A)]² + tr[log(I_{N_B} + Σ_B)]² − 2C_AB + (log γ − log ν)²

SLIDE 43

Log-Hilbert-Schmidt distance between RKHS covariance operators

Closed-form expression

Theorem (H.Q.M. et al, NIPS 2014)
Assume that dim(H_K) < ∞. Let γ > 0, ν > 0. The Log-Hilbert-Schmidt distance between (C_Φ(X) + γI_{H_K}) and (C_Φ(Y) + νI_{H_K}) is

d²_logHS[(C_Φ(X) + γI_{H_K}), (C_Φ(Y) + νI_{H_K})]
= tr[log(I_{N_A} + Σ_A)]² + tr[log(I_{N_B} + Σ_B)]² − 2C_AB
+ 2 log(γ/ν) (tr[log(I_{N_A} + Σ_A)] − tr[log(I_{N_B} + Σ_B)]) + (log γ − log ν)² dim(H_K)
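For a finite-dimensional feature map, d_logHS can also be computed by brute force: form C_Φ(X) explicitly and take the Frobenius norm of the difference of matrix logarithms, which is what the closed form above must reproduce. A minimal sketch of that direct route (illustrative names; not the Gram-matrix closed form):

```python
import numpy as np
from scipy.linalg import logm

def log_hs_distance_explicit(PhiX, PhiY, gamma, nu):
    """d_logHS for explicit finite-dimensional feature matrices
    PhiX, PhiY of shape dim(H_K) x m: the Frobenius norm of
    log(C_X + gamma*I) - log(C_Y + nu*I)."""
    def cov(Phi):
        m = Phi.shape[1]
        J = np.eye(m) - np.ones((m, m)) / m   # centering matrix J_m
        return Phi @ J @ Phi.T / m            # C = (1/m) Phi J_m Phi^*
    d = PhiX.shape[0]
    LX = np.real(logm(cov(PhiX) + gamma * np.eye(d)))
    LY = np.real(logm(cov(PhiY) + nu * np.eye(d)))
    return np.linalg.norm(LX - LY, 'fro')

# Example with a 20-dimensional feature space and 50 samples each
rng = np.random.default_rng(0)
print(log_hs_distance_explicit(rng.standard_normal((20, 50)),
                               rng.standard_normal((20, 50)), 0.1, 0.1))
```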

SLIDE 44

Example: Two-layer kernel machine for image classification (H.Q. Minh et al, NIPS 2014)

SLIDE 45

Approximate methods for reducing computational complexity

• M. Faraki, M. Harandi, and F. Porikli. Approximate infinite-dimensional region covariance descriptors for image classification. ICASSP 2015
• H.Q. Minh, M. San Biagio, L. Bazzani, V. Murino. Approximate Log-Hilbert-Schmidt distances between covariance operators for image classification. CVPR 2016
• Q. Wang, P. Li, W. Zuo, and L. Zhang. RAID-G: Robust estimation of approximate infinite-dimensional Gaussian with application to material recognition. CVPR 2016

SLIDE 46

Two-layer kernel machine with the approximate Log-Hilbert-Schmidt distance (H.Q.M. et al, CVPR 2016)

SLIDE 47

Example: Object recognition

Example: ETH-80 data set. Feature vector at pixel (x, y):

f(x, y) = [x, y, I(x, y), |I_x|, |I_y|]

SLIDE 48

Example: Object recognition

Results obtained using the approximate Log-HS distance (H.Q.M. et al, CVPR 2016):

Method          ETH-80
Euclidean       64.4% (±0.9%)
Stein           67.5% (±0.4%)
Log-Euclidean   71.1% (±1.0%)
HS              93.1% (±0.4%)
Approx-LogHS    95.0% (±0.5%)

SLIDE 49

Further detail

H.Q. Minh and V. Murino. Covariances in Computer Vision and Machine Learning. Morgan & Claypool Publishers, 2017

H.Q. Minh and V. Murino. From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings. In Algorithmic Advances in Riemannian Geometry and Applications, Springer, 2017

H.Q. Minh. International Conference on Computer Vision (ICCV 2017) Tutorial. http://www.covariance2017.eu/

SLIDE 50

Exposition

Covariance representation in computer vision
From finite to infinite-dimensional settings

SLIDE 51

Exposition

SLIDE 52

Thank you for listening!

Questions, comments, suggestions?
