Recently, several seminal works have shown that exploiting the geometry of non-Euclidean spaces for specific parametric representations can improve the performance of estimation or classification algorithms that are well known in the Euclidean case. In this talk, we consider manifolds of structured descriptors, such as covariance matrices, for characterizing signals, images or 3D data (spatial or temporal dependencies, correlations between features, etc.). In this context, we propose to formalize the concept of Riemannian Gaussian probability laws, which can be extended to various symmetric manifolds. In terms of empirical moments, this latter formulation leads us to consider a set of empirical covariance matrices as an i.i.d. population characterized by a covariance "mean" and a "dispersion around this mean" as first- and second-order statistics on the manifold. From this definition of the Riemannian Gaussian law, we then consider the extension to mixture modelling. This extension makes it possible to generalize coding techniques such as Fisher vectors to structured descriptors for classification tasks. From this paradigm of geometric inference, we illustrate our point of view on several classification tasks, i.e. textured images, motion analysis and EEG analysis for BCIs (Brain-Computer Interfaces). We will also propose a connection with CNN feature coding.
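To make the first- and second-order statistics concrete, here is a minimal numerical sketch, not the speaker's implementation, assuming the affine-invariant Riemannian metric on SPD matrices: it estimates the Fréchet (Karcher) mean of a toy population of covariance matrices by Riemannian gradient descent, then the dispersion as the average squared Riemannian distance to that mean. All function names (`airm_distance`, `frechet_mean`) are illustrative.

```python
import numpy as np

def _sym_fun(A, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(fun(w)) @ V.T

def airm_distance(A, B):
    """Affine-invariant distance: d(A,B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = _sym_fun(A, lambda w: w ** -0.5)
    return np.linalg.norm(_sym_fun(A_inv_sqrt @ B @ A_inv_sqrt, np.log), "fro")

def frechet_mean(mats, n_iter=50, tol=1e-10):
    """Fréchet mean of SPD matrices via Riemannian gradient descent."""
    G = np.mean(mats, axis=0)  # initialize with the Euclidean mean
    for _ in range(n_iter):
        G_sqrt = _sym_fun(G, np.sqrt)
        G_inv_sqrt = _sym_fun(G, lambda w: w ** -0.5)
        # average the log-maps of all samples at the current estimate
        T = np.mean([_sym_fun(G_inv_sqrt @ X @ G_inv_sqrt, np.log) for X in mats], axis=0)
        if np.linalg.norm(T, "fro") < tol:
            break
        G = G_sqrt @ _sym_fun(T, np.exp) @ G_sqrt  # exponential-map update
    return G

# Toy i.i.d. population of SPD matrices.
rng = np.random.default_rng(0)
mats = [(A := rng.standard_normal((4, 4))) @ A.T + 4 * np.eye(4) for _ in range(20)]

mean = frechet_mean(mats)
# "Dispersion around the mean": average squared Riemannian distance.
sigma2 = np.mean([airm_distance(mean, X) ** 2 for X in mats])
print(sigma2)
```

In the Riemannian Gaussian model described above, these two quantities play the role of the mean and variance parameters of the density p(X) ∝ exp(-d²(X, X̄)/(2σ²)) on the manifold.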