We review the literature on equivariant representations, starting with Amari (1978) and the results on handcrafted equivariant filters of the 1980s and 1990s. We motivate equivariant design by reviewing alternative approaches that achieve equivariance through data augmentation, at the cost of increased model complexity. Equivariance in CNNs can be achieved through group convolution, either by using canonical coordinates or by performing convolution on the acting group and its associated homogeneous spaces. Experiments validate our claim of lower model complexity without sacrificing performance. When inferring 3D pose from 2D images, annotation is hardly feasible, and we have to rely on minimal supervision or geometric constraints. We show that building equivariant embeddings reduces 3D pose estimation to a simple correlation operation, thus avoiding supervised regression or spatial transformers.
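To make the group-convolution idea concrete, here is a minimal NumPy sketch (our own illustration, not the implementation discussed in the talk) of a lifting convolution over the cyclic rotation group C4. Rotating the input rotates each response map and cyclically permutes the group axis, which is exactly the equivariance property that lets pose be recovered by a correlation over the group rather than by supervised regression.

import numpy as np


def circular_correlate(image, filt):
    """Periodic cross-correlation of a square image with a small odd-sized filter."""
    h, w = filt.shape
    out = np.zeros_like(image, dtype=float)
    for di in range(h):
        for dj in range(w):
            # shift the image so that filter tap (di, dj) aligns with the output pixel
            shifted = np.roll(image, shift=(-(di - h // 2), -(dj - w // 2)), axis=(0, 1))
            out += filt[di, dj] * shifted
    return out


def lift_conv_c4(image, filt):
    """Lift an image to C4 by correlating it with the four rotated copies of `filt`.

    Returns an array of shape (4, H, W): one response map per group element.
    """
    return np.stack([circular_correlate(image, np.rot90(filt, k)) for k in range(4)])


# Equivariance check: rotating the input by 90 degrees rotates every response map
# and cyclically shifts the group axis by one step.
rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
filt = rng.standard_normal((3, 3))

out = lift_conv_c4(img, filt)
out_of_rotated = lift_conv_c4(np.rot90(img), filt)
expected = np.roll(np.rot90(out, axes=(1, 2)), shift=1, axis=0)
print(np.allclose(out_of_rotated, expected))  # True

The same construction carries over to larger discrete groups, and the cyclic shift along the group axis illustrates why, with equivariant embeddings, estimating the transformation relating two inputs amounts to locating the peak of a correlation over the group.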