This talk examines the interplay of two types of networks that can transfer information and knowledge over visual or geometric data, such as 2D images and 3D scans or models. On the one hand, we have deep neural networks, which we think of as vertical networks: they transfer information across different levels of abstraction over the same data, from low-level features to higher-level semantic abstractions. On the other hand, we consider horizontal networks, where information is transported at the same level of abstraction but across different yet related data sets. Such networks can be built using functional maps, which are linear operators that transfer knowledge-encoding functions between connected data sets. We briefly discuss some of the issues involved in the construction of both types of networks, especially for irregular 3D representations, and examine the latent spaces that arise in the process. We argue that, in the end, we want both types of networks, vertical and horizontal maps, to "play well" with each other, giving rise to commutative map diagrams that enforce structure-preserving abstractions and make deep nets more functorial. We demonstrate these ideas in the context of image and shape classification and segmentation, as well as in 3D reconstruction.
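
To make the horizontal-network idea concrete, here is a minimal sketch of estimating and using a functional map between two shapes. It assumes truncated Laplace-Beltrami eigenbases and corresponding descriptor functions are already available; the random matrices below are stand-ins for those quantities, and all variable names are illustrative rather than taken from the talk.

```python
# Minimal functional-map sketch (illustrative; synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(0)
n_M, n_N, k, d = 500, 600, 30, 60   # vertex counts, basis size, #descriptors

# Stand-ins for the reduced bases (orthonormal columns); in practice these
# are the first k Laplace-Beltrami eigenfunctions of each shape.
Phi_M, _ = np.linalg.qr(rng.standard_normal((n_M, k)))
Phi_N, _ = np.linalg.qr(rng.standard_normal((n_N, k)))

# Corresponding descriptor functions on each shape (columns), e.g. the
# kind of knowledge-encoding functions the map should transport.
F = rng.standard_normal((n_M, d))
G = rng.standard_normal((n_N, d))

# Express the descriptors in the spectral bases (orthonormal columns,
# so the transpose acts as the pseudoinverse).
A = Phi_M.T @ F          # k x d coefficients on shape M
B = Phi_N.T @ G          # k x d coefficients on shape N

# The functional map C is the small k x k linear operator that best
# aligns descriptor coefficients: argmin_C ||C A - B||_F^2.
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T

# Transfer any function f from M to N: project, map, lift.
f = rng.standard_normal(n_M)
g = Phi_N @ (C @ (Phi_M.T @ f))
print(C.shape, g.shape)  # (30, 30) (600,)
```

Note that the least-squares objective above is itself a commutativity constraint of the kind the talk advocates: the horizontal map C is asked to commute with the (vertical) descriptor computation on each shape. In practice the projections would also account for the shapes' mass matrices rather than assuming plain orthonormality.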