Large Convolutional Neural Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al.). However, there is no clear understanding of why they perform so well or how they might be improved. In this talk we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also describe several novel approaches to regularizing the capacity of the network. These methods allow us to train model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show that our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets. Joint work with PhD students Matt Zeiler and Wan Li.
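The transfer-learning recipe mentioned above, keeping the ImageNet-trained convolutional layers fixed and retraining only the final softmax classifier on the new dataset, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: the "frozen features" here are synthetic NumPy arrays standing in for the network's penultimate-layer activations, and the function names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def retrain_softmax(features, labels, n_classes, lr=0.1, epochs=200):
    """Fit a softmax (multinomial logistic) classifier on fixed features.

    In the talk's setting, `features` would be penultimate-layer activations
    of the ImageNet-trained CNN, computed once for each Caltech image.
    """
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        probs = softmax(features @ W + b)
        grad = probs - onehot            # gradient of the cross-entropy loss
        W -= lr * features.T @ grad / n  # gradient step on weights
        b -= lr * grad.mean(axis=0)      # gradient step on biases
    return W, b

# Synthetic stand-in for frozen CNN features of two classes.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

W, b = retrain_softmax(X, y, n_classes=2)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because only the linear classifier is learned, this step is cheap compared with training the full network, which is what makes the Caltech-101/256 transfer experiments practical.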
Back to Graduate Summer School: Computer Vision