This semester, we are running a Deep Learning reading group every Tuesday at 7:00 pm, before the weekly talk. Below is a collection of the papers we have read so far.
The paper introduces a novel criterion for efficiently pruning convolutional neural networks, inspired by techniques that explain nonlinear classification decisions in terms of input variables.
"Big Transfer" (BiT) improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision.
Self-supervised Vision Transformer (ViT) features are effective for semantic segmentation and k-NN classification.
Vision Transformer (ViT) applied to image patches performs competitively with convolutional networks on image classification tasks.
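The core idea of applying a Transformer to image patches can be sketched in a few lines: split the image into fixed-size non-overlapping patches, flatten each one, and project it linearly into a token sequence. The sketch below is a simplified illustration, not the paper's implementation; the `patchify` helper and the random projection matrix are our own stand-ins (in ViT the projection is learned, and position embeddings plus a class token are added before the Transformer encoder).

```python
# Minimal sketch of ViT-style patch tokenization (illustrative, not the
# authors' code): image -> flattened patches -> linear token embeddings.
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    return patches  # shape: (num_patches, patch_dim)

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
patches = patchify(image, 16)           # (224/16)^2 = 196 patches of dim 768
proj = rng.random((16 * 16 * 3, 768))   # stand-in for the learned projection
tokens = patches @ proj                 # (196, 768) token sequence
```

These 196 tokens play the same role as word embeddings in NLP: the standard Transformer encoder that follows is unchanged.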
The paper presents the "lottery ticket hypothesis", which states that dense, randomly initialized, feed-forward networks contain subnetworks ("winning tickets") that, when trained in isolation, reach test accuracy comparable to that of the original network in a similar number of iterations.
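One round of the procedure used to find such tickets can be sketched as: train the network, prune the smallest-magnitude weights, then reset the surviving weights to their original initialization. The snippet below is a toy illustration on a flat weight vector (our own simplification, not the paper's code; `magnitude_mask` is a hypothetical helper, and real experiments prune iteratively over several rounds).

```python
# Toy sketch of one magnitude-pruning round for the lottery ticket
# hypothesis (illustrative simplification, not the paper's implementation).
import numpy as np

def magnitude_mask(weights, prune_fraction):
    """Binary mask zeroing out the prune_fraction smallest-magnitude weights."""
    k = int(prune_fraction * weights.size)
    threshold = np.sort(np.abs(weights).ravel())[k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w_init = rng.normal(size=(100,))                          # weights at init
w_trained = w_init + rng.normal(scale=0.1, size=(100,))   # stand-in for training
mask = magnitude_mask(w_trained, 0.8)   # prune 80% by trained magnitude
ticket = mask * w_init                  # reset survivors to initial values
```

The key step is the last line: the winning ticket keeps the *initial* values of the surviving weights, not the trained ones, which is what makes the hypothesis surprising.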
The deep Q-network agent can excel at a diverse array of challenging tasks by bridging the divide between high-dimensional sensory inputs and actions.
This paper describes Layer-wise Relevance Propagation (LRP), a method for making deep neural networks explainable by highlighting the input features used for predictions.