Statistics > Machine Learning
[Submitted on 16 Jun 2017 (v1), last revised 1 Jul 2017 (this version, v2)]
Title: A Closer Look at Memorization in Deep Networks
Abstract: We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that with appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that dataset-independent notions of effective capacity are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.
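The contrast the abstract describes, between learning a simple pattern and memorizing random labels, can be reproduced at small scale. Below is a minimal, hypothetical sketch, not the paper's code (the paper's experiments use datasets such as MNIST and CIFAR-10): it trains a small MLP on synthetic Gaussian inputs with either a simple linear labeling rule or purely random labels, with and without dropout. All names, architecture choices, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): compare how a small MLP
# fits real-pattern data vs. random-label "noise" data, with optional dropout.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n=2000, d=100, noise_labels=False):
    x = torch.randn(n, d)
    if noise_labels:
        # Labels independent of inputs: fitting them requires pure memorization.
        y = torch.randint(0, 2, (n,))
    else:
        # A simple linearly separable pattern the network can learn quickly.
        y = (x[:, 0] + 0.5 * x[:, 1] > 0).long()
    return x, y

def train_accuracy(x, y, dropout=0.0, epochs=200):
    model = nn.Sequential(
        nn.Linear(x.shape[1], 256), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(256, 2),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    model.eval()  # disable dropout when measuring fit to the training set
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

for noise in (False, True):
    for p in (0.0, 0.5):
        x, y = make_data(noise_labels=noise)
        acc = train_accuracy(x, y, dropout=p)
        print(f"noise_labels={noise} dropout={p}: train acc={acc:.3f}")
```

Under this setup one would expect the real-pattern runs to reach high training accuracy quickly with or without dropout, while dropout disproportionately suppresses the fitting of random labels, mirroring the abstract's claim that tuned explicit regularization can hinder memorization of noise without harming learning on real data.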
Submission history
From: Devansh Arpit
[v1] Fri, 16 Jun 2017 18:11:09 UTC (8,495 KB)
[v2] Sat, 1 Jul 2017 14:26:51 UTC (8,495 KB)