Computer Science > Machine Learning
[Submitted on 1 Sep 2020 (v1), last revised 24 Oct 2020 (this version, v3)]
Title: Learning explanations that are hard to vary
Abstract: In this paper, we investigate the principle that `good explanations are hard to vary' in the context of deep learning. We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and `patchwork' solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND, that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
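To make the "logical AND" idea in the abstract concrete: one way to realize it is to keep only those gradient components whose signs agree across examples (or environments) and zero out the rest, so the parameter update follows directions that are consistent everywhere rather than a patchwork of example-specific signals. The sketch below is a minimal NumPy illustration of this sign-agreement masking; the function name, the `agreement_threshold` parameter, and the toy gradients are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def and_mask_gradients(per_example_grads, agreement_threshold=1.0):
    """Combine per-example gradients with an AND-style sign-agreement mask.

    per_example_grads: array of shape (n_examples, n_params).
    agreement_threshold: fraction of examples whose gradient signs must match
        the mean gradient's sign for a component to be kept (1.0 = unanimous).
    NOTE: illustrative sketch, not the paper's reference implementation.
    """
    grads = np.asarray(per_example_grads, dtype=float)
    mean_grad = grads.mean(axis=0)
    # Fraction of examples whose per-component sign matches the mean's sign.
    agreement = (np.sign(grads) == np.sign(mean_grad)).mean(axis=0)
    # Zero out components where examples disagree: a logical AND over signals.
    mask = agreement >= agreement_threshold
    return mean_grad * mask

# Toy usage: the first parameter's gradient agrees in sign across all
# examples (an "invariant" direction); the second does not and is masked.
g = np.array([[0.5,  1.0],
              [0.4, -1.2],
              [0.6,  0.9]])
print(and_mask_gradients(g, agreement_threshold=1.0))  # -> [0.5, 0.0]
```

Plain gradient averaging would still move along the second, inconsistent direction (its mean is nonzero), which is the OR-like behavior the abstract argues can favor memorization; the mask restricts updates to directions supported by every example.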
Submission history
From: Giambattista Parascandolo
[v1] Tue, 1 Sep 2020 10:17:48 UTC (9,504 KB)
[v2] Sat, 5 Sep 2020 14:46:16 UTC (9,909 KB)
[v3] Sat, 24 Oct 2020 11:32:18 UTC (11,272 KB)