<title>A meta-transfer objective for learning to disentangle causal mechanisms</title>
<link>https://kylrth.com/paper/meta-transfer-objective-for-causal-mechanisms/</link>
<pubDate>Mon, 21 Sep 2020 08:46:30 -0600</pubDate>
<guid>https://kylrth.com/paper/meta-transfer-objective-for-causal-mechanisms/</guid>
<description>In theory, a model should be able to make correct predictions on out-of-distribution data if its understanding of the causal relationships is correct. The toy problem the authors use in this paper is predicting temperature from altitude. If a model is trained on data from Switzerland, it should ideally still predict correctly on data from the Netherlands, even though it has never seen elevations that low before.
The main contribution of this paper is the finding that models transfer faster to a new distribution when they have learned the correct causal relationships, and when those relationships are represented sparsely, i.e. by relatively few nodes in the network. A rough sketch of that adaptation-speed comparison is below.
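To make the adaptation-speed idea concrete, here is a minimal sketch (my own illustration, not the authors' code) of a bivariate setup: two categorical variables where A causes B, a "causal" model that factorizes P(A)P(B|A), an "anti-causal" model that factorizes P(B)P(A|B), and an intervention that changes only the marginal of A. After the shift, the causal model should recover its likelihood in fewer adaptation steps because only its P(A) component is wrong; the variable names, category count, and optimizer settings here are assumptions for illustration.

```python
# Hypothetical illustration of the adaptation-speed idea; not the paper's code.
import torch

N = 10  # number of categories per variable (assumed for illustration)

def sample(pi_A, W, n):
    """Draw (A, B) pairs where A ~ pi_A and B | A ~ softmax(W[A])."""
    A = torch.multinomial(pi_A, n, replacement=True)
    B = torch.multinomial(torch.softmax(W[A], dim=-1), 1).squeeze(-1)
    return A, B

class Factorized(torch.nn.Module):
    """Models a joint distribution as P(X) * P(Y | X) with free logits."""
    def __init__(self):
        super().__init__()
        self.marginal = torch.nn.Parameter(torch.zeros(N))
        self.conditional = torch.nn.Parameter(torch.zeros(N, N))
    def log_prob(self, X, Y):
        lp_x = torch.log_softmax(self.marginal, dim=-1)[X]
        lp_y = torch.log_softmax(self.conditional, dim=-1)[X, Y]
        return (lp_x + lp_y).mean()

def fit(model, X, Y, steps, lr=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model.log_prob(X, Y)).backward()
        opt.step()

torch.manual_seed(0)
W = 3 * torch.randn(N, N)                         # fixed mechanism P(B | A)
pi_train = torch.softmax(torch.randn(N), dim=-1)  # marginal of A before the shift
pi_shift = torch.softmax(torch.randn(N), dim=-1)  # intervention: new marginal of A

causal, anticausal = Factorized(), Factorized()
A, B = sample(pi_train, W, 50_000)
fit(causal, A, B, steps=300)      # learns P(A) P(B | A)
fit(anticausal, B, A, steps=300)  # learns P(B) P(A | B)

# Online adaptation on the shifted distribution. Only P(A) changed, so the
# causal factorization just has to relearn a small marginal and should recover
# faster; the anti-causal one must relearn both P(B) and P(A | B).
opt_c = torch.optim.Adam(causal.parameters(), lr=0.1)
opt_a = torch.optim.Adam(anticausal.parameters(), lr=0.1)
for step in range(1, 51):
    A, B = sample(pi_shift, W, 256)
    for model, opt, X, Y in [(causal, opt_c, A, B), (anticausal, opt_a, B, A)]:
        opt.zero_grad()
        (-model.log_prob(X, Y)).backward()
        opt.step()
    if step % 10 == 0:
        with torch.no_grad():
            print(f"step {step}: causal NLL {-causal.log_prob(A, B).item():.3f}, "
                  f"anti-causal NLL {-anticausal.log_prob(B, A).item():.3f}")
```

If the causal direction is the right one, the causal model's negative log-likelihood should drop back toward its pre-shift value within a handful of steps, while the anti-causal model lags behind; the paper uses that gap in adaptation speed as a meta-learning signal for which factorization to prefer.</description>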