<title>A disciplined approach to neural network hyperparameters: part 1</title>
<link>https://kylrth.com/paper/disciplined-approach-to-hyperparameters/</link>
<pubDate>Fri, 28 Aug 2020 14:16:29 -0600</pubDate>
<guid>https://kylrth.com/paper/disciplined-approach-to-hyperparameters/</guid>
<description>The goal of hyperparameter tuning is to reach the flat, horizontal region of the test-loss curve as model complexity grows, balancing underfitting against overfitting.
Underfitting shows up when the learning rate is too small, the architecture is too simple, or the data distribution is too complex for the model. As underfitting decreases, the test loss drops sharply at the start of training and then levels off into a more horizontal line. You can use the LR range test to find a good learning rate range, and then use a cyclical learning rate to move up and down within that range, as in the sketch below.</description>
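A minimal sketch of those two steps in PyTorch: a hand-rolled LR range test (ramp the learning rate each batch and watch the loss), then training with PyTorch's CyclicLR scheduler between the bounds read off that test. The toy model, the synthetic data, and the base_lr/max_lr values are placeholders for illustration, not values from the paper.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import CyclicLR

# Toy model and synthetic data, just to make the sketch runnable.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
data = [(torch.randn(64, 10), torch.randint(0, 2, (64,))) for _ in range(100)]

# --- LR range test: raise the LR each batch and record the loss. ---
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
lrs, losses = [], []
lr = 1e-5
for x, y in data:
    for group in optimizer.param_groups:
        group["lr"] = lr
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    lrs.append(lr)
    losses.append(loss.item())
    lr *= 1.1  # exponential ramp upward
    if lr > 1.0 or loss.item() > 4 * min(losses):
        break  # stop once the loss clearly diverges

# Inspect (lrs, losses): pick max_lr just before the loss blows up,
# and base_lr roughly 1/10 to 1/3 of max_lr.

# --- Cyclical LR: oscillate between the bounds found above. ---
base_lr, max_lr = 1e-3, 1e-1  # hypothetical values; read yours off the plot
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
scheduler = CyclicLR(optimizer, base_lr=base_lr, max_lr=max_lr,
                     step_size_up=len(data) * 2, mode="triangular")
for epoch in range(4):
    for x, y in data:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # CyclicLR steps once per batch, not per epoch
```

Note that the stopping rule in the range test (loss exceeding four times its minimum) is one common heuristic, not something prescribed by the paper.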