Statistics > Machine Learning
[Submitted on 6 Mar 2017 (v1), last revised 12 Dec 2017 (this version, v3)]
Title: Forward and Reverse Gradient-Based Hyperparameter Optimization
Abstract: We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two methods of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. [2015] but does not require reversible dynamics. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets. We present experiments on data cleaning and on learning task interactions. We also present one large-scale experiment where the use of previous gradient-based methods would be prohibitive.
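To make the forward-mode idea concrete, the sketch below propagates the derivative of the weights with respect to a single hyperparameter alongside the training iterations and contracts it with the validation gradient at the end. It is a minimal illustration, not the authors' code: it assumes plain gradient descent on a least-squares training loss, the learning rate as the only hyperparameter, and a held-out least-squares validation loss; all names and data are made up.

```python
# Hedged sketch of forward-mode hypergradient accumulation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(50, 5)), rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=50)
y_val = X_val @ w_true + 0.1 * rng.normal(size=20)

def train_grad(w):
    # Gradient of the training loss 0.5 * ||X_tr w - y_tr||^2 / n
    return X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)

def train_hvp(v):
    # Hessian-vector product; the Hessian of the quadratic loss is X_tr^T X_tr / n
    return X_tr.T @ (X_tr @ v) / len(y_tr)

eta, T = 0.1, 100          # hyperparameter (learning rate) and number of steps
w = np.zeros(5)
z = np.zeros(5)            # z_t = d w_t / d eta, propagated forward in time

for t in range(T):
    g = train_grad(w)
    # Dynamics: w_{t+1} = w_t - eta * g(w_t)
    # Differentiating w.r.t. eta: z_{t+1} = z_t - g(w_t) - eta * H(w_t) z_t
    z = z - g - eta * train_hvp(z)
    w = w - eta * g

val_grad = X_val.T @ (X_val @ w - y_val) / len(y_val)   # dE_val / dw at w_T
hyper_grad = val_grad @ z                                # dE_val / d(eta)
print("forward-mode hypergradient dE_val/d(eta) =", hyper_grad)
```

Because z is updated together with w, the running hypergradient is available at every step rather than only after a full training run, which is what makes this mode suitable for the real-time hyperparameter updates mentioned in the abstract.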
Submission history
From: Luca Franceschi
[v1] Mon, 6 Mar 2017 09:44:32 UTC (335 KB)
[v2] Wed, 26 Apr 2017 19:00:39 UTC (335 KB)
[v3] Tue, 12 Dec 2017 17:17:59 UTC (398 KB)