{
    "byline": null,
    "dir": null,
    "excerpt": "In hyperparameter optimization (HO), the goal is to optimize a response function of the hyperparameters, which is usually the average loss on a validation set. Gradient-based HO refers to iteratively finding the optimal hyperparameters using gradient updates, just as we do in neural network training itself. The gradient of the response function with respect to the hyperparameters is called the hypergradient.",
    "length": 4085,
    "siteName": null,
    "title": "Forward and reverse gradient-based hyperparameter optimization"
}