
Using Keras in R: Hypertuning a model

If you too like Keras and RStudio, you’re probably curious how you can hypertune a model. Here goes.

If you paid attention in the previous part, you'll have noticed that I didn't do any hypertuning to tweak the performance of the car price prediction model. I tackle that in this part of the series.

In order to hypertune your model, you need two scripts: the first describes the model and the second drives the tuning process.

Defining the hypertuning parameters

In essence, hypertuning is done through flags. Anywhere in your neural network, you can replace a parameter with a flag. In the control file, you will then be able to define the range of values for each of these parameters. Important: flag_numeric will produce a float. Using it to specify a number of neurons (for example) will produce an error. You probably know this one by now.

R Session Aborted - R encountered a fatal error. The session was terminated.
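To make the distinction concrete, here is a minimal contrast between the two flag types, assuming the tfruns package is loaded (the flag names match the ones used below):

```r
library(tfruns)

FLAGS <- flags(
  flag_numeric('dropout1', 0.3),  # returns a double: fine for rates
  flag_integer('neurons1', 128)   # returns an integer: required for unit counts
)

# FLAGS$dropout1 is a double and FLAGS$neurons1 an integer;
# feeding a double into a layer's `units` argument is what crashes the session.
```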

In the script below you can see I set the default values for the dropout rate, the number of neurons for the three dense layers, the regularization factor and the learning rate.


FLAGS <- flags(
  flag_numeric('dropout1', 0.3),
  flag_integer('neurons1', 128),
  flag_integer('neurons2', 128),
  flag_integer('neurons3', 128),
  flag_numeric('l2', 0.001),
  flag_numeric('lr', 0.001)
)

In the next lines you can see that every hypertuning parameter is inserted into the model by referring to its FLAGS counterpart.

build_model <- function() {
  model <- keras_model_sequential()
  model %>%
    layer_dense(units = FLAGS$neurons1,
                input_shape = dim(x_train)[2],
                activation = 'relu',
                kernel_regularizer = regularizer_l2(l = FLAGS$l2)) %>%
    layer_dense(units = FLAGS$neurons2,
                activation = 'relu') %>%
    layer_dropout(rate = FLAGS$dropout1) %>%
    layer_dense(units = FLAGS$neurons3,
                activation = 'relu') %>%
    layer_dense(units = 1)
  model %>% compile(
    loss = "mse",
    optimizer = optimizer_rmsprop(lr = FLAGS$lr),
    metrics = list('mean_absolute_error')
  )
  model
}

model <- build_model()

In the following lines of code I simply wrap up the script. I copied the callback function from the script in part 2 and introduce the evaluate() function, which scores the model on the previously defined metrics for each run. Finally, I save the model as an .h5 file. That is pointless at this stage, since the file gets overwritten on every run, but it becomes handy once you train the model with the best parameters.

early_stop <- callback_early_stopping(monitor = "val_loss", patience = 20)

epochs <- 100

# Fit the model and store training stats
history <- model %>% fit(
  x_train, y_train,
  epochs = epochs,
  validation_split = 0.2,
  verbose = 1,
  callbacks = list(early_stop)
)


score <- model %>% evaluate(
  x_test, y_test,
  verbose = 0
)

save_model_hdf5(model, 'model.h5')

cat('Test loss:', score$loss, '\n')
cat('Test MAE:', score$mean_absolute_error, '\n')

I saved this file as nn_ht.R. The code discussed below refers to this file to run what it contains.

Hypertuning the model

In the following lines of code I define the possible values of all the parameters I want to hypertune. This produces 1296 possible combinations of parameters. That's why, in the tuning_run() function, I specify that I only want to try a 10% sample of the possible combinations.

Then I list all the runs by referring to the runs directory, where all the information from each run is stored, and I order them by validation mean absolute error.

par <- list(
  dropout1 = c(0.3, 0.4, 0.5, 0.6),
  neurons1 = c(64, 128, 256),
  neurons2 = c(64, 128, 256),
  neurons3 = c(64, 128, 256),
  l2 = c(0.0001, 0.001, 0.01),
  lr = c(0.00001, 0.0001, 0.001, 0.01)
)
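As a quick sanity check, the 1296 figure follows directly from the grid itself: 4 dropout values × 3 × 3 × 3 neuron options × 3 regularization factors × 4 learning rates. You can compute it from the list above:

```r
# Number of hyperparameter combinations in the grid:
# one value per parameter, all parameters crossed.
prod(lengths(par))  # 4 * 3 * 3 * 3 * 3 * 4 = 1296
```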

runs <- tuning_run('nn_ht.R', runs_dir = '_tuning', sample = 0.1, flags = par)

ls_runs(order = metric_val_mean_absolute_error, decreasing = FALSE, runs_dir = '_tuning')

Finally, I select the best model parameters and train the model with them.

best_run <- ls_runs(order = metric_val_mean_absolute_error, decreasing = FALSE, runs_dir = '_tuning')[1, ]

run <- training_run('nn_ht.R', flags = list(
  dropout1 = best_run$flag_dropout1,
  neurons1 = best_run$flag_neurons1,
  neurons2 = best_run$flag_neurons2,
  neurons3 = best_run$flag_neurons3,
  l2 = best_run$flag_l2,
  lr = best_run$flag_lr))

best_model <- load_model_hdf5('model.h5')

In this blog post I explained how you can hypertune a model in R through Keras. In the next blog post I will explain how you can visualize your runs using TensorBoard.

By the way, if you’re having trouble understanding some of the code and concepts, I can highly recommend “An Introduction to Statistical Learning: with Applications in R”, which is the must-have data science bible. If you simply need an introduction into R, and less into the Data Science part, I can absolutely recommend this book by Richard Cotton. Hope it helps!

Say thanks, ask questions or give feedback

Technologies get updated, syntax changes and honestly… I make mistakes too. If something is incorrect, incomplete or doesn’t work, let me know in the comments below and help thousands of visitors.
