In this tutorial, we will show how spotPython can be integrated into the PyTorch training workflow for a classification task. The data is read from ./data/VBDP/train.csv. This document refers to the following software versions:

python: 3.10.10
torch: 2.0.1
torchvision: 0.15.0

pip list | grep "spot[RiverPython]"

spotPython 0.2.50
spotRiver 0.0.94
Note: you may need to restart the kernel to use updated packages.
spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.

!pip install spotPython

Alternatively, install spotPython from GitHub:

# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython

Before we consider the detailed experimental setup, we select the parameters that affect run time, the initial design size, and the device that is used.
MAX_TIME sets the run time (in minutes) and INIT_SIZE the size of the initial design. DEVICE selects the compute device: for simple neural nets, "cpu" is preferred (on Mac); if a GPU is available, "cuda:0" can be used instead. If DEVICE is set to None, spotPython will automatically select the device. Automatic selection may choose "mps" on Macs, which is not the best choice for simple neural nets.

MAX_TIME = 1
INIT_SIZE = 5
DEVICE = None # "cpu" # "cuda:0"

from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)

mps
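The selection logic of getDevice can be illustrated with a minimal sketch. This is a hypothetical re-implementation for illustration only, not spotPython's actual code: an explicit choice is honored, otherwise the best available backend is picked.

```python
import torch

def get_device(device=None):
    # hypothetical sketch of a device-selection routine:
    # honor an explicit choice, otherwise pick the best available backend
    if device is not None:
        return device
    if torch.cuda.is_available():
        return "cuda:0"
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(get_device("cpu"))  # an explicit choice is returned unchanged
print(get_device())       # automatic selection
```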
import os
import copy
import socket
from datetime import datetime
from dateutil.tz import tzlocal
start_time = datetime.now(tzlocal())
HOSTNAME = socket.gethostname().split(".")[0]
experiment_name = '25-torch' + "_" + HOSTNAME + "_" + str(MAX_TIME) + "min_" + str(INIT_SIZE) + "init_" + str(start_time).split(".", 1)[0].replace(' ', '_')
experiment_name = experiment_name.replace(':', '-')
print(experiment_name)
if not os.path.exists('./figures'):
    os.makedirs('./figures')

25-torch_maans03_1min_5init_2023-06-28_04-46-12
The fun_control Dictionary

Note: set the tensorboard_path to None if you are working under Windows. spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2, see Initialization of the fun_control Dictionary in the documentation.
from spotPython.utils.init import fun_control_init
fun_control = fun_control_init(task="classification",
                               tensorboard_path="runs/25_spot_torch_vbdp",
                               device=DEVICE)

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
train_df = pd.read_csv('./data/VBDP/train.csv')
# remove the id column
train_df = train_df.drop(columns=['id'])
n_samples = train_df.shape[0]
n_features = train_df.shape[1] - 1
target_column = "prognosis"
# Encode our prognosis labels as integers for easier decoding later
enc = OrdinalEncoder()
train_df[target_column] = enc.fit_transform(train_df[[target_column]])
# convert all entries to int for faster processing
train_df = train_df.astype(int)

from spotPython.utils.convert import add_logical_columns
df_new = train_df.copy()
# save the target column using "target_column" as the column name
target = train_df[target_column]
# remove the target column
df_new = df_new.drop(columns=[target_column])
train_df = add_logical_columns(df_new)
# add the target column back
train_df[target_column] = target
train_df = train_df.astype(int)

from sklearn.model_selection import train_test_split
import numpy as np
n_samples = train_df.shape[0]
n_features = train_df.shape[1] - 1
train_df.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]

train_df[target_column].head()

0     3
1     7
2     3
3    10
4     6
Name: prognosis, dtype: int64
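The OrdinalEncoder maps each label to an integer code and supports decoding via inverse_transform. A small standalone illustration with made-up labels (not the actual VBDP classes) shows the encoding and the "easier decoding later" mentioned above:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# toy labels standing in for the prognosis column (hypothetical values)
df = pd.DataFrame({"prognosis": ["Dengue", "Zika", "Dengue", "Malaria"]})
enc = OrdinalEncoder()
# categories are sorted alphabetically: Dengue=0, Malaria=1, Zika=2
df["prognosis"] = enc.fit_transform(df[["prognosis"]])
print(df["prognosis"].tolist())       # [0.0, 2.0, 0.0, 1.0]
print(enc.inverse_transform([[2]]))   # decodes back to the original label
```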
X_train, X_test, y_train, y_test = train_test_split(train_df.drop(target_column, axis=1),
                                                    train_df[target_column],
                                                    random_state=42,
                                                    test_size=0.25,
                                                    stratify=train_df[target_column])
trainset = pd.DataFrame(np.hstack((X_train, np.array(y_train).reshape(-1, 1))))
testset = pd.DataFrame(np.hstack((X_test, np.array(y_test).reshape(-1, 1))))
trainset.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
testset.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
print(train_df.shape)
print(trainset.shape)
print(testset.shape)

(707, 6113)
(530, 6113)
(177, 6113)
import torch
from sklearn.model_selection import train_test_split
from spotPython.torch.dataframedataset import DataFrameDataset
dtype_x = torch.float32
dtype_y = torch.long
train_df = DataFrameDataset(train_df, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
train = DataFrameDataset(trainset, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
test = DataFrameDataset(testset, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
n_samples = len(train)

# add the dataset to the fun_control
fun_control.update({"data": train_df, # full dataset
                    "train": train,
                    "test": test,
                    "n_samples": n_samples,
                    "target_column": target_column})

After the training and test data are specified and added to the fun_control dictionary, spotPython allows the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used here, so we do not change the default value (which is None).
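For illustration only (this tutorial keeps the default None), such a preprocessing model could look like the following sketch. The key name "prep_model" and the exact integration into fun_control are assumptions; scikit-learn's MinMaxScaler serves as a stand-in for any preprocessing step:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# hypothetical preprocessing model; "prep_model" is an assumed fun_control key
prep_model = Pipeline([("scaler", MinMaxScaler())])

# a quick standalone check that the pipeline scales features into [0, 1]
X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
print(prep_model.fit_transform(X))
```

In this tutorial's setting it would presumably be registered via fun_control.update({"prep_model": prep_model}); again, this key name is an assumption.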
Select Algorithm and core_model_hyper_dict

spotPython includes the Net_vbdp class, which is implemented in the file netvbdp.py. The class is imported here.
This class inherits from the class Net_Core which is implemented in the file netcore.py, see Section 14.5.1.
from spotPython.torch.netvbdp import Net_vbdp
from spotPython.data.torch_hyper_dict import TorchHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=Net_vbdp,
                                            fun_control=fun_control,
                                            hyper_dict=TorchHyperDict)

The corresponding entries for the core_model class are shown below.
fun_control['core_model_hyper_dict']

{'_L0': {'type': 'int',
'default': 64,
'transform': 'None',
'lower': 64,
'upper': 64},
'l1': {'type': 'int',
'default': 8,
'transform': 'transform_power_2_int',
'lower': 8,
'upper': 16},
'dropout_prob': {'type': 'float',
'default': 0.01,
'transform': 'None',
'lower': 0.0,
'upper': 0.9},
'lr_mult': {'type': 'float',
'default': 1.0,
'transform': 'None',
'lower': 0.1,
'upper': 10.0},
'batch_size': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 4},
'epochs': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 4,
'upper': 9},
'k_folds': {'type': 'int',
'default': 1,
'transform': 'None',
'lower': 1,
'upper': 1},
'patience': {'type': 'int',
'default': 2,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 5},
'optimizer': {'levels': ['Adadelta',
'Adagrad',
'Adam',
'AdamW',
'SparseAdam',
'Adamax',
'ASGD',
'NAdam',
'RAdam',
'RMSprop',
'Rprop',
'SGD'],
'type': 'factor',
'default': 'SGD',
'transform': 'None',
'class_name': 'torch.optim',
'core_model_parameter_type': 'str',
'lower': 0,
'upper': 12},
'sgd_momentum': {'type': 'float',
'default': 0.0,
'transform': 'None',
'lower': 0.0,
'upper': 1.0}}
Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model

spotPython provides functions for modifying the hyperparameters, their bounds and factors, as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 14.6.

Note: epochs and patience are set to small values for demonstration purposes. These values are too small for a real application. More reasonable values would be, e.g.:

fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9]) and
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7])

from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "_L0", bounds=[n_features, n_features])
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[6, 13])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 2])

from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer",["Adam", "AdamW", "Adamax", "NAdam"])
# fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam"])
# fun_control["core_model_hyper_dict"]

Optimizers are described in Section 14.6.1.
fun_control = modify_hyper_parameter_bounds(fun_control,
                                            "lr_mult", bounds=[1e-3, 1e-3])
fun_control = modify_hyper_parameter_bounds(fun_control,
                                            "sgd_momentum", bounds=[0.9, 0.9])

The evaluation procedure requires the specification of two elements: the loss function and the metric.

The loss function is specified by the key "loss_function". We will use CrossEntropyLoss for the multiclass classification task.
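As a reminder of the call signature (a standalone sketch, not part of the tutorial's pipeline): CrossEntropyLoss expects raw, unnormalized logits and integer class labels.

```python
import torch
from torch.nn import CrossEntropyLoss

loss_function = CrossEntropyLoss()
# two samples, three classes; logits are unnormalized scores
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])
labels = torch.tensor([0, 1])  # true class index per sample
loss = loss_function(logits, labels)
print(loss.item())  # a positive scalar; lower is better
```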
from torch.nn import CrossEntropyLoss
loss_function = CrossEntropyLoss()
fun_control.update({"loss_function": loss_function})

The metric is specified by the key "metric_torch". Here, MAPK (mean average precision at k) is used:

from spotPython.torch.mapk import MAPK
import torch
mapk = MAPK(k=2)
target = torch.tensor([0, 1, 2, 2])
preds = torch.tensor(
    [
        [0.5, 0.2, 0.2],  # 0 is in top 2
        [0.3, 0.4, 0.2],  # 1 is in top 2
        [0.2, 0.4, 0.3],  # 2 is in top 2
        [0.7, 0.2, 0.1],  # 2 isn't in top 2
    ]
)
mapk.update(preds, target)
print(mapk.compute()) # tensor(0.6250)

tensor(0.6250)
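To see where the value 0.6250 comes from, MAP@k can be computed by hand: for each sample, the score is 1/rank if the true class appears at position rank (1-based) within the top-k predictions, and 0 otherwise; the per-sample scores are then averaged. A plain NumPy sketch (an independent re-computation, not spotPython's implementation):

```python
import numpy as np

def apk(actual, scores, k=2):
    # rank class indices by descending score and keep the top k
    top_k = np.argsort(scores)[::-1][:k]
    for rank, cls in enumerate(top_k):
        if cls == actual:
            # precision at the (1-based) rank of the true class
            return 1.0 / (rank + 1)
    return 0.0

target = [0, 1, 2, 2]
preds = [
    [0.5, 0.2, 0.2],
    [0.3, 0.4, 0.2],
    [0.2, 0.4, 0.3],
    [0.7, 0.2, 0.1],
]
mapk2 = np.mean([apk(t, p, k=2) for t, p in zip(target, preds)])
print(mapk2)  # (1.0 + 1.0 + 0.5 + 0.0) / 4 = 0.625
```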
from spotPython.torch.mapk import MAPK
import torchmetrics
metric_torch = MAPK(k=3)
fun_control.update({"metric_torch": metric_torch})

The following code passes the information about the parameter ranges and bounds to spot.
# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (get_bound_values,
                                               get_var_name,
                                               get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
                    "var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")

Now, the dictionary fun_control contains all information needed for the hyperparameter tuning. Before the hyperparameter tuning is started, it is recommended to take a look at the experimental design. The method gen_design_table generates a design table as follows:
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))

| name         | type   | default   |    lower |    upper | transform             |
|--------------|--------|-----------|----------|----------|-----------------------|
| _L0 | int | 64 | 6112 | 6112 | None |
| l1 | int | 8 | 6 | 13 | transform_power_2_int |
| dropout_prob | float | 0.01 | 0 | 0.9 | None |
| lr_mult | float | 1.0 | 0.001 | 0.001 | None |
| batch_size | int | 4 | 1 | 4 | transform_power_2_int |
| epochs | int | 4 | 2 | 3 | transform_power_2_int |
| k_folds | int | 1 | 1 | 1 | None |
| patience | int | 2 | 2 | 2 | transform_power_2_int |
| optimizer | factor | SGD | 0 | 3 | None |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | None |
This allows us to check whether all the information is available and correct.
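The transform column shows that some hyperparameters are tuned on a transformed scale. Assuming transform_power_2_int simply maps x to 2**x (consistent with the configurations shown below, e.g., l1 bounds [6, 13] producing layer sizes 64 to 8192), the mapping can be illustrated as:

```python
def transform_power_2_int(x):
    # assumed behavior: the tuner searches over the exponent,
    # the model receives 2**x
    return 2 ** int(x)

# l1 is tuned in [6, 13], so the first hidden layer size ranges over:
print([transform_power_2_int(x) for x in range(6, 14)])
# [64, 128, 256, 512, 1024, 2048, 4096, 8192]
```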
The Objective Function fun_torch

The objective function fun_torch is selected next. It implements an interface from PyTorch's training, validation, and testing methods to spotPython.
from spotPython.fun.hypertorch import HyperTorch
fun = HyperTorch().fun_torch

from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
hyper_dict=TorchHyperDict().load()
X_start = get_default_hyperparameters_as_array(fun_control, hyper_dict)

The spotPython hyperparameter tuning is started by calling the Spot function as described in Section 14.8.4.
import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
                       lower=lower,
                       upper=upper,
                       fun_evals=inf,
                       fun_repeats=1,
                       max_time=MAX_TIME,
                       noise=False,
                       tolerance_x=np.sqrt(np.spacing(1)),
                       var_type=var_type,
                       var_name=var_name,
                       infill_criterion="y",
                       n_points=1,
                       seed=123,
                       log_level=50,
                       show_models=False,
                       show_progress=True,
                       fun_control=fun_control,
                       design_control={"init_size": INIT_SIZE,
                                       "repeats": 1},
                       surrogate_control={"noise": True,
                                          "cod_type": "norm",
                                          "min_theta": -4,
                                          "max_theta": 3,
                                          "n_theta": len(var_name),
                                          "model_fun_evals": 10_000,
                                          "log_level": 50})

spot_tuner.run(X_start=X_start)
config: {'_L0': 6112, 'l1': 2048, 'dropout_prob': 0.17031221661559992, 'lr_mult': 0.001, 'batch_size': 16, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1540178507566452 | Loss: 2.3976302146911621 | Acc: 0.0849056603773585.
Epoch: 2 |
MAPK: 0.1569940596818924 | Loss: 2.3976029668535506 | Acc: 0.0849056603773585.
Epoch: 3 |
MAPK: 0.1696428805589676 | Loss: 2.3975278139114380 | Acc: 0.0849056603773585.
Epoch: 4 |
MAPK: 0.1755952686071396 | Loss: 2.3974441800798689 | Acc: 0.0849056603773585.
Epoch: 5 |
MAPK: 0.1711309701204300 | Loss: 2.3975015878677368 | Acc: 0.0849056603773585.
Epoch: 6 |
MAPK: 0.1763392984867096 | Loss: 2.3974156209400723 | Acc: 0.0849056603773585.
Epoch: 7 |
MAPK: 0.1815476119518280 | Loss: 2.3972805397851125 | Acc: 0.0849056603773585.
Epoch: 8 |
MAPK: 0.1815476268529892 | Loss: 2.3972591842923845 | Acc: 0.0849056603773585.
Returned to Spot: Validation loss: 2.3972591842923845
config: {'_L0': 6112, 'l1': 256, 'dropout_prob': 0.19379790035512987, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 4, 'k_folds': 1, 'patience': 4, 'optimizer': 'Adamax', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1820987761020660 | Loss: 2.3971086519735829 | Acc: 0.1084905660377359.
Epoch: 2 |
MAPK: 0.1851851791143417 | Loss: 2.3970684740278454 | Acc: 0.1037735849056604.
Epoch: 3 |
MAPK: 0.1774691641330719 | Loss: 2.3970886071523032 | Acc: 0.0943396226415094.
Epoch: 4 |
MAPK: 0.1805555522441864 | Loss: 2.3971756652549461 | Acc: 0.1084905660377359.
Returned to Spot: Validation loss: 2.397175665254946
config: {'_L0': 6112, 'l1': 4096, 'dropout_prob': 0.6759063718076167, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1863207966089249 | Loss: 2.3976315979687675 | Acc: 0.0990566037735849.
Epoch: 2 |
MAPK: 0.1926100701093674 | Loss: 2.3974278827883162 | Acc: 0.1037735849056604.
Epoch: 3 |
MAPK: 0.2075471878051758 | Loss: 2.3971737353306897 | Acc: 0.1320754716981132.
Epoch: 4 |
MAPK: 0.1808176040649414 | Loss: 2.3970814223559396 | Acc: 0.1132075471698113.
Epoch: 5 |
MAPK: 0.1768868118524551 | Loss: 2.3969526155939640 | Acc: 0.1132075471698113.
Epoch: 6 |
MAPK: 0.1737421303987503 | Loss: 2.3967153801108307 | Acc: 0.1132075471698113.
Epoch: 7 |
MAPK: 0.1729559749364853 | Loss: 2.3964492105088144 | Acc: 0.1132075471698113.
Epoch: 8 |
MAPK: 0.1713836640119553 | Loss: 2.3960586201469853 | Acc: 0.1132075471698113.
Returned to Spot: Validation loss: 2.3960586201469853
config: {'_L0': 6112, 'l1': 128, 'dropout_prob': 0.37306669346546995, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 4, 'k_folds': 1, 'patience': 4, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.2075471580028534 | Loss: 2.3964649506334990 | Acc: 0.1084905660377359.
Epoch: 2 |
MAPK: 0.2122641503810883 | Loss: 2.3964470287538924 | Acc: 0.1084905660377359.
Epoch: 3 |
MAPK: 0.2083333283662796 | Loss: 2.3964999261892066 | Acc: 0.1084905660377359.
Epoch: 4 |
MAPK: 0.2091194689273834 | Loss: 2.3964180181611261 | Acc: 0.1084905660377359.
Returned to Spot: Validation loss: 2.396418018161126
config: {'_L0': 6112, 'l1': 1024, 'dropout_prob': 0.870137281216666, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'Adam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1836419701576233 | Loss: 2.3978879275145353 | Acc: 0.0990566037735849.
Epoch: 2 |
MAPK: 0.1751543283462524 | Loss: 2.3980012911337392 | Acc: 0.0943396226415094.
Epoch: 3 |
MAPK: 0.1689814776182175 | Loss: 2.3977711112410933 | Acc: 0.0943396226415094.
Epoch: 4 |
MAPK: 0.1728395074605942 | Loss: 2.3980036488285772 | Acc: 0.0896226415094340.
Epoch: 5 |
MAPK: 0.1882716268301010 | Loss: 2.3978758829611317 | Acc: 0.0990566037735849.
Epoch: 6 |
MAPK: 0.1766975373029709 | Loss: 2.3980953869996249 | Acc: 0.1037735849056604.
Epoch: 7 |
MAPK: 0.1728395223617554 | Loss: 2.3980912102593317 | Acc: 0.0990566037735849.
Early stopping at epoch 6
Returned to Spot: Validation loss: 2.3980912102593317
config: {'_L0': 6112, 'l1': 4096, 'dropout_prob': 0.6451395692472426, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1218553408980370 | Loss: 2.3982275364533909 | Acc: 0.0518867924528302.
Epoch: 2 |
MAPK: 0.1627358645200729 | Loss: 2.3976766995663912 | Acc: 0.0613207547169811.
Epoch: 3 |
MAPK: 0.1422956138849258 | Loss: 2.3977168101184771 | Acc: 0.0660377358490566.
Epoch: 4 |
MAPK: 0.1627358645200729 | Loss: 2.3974515649507628 | Acc: 0.0566037735849057.
Epoch: 5 |
MAPK: 0.1572327464818954 | Loss: 2.3971563870052122 | Acc: 0.0566037735849057.
Epoch: 6 |
MAPK: 0.1650943458080292 | Loss: 2.3967831089811504 | Acc: 0.0613207547169811.
Epoch: 7 |
MAPK: 0.1643081903457642 | Loss: 2.3961733962005041 | Acc: 0.0613207547169811.
Epoch: 8 |
MAPK: 0.1603773534297943 | Loss: 2.3953707015739298 | Acc: 0.0707547169811321.
Returned to Spot: Validation loss: 2.39537070157393
spotPython tuning: 2.39537070157393 [##########] 96.02%
config: {'_L0': 6112, 'l1': 4096, 'dropout_prob': 0.46046466104295497, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1784591376781464 | Loss: 2.3977366865805858 | Acc: 0.0990566037735849.
Epoch: 2 |
MAPK: 0.2130503207445145 | Loss: 2.3973819917103030 | Acc: 0.1367924528301887.
Epoch: 3 |
MAPK: 0.2256288826465607 | Loss: 2.3970227016592927 | Acc: 0.1415094339622641.
Epoch: 4 |
MAPK: 0.2232704013586044 | Loss: 2.3964527103136168 | Acc: 0.1367924528301887.
Epoch: 5 |
MAPK: 0.2311320602893829 | Loss: 2.3956421141354545 | Acc: 0.1320754716981132.
Epoch: 6 |
MAPK: 0.2193395793437958 | Loss: 2.3947185772769854 | Acc: 0.1273584905660377.
Epoch: 7 |
MAPK: 0.2240566015243530 | Loss: 2.3934073943012164 | Acc: 0.1179245283018868.
Epoch: 8 |
MAPK: 0.2248427569866180 | Loss: 2.3915365304587022 | Acc: 0.1226415094339623.
Returned to Spot: Validation loss: 2.391536530458702
spotPython tuning: 2.391536530458702 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x189ae71f0>
The textual output shown in the console (or code cell) can be visualized with Tensorboard as described in Section 14.9, see also the description in the documentation: Tensorboard.
After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.
spot_tuner.plot_progress(log_y=False,
                         filename="./figures/" + experiment_name + "_progress.png")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control=fun_control, spot=spot_tuner))

| name         | type   | default   |   lower |   upper | tuned               | transform             |   importance | stars   |
|--------------|--------|-----------|---------|---------|---------------------|-----------------------|--------------|---------|
| _L0 | int | 64 | 6112.0 | 6112.0 | 6112.0 | None | 0.00 | |
| l1 | int | 8 | 6.0 | 13.0 | 12.0 | transform_power_2_int | 0.14 | . |
| dropout_prob | float | 0.01 | 0.0 | 0.9 | 0.46046466104295497 | None | 3.06 | * |
| lr_mult | float | 1.0 | 0.001 | 0.001 | 0.001 | None | 0.00 | |
| batch_size | int | 4 | 1.0 | 4.0 | 1.0 | transform_power_2_int | 100.00 | *** |
| epochs | int | 4 | 2.0 | 3.0 | 3.0 | transform_power_2_int | 0.19 | . |
| k_folds | int | 1 | 1.0 | 1.0 | 1.0 | None | 0.00 | |
| patience | int | 2 | 2.0 | 2.0 | 2.0 | transform_power_2_int | 0.00 | |
| optimizer | factor | SGD | 0.0 | 3.0 | 3.0 | None | 4.72 | * |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | 0.9 | None | 0.00 | |
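In the table above, the factor hyperparameter optimizer shows the tuned value 3.0. Factor hyperparameters are tuned as integer indices into their list of levels, so with the four levels selected earlier the value decodes as follows (a sketch of the assumed encoding):

```python
# the levels set via modify_hyper_parameter_levels above
levels = ["Adam", "AdamW", "Adamax", "NAdam"]
tuned = 3.0  # value from the "tuned" column
print(levels[int(tuned)])  # index 3 -> NAdam, matching the best config
```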
spot_tuner.plot_importance(threshold=0.025,
                           filename="./figures/" + experiment_name + "_importance.png")
from spotPython.hyperparameters.values import get_one_core_model_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
model_spot = get_one_core_model_from_X(X, fun_control)
model_spot

Net_vbdp(
(fc1): Linear(in_features=6112, out_features=4096, bias=True)
(fc2): Linear(in_features=4096, out_features=2048, bias=True)
(fc3): Linear(in_features=2048, out_features=1024, bias=True)
(fc4): Linear(in_features=1024, out_features=512, bias=True)
(fc5): Linear(in_features=512, out_features=11, bias=True)
(relu): ReLU()
(softmax): Softmax(dim=1)
(dropout1): Dropout(p=0.46046466104295497, inplace=False)
(dropout2): Dropout(p=0.23023233052147749, inplace=False)
)
from spotPython.torch.traintest import (
    train_tuned,
    test_tuned,
)
train_tuned(net=model_spot, train_dataset=train,
            loss_function=fun_control["loss_function"],
            metric=fun_control["metric_torch"],
            shuffle=True,
            device=fun_control["device"],
            path=None,
            task=fun_control["task"],)

Epoch: 1 |
MAPK: 0.1627358347177505 | Loss: 2.3976274706282705 | Acc: 0.0990566037735849.
Epoch: 2 |
MAPK: 0.1611635237932205 | Loss: 2.3973149983388073 | Acc: 0.0943396226415094.
Epoch: 3 |
MAPK: 0.1933961957693100 | Loss: 2.3969106044409410 | Acc: 0.1132075471698113.
Epoch: 4 |
MAPK: 0.2099056541919708 | Loss: 2.3965457700333506 | Acc: 0.1037735849056604.
Epoch: 5 |
MAPK: 0.2091194689273834 | Loss: 2.3956661404303783 | Acc: 0.1037735849056604.
Epoch: 6 |
MAPK: 0.1863207519054413 | Loss: 2.3948831828135364 | Acc: 0.0943396226415094.
Epoch: 7 |
MAPK: 0.1965408921241760 | Loss: 2.3936074229906188 | Acc: 0.0990566037735849.
Epoch: 8 |
MAPK: 0.1973270177841187 | Loss: 2.3920769286605545 | Acc: 0.0990566037735849.
Returned to Spot: Validation loss: 2.3920769286605545
If path is set to a filename, e.g., path = "model_spot_trained.pt", the weights of the trained model will be loaded from this file.
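The save/restore mechanism behind the path argument is presumably PyTorch's state_dict serialization; a generic, self-contained sketch with a toy model (not Net_vbdp):

```python
import os
import tempfile

import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # toy stand-in model
path = os.path.join(tempfile.gettempdir(), "model_spot_trained.pt")
torch.save(net.state_dict(), path)

# restoring requires a fresh instance of the same architecture
net2 = nn.Linear(4, 2)
net2.load_state_dict(torch.load(path))
print(torch.equal(net.weight, net2.weight))  # True: weights were restored
```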
test_tuned(net=model_spot, test_dataset=test,
           shuffle=False,
           loss_function=fun_control["loss_function"],
           metric=fun_control["metric_torch"],
           device=fun_control["device"],
           task=fun_control["task"],)

MAPK: 0.2219101339578629 | Loss: 2.3883381907859547 | Acc: 0.1186440677966102.
Final evaluation: Validation loss: 2.3883381907859547
Final evaluation: Validation metric: 0.22191013395786285
----------------------------------------------
(2.3883381907859547, nan, tensor(0.2219))
The method evaluate_cv performs a k-fold cross-validation. The number of folds is set via the k_folds attribute of the model, e.g., as follows: setattr(model_spot, "k_folds", 10)

from spotPython.torch.traintest import evaluate_cv
# modify k-folds:
setattr(model_spot, "k_folds", 3)
df_eval, df_preds, df_metrics = evaluate_cv(net=model_spot,
                                            dataset=fun_control["data"],
                                            loss_function=fun_control["loss_function"],
                                            metric=fun_control["metric_torch"],
                                            task=fun_control["task"],
                                            writer=fun_control["writer"],
                                            writerId="model_spot_cv",
                                            device=fun_control["device"])

Fold: 1
Epoch: 1 |
MAPK: 0.1631355732679367 | Loss: 2.3976899385452271 | Acc: 0.0720338983050847.
Epoch: 2 |
MAPK: 0.1935028284788132 | Loss: 2.3970449900223039 | Acc: 0.0593220338983051.
Epoch: 3 |
MAPK: 0.2153954058885574 | Loss: 2.3962083509412864 | Acc: 0.0974576271186441.
Epoch: 4 |
MAPK: 0.2104519307613373 | Loss: 2.3953836327892239 | Acc: 0.1144067796610169.
Epoch: 5 |
MAPK: 0.2224575728178024 | Loss: 2.3935029122789029 | Acc: 0.1483050847457627.
Epoch: 6 |
MAPK: 0.2097457051277161 | Loss: 2.3908023712998729 | Acc: 0.1313559322033898.
Epoch: 7 |
MAPK: 0.2125705927610397 | Loss: 2.3872123269711509 | Acc: 0.1271186440677966.
Epoch: 8 |
MAPK: 0.2196327298879623 | Loss: 2.3841707443786881 | Acc: 0.1271186440677966.
Fold: 2
Epoch: 1 |
MAPK: 0.2387005239725113 | Loss: 2.3975351341700151 | Acc: 0.1737288135593220.
Epoch: 2 |
MAPK: 0.2817796170711517 | Loss: 2.3968791274701133 | Acc: 0.1991525423728814.
Epoch: 3 |
MAPK: 0.2761299014091492 | Loss: 2.3960167250390780 | Acc: 0.1694915254237288.
Epoch: 4 |
MAPK: 0.2803671956062317 | Loss: 2.3947743743152943 | Acc: 0.1652542372881356.
Epoch: 5 |
MAPK: 0.2676552832126617 | Loss: 2.3924352536767217 | Acc: 0.1398305084745763.
Epoch: 6 |
MAPK: 0.2563558816909790 | Loss: 2.3891501164032243 | Acc: 0.1271186440677966.
Epoch: 7 |
MAPK: 0.2789547741413116 | Loss: 2.3855787293385653 | Acc: 0.1694915254237288.
Epoch: 8 |
MAPK: 0.2874293327331543 | Loss: 2.3826769650992699 | Acc: 0.1822033898305085.
Fold: 3
Epoch: 1 |
MAPK: 0.1765536814928055 | Loss: 2.3975534519906772 | Acc: 0.1063829787234043.
Epoch: 2 |
MAPK: 0.1786722987890244 | Loss: 2.3971492678432140 | Acc: 0.0936170212765957.
Epoch: 3 |
MAPK: 0.1850282549858093 | Loss: 2.3965534961829751 | Acc: 0.1191489361702128.
Epoch: 4 |
MAPK: 0.1822033971548080 | Loss: 2.3956931001048978 | Acc: 0.1021276595744681.
Epoch: 5 |
MAPK: 0.1927965879440308 | Loss: 2.3940545199281078 | Acc: 0.1191489361702128.
Epoch: 6 |
MAPK: 0.1857344359159470 | Loss: 2.3918487924640464 | Acc: 0.1063829787234043.
Epoch: 7 |
MAPK: 0.1807909309864044 | Loss: 2.3880215681205361 | Acc: 0.0851063829787234.
Epoch: 8 |
MAPK: 0.2323445975780487 | Loss: 2.3856813564138899 | Acc: 0.1319148936170213.
metric_name = type(fun_control["metric_torch"]).__name__
print(f"loss: {df_eval}, Cross-validated {metric_name}: {df_metrics}")

loss: 2.384176355297283, Cross-validated MAPK: 0.2464689016342163
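The general k-fold pattern behind evaluate_cv can be sketched with scikit-learn's KFold. This illustrates the procedure only, not spotPython's implementation: the data is split into k disjoint validation folds, the model is trained on the remainder, and the fold scores are averaged.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # ten toy samples
kf = KFold(n_splits=3, shuffle=True, random_state=42)
fold_sizes = []
for train_idx, val_idx in kf.split(X):
    # a real run would train on train_idx and evaluate on val_idx;
    # here we only record the split sizes
    fold_sizes.append((len(train_idx), len(val_idx)))
print(fold_sizes)  # each sample appears in exactly one validation fold
```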
filename = "./figures/" + experiment_name
spot_tuner.plot_important_hyperparameter_contour(filename=filename)

l1: 0.14155091712282247
dropout_prob: 3.0557577535882814
batch_size: 100.0
epochs: 0.1857261662232338
optimizer: 4.718527868292797
spot_tuner.parallel_plot()

Parallel coordinates plots
# close tensorboard writer
if fun_control["writer"] is not None:
    fun_control["writer"].close()

PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    for i in range(n-1):
        for j in range(i+1, n):
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)