In this tutorial, we will show how spotPython can be integrated into the PyTorch training workflow for a classification task.
The data is available in ./data/VBDP/train.csv. This document refers to the following software versions:

python: 3.10.10
torch: 2.0.1
torchvision: 0.15.0

pip list | grep "spot[RiverPython]"

spotPython 0.2.52
spotRiver 0.0.94
Note: you may need to restart the kernel to use updated packages.
spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.
!pip install spotPython
The following commands can be used to install the latest build of spotPython from GitHub.

# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython

Before we consider the detailed experimental setup, we select the parameters that affect run time, the initial design size, and the device that is used.
The device is selected via DEVICE. On a Mac, "cpu" is preferred; on systems with a GPU, "cuda:0" can be used instead. If DEVICE is set to None, spotPython will automatically select the device. Note that the automatic selection chooses "mps" on Macs, which is not the best choice for simple neural nets.

MAX_TIME = 1
INIT_SIZE = 5
DEVICE = None # "cpu" # "cuda:0"

from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)

mps
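The selection logic of getDevice lives in spotPython; the following is only a hypothetical sketch of the idea (the function name and the availability flags here are illustrative, not spotPython's API):

```python
def get_device_sketch(device=None, cuda_available=False, mps_available=False):
    # If the user passed an explicit device string, honor it;
    # otherwise fall back to the best available backend.
    if device is not None:
        return device
    if cuda_available:
        return "cuda:0"
    if mps_available:
        return "mps"
    return "cpu"

# On a Mac with Metal support, automatic selection would resolve to "mps":
print(get_device_sketch(None, mps_available=True))  # mps
```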
import os
import copy
import socket
from datetime import datetime
from dateutil.tz import tzlocal
start_time = datetime.now(tzlocal())
HOSTNAME = socket.gethostname().split(".")[0]
experiment_name = '25-torch' + "_" + HOSTNAME + "_" + str(MAX_TIME) + "min_" + str(INIT_SIZE) + "init_" + str(start_time).split(".", 1)[0].replace(' ', '_')
experiment_name = experiment_name.replace(':', '-')
print(experiment_name)
if not os.path.exists('./figures'):
    os.makedirs('./figures')

25-torch_maans03_1min_5init_2023-07-03_13-26-02
The fun_control Dictionary

Note: set tensorboard_path to None if you are working under Windows.

spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2, see Initialization of the fun_control Dictionary in the documentation.
from spotPython.utils.init import fun_control_init
fun_control = fun_control_init(task="classification",
tensorboard_path="runs/25_spot_torch_vbdp",
device=DEVICE)

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
train_df = pd.read_csv('./data/VBDP/train.csv')
# remove the id column
train_df = train_df.drop(columns=['id'])
n_samples = train_df.shape[0]
n_features = train_df.shape[1] - 1
target_column = "prognosis"
# Encode our prognosis labels as integers for easier decoding later
enc = OrdinalEncoder()
train_df[target_column] = enc.fit_transform(train_df[[target_column]])
# convert all entries to int for faster processing
train_df = train_df.astype(int)

from spotPython.utils.convert import add_logical_columns
df_new = train_df.copy()
# save the target column using "target_column" as the column name
target = train_df[target_column]
# remove the target column
df_new = df_new.drop(columns=[target_column])
train_df = add_logical_columns(df_new)
# add the target column back
train_df[target_column] = target
train_df = train_df.astype(int)

from sklearn.model_selection import train_test_split
import numpy as np
n_samples = train_df.shape[0]
n_features = train_df.shape[1] - 1
train_df.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]

train_df[target_column].head()

0 3
1 7
2 3
3 10
4 6
Name: prognosis, dtype: int64
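The add_logical_columns call above expands the binary symptom features. Its exact behavior is defined in spotPython's spotPython.utils.convert module; as a plausibility check, combining 64 original 0/1 features pairwise with and, or, and xor would yield 64 + 3 * (64 * 63 / 2) = 6112 columns, which matches the n_features reported later. A hypothetical re-implementation of that idea:

```python
from itertools import combinations
import pandas as pd

def add_logical_columns_sketch(df):
    # Hypothetical: augment 0/1 feature columns with pairwise
    # AND, OR, and XOR combinations (the real function may differ).
    new_cols = {}
    for a, b in combinations(df.columns, 2):
        new_cols[f"{a}_and_{b}"] = df[a] & df[b]
        new_cols[f"{a}_or_{b}"] = df[a] | df[b]
        new_cols[f"{a}_xor_{b}"] = df[a] ^ df[b]
    return pd.concat([df, pd.DataFrame(new_cols)], axis=1)

toy = pd.DataFrame({"s1": [1, 0], "s2": [1, 1]})
print(add_logical_columns_sketch(toy).shape)  # (2, 5): 2 original + 3 derived columns
```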
X_train, X_test, y_train, y_test = train_test_split(train_df.drop(target_column, axis=1), train_df[target_column],
random_state=42,
test_size=0.25,
stratify=train_df[target_column])
trainset = pd.DataFrame(np.hstack((X_train, np.array(y_train).reshape(-1, 1))))
testset = pd.DataFrame(np.hstack((X_test, np.array(y_test).reshape(-1, 1))))
trainset.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
testset.columns = [f"x{i}" for i in range(1, n_features+1)] + [target_column]
print(train_df.shape)
print(trainset.shape)
print(testset.shape)

(707, 6113)
(530, 6113)
(177, 6113)
import torch
from sklearn.model_selection import train_test_split
from spotPython.torch.dataframedataset import DataFrameDataset
dtype_x = torch.float32
dtype_y = torch.long
train_df = DataFrameDataset(train_df, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
train = DataFrameDataset(trainset, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
test = DataFrameDataset(testset, target_column=target_column, dtype_x=dtype_x, dtype_y=dtype_y)
n_samples = len(train)

# add the dataset to the fun_control
fun_control.update({"data": train_df, # full dataset,
"train": train,
"test": test,
"n_samples": n_samples,
"target_column": target_column})After the training and test data are specified and added to the fun_control dictionary, spotPython allows the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used here, so we do not change the default value (which is None).
Selection of the Algorithm and core_model_hyper_dict

spotPython includes the Net_vbdp class, which is implemented in the file netvbdp.py. The class is imported here.
This class inherits from the class Net_Core which is implemented in the file netcore.py, see Section 14.5.1.
from spotPython.torch.netvbdp import Net_vbdp
from spotPython.data.torch_hyper_dict import TorchHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=Net_vbdp,
fun_control=fun_control,
hyper_dict=TorchHyperDict)

The corresponding entries for the core_model class are shown below.
fun_control['core_model_hyper_dict']

{'_L0': {'type': 'int',
'default': 64,
'transform': 'None',
'lower': 64,
'upper': 64},
'l1': {'type': 'int',
'default': 8,
'transform': 'transform_power_2_int',
'lower': 8,
'upper': 16},
'dropout_prob': {'type': 'float',
'default': 0.01,
'transform': 'None',
'lower': 0.0,
'upper': 0.9},
'lr_mult': {'type': 'float',
'default': 1.0,
'transform': 'None',
'lower': 0.1,
'upper': 10.0},
'batch_size': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 4},
'epochs': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 4,
'upper': 9},
'k_folds': {'type': 'int',
'default': 1,
'transform': 'None',
'lower': 1,
'upper': 1},
'patience': {'type': 'int',
'default': 2,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 5},
'optimizer': {'levels': ['Adadelta',
'Adagrad',
'Adam',
'AdamW',
'SparseAdam',
'Adamax',
'ASGD',
'NAdam',
'RAdam',
'RMSprop',
'Rprop',
'SGD'],
'type': 'factor',
'default': 'SGD',
'transform': 'None',
'class_name': 'torch.optim',
'core_model_parameter_type': 'str',
'lower': 0,
'upper': 12},
'sgd_momentum': {'type': 'float',
'default': 0.0,
'transform': 'None',
'lower': 0.0,
'upper': 1.0}}
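Several entries above use the transform transform_power_2_int: the tuner searches over the exponent, and the model receives 2**x. A minimal sketch of this mapping:

```python
def transform_power_2_int(x):
    # The tuner proposes the exponent x; the model sees 2**x.
    return int(2 ** x)

# l1 bounds [8, 16] therefore correspond to layer widths 256 .. 65536,
# and batch_size bounds [1, 4] to batch sizes 2 .. 16.
print([transform_power_2_int(x) for x in (8, 16, 1, 4)])  # [256, 65536, 2, 16]
```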
Modify hyper_dict Hyperparameters for the Selected Algorithm (aka core_model)

spotPython provides functions for modifying the hyperparameters, their bounds and factors, as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 14.6.
Note: epochs and patience are set to small values for demonstration purposes. These values are too small for a real application. More reasonable values would be, e.g.:

fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9]) and
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7])

from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "_L0", bounds=[n_features, n_features])
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[6, 13])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 2])

from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam", "AdamW", "Adamax", "NAdam"])
# fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam"])
# fun_control["core_model_hyper_dict"]

Optimizers are described in Section 14.6.1.
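Factor hyperparameters such as optimizer are encoded numerically: the tuner proposes a value in [lower, upper], which indexes into the levels list. A hypothetical sketch of this decoding (the actual mapping lives in spotPython):

```python
levels = ["Adam", "AdamW", "Adamax", "NAdam"]

def decode_factor(x, levels):
    # Hypothetical: round the numeric proposal and index into the levels.
    return levels[int(round(x))]

print(decode_factor(3, levels))  # NAdam
```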
fun_control = modify_hyper_parameter_bounds(fun_control,
"lr_mult", bounds=[1e-3, 1e-3])
fun_control = modify_hyper_parameter_bounds(fun_control,
"sgd_momentum", bounds=[0.9, 0.9])The evaluation procedure requires the specification of two elements:
The loss function is specified by the key "loss_function". We will use CrossEntropy loss for the multiclass-classification task.
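As a sanity check for the loss values reported later: the VBDP task has 11 classes, so a model that effectively predicts a uniform distribution has cross entropy log(11) ≈ 2.3979, which is almost exactly the validation loss most configurations hover at. A pure-Python computation of the per-sample cross entropy from raw logits:

```python
import math

def cross_entropy(logits, target):
    # Numerically stable form: CE = logsumexp(logits) - logits[target]
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# Equal logits over 11 classes reproduce the ~2.3979 baseline loss:
print(round(cross_entropy([0.0] * 11, 3), 4))  # 2.3979
```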
from torch.nn import CrossEntropyLoss
loss_function = CrossEntropyLoss()
fun_control.update({"loss_function": loss_function})

The metric MAPK (mean average precision at k) is implemented in spotPython. The following small example illustrates how it is computed:

from spotPython.torch.mapk import MAPK
import torch
mapk = MAPK(k=2)
target = torch.tensor([0, 1, 2, 2])
preds = torch.tensor(
[
[0.5, 0.2, 0.2], # 0 is in top 2
[0.3, 0.4, 0.2], # 1 is in top 2
[0.2, 0.4, 0.3], # 2 is in top 2
[0.7, 0.2, 0.1], # 2 isn't in top 2
]
)
mapk.update(preds, target)
print(mapk.compute()) # tensor(0.6250)

tensor(0.6250)
from spotPython.torch.mapk import MAPK
import torchmetrics
metric_torch = MAPK(k=3)
fun_control.update({"metric_torch": metric_torch})

The following code passes the information about the parameter ranges and bounds to spot.
# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (get_bound_values,
get_var_name,
get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
"var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")

Now, the dictionary fun_control contains all information needed for the hyperparameter tuning. Before the hyperparameter tuning is started, it is recommended to take a look at the experimental design. The method gen_design_table generates a design table as follows:
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))

| name | type | default | lower | upper | transform |
|--------------|--------|-----------|----------|----------|-----------------------|
| _L0 | int | 64 | 6112 | 6112 | None |
| l1 | int | 8 | 6 | 13 | transform_power_2_int |
| dropout_prob | float | 0.01 | 0 | 0.9 | None |
| lr_mult | float | 1.0 | 0.001 | 0.001 | None |
| batch_size | int | 4 | 1 | 4 | transform_power_2_int |
| epochs | int | 4 | 2 | 3 | transform_power_2_int |
| k_folds | int | 1 | 1 | 1 | None |
| patience | int | 2 | 2 | 2 | transform_power_2_int |
| optimizer | factor | SGD | 0 | 3 | None |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | None |
This allows us to check whether all information is available and correct.
The Objective Function fun_torch

The objective function fun_torch is selected next. It implements an interface from PyTorch's training, validation, and testing methods to spotPython.
from spotPython.fun.hypertorch import HyperTorch
fun = HyperTorch().fun_torch

from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
hyper_dict=TorchHyperDict().load()
X_start = get_default_hyperparameters_as_array(fun_control, hyper_dict)

The spotPython hyperparameter tuning is started by calling the Spot function as described in Section 14.8.4.
import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
lower = lower,
upper = upper,
fun_evals = inf,
fun_repeats = 1,
max_time = MAX_TIME,
noise = False,
tolerance_x = np.sqrt(np.spacing(1)),
var_type = var_type,
var_name = var_name,
infill_criterion = "y",
n_points = 1,
seed=123,
log_level = 50,
show_models= False,
show_progress= True,
fun_control = fun_control,
design_control={"init_size": INIT_SIZE,
"repeats": 1},
surrogate_control={"noise": True,
"cod_type": "norm",
"min_theta": -4,
"max_theta": 3,
"n_theta": len(var_name),
"model_fun_evals": 10_000,
"log_level": 50
})
spot_tuner.run(X_start=X_start)
config: {'_L0': 6112, 'l1': 2048, 'dropout_prob': 0.17031221661559992, 'lr_mult': 0.001, 'batch_size': 16, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1793154776096344 | Loss: 2.3979182073048184 | Acc: 0.1273584905660377.
Epoch: 2 |
MAPK: 0.1845238208770752 | Loss: 2.3979019778115407 | Acc: 0.1320754716981132.
Epoch: 3 |
MAPK: 0.2008928507566452 | Loss: 2.3978130817413330 | Acc: 0.1367924528301887.
Epoch: 4 |
MAPK: 0.1964285671710968 | Loss: 2.3978581598826816 | Acc: 0.1415094339622641.
Epoch: 5 |
MAPK: 0.2142857164144516 | Loss: 2.3978352035794939 | Acc: 0.1509433962264151.
Epoch: 6 |
MAPK: 0.2105654776096344 | Loss: 2.3977752923965454 | Acc: 0.1320754716981132.
Epoch: 7 |
MAPK: 0.2343750000000000 | Loss: 2.3977158580507552 | Acc: 0.1698113207547170.
Epoch: 8 |
MAPK: 0.2232142686843872 | Loss: 2.3976944174085344 | Acc: 0.1415094339622641.
Returned to Spot: Validation loss: 2.3976944174085344
config: {'_L0': 6112, 'l1': 256, 'dropout_prob': 0.19379790035512987, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 4, 'k_folds': 1, 'patience': 4, 'optimizer': 'Adamax', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1712962985038757 | Loss: 2.3980199672557689 | Acc: 0.0990566037735849.
Epoch: 2 |
MAPK: 0.1689814776182175 | Loss: 2.3979500134785972 | Acc: 0.0943396226415094.
Epoch: 3 |
MAPK: 0.1666666716337204 | Loss: 2.3979757538548223 | Acc: 0.0849056603773585.
Epoch: 4 |
MAPK: 0.1550925970077515 | Loss: 2.3980355969181768 | Acc: 0.0613207547169811.
Returned to Spot: Validation loss: 2.398035596918177
config: {'_L0': 6112, 'l1': 4096, 'dropout_prob': 0.6759063718076167, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1533019095659256 | Loss: 2.3977285038750127 | Acc: 0.0707547169811321.
Epoch: 2 |
MAPK: 0.1627358347177505 | Loss: 2.3976753702703513 | Acc: 0.0896226415094340.
Epoch: 3 |
MAPK: 0.1910377293825150 | Loss: 2.3973205426953874 | Acc: 0.1179245283018868.
Epoch: 4 |
MAPK: 0.1816037744283676 | Loss: 2.3971340116464868 | Acc: 0.0990566037735849.
Epoch: 5 |
MAPK: 0.2114779651165009 | Loss: 2.3967223572281173 | Acc: 0.1179245283018868.
Epoch: 6 |
MAPK: 0.1957547366619110 | Loss: 2.3965382081157758 | Acc: 0.0990566037735849.
Epoch: 7 |
MAPK: 0.2256288826465607 | Loss: 2.3956945887151755 | Acc: 0.1273584905660377.
Epoch: 8 |
MAPK: 0.2130502909421921 | Loss: 2.3951818313238755 | Acc: 0.1226415094339623.
Returned to Spot: Validation loss: 2.3951818313238755
config: {'_L0': 6112, 'l1': 128, 'dropout_prob': 0.37306669346546995, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 4, 'k_folds': 1, 'patience': 4, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1470125913619995 | Loss: 2.3993017583523155 | Acc: 0.0896226415094340.
Epoch: 2 |
MAPK: 0.1470125913619995 | Loss: 2.3992382625363908 | Acc: 0.0896226415094340.
Epoch: 3 |
MAPK: 0.1470125913619995 | Loss: 2.3992580557769201 | Acc: 0.0896226415094340.
Epoch: 4 |
MAPK: 0.1446540951728821 | Loss: 2.3991979338088125 | Acc: 0.0849056603773585.
Returned to Spot: Validation loss: 2.3991979338088125
config: {'_L0': 6112, 'l1': 1024, 'dropout_prob': 0.870137281216666, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'Adam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1288580447435379 | Loss: 2.3975733032932989 | Acc: 0.0424528301886792.
Epoch: 2 |
MAPK: 0.1682098656892776 | Loss: 2.3977194273913347 | Acc: 0.0943396226415094.
Epoch: 3 |
MAPK: 0.1743827015161514 | Loss: 2.3975409225181297 | Acc: 0.0707547169811321.
Epoch: 4 |
MAPK: 0.1712962985038757 | Loss: 2.3976151325084545 | Acc: 0.0990566037735849.
Epoch: 5 |
MAPK: 0.1797839254140854 | Loss: 2.3974891680258290 | Acc: 0.0896226415094340.
Epoch: 6 |
MAPK: 0.1628086417913437 | Loss: 2.3976068055188215 | Acc: 0.0849056603773585.
Epoch: 7 |
MAPK: 0.1658950597047806 | Loss: 2.3975864074848316 | Acc: 0.0896226415094340.
Epoch: 8 |
MAPK: 0.1604938358068466 | Loss: 2.3976810243394642 | Acc: 0.0801886792452830.
Returned to Spot: Validation loss: 2.397681024339464
config: {'_L0': 6112, 'l1': 4096, 'dropout_prob': 0.4132005099912892, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.2044024914503098 | Loss: 2.3973671445306741 | Acc: 0.1226415094339623.
Epoch: 2 |
MAPK: 0.2295597344636917 | Loss: 2.3970097910683110 | Acc: 0.1226415094339623.
Epoch: 3 |
MAPK: 0.2397798150777817 | Loss: 2.3966489625426957 | Acc: 0.1226415094339623.
Epoch: 4 |
MAPK: 0.2649370431900024 | Loss: 2.3960055427731208 | Acc: 0.1226415094339623.
Epoch: 5 |
MAPK: 0.2547169029712677 | Loss: 2.3953653956359289 | Acc: 0.1226415094339623.
Epoch: 6 |
MAPK: 0.2720125317573547 | Loss: 2.3940943771938108 | Acc: 0.1226415094339623.
Epoch: 7 |
MAPK: 0.2397798299789429 | Loss: 2.3928466540462567 | Acc: 0.1226415094339623.
Epoch: 8 |
MAPK: 0.2350628525018692 | Loss: 2.3911897168969207 | Acc: 0.1226415094339623.
Returned to Spot: Validation loss: 2.3911897168969207
spotPython tuning: 2.3911897168969207 [########--] 81.78%
config: {'_L0': 6112, 'l1': 512, 'dropout_prob': 0.4054506390535282, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 8, 'k_folds': 1, 'patience': 4, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MAPK: 0.1533019095659256 | Loss: 2.3983700455359691 | Acc: 0.0801886792452830.
Epoch: 2 |
MAPK: 0.1470126062631607 | Loss: 2.3983595596169525 | Acc: 0.0801886792452830.
Epoch: 3 |
MAPK: 0.1462264358997345 | Loss: 2.3983060413936399 | Acc: 0.0801886792452830.
Epoch: 4 |
MAPK: 0.1540880799293518 | Loss: 2.3982332292592750 | Acc: 0.0801886792452830.
Epoch: 5 |
MAPK: 0.1517295837402344 | Loss: 2.3982075970127896 | Acc: 0.0801886792452830.
Epoch: 6 |
MAPK: 0.1501572579145432 | Loss: 2.3981801293930918 | Acc: 0.0801886792452830.
Epoch: 7 |
MAPK: 0.1580188870429993 | Loss: 2.3981470701829442 | Acc: 0.0801886792452830.
Epoch: 8 |
MAPK: 0.1627358645200729 | Loss: 2.3980459537146226 | Acc: 0.0801886792452830.
Returned to Spot: Validation loss: 2.3980459537146226
spotPython tuning: 2.3911897168969207 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x1895ab2b0>
The textual output shown in the console (or code cell) can be visualized with Tensorboard as described in Section 14.9, see also the description in the documentation: Tensorboard.
After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.
spot_tuner.plot_progress(log_y=False,
filename="./figures/" + experiment_name+"_progress.png")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control=fun_control, spot=spot_tuner))

| name | type | default | lower | upper | tuned | transform | importance | stars |
|--------------|--------|-----------|---------|---------|--------------------|-----------------------|--------------|---------|
| _L0 | int | 64 | 6112.0 | 6112.0 | 6112.0 | None | 0.00 | |
| l1 | int | 8 | 6.0 | 13.0 | 12.0 | transform_power_2_int | 0.00 | |
| dropout_prob | float | 0.01 | 0.0 | 0.9 | 0.4132005099912892 | None | 0.73 | . |
| lr_mult | float | 1.0 | 0.001 | 0.001 | 0.001 | None | 0.00 | |
| batch_size | int | 4 | 1.0 | 4.0 | 1.0 | transform_power_2_int | 100.00 | *** |
| epochs | int | 4 | 2.0 | 3.0 | 3.0 | transform_power_2_int | 0.00 | |
| k_folds | int | 1 | 1.0 | 1.0 | 1.0 | None | 0.00 | |
| patience | int | 2 | 2.0 | 2.0 | 2.0 | transform_power_2_int | 0.00 | |
| optimizer | factor | SGD | 0.0 | 3.0 | 3.0 | None | 0.00 | |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | 0.9 | None | 0.00 | |
spot_tuner.plot_importance(threshold=0.025,
filename="./figures/" + experiment_name+"_importance.png")
from spotPython.hyperparameters.values import get_one_core_model_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
model_spot = get_one_core_model_from_X(X, fun_control)
model_spot

Net_vbdp(
(fc1): Linear(in_features=6112, out_features=4096, bias=True)
(fc2): Linear(in_features=4096, out_features=2048, bias=True)
(fc3): Linear(in_features=2048, out_features=1024, bias=True)
(fc4): Linear(in_features=1024, out_features=512, bias=True)
(fc5): Linear(in_features=512, out_features=11, bias=True)
(relu): ReLU()
(softmax): Softmax(dim=1)
(dropout1): Dropout(p=0.4132005099912892, inplace=False)
(dropout2): Dropout(p=0.2066002549956446, inplace=False)
)
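From the printed modules, the forward pass presumably chains the five linear layers with ReLU activations and the two dropout layers, returning raw logits for CrossEntropyLoss (the Softmax module would then only be needed at prediction time). The following is a hypothetical reconstruction of that structure, not spotPython's actual netvbdp.py:

```python
import torch
import torch.nn as nn

class NetVBDPSketch(nn.Module):
    # Hypothetical reconstruction of the printed architecture: five
    # Linear layers with halving widths, ReLU, and two dropout layers
    # (the printed model shows dropout2 with p/2 of dropout1's rate).
    def __init__(self, in_features=6112, l1=4096, n_classes=11, p=0.41):
        super().__init__()
        self.fc1 = nn.Linear(in_features, l1)
        self.fc2 = nn.Linear(l1, l1 // 2)
        self.fc3 = nn.Linear(l1 // 2, l1 // 4)
        self.fc4 = nn.Linear(l1 // 4, l1 // 8)
        self.fc5 = nn.Linear(l1 // 8, n_classes)
        self.relu = nn.ReLU()
        self.dropout1 = nn.Dropout(p)
        self.dropout2 = nn.Dropout(p / 2)

    def forward(self, x):
        x = self.dropout1(self.relu(self.fc1(x)))
        x = self.dropout2(self.relu(self.fc2(x)))
        x = self.relu(self.fc3(x))
        x = self.relu(self.fc4(x))
        return self.fc5(x)  # raw logits for CrossEntropyLoss

# Small dimensions keep the smoke test cheap:
net = NetVBDPSketch(in_features=8, l1=16)
out = net(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 11])
```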
from spotPython.torch.traintest import (
train_tuned,
test_tuned,
)
train_tuned(net=model_spot, train_dataset=train,
loss_function=fun_control["loss_function"],
metric=fun_control["metric_torch"],
shuffle=True,
device = fun_control["device"],
path=None,
task=fun_control["task"],)

Epoch: 1 |
MAPK: 0.1776729971170425 | Loss: 2.3976604848537804 | Acc: 0.0707547169811321.
Epoch: 2 |
MAPK: 0.1658805310726166 | Loss: 2.3973652394312732 | Acc: 0.0660377358490566.
Epoch: 3 |
MAPK: 0.1721698343753815 | Loss: 2.3971810025988884 | Acc: 0.0849056603773585.
Epoch: 4 |
MAPK: 0.1933962255716324 | Loss: 2.3963526172458001 | Acc: 0.1132075471698113.
Epoch: 5 |
MAPK: 0.1878930777311325 | Loss: 2.3954812670653722 | Acc: 0.1132075471698113.
Epoch: 6 |
MAPK: 0.1902516037225723 | Loss: 2.3941759433386460 | Acc: 0.1084905660377359.
Epoch: 7 |
MAPK: 0.1878930926322937 | Loss: 2.3928982806655594 | Acc: 0.1084905660377359.
Epoch: 8 |
MAPK: 0.2012578696012497 | Loss: 2.3909529492540180 | Acc: 0.1179245283018868.
Returned to Spot: Validation loss: 2.390952949254018
If path is set to a filename, e.g., path = "model_spot_trained.pt", the weights of the trained model will be loaded from this file.
test_tuned(net=model_spot, test_dataset=test,
shuffle=False,
loss_function=fun_control["loss_function"],
metric=fun_control["metric_torch"],
device = fun_control["device"],
task=fun_control["task"],)

MAPK: 0.2331460714340210 | Loss: 2.3826623846975603 | Acc: 0.1299435028248588.
Final evaluation: Validation loss: 2.3826623846975603
Final evaluation: Validation metric: 0.233146071434021
----------------------------------------------
(2.3826623846975603, nan, tensor(0.2331))
Cross-validation can be enabled by setting the k_folds attribute of the model, e.g., setattr(model_spot, "k_folds", 10). Here, we use three folds:

from spotPython.torch.traintest import evaluate_cv
# modify k-folds:
setattr(model_spot, "k_folds", 3)
df_eval, df_preds, df_metrics = evaluate_cv(net=model_spot,
dataset=fun_control["data"],
loss_function=fun_control["loss_function"],
metric=fun_control["metric_torch"],
task=fun_control["task"],
writer=fun_control["writer"],
writerId="model_spot_cv",
device = fun_control["device"])

Fold: 1
Epoch: 1 |
MAPK: 0.1645480394363403 | Loss: 2.3976494235507513 | Acc: 0.1059322033898305.
Epoch: 2 |
MAPK: 0.1793785095214844 | Loss: 2.3972296149043713 | Acc: 0.1186440677966102.
Epoch: 3 |
MAPK: 0.2394067347049713 | Loss: 2.3964333251371222 | Acc: 0.1525423728813559.
Epoch: 4 |
MAPK: 0.2648304998874664 | Loss: 2.3949893369513044 | Acc: 0.1694915254237288.
Epoch: 5 |
MAPK: 0.2881355881690979 | Loss: 2.3928930355330644 | Acc: 0.1906779661016949.
Epoch: 6 |
MAPK: 0.3156779110431671 | Loss: 2.3891845072730113 | Acc: 0.2288135593220339.
Epoch: 7 |
MAPK: 0.3340395390987396 | Loss: 2.3839198573160978 | Acc: 0.2415254237288136.
Epoch: 8 |
MAPK: 0.3495762944221497 | Loss: 2.3775227857848344 | Acc: 0.2627118644067797.
Fold: 2
Epoch: 1 |
MAPK: 0.1913841664791107 | Loss: 2.3970539893134166 | Acc: 0.1186440677966102.
Epoch: 2 |
MAPK: 0.2231637984514236 | Loss: 2.3965012562476984 | Acc: 0.1186440677966102.
Epoch: 3 |
MAPK: 0.2499999552965164 | Loss: 2.3951188224857138 | Acc: 0.1186440677966102.
Epoch: 4 |
MAPK: 0.2344632297754288 | Loss: 2.3933223768816156 | Acc: 0.1186440677966102.
Epoch: 5 |
MAPK: 0.2182203084230423 | Loss: 2.3901682970887523 | Acc: 0.1186440677966102.
Epoch: 6 |
MAPK: 0.2083333283662796 | Loss: 2.3876159878100380 | Acc: 0.1186440677966102.
Epoch: 7 |
MAPK: 0.2252824753522873 | Loss: 2.3847315937785778 | Acc: 0.1186440677966102.
Epoch: 8 |
MAPK: 0.2492937594652176 | Loss: 2.3822026192131691 | Acc: 0.1186440677966102.
Fold: 3
Epoch: 1 |
MAPK: 0.1716101765632629 | Loss: 2.3976111533278126 | Acc: 0.0851063829787234.
Epoch: 2 |
MAPK: 0.1977400630712509 | Loss: 2.3969947241120417 | Acc: 0.1148936170212766.
Epoch: 3 |
MAPK: 0.2055084407329559 | Loss: 2.3959967362678656 | Acc: 0.1361702127659574.
Epoch: 4 |
MAPK: 0.1991525292396545 | Loss: 2.3944945173748469 | Acc: 0.1361702127659574.
Epoch: 5 |
MAPK: 0.1963276714086533 | Loss: 2.3925338862305980 | Acc: 0.1276595744680851.
Epoch: 6 |
MAPK: 0.1899717301130295 | Loss: 2.3886827089018743 | Acc: 0.1191489361702128.
Epoch: 7 |
MAPK: 0.1998587250709534 | Loss: 2.3855099698244513 | Acc: 0.1361702127659574.
Epoch: 8 |
MAPK: 0.2019773721694946 | Loss: 2.3827925758846735 | Acc: 0.1404255319148936.
metric_name = type(fun_control["metric_torch"]).__name__
print(f"loss: {df_eval}, Cross-validated {metric_name}: {df_metrics}")loss: 2.3808393269608925, Cross-validated MAPK: 0.2669491469860077
filename = "./figures/" + experiment_name
spot_tuner.plot_important_hyperparameter_contour(filename=filename)

dropout_prob: 0.7261315500302485
batch_size: 100.0

spot_tuner.parallel_plot()

Parallel coordinates plots
# close tensorboard writer
if fun_control["writer"] is not None:
fun_control["writer"].close()PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    # min_z and max_z must be defined before use; None leaves the
    # z-range to the plotting routine
    min_z = None
    max_z = None
    for i in range(n-1):
        for j in range(i+1, n):
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)