In this tutorial, we will show how spotPython can be integrated into the PyTorch training workflow.
This document refers to the following software versions:
python: 3.10.10
torch: 2.0.1
torchvision: 0.15.0
pip list | grep "spot[RiverPython]"
spotPython 0.2.52
spotRiver 0.0.94
spotPython can be installed via pip. Alternatively, the source code can be downloaded from GitHub: https://github.com/sequential-parameter-optimization/spotPython.
!pip install spotPython
Alternatively, the latest development version of spotPython can be installed from GitHub:
# import sys
# !{sys.executable} -m pip install --upgrade build
# !{sys.executable} -m pip install --upgrade --force-reinstall spotPython
Before we consider the detailed experimental setup, we select the parameters that affect the run time, the initial design size, and the device that is used.
The device is specified via DEVICE. On CPU-only machines (and on Macs), "cpu" is preferred; if a GPU is available, use "cuda:0" instead. If DEVICE is set to None, spotPython will automatically select the device, which results in "mps" on Macs; this is not the best choice for simple neural nets.
MAX_TIME = 1
INIT_SIZE = 5
DEVICE = "cpu" # "cuda:0" None
from spotPython.utils.device import getDevice
DEVICE = getDevice(DEVICE)
print(DEVICE)
cpu
import os
import copy
import socket
from datetime import datetime
from dateutil.tz import tzlocal
start_time = datetime.now(tzlocal())
HOSTNAME = socket.gethostname().split(".")[0]
experiment_name = '12-torch' + "_" + HOSTNAME + "_" + str(MAX_TIME) + "min_" + str(INIT_SIZE) + "init_" + str(start_time).split(".", 1)[0].replace(' ', '_')
experiment_name = experiment_name.replace(':', '-')
print(experiment_name)
if not os.path.exists('./figures'):
    os.makedirs('./figures')
12-torch_maans03_1min_5init_2023-07-03_10-48-33
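The experiment name encodes the script id, host name, time budget, initial design size, and a timestamp. The same naming scheme can be reproduced more compactly with strftime; the helper fmt_experiment_name below is our own sketch, not part of spotPython:

```python
from datetime import datetime

def fmt_experiment_name(prefix, host, max_time, init_size, ts):
    # hypothetical helper: rebuilds the naming scheme used above
    return (f"{prefix}_{host}_{max_time}min_{init_size}init_"
            f"{ts.strftime('%Y-%m-%d_%H-%M-%S')}")

print(fmt_experiment_name("12-torch", "maans03", 1, 5,
                          datetime(2023, 7, 3, 10, 48, 33)))
# 12-torch_maans03_1min_5init_2023-07-03_10-48-33
```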
The fun_control Dictionary
spotPython uses a Python dictionary for storing the information required for the hyperparameter tuning process, which was described in Section 14.2.
Note: set tensorboard_path to None if you are working under Windows.
from spotPython.utils.init import fun_control_init
fun_control = fun_control_init(task="classification",
tensorboard_path="runs/12_spot_hpt_torch_cifar10",
                              device=DEVICE)
from torchvision import datasets, transforms
import torchvision
def load_data(data_dir="./data"):
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform)
    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform)
    return trainset, testset
train, test = load_data()
Files already downloaded and verified
Files already downloaded and verified
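The Normalize transform with mean 0.5 and standard deviation 0.5 (per channel) maps pixel values from [0, 1] to [-1, 1]. A pure-Python stand-in illustrates the arithmetic:

```python
def normalize(x, mean=0.5, std=0.5):
    # same arithmetic as torchvision's transforms.Normalize, per channel
    return (x - mean) / std

print(normalize(0.0), normalize(0.5), normalize(1.0))
# -1.0 0.0 1.0
```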
Next, the data are added to the fun_control dictionary:
n_samples = len(train)
# add the dataset to the fun_control
fun_control.update({"data": None, # dataset,
"train": train,
"test": test,
"n_samples": n_samples,
"target_column": None})
After the training and test data are specified and added to the fun_control dictionary, spotPython allows the specification of a data preprocessing pipeline, e.g., for the scaling of the data or for the one-hot encoding of categorical variables, see Section 14.4. This feature is not used here, so we do not change the default value (which is None).
The core_model (algorithm) and core_model_hyper_dict
spotPython includes the Net_CIFAR10 class, which is implemented in the file netcifar10.py. The class is imported here.
This class inherits from the class Net_Core which is implemented in the file netcore.py, see Section 14.5.1.
from spotPython.torch.netcifar10 import Net_CIFAR10
from spotPython.data.torch_hyper_dict import TorchHyperDict
from spotPython.hyperparameters.values import add_core_model_to_fun_control
fun_control = add_core_model_to_fun_control(core_model=Net_CIFAR10,
fun_control=fun_control,
hyper_dict=TorchHyperDict,
                                            filename=None)
The hyper_dict Hyperparameters for the Selected Algorithm
spotPython uses JSON files for the specification of the hyperparameters, which were described in Section 14.5.5.
The corresponding entries for the core_model class are shown below.
fun_control['core_model_hyper_dict']
{'l1': {'type': 'int',
'default': 5,
'transform': 'transform_power_2_int',
'lower': 2,
'upper': 9},
'l2': {'type': 'int',
'default': 5,
'transform': 'transform_power_2_int',
'lower': 2,
'upper': 9},
'lr_mult': {'type': 'float',
'default': 1.0,
'transform': 'None',
'lower': 0.1,
'upper': 10.0},
'batch_size': {'type': 'int',
'default': 4,
'transform': 'transform_power_2_int',
'lower': 1,
'upper': 4},
'epochs': {'type': 'int',
'default': 3,
'transform': 'transform_power_2_int',
'lower': 3,
'upper': 4},
'k_folds': {'type': 'int',
'default': 1,
'transform': 'None',
'lower': 1,
'upper': 1},
'patience': {'type': 'int',
'default': 5,
'transform': 'None',
'lower': 2,
'upper': 10},
'optimizer': {'levels': ['Adadelta',
'Adagrad',
'Adam',
'AdamW',
'SparseAdam',
'Adamax',
'ASGD',
'NAdam',
'RAdam',
'RMSprop',
'Rprop',
'SGD'],
'type': 'factor',
'default': 'SGD',
'transform': 'None',
'class_name': 'torch.optim',
'core_model_parameter_type': 'str',
'lower': 0,
'upper': 12},
'sgd_momentum': {'type': 'float',
'default': 0.0,
'transform': 'None',
'lower': 0.0,
'upper': 1.0}}
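The transform transform_power_2_int maps the tuned integer to a power of two, so the bounds [2, 9] for l1 and l2 correspond to layer widths between 4 and 512. A sketch of the mapping (the actual implementation is part of spotPython):

```python
def transform_power_2_int(x):
    # sketch of spotPython's power-of-two transform for integer hyperparameters
    return 2 ** int(x)

# l1 is tuned on the exponent scale [2, 9], i.e. layer widths 4 .. 512:
print([transform_power_2_int(x) for x in range(2, 10)])
# [4, 8, 16, 32, 64, 128, 256, 512]
```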
Modify hyper_dict Hyperparameters for the Selected Algorithm aka core_model
spotPython provides functions for modifying the hyperparameters, their bounds and factors, as well as for activating and de-activating hyperparameters without re-compilation of the Python source code. These functions were described in Section 14.6.
The hyperparameter k_folds is not used; it is de-activated here by setting the lower and upper bound to the same value.
l1 and l2 as well as epochs and patience are set to small values for demonstration purposes. These values are too small for a real application; larger bounds could be chosen, e.g., via fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 7]), fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[7, 9]), and fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 7]).
from spotPython.hyperparameters.values import modify_hyper_parameter_bounds
fun_control = modify_hyper_parameter_bounds(fun_control, "k_folds", bounds=[0, 0])
fun_control = modify_hyper_parameter_bounds(fun_control, "patience", bounds=[2, 2])
fun_control = modify_hyper_parameter_bounds(fun_control, "epochs", bounds=[2, 3])
fun_control = modify_hyper_parameter_bounds(fun_control, "l1", bounds=[2, 5])
fun_control = modify_hyper_parameter_bounds(fun_control, "l2", bounds=[2, 5])
from spotPython.hyperparameters.values import modify_hyper_parameter_levels
fun_control = modify_hyper_parameter_levels(fun_control, "optimizer", ["Adam", "AdamW", "Adamax", "NAdam"])
Optimizers can be selected as described in Section 19.6.2; they are described in more detail in Section 14.6.1.
fun_control = modify_hyper_parameter_bounds(fun_control,
"lr_mult", bounds=[1e-3, 1e-3])
fun_control = modify_hyper_parameter_bounds(fun_control,
    "sgd_momentum", bounds=[0.9, 0.9])
The evaluation procedure requires the specification of two elements: the way the data are split for evaluation and the loss function. These are described in Section 19.7.1.
The key "loss_function" specifies the loss function which is used during the optimization, see Section 14.7.5.
We will use CrossEntropy loss for the multiclass classification task.
from torch.nn import CrossEntropyLoss
loss_function = CrossEntropyLoss()
fun_control.update({
"loss_function": loss_function,
"shuffle": True,
"eval": "train_hold_out"
})
import torchmetrics
metric_torch = torchmetrics.Accuracy(task="multiclass",
num_classes=10).to(fun_control["device"])
fun_control.update({"metric_torch": metric_torch})
The following code passes the information about the parameter ranges and bounds to spot.
# extract the variable types, names, and bounds
from spotPython.hyperparameters.values import (get_bound_values,
get_var_name,
get_var_type,)
var_type = get_var_type(fun_control)
var_name = get_var_name(fun_control)
fun_control.update({"var_type": var_type,
"var_name": var_name})
lower = get_bound_values(fun_control, "lower")
upper = get_bound_values(fun_control, "upper")
from spotPython.utils.eda import gen_design_table
print(gen_design_table(fun_control))
| name | type | default | lower | upper | transform |
|--------------|--------|-----------|---------|---------|-----------------------|
| l1 | int | 5 | 2 | 5 | transform_power_2_int |
| l2 | int | 5 | 2 | 5 | transform_power_2_int |
| lr_mult | float | 1.0 | 0.001 | 0.001 | None |
| batch_size | int | 4 | 1 | 4 | transform_power_2_int |
| epochs | int | 3 | 2 | 3 | transform_power_2_int |
| k_folds | int | 1 | 0 | 0 | None |
| patience | int | 5 | 2 | 2 | None |
| optimizer | factor | SGD | 0 | 3 | None |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | None |
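The factor optimizer is encoded numerically for the surrogate: after the levels were restricted above, the bounds 0 and 3 index the four remaining optimizers. A sketch of the decoding (our own illustration, not spotPython's internal code):

```python
levels = ["Adam", "AdamW", "Adamax", "NAdam"]

def decode_factor(x, levels):
    # hypothetical decoder: round the continuous surrogate value
    # and index the list of factor levels
    return levels[int(round(x))]

print(decode_factor(0.0, levels), decode_factor(3.0, levels))
# Adam NAdam
```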
The Objective Function fun_torch
The objective function fun_torch is selected next. It implements an interface from PyTorch’s training, validation, and testing methods to spotPython.
from spotPython.fun.hypertorch import HyperTorch
fun = HyperTorch().fun_torch
import numpy as np
from spotPython.spot import spot
from math import inf
spot_tuner = spot.Spot(fun=fun,
lower = lower,
upper = upper,
fun_evals = inf,
fun_repeats = 1,
max_time = MAX_TIME,
noise = False,
tolerance_x = np.sqrt(np.spacing(1)),
var_type = var_type,
var_name = var_name,
infill_criterion = "y",
n_points = 1,
seed=123,
log_level = 50,
show_models= False,
show_progress= True,
fun_control = fun_control,
design_control={"init_size": INIT_SIZE,
"repeats": 1},
surrogate_control={"noise": True,
"cod_type": "norm",
"min_theta": -4,
"max_theta": 3,
"n_theta": len(var_name),
"model_fun_evals": 10_000,
"log_level": 50
})
# X_start was not defined above; we assume the tuner is started from the
# default hyperparameters (get_default_hyperparameters_as_array is a
# spotPython helper for this purpose):
from spotPython.hyperparameters.values import get_default_hyperparameters_as_array
X_start = get_default_hyperparameters_as_array(fun_control)
spot_tuner.run(X_start=X_start)
config: {'l1': 16, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 16, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3229682102203371 | Acc: 0.1004000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3213387151718141 | Acc: 0.1004000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3194895351409914 | Acc: 0.1004000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3174200672149659 | Acc: 0.1004000000000000.
Epoch: 5 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3150155567169191 | Acc: 0.1004000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3125426223754881 | Acc: 0.1004000000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1004000008106232 | Loss: 2.3103920515060423 | Acc: 0.1004000000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1004500016570091 | Loss: 2.3083329513549806 | Acc: 0.1004500000000000.
Returned to Spot: Validation loss: 2.3083329513549806
config: {'l1': 8, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adamax', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.0987500026822090 | Loss: 2.3179213603019715 | Acc: 0.0987500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1002999991178513 | Loss: 2.3168445756912233 | Acc: 0.1003000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1064499989151955 | Loss: 2.3159697061538695 | Acc: 0.1064500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1145000010728836 | Loss: 2.3151374659538271 | Acc: 0.1145000000000000.
Returned to Spot: Validation loss: 2.315137465953827
config: {'l1': 32, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 2, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1028499975800514 | Loss: 2.3065456705808640 | Acc: 0.1028500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1173999980092049 | Loss: 2.2966628431320188 | Acc: 0.1174000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1528999954462051 | Loss: 2.2584952401041987 | Acc: 0.1529000000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1580500006675720 | Loss: 2.2108294310688974 | Acc: 0.1580500000000000.
Epoch: 5 |
MulticlassAccuracy: 0.1589999943971634 | Loss: 2.1711846857666970 | Acc: 0.1590000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1640499979257584 | Loss: 2.1371860899329187 | Acc: 0.1640500000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1751500070095062 | Loss: 2.1060472553730012 | Acc: 0.1751500000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1858499944210052 | Loss: 2.0774075149416924 | Acc: 0.1858500000000000.
Returned to Spot: Validation loss: 2.0774075149416924
config: {'l1': 4, 'l2': 8, 'lr_mult': 0.001, 'batch_size': 4, 'epochs': 4, 'k_folds': 0, 'patience': 2, 'optimizer': 'AdamW', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1001499965786934 | Loss: 2.3315476094722749 | Acc: 0.1001500000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1001499965786934 | Loss: 2.3303059405803679 | Acc: 0.1001500000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1001499965786934 | Loss: 2.3289187388181687 | Acc: 0.1001500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1001000031828880 | Loss: 2.3274010416507722 | Acc: 0.1001000000000000.
Returned to Spot: Validation loss: 2.327401041650772
config: {'l1': 16, 'l2': 32, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'Adam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.0927999988198280 | Loss: 2.3063064372062683 | Acc: 0.0928000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.0933500006794930 | Loss: 2.3060600970268248 | Acc: 0.0933500000000000.
Epoch: 3 |
MulticlassAccuracy: 0.0934500023722649 | Loss: 2.3057814753532409 | Acc: 0.0934500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.0935499966144562 | Loss: 2.3054392121315002 | Acc: 0.0935500000000000.
Epoch: 5 |
MulticlassAccuracy: 0.0942000001668930 | Loss: 2.3050192249298096 | Acc: 0.0942000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.0950499996542931 | Loss: 2.3045036982536318 | Acc: 0.0950500000000000.
Epoch: 7 |
MulticlassAccuracy: 0.0962499976158142 | Loss: 2.3038450118064882 | Acc: 0.0962500000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1012500002980232 | Loss: 2.3029967868804930 | Acc: 0.1012500000000000.
Returned to Spot: Validation loss: 2.302996786880493
config: {'l1': 8, 'l2': 16, 'lr_mult': 0.001, 'batch_size': 8, 'epochs': 8, 'k_folds': 0, 'patience': 2, 'optimizer': 'NAdam', 'sgd_momentum': 0.9}
Epoch: 1 |
MulticlassAccuracy: 0.1009000018239021 | Loss: 2.3312099172592164 | Acc: 0.1009000000000000.
Epoch: 2 |
MulticlassAccuracy: 0.1009000018239021 | Loss: 2.3279800788879395 | Acc: 0.1009000000000000.
Epoch: 3 |
MulticlassAccuracy: 0.1109500005841255 | Loss: 2.3245738817214967 | Acc: 0.1109500000000000.
Epoch: 4 |
MulticlassAccuracy: 0.1382499933242798 | Loss: 2.3212142275810241 | Acc: 0.1382500000000000.
Epoch: 5 |
MulticlassAccuracy: 0.1424999982118607 | Loss: 2.3181941164970397 | Acc: 0.1425000000000000.
Epoch: 6 |
MulticlassAccuracy: 0.1428000032901764 | Loss: 2.3150831417083739 | Acc: 0.1428000000000000.
Epoch: 7 |
MulticlassAccuracy: 0.1416500061750412 | Loss: 2.3113945014953612 | Acc: 0.1416500000000000.
Epoch: 8 |
MulticlassAccuracy: 0.1405500024557114 | Loss: 2.3069930368423464 | Acc: 0.1405500000000000.
Returned to Spot: Validation loss: 2.3069930368423464
spotPython tuning: 2.0774075149416924 [##########] 100.00% Done...
<spotPython.spot.spot.Spot at 0x1071c3b50>
The textual output shown in the console (or code cell) can be visualized with Tensorboard as described in Section 14.9, see also the description in the documentation: Tensorboard.
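The logs written to the tensorboard_path specified above can be inspected by starting Tensorboard from the command line (assuming Tensorboard is installed) and opening http://localhost:6006 in a browser:

```shell
tensorboard --logdir=runs/12_spot_hpt_torch_cifar10
```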
After the hyperparameter tuning run is finished, the results can be analyzed as described in Section 14.10.
import pickle
SAVE = False
LOAD = False
if SAVE:
    result_file_name = "res_" + experiment_name + ".pkl"
    with open(result_file_name, 'wb') as f:
        pickle.dump(spot_tuner, f)
if LOAD:
    result_file_name = "ADD THE NAME here, e.g.: res_ch10-friedman-hpt-0_maans03_60min_20init_1K_2023-04-14_10-11-19.pkl"
    with open(result_file_name, 'rb') as f:
        spot_tuner = pickle.load(f)
After the hyperparameter tuning run is finished, the progress of the hyperparameter tuning can be visualized. The following code generates the progress plot.
spot_tuner.plot_progress(log_y=False,
filename="./figures/" + experiment_name+"_progress.png")
print(gen_design_table(fun_control=fun_control,
                       spot=spot_tuner))
| name | type | default | lower | upper | tuned | transform | importance | stars |
|--------------|--------|-----------|---------|---------|---------|-----------------------|--------------|---------|
| l1 | int | 5 | 2.0 | 5.0 | 5.0 | transform_power_2_int | 0.00 | |
| l2 | int | 5 | 2.0 | 5.0 | 4.0 | transform_power_2_int | 0.00 | |
| lr_mult | float | 1.0 | 0.001 | 0.001 | 0.001 | None | 0.00 | |
| batch_size | int | 4 | 1.0 | 4.0 | 1.0 | transform_power_2_int | 100.00 | *** |
| epochs | int | 3 | 2.0 | 3.0 | 3.0 | transform_power_2_int | 0.00 | |
| k_folds | int | 1 | 0.0 | 0.0 | 0.0 | None | 0.00 | |
| patience | int | 5 | 2.0 | 2.0 | 2.0 | None | 0.00 | |
| optimizer | factor | SGD | 0.0 | 3.0 | 3.0 | None | 0.00 | |
| sgd_momentum | float | 0.0 | 0.9 | 0.9 | 0.9 | None | 0.00 | |
spot_tuner.plot_importance(threshold=0.025, filename="./figures/" + experiment_name+"_importance.png")
The architecture of the spotPython model can be obtained by the following code:
from spotPython.hyperparameters.values import get_one_core_model_from_X
X = spot_tuner.to_all_dim(spot_tuner.min_X.reshape(1,-1))
model_spot = get_one_core_model_from_X(X, fun_control)
model_spot
Net_CIFAR10(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=32, bias=True)
(fc2): Linear(in_features=32, out_features=16, bias=True)
(fc3): Linear(in_features=16, out_features=10, bias=True)
)
from spotPython.torch.traintest import (
    train_tuned,
    test_tuned,
)
train_tuned(net=model_spot, train_dataset=train,
            loss_function=fun_control["loss_function"],
            metric=fun_control["metric_torch"],
            shuffle=True,
            device=fun_control["device"],
            path=None,
            task=fun_control["task"],)
Epoch: 1 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.310
MulticlassAccuracy: 0.1036000028252602 | Loss: 2.3006046615839004 | Acc: 0.1036000000000000.
Epoch: 2 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.292
MulticlassAccuracy: 0.1433500051498413 | Loss: 2.2680888182759285 | Acc: 0.1433500000000000.
Epoch: 3 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.257
MulticlassAccuracy: 0.1770499944686890 | Loss: 2.2302701745629312 | Acc: 0.1770500000000000.
Epoch: 4 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.224
MulticlassAccuracy: 0.1736499965190887 | Loss: 2.1976760300278664 | Acc: 0.1736500000000000.
Epoch: 5 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.195
MulticlassAccuracy: 0.1816000044345856 | Loss: 2.1689404532074930 | Acc: 0.1816000000000000.
Epoch: 6 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.164
MulticlassAccuracy: 0.1942500025033951 | Loss: 2.1427407929718494 | Acc: 0.1942500000000000.
Epoch: 7 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.145
MulticlassAccuracy: 0.2108999937772751 | Loss: 2.1194734202206136 | Acc: 0.2109000000000000.
Epoch: 8 |
Batch: 10000. Batch Size: 2. Training Loss (running): 2.117
MulticlassAccuracy: 0.2226999998092651 | Loss: 2.0977164102852344 | Acc: 0.2227000000000000.
Returned to Spot: Validation loss: 2.0977164102852344
If path is set to a filename, e.g., path = "model_spot_trained.pt", the weights of the trained model will be loaded from this file.
test_tuned(net=model_spot, test_dataset=test,
           shuffle=False,
           loss_function=fun_control["loss_function"],
           metric=fun_control["metric_torch"],
           device=fun_control["device"],
           task=fun_control["task"],)
MulticlassAccuracy: 0.2190999984741211 | Loss: 2.0944503646016122 | Acc: 0.2191000000000000.
Final evaluation: Validation loss: 2.094450364601612
Final evaluation: Validation metric: 0.2190999984741211
----------------------------------------------
(2.094450364601612, nan, tensor(0.2191))
Cross validation of the tuned model can be enabled by setting the k_folds attribute of the model, e.g., setattr(model_spot, "k_folds", 10).
from spotPython.torch.traintest import evaluate_cv
# modify k-folds:
setattr(model_spot, "k_folds", 3)
df_eval, df_preds, df_metrics = evaluate_cv(net=model_spot,
    dataset=fun_control["data"],
    loss_function=fun_control["loss_function"],
    metric=fun_control["metric_torch"],
    task=fun_control["task"],
    writer=fun_control["writer"],
    writerId="model_spot_cv",
    device=fun_control["device"])
Error in Net_Core. Call to evaluate_cv() failed. err=TypeError("Expected sequence or array-like, got <class 'NoneType'>"), type(err)=<class 'TypeError'>
The call fails because fun_control["data"] is None: only the separate train and test sets were added to fun_control above.
metric_name = type(fun_control["metric_torch"]).__name__
print(f"loss: {df_eval}, Cross-validated {metric_name}: {df_metrics}")
loss: nan, Cross-validated MulticlassAccuracy: nan
filename = "./figures/" + experiment_name
spot_tuner.plot_important_hyperparameter_contour(filename=filename)
batch_size: 99.99999999999999
spot_tuner.parallel_plot()
Parallel coordinates plots
PLOT_ALL = False
if PLOT_ALL:
    n = spot_tuner.k
    # min_z and max_z were not defined in the original; derive them
    # from the observed objective values
    min_z = min(spot_tuner.y)
    max_z = max(spot_tuner.y)
    for i in range(n-1):
        for j in range(i+1, n):
            spot_tuner.plot_contour(i=i, j=j, min_z=min_z, max_z=max_z)