Labels: refactoring (Refactoring of an existing functionality.)
Description
Status Quo
A benchmark instance or experiment instance receives various parameters as input. Currently, the input is given by defining parameters in a `dataset_list` and in a `model_classes_and_params` list.
Example:
```python
dataset_list = [
    Dataset(df=peyton_manning_df, name="peyton_manning", freq="D"),
]
model_classes_and_params = [
    (SeasonalNaiveModel, {"n_forecasts": 4, "K": 5}),
]
```
Subsequently, they are assigned to the attributes of different classes and/or separated into further categories. Currently, this happens partly in the `__post_init__()` of the `Experiment(ABC)` class and in the `__post_init__()` of the `Model(ABC)` class. So far, we separate the params into `_data_params` and `model_params`.
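As a rough illustration of the status quo, the per-class splitting might look something like the sketch below. The classes are stripped down and the key sets are hypothetical placeholders, not the project's actual code:

```python
from dataclasses import dataclass, field

# Assumed set of data-related keys, purely for illustration.
DATA_KEYS = {"freq"}

@dataclass
class Model:
    params: dict
    model_params: dict = field(init=False)

    def __post_init__(self):
        # Model keeps everything that is not data-related.
        self.model_params = {
            k: v for k, v in self.params.items() if k not in DATA_KEYS
        }

@dataclass
class Experiment:
    params: dict
    _data_params: dict = field(init=False)

    def __post_init__(self):
        # Experiment pulls out the data-related part.
        self._data_params = {
            k: v for k, v in self.params.items() if k in DATA_KEYS
        }
```

The point of the sketch is that the splitting logic is spread across two `__post_init__()` methods, which is exactly what the proposal below wants to consolidate.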
Problem
- It would be great to have one dedicated place to post-process, split, and assign the input params. In my view, this should live at the experiment level.
- It's great to distinguish between `_data_params` and `model_params`. Further, I think we should introduce `pred_params`, which stores all prediction-task-related information.
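A minimal sketch of what a dedicated experiment-level split into the three categories could look like. All key names here are assumptions for illustration, not the project's actual parameter names:

```python
# Hypothetical key sets; the real categorization would come from the
# experiment/model definitions.
DATA_KEYS = {"freq"}          # data-related params
PRED_KEYS = {"n_forecasts"}   # prediction-task-related params

def split_params(params: dict) -> tuple[dict, dict, dict]:
    """Split raw input params into data, prediction, and model params."""
    data_params = {k: v for k, v in params.items() if k in DATA_KEYS}
    pred_params = {k: v for k, v in params.items() if k in PRED_KEYS}
    # Everything else is treated as a model param.
    model_params = {
        k: v for k, v in params.items()
        if k not in DATA_KEYS and k not in PRED_KEYS
    }
    return data_params, pred_params, model_params
```

With one function like this on the experiment level, the `__post_init__()` methods would only receive already-categorized dicts instead of each re-deriving the split.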