
Commit b7ca343

benjamc, thibautklenke, and LukasFehring authored

v1.0.1 (#187)
* Fix HPOBench (#158): fix tabular benchmark not working; update hpobench install; add script to fix nas container error; update CHANGELOG.md.
* Optuna: add new MO config (#160): rename config; add Optuna config; simplify init of optuna study; small fixes.
* Update optimizer configs (#161): feat(skopt): more variants; add more kwargs for DEHB; add paper source; add task info for dummy problem; update HEBO config; rework Skopt config; use base config for synetune; restrict ConfigSpace for now (yahpo and smac struggle); clean SMAC configs; remove smac1.4 configs (clash with ConfigSpace, would only work with containers); update optuna configs; update and clean Nevergrad configs; update CHANGELOG.md.
* More Pymoo Problems (#162): generate and update pymoo problems; update CHANGELOG.md.
* Update HEBO & BBOB-Vizier (#163): feat(benchmark): add bbob 20d 100 vizier; fix(hebo): sobol scrambling seed default; refactor(hebo): use global seed for sobol, update config to pass global seed to scramble; feat(notebook): add comparison of SMAC, HEBO, RS on BBOB20d; update CHANGELOG.md.
* feat(Ax): Add Ax optimizer (SO + MO) (#166): type refinements; fixed OrdinalHyperparameter; added note regarding random seed; update CHANGELOG.md. Co-authored-by: thibautklenke, benjamc.
* fix: SMAC callbacks (#167); update CHANGELOG.md.
* fix(ax): conversion of ordinal HPs; cat HP; allow_inactive_with_values=True; typos. fix(smac20): remove unrecognized arg from TrialValue.
* Research/subset (#164): add and update subselection notebooks; feat: validate ranking; compare subselections; also save run cfgs; fix timestamp; goodbye bash scripts and old files; build(pyproject): update ruff setting; feat(subselect): finally as python; refactor(subselection); fix(problem): generate pymoo config; local parallel; refactor(create_subset_config): del dir, better error msg, new cmd; feat: new subsets for BB + MO; run more pymoo problems; rename problem; refactor(generate_problems): MO; refactor(benchmark_footprint, inspect_problems): update notebooks; build(yahpo_install): update; revert subselection progress (moved to branch feat/subselection); fix(hebo): ordinal hyperparameter + pre-commit; refactor(yahpo): update ConfigSpace API; add scenario and subset_id to subselection configs; remove duplicates in config; refactor(gather_data): more config keys; feat(gather_data): calculate log performance; feat(report); fix(autorank): api; feat(color_palette): more and nicer colors; feat(generate_report): silent plotting; fix(yahpo): ConfigSpace deprecation warning; docs(generate_report): more info; fix(run_autorank): if nothing is lost; fix(utils): filter by final performance (if there is a small max val, correctly limit); feat(gather_data): collect from several folders; refactor(generate_report): goodbye plots, fix ranks and norm; style passes (file_logger, pareto_front, overriderfinde, loggingutils, index_configs); refactor(generate_report): fun arg default; fix(run_autorank): log msg pos; refactor/fix(nevergrad): pass trial seed to trial info; remove DummyOptimizer because Random Search behaves the same; refactor(task): set n_objectives to 1 per default; refactor(synetune): set max budget (fidelity) as fixed hp in synetune; refactor(install_yahpo): control root of carps; fix(task): dataclass immutable arg; bbob: update ioh requirements; please mypy, ruff, and pandas; update CHANGELOG.md.
* 170 update nomenclature (#175): refactor(objective_functions): rename folder and file; refactor: Problem -> ObjectiveFunction; problem_id -> task_id; carps.benchmarks -> carps.objective_functions; budget -> fidelity; scenario -> task_type; refactor(BREAKING): new task definition; make task instantiable; generate tasks, new configs; refactor(optimizer): add metadata about mo/mf; tests(optimizers): add; fix install scripts; refactor(dummy_problem): handle ListConfig; update dependencies; refactor(Makefile): more commands; fix(smacconfig); format(notebooks): ruff; refactor(hebo): deactivate due to install issues; fix(hpob): path to hpob surrogate files; fix(hpobench_config); fix(pymoo_config); refactor(yahpo_tasks): regenerate; fix(hpobench): installation (container building), configs; refactor(hpobench_configs): tab -> tabular; refactor(subselection): update config files to new tasks; build: update requirements; update carps version; update CHANGELOG.md.
* Refactor/installation (#176): build/refactor(installation): use make commands from anywhere; update README.md; refactor(install): benchmarks with data; refactor: validate ranking; refactor(yahpo repo); refactor(env_vars): define carps root; refactor(Hpobenchconfigs): fix objectives; reactivate SMAC check; build(Makefile): install swig for smac via pip; fix(yahpo_configs): convert config space from hydra/omegaconf to native python objects; refactor(subset_configs); build(nevergrad): upgrade cma to use np2.0; style(hpobench): add arg to docstring; fix(conditional_search_space): deactivate wrongly active HPs before passing to objective function evaluate; fix(check_missing): status check; refactor(database): more ids; fix(create_cluster_configs): update task info; docs(installation): update; refactor/feat: run from database; move experimenter files and database utils to own folders; container folder -> experimenter folder; feat(experimenter): show stats and export error configs; refactor(assets): shorten filename.
* feat: Scrape results to database (#178): fix(gather_data): dict accesses, lists, task_id; fix(create_cluster_configs): dict accesses; feat(scrape_to_db): scrape file-logged results to database; update README.md; sleeker type handling and happy pre-commit.
* Quickdev (#179): refactor: subselection notebooks; refactor(run_autorank): directly pass df crit; build(pyproject): remove all extras; fix(legacy_task_ids); feat: create database experiments via slurm; refactor/feat: update subselection, add python methods; format(shift_v2nobrute.c); feat(subselect): make usable; fix(yahpo): generation of configs, set time budget to none; refactor(create_cluster_configs): calculate config hash differently; feat(create_cluster_configs_subselection); fix(file_logger): logging msg; fix(yahpo_task_gen): output space; feat(show_stats): split errors into yahpo and non-yahpo, save error message; split error msgs into known and unknown; fix/build: hpobench benchmark install; refactor(database_logger): also log n function calls for incumbents; feat: download and process results from database; fix(nevergrad): optimizer config (one param needs to be float and the conversion from yaml to json str back to DictConfig messes this up); config out of bounds (just a tiny bit); refactor(Makefile): for using uv, make sure pip is installed and accessible; fix(synetune): passing metrics; fix(Ax): always convert cost to float; hopefully numeric mean bounds; feat(reset_experiments): reset only yahpo attr NoneType errors; add more reset options; correctly delete falsely done experiments; refactor(show_stats): filter more errors; refactor(create_cluster_configs): separate hydra from functionality to make it importable from other scripts; rename variables; refactor(gather_data): handle logs from database; docs(process_logs): add runcommand; refactor(generate_report): small plotting adjustments; fix(run_autorank): class args; fix(yahpo): copy onnx model to fix yahpo NoneType error; refactor(check_missing, gather_data): adapt to new structure; refactor(pyproject.toml): add pyexperimenter requirement; refactor(scrape_results_to_database); refactor(Makefile): upgrade numpy; feat(container): add recipe for virtual env; refactor(yahpo install, requirements); refactor(README): adjust paths; refactor(download_results): add more infos; docs(README): add info about container.
* Quickdev (#180): re-merge with the same change list as Quickdev (#179).
* Quickdev (#181): re-merge with the same change list as Quickdev (#179).
* fix(unique task ids); fix(dummy_problem): passing of configuration space; fix(synetune): passing metric for SyncMOBSTER; fix/build(mfpbench): new xgboost for pd1; fix(test_tasks): query fidelity.
* fix(pyproject): ONLY COPY CARPS FILES 😭👽; fix(pyproject.toml): email address; fix/build: properly include files in package (not too many); update README.md.
* docs(reset_experiments): add convenience cmds in docstring; refactor(show_stats): print number of known errors.
* Fix/make (#185): fix(Makefile): Makefile did not get copied into the package; move container_recipes; style(prepare_nas_benchmarks): add docstring; rename container recipe paths; fix install paths; fix(install_yahpo): pip install; build(pyproject.toml): include recipe data; update README.md; update CHANGELOG.md.
* Update docs (#186): update CHANGELOG.md.
* Update carps version 1.0.0 -> 1.0.1.

Co-authored-by: thibautklenke <thibaut.klenke@stud.uni-hannover.de>
Co-authored-by: benjamc <c.benjamins@instadeep.com>
Co-authored-by: thibautklenke <154522686+thibautklenke@users.noreply.github.com>
Co-authored-by: Lukas Fehring <lukasfehring@gmail.com>
1 parent 8e45d2d commit b7ca343

File tree

73 files changed (+333, −1655 lines)


CHANGELOG.md

Lines changed: 5 additions & 0 deletions
```diff
@@ -1,4 +1,9 @@
+# v1.0.1
+- Fix installation via pypi (#185).
+- Update docs (#186).
+
 # v1.0.0
+⚠ Breaking Changes
 Redefined task as an objective function together with an input and output space. Updated configs. Renamed problem to
 objective function and scenario to task type.
```

CITATION.cff

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ url: "https://automl.github.io/CARP-S/main/"
 
 repository-code: "https://github.com/automl/CARP-S"
 
-version: "1.0.0"
+version: "1.0.1"
 
 type: "template"
 keywords:
```

CONTRIBUTING.md

Lines changed: 4 additions & 6 deletions
```diff
@@ -81,19 +81,17 @@ git push origin name-of-your-bugfix-or-feature
 Submit a [pull request](https://github.com/automl/CARP-S/pulls) through the GitHub website!
 
 ## Local Development
-
-### Virtual Environments
-You can try to install all dependencies into one big environment, but probably there are package clashes.
-Therefore, you can build one virtual environment for each optimizer-benchmark combination.
-Either run `scripts/build_envs.sh` to build all existing combinations or copy the combination and run as needed. It will create an environment with name `automlsuite_${OPTIMIZER_CONTAINER_ID}_${BENCHMARK_ID}`.
+To promote compatibility we encourage to enable `numpy>2.0.0` and `ConfigSpace>1.0.0`, and that in general modern
+python versions are supported.
+If there are still package clashes, you can create a virtual env per benchmark and optimizer.
 
 ## Pull Request Guidelines
 Before you submit a pull request, check that it meets these guidelines:
 
 1. The pull request should include tests.
 2. If the pull request adds functionality, the docs should be updated.
 Put your new functionality into a function with a docstring, and add the feature to the list in `README.md`.
-3. The pull request should work for `Python 3.9` and
+3. The pull request should work for `Python 3.9` (ideally newer versions) and
 make sure that the tests pass for all supported Python versions.
 
 ## Testing
```
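The compatibility floors mentioned in the diff above (`numpy>2.0.0`, `ConfigSpace>1.0.0`) can be sanity-checked programmatically; here is a minimal stdlib-only sketch (the helper names are ours, not part of carps, and it ignores pre-release version segments):

```python
def _parse(version: str) -> tuple[int, ...]:
    """Parse a plain 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def meets_floor(installed: str, floor: str) -> bool:
    """True if `installed` satisfies a strict '>' floor, e.g. numpy>2.0.0."""
    return _parse(installed) > _parse(floor)


print(meets_floor("2.1.0", "2.0.0"))   # numpy 2.1.0 satisfies numpy>2.0.0 -> True
print(meets_floor("1.26.4", "2.0.0"))  # the py3.10 numpy pin does not -> False
```

Tuple comparison gives the usual lexicographic version ordering for plain releases, which is all these floors need.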

MANIFEST.in

Lines changed: 5 additions & 2 deletions
```diff
@@ -4,6 +4,7 @@ include CONTRIBUTING.md
 include LICENSE
 include README.md
 include pyproject.toml
+include carps/build/Makefile
 
 graft carps
 graft docs
@@ -12,14 +13,16 @@ graft tests
 # recursive-include tests *
 recursive-exclude * __pycache__
 recursive-exclude * *.py[co]
-recursive-exclude container_recipes *
 recursive-exclude examples *
 recursive-exclude notebooks *
 recursive-exclude scripts *
 recursive-exclude subselection *
 recursive-exclude lib *
 recursive-exclude env* *
+recursive-exclude carps.egg* *
 recursive-exclude carps/benchmark_data *
+recursive-exclude carps/task_data *
 
-recursive-include docs *.rst Makefile make.bat *.jpg *.png *.gif
+recursive-include docs *.rst make.bat *.jpg *.png *.gif
 recursive-include carps/configs *
+recursive-include carps/container/recipes *
```
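The intent of these directives can be illustrated with a small emulation. This is a deliberate simplification (real setuptools MANIFEST.in processing applies `graft`/`include`/`exclude` commands in order, with glob patterns); `shipped` and the path lists are our own illustrative names:

```python
from pathlib import PurePosixPath

# Directory-level view of the MANIFEST.in above: heavy data dirs are
# excluded from the sdist, while configs and container recipes ship.
EXCLUDED = ["carps/benchmark_data", "carps/task_data"]
INCLUDED = ["carps/configs", "carps/container/recipes"]


def shipped(path: str) -> bool:
    """Rough check: would this data file end up in the package?"""
    p = PurePosixPath(path)
    if any(p.is_relative_to(d) for d in EXCLUDED):
        return False
    return any(p.is_relative_to(d) for d in INCLUDED)
```

Under this model, `shipped("carps/container/recipes/benchmarks/BBOB/BBOB.recipe")` is true while anything under `carps/task_data` stays out, which is exactly what the "properly include files in package (not too many)" commit aims for.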

README.md

Lines changed: 17 additions & 103 deletions
````diff
@@ -20,8 +20,6 @@ For more details on CARP-S, please have a look at the
 
 ### Installation from PyPI
 
-⚠️ The installation of the optimizers and benchmarks/tasks currently does not work via pip due to packaging issues of the install scripts. Until this is fixed, please install `carps` from source (see below).
-
 To install CARP-S, you can simply use `pip`:
 
 1. Create virtual env with conda or uv
@@ -76,25 +74,15 @@ $PIP install .
 $PIP install -e .
 ```
 
-If you want to install CARP-S for development, you can use the following command:
+If you want to install CARP-S for development, you can use the following command (from the root of the repo):
 ```bash
-make install-dev
+$PIP install -e .
+python -m carps.build.make install-dev
 ```
 #### Apptainer
-You can also create a container with the env setup by running `apptainer build container/env.sif container/env.def`.
-Then you can execute any carps commands as usual by add this prefix `apptainer exec container/env.sif` before the
-command, e.g. `apptainer exec container/env.sif python -m carps.run +task/... +optimizer/...`.
-There is also an sbatch script to run experiments from the database using the apptainer on a slurm cluster
-(`sbatch scripts/container_run_from_db.sh`). You might need to adapt the array size and the number of repetitions
-according to the number of experiments you can run.
-
-PS.: On some clusters you might need to load the module apptainer like so `module load tools Apptainer`.
-Troubleshooting: If you have problems writing your cache directory, mount-bind it like so
-`apptainer shell --bind $XDG_CACHE_HOME container/env.sif`. This binds the directory `$XDG_CACHE_HOME` in the
-container to the directory `$XDG_CACHE_HOME` on the host.
-If you have problems with `/var/lib/hpobench`, this bind might help:
-`<hpobench data dir>:/var/lib/hpobench/data`. `<hpobench data dir>` can be found in
-[`.hpobenchrc`](https://github.com/automl/HPOBench/?tab=readme-ov-file#configure-hpobench).
+⚠ This is still experimental.
+You can also use a container as an env, see
+[this guide](https://automl.github.io/CARP-S/latest/installation/#apptainer).
 
 #### A note on python versions
 For python3.12, numpy should be `numpy>=2.0.0`. For python3.10, numpy must be `numpy==1.26.4`, you can simply
@@ -106,13 +94,14 @@ Additionally, you need to install the requirements for the benchmark and optimiz
 
 ⚠ You can specify the directory of the task data by `export CARPS_TASK_DATA_DIR=...`. Please use absolute dirnames.
 The default location is `<carps package location>/task_data`. If you specify a custom dir, always export the env var.
-
+(The carps package location is the root of the package, not of the repo.)
 
 For example, if you want to use the `SMAC3` optimizer and the `BBOB` benchmark, you need to install the
 requirements for both of them via:
 
 ```bash
-# Install options for optimizers and benchmarks (these are Makefile commands, check the Makefile for more commands)
+# Install options for optimizers and benchmarks (these are Makefile commands, check the Makefile at carps/build for
+# more commands)
 # The commands should be separated by a whitespace
 python -m carps.build.make benchmark_bbob optimizer_smac
 ```
@@ -132,7 +121,7 @@ optimizer_smac optimizer_dehb optimizer_nevergrad optimizer_optuna optimizer_ax
 All of the above except `optimizer_hebo` work with python3.12.
 
 You can also install all benchmarks in one go with `benchmarks` and all optimizers with `optimizers`.
-Check the `Makefile` in carps for more details.
+Check the `carps/build/Makefile` in carps for more details.
 
 
 ## Minimal Example
@@ -153,6 +142,7 @@ should be run for all available BBOB tasks (`+task/BBOB=glob(*)`) and for 10 dif
 seed values (seed=range(1,11)).
 
 ## Commands
+For a complete list see the [docs](https://automl.github.io/CARP-S/latest/commands/).
 
 You can run a certain task and optimizer combination directly with Hydra via:
 ```bash
@@ -165,97 +155,21 @@ a file `runcommands_missing.sh` containing the missing runs:
 python -m carps.utils.check_missing <rundir>
 ```
 
-To collect all run data generated by the file logger into csv files, use the following command:
+To collect all run data generated by the file logger into parquet files, use the following command:
 ```bash
 python -m carps.analysis.gather_data <rundir>
 ```
-The csv files are then located in `<rundir>`. `logs.csv` contain the trial info and values and
-`logs_cfg.csv` contain the experiment configuration.
+The parquet files are then located in `<rundir>`. `logs.parquet` contain the trial info and values and
+`logs_cfg.parquet` contain the experiment configuration.
 The experiments can be matched via the column `experiment_id`.
 
 ## CARPS and MySQL Database
 Per default, `carps` logs to files. This has its caveats: Checking experiment status is a bit more cumbersome (but
 possible with `python -m carps.utils.check_missing <rundir>` to check for missing/failed experiments) and reading from
 the filesystem takes a long time. For this reason, we can also control and log experiments to a MySQL database with
-`PyExperimenter`.
-
-### Requirements and Configuration
-Requirement: MySQL database is set up.
-
-1. Add a `credentials.yaml` file in `carps/experimenter` with the following content:
-```yaml
-CREDENTIALS:
-  Database:
-    user: someuser
-    password: amazing_password
-  Connection:
-    Standard:
-      server: mysql_server
-      port: 3306 (most likely)
-```
-2. Edit `carps/experimenter/py_experimenter.yaml` by setting:
-```yaml
-PY_EXPERIMENTER:
-  n_jobs: 1
-
-  Database:
-    use_ssh_tunnel: false
-    provider: mysql
-    database: your_database_name
-    ...
-```
-!!! Note: If you use an ssh tunnel, set `use_ssh_tunnel` to `true` in `carps/experimenter/py_experimenter.yaml`.
-Set up `carps/experimenter/credentials.yaml` like this:
-```yaml
-CREDENTIALS:
-  Database:
-    user: someuser
-    password: amazing_password
-  Connection:
-    Standard:
-      server: mysql_server
-      port: 3306 (most likely)
-    Ssh:
-      server: 127.0.0.1
-      address: some_host # hostname as specified in ~/.ssh/config
-      # ssh_private_key_password: null
-      # server: example.sshmysqlserver.com (address from ssh server)
-      # address: example.sslserver.com
-      # port: optional_ssh_port
-      # remote_address: optional_mysql_server_address
-      # remote_port: optional_mysql_server_port
-      # local_address: optional_local_address
-      # local_port: optional_local_port
-      # passphrase: optional_ssh_passphrase
-```
-### Create Experiments
-First, in order for PyExperimenter to be able to pull experiments from the database, we need to fill it.
-The general command looks like this:
-```bash
-python -m carps.experimenter.create_cluster_configs +task=... +optimizer=... -m
-```
-All subset runs were created with `scripts/create_experiments_in_db.sh`.
-
-### Running Experiments
-Now, execute experiments with:
-```bash
-python -m carps.run_from_db 'job_nr_dummy=range(1,1000)' -m
-```
-This will create 1000 multirun jobs, each pulling an experiment from PyExperimenter and executing it.
-
-!!! Note: On most slurm clusters the max array size is 1000.
-!!! Note: On our mysql server location, at most 300 connections at the same time are possible. You can limit your number
-of parallel jobs with `hydra.launcher.array_parallelism=250`.
-!!! `carps/configs/runfromdb.yaml` configures the run and its resources. Currently defaults for our slurm cluster are
-configured. If you run on a different cluster, adapt `hydra.launcher`.
-
-Experiments with error status (or any other status) can be reset via:
-```bash
-python -m carps.experimenter.database.reset_experiments
-```
-
-### Get the results from the database and post-process
-
+`PyExperimenter`. See the
+[guide in the docs](https://automl.github.io/CARP-S/latest/guides/database/) for information
+about how to set it up.
 
 ## Adding a new Optimizer or Benchmark
 For instructions on how to add a new optimizer or benchmark, please refer to the contributing
````
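The `experiment_id` join described in the README diff can be sketched with pandas. The parquet reading is elided here and the non-key column names are illustrative stand-ins, not the exact carps schema:

```python
import pandas as pd

# Stand-ins for logs.parquet (one row per trial) and logs_cfg.parquet
# (one row per experiment); normally: pd.read_parquet("logs.parquet") etc.
logs = pd.DataFrame({
    "experiment_id": [0, 0, 1],
    "n_trials": [1, 2, 1],
    "cost": [3.2, 2.9, 5.1],
})
logs_cfg = pd.DataFrame({
    "experiment_id": [0, 1],
    "optimizer_id": ["SMAC3-BlackBox", "RandomSearch"],
})

# Attach the experiment configuration to every logged trial.
df = logs.merge(logs_cfg, on="experiment_id", how="left")
```

A left merge keeps every trial row even if a configuration row were missing, which makes gaps easy to spot afterwards.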

Makefile renamed to carps/build/Makefile

Lines changed: 19 additions & 18 deletions
```diff
@@ -1,11 +1,12 @@
 # These have been configured to only really run short tasks. Longer form tasks
 # are usually completed in github actions.
+# Paths are relative to the Makefile location.
 
 SHELL := /bin/bash
 
 NAME := CARP-S
 PACKAGE_NAME := carps
-VERSION := 1.0.0
+VERSION := 1.0.1
 
 DIST := dist
 
@@ -36,7 +37,7 @@ check:
 	pre-commit run --all-files
 
 install-dev:
-	$(PIP) install -e ".[dev]"
+	$(PIP) install -e "../..[dev]"
 	pre-commit install
 
 clean-build:
@@ -80,33 +81,33 @@ uvenv:
 
 optimizer_smac:
 	$(PIP) install swig
-	$(PIP) install -r container_recipes/optimizers/SMAC3/SMAC3_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/SMAC3/SMAC3_requirements.txt
 
 optimizer_optuna:
-	$(PIP) install -r container_recipes/optimizers/Optuna/Optuna_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/Optuna/Optuna_requirements.txt
 
 optimizer_dehb:
-	$(PIP) install -r container_recipes/optimizers/DEHB/DEHB_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/DEHB/DEHB_requirements.txt
 	$(PIP) install numpy --upgrade
 
 optimizer_skopt:
-	$(PIP) install -r container_recipes/optimizers/Scikit_Optimize/Scikit_Optimize_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/Scikit_Optimize/Scikit_Optimize_requirements.txt
 
 optimizer_synetune:
-	$(PIP) install -r container_recipes/optimizers/SyneTune/SyneTune_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/SyneTune/SyneTune_requirements.txt
 	$(PIP) install numpy --upgrade
 
 optimizer_ax:
-	$(PIP) install -r container_recipes/optimizers/Ax/Ax_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/Ax/Ax_requirements.txt
 	$(PIP) install numpy --upgrade
 
 optimizer_hebo:
-	# . container_recipes/optimizers/HEBO/HEBO_install.sh
-	$(PIP) install -r container_recipes/optimizers/HEBO/HEBO_requirements.txt
+	# . ../container/recipes/optimizers/HEBO/HEBO_install.sh
+	$(PIP) install -r ../container/recipes/optimizers/HEBO/HEBO_requirements.txt
 	$(PIP) install numpy --upgrade
 
 optimizer_nevergrad:
-	$(PIP) install -r container_recipes/optimizers/Nevergrad/Nevergrad_requirements.txt
+	$(PIP) install -r ../container/recipes/optimizers/Nevergrad/Nevergrad_requirements.txt
 	$(PIP) install numpy --upgrade
 	$(PIP) install cma --upgrade
 
@@ -118,29 +119,29 @@ benchmark_bbob:
 benchmark_yahpo:
 	# Needs 2GB of space for the surrogate models of YAHPO
 	# Install yahpo
-	. container_recipes/benchmarks/YAHPO/install_yahpo.sh
+	. ../container/recipes/benchmarks/YAHPO/install_yahpo.sh
 	$(PIP) install ConfigSpace --upgrade
 	$(PIP) install numpy --upgrade
 
 benchmark_pymoo:
 	# Install pymoo
-	$(PIP) install -r container_recipes/benchmarks/Pymoo/Pymoo_requirements.txt
+	$(PIP) install -r ../container/recipes/benchmarks/Pymoo/Pymoo_requirements.txt
 
 benchmark_mfpbench:
 	# Install mfpbench
-	$(PIP) install -r container_recipes/benchmarks/MFPBench/MFPBench_requirements.txt
+	$(PIP) install -r ../container/recipes/benchmarks/MFPBench/MFPBench_requirements.txt
 	$(PIP) install xgboost --upgrade
 	$(PIP) install ConfigSpace --upgrade
-	. container_recipes/benchmarks/MFPBench/download_data.sh
+	. ../container/recipes/benchmarks/MFPBench/download_data.sh
 
 benchmark_hpobench:
 	# Install hpobench
-	. container_recipes/benchmarks/HPOBench/install_HPOBench.sh
+	. ../container/recipes/benchmarks/HPOBench/install_HPOBench.sh
 
 benchmark_hpob:
 	# Install hpob
-	$(PIP) install -r container_recipes/benchmarks/HPOB/HPOB_requirements.txt
-	. container_recipes/benchmarks/HPOB/download_data.sh
+	$(PIP) install -r ../container/recipes/benchmarks/HPOB/HPOB_requirements.txt
+	. ../container/recipes/benchmarks/HPOB/download_data.sh
 
 benchmarks:
 	$(MAKE) benchmark_bbob
```

carps/build/make.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ def run_make_commands(targets: list[str]) -> None:
 if __name__ == '__main__':
     args = sys.argv[1:]
     cwd_orig = os.getcwd()
-    makefile_dir = Path(os.path.dirname(__file__)).parent.parent
+    makefile_dir = Path(os.path.dirname(__file__))
     os.chdir(makefile_dir)
     run_make_commands(args)
     os.chdir(cwd_orig)
```
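Since the Makefile now ships next to `make.py`, resolving its directory reduces to the module's own directory. A sketch of the chdir-and-run pattern this diff relies on (`run_make` is our illustrative name, and we assume a `make` binary on PATH; error handling beyond `check=True` is omitted):

```python
import os
import subprocess
from pathlib import Path


def run_make(targets, makefile_dir):
    """Run each `make <target>` from the Makefile's directory,
    restoring the caller's working directory afterwards."""
    cwd_orig = os.getcwd()
    os.chdir(makefile_dir)
    try:
        for target in targets:
            subprocess.run(["make", target], check=True)
    finally:
        os.chdir(cwd_orig)


# After the rename, the Makefile sits beside this module, so its
# directory is simply the module's own directory:
makefile_dir = Path(__file__).resolve().parent
```

The `try/finally` guarantees the caller's working directory is restored even if a make target fails, which the original script achieves by restoring `cwd_orig` at the end.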

container_recipes/benchmarks/BBOB/BBOB.recipe renamed to carps/container/recipes/benchmarks/BBOB/BBOB.recipe

Lines changed: 3 additions & 3 deletions
```diff
@@ -9,7 +9,7 @@ From: python:3.10-slim
 
 %files
 ./carps /benchmarking/carps
-./container_recipes /benchmarking/container_recipes
+./carps/container/recipes /benchmarking/carps/container/recipes
 requirements.txt /benchmarking/requirements.txt
 setup.py /benchmarking/setup.py
 README.md /benchmarking/README.md
@@ -27,13 +27,13 @@ From: python:3.10-slim
 pip install wheel
 pip install -r /benchmarking/requirements.txt
 pip install ../benchmarking
-pip install -r /benchmarking/container_recipes/general/general_requirements_container_task.txt
+pip install -r /benchmarking/carps/container/recipes/general/general_requirements_container_task.txt
 
 # log benchmarking version
 BENCHMARKING_VERSION=$(python -c "import carps; print(carps.version)")
 echo "benchmarking_version $BENCHMARKING_VERSION" >> "$SINGULARITY_LABELS"
 
 # benchmark-specific commands go here
-pip install -r /benchmarking/container_recipes/benchmarks/BBOB/BBOB_requirements.txt
+pip install -r /benchmarking/carps/container/recipes/benchmarks/BBOB/BBOB_requirements.txt
 
 echo "Successfully installed all features"
```
