This repository contains a minimal implementation of Hierarchical Optimization with Large Language Models (HOLLM).
```shell
# Create the conda environment
conda create -n hollm python=3.11

# Activate the environment
conda activate hollm

# Install the dependencies
python -m pip install -r requirements.txt
```
In our experiments we used `gemini-1.5-flash`. If using the same model, set up the API key environment variable first:

```shell
echo "export GEMINI_API_KEY={api_key}" >> ~/.zshrc
source ~/.zshrc
```
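The scripts are expected to read the key from the environment. As a quick sanity check before launching long runs, something like the following can be used (the helper name `check_api_key` is ours, not part of the repository):

```python
import os

def check_api_key(env: str = "GEMINI_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error.

    Hypothetical helper: the variable name matches the export above;
    exactly how the benchmark scripts consume it is an assumption.
    """
    key = os.environ.get(env)
    if key is None:
        raise RuntimeError(f"{env} is not set; export it and restart your shell.")
    return key
```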
```shell
# Generate YAML config files for all test functions
python generate_configs.py

# Submit jobs with specified settings in the config files
python run_benchmarks.py --config configs/levy.yaml
```
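The exact config schema is whatever `generate_configs.py` emits; as a minimal sketch, a config can be loaded with PyYAML (the keys `function`, `dim`, and `n_trials` below are illustrative assumptions, not the repository's actual fields):

```python
import yaml  # PyYAML

def load_config(path: str) -> dict:
    """Load a benchmark configuration from a YAML file."""
    with open(path) as fh:
        return yaml.safe_load(fh)

# Illustrative only: the real keys come from generate_configs.py.
example = yaml.safe_load("function: levy\ndim: 10\nn_trials: 5\n")
```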
```
hollm/
├── README.md
├── requirements.txt
├── run_benchmarks.py      # Main script (now uses YAML configs)
├── configs/               # YAML configuration files
│   ├── ackley.yaml
│   ├── hartmann.yaml
│   └── ...
├── src/                   # Core implementation
│   ├── benchmark_functions.py
│   ├── acquisition_strategies.py
│   └── ...
└── generate_configs.py    # Helper to generate YAML configs
```
The following methods are included:

- TuRBO-1 (Trust Region Bayesian Optimization)
- Expected Improvement (EI)
- Log Expected Improvement (LogEI)
- Thompson Sampling (TS)
- Random Search
- Sobol Sequence
- LLM-based Global Optimization
- HOLLM