Slune (Slurm + tune) is a zero-boilerplate helper library for launching embarrassingly parallel jobs, such as hyper-parameter searches, on a SLURM cluster.
- ⚡ Fast turnaround – split large searches into many tiny jobs and let the scheduler run them as soon as resources are free.
- 🪄 Simple – a tiny API surface; one import gives you everything you need.
- 🔧 Flexible – minimal assumptions about your project; works with any executable script.
- 📑 Organised results – pluggable ‘Saver’ implementations (CSV by default) keep your experiments tidy and queryable.
The goal is to stay bloat-free: every helper is fully optional and easy to read or extend.
Stable release:

```shell
pip install slune-lib
```

Bleeding-edge (main branch):

```shell
pip install "git+https://github.com/h-0-0/slune.git#egg=slune-lib"
```
Requires Python 3.8+ and access to an `sbatch` command (a SLURM installation or simulator).
- Define the search space.
- Submit jobs with a single function call.
- Read the results when the jobs finish.
```python
from slune import SearcherGrid, sbatchit, get_csv_saver

# 1️⃣ Search space: try three values of `alpha`
searcher = SearcherGrid({'alpha': [0.25, 0.5, 0.75]}, runs=1)

# 2️⃣ Submit jobs
sbatchit(
    script_path="model.py",    # training script that consumes an --alpha argument
    sbatch_path="template.sh", # SBATCH template containing your cluster directives
    searcher=searcher,
    cargs={},                  # constant args shared by every job
    saver=None                 # optional SaverCsv; skip to rerun all jobs
)

# 3️⃣ Retrieve the best result
saver = get_csv_saver()
params, mse = saver.read(params=[], metric_name="mse", select_by="min")
print("Best parameters:", params, "MSE:", mse)
```
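The `template.sh` passed to `sbatchit` typically holds only your cluster directives and environment setup. A minimal sketch follows; every directive value is a placeholder to adapt to your cluster, and exactly how slune injects the training command into the template is determined by the library, so check the slune docs for the expected template shape.

```shell
#!/bin/bash
# Placeholder directives - adjust partition, time and memory for your cluster.
#SBATCH --job-name=slune-demo
#SBATCH --time=00:30:00
#SBATCH --mem=4G
#SBATCH --cpus-per-task=1

# Environment setup for the job, e.g. activating a virtualenv:
# source .venv/bin/activate
```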
The `examples/` directory contains full notebooks that walk through multi-node experiments.
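The quick start above hands each sampled value to `model.py` as a command-line flag. A minimal sketch of such a script, using only `argparse`, is shown below; the pretend "mse" computation is a stand-in for real training, and logging the metric through a slune Saver is omitted since that part depends on the Saver API.

```python
# Hypothetical model.py: consumes the --alpha flag that slune passes on the
# command line, "trains", and reports a stand-in mse value.
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--alpha", type=float, required=True)
    args = parser.parse_args(argv)

    # Placeholder for real training: pretend mse depends only on alpha.
    mse = (args.alpha - 0.5) ** 2
    print(f"alpha={args.alpha} mse={mse}")
    return mse

if __name__ == "__main__":
    main()
```

Because slune only needs an executable that accepts flags, the same pattern works for a Bash, R or Julia script.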
Full API reference and guides are hosted at:
👉 https://h-0-0.github.io/slune/
- `SearcherGrid` – Cartesian-product grid search (more search strategies coming soon)
- `SaverCsv` + `LoggerDefault` – hierarchical CSV logging out of the box
- Helper utilities: `lsargs`, `dict_to_strings`, filesystem helpers and more
- Works with any language – you can pass parameters to a Bash, R or Julia script just as easily
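To illustrate the idea behind a helper like `dict_to_strings` (its exact output format may differ, so the `params_to_flags` function below is a hypothetical sketch of the concept, not slune's implementation): turning a parameter dict into the flag strings a script can consume.

```python
# Hypothetical helper sketching the dict-to-CLI-flags idea: slune's own
# dict_to_strings may format flags differently.
def params_to_flags(params):
    """Turn {'alpha': 0.25} into ['--alpha=0.25']."""
    return [f"--{key}={value}" for key, value in params.items()]

print(params_to_flags({"alpha": 0.25, "seed": 1}))
```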
- Enhanced SLURM integration: smarter job names, output paths, notifications
- Random / Bayesian / cross-validation search strategies
- Additional saving back-ends (TensorBoard, database, single-CSV)
- CLI tools for monitoring, resubmitting and cancelling jobs
- Language-agnostic templates for non-Python pipelines
Have an idea? Open an issue or PR ✨
- Fork the repo and create a feature branch.
- Run the test suite (`pytest`).
- Open a Pull Request describing your changes.

See `CONTRIBUTING.md` for more details.
Licensed under the MIT License – see the `LICENSE` file for details.