An Apache Iceberg implementation of the Hydrofabric to disseminate continental hydrologic data
> **Note:** To run any of the functions in this repo, your AWS test account credentials need to be in your `.env` file, and your `.pyiceberg.yaml` settings need to be up to date, with `AWS_DEFAULT_REGION="us-east-1"` set.
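As a rough sketch, a minimal `.env` might contain entries like the following. The key names are the standard AWS environment variables; the values are placeholders, not real credentials:

```
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_DEFAULT_REGION="us-east-1"
```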
This repo is managed through uv and can be installed with:

```shell
uv sync --all-extras
source .venv/bin/activate
```

Note: Functionality is split into optional dependencies in `pyproject.toml`. If you only require base functionality, install with `uv sync`. If you require some extras (e.g. `icechunk`, `io`), you can specify them as needed: `uv sync --extra icechunk --extra io`. For local development, `--all-extras` is recommended for complete functionality.
To run the API locally, ensure the `.env` file in your project root has the right credentials, then run:

```shell
python -m app.main
```

This should spin up the API services at `localhost:8000/`.
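Because the API reads its credentials from the environment, a quick pre-flight check can surface a missing variable before startup fails with a less obvious AWS error. A minimal sketch (not part of icefabric itself; the variable names are the standard AWS ones mentioned in the note above):

```python
import os

# Standard AWS environment variable names the API is expected to rely on;
# AWS_DEFAULT_REGION is pinned to us-east-1 in this project's note above.
REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION")


def missing_env(required: tuple[str, ...] = REQUIRED_VARS) -> list[str]:
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Calling `missing_env()` before launching `python -m app.main` reports absent credentials immediately instead of part-way through a request.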
If you are running the API locally, you can run:

```shell
python -m app.main --catalog sql
```

To run the API locally with Docker, ensure the `.env` file in your project root has the right credentials, then run:
```shell
docker compose -f docker/compose.yaml build --no-cache
docker compose -f docker/compose.yaml up
```

This should spin up the API services.
To ensure that icefabric follows the specified structure, be sure to install the local dev dependencies and run `pre-commit install`.
To build the user guide documentation for Icefabric locally, run the following commands:

```shell
uv pip install ".[docs]"
mkdocs serve -a localhost:8080
```

Docs will be spun up at `localhost:8080/`.
The `tests` folder holds all testing data so the global `conftest` can pick it up. This allows all tests in the namespace packages to share the same scope without having to reference one another in tests.
To run tests, run `pytest -s` from the project root.
To run the subsetter tests, run `pytest --run-slow`, as these tests take some time; otherwise, they will be skipped.
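The shared-`conftest` layout described above can be sketched as follows. This is an illustrative, self-contained reproduction of the pattern, not code from icefabric (the fixture and test names are made up): a fixture defined once in a root `conftest.py` is visible to tests in any subpackage below it, with no imports between test modules.

```python
import tempfile
import textwrap
from pathlib import Path

import pytest

# Recreate the pattern in a throwaway directory: one root conftest.py,
# plus a test file in a nested package that uses the root fixture.
root = Path(tempfile.mkdtemp())
(root / "conftest.py").write_text(textwrap.dedent("""
    import pytest

    @pytest.fixture
    def shared_data_dir(tmp_path):
        # Stand-in for shared testing data picked up by the global conftest.
        (tmp_path / "data.txt").write_text("hydrofabric")
        return tmp_path
"""))

pkg = root / "subpackage_tests"
pkg.mkdir()
(pkg / "test_uses_shared_fixture.py").write_text(textwrap.dedent("""
    def test_reads_shared_data(shared_data_dir):
        # The fixture comes from the root conftest.py -- no import needed.
        assert (shared_data_dir / "data.txt").read_text() == "hydrofabric"
"""))

# Run pytest against the temporary tree; 0 means the nested test found
# and used the root fixture successfully.
exit_code = pytest.main(["-q", str(root)])
```

This is why the namespace packages' tests can all lean on the same testing data without referencing one another: pytest resolves fixtures upward through the directory tree.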
