This repository provides the implementation for our paper. We introduce an ensemble-based pipeline compatible with any monocular height estimation network, where we carefully design the architecture and loss functions to leverage the information concealed in imperfect labels using weak supervision.
⚠️ Note: The provided environment file is for reference only. Some packages may require manual installation or configuration.
- pytorch 1.7.1
- pytorch3d 0.4.0
  ⚙️ PyTorch3D typically requires manual compilation with GPU support. Please follow the official installation instructions.
- fvcore 0.1.5
- timm 0.9.7
- scikit-image 0.21.0
- wandb (for experiment tracking)
A configuration file is needed to launch training; see `configs/*.yaml` for reference. For the model and training configuration, most hyperparameters are set as defaults in `ensembleplus_*.py`, but you may override them in the configs if desired.
For the data configuration, you must define:
- `data_dir`: path to your dataset
- `data_train`, `data_val`, and `data_test`: lists of text files defining the data splits. The parameters for the DFC23 dataset are provided as a reference.
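For illustration, the data section of such a config might look like the fragment below. The field names `data_dir`, `data_train`, `data_val`, and `data_test` come from the description above; the paths and split filenames are placeholders, not files shipped with the repository.

```yaml
# Hypothetical data section of a training config (all paths are placeholders)
data_dir: /path/to/DFC23
data_train:
  - splits/train_part1.txt
  - splits/train_part2.txt
data_val:
  - splits/val.txt
data_test:
  - splits/test.txt
```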
Organize your dataset in the following structure under `data_dir`:

📂 data_dir
  📂 image   # Optical satellite images
  📂 mask    # Building footprint masks (not a network input, only used for computing building metrics)
  📂 ndsm    # Ground-truth normalized DSMs
Each scene should have the same filename base, e.g., `scene_001`, with different suffixes:
- `_IMG.tif` – optical image
- `_BLG.tif` – building mask (optional)
- `_AGL.tif` – nDSM height map
Example:
scene_001_IMG.tif
scene_001_BLG.tif
scene_001_AGL.tif
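As a minimal sketch (not code from this repository), the per-scene file paths can be derived from a scene base using the naming convention above; `scene_paths` is a hypothetical helper:

```python
import os

# Suffixes and subdirectories from the naming convention described above
SUFFIXES = {"image": "_IMG.tif", "mask": "_BLG.tif", "ndsm": "_AGL.tif"}
SUBDIRS = {"image": "image", "mask": "mask", "ndsm": "ndsm"}

def scene_paths(data_dir, scene_base):
    """Map a scene base (e.g. 'scene_001') to its image/mask/ndsm file paths."""
    return {
        key: os.path.join(data_dir, SUBDIRS[key], scene_base + SUFFIXES[key])
        for key in SUFFIXES
    }

# Example: scene_paths("/data", "scene_001")["image"]
# → "/data/image/scene_001_IMG.tif"
```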
Define your data splits in the text files listed in `data_train`, `data_val`, and `data_test`.
Each file lists scene bases (without extensions), e.g.:
scene_001
scene_002
...
scene_xxx
For each of training, validation, and testing, data entries from all corresponding split files are concatenated to build the dataloader.
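This concatenation step can be sketched as follows. This is a minimal illustration under the assumptions above; `read_split` and `gather_scenes` are hypothetical helpers, not repository functions:

```python
import os
import tempfile
from pathlib import Path

def read_split(path):
    """Read one split file: one scene base per line, blank lines ignored."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def gather_scenes(split_files):
    """Concatenate scene bases from all split files of one subset (train/val/test)."""
    scenes = []
    for path in split_files:
        scenes.extend(read_split(path))
    return scenes

# Demo with two placeholder split files
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
    Path(a).write_text("scene_001\nscene_002\n")
    Path(b).write_text("scene_003\n")
    print(gather_scenes([a, b]))  # → ['scene_001', 'scene_002', 'scene_003']
```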
To start training with the provided example configuration, simply run:
python train.py --exp_config /path/to/saved/config --restore
After training, several checkpoint files will be saved under the checkpoint directory: `checkpoint_last.pth.tar` for the last epoch, `checkpoint_best_rmse.pth.tar` for the epoch with the best validation RMSE, and so on.
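As an illustrative sketch (not repository code), the saved checkpoints can be enumerated by their common filename pattern; `list_checkpoints` is a hypothetical helper:

```python
import glob
import os
import tempfile

def list_checkpoints(ckpt_dir):
    """List checkpoint files matching the naming scheme described above."""
    return sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth.tar")))

# Demo with empty placeholder files using the names mentioned above
with tempfile.TemporaryDirectory() as d:
    for name in ("checkpoint_last.pth.tar", "checkpoint_best_rmse.pth.tar"):
        open(os.path.join(d, name), "w").close()
    print([os.path.basename(p) for p in list_checkpoints(d)])
    # → ['checkpoint_best_rmse.pth.tar', 'checkpoint_last.pth.tar']
```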
Evaluate a trained model with:
python test.py --config /path/to/archived/config/under/checkpoint/directory test_checkpoint_file checkpoint_best_rmse.pth.tar
Replace `checkpoint_best_rmse.pth.tar` with any other saved checkpoint as needed. The results will be saved as `result_best_rmse.pth.tar`.