Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation

Xiaoxing Hu, Ziyang Gong, Yupei Wang, Yuru Jia, Gen Luo, Xue Yang

If you find our work helpful, please consider giving us a ⭐!

Official PyTorch implementation of Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation.

Notice

This repository is still being organized and refined. If you encounter any issues while using it, please contact us (Email: xiaoxinghhh@gmail.com | WeChat: 15717699268) or submit an issue. Thank you for your attention.

TODO

  • Complete training and evaluation instructions
  • Paper link
  • demo.ipynb
  • Data and weights on Hugging Face & Google Drive
  • Extended experiments on supervised in-domain semantic segmentation
  • Extended experiments on the CrossEarth benchmark
  • Bug fixes...

📖 Introduction

This repository contains the official implementation of Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation. Our method achieves state-of-the-art performance on 8 widely used cross-domain geospatial benchmarks. The code is still under development; for now we provide the model, weights, and datasets.

Paper: https://arxiv.org/abs/2504.06220

🛠️ Requirements

  • Python >= 3.8
  • PyTorch >= 1.10
  • CUDA >= 11.0 (if using GPU)
  • Other dependencies in requirements.txt
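
You can verify these prerequisites from the command line before installing anything:

# Check interpreter and CUDA toolkit versions against the floors above
python --version   # expect Python >= 3.8
nvcc --version     # expect CUDA >= 11.0 (GPU setups only)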

🚀 Installation

  • Clone this repository and install dependencies:
# Clone the repo
git clone https://github.com/VisionXLab/Earth-Adapter.git
cd Earth-Adapter

# Create virtual environment
conda create -n earth-adapter python=3.9 -y

conda activate earth-adapter

# Install PyTorch according to your own CUDA version
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu121


# Install other dependencies
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
pip install "mmsegmentation>=1.0.0"
pip install "mmdet>=3.0.0"
pip install xformers==0.0.23
pip install -r requirements.txt
pip install future tensorboard
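
After the steps above, a quick sanity check confirms that the core packages import cleanly and that the GPU is visible:

python -c "import torch, mmcv, mmseg, mmdet; print(torch.__version__, mmcv.__version__, torch.cuda.is_available())"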

📂 Dataset Preparation

  • Download the LoveDA, ISPRS Potsdam, and ISPRS Vaihingen datasets from |Baidu Cloud|Hugging Face|Google Drive|. (We have processed the images and labels, dividing them into 512x512 patches. You may apply the same processing to your own dataset; see the sketch after the directory layout below.)
  • Organize the data as follows:
Earth-Adapter/
|-- data/
|---|--- loveda_uda
|---|--- potsdamRGB
|---|--- vaihingen
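
The released data is already cut into 512x512 patches. To process your own imagery the same way, a minimal tiling sketch follows (an illustrative assumption, not the exact script used for the release; file paths and the non-overlapping stride are hypothetical):

# tile_512.py -- cut large images/labels into non-overlapping 512x512 patches (illustrative)
import os
from PIL import Image

def tile_image(src_path, out_dir, patch=512):
    """Cut one large image into non-overlapping patch x patch tiles."""
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(src_path)
    w, h = img.size
    stem = os.path.splitext(os.path.basename(src_path))[0]
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            tile = img.crop((left, top, left + patch, top + patch))
            tile.save(os.path.join(out_dir, f"{stem}_{top}_{left}.png"))

# Cut images and label masks on the same grid so they stay aligned (paths are examples)
tile_image("my_scene.tif", "data/my_dataset/img_dir")
tile_image("my_scene_label.png", "data/my_dataset/ann_dir")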

🔥 Usage

  • If you encounter an mmcv version mismatch when importing mmseg or mmdet (e.g., an error like xxx<=mmcv<xxx), edit the version check in the __init__.py of mmseg and mmdet directly, relaxing the upper bound to xxx<=mmcv<=xxx (see the sketch below).
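
For reference, the guard that raises this error lives near the top of mmseg/__init__.py (mmdet has an equivalent one). A self-contained sketch of the relaxed check, with illustrative bounds rather than the exact ones shipped in your installed release:

# Illustrative reproduction of the mmcv version guard; real bounds vary by release
import mmcv
from mmengine.utils import digit_version

mmcv_min_version = digit_version('2.0.0')   # example lower bound
mmcv_max_version = digit_version('2.2.0')   # example upper bound
mmcv_version = digit_version(mmcv.__version__)

# Original strict check: mmcv_min_version <= mmcv_version < mmcv_max_version
# Relaxed check (the edit described above) also accepts mmcv equal to the upper bound:
assert mmcv_min_version <= mmcv_version <= mmcv_max_version, \
    f'MMCV=={mmcv.__version__} is incompatible with this build.'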

Training

./tools/train.sh

Evaluation

The checkpoints can be downloaded from |Baidu Cloud|Hugging Face|Google Drive|. Put the checkpoint in the checkpoints folder, then run:

./tools/test.sh

📊 Results

Main Results

[Figure: main results]

Quantitative Results

[Figure: quantitative results]

Visualization

[Figure: qualitative visualizations]

📜 Citation

If you find our work helpful, please cite our paper:

@article{hu2025earth,
  title={Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation},
  author={Hu, Xiaoxing and Gong, Ziyang and Wang, Yupei and Jia, Yuru and Luo, Gen and Yang, Xue},
  journal={arXiv preprint arXiv:2504.06220},
  year={2025}
}
@article{gong2024crossearth,
  title={CrossEarth: Geospatial Vision Foundation Model for Domain Generalizable Remote Sensing Semantic Segmentation},
  author={Gong, Ziyang and Wei, Zhixiang and Wang, Di and Ma, Xianzheng and Chen, Hongruixuan and Jia, Yuru and Deng, Yupeng and Ji, Zhenming and Zhu, Xiangwei and Yokoya, Naoto and others},
  journal={arXiv preprint arXiv:2410.22629},
  year={2024}
}

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙌 Acknowledgments

Our work is inspired by Rein. We are grateful for their outstanding work and code.
