
EasyEarth: Run Vision(-Language) Models for Earth Observations at Your Fingertips

EasyEarth Logo


EasyEarth enables seamless application of cutting-edge computer vision and vision-language models directly on Earth observation data — without writing code. The platform integrates with QGIS via a plugin GUI and provides server-side infrastructure for scalable model inference and management.


🔧 Key Components

  1. Server-Side Infrastructure – Scalable backend to run AI models on geospatial data
  2. QGIS Plugin GUI – User-friendly interface to apply models inside QGIS
  3. Model Manager (in development) – Upload, version, and deploy models with ease

Architecture

📽️ Watch Demo



📁 Project Structure

.
├── easyearth  # Server-side code for EasyEarth
│   ├── app.py  # Main application entry point
│   ├── config  
│   ├── controllers  # Controllers for handling requests
│   ├── models  # Model management and inference logic
│   ├── openapi  # OpenAPI specification for the API
│   ├── static  # Static files for the server
│   ├── tests  # Unit tests for the server
├── easyearth_plugin  # QGIS plugin for EasyEarth
│   ├── core  # Core logic for the plugin
│   ├── data  # Sample data for testing
│   ├── environment.yml  
│   ├── launch_server_local.sh  # Script to launch server locally
│   ├── plugin.py  # Main plugin entry point
│   ├── requirements.txt  # Python dependencies for the plugin
│   ├── resources  # Resources for the plugin (icons, images, etc.)
│   └── ui  # User interface files for the plugin
├── docs  # Documentation for EasyEarth
│   ├── APIReference.md  # API reference documentation
│   ├── TROUBLESHOOTING.md  # Troubleshooting guide for common issues
│   ├── docker_installation.md  # Docker installation guide for Ubuntu
│   └── DeveloperGuide.md  # Developer guide for contributing to EasyEarth
├── docker-compose.yml  # Docker Compose configuration for EasyEarth
├── Dockerfile  # Dockerfile for building the EasyEarth server image
├── environment.yml  # Conda environment file for EasyEarth
├── launch_server_local.sh  # Script to launch the EasyEarth server locally
├── README.md  
├── requirements_mac.txt  # Python dependencies for macOS
├── requirements.txt  # Python dependencies for EasyEarth
├── launch_server_docker.sh  # Script to set up the dockerized EasyEarth server (only needed for building the image from the start)
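
If you only want the server side (no QGIS), the launch scripts at the repository root can be run directly. This is a minimal sketch, assuming the scripts are executed from the repository root and need no extra arguments (the QGIS plugin normally runs them for you):

# assumption: run from the repository root, no additional arguments required
bash launch_server_local.sh  # launch the EasyEarth server locally without Docker
bash launch_server_docker.sh  # set up the dockerized server (only needed when building the image from scratch)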

    

🚀 Get Started

✅ Requirements

  • Python ≥ 3.9
  • (optional) QGIS (tested with 3.38 and 3.40)
    ⚠️ Required to use the plugin in QGIS; otherwise, the server side can be used on its own
  • (optional) CUDA ≥ 12.8 (download)
    ⚠️ CUDA is only required for GPU inference on Linux. CPU-only mode is also available (though much slower). On macOS, use local mode to enable the GPU.
  • (optional) Docker and Docker Compose ≥ 1.21.2 (install guide)
    ⚠️ The server side is a dockerized Flask app. Without Docker, you can use the plugin's local server mode, which downloads a pre-compressed environment file and runs the app directly. On Linux, install Docker from the official Docker repository (deb package rather than snap) to avoid issues.
  • (optional) NVIDIA Container Toolkit (install guide)
    ⚠️ Only required to use the GPU inside the Docker container on Linux (i.e., in Docker mode); a quick check is shown after this list
    • Tested on NVIDIA-SMI 570.133.07 | Driver Version: 570.133.07 | CUDA Version: 12.8. If you encounter issues enabling the GPU in Docker, please update your NVIDIA driver.
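
Before using Docker mode with a GPU, you can verify that containers can see the GPU with a standard NVIDIA Container Toolkit check. A minimal sketch; the CUDA base image tag below is only an example and is not part of EasyEarth:

# assumption: any recent nvidia/cuda base image works for this smoke test
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi  # should report the same driver/CUDA versions as the host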

📦 Compatibility

Currently tested on:
✅ Ubuntu
✅ macOS
⚠️ Windows support:
Please find the pre-release for Windows here. Full testing on Windows is not yet complete - if you encounter any issues or would like to help us add Windows support, contributions are welcome!

📥 Download Pre-built Plugin

# go to your download directory
cd ~/Downloads  # Specify your own path where you want to download the code
git clone https://github.com/YanCheng-go/easyearth.git

You can also download the latest release (.zip) directly from the Releases Page.

🧩 Install EasyEarth Plugin in QGIS

Method 1: Manual Installation

  1. Open QGIS > Settings > User Profiles > Open Active Profile Folder
  2. Navigate to python/plugins
  3. Copy easyearth_plugin folder into this directory
  4. Restart QGIS > Plugins > Manage and Install Plugins > enable EasyEarth

Method 2: Terminal Installation

cd ~/Downloads/easyearth  # go to the directory that contains the easyearth_plugin folder
cp -r ./easyearth_plugin ~/.local/share/QGIS/QGIS3/profiles/default/python/plugins  # copy the easyearth_plugin folder to the plugins directory on Linux
cp -r easyearth_plugin /Users/USERNAME/Library/Application\ Support/QGIS/QGIS3/profiles/default/python/plugins  # copy the easyearth_plugin folder to the plugins directory on macOS

After this, restart QGIS > Plugins > Manage and Install Plugins > enable EasyEarth


🚀 Usage

🛰️ Run EasyEarth in QGIS

  1. Click on the EasyEarth icon in the toolbar
  2. Select a project directory; the following folders will be created inside it:
    • easyearth_base/images
      ⚠️ Images to be processed must be placed here
    • easyearth_base/embeddings - for storing embeddings
    • easyearth_base/logs - for storing logs
    • easyearth_base/tmp - for storing temporary files
    • easyearth_base/predictions - for storing predictions
  3. Click Docker to launch the dockerized EasyEarth server container, or Local to run the non-dockerized server
    ⚠️ This may take a while the first time and whenever the Docker image has been updated. As a faster option, you can pull the image from a terminal outside QGIS (see the command after this list).
  4. The Server section will then show the Server Status as Online, along with the detected device
  5. Click Browse Image to select an image from the easyearth_base/images folder
  6. Select a model from the dropdown menu
  7. Click Start Drawing to draw points or boxes on the image
    ⚠️ When real-time mode is checked, the prediction for each drawn prompt is shown in real time, so there is no need to go to step 8
  8. Click Predict to run the model inference
  9. Prediction results will be saved in the easyearth_base/tmp folder and can be moved to the easyearth_base/predictions folder as desired
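
As noted in step 3, the server image can be pulled ahead of time from a terminal to speed up the first Docker launch:

docker pull maverickmiaow/easyearth:latest  # pre-pull the EasyEarth server image outside QGIS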

🧠 Available Models (Adding...)

| Model Name | Description | Prompt Type | Prompt Data |
| --- | --- | --- | --- |
| SAM | Segment Anything Model | Point | [[x, y], [x, y], ...] |
| SAM | Segment Anything Model | Box | [[x1, y1, x2, y2]] |
| SAM2 | Segment Anything Model | Point | [[x, y], [x, y], ...] |
| SAM2 | Segment Anything Model | Box | [[x1, y1, x2, y2]] |
| LangSAM | Language Model | Text | ["text1", "text2"] |
| restor/tcd-segformer-mit-b2 | Semantic Segmentation | None | [] |
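
If you use the server without the plugin, predictions can also be requested through the REST API defined in the openapi specification (see docs/APIReference.md). The request below is only an illustrative sketch: the endpoint path, port, and JSON field names are assumptions and must be checked against the API reference; the prompt format follows the table above.

# hypothetical request with a SAM point prompt
# endpoint path, port, and field names are assumptions; check docs/APIReference.md for the actual schema
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"model": "sam", "image_path": "easyearth_base/images/example.tif", "prompt_type": "Point", "prompt_data": [[120, 240]]}'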

📚 Documentation

Check out our User Guide and Developer Guide for more.


🎯 Roadmap

  • EasyEarth server for model inference
  • QGIS plugin for model application
  • Dockerized server for scalable model inference
  • Advanced prompt-guided segmentation
  • Editing tools for segmentation
  • Model Manager for uploading/updating/tracking models
  • Chatbot integration for model management and reporting
  • Cloud deployment templates

🤝 Contributing

We welcome community contributions! If you'd like to contribute, check out the Developer Guide (docs/DeveloperGuide.md).


👥 Acknowledgements

This project was inspired by several outstanding open-source initiatives. We extend our gratitude to the developers and communities behind the following projects:

  • Segment Anything (SAM) – Meta AI's foundation model for promptable image and video segmentation.
  • SAMGeo – A Python package for applying SAM to geospatial data.
  • Geo-SAM – A QGIS plugin for efficient segmentation of large geospatial raster images.
  • GroundingDINO – An open-set object detector integrating language and vision for zero-shot detection.
  • Lang-Segment-Anything – Combines SAM and GroundingDINO to enable segmentation via natural language prompts.
  • Ultralytics – Creators of the YOLO series, offering real-time object detection and segmentation models.
  • Hugging Face – A platform for sharing and collaborating on machine learning models and datasets.
  • Ollama – A framework for running large language models locally with ease.

🧑‍💻 Authors

Developed by:

Yan Cheng (chengyan2017@gmail.com) – 🌐 Website · GitHub · LinkedIn
Lucia Gordon (luciagordon@g.harvard.edu) – 🌐 Website · GitHub · LinkedIn
Ankit Kariryaa (ankit.ky@gmail.com)

Citation

If you use EasyEarth in your research or projects, please cite it as follows:

@software{easyearth2025,
  author = {Yan Cheng and Lucia Gordon and Ankit Kariryaa},
  title = {EasyEarth: Run Vision(-Language) Models for Earth Observations at Your Fingertips},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/YanCheng-go/easyearth},
  doi = {10.5281/zenodo.15699316},
}
