README.md: 11 additions & 8 deletions
@@ -1,4 +1,4 @@
-# Generative AI Navigation Information for UAV Reconnaissance in Natural Environments
+# UAV-GenerativeAI-Navigation-Images

## Table of Contents
- [Overview](#Overview)
@@ -18,11 +18,11 @@
We employ two models: a GAN ([pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)) and a Diffusion model ([PITI](https://github.com/PITI-Synthesis/PITI)). The raw data is fed into both models. The Diffusion model is fine-tuned from a pre-trained [guided-diffusion](https://github.com/openai/guided-diffusion) model, while the GAN model is trained from scratch. The generated images are then evaluated by a Router, which determines the final output by selecting the better result from either the GAN or the Diffusion model.
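The README does not spell out the criterion the Router uses to choose between the two candidate images, so the snippet below is only a minimal sketch of per-image selection: it scores each candidate with a simple no-reference sharpness measure and keeps the higher-scoring one. The function names `laplacian_sharpness` and `route`, and the sharpness criterion itself, are illustrative assumptions rather than code from this repository.

```python
from pathlib import Path
from typing import Callable

import numpy as np
from PIL import Image


def laplacian_sharpness(img: np.ndarray) -> float:
    """Variance of a simple Laplacian response; a stand-in quality score."""
    lap = (
        -4 * img[1:-1, 1:-1]
        + img[:-2, 1:-1] + img[2:, 1:-1]
        + img[1:-1, :-2] + img[1:-1, 2:]
    )
    return float(lap.var())


def route(gan_path: Path, diffusion_path: Path,
          score: Callable[[np.ndarray], float] = laplacian_sharpness) -> Path:
    """Return whichever candidate image scores higher under `score`.

    The real Router's selection rule is not documented here; this only
    illustrates the idea of picking one output per input image.
    """
    candidates = [gan_path, diffusion_path]
    arrays = [np.asarray(Image.open(p).convert("L"), dtype=np.float32)
              for p in candidates]
    scores = [score(a) for a in arrays]
    return candidates[int(np.argmax(scores))]
```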
The `training_dataset` and `testing_dataset` directories contain the datasets provided by the [AI CUP 2024](https://tbrain.trendmicro.com.tw/Competitions/Details/34) competition. You can replace these datasets with your own data by organizing them in the following structure (a minimal layout-check sketch follows the listing):
* Training Dataset
  * `img/`: Contains raw drone images in .jpg format.
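If you swap in your own data, a quick sanity check of the layout can save a failed training run. The sketch below only assumes the `img/` subdirectory of raw `.jpg` images described above; it is an illustrative helper, not a script shipped with this repository.

```python
from pathlib import Path


def list_jpgs(root: str = "training_dataset") -> list[Path]:
    """Collect the raw .jpg drone images under <root>/img/."""
    img_dir = Path(root) / "img"
    if not img_dir.is_dir():
        raise FileNotFoundError(f"Expected directory {img_dir} (see layout above)")
    return sorted(img_dir.glob("*.jpg"))


if __name__ == "__main__":
    # Report how many images each split contains, mirroring the expected layout.
    for split in ("training_dataset", "testing_dataset"):
        try:
            print(split, len(list_jpgs(split)), "images")
        except FileNotFoundError as err:
            print(err)
```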