This is the repository for the paper "StegGNN: Learning Graphical Representation for Image Steganography".
Image steganography refers to embedding secret messages within cover images while maintaining imperceptibility. Recent advances in deep learning—primarily driven by Convolutional Neural Networks (CNNs) and architectures such as invertible neural networks, autoencoders, and generative adversarial networks—have led to notable progress. However, these frameworks are primarily built on CNN architectures, which treat images as regular grids and are limited by their receptive field size and a bias toward spatial locality. In parallel, Graph Neural Networks (GNNs) have recently demonstrated strong adaptability in several computer vision tasks, achieving state-of-the-art performance with architectures such as Vision GNN (ViG). This work moves in that direction and introduces StegGNN, a novel autoencoder-based, cover-agnostic image steganography framework built on GNNs. By modeling images as graph structures, our approach leverages the representational flexibility of GNNs over the grid-based rigidity of conventional CNNs. We conduct extensive experiments on standard benchmark datasets to evaluate visual quality and imperceptibility. Our results show that our GNN-based method performs comparably to existing CNN benchmarks. These findings suggest that GNNs provide a promising alternative representation for steganographic embedding and open the field of deep learning-based steganography to further exploration of GNN-based architectures.
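To illustrate the "images as graphs" idea (this is a simplified sketch of ViG-style graph construction, not the actual StegGNN code: real implementations operate on learned patch embeddings, not raw pixels):

```python
import numpy as np

def knn_patch_graph(image, patch=4, k=3):
    """Build a k-NN graph over non-overlapping patches (ViG-style sketch).

    image: (H, W) grayscale array with H and W divisible by `patch`.
    Returns node features (one row per patch) and a directed edge list.
    """
    H, W = image.shape
    # Flatten each patch into a node feature vector.
    nodes = (image.reshape(H // patch, patch, W // patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, patch * patch)
                  .astype(np.float64))
    # Pairwise squared Euclidean distances between node features.
    d = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # exclude self-loops
    # Connect each node to its k most similar patches anywhere in the
    # image -- unlike a CNN kernel, neighbors need not be spatially close.
    nbrs = np.argsort(d, axis=1)[:, :k]
    edges = [(i, int(j)) for i in range(len(nodes)) for j in nbrs[i]]
    return nodes, edges
```

The resulting graph is what a GNN layer then aggregates over, which is the representational flexibility the abstract contrasts with grid-based convolution.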
See `setup/environment.yml` and run:

```bash
bash setup/install.sh
```
- Prepare datasets. Download and organize them:

```bash
bash setup/download_datasets.sh
```

Ensure your data directory has:

```
data/
├── div2k/
├── coco/
└── imagenet_subset/
```

Modify paths in `configs/steggnn.yaml`.
- Training

```bash
python train.py --config configs/steggnn.yaml
```
- Evaluation

```bash
python evaluate.py \
    --model_path checkpoints/best_model.pth \
    --dataset div2k
```
Outputs PSNR, SSIM, LPIPS, and steganalysis AUC metrics.
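For reference, PSNR (the main fidelity metric in the tables below) can be computed from the mean squared error between cover and stego images. This is a standalone sketch, not the repository's evaluation code:

```python
import numpy as np

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two uint8 images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM and LPIPS require structural and learned-perceptual models respectively; in practice they are typically computed with libraries such as scikit-image and the official `lpips` package.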
| Method | PSNR (dB) | SSIM | LPIPS |
|---|---|---|---|
| HiDDeN | 28.45 | 0.93 | 0.13 |
| Baluja | 28.47 | 0.93 | 0.13 |
| UDH | 34.35 | 0.94 | 0.02 |
| HiNet | 42.89 | 0.99 | 0.00 |
| StegGNN | 41.65 | 0.98 | 0.00 |
| Method | PSNR (dB) | SSIM | LPIPS |
|---|---|---|---|
| HiDDeN | 27.79 | 0.87 | 0.11 |
| Baluja | 28.25 | 0.91 | 0.13 |
| UDH | 33.30 | 0.94 | 0.04 |
| HiNet | 31.27 | 0.96 | 0.00 |
| StegGNN | 27.64 | 0.87 | 0.13 |
- StegExpose: AUC ≈ 0.57 (near-random)
- SRNet: more than 100 cover/stego pairs are required to exceed 95% detection accuracy
Refer to the paper for full experimental curves.
We evaluate StegGNN on three publicly available image datasets:
- DIV2K: used for both training and evaluation; contains 800 high-resolution images in the training set. During training, we randomly crop 256×256 patches from these images, with horizontal and vertical flipping for data augmentation.
- COCO: a subset of 1000 cover-secret image pairs is randomly sampled from the COCO dataset for testing.
- ImageNet: a subset of 1000 cover-secret image pairs is randomly sampled from ImageNet for evaluation.
All evaluation images are resized to 256×256 using bilinear interpolation so that cover and secret images share the same dimensions.
Organize datasets under the `data/` directory as follows:

```
data/
├── div2k/
│   ├── train/
│   └── val/
├── coco/
│   └── images/
└── imagenet_subset/
    └── images/
```
- You must manually download the datasets from their official websites:
- DIV2K: https://data.vision.ee.ethz.ch/cvl/DIV2K/
- COCO: https://cocodataset.org/
- ImageNet: http://www.image-net.org/
Ensure that all datasets are resized or preprocessed to 256×256 resolution before training or evaluation.
This repository builds upon several foundational works in deep image steganography and graph neural networks.
We acknowledge the following open-source projects for providing baselines, architectural inspiration, or tools:
- HiDDeN (ECCV 2018) for the CNN-based autoencoder framework.
- HiNet (ICCV 2021) for invertible neural network design and comparison baselines.
- UDH (NeurIPS 2020) for pioneering cover-agnostic steganography with deep learning.
- StegExpose for statistical steganalysis.
- SRNet for deep learning-based steganalysis.
We also thank the authors of DIV2K, COCO, and ImageNet for making their datasets publicly available.
This implementation was developed for academic research purposes only.
Please cite the paper if you use this code:
```bibtex
@inproceedings{steggnn2025,
  title     = {StegGNN: Learning Graphical Representation for Image Steganography},
  author    = {Anonymous},
  booktitle = {The IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```
For questions, please open a GitHub issue.
Authors: