WIBE: Watermarks for generated Images – Benchmarking & Evaluation

WIBE is a modular and extensible framework for automated testing of invisible image watermarking methods under various attack scenarios. The system is designed to support research and development of robust watermarking techniques by enabling systematic evaluation through a customizable processing pipeline.

The system architecture consists of a sequence of configurable processing stages.

``WIBE schema``
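The stage sequence is easiest to picture as a small Python loop. The sketch below is illustrative only and uses made-up names (run_pipeline and the toy embed/attack/detect stages); it is not the wibench API, just the general idea of data flowing through configurable stages.

# Illustrative sketch only: the general idea of a configurable stage pipeline.
# Names here are hypothetical and do not correspond to the wibench API.
from typing import Any, Callable

Stage = Callable[[Any], Any]  # each stage transforms the item passed through the pipeline

def run_pipeline(stages: list[Stage], item: Any) -> Any:
    """Apply the configured stages in order, e.g. embed -> attack -> detect -> metrics."""
    for stage in stages:
        item = stage(item)
    return item

result = run_pipeline(
    [
        lambda x: {**x, "watermarked": True},  # stand-in for an embedding stage
        lambda x: {**x, "attack": "jpeg"},     # stand-in for an attack stage
        lambda x: {**x, "detected": True},     # stand-in for a detection/metric stage
    ],
    {"image": "example.png"},
)
print(result)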

Key features

  • Modularity and extensibility through a plugin-based architecture (see the sketch after this list)
  • Reproducibility ensured by YAML-configured experiments
  • Usability with a simple command-line interface
  • Flexible persistence through multiple storage backends, including files and a ClickHouse database
  • Transparency via real-time visual feedback
  • Scalability to run experiments on clusters
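The plugin-based architecture mentioned above typically boils down to a registry that maps names (as used in the YAML configs) to implementations. The sketch below is a generic illustration of that pattern with invented names (ATTACK_REGISTRY, register_attack); it is not the actual wibench plugin interface.

# Generic plugin-registry pattern; names are hypothetical, not the wibench interface.
from typing import Callable, Dict

ATTACK_REGISTRY: Dict[str, Callable] = {}

def register_attack(name: str):
    """Register an attack under a string key so a YAML config can select it by name."""
    def wrapper(fn: Callable) -> Callable:
        ATTACK_REGISTRY[name] = fn
        return fn
    return wrapper

@register_attack("identity")
def identity_attack(image):
    # Placeholder attack that returns the image unchanged.
    return image

# An experiment runner would look up the attack named in the config:
print(ATTACK_REGISTRY["identity"]("image-bytes"))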

Quick start

To evaluate the implemented watermarking algorithms and the attacks on them, follow the step-by-step procedure below.

  1. Clone the repository and navigate to its directory (all subsequent commands should be run from this location):
git clone https://github.com/ispras/wibe.git
cd wibe
  2. Update the submodules:
git submodule update --init --recursive
  3. Create and activate a virtual environment (activation differs between operating systems; on Linux/macOS, run source venv/bin/activate):
python -m venv venv
  4. Download the pre-trained model weights:
(venv) python download_models.py
  5. Install the dependencies:
(venv) python install_requirements.py
  6. Set the HF_TOKEN environment variable to your HuggingFace token (see HuggingFace Authentication Setup for details), then authenticate:
(venv) python huggingface_login.py
  7. All set! Run the benchmark, passing the path to your configuration file via the required --config parameter:
(venv) python -m wibench --config configs/trustmark_demo.yml -d
  8. Once the computations complete, you can view the watermarked images and explore interactive charts for different combinations of watermarking algorithms, attacks, and computed performance metrics.

Below, from left to right, are the original image, the image watermarked with StegaStamp, and the watermarked image after the FLUX Regeneration attack.

``Original, watermarked, and attacked images``

Below are the same original and watermarked images, along with their difference.

``Original and watermarked images, and their difference``
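Such a difference image can be reproduced with a few lines of NumPy/Pillow. The snippet below is a minimal sketch, not part of the repository; the file names original.png and watermarked.png are placeholders, and the residual is amplified tenfold so the otherwise invisible perturbation becomes visible.

# Minimal sketch (not part of the repository): visualize the watermark residual.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.int16)
watermarked = np.asarray(Image.open("watermarked.png").convert("RGB"), dtype=np.int16)

residual = np.abs(original - watermarked)                    # per-pixel difference
amplified = np.clip(residual * 10, 0, 255).astype(np.uint8)  # scale for visibility

Image.fromarray(amplified).save("difference.png")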

To explore an interactive wind rose chart of the average TPR@0.1%FPR for all algorithms and attacks evaluated so far, run the following command:

(venv) python make_plots.py --results_dir path_to_results_directory

Below is an average TPR@0.1%FPR chart for 7 algorithms under different types of attacks (evaluated on 300 images from the DiffusionDB dataset).

``Average TPR@0.1%FPR for 7 algorithms``
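For reference, TPR@0.1%FPR is the true-positive rate of the watermark detector at the score threshold where the false-positive rate on non-watermarked images is 0.1%. The snippet below shows a common way to compute it from raw detector scores; the Gaussian scores are synthetic and purely for illustration, and wibench computes the metric internally.

# Illustrative computation of TPR at a fixed FPR from detector scores.
import numpy as np

def tpr_at_fpr(pos_scores, neg_scores, target_fpr=1e-3):
    """Pick the threshold that keeps the FPR on clean images at the target,
    then measure the TPR on watermarked images at that threshold."""
    threshold = np.quantile(neg_scores, 1.0 - target_fpr)
    return float(np.mean(pos_scores > threshold))

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 10_000)  # synthetic scores for watermarked images
neg = rng.normal(0.0, 1.0, 10_000)  # synthetic scores for clean images
print(f"TPR@0.1%FPR = {tpr_at_fpr(pos, neg):.3f}")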

Documentation

See the full documentation here.

Tutorial video

Watch our video tutorial here.

About

An Extensible Open Source Framework for Evaluating Imperceptibility and Robustness of Digital Watermarks for Generated Images
