Commit 82f1686

Merge pull request #79 from mahmoodlab/docs
Add readthedocs documentation to Trident
2 parents d82fea1 + a86266d commit 82f1686

File tree

19 files changed: +389 −10 lines


.readthedocs.yaml

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"

python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .

sphinx:
  configuration: docs/conf.py

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # 🔱 Trident

 [arXiv](https://arxiv.org/pdf/2502.06750) | [Blog](https://www.linkedin.com/pulse/announcing-new-open-source-tools-accelerate-ai-pathology-andrew-zhang-loape/?trackingId=pDkifo54SRuJ2QeGiGcXpQ%3D%3D) | [Cite](https://github.com/mahmoodlab/trident?tab=readme-ov-file#reference)
-| [License](https://github.com/mahmoodlab/trident?tab=License-1-ov-file)
+| [Documentation](https://trident-docs.readthedocs.io/en/latest/) | [License](https://github.com/mahmoodlab/trident?tab=License-1-ov-file)

 Trident is a toolkit for large-scale whole-slide image processing.
 This project was developed by the [Mahmood Lab](https://faisal.ai/) at Harvard Medical School and Brigham and Women's Hospital. This work was funded by NIH NIGMS R35GM138216.

docs/Makefile

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/_static/lab_logo.svg

Lines changed: 1 addition & 0 deletions

docs/_static/trident_crop.jpg

505 KB

docs/api.rst

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
API Reference
=============

This section documents the **public API** of TRIDENT.

.. contents::
   :local:
   :depth: 2


Trident
-------

Core of TRIDENT with `Processor` and WSI building.

.. automodule:: trident
   :members:
   :undoc-members:
   :inherited-members:
   :show-inheritance:


Segmentation Models
-------------------

Semantic segmentation models for tissue vs. background detection and filtering.

.. automodule:: trident.segmentation_models
   :members:
   :undoc-members:


Patch Encoders
--------------

Factory for loading patch-level encoder models.

.. automodule:: trident.patch_encoder_models
   :members:
   :undoc-members:


Slide Encoders
--------------

Factory for slide-level encoder models.

.. automodule:: trident.slide_encoder_models
   :members:
   :undoc-members:

docs/citation.rst

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
Citation & License
==================

If you use TRIDENT in your work, please cite:

.. code-block:: bibtex

   @article{zhang2025standardizing,
     title={Accelerating Data Processing and Benchmarking of AI Models for Pathology},
     author={Zhang, Andrew and Jaume, Guillaume and Vaidya, Anurag and Ding, Tong and Mahmood, Faisal},
     journal={arXiv preprint arXiv:2502.06750},
     year={2025}
   }

   @article{vaidya2025molecular,
     title={Molecular-driven Foundation Model for Oncologic Pathology},
     author={Vaidya, Anurag and Zhang, Andrew and Jaume, Guillaume and ...},
     journal={arXiv preprint arXiv:2501.16652},
     year={2025}
   }

License
-------
Released under CC-BY-NC-ND 4.0. Academic use only.

Funding
-------
Supported by NIH NIGMS R35GM138216.

docs/conf.py

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('./../'))


# -- Project information -----------------------------------------------------

project = 'TRIDENT'
copyright = '2025, Guillaume Jaume'
author = 'Guillaume Jaume'

# The full version, including alpha/beta/rc tags
release = 'v0.1.1'

# HTML style
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
html_logo = '_static/lab_logo.svg'
html_theme_options = {
    "sidebar_hide_name": True,
}

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'sphinx.ext.napoleon',  # For NumPy or Google-style docstrings
    "sphinx_design",
]
autosummary_generate = True

autoclass_content = 'both'  # Shows class-level and __init__ docstring
napoleon_include_init_with_doc = True  # for Google/NumPy-style docstrings

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

html_context = {
    "display_github": True,
    "github_user": "guillaumejaume",
    "github_repo": "TRIDENT",
    "github_version": "docs",
    "conf_py_path": "/docs/",
}

docs/faq.rst

Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
Frequently Asked Questions
==========================

.. dropdown:: **How do I extract embeddings from legacy CLAM coordinates?**

   Use the `--coords_dir` flag to pass CLAM-style patch coordinates:

   .. code-block:: bash

      python run_batch_of_slides.py --task feat --wsi_dir wsis --job_dir legacy_dir --coords_dir extracted_coords --patch_encoder uni_v1


.. dropdown:: **My WSIs have no microns-per-pixel (MPP) or magnification metadata. What should I do?**

   PNGs and JPEGs do not store MPP metadata in the file itself. If you're working with such formats, passing a CSV via `--custom_list_of_wsis` is **required**. This CSV should include at least two columns: `wsi` and `mpp`.

   Example:

   .. code-block:: csv

      wsi,mpp
      TCGA-AJ-A8CV-01Z-00-DX1_1.png,0.25
      TCGA-AJ-A8CV-01Z-00-DX1_2.png,0.25
      TCGA-AJ-A8CV-01Z-00-DX1_3.png,0.25

   If you're using OpenSlide-readable formats (e.g., `.svs`, `.tiff`), this CSV is optional, but you can still use it to:

   - Restrict processing to a specific subset of slides
   - Override incorrect or missing MPP metadata


.. dropdown:: **I want to skip patches on holes.**

   By default, TRIDENT includes all tissue patches (including holes). Use `--remove_holes` to exclude them. This is not recommended, as "holes" often help define the tissue microenvironment.
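The `--custom_list_of_wsis` CSV above can be generated with a short script. A minimal sketch, assuming a hypothetical folder `wsis/` of PNG tiles, a uniform scan resolution of 0.25 MPP, and an output name `custom_wsis.csv` (all illustrative, not fixed by TRIDENT):

```python
import csv
from pathlib import Path

# Illustrative assumptions: a folder of PNG tiles and one known resolution.
wsi_dir = Path("wsis")
known_mpp = 0.25

# One row per image, matching the two required columns: `wsi` and `mpp`.
rows = [{"wsi": p.name, "mpp": known_mpp} for p in sorted(wsi_dir.glob("*.png"))]

with open("custom_wsis.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["wsi", "mpp"])
    writer.writeheader()
    writer.writerows(rows)
```

If slides were scanned at different resolutions, replace the single `known_mpp` constant with a per-file lookup.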

docs/index.rst

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
.. image:: _static/trident_crop.jpg
   :align: right
   :width: 220px

Welcome to **TRIDENT**!
=======================

**TRIDENT** is a scalable toolkit for **large-scale whole-slide image (WSI) processing**, developed at the `Mahmood Lab <https://mahmoodlab.org>`_ at **Harvard Medical School** and **Brigham and Women's Hospital**.

🚀 **What TRIDENT offers:**

- **Tissue vs. background segmentation** for H&E, IHC, special stains, and artifact removal
- **Patch-level feature extraction** using 20+ foundation models
- **Slide-level feature extraction** via 5+ pretrained model backbones
- Native support for **OpenSlide**, **CuCIM**, and **PIL-compatible** formats

Explore the **end-to-end pipeline**, from segmentation to slide-level representation, all powered by the latest **foundation models** for computational pathology.

----

.. toctree::
   :maxdepth: 2
   :caption: 📚 Contents

   installation
   quickstart
   tutorials
   api
   faq
   citation

.. note::
   🧪 This project is supported by **NIH NIGMS R35GM138216** and is under active development by the Mahmood Lab.

docs/installation.rst

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
Installation
============

Create a fresh environment:

.. code-block:: bash

   conda create -n "trident" python=3.10
   conda activate trident

Clone the repository:

.. code-block:: bash

   git clone https://github.com/mahmoodlab/trident.git && cd trident

Install the package locally:

.. code-block:: bash

   pip install -e .

.. warning::
   Some pretrained models require additional dependencies. TRIDENT will guide you via error messages when needed.
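After `pip install -e .`, a quick sanity check confirms the package resolves in the active environment. A minimal sketch; only the import name `trident` is taken from the docs above (it matches the `automodule:: trident` directives in `docs/api.rst`):

```python
import importlib.util

def check_install(pkg: str) -> bool:
    """Return True if the named package is importable in this environment."""
    return importlib.util.find_spec(pkg) is not None

# Expected to print True once the editable install above has succeeded.
print(check_install("trident"))
```

Using `find_spec` rather than a bare `import` avoids executing the package's import-time code just to test its presence.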

docs/make.bat

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd

docs/quickstart.rst

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
Quickstart
==========

🚀 Process a full directory of WSIs:

.. code-block:: bash

   python run_batch_of_slides.py --task all --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256

🧪 Test a single WSI:

.. code-block:: bash

   python run_single_slide.py --slide_path ./wsis/sample.svs --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256

👣 Or go step-by-step:

- Tissue segmentation
- Patch extraction
- Feature extraction

Tissue Segmentation
-------------------
Segment WSIs into tissue vs. background with:

.. code-block:: bash

   python run_batch_of_slides.py --task seg --wsi_dir ./wsis --job_dir ./trident_processed --segmenter hest --remove_artifacts

Outputs: GeoJSONs, contours, thumbnails.

Patch Extraction
----------------
Extract tissue patches at the desired magnification:

.. code-block:: bash

   python run_batch_of_slides.py --task coords --wsi_dir ./wsis --job_dir ./trident_processed --mag 20 --patch_size 256

Outputs: Patch coordinates and visualizations.

Patch Feature Extraction
------------------------
Embed patches using any supported foundation model:

.. code-block:: bash

   python run_batch_of_slides.py --task feat --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1

Slide Feature Extraction
------------------------
Embed entire slides via models like TITAN or GigaPath:

.. code-block:: bash

   python run_batch_of_slides.py --task feat --slide_encoder titan --patch_size 512 --mag 20

docs/requirements.txt

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
sphinx
sphinx_design
sphinx_rtd_theme

docs/tutorials.rst

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
Tutorials
=========

Browse our interactive guides:

- `1-Step-by-Step-Patch-Feature-Extraction-with-Trident.ipynb <https://github.com/mahmoodlab/TRIDENT/blob/main/tutorials/1-Step-by-Step-Patch-Feature-Extraction-with-Trident.ipynb>`_: Guided whole-slide image processing.
- `2-Using-Trident-With-Your-Custom-Patch-Encoder.ipynb <https://github.com/mahmoodlab/TRIDENT/blob/main/tutorials/2-Using-Trident-With-Your-Custom-Patch-Encoder.ipynb>`_: Using Trident with a custom patch encoder.
- `3-Training-a-WSI-Classification-Model-with-ABMIL-and-Heatmaps.ipynb <https://github.com/mahmoodlab/TRIDENT/blob/main/tutorials/3-Training-a-WSI-Classification-Model-with-ABMIL-and-Heatmaps.ipynb>`_: Training an ABMIL model with attention heatmaps.
