traccuracy: Evaluate Cell Tracking Solutions


traccuracy provides a suite of benchmarking functions that can be used to evaluate cell tracking solutions against ground truth annotations. The goal of this library is to provide a convenient way to run rigorous evaluation and to document and consolidate the wide variety of metrics used in the field.

traccuracy can compute a comprehensive set of metrics for evaluating cell linking and division performance, as well as biologically meaningful metrics such as the number of correctly reconstructed lineages over N frames and cell cycle length accuracy. Because matching ground truth and predicted lineages is a crucial step in evaluation, traccuracy includes a number of algorithms for matching ground truth and predicted lineages, both with and without segmentation masks.

Learn more in the documentation or check out the source code.

Installation

pip install traccuracy

How It Works

The traccuracy library has three main components: loaders, matchers, and metrics.

Loaders load tracking graphs from other formats, such as the CTC format, into a TrackingGraph object. A TrackingGraph is a spatiotemporal graph backed by a networkx.DiGraph. Nodes represent a single cell at a given time point and are annotated with a time and a location. Edges point forward in time from a node representing a cell in time point t to the same cell or its daughter in frame t+1 (or beyond, to represent skip edges). Additional terminology is documented in the glossary. To load TrackingGraphs from a custom format, you will likely need to implement a loader; see the documentation for more information. Alternatively, you can initialize a TrackingGraph directly with a networkx.DiGraph, along with ArrayLike objects of segmentation masks if needed.
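The node-and-edge structure described above can be sketched with a plain dictionary-based graph. This is a stand-in for the networkx.DiGraph that backs a real TrackingGraph; the node IDs and attribute names here are illustrative, not traccuracy's API:

```python
# Minimal stand-in for a spatiotemporal tracking graph.
# Nodes carry a time point "t" and a location ("x", "y");
# edges point forward in time, from a cell to itself or its daughters.
nodes = {
    "a_0": {"t": 0, "x": 10.0, "y": 12.0},  # cell a at frame 0
    "a_1": {"t": 1, "x": 11.0, "y": 12.5},  # same cell at frame 1
    "b_2": {"t": 2, "x": 9.0, "y": 11.0},   # daughter cells after division
    "c_2": {"t": 2, "x": 13.0, "y": 14.0},
}
edges = [
    ("a_0", "a_1"),  # same cell across consecutive frames
    ("a_1", "b_2"),  # division: a splits into b and c
    ("a_1", "c_2"),
]

# Every edge must point forward in time.
assert all(nodes[u]["t"] < nodes[v]["t"] for u, v in edges)

def successors(node):
    """Return the children of a node (cells it links to in later frames)."""
    return [v for u, v in edges if u == node]

# A node with two successors marks a division event.
dividing = [n for n in nodes if len(successors(n)) == 2]
print(dividing)  # ['a_1']
```

A skip edge would simply connect nodes whose time points differ by more than one frame; the forward-in-time assertion above still holds for it.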

Matchers take a ground truth and a predicted TrackingGraph with optional segmentation masks and match the nodes and edges to allow evaluation to occur. A list of matchers is available here.
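To illustrate the idea of point-based matching, ground truth and predicted cell centroids can be paired by distance. This is a simplified greedy scheme for intuition only, not the algorithm PointMatcher actually implements, and all names and thresholds below are made up:

```python
import math

def greedy_point_match(gt_points, pred_points, threshold=5.0):
    """Greedily pair GT and predicted points by increasing distance.

    Each point is matched at most once; pairs farther apart than
    `threshold` are left unmatched. Returns a list of (gt_id, pred_id).
    """
    # All candidate pairs within the threshold, sorted by distance.
    candidates = sorted(
        (math.dist(gp, pp), gi, pi)
        for gi, gp in gt_points.items()
        for pi, pp in pred_points.items()
        if math.dist(gp, pp) <= threshold
    )
    matched, used_gt, used_pred = [], set(), set()
    for _, gi, pi in candidates:
        if gi not in used_gt and pi not in used_pred:
            matched.append((gi, pi))
            used_gt.add(gi)
            used_pred.add(pi)
    return matched

gt = {"g1": (10.0, 12.0), "g2": (30.0, 5.0)}
pred = {"p1": (11.0, 12.0), "p2": (50.0, 50.0)}
print(greedy_point_match(gt, pred))  # [('g1', 'p1')]
```

Here g2 and p2 go unmatched: any unmatched GT node becomes a False Negative and any unmatched predicted node a False Positive in the downstream error annotation.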

To compute metrics, traccuracy first annotates the matched graphs with error flags such as False Positive and False Negative. The annotated graph can be exported and used for visualization in other tools. Finally, metrics inspect the error annotations to report both error counts and summary statistics.
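The step from error counts to summary statistics can be shown in a few lines. This is a generic precision/recall/F1 computation over matched nodes, a sketch of the idea rather than traccuracy's reporting code:

```python
def summarize(n_gt, n_pred, n_matched):
    """Derive summary statistics from basic error counts.

    True positives are matched nodes; false negatives are unmatched
    ground truth nodes; false positives are unmatched predicted nodes.
    """
    tp = n_matched
    fn = n_gt - n_matched
    fp = n_pred - n_matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"TP": tp, "FP": fp, "FN": fn,
            "precision": precision, "recall": recall, "f1": f1}

stats = summarize(n_gt=100, n_pred=95, n_matched=90)
print(stats["precision"], stats["recall"])  # ~0.947 0.9
```

The same counting logic applies per error type: division errors, for example, are counted over matched parent nodes rather than over all nodes.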

The traccuracy library has a flexible Python API, shown in this example notebook. There is also a command-line interface for running standard CTC metrics, documented here.

from traccuracy import run_metrics
from traccuracy.loaders import load_ctc_data
from traccuracy.matchers import PointMatcher
from traccuracy.metrics import BasicMetrics, DivisionMetrics

# Load data into TrackingGraph objects
gt_data = load_ctc_data(
    "path/to/GT/TRA",
    "path/to/GT/TRA/man_track.txt",
    name="GT"
)
pred_data = load_ctc_data(
    "path/to/prediction",
    "path/to/prediction/track.txt",
    name="prediction"
)

# Match the graphs, annotate errors, and compute metrics
results, matched = run_metrics(
    gt_data=gt_data,
    pred_data=pred_data,
    matcher=PointMatcher(),
    metrics=[DivisionMetrics(), BasicMetrics()]
)

Implemented Metrics

Featured Works

If you use traccuracy in your own work, please let us know so that we can feature it here!
