Commit c3302fb

refactor: standardize imports and enhance logging
Major improvements across codebase:

Import Standardization:
- Convert relative imports to absolute (AGENTS.md compliance)
- src/batch/batch_orchestrator.py: from ..pipelines → from pipelines
- src/api/background_tasks.py: from .. → from api/storage
- src/worker/job_processor.py: relative → absolute imports
- Prepares for package namespace migration (v1.3.0)

Logging Enhancements (src/utils/logging_config.py):
- Add capture_warnings parameter (route warnings.warn to logging)
- Add capture_stdstreams parameter (capture print() calls)
- Attach handlers to third-party loggers (torch, ultralytics, etc.)
- Route py.warnings into log handlers for visibility
- StreamToLogger class for stdout/stderr redirection

Pipeline Import Resilience (batch_orchestrator.py):
- Individual try/catch per pipeline import
- Failure in one pipeline doesn't disable others
- Helpful error hints (e.g., missing libGL.so.1)
- Graceful degradation on import failures

Version Info Cleanup (src/version.py):
- Replace print() with logger.info() calls
- Consistent with structured logging approach
- No emoji/unicode (Windows compatibility)

Configuration Updates:
- .devcontainer/devcontainer.json: port/settings updates
- .github/workflows/ci-cd.yml: workflow adjustments
- Dockerfile.gpu: build improvements
- Dockerfile.oldgpu: legacy GPU support preserved

Documentation:
- docs/testing/testing_standards.md: updated guidelines
- docs/usage/pipeline_specs.md: spec updates
- tests/README.md: test documentation improvements

Debug Tools:
- scripts/debug_load_job.py: new job debugging utility
- scripts/test_logging.py: logging test improvements

Aligns with v1.3.0 roadmap goals:
- Package namespace preparation
- ASCII-safe console output
- Improved error diagnostics
- Graceful component failures
1 parent 6da5932 commit c3302fb
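The per-pipeline import resilience described in the commit message can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual code in `batch_orchestrator.py`; the registry contents, module paths, and hint text are hypothetical.

```python
import importlib
import logging

logger = logging.getLogger(__name__)

# Hypothetical pipeline registry; the real module paths in
# batch_orchestrator.py may differ.
PIPELINE_MODULES = {
    "scene_detection": "pipelines.scene_detection",
    "person_tracking": "pipelines.person_tracking",
    "face_analysis": "pipelines.face_analysis",
}


def load_available_pipelines() -> dict:
    """Import each pipeline independently so one failure doesn't disable the rest."""
    available = {}
    for name, module_path in PIPELINE_MODULES.items():
        try:
            available[name] = importlib.import_module(module_path)
        except ImportError as e:
            # Attach a helpful hint for known failure modes instead of aborting.
            hint = ""
            if "libGL.so.1" in str(e):
                hint = " (hint: install the OpenCV system libraries, e.g. libgl1-mesa-dri)"
            logger.warning("Pipeline %r unavailable: %s%s", name, e, hint)
    return available
```

The key design point is that each `import` gets its own `try/except`, so a missing native library for one pipeline degrades only that pipeline.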

File tree

17 files changed: +309 −104 lines changed

.devcontainer/devcontainer.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@
     "all"
   ],
   "features": {},
-  "postCreateCommand": "uv sync && uv sync --extra dev && uv pip install \"torch==2.8.0+cu130\" \"torchvision==0.21.0+cu130\" \"torchaudio==2.8.0+cu130\" --index-url https://download.pytorch.org/whl/cu130 && uv run pre-commit install",
+  "postCreateCommand": "uv sync && uv sync --extra dev && uv pip install \"torch==2.8.0+cu124\" \"torchvision==0.21.0+cu124\" \"torchaudio==2.8.0+cu124\" --index-url https://download.pytorch.org/whl/cu124 && uv run pre-commit install",
   "customizations": {
     "vscode": {
       "settings": {
```

.github/workflows/ci-cd.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -34,7 +34,7 @@ jobs:
       if: matrix.os == 'ubuntu-latest'
       run: |
         sudo apt-get update
-        sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender-dev libglib2.0-0
+        sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender1 libglib2.0-0
         sudo apt-get install -y cmake build-essential

     - name: Install system dependencies (macOS)
@@ -101,7 +101,7 @@ jobs:
     - name: Install system dependencies
      run: |
        sudo apt-get update
-       sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender-dev libglib2.0-0
+       sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender1 libglib2.0-0
        sudo apt-get install -y cmake build-essential

    - name: Install dependencies
@@ -141,7 +141,7 @@ jobs:
    - name: Install system dependencies
      run: |
        sudo apt-get update
-       sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender-dev libglib2.0-0
+       sudo apt-get install -y ffmpeg libsm6 libxext6 libxrender1 libglib2.0-0
        sudo apt-get install -y cmake build-essential

    - name: Install dependencies
```

Dockerfile.gpu

Lines changed: 4 additions & 8 deletions
```diff
@@ -5,12 +5,12 @@
 # docker build -f Dockerfile.gpu -t videoannotator:gpu .
 # docker run --gpus all --rm -p 8000:8000 -v ${PWD}/data:/app/data videoannotator:gpu

-FROM nvidia/cuda:13.0.1-runtime-ubuntu24.04
+FROM nvidia/cuda:12.6.0-runtime-ubuntu24.04

 SHELL ["/bin/bash","-lc"]
 RUN apt-get update && apt-get install -y \
     curl python3 python3-venv python3-pip git \
-    libgl1-mesa-glx libglib2.0-0 libsm6 libxext6 libxrender-dev libgomp1 \
+    libgl1-mesa-dri libglib2.0-0 libsm6 libxext6 libxrender1 libgomp1 \
     && rm -rf /var/lib/apt/lists/*

 # Ensure locale is generated so LANG=en_US.UTF-8 works inside the container
@@ -46,14 +46,10 @@ COPY . .
 RUN if [ "${SKIP_IMAGE_UV_SYNC}" != "true" ]; then uv sync --frozen --no-editable; else echo "[BUILD] Skipping uv sync at image build (SKIP_IMAGE_UV_SYNC=true)"; fi

 # Install CUDA PyTorch for GPU acceleration (override CPU version)
-RUN if [ "${SKIP_TORCH_INSTALL}" != "true" ]; then uv pip install "torch==2.8.0+cu130" "torchvision==0.21.0+cu130" "torchaudio==2.8.0+cu130" --index-url https://download.pytorch.org/whl/cu130; else echo "[BUILD] Skipping torch install at image build (SKIP_TORCH_INSTALL=true)"; fi
+RUN if [ "${SKIP_TORCH_INSTALL}" != "true" ]; then uv pip install "torch==2.8.0+cu126" "torchvision==0.23.0+cu126" "torchaudio==2.8.0+cu126" --index-url https://download.pytorch.org/whl/cu126; else echo "[BUILD] Skipping torch install at image build (SKIP_TORCH_INSTALL=true)"; fi

 # Verify GPU access (no model downloading needed!)
-RUN uv run python3 -c "\
-import torch; \
-print(f'[GPU BUILD] CUDA available: {torch.cuda.is_available()}'); \
-print(f'[GPU BUILD] PyTorch version: {torch.__version__}'); \
-print('[GPU BUILD] Production image ready - models will download on first use');"
+RUN uv run python3 -c "import torch; print(f'[GPU BUILD] CUDA available: {torch.cuda.is_available()}'); print(f'[GPU BUILD] PyTorch version: {torch.__version__}'); print('[GPU BUILD] Production image ready - models will download on first use')"

 # Set environment for production
 ENV PYTHONUNBUFFERED=1
```

Dockerfile.oldgpu

Lines changed: 81 additions & 0 deletions
```diff
@@ -0,0 +1,81 @@
+# VideoAnnotator Docker Image - Legacy GPU (GTX 10xx / CUDA 11.x)
+#
+# Purpose:
+# - Designed for older GPUs (e.g. GTX 1060) that are best matched with CUDA 11.x
+# - Keeps heavy steps optional (SKIP_IMAGE_UV_SYNC / SKIP_TORCH_INSTALL) so builders can
+#   choose to perform network operations at container runtime instead of at build time.
+#
+# Usage (build):
+#   docker build -f Dockerfile.oldgpu -t videoannotator:oldgpu .
+#
+# Usage (run):
+#   docker run --gpus all --rm -p 18011:18011 -v ${PWD}/data:/app/data videoannotator:oldgpu
+
+# Choose a CUDA 11.x runtime image compatible with older GPUs/drivers
+FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
+
+SHELL ["/bin/bash","-lc"]
+
+# Install core system packages and graphics libs commonly required by some pipelines
+RUN apt-get update && apt-get install -y \
+    curl python3 python3-venv python3-pip git git-lfs \
+    libgl1-mesa-dri libglib2.0-0 libsm6 libxext6 libxrender1 libgomp1 \
+    && rm -rf /var/lib/apt/lists/*
+
+# Ensure locale is generated so LANG=en_US.UTF-8 works inside the container
+RUN apt-get update && apt-get install -y locales \
+    && locale-gen en_US.UTF-8 \
+    && update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
+    && rm -rf /var/lib/apt/lists/*
+
+# Export UTF-8 locale for all processes
+ENV LANG=en_US.UTF-8
+ENV LC_ALL=en_US.UTF-8
+
+# Initialize Git LFS
+RUN git lfs install
+
+# uv package manager (project standard)
+RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+ENV PATH="/root/.local/bin:${PATH}"
+
+WORKDIR /app
+
+# Build args to allow skipping heavy network steps during image build
+ARG SKIP_IMAGE_UV_SYNC=true
+ARG SKIP_TORCH_INSTALL=true
+
+# Copy project files (exclude large model artifacts via .dockerignore)
+COPY . .
+
+# Copy local models / weights if you want them baked into the image (optional)
+# COPY models/ /app/models/
+# COPY weights/ /app/weights/
+
+# Install Python dependencies via uv (can be skipped at build time)
+RUN if [ "${SKIP_IMAGE_UV_SYNC}" != "true" ]; then uv sync --frozen --no-editable; else echo "[BUILD] Skipping uv sync at image build (SKIP_IMAGE_UV_SYNC=true)"; fi
+
+# Torch installation: keep this optional because CUDA and appropriate wheel sets can vary.
+# Provide build args to install a matching wheel at build time. Example values are
+# for CUDA 11.3: TORCH_WHEEL="torch==1.13.1+cu113 torchvision==0.14.1+cu113 torchaudio==0.13.1+cu113"
+ARG TORCH_WHEEL=""
+ARG TORCH_INDEX_URL="https://download.pytorch.org/whl/cu113"
+RUN if [ "${SKIP_TORCH_INSTALL}" != "true" ] && [ -n "${TORCH_WHEEL}" ]; then \
+    echo "[BUILD] Installing torch wheels: ${TORCH_WHEEL}"; \
+    uv pip install ${TORCH_WHEEL} --index-url ${TORCH_INDEX_URL}; \
+    else \
+    echo "[BUILD] Skipping torch install (SKIP_TORCH_INSTALL=${SKIP_TORCH_INSTALL}, TORCH_WHEEL set: ${TORCH_WHEEL:+yes})"; \
+    fi
+
+# Quick smoke-check (optional) - this will only run if uv and Python are available
+RUN uv run python3 -c "import sys; print('[OLDGPU BUILD] Python:', sys.version.splitlines()[0])"
+
+ENV PYTHONUNBUFFERED=1
+ENV CUDA_VISIBLE_DEVICES=0
+
+# Create directories for mounted volumes
+RUN mkdir -p /app/data /app/output /app/logs
+
+EXPOSE 18011
+
+CMD ["uv", "run", "python3", "api_server.py", "--log-level", "info", "--port", "18011"]
```

api_server.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -27,7 +27,9 @@

 def setup_logging(level: str = "INFO", logs_dir: str = "logs"):
     """Set up enhanced logging configuration."""
-    setup_videoannotator_logging(logs_dir=logs_dir, log_level=level)
+    # Capture Python warnings by default so compatibility warnings surface in logs.
+    setup_videoannotator_logging(logs_dir=logs_dir, log_level=level, capture_warnings=True,
+                                 capture_stdstreams=(level.upper() == "DEBUG"))


 def main():
     """Main entry point for the API server."""
```
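The `capture_stdstreams` option above relies on the `StreamToLogger` class mentioned in the commit message. A minimal sketch of the idea follows; this is an illustration of the common stdout-redirection pattern, not the actual class in `src/utils/logging_config.py`, and the logger name is hypothetical.

```python
import logging
import sys


class StreamToLogger:
    """File-like object that forwards writes to a logger.

    Sketch of the idea behind capture_stdstreams; the real class in
    logging_config.py may differ in naming and behavior.
    """

    def __init__(self, logger: logging.Logger, level: int = logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, message: str) -> None:
        # Emit one log record per non-empty line so multi-line prints stay readable.
        for line in message.rstrip().splitlines():
            if line.strip():
                self.logger.log(self.level, line)

    def flush(self) -> None:
        pass  # Nothing is buffered; required to satisfy the file-like interface.


# Redirect print() output into the logging system, then restore.
logger = logging.getLogger("videoannotator.stdout")
sys.stdout = StreamToLogger(logger, logging.INFO)
print("hello from print()")  # now emitted as a log record, not raw stdout
sys.stdout = sys.__stdout__  # restore so normal console output still works
```

Restoring `sys.__stdout__` afterwards matters in practice; capturing streams unconditionally can hide interactive output, which is presumably why the commit only enables it at DEBUG level.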

docs/testing/testing_standards.md

Lines changed: 5 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -73,15 +73,15 @@ The VideoAnnotator test suite uses a modular approach with dedicated test files
7373
#### Running Modular Tests
7474
```bash
7575
# Run tests for a specific pipeline
76-
python -m pytest tests/test_face_pipeline.py -v
76+
uv run python -m pytest tests/test_face_pipeline.py -v
7777

7878
# Run specific test categories across all pipelines
79-
python -m pytest tests/ -m unit -v
80-
python -m pytest tests/ -m integration -v
81-
python -m pytest tests/ -m performance -v
79+
uv run python -m pytest tests/ -m unit -v
80+
uv run python -m pytest tests/ -m integration -v
81+
uv run python -m pytest tests/ -m performance -v
8282

8383
# Run all pipeline tests through the test runner
84-
python -m pytest tests/test_all_pipelines.py -v
84+
uv run python -m pytest tests/test_all_pipelines.py -v
8585
```
8686

8787
## Test File Standards

docs/usage/pipeline_specs.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -186,16 +186,16 @@ scene_detection:

 ```bash
 # Process single video with all pipelines
-python -m videoannotator process video.mp4
+uv run python -m videoannotator process video.mp4

 # Process specific pipeline
-python -m videoannotator process video.mp4 --pipeline person_tracking
+uv run python -m videoannotator process video.mp4 --pipeline person_tracking

 # Custom config
-python -m videoannotator process video.mp4 --config configs/high_performance.yaml
+uv run python -m videoannotator process video.mp4 --config configs/high_performance.yaml

 # Batch processing
-python -m videoannotator batch videos/ --output results/
+uv run python -m videoannotator batch videos/ --output results/
 ```

 ### Python API
````

scripts/debug_load_job.py

Lines changed: 24 additions & 0 deletions
```diff
@@ -0,0 +1,24 @@
+"""
+Quick script to load a job from storage and print full traceback on failure.
+Run with: uv run python scripts/debug_load_job.py <job_id>
+"""
+import sys
+import traceback
+from pathlib import Path
+
+# Ensure package import works from repo root
+sys.path.insert(0, str(Path(__file__).resolve().parents[1] / 'src'))
+
+from api.database import get_storage_backend
+
+JOB_ID = sys.argv[1] if len(sys.argv) > 1 else '52325a4e-71e8-4a22-b934-0c4836fd746e'
+
+print(f"Attempting to load job: {JOB_ID}")
+storage = get_storage_backend()
+try:
+    job = storage.load_job_metadata(JOB_ID)
+    print('Loaded job OK:', getattr(job, 'job_id', None))
+except Exception as e:
+    print('Exception occurred while loading job:')
+    traceback.print_exc()
+    raise
```

scripts/test_logging.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -22,8 +22,8 @@ def test_logging_system():
     print("\n[TEST 1] Testing basic logging setup...")

     try:
-        from src.utils.logging_config import setup_videoannotator_logging, get_logger
-        loggers = setup_videoannotator_logging(logs_dir="logs", log_level="INFO")
+        from src.utils.logging_config import setup_videoannotator_logging, get_logger
+        loggers = setup_videoannotator_logging(logs_dir="logs", log_level="INFO", capture_warnings=True)

         api_logger = get_logger("api")
         request_logger = get_logger("requests")
```

src/api/background_tasks.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -139,7 +139,8 @@ async def _process_cycle(self):
                 jobs_to_process.append(job)
                 self.processing_jobs.add(job_id)
             except Exception as e:
-                logger.error(f"Failed to load job {job_id}: {e}")
+                # Log full exception traceback to identify offending imports
+                logger.error(f"Failed to load job {job_id}: {e}", exc_info=True)

         # Start processing selected jobs
         for job in jobs_to_process:
```
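The effect of adding `exc_info=True` is that the formatted traceback is appended to the log record, so the log shows where the failure originated rather than just the exception message. A runnable illustration (the `load_job` stub and logger name are hypothetical stand-ins for the storage call in `background_tasks.py`):

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("videoannotator.tasks")


def load_job(job_id: str):
    # Stand-in for storage.load_job_metadata(); always fails for illustration,
    # mimicking the kind of import error the commit aims to diagnose.
    raise ImportError("libGL.so.1: cannot open shared object file")


try:
    load_job("demo-job")
except Exception as e:
    # exc_info=True makes the handler append the full traceback to the record,
    # which is what turns "Failed to load job" into an actionable log line.
    logger.error(f"Failed to load job demo-job: {e}", exc_info=True)
```

Without `exc_info=True`, only the one-line message reaches the log and the originating frame is lost.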
