feat(SQO): Implement RPC failover, retry logic and notification on RPC provider rotation. #4


Merged · 7 commits · Jun 13, 2025
9 changes: 5 additions & 4 deletions .github/workflows/pr-check.yml
@@ -18,15 +18,16 @@ jobs:
       fetch-depth: 0

   - name: Validate PR requirements
+    env:
+      PR_TITLE: "${{ github.event.pull_request.title }}"
+      PR_BODY: "${{ github.event.pull_request.body }}"
     run: |
-      PR_TITLE="${{ github.event.pull_request.title }}"
-      if [[ ${#PR_TITLE} -lt 1 ]]; then
+      if [[ -z "$PR_TITLE" ]]; then
         echo "PR title cannot be empty"
         exit 1
       fi

-      PR_BODY="${{ github.event.pull_request.body }}"
-      if [[ ${#PR_BODY} -lt 1 ]]; then
+      if [[ -z "$PR_BODY" ]]; then
         echo "PR description cannot be empty"
         exit 1
       fi
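This change moves the PR title and body out of inline `${{ ... }}` interpolation and into the step's `env:` block, so untrusted PR text is no longer expanded directly inside the shell script, and the emptiness checks now use `-z` on the environment variables. Purely for illustration, a hypothetical Python equivalent of the same check (the function name and entry point are assumptions; the repository performs this validation in bash inside the workflow):

```python
import os
import sys


def validate_pr(title: str, body: str) -> None:
    """Fail if the PR title or description is empty (illustrative only)."""
    if not title:
        sys.exit("PR title cannot be empty")
    if not body:
        sys.exit("PR description cannot be empty")


if __name__ == "__main__":
    # Values come from the environment, mirroring the workflow's `env:` block.
    validate_pr(os.environ.get("PR_TITLE", ""), os.environ.get("PR_BODY", ""))
```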
12 changes: 4 additions & 8 deletions README.md
@@ -127,19 +127,15 @@ bandit -r src/

 ## TODO List (only outstanding TODOs)

-### 1. Production Readiness
-- [ ] Check error recovery mechanisms to see if they could be improved (RPC failover, retry logic)
-- [ ] Verify health check endpoints or processes (Docker healthcheck)
-
-### 2. Testing
+### 1. Testing
 - [ ] Create unit tests for all components
 - [ ] Create integration tests for the entire pipeline
 - [ ] Security review of code and dependencies

-### 3. Documentation
+### 2. Documentation
 - [ ] Documentation of all major components
 - [ ] Document operational procedures

-### 4. Optimization
+### 3. Optimization
 - [ ] Optimize dependencies and container setup
-- [ ] Ensure unused files, functions & dependencies are removed from codebase
+- [ ] Ensure unused files, functions & dependencies are removed from codebase
8 changes: 4 additions & 4 deletions config.toml.example
@@ -6,10 +6,10 @@
 # =============================================================================

 [bigquery]
-BIGQUERY_LOCATION_ID = "US"
-BIGQUERY_PROJECT_ID = "graph-mainnet"
-BIGQUERY_DATASET_ID = "internal_metrics"
-BIGQUERY_TABLE_ID = "metrics_indexer_attempts"
+BIGQUERY_LOCATION_ID = ""
+BIGQUERY_PROJECT_ID = ""
+BIGQUERY_DATASET_ID = ""
+BIGQUERY_TABLE_ID = ""

 [blockchain]
 BLOCKCHAIN_CONTRACT_ADDRESS = ""
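The example config now ships the `[bigquery]` keys empty rather than pointing at a specific project, so each deployment has to fill them in. As a sketch only, here is one way such values could be loaded and validated with Python's standard-library `tomllib`; the loader function and error handling are assumptions, not the project's actual code:

```python
import tomllib  # standard library in Python 3.11+

REQUIRED_BIGQUERY_KEYS = (
    "BIGQUERY_LOCATION_ID",
    "BIGQUERY_PROJECT_ID",
    "BIGQUERY_DATASET_ID",
    "BIGQUERY_TABLE_ID",
)


def load_bigquery_config(path: str = "config.toml") -> dict:
    """Load config.toml and ensure the [bigquery] keys are filled in (illustrative)."""
    with open(path, "rb") as f:  # tomllib requires a binary file handle
        config = tomllib.load(f)
    bigquery = config.get("bigquery", {})
    missing = [key for key in REQUIRED_BIGQUERY_KEYS if not bigquery.get(key)]
    if missing:
        raise ValueError(f"config.toml [bigquery] is missing values for: {', '.join(missing)}")
    return bigquery
```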
2 changes: 1 addition & 1 deletion scripts/ruff_check_format_assets.sh
@@ -28,7 +28,7 @@ find src tests scripts -name "*.py" -print0 | xargs -0 python3 scripts/custom_fo

 # Show remaining issues (mainly line length issues that need manual intervention)
 echo -e "\n\nRemaining issues that need manual attention:"
-ruff check src tests scripts --select E501 --statistics
+ruff check src tests scripts --select E501

 echo "Linting/formatting complete! All auto-fixable issues have been resolved."
 echo "Manually review and fix any remaining line length issues if desired."
3 changes: 1 addition & 2 deletions src/models/bigquery_data_access_provider.py
@@ -3,7 +3,6 @@
 """

 import logging
-import socket
 from datetime import date
 from typing import cast

@@ -40,7 +39,7 @@ def __init__(
         self.max_blocks_behind = max_blocks_behind


-    @retry_with_backoff(max_attempts=10, min_wait=1, max_wait=60, exceptions=(ConnectionError, socket.timeout))
+    @retry_with_backoff(max_attempts=10, min_wait=1, max_wait=60)
     def _read_gbq_dataframe(self, query: str) -> DataFrame:
         """
         Execute a read query on Google BigQuery and return the results as a pandas DataFrame.
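Dropping the explicit `exceptions=(ConnectionError, socket.timeout)` argument lets the decorator fall back to its default retry conditions and removes the need for the `socket` import. For orientation, a minimal sketch of what a `retry_with_backoff(max_attempts, min_wait, max_wait)` decorator with exponential backoff and a default `exceptions` parameter can look like; this is an illustrative assumption, not the project's actual implementation:

```python
import functools
import logging
import random
import time

logger = logging.getLogger(__name__)


def retry_with_backoff(max_attempts=10, min_wait=1, max_wait=60, exceptions=(Exception,)):
    """Retry the wrapped function with exponential backoff and jitter (sketch only)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    if attempt == max_attempts:
                        raise  # Out of attempts: surface the last error.
                    # Exponential backoff capped at max_wait, plus a little jitter.
                    wait = min(max_wait, min_wait * (2 ** (attempt - 1)))
                    wait += random.uniform(0, wait * 0.1)
                    logger.warning(
                        "Attempt %d/%d failed (%s); retrying in %.1fs",
                        attempt, max_attempts, exc, wait,
                    )
                    time.sleep(wait)
        return wrapper
    return decorator
```

Assuming the real decorator defines a similarly broad default for `exceptions`, call sites such as `_read_gbq_dataframe` can omit the exception tuple and still retry transient BigQuery read failures.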