Record E2E Test Run in Datadog

A GitHub Action that records End-to-End (E2E) test run data in Datadog for monitoring, analytics, and observability purposes. This action sends structured log data to Datadog's HTTP log intake API, capturing test execution details, repository information, and component references.
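
Under the hood the action delivers this payload with a plain HTTP request to the log intake endpoint. The snippet below is a minimal sketch of that kind of request, assuming Datadog's EU v2 logs intake API; the action's actual script and field set may differ.

# Sketch: send one structured log event to Datadog's EU logs intake (v2 API).
# Assumes DATADOG_API_KEY is exported; fields mirror the payload examples below.
curl -sS -X POST "https://http-intake.logs.datadoghq.eu/api/v2/logs" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DATADOG_API_KEY}" \
  -d '[{"message": "Agglayer E2E Test Run", "host": "github-actions-runners", "service": "agglayer/e2e", "run_status": "success"}]'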

Features

  • 📊 Centralized Logging: Send E2E test results directly to Datadog
  • 🔍 Rich Metadata: Capture test status, repository refs, and workflow details
  • 🚀 Easy Integration: Simple composite action that works in any workflow
  • 📈 Observability: Monitor test trends and performance across repositories
  • 🏷️ Component Tracking: Track specific versions of agglayer, aggkit, kurtosis, and e2e components
  • 🤖 AI-Powered Analysis: Get intelligent insights into test failures using ChatGPT, automatically included in Datadog logs

Usage

Basic Usage

name: E2E Tests
on: [push, pull_request]

jobs:
  e2e-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        
      - name: Run E2E tests
        id: e2e
        run: |
          # Your E2E test commands here
          echo "Test execution completed"
          
      - name: Record test run in Datadog
        uses: agglayer/gha-record-e2e-test-run@v1
        with:
          status: ${{ job.status }}
        env:
          DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}

Advanced Usage with Component References

- name: Record test run with component refs
  uses: agglayer/gha-record-e2e-test-run@v1
  with:
    host: "my-test-runner"
    status: ${{ job.status }}
    ref_agglayer: "v1.2.3"
    ref_aggkit: "main"
    ref_kurtosis: "feature/new-improvements"
    ref_e2e: "develop"
  env:
    DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}

Conditional Recording

- name: Record test results
  if: always()  # Run even if previous steps failed
  uses: agglayer/gha-record-e2e-test-run@v1
  with:
    status: ${{ steps.e2e.outcome }}
  env:
    DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}

With AI-Powered Failure Analysis

Same-Job Pattern (Optimal - Access to Local Test Files)

jobs:
  e2e-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      
      - name: Run E2E Tests
        run: |
          # Your test commands here
          # Tests output results to files like test-results.xml, junit.xml
          npm run test:e2e
      
      - name: Record results with AI analysis
        if: always()  # Run even if tests failed
        uses: agglayer/gha-record-e2e-test-run@v1
        with:
          status: ${{ job.status }}
          enable_ai_analysis: "true"
          openai_model: "gpt-4o-mini"  # Supported: gpt-4o-mini, gpt-4o, gpt-3.5-turbo, gpt-4-turbo
        env:
          DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Separate-Job Pattern (Common - Uses API for Log Access)

permissions:
  actions: read      # Required for AI analysis to access job logs
  contents: read

jobs:
  e2e-test:
    runs-on: ubuntu-latest
    steps:
      - name: Run E2E Tests
        run: npm run test:e2e
  
  record-results:
    if: always()
    needs: [e2e-test]
    runs-on: ubuntu-latest  
    steps:
      - name: Record results with AI analysis
        uses: agglayer/gha-record-e2e-test-run@v1
        with:
          status: ${{ needs.e2e-test.result }}
          enable_ai_analysis: "true"
          openai_model: "gpt-4o"  # Higher quality model for critical analysis
        env:
          DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Advanced Model Configuration

# Cost-effective for regular builds
- name: Record results (development)
  if: always()
  uses: agglayer/gha-record-e2e-test-run@v1
  with:
    status: ${{ job.status }}
    enable_ai_analysis: "true"
    openai_model: "gpt-4o-mini"  # Default: good quality, low cost
  env:
    DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

# High-quality analysis for production failures
- name: Record results (production)
  if: failure() && github.ref == 'refs/heads/main'
  uses: agglayer/gha-record-e2e-test-run@v1
  with:
    status: ${{ job.status }}
    enable_ai_analysis: "true"
    openai_model: "gpt-4o"  # Premium model for critical analysis
  env:
    DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Inputs

| Input | Description | Required | Default |
|-------|-------------|----------|---------|
| host | Host identifier for the test runner | No | github-actions-runners |
| status | Test status (e.g., success, failure, cancelled) | Yes | - |
| ref_agglayer | Git reference for the agglayer/agglayer repository | No | - |
| ref_aggkit | Git reference for the agglayer/aggkit repository | No | - |
| ref_kurtosis | Git reference for the 0xPolygon/kurtosis-cdk repository | No | - |
| ref_e2e | Git reference for the agglayer/e2e repository | No | - |
| enable_ai_analysis | Enable AI-powered failure analysis using ChatGPT | No | false |
| openai_model | OpenAI model to use for AI analysis (see supported models below) | No | gpt-4o-mini |

Environment Variables

Note: Composite actions cannot access secrets directly. You must pass secrets as environment variables from your workflow to the action.

| Variable | Description | Required |
|----------|-------------|----------|
| DATADOG_API_KEY | Your Datadog API key for log intake (use the DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN secret) | Yes |
| OPENAI_API_KEY | OpenAI API key for ChatGPT analysis (only needed if enable_ai_analysis is true) | No |

Permissions

For AI failure analysis to access job logs, your workflow needs specific permissions:

permissions:
  actions: read      # Required for AI analysis to fetch job logs
  contents: read     # Required for workflow access

Note: If you see HTTP 403 errors in the AI analysis logs, add these permissions to your workflow.

Without Permissions

AI analysis will still work using local context (test output files, system info) but won't be able to fetch detailed job logs from the GitHub API.

With Permissions

AI analysis gets access to full job logs for more comprehensive failure analysis.

Setup

1. Use the Organization-Level Secret

This action uses the DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN secret that is already configured at the organization level. You don't need to create or configure any additional secrets.

2. (Optional) Set up AI Analysis

To enable AI-powered failure analysis, you need to add an OpenAI API key:

  1. Create an OpenAI API Key:

    • Sign in to your OpenAI account and generate a new API key
  2. Add to GitHub Secrets:

    • In your repository, go to Settings → Secrets and variables → Actions
    • Click New repository secret
    • Name: OPENAI_API_KEY
    • Value: Your OpenAI API key
    • Click Add secret
  3. Choose Your Model (optional):

    Supported Models:

    | Model | Cost | Quality | Best For |
    |-------|------|---------|----------|
    | gpt-4o-mini | 💰 Lowest | ⭐⭐⭐ Good | Regular monitoring, frequent analysis |
    | gpt-4o | 💰💰💰 Higher | ⭐⭐⭐⭐⭐ Excellent | Critical failures, detailed analysis |
    | gpt-3.5-turbo | 💰💰 Low | ⭐⭐ Basic | Budget-constrained scenarios |
    | gpt-4-turbo | 💰💰💰 High | ⭐⭐⭐⭐ Very Good | Balance of speed and quality |

    Note: The action validates the model name and falls back to gpt-4o-mini if an unsupported model is specified.

    The default gpt-4o-mini provides the best balance of cost and quality for most use cases.
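
    For reference, the validation described in the note above amounts to something like the following sketch (illustrative shell only; the variable name is an assumption, not the action's actual code):

    # Sketch: keep a supported model, otherwise fall back to the default.
    case "${OPENAI_MODEL}" in
      gpt-4o-mini|gpt-4o|gpt-3.5-turbo|gpt-4-turbo) ;;   # supported, keep as-is
      *) echo "Unsupported model '${OPENAI_MODEL}', falling back to gpt-4o-mini"
         OPENAI_MODEL="gpt-4o-mini" ;;
    esac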

3. Use the Action

Add the action to your workflow file (.github/workflows/*.yml) as shown in the usage examples above.

Data Sent to Datadog

The action sends the following structured data to Datadog:

Successful Runs

{
  "host": "github-actions-runners",
  "level": "INFO",
  "message": "Agglayer E2E Test Run",
  "run_id": "12345678",
  "run_repository": "agglayer/my-repo",
  "run_status": "success",
  "run_url": "https://github.com/agglayer/my-repo/actions/runs/12345678",
  "ref_agglayer": "v1.2.3",
  "ref_aggkit": "main",
  "ref_e2e": "develop",
  "ref_kurtosis": "feature-branch",
  "service": "agglayer/e2e",
  "timestamp": 1672531200000,
  "workflow_file": "e2e-tests.yml"
}

Failed Runs with AI Analysis

{
  "host": "github-actions-runners", 
  "level": "INFO",
  "message": "Agglayer E2E Test Run",
  "run_id": "12345678",
  "run_repository": "agglayer/my-repo",
  "run_status": "failure",
  "run_url": "https://github.com/agglayer/my-repo/actions/runs/12345678",
  "ref_agglayer": "v1.2.3",
  "ref_aggkit": "main", 
  "ref_e2e": "develop",
  "ref_kurtosis": "feature-branch",
  "service": "agglayer/e2e",
  "timestamp": 1672531200000,
  "workflow_file": "e2e-tests.yml",
  "run_ai_report": "## Analysis Summary\nThe E2E tests failed due to a timeout in the service startup phase. The logs indicate that the agglayer service took longer than expected to become healthy...\n\n## Root Causes\n1. Network connectivity issues between services\n2. Resource constraints on the test runner\n\n## Recommended Actions\n1. Increase service startup timeout from 30s to 60s\n2. Add health check retries with exponential backoff\n3. Monitor resource usage during test execution"
}

AI-Powered Failure Analysis

When enable_ai_analysis is set to true and a workflow fails, the action automatically runs AI analysis before sending data to Datadog, ensuring the analysis is included in your centralized logs.
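
The ChatGPT call itself is an ordinary OpenAI chat-completions request. Below is a minimal sketch, assuming the standard endpoint and the default model; the action's real prompt and payload are more detailed.

# Sketch: ask the chat completions API to analyze a failure log excerpt.
# Assumes OPENAI_API_KEY is exported; the messages here are illustrative only.
curl -sS "https://api.openai.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You analyze failed E2E test runs and suggest root causes."},
      {"role": "user", "content": "Summarize the likely root cause of this log excerpt: ..."}
    ]
  }'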

🔍 What It Analyzes

  • Workflow context: Repository, run details, component versions, commit information
  • Event context: What triggered the workflow (push, PR, manual), actor, branch/ref
  • Run context: Retry attempts, timing, commit messages and authors
  • Failed job information: Job names, timing, conclusions
  • Log analysis: Recent log output from failed jobs
  • E2E-specific insights: Common patterns in end-to-end testing failures

🤖 AI Analysis Output

The ChatGPT analysis provides:

  1. Failure Summary: Concise explanation of what went wrong
  2. Root Cause Analysis: Potential underlying causes
  3. Actionable Solutions: Specific steps to fix the issue
  4. Systemic Insights: Patterns that might indicate broader problems

📊 Where to Find Analysis

  • Job Summary: Analysis appears in the GitHub Actions job summary
  • Console Output: Full analysis is also logged to the console
  • Datadog Logs: Analysis is included in the run_ai_report field for centralized monitoring and searchability
  • Datadog Dashboards: Create visualizations and alerts based on AI analysis patterns

🎯 Specialized for E2E Testing

The AI analysis focuses on common E2E testing issues:

  • Network connectivity problems
  • Service startup and health check failures
  • Test environment configuration issues
  • Resource constraints (memory, CPU, disk)
  • Race conditions and timing issues
  • Dependency version conflicts
  • Infrastructure provisioning problems

⚑ Efficient Analysis

The action optimizes performance by leveraging local information first:

🏠 Local Information Sources (No API Calls)

  • GitHub context: Workflow details, run info, commit data
  • Event payload: Trigger information, actor, branch details
  • Test output files: test-results.xml, junit.xml, pytest.log, e2e-results.json
  • Runner environment: OS, architecture, image version
  • System information: Disk usage, memory stats
  • System logs: Recent system events (when accessible)

🌐 API Fallback (When Local Info Unavailable)

  • Job logs: Only fetched when no local test outputs are found
  • Smart detection: Automatically chooses local vs API data sources

🧠 Analysis Priority

  1. Local test outputs (fastest, most detailed)
  2. System information (resource constraints, environment issues)
  3. Remote job logs (fallback when local data insufficient)
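
Putting these priorities together, the source selection boils down to logic along the following lines (a sketch that assumes the GitHub CLI is available for the API fallback; the action may use plain curl instead):

# Sketch: prefer local test outputs; otherwise fall back to the GitHub API.
CONTEXT=""
for f in test-results.xml junit.xml pytest.log e2e-results.json; do
  [ -f "$f" ] && CONTEXT="${CONTEXT}$(cat "$f")"
done

if [ -z "$CONTEXT" ]; then
  # Fallback: list the failed jobs of this run via the GitHub API
  # (requires the `actions: read` permission described above).
  CONTEXT="$(gh api "repos/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}/jobs" \
    --jq '[.jobs[] | select(.conclusion == "failure") | .name] | join(", ")')"
fi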

Monitoring and Analytics

Once configured, you can:

  • View Logs: Access logs in Datadog's Log Explorer with full context
  • Create Dashboards: Visualize test success rates, duration trends, and failure patterns
  • Set Alerts: Configure notifications for test failures or performance degradation
  • Filter by Components: Track specific versions and their test outcomes
  • Analyze Trends: Monitor test stability across different repository references
  • AI-Powered Insights:
    • Search and filter by AI analysis content in the run_ai_report field
    • Create alerts based on specific failure patterns identified by AI
    • Build dashboards showing common root causes and resolution trends
    • Track the effectiveness of fixes over time using AI recommendations
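
For example, a search against Datadog's Logs Search API that surfaces failed runs whose AI report mentions a timeout could look like the sketch below (the query string is hypothetical; adapt the attributes and site to your setup):

# Sketch: find failed E2E runs from the last 7 days whose AI report mentions "timeout".
# Assumes an API key plus an application key with log read access.
curl -sS -X POST "https://api.datadoghq.eu/api/v2/logs/events/search" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DATADOG_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
  -d '{"filter": {"query": "@run_status:failure @run_ai_report:*timeout*", "from": "now-7d", "to": "now"}}'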

Common Status Values

| Status | Description |
|--------|-------------|
| success | All tests passed |
| failure | One or more tests failed |
| cancelled | Test run was cancelled |
| skipped | Tests were skipped |

Troubleshooting

Missing DATADOG_API_KEY

Error: curl: (22) The requested URL returned error: 403 Forbidden

Solutions:

  • Ensure you're using the correct organization-level secret DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN
  • Verify you're passing the secret as an environment variable in your workflow (see examples above)
  • If the error persists, contact your organization administrator to verify the secret is properly configured

Secrets Not Available

Error: OpenAI API key not found or DATADOG_API_KEY is empty

Solution: Make sure you're passing secrets as environment variables in your workflow step:

- uses: agglayer/gha-record-e2e-test-run@v1
  with:
    # ... inputs
  env:  # ← This is required!
    DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY_GHA_RECORD_E2E_TEST_RUN }}
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Invalid JSON Format

Error: curl: (22) The requested URL returned error: 400 Bad Request

Solution: Check that all input values are properly formatted and don't contain special characters that could break the JSON structure.
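
If an input value (for example a long ref or commit message) contains quotes or newlines, it can break a hand-built JSON payload. A JSON-aware tool such as jq makes this easy to check; a small illustrative example (the variable and value are hypothetical):

# Sketch: safely embed an arbitrary value in JSON and verify that it parses.
REF_E2E='feature/"quoted"-branch'
PAYLOAD="$(jq -n --arg ref "$REF_E2E" '{message: "Agglayer E2E Test Run", ref_e2e: $ref}')"
echo "$PAYLOAD" | jq . >/dev/null && echo "payload is valid JSON"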

Network Issues

Error: curl: (6) Could not resolve host

Solution: Ensure the runner has internet access and can reach http-intake.logs.datadoghq.eu.

AI Analysis Not Running

Issue: AI analysis doesn't appear even when enabled

Solutions:

  • Ensure enable_ai_analysis is set to "true" (as a string)
  • Verify the OPENAI_API_KEY secret is properly configured
  • Check that the workflow status is failure - AI analysis only runs on failures
  • Confirm the runner has internet access to reach api.openai.com

OpenAI API Errors

Error: Failed to call ChatGPT API

Solutions:

  • Verify your OpenAI API key is valid and has sufficient credits
  • Check if you've exceeded your API rate limits
  • Ensure the API key has access to your chosen model (default: gpt-4o-mini)
  • Verify you're using a supported model: gpt-4o-mini, gpt-4o, gpt-3.5-turbo, gpt-4-turbo
  • Check the action logs for model validation messages
  • Try using a different model if the current one is unavailable

Releasing

This project includes a release.sh script to streamline the process of creating and publishing new releases. The script follows semantic versioning and automatically maintains major version tags.

How to Create a Release

  1. Ensure you're on the main branch with a clean working directory:

    git checkout main
    git pull origin main
  2. Run the release script:

    ./release.sh
  3. Follow the prompts:

    • The script will show the latest release tag
    • Enter a new version (e.g., v1.2.3) or press Enter to accept the suggested version
    • Confirm the release when prompted

What the Script Does

The release script automatically:

  1. Analyzes existing releases - Shows all major versions and their latest releases
  2. Validates your environment - Checks that you're in a git repository with a clean working directory
  3. Determines release strategy - Identifies whether you're releasing a new major version or patching an existing one
  4. Works from the correct branch - Automatically switches to main for new majors, or releases/v# for patches/minors
  5. Creates version tags - Creates both the specific version tag (e.g., v1.2.3) and updates the major version tag (e.g., v1)
  6. Manages release branches - Creates and maintains releases/v# branches for ongoing maintenance
  7. Pushes to GitHub - Automatically pushes all tags and branches to the remote repository
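
As a rough equivalent of the tagging step (illustrative commands only, shown for v1.2.3; the script's exact invocation may differ):

# Sketch: create the specific version tag, then move the floating major tag to it.
git tag -a v1.2.3 -m "Release v1.2.3"
git tag -fa v1 v1.2.3 -m "Update v1 to v1.2.3"
git push origin v1.2.3
git push --force origin v1   # the major tag moves between releases, so force-push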

Version Tags

Users can reference your action using different tag formats:

# Specific version (recommended for production)
- uses: agglayer/gha-record-e2e-test-run@v1.2.3

# Major version (gets latest v1.x.x automatically)
- uses: agglayer/gha-record-e2e-test-run@v1

# Latest code from main (not recommended for production)
- uses: agglayer/gha-record-e2e-test-run@main

After Running the Script

  1. Create a GitHub Release:

    • Go to your repository's Releases page
    • Click "Create a new release"
    • Select the tag created by the script
    • Add release notes describing the changes
  2. Verify the release by testing the action with the new version tag

Multi-Major Version Support

The release script properly supports maintaining multiple major versions:

Scenario 1: First Release

./release.sh
# No previous releases found
# Enter version: v1.0.0
# → Creates v1.0.0 tag and v1 tag pointing to it

Scenario 2: New Major Version

./release.sh
# Available major versions:
#   v1 → latest: v1.3.2 (✅ has release branch)
# Version suggestions:
#   v1: v1.3.3 (patch) or v1.4.0 (minor)
#   New major: v2.0.0

# Enter version: v2.0.0
# → Creates releases/v1 branch from v1.3.2
# → Creates v2.0.0 tag and updates v2 tag

Scenario 3: Patch for Older Major Version

./release.sh
# Available major versions:  
#   v1 → latest: v1.3.2 (✅ has release branch)
#   v2 → latest: v2.1.0 (⚠️ no release branch)
# Version suggestions:
#   v1: v1.3.3 (patch) or v1.4.0 (minor) 
#   v2: v2.1.1 (patch) or v2.2.0 (minor)
#   New major: v3.0.0

# Enter version: v1.3.3
# → Switches to releases/v1 branch
# → Creates v1.3.3 tag and updates v1 tag
# → Users on v1 get the bug fix!

Example Release Workflow

# Example: Minor release for v1 while v2 exists
git status  # Can be on any branch - script handles checkout

./release.sh
# Script shows available versions and suggests next versions
# Enter version: v1.4.0
# Confirm: y

# The script:
# 1. Switches to releases/v1 branch
# 2. Creates v1.4.0 tag  
# 3. Updates v1 tag to point to v1.4.0
# 4. Pushes everything to GitHub

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For issues related to this action, please open an issue in this repository.

For Datadog-related questions, consult the Datadog documentation.
