Remove CUDA synchronizations by slicing input tensor with int instead of CUDA tensors in nn.LinearEmbeddingEncoder #2266
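The change the PR title describes can be sketched as follows. This is an illustrative example, not the actual `nn.LinearEmbeddingEncoder` code: when a slice bound is a 0-dim CUDA tensor, PyTorch must copy it to the host to compute the slice, which forces a CUDA synchronization; converting the offsets to Python ints once, up front, avoids that per-slice sync. The function and variable names below are hypothetical.

```python
import torch

def slice_with_tensor(x: torch.Tensor, offsets: torch.Tensor, i: int) -> torch.Tensor:
    # Using 0-dim tensor elements as slice bounds works, but when x and
    # offsets live on CUDA each bound triggers a device-to-host copy
    # (an implicit .item()), i.e. a CUDA synchronization.
    return x[offsets[i]:offsets[i + 1]]

def slice_with_int(x: torch.Tensor, offsets: list, i: int) -> torch.Tensor:
    # Host-side Python ints as slice bounds require no synchronization.
    return x[offsets[i]:offsets[i + 1]]

x = torch.arange(10)
offsets_t = torch.tensor([0, 3, 7, 10])
offsets = offsets_t.tolist()  # one sync up front instead of one per slice
assert torch.equal(slice_with_tensor(x, offsets_t, 1), slice_with_int(x, offsets, 1))
```

The two functions return identical slices; the difference is only where the bounds live, so converting once with `.tolist()` trades a single synchronization for none on the hot path.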

Workflow file for this run

name: PR Labeler

on:  # yamllint disable-line rule:truthy
  pull_request:

jobs:
  triage:
    if: github.repository == 'pyg-team/pytorch-frame'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Add PR labels
        uses: actions/labeler@v4
        continue-on-error: true
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
          sync-labels: true
      - name: Add PR author
        uses: samspills/assign-pr-to-author@v1.0
        if: github.event_name == 'pull_request'
        continue-on-error: true
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
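
The `actions/labeler@v4` step above matches the PR's changed file paths against glob patterns defined in a `.github/labeler.yml` file in the repository. A minimal sketch of such a config (the label names and paths here are illustrative, not the repository's actual labeler config):

```yaml
# .github/labeler.yml — each key is a label, each entry a glob over changed files
documentation:
  - docs/**
nn:
  - torch_frame/nn/**
```

With `sync-labels: true`, labels whose patterns no longer match any changed file are also removed from the PR.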