Placeholder tracing segmentation faults #9049

Open
@rpsilva-aws

Description

🐛 Bug

There is a segmentation fault when attempting to dump the HLO for a computation that involves placeholder tensors. The issue appears to originate in the PjRt computation client, which attempts to execute the computation; execution requires the buffers of the given PjRt data to be present, which is not the case for placeholders. We should not materialize anything when tracing and returning the HLO, so this looks like undesired behavior.

To Reproduce

import torch
import torch_xla
from torch_xla.core.xla_builder import create_placeholder_tensor

shape = (32, 32)
dtype = torch.float32
p = create_placeholder_tensor(shape, dtype)
result = p + 1.0
print(torch_xla._XLAC._get_xla_tensors_hlo([result]))
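For comparison, here is a minimal contrast sketch (not from the original report, and assuming the standard xla_model API) that traces the same computation with an ordinary device tensor instead of a placeholder. This case dumps the HLO without any problem, which suggests the materialization path is only reached for placeholders:

import torch
import torch_xla
import torch_xla.core.xla_model as xm

# Hypothetical contrast case: a regular device tensor instead of a placeholder.
device = xm.xla_device()
t = torch.zeros(32, 32, dtype=torch.float32, device=device)
result = t + 1.0
# Dumping the traced HLO should only require tracing, not materialized buffers.
print(torch_xla._XLAC._get_xla_tensors_hlo([result]))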

Environment

  • Reproducible on XLA backend [CPU/TPU/CUDA]: CPU
  • torch_xla version: 2.6+

Additional context
