
[Bug]: Core dump when loading a model with a specific dtype on Kunlun XPU #11087

@YoctoHan

Description


Software Environment

- paddlepaddle-xpu: 3.3.0.dev20250912
- paddlenlp: 3.0.0b4.post20250825

Duplicate Check

  • I have searched the existing issues

Error Description

When loading a model on a Kunlun P800, specifying the dtype as 'float16' or 'float32' works fine, but specifying 'bfloat16' causes a core dump. This reproduces 100% of the time.

Stable Reproduction Steps & Code

import paddle
from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM

# Loads successfully
model = AutoModelForCausalLM.from_pretrained("/aiXcoder-7B-base", convert_from_torch=True, dtype="float32")

# Loads successfully
model = AutoModelForCausalLM.from_pretrained("/aiXcoder-7B-base", convert_from_torch=True, dtype="float16")

# Fails: core dump
model = AutoModelForCausalLM.from_pretrained("/aiXcoder-7B-base", convert_from_torch=True, dtype="bfloat16")

Metadata

Labels: bug (Something isn't working)
