how to add an extra fixed tensor to the token embedding in gpt2 arch #9197
Unanswered · Francis235 asked this question in Q&A · 0 comments
Hi, I'd like to know how to add an extra fixed tensor to the token embedding. My model is based on GPT-2, and my input format is '[mel][bos][token][eos]'. I need to add my mel_embed (a fixed vector, e.g. 1x1600) to the embedding of '[mel][bos][token][eos]'. The Python code is:
I tried to create a tensor in build_gpt2 as follows:
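Roughly, the intended computation looks like this (a minimal NumPy sketch; the names `tok_embed`/`mel_embed`, the sequence length, and the 1x1600 hidden size are illustrative, not the actual model code):

```python
import numpy as np

# Hypothetical sketch: a fixed mel embedding is broadcast-added to every
# position of the token-embedding sequence before it enters the model.
hidden_dim = 1600
seq_len = 4  # e.g. [mel][bos][token][eos]

rng = np.random.default_rng(0)
tok_embed = rng.standard_normal((seq_len, hidden_dim))  # per-token embeddings
mel_embed = rng.standard_normal((1, hidden_dim))        # fixed 1x1600 vector

# NumPy broadcasting adds the same mel vector to every token position.
inp = tok_embed + mel_embed
assert inp.shape == (seq_len, hidden_dim)
```

The same broadcast addition is what the ggml graph would need to express.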
but I get the following error:

llama-cli: /workspace/llama.cpp/ggml/src/ggml-backend.c:1574: ggml_backend_sched_split_graph: Assertion `src_backend_id != -1' failed.
I think I should create the inp_me tensor in advance, just like the model weights and biases created in llm_load_tensors(), but I don't know how to do that. Any suggestions? Thanks in advance.