
How do you manage the error in evaluate_main()? #2

@Zapotecatl

Description


Hi @MaggotHATE ,

Sorry, it is not an issue, it is a question. In the main example there is a part where, if an error occurs, the chat ends:

LOG("eval: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd).c_str());

if (llama_decode(ctx, llama_batch_get_one(&embd[i], n_eval, n_past, 0))) {
     LOG_TEE("%s : failed to eval\n", __func__);
     return 1;
}

I saw that you use the functions evaluate_main() and checkEmb(), but I don't understand how you manage the error.

if (llama_decode(ctx, llama_batch_get_one(&embd[i], n_eval, n_past, 0))) {

I mean, I don't understand in which cases the error occurs, and how to avoid or manage it (my knowledge of llama is limited). In my app (a chat bot) I am currently using a workaround: the chat runs inside an outer while(true) loop, and if the error occurs the chat is launched again. This gives the impression that the chat continues, so the user does not perceive that the program ended abruptly.

So, please, could you explain to me how you manage the error?

Thanks for the help!
