
No thinking, no answer... only DONE #593

@RamazanGeven

Description


Yesterday I installed lollms on two different machines and both worked well. Today I wanted to delete it and install it again. After the reinstall it runs, but there is no thinking and no answer, just DONE.

```
[INFO][2025-02-06 09:32:21] Selecting personality
Selecting active personality 0 ...[INFO][2025-02-06 09:32:21] ok
Selected lollms
[INFO][2025-02-06 09:32:21] Saving configuration
INFO: ::1:52670 - "POST /select_personality HTTP/1.1" 200 OK
INFO: ::1:52670 - "POST /get_config HTTP/1.1" 200 OK
INFO: ::1:52671 - "POST /get_config HTTP/1.1" 200 OK
Listing all personalitiesINFO: ::1:52671 - "GET /list_bindings HTTP/1.1" 200 OK
INFO: ::1:52671 - "GET /get_available_models HTTP/1.1" 200 OK
Listing modelsok
INFO: ::1:52671 - "GET /list_models HTTP/1.1" 200 OK
Getting active model
ok
INFO: ::1:52671 - "GET /get_active_model HTTP/1.1" 200 OK
INFO: ::1:52671 - "POST /get_config HTTP/1.1" 200 OK
INFO: ::1:52671 - "POST /get_personality_languages_list HTTP/1.1" 200 OK
INFO: ::1:52671 - "POST /get_personality_language HTTP/1.1" 200 OK
INFO: ::1:52671 - "GET /is_rt_on HTTP/1.1" 200 OK
OK
INFO: ::1:52670 - "GET /get_all_personalities HTTP/1.1" 200 OK
Loading discussion for client QPkiRKA1wyFMagQbAAAN ... ok
INFO: ::1:52671 - "GET /is_rt_on HTTP/1.1" 200 OK
INFO: ::1:52670 - "POST /get_discussion_files_list HTTP/1.1" 200 OK
INFO: ::1:52672 - "POST /get_discussion_files_list HTTP/1.1" 200 OK
INFO: ::1:52673 - "GET /get_generation_status HTTP/1.1" 200 OK
Starting message generation by lollms
[INFO][2025-02-06 09:32:34] Text generation requested by client: QPkiRKA1wyFMagQbAAAN
[INFO][2025-02-06 09:32:34] Started generation task
[INFO][2025-02-06 09:32:34] Received message : who are you (3)
[INFO][2025-02-06 09:32:34] prompt has 1192 tokens
[INFO][2025-02-06 09:32:34] warmup for generating up to 2898 tokens
Llama.generate: 1 prefix-match hit, remaining 415 prompt tokens to eval
llama_perf_context_print: load time = 1079.49 ms
llama_perf_context_print: prompt eval time = 0.00 ms / 415 tokens ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 7141.27 ms / 416 tokens
[INFO][2025-02-06 09:32:42] Finished executing the generation
[INFO][2025-02-06 09:32:42] ## Done Generation ##
[INFO][2025-02-06 09:32:42] ╔══════════════════════════════════════════════════╗
[INFO][2025-02-06 09:32:42] ║ Done ║
[INFO][2025-02-06 09:32:42] ╚══════════════════════════════════════════════════╝
Llama.generate: 1 prefix-match hit, remaining 32 prompt tokens to eval
INFO: ::1:52686 - "POST /get_discussion_files_list HTTP/1.1" 200 OK
llama_perf_context_print: load time = 1079.49 ms
llama_perf_context_print: prompt eval time = 0.00 ms / 32 tokens ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 1120.33 ms / 33 tokens
[INFO][2025-02-06 09:32:43]
```
