
How slow is slow? #49

@flatsiedatsie

I downloaded the GitHub repo and placed it on a localhost server.

I opened the page and clicked the "Load GPT2 117Mb" model.

I've been waiting for a few minutes now, with the output stuck on "Loading token embeddings...". Is that normal behaviour?

Console output:

```
Loading model from folder: gpt2
Loading params...
Warning: Buffer size calc result exceeds GPU limit, are you using this value for a tensor size? 50257 768 1 154389504
bufferSize @ model.js:510
loadParameters @ model.js:298
await in loadParameters (async)
loadModel @ model.js:276
initialize @ model.js:32
await in initialize (async)
loadModel @ gpt/:105
onclick @ gpt/:23
Params: {n_layer: 12, n_head: 12, n_embd: 768, vocab_size: 50257, n_ctx: 1024, …}
Loading token embeddings...
```
  • Apple M1 Pro
  • Brave 1.61 - "WebGPU is supported in your browser!"
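For context on where the warning's numbers come from, here is a minimal sketch (not the repo's actual code) of the arithmetic: the token-embedding matrix is `vocab_size × n_embd` float32 values, and the resulting byte count is larger than the WebGPU spec's default `maxStorageBufferBindingSize` of 128 MiB, which is what a device gets unless higher limits are explicitly requested.

```javascript
// Sketch: why the token-embedding buffer trips the GPU limit warning.
// Values taken from the logged Params: vocab_size 50257, n_embd 768.
const vocabSize = 50257;
const nEmbd = 768;
const bytesPerFloat32 = 4;

// 50257 * 768 * 4 = 154,389,504 bytes (~147 MiB) — the number in the warning.
const embeddingBytes = vocabSize * nEmbd * bytesPerFloat32;

// WebGPU spec defaults, granted unless you ask for more at device creation:
const defaultMaxStorageBufferBindingSize = 134217728; // 128 MiB
const defaultMaxBufferSize = 268435456;               // 256 MiB

// The embedding exceeds the default storage-binding limit, hence the warning.
const exceedsBindingLimit = embeddingBytes > defaultMaxStorageBufferBindingSize;
console.log(embeddingBytes, exceedsBindingLimit);
```

In the browser, the actual hardware ceiling can be checked via `(await navigator.gpu.requestAdapter()).limits.maxStorageBufferBindingSize`, and a higher limit can be requested by passing `requiredLimits` to `adapter.requestDevice()`; whether the loader does this is a separate question for the maintainers.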
