Checking if we have GPU, using C library. #10625

Answered by Emreerdog
Emreerdog asked this question in Q&A

Thankfully I found it. When loading a model with llama_load_model_from_file, llama.cpp iterates over all available backend devices, so I can see every device there:

    // use all available devices
    for (size_t i = 0; i < ggml_backend_dev_count(); ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        switch (ggml_backend_dev_type(dev)) {
            case GGML_BACKEND_DEVICE_TYPE_CPU:
            case GGML_BACKEND_DEVICE_TYPE_ACCEL:
                // skip CPU backends since they are handled separately
                break;

            case GGML_BACKEND_DEVICE_TYPE_GPU:
                model->devices.push_back(dev);
                break;
        }
    }

Replies: 1 comment 1 reply (from @slaren)
Answer selected by Emreerdog