
Commit 8685878

Additional examples for BERT
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
1 parent 3dd20af commit 8685878

File tree

1 file changed (+6 -0)


_posts/2025-04-16-local-llm-openfaas-edge.md

Lines changed: 6 additions & 0 deletions
@@ -371,6 +371,12 @@ Whilst Ollama does not yet support multi-modal models, which can process and pro

You can deploy the [function we wrote previously on the blog](https://www.openfaas.com/blog/transcribe-audio-with-openai-whisper/) that uses Whisper as a core service on OpenFaaS Edge, then send it HTTP requests just as we did with the Ollama service.

+You may find that despite the hype around LLMs, they are not a one-size-fits-all solution.
+
+A popular alternative for classification is BERT (Bidirectional Encoder Representations from Transformers), a state-of-the-art NLP model from Google.
+
+There are good examples on the [Kaggle](https://www.kaggle.com/code/merishnasuwal/document-classification-using-bert), [Keras](https://keras.io/keras_hub/api/models/bert/bert_text_classifier/), and [TensorFlow](https://www.tensorflow.org/text/tutorials/classify_text_with_bert) sites.

### Conclusion

The latest release of [OpenFaaS Edge](https://docs.openfaas.com/deployment/edge/) adds support for Nvidia GPUs for core services defined in the `docker-compose.yaml` file. This makes it easy to run local LLMs using a tool like Ollama, then to call them for a wide range of tasks and workflows, whilst retaining data privacy and complete confidentiality.
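
The diff's surrounding context mentions sending the Whisper function HTTP requests just as we did with the Ollama service. As a rough sketch only, assuming a local OpenFaaS Edge gateway on port 8080 and a function named `transcribe` (both placeholders, not taken from the post), such a call could look like this:

```python
# Sketch: invoke a function deployed to OpenFaaS Edge over HTTP.
# The gateway URL, function name, audio file, and payload format are
# illustrative assumptions -- adjust them to your own deployment.
import requests

GATEWAY = "http://127.0.0.1:8080"   # assumed local gateway address
FUNCTION = "transcribe"             # hypothetical function name

with open("meeting.wav", "rb") as f:
    audio = f.read()                # hypothetical audio sample to transcribe

resp = requests.post(
    f"{GATEWAY}/function/{FUNCTION}",
    data=audio,
    headers={"Content-Type": "application/octet-stream"},
    timeout=300,
)
resp.raise_for_status()
print(resp.text)                    # transcription returned by the function
```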
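
For the BERT classification examples the new lines point to, here is a minimal sketch using `keras_hub`, the library behind the Keras link above. The preset name, the toy dataset, and the two labels are illustrative assumptions rather than anything taken from the linked tutorials:

```python
# Sketch: fine-tune a small pre-trained BERT classifier with keras_hub.
# Preset, labels, and training data below are assumptions for illustration.
import keras_hub

# Toy dataset: 1 = positive feedback, 0 = negative feedback (assumed labels).
features = [
    "The deployment worked first time, great experience.",
    "The service kept crashing and lost my data.",
]
labels = [1, 0]

# Load a pre-trained BERT preset with a two-class classification head.
classifier = keras_hub.models.BertTextClassifier.from_preset(
    "bert_tiny_en_uncased",  # assumed preset; larger presets work the same way
    num_classes=2,
)

# Fine-tune on the toy examples, then classify a new document.
classifier.fit(x=features, y=labels, batch_size=2, epochs=1)
print(classifier.predict(["Everything ran smoothly on the GPU."]))
```

A classifier like this could then be packaged and invoked as an OpenFaaS function in the same way as the Whisper example above.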
