Pinned
- How to run LLM AI model locally on a PC/Server (gist)

  Here are the steps to get a model running on _your not-so-powerful_ computer:

  1. Install llama.cpp (you can also build it from source with CMake):

     ```shell
     brew install llama.cpp
     ```
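Once llama.cpp is installed, the setup above can be exercised end to end. A minimal sketch, assuming a recent llama.cpp build whose `llama-cli` and `llama-server` binaries support downloading GGUF models from Hugging Face via `-hf`; the model repository below is an illustrative example, not taken from the gist:

```shell
# Install llama.cpp (ships the llama-cli and llama-server binaries)
brew install llama.cpp

# One-shot prompt; -hf fetches the GGUF model on first use (example repo,
# swap in any GGUF model repository you prefer)
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -p "Say hello in one sentence."

# Or serve an OpenAI-compatible HTTP API on localhost:8080
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```

`llama-server` is the more practical entry point if you want other tools to talk to the local model over HTTP.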
- tunapanda/h5p-standalone (Public): Display H5P content without the need for an H5P server
- Issuing TLS Certificates on Traefik Using Let's Encrypt Pebble ACME Test Server (gist)

  Prerequisites:
  - Docker with the Docker Compose plugin
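The Docker prerequisite above is enough to bring up a local Pebble ACME server for testing. A minimal sketch, assuming the upstream image at `ghcr.io/letsencrypt/pebble` and Pebble's default port; the gist itself will have the exact Compose setup wired to Traefik:

```shell
# Start Pebble; its ACME directory is served at https://localhost:14000/dir
# by default (image name and port are assumptions from Pebble's defaults)
docker run --rm -p 14000:14000 ghcr.io/letsencrypt/pebble

# Pebble serves a self-signed certificate, so skip verification when probing
curl -k https://localhost:14000/dir
```

Point Traefik's ACME certificate resolver at that directory URL instead of the real Let's Encrypt endpoint to test issuance without rate limits.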
- murage-poc/silver (Public): This repository contains multiple demos (Node.js + Kysely + dbmate), each located on its own branch.