Integrate qwen-api with the Cline Extension.
- Clone the repository:

  ```bash
  git clone https://github.com/arosyihuddin/qwen-cline.git
  cd qwen-cline
  ```
- Choose an installation method:

  **Poetry**

  ```bash
  poetry install
  poetry shell
  ```

  **venv**

  ```bash
  python3.12 -m venv .venv

  # macOS/Linux
  source .venv/bin/activate

  # Windows PowerShell
  .\.venv\Scripts\activate

  pip install --upgrade pip
  pip install -r requirements.txt
  ```
- Docker

  Pull the image from Docker Hub:

  ```bash
  docker pull rosyihuddin/qwen-cline:latest
  ```

  Create a `.env` file from the template:

  ```bash
  cp .env.example .env
  ```

  Edit the `.env` file to fill in the `QWEN_AUTH_TOKEN` and `QWEN_COOKIE` values, then run the container with the env file:

  ```bash
  docker run -d -p 8000:8000 --env-file .env rosyihuddin/qwen-cline:latest
  ```

  Access the application at http://localhost:8000.
Create a `.env` file at the project root:

```env
QWEN_AUTH_TOKEN=<your_auth_token>
QWEN_COOKIE=<your_cookie>

# Config
THINKING=true
# Default THINKING_BUDGET (max 38912)
THINKING_BUDGET=3000
WEB_SEARCH=false
WEB_DEVELOPMENT=false
```
Note: Follow the authentication guide in the qwen-api repository to obtain your token and cookie.
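The boolean and budget settings above could be parsed in application code along these lines. This is a minimal sketch, assuming a hypothetical `load_config` helper; the actual qwen-cline loader may differ:

```python
import os

# Upper bound stated for THINKING_BUDGET in the sample .env
MAX_THINKING_BUDGET = 38912


def _as_bool(value: str) -> bool:
    """Interpret common truthy strings from a .env file."""
    return value.strip().lower() in {"1", "true", "yes", "on"}


def load_config(env=os.environ) -> dict:
    """Hypothetical loader: read the qwen-cline settings with safe defaults."""
    budget = int(env.get("THINKING_BUDGET", "3000"))
    return {
        "thinking": _as_bool(env.get("THINKING", "true")),
        # Clamp to the documented maximum of 38912
        "thinking_budget": min(budget, MAX_THINKING_BUDGET),
        "web_search": _as_bool(env.get("WEB_SEARCH", "false")),
        "web_development": _as_bool(env.get("WEB_DEVELOPMENT", "false")),
    }
```

Clamping the budget keeps an over-large value in `.env` from exceeding the documented maximum.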
The server will run at http://localhost:8000 and expose these endpoints:

- `GET /v1/models`
- `POST /v1/chat/completions` (streaming via SSE)
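Since the endpoints follow the OpenAI wire format, a `/v1/models` response can be consumed with a small helper. A sketch under that assumption; the sample response body below is illustrative, not captured from the server:

```python
import json


def model_ids(models_response: str) -> list[str]:
    """Extract model ids from an OpenAI-style GET /v1/models response body."""
    payload = json.loads(models_response)
    return [entry["id"] for entry in payload.get("data", [])]


# Illustrative response in the OpenAI list format (not captured from qwen-cline)
sample = '{"object": "list", "data": [{"id": "qwen3-235b-a22b", "object": "model"}]}'
```

This mirrors what Cline does during model detection: fetch the list and read each entry's `id`.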
Start the server:

```bash
# With Poetry
poetry run uvicorn src.server:app --host 0.0.0.0 --port 8000

# With venv
uvicorn src.server:app --host 0.0.0.0 --port 8000
```
- List available models:

  ```bash
  curl http://localhost:8000/v1/models
  ```

- Test streaming chat:

  ```bash
  curl -N -X POST http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "qwen3-235b-a22b",
      "messages": [{"role": "user", "content": "Hello, Qwen!"}],
      "stream": true
    }'
  ```
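With `"stream": true`, the completion arrives as Server-Sent Events in the OpenAI chunk format. A minimal sketch of assembling the streamed text, assuming that format; the field names follow the OpenAI spec, and the sample stream below is illustrative rather than an observed qwen-cline response:

```python
import json


def assemble_sse_text(lines) -> str:
    """Concatenate delta content from OpenAI-style 'data:' SSE lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)


# Illustrative stream (shape follows the OpenAI chunk format)
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world!"}}]}',
    "data: [DONE]",
]
```

This is essentially what Cline does on its end: read each `data:` line, stop at `[DONE]`, and append each chunk's `delta.content` to the message being rendered.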
Configure Cline:

- Open the Cline sidebar in VS Code and select LM Studio as the provider.
- Enable Custom Base URL and enter: http://localhost:8000
- Cline will automatically call `GET /v1/models` and detect the model `qwen3-235b-a22b`.
- Choose the model and save.

Cline will now use your qwen-cline server as its AI backend, streaming tokens in real time within your IDE! 🎉