Replies: 1 comment
The MNN app exposes an API on localhost, but it isn't compatible with the OpenAI specification. For adventurous souls there is the llxprt app on GitHub, a fork of Gemini CLI, which has tools and tool calls. With the right setup (an LLM exposing an API plus llxprt) you could have an AI development environment in your pocket.
Expose a localhost HTTP/WebSocket API in PocketPal so third-party plugins or apps can build autonomous or semi-autonomous “agentic” behaviors without bloating the core app.
Core concept:
PocketPal runs a local API server (127.0.0.1:PORT), with endpoints for:
/v1/session — start chat sessions.
/v1/chat — send/receive messages, with optional tool schema.
/v1/chat/tool_result — feed plugin outputs back to the model.
/v1/tools/execute — optional in-app tool execution (calendar, HTTP, etc.).
/v1/models — list available models.
WS /v1/stream — token streaming + tool_call events.
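A minimal sketch of how that surface could hang together; the paths come from the list above, but the handler names, response fields, and model ids are all hypothetical:

```python
# Sketch of the proposed PocketPal local API surface.
# Paths are from the proposal; handlers and response shapes are assumptions.
from typing import Callable, Dict, Tuple

def start_session(body: dict) -> dict:
    return {"session_id": "sess-1"}                # hypothetical response shape

def chat(body: dict) -> dict:
    return {"session_id": body.get("session_id"), "content": "..."}

def tool_result(body: dict) -> dict:
    return {"accepted": True}

def execute_tool(body: dict) -> dict:
    return {"result": None}

def list_models(body: dict) -> dict:
    return {"models": ["model-a", "model-b"]}      # placeholder model ids

ROUTES: Dict[Tuple[str, str], Callable[[dict], dict]] = {
    ("POST", "/v1/session"): start_session,
    ("POST", "/v1/chat"): chat,
    ("POST", "/v1/chat/tool_result"): tool_result,
    ("POST", "/v1/tools/execute"): execute_tool,
    ("GET", "/v1/models"): list_models,
}

def dispatch(method: str, path: str, body: dict) -> dict:
    handler = ROUTES.get((method, path))
    if handler is None:
        raise KeyError(f"no route for {method} {path}")
    return handler(body)
```

The WS /v1/stream endpoint is omitted here; it would push token and tool_call events over the same session ids.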
Plugins can be:
Local apps/scripts (Tasker, Termux, Node.js, Python).
Remote services accessed via WebSocket or HTTP.
They implement the agent loop, call the API, and handle side effects.
Tools: The model can output a structured tool_call JSON object (name + args). Then either:
PocketPal executes safe built-in tools, or
it emits an event for a plugin to handle, and the plugin posts the result back.
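One way that branch could look. The tool_call shape (name + args) follows the bullet above; the built-in tool set and the event mechanism are assumptions for illustration:

```python
import json

BUILT_IN_TOOLS = {"calendar_add", "http_request"}  # hypothetical safe built-ins

def handle_model_output(raw: str, run_builtin, emit_plugin_event):
    """Parse one model turn: run safe built-ins in-app, otherwise
    hand the tool_call off to a plugin via an event."""
    msg = json.loads(raw)
    call = msg.get("tool_call")
    if call is None:
        return ("final", msg["content"])           # plain reply, no tool use
    if call["name"] in BUILT_IN_TOOLS:
        return ("tool_result", run_builtin(call["name"], call["args"]))
    emit_plugin_event(call)                        # plugin posts result back later
    return ("awaiting_plugin", call["name"])
```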
Security & Safety:
Localhost bind by default; API key for access.
Consent gate per tool + per plugin.
Block SSRF (no 127.0.0.1 / RFC1918 in http_request).
Quotas, logs, kill switch for agents.
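A sketch of the SSRF check for the http_request tool, using only literal-IP inspection; a real guard must also resolve hostnames and pin the connection to the checked address (DNS rebinding defence):

```python
import ipaddress
from urllib.parse import urlsplit

def is_private_target(url: str) -> bool:
    """Return True if the URL points at loopback / RFC 1918 / link-local space.
    Only literal IPs and 'localhost' are checked in this sketch."""
    host = urlsplit(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname: a full implementation resolves DNS and checks every
        # returned address before connecting.
        return host == "localhost"
    return ip.is_loopback or ip.is_private or ip.is_link_local
```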
Benefits:
Keeps PocketPal lightweight — agent logic & new features live in plugins.
Faster iteration — plugins can be updated independently.
Works cross-platform — same API usable on Android, desktop, or server builds.
Example Flow:
Plugin creates a session: POST /v1/session.
Sends a prompt with tool schema to /v1/chat.
Model responds with tool_call.
Plugin executes the action, posts result to /v1/chat/tool_result.
Repeat until finished.
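The steps above can be sketched as a plugin-side loop. Here `post(path, body)` stands for any HTTP helper, and every field name is an assumption about the eventual API:

```python
def agent_loop(post, prompt, tools, run_tool, max_steps=8):
    """Drive the proposed flow: session -> chat -> tool_result ... until the
    model stops calling tools. All field names are assumptions."""
    session = post("/v1/session", {})
    reply = post("/v1/chat", {"session_id": session["session_id"],
                              "message": prompt, "tools": tools})
    for _ in range(max_steps):
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]                # model is finished
        result = run_tool(call["name"], call["args"])
        reply = post("/v1/chat/tool_result",
                     {"session_id": session["session_id"],
                      "name": call["name"], "result": result})
    raise RuntimeError("agent exceeded max_steps")  # quota / kill-switch hook
```

The max_steps cap is where the quotas and kill switch from the safety section would plug in.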
Rollout Plan:
Phase 1: /v1/session, /v1/chat, WS /v1/stream.
Phase 2: /v1/tools/execute + safe http_request handler.
Phase 3: Consent UI, quotas, logs, plugin manager page.
Phase 4: Optional OpenAI-compatible routes for SDK compatibility.
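For Phase 4, a compatibility shim might only need to translate request bodies. The messages / tools / user fields are standard OpenAI chat-completions inputs; the mapping onto the proposed /v1/chat body (including reusing user as a session key) is an assumption:

```python
def from_openai_chat(body: dict) -> dict:
    """Map an OpenAI-style /v1/chat/completions request body onto the
    proposed /v1/chat body. PocketPal-side field names are assumptions."""
    messages = body.get("messages", [])
    latest = messages[-1]["content"] if messages else ""
    return {
        "session_id": body.get("user", "default"),  # reuse 'user' as session key
        "message": latest,
        "tools": body.get("tools", []),
    }
```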
[ This post is a summary, copy-pasted from a human-AI conversation ]