feiskyer/chatgpt-copilot

A VS Code ChatGPT Copilot Extension

The Most Loved Open-Source ChatGPT Extension for VS Code

ChatGPT Copilot is a powerful and telemetry-free extension for Visual Studio Code, bringing the capabilities of ChatGPT directly into your coding environment.

Features

  • 🤖 Supports GPT-4, o1, Claude, Gemini, Ollama, GitHub Copilot, and other OpenAI-compatible models with your API key from OpenAI, Azure OpenAI Service, Google, Anthropic, or other providers.
  • 💥 Model Context Protocol (MCP) support to bring your own tools, plus a DeepClaude (DeepSeek R1 + Claude) mode for better AI responses.
  • 📂 Chat with your Files: Add multiple files and images to your chat using @ for seamless collaboration.
  • 📃 Streaming Answers: Receive real-time responses to your prompts in the sidebar conversation window.
  • 📖 Prompt Manager: Chat with your own prompts (use # to search).
  • 🔥 Tool calls via prompt parsing for models that don't support native tool calling.
  • 📝 Code Assistance: Create files or fix your code with one click or keyboard shortcuts.
  • ➡️ Export Conversations: Export all your conversation history at once in Markdown format.
  • 📰 Custom Prompt Prefixes: Customize what you are asking ChatGPT with ad-hoc prompt prefixes.
  • 💻 Seamless Code Integration: Copy, insert, or create new files directly from ChatGPT's code suggestions.
  • ➕ Editable Prompts: Edit and resend previous prompts.
  • 🛡️ Telemetry Free: No usage data is collected.

Recent Release Highlights

  • v4.9: Added prompt-based tool calls for models that don't support native tool calling.
  • v4.8: New logo and new models.
  • v4.7: Added Model Context Protocol (MCP) integration.
  • v4.6: Added the prompt manager, DeepClaude (DeepSeek + Claude) mode, the GitHub Copilot provider, and chat with files.

Installation

  • Install the extension from the Visual Studio Marketplace, or search for ChatGPT Copilot in the VS Code Extensions view and click Install.
  • Reload Visual Studio Code after installation.

Supported Models & Providers

AI Providers

The extension supports major AI providers with hundreds of models:

| Provider | Models | Special Features |
|----------|--------|------------------|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo, o1, o3, o4-mini | Reasoning models, function calling |
| Anthropic | Claude Sonnet 4, Claude 3.5 Sonnet, Claude Opus 4 | Advanced reasoning, large context |
| Google | Gemini 2.5 Pro, Gemini 2.0 Flash, Gemini Pro | Search grounding, multimodal |
| GitHub Copilot | GPT-4o, Claude Sonnet 4, o3-mini, Gemini 2.5 Pro | Built-in VS Code authentication |
| DeepSeek | DeepSeek R1, DeepSeek Reasoner | Advanced reasoning capabilities |
| Azure OpenAI | GPT-4o, GPT-4, o1 | Enterprise-grade security |
| Azure AI | Various non-OpenAI models | Microsoft's AI model hub |
| Ollama | Llama, Qwen, CodeLlama, Mistral | Local model execution |
| Groq | Llama, Mixtral, Gemma | Ultra-fast inference |
| Perplexity | Llama, Mistral models | Web-enhanced responses |
| xAI | Grok models | Real-time information |
| Mistral | Mistral Large, Codestral | Code-specialized models |
| Together | Various open-source models | Community models |
| OpenRouter | 200+ models | Access to multiple providers |

AI Services

Configure the extension by setting your API keys and preferences in the settings.

| Configuration | Description |
|---------------|-------------|
| API Key | Required; get it from OpenAI, Azure OpenAI, Anthropic, or another AI service |
| API Base URL | Optional; defaults to https://api.openai.com/v1 |
| Model | Optional; defaults to gpt-4o |

Refer to the following sections for more details on configuring various AI services.
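As a minimal sketch, the three core settings above map to entries in VS Code's settings.json; the setting keys are those listed in the Configurations section of this README, and the API key value is a placeholder:

```json
{
  "chatgpt.gpt3.apiKey": "your-api-key",
  "chatgpt.gpt3.apiBaseUrl": "https://api.openai.com/v1",
  "chatgpt.gpt3.model": "gpt-4o"
}
```

You can edit these either through the Settings UI (search for "chatgpt") or directly in settings.json.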

OpenAI

Special note for ChatGPT users: the OpenAI API is billed separately from the ChatGPT app. You need to add credits to your OpenAI account for API usage. Once you have added credits, create a new API key and it should work.

| Configuration | Example |
|---------------|---------|
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://api.openai.com/v1 (Optional) |

Ollama

Pull your model first from the Ollama library, then set the base URL and the custom model name.

| Configuration | Example |
|---------------|---------|
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | qwen2.5 |
| API Base URL | http://localhost:11434/v1/ |
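For instance, after pulling a model locally (e.g. with `ollama pull qwen2.5`), the equivalent settings.json entries might look like the following; the keys come from the Configurations section of this README and qwen2.5 is just an example model name:

```json
{
  "chatgpt.gpt3.provider": "Ollama",
  "chatgpt.gpt3.apiBaseUrl": "http://localhost:11434/v1/",
  "chatgpt.gpt3.model": "custom",
  "chatgpt.gpt3.customModel": "qwen2.5"
}
```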

DeepSeek

Ollama provider:

| Configuration | Example |
|---------------|---------|
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | deepseek-r1 |
| API Base URL | http://localhost:11434/v1/ |

DeepSeek provider:

| Configuration | Example |
|---------------|---------|
| API Key | your-deepseek-key |
| Model | deepseek-reasoner |
| API Base URL | https://api.deepseek.com |

SiliconFlow (SiliconCloud) provider:

| Configuration | Example |
|---------------|---------|
| API Key | your-siliconflow-key |
| Model | custom |
| Custom Model | deepseek-ai/DeepSeek-R1 |
| API Base URL | https://api.siliconflow.cn/v1 |

Azure AI Foundry provider:

| Configuration | Example |
|---------------|---------|
| API Key | your-azure-ai-key |
| Model | DeepSeek-R1 |
| API Base URL | https://[endpoint-name].[region].models.ai.azure.com |

Anthropic Claude

| Configuration | Example |
|---------------|---------|
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |

Google Gemini

| Configuration | Example |
|---------------|---------|
| API Key | your-api-key |
| Model | gemini-2.0-flash-thinking-exp-1219 |
| API Base URL | https://generativelanguage.googleapis.com/v1beta (Optional) |

Azure OpenAI

For Azure OpenAI Service, apiBaseUrl should be set to format https://[YOUR-ENDPOINT-NAME].openai.azure.com/openai/deployments/[YOUR-DEPLOYMENT-NAME].

| Configuration | Example |
|---------------|---------|
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://endpoint-name.openai.azure.com/openai/deployments/deployment-name |
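In settings.json form, the Azure OpenAI example above might be written as follows; the endpoint and deployment names are placeholders, and the keys come from the Configurations section of this README:

```json
{
  "chatgpt.gpt3.provider": "Azure",
  "chatgpt.gpt3.apiKey": "your-api-key",
  "chatgpt.gpt3.apiBaseUrl": "https://endpoint-name.openai.azure.com/openai/deployments/deployment-name",
  "chatgpt.gpt3.model": "gpt-4o"
}
```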

GitHub Copilot

GitHub Copilot is supported with built-in authentication (a popup will ask for your permission when you first use GitHub Copilot models).

Supported Models:

  • OpenAI Models: gpt-3.5-turbo, gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4.5
  • Reasoning Models: o1-ga, o3-mini, o3, o4-mini
  • Claude Models: claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thought, claude-sonnet-4, claude-opus-4
  • Gemini Models: gemini-2.0-flash, gemini-2.5-pro

| Configuration | Example |
|---------------|---------|
| Provider | GitHubCopilot |
| API Key | github |
| Model | custom |
| Custom Model | claude-sonnet-4 |
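The same GitHub Copilot setup, sketched as settings.json entries (keys from the Configurations section of this README):

```json
{
  "chatgpt.gpt3.provider": "GitHubCopilot",
  "chatgpt.gpt3.apiKey": "github",
  "chatgpt.gpt3.model": "custom",
  "chatgpt.gpt3.customModel": "claude-sonnet-4"
}
```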

GitHub Models

For GitHub Models, use your GitHub token as the API key.

| Configuration | Example |
|---------------|---------|
| API Key | your-github-token |
| Model | o1 |
| API Base URL | https://models.inference.ai.azure.com |

OpenAI-compatible Models

To use OpenAI-compatible APIs, set the model to "custom" and then specify your custom model name.

Example for Groq:

| Configuration | Example |
|---------------|---------|
| API Key | your-groq-key |
| Model | custom |
| Custom Model | mixtral-8x7b-32768 |
| API Base URL | https://api.groq.com/openai/v1 |

DeepClaude (DeepSeek + Claude)

| Configuration | Example |
|---------------|---------|
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |
| Reasoning API Key | your-deepseek-api-key |
| Reasoning Model | deepseek-reasoner (or deepseek-r1, depending on your provider) |
| Reasoning API Base URL | https://api.deepseek.com (or your own base URL) |
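A DeepClaude setup pairs the regular chat-model settings with the chatgpt.gpt3.reasoning.* settings listed in the Configurations section. As a sketch in settings.json (API keys are placeholders):

```json
{
  "chatgpt.gpt3.apiKey": "your-anthropic-api-key",
  "chatgpt.gpt3.model": "claude-3-sonnet-20240229",
  "chatgpt.gpt3.reasoning.apiKey": "your-deepseek-api-key",
  "chatgpt.gpt3.reasoning.model": "deepseek-reasoner",
  "chatgpt.gpt3.reasoning.apiBaseUrl": "https://api.deepseek.com"
}
```

The reasoning model produces the chain of thought, while the chat model writes the final answer.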

Commands & Keyboard Shortcuts

The extension provides various commands accessible through the Command Palette (Ctrl+Shift+P / Cmd+Shift+P) and keyboard shortcuts.

Context Menu Commands (right-click on selected code)

| Command | Keyboard Shortcut | Description |
|---------|-------------------|-------------|
| Generate Code | Ctrl+Shift+A / Cmd+Shift+A | Generate code based on comments or requirements |
| Add Tests | Ctrl+K Ctrl+Shift+1 / Cmd+K Cmd+Shift+1 | Generate unit tests for selected code |
| Find Problems | Ctrl+K Ctrl+Shift+2 / Cmd+K Cmd+Shift+2 | Analyze code for bugs and issues |
| Optimize | Ctrl+K Ctrl+Shift+3 / Cmd+K Cmd+Shift+3 | Optimize and improve selected code |
| Explain | Ctrl+K Ctrl+Shift+4 / Cmd+K Cmd+Shift+4 | Explain how the selected code works |
| Add Comments | Ctrl+K Ctrl+Shift+5 / Cmd+K Cmd+Shift+5 | Add documentation comments to code |
| Complete Code | Ctrl+K Ctrl+Shift+6 / Cmd+K Cmd+Shift+6 | Complete partial or incomplete code |
| Ad-hoc Prompt | Ctrl+K Ctrl+Shift+7 / Cmd+K Cmd+Shift+7 | Use a custom prompt with selected code |
| Custom Prompt 1 | Ctrl+K Ctrl+Shift+8 / Cmd+K Cmd+Shift+8 | Apply your first custom prompt |
| Custom Prompt 2 | Ctrl+K Ctrl+Shift+9 / Cmd+K Cmd+Shift+9 | Apply your second custom prompt |

General Commands

| Command | Description |
|---------|-------------|
| ChatGPT: Ask anything | Open an input box to ask any question |
| ChatGPT: Reset session | Clear the current conversation and start fresh |
| ChatGPT: Clear conversation | Clear the conversation history |
| ChatGPT: Export conversation | Export chat history to a Markdown file |
| ChatGPT: Manage Prompts | Open the prompt management interface |
| ChatGPT: Toggle Prompt Manager | Show/hide the prompt manager panel |
| Add Current File to Chat Context | Add the currently open file to the chat context |
| ChatGPT: Open MCP Servers | Manage Model Context Protocol servers |

Prompt Management

  • Use # followed by prompt name to search and apply saved prompts
  • Use @ to add files to your conversation context
  • Access the Prompt Manager through the sidebar for full prompt management

Model Context Protocol (MCP)

The extension supports the Model Context Protocol (MCP), allowing you to extend AI capabilities with custom tools and integrations.

What is MCP?

MCP enables AI models to securely connect to external data sources and tools, providing:

  • Custom Tools: Integrate your own tools and APIs
  • Data Sources: Connect to databases, file systems, APIs, and more
  • Secure Execution: Sandboxed tool execution environment
  • Multi-Step Workflows: Agent-like behavior with tool chaining

MCP Server Types

The extension supports three types of MCP servers:

| Type | Description | Use Case |
|------|-------------|----------|
| stdio | Standard input/output communication | Local command-line tools and scripts |
| sse | Server-Sent Events over HTTP | Web-based tools and APIs |
| streamable-http | HTTP streaming communication | Real-time data sources |

How to configure MCP?

  1. Access MCP Manager: Use ChatGPT: Open MCP Servers command or click the MCP icon in the sidebar
  2. Add MCP Server: Configure your MCP servers with:
    • Name: Unique identifier for the server
    • Type: Choose from stdio, sse, or streamable-http
    • Command/URL: Executable path or HTTP endpoint
    • Arguments: Command-line arguments (for stdio)
    • Environment Variables: Custom environment settings
    • Headers: HTTP headers (for sse/streamable-http)

Example MCP Configurations

File System Access (stdio):

{
  "name": "filesystem",
  "type": "stdio",
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
  "isEnabled": true
}

Web Search (sse):

{
  "name": "web-search",
  "type": "sse",
  "url": "https://api.example.com/mcp/search",
  "headers": {"Authorization": "Bearer your-token"},
  "isEnabled": true
}
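For the third server type, a streamable-http server would presumably be configured analogously to the sse example, using a URL and optional headers; the URL and token below are hypothetical:

```json
{
  "name": "realtime-data",
  "type": "streamable-http",
  "url": "https://api.example.com/mcp/stream",
  "headers": {"Authorization": "Bearer your-token"},
  "isEnabled": true
}
```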

Agent Mode

When MCP servers are enabled, the extension operates in Agent Mode:

  • Max Steps: Configure up to 15 tool execution steps
  • Tool Chaining: Automatic multi-step workflows
  • Error Handling: Robust error recovery and retry logic
  • Progress Tracking: Real-time tool execution feedback

Configurations

Below is the full list of configuration options.

Core Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.gpt3.provider | Auto | AI provider: Auto, OpenAI, Azure, AzureAI, Anthropic, GitHubCopilot, Google, Mistral, xAI, Together, DeepSeek, Groq, Perplexity, OpenRouter, Ollama |
| chatgpt.gpt3.apiKey | | API key for your chosen provider |
| chatgpt.gpt3.apiBaseUrl | https://api.openai.com/v1 | API base URL for your provider |
| chatgpt.gpt3.model | gpt-4o | Model to use for conversations |
| chatgpt.gpt3.customModel | | Custom model name when using the custom model option |
| chatgpt.gpt3.organization | | Organization ID (OpenAI only) |

Model Parameters

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.gpt3.maxTokens | 0 (unlimited) | Maximum tokens to generate in a completion |
| chatgpt.gpt3.temperature | 1 | Sampling temperature (0-2); higher = more creative |
| chatgpt.gpt3.top_p | 1 | Nucleus sampling parameter (0-1) |
| chatgpt.systemPrompt | | System prompt for the AI assistant |

DeepClaude (Reasoning + Chat) Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.gpt3.reasoning.provider | Auto | Provider for the reasoning model (Auto, OpenAI, Azure, AzureAI, Google, DeepSeek, Groq, OpenRouter, Ollama) |
| chatgpt.gpt3.reasoning.apiKey | | API key for the reasoning model |
| chatgpt.gpt3.reasoning.apiBaseUrl | https://api.openai.com/v1 | API base URL for the reasoning model |
| chatgpt.gpt3.reasoning.model | | Model to use for reasoning (e.g., deepseek-reasoner, o1) |
| chatgpt.gpt3.reasoning.organization | | Organization ID for the reasoning model (OpenAI only) |

Agent & MCP Configuration

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.gpt3.maxSteps | 15 | Maximum steps for agent mode when using MCP servers |

Feature Toggles

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.gpt3.generateCode-enabled | true | Enable the code generation context menu |
| chatgpt.gpt3.searchGrounding.enabled | false | Enable search grounding (Gemini models only) |

Prompt Prefixes & Context Menu

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.promptPrefix.addTests | Implement tests for the following code | Prompt for generating unit tests |
| chatgpt.promptPrefix.addTests-enabled | true | Enable the "Add Tests" context menu item |
| chatgpt.promptPrefix.findProblems | Find problems with the following code | Prompt for finding bugs and issues |
| chatgpt.promptPrefix.findProblems-enabled | true | Enable the "Find Problems" context menu item |
| chatgpt.promptPrefix.optimize | Optimize the following code | Prompt for code optimization |
| chatgpt.promptPrefix.optimize-enabled | true | Enable the "Optimize" context menu item |
| chatgpt.promptPrefix.explain | Explain the following code | Prompt for code explanation |
| chatgpt.promptPrefix.explain-enabled | true | Enable the "Explain" context menu item |
| chatgpt.promptPrefix.addComments | Add comments for the following code | Prompt for adding documentation |
| chatgpt.promptPrefix.addComments-enabled | true | Enable the "Add Comments" context menu item |
| chatgpt.promptPrefix.completeCode | Complete the following code | Prompt for code completion |
| chatgpt.promptPrefix.completeCode-enabled | true | Enable the "Complete Code" context menu item |
| chatgpt.promptPrefix.adhoc-enabled | true | Enable the "Ad-hoc Prompt" context menu item |
| chatgpt.promptPrefix.customPrompt1 | | Your first custom prompt template |
| chatgpt.promptPrefix.customPrompt1-enabled | false | Enable the first custom prompt in the context menu |
| chatgpt.promptPrefix.customPrompt2 | | Your second custom prompt template |
| chatgpt.promptPrefix.customPrompt2-enabled | false | Enable the second custom prompt in the context menu |

User Interface

| Setting | Default | Description |
|---------|---------|-------------|
| chatgpt.response.showNotification | false | Show a notification when the AI responds |
| chatgpt.response.autoScroll | true | Auto-scroll to the bottom when new content is added |

How to install locally

We highly recommend installing the extension directly from the VS Code Marketplace for the easiest setup and automatic updates. However, for advanced users, building and installing locally is also an option.

  • Install vsce (the Visual Studio Code Extension Manager) if you don't have it on your machine:
    • npm install --global vsce
  • Run vsce package
  • Follow the instructions and install the generated .vsix manually:

npm run build
npm run package
code --uninstall-extension feiskyer.chatgpt-copilot
code --install-extension chatgpt-copilot-*.vsix

Acknowledgements

Inspired by the gencay/vscode-chatgpt project and built on the intuitive client provided by the Vercel AI Toolkit, this extension continues the open-source legacy, bringing seamless and robust AI functionality directly into the editor, telemetry-free.

License

This project is released under the ISC License. See LICENSE for details. The copyright notice and the respective permission notices must appear in all copies.