How to Use MCP Servers with ChatGPT (2026)
Learn how to connect Model Context Protocol servers to ChatGPT using Docker bridges or custom API integrations. Includes complete code examples and comparisons.
WHAT YOU'LL LEARN
- Two methods to use MCP servers with ChatGPT
- Docker bridge approach for plugin architecture
- OpenAI API integration with MCP SDK
- Complete code examples for GitHub MCP server
- Comparison with Claude Desktop native support
- Troubleshooting common integration issues
Can ChatGPT Use MCP Servers?
Yes! While ChatGPT doesn't have native MCP support like Claude Desktop, you can connect MCP servers to ChatGPT using two approaches:
- Docker Bridge Method: Run MCP servers in Docker containers and expose them via HTTP endpoints that ChatGPT plugins can access
- OpenAI API Method: Build a custom integration that forwards ChatGPT's tool calls to MCP servers using the OpenAI SDK and MCP client libraries
Important Note
These methods require more setup than Claude Desktop's native MCP support. If you're choosing an AI assistant specifically for MCP integration, Claude Desktop offers the easiest experience.
Comparison: Two Integration Methods

| | Method 1: Docker Bridge | Method 2: OpenAI API |
|---|---|---|
| How it works | MCP servers run in Docker containers, exposed as HTTP endpoints for ChatGPT plugins | Custom code forwards ChatGPT's tool calls to MCP servers via the OpenAI SDK |
| Requirements | Docker Desktop, ChatGPT Plus, a publicly reachable URL | OpenAI API key, Python 3.8+ or Node.js 18+ |
| Best for | Using MCP tools from the ChatGPT interface | Building custom applications on top of the API |
Method 1: Docker Bridge Setup
This approach wraps MCP servers in Docker containers and exposes them as HTTP APIs that ChatGPT plugins can call.
Prerequisites
- Docker: Docker Desktop installed
- ChatGPT Plus: for plugin access
- Node.js: v18 or higher
Step 1: Create HTTP Wrapper
Create a Node.js server that bridges HTTP requests to MCP protocol calls:
mcp-http-bridge.js
const express = require('express');
const { Client } = require('@modelcontextprotocol/sdk/client/index.js');
const { StdioClientTransport } = require('@modelcontextprotocol/sdk/client/stdio.js');

const app = express();
app.use(express.json());

// Initialize MCP client
let mcpClient;
let mcpTransport;

async function initMCPClient() {
  // Create transport. StdioClientTransport spawns the server process
  // itself, so pass the command and args rather than a pre-spawned process
  mcpTransport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-github'],
    env: {
      ...process.env,
      GITHUB_TOKEN: process.env.GITHUB_TOKEN
    }
  });
// Create client
mcpClient = new Client({
name: 'mcp-http-bridge',
version: '1.0.0'
}, {
capabilities: {}
});
await mcpClient.connect(mcpTransport);
console.log('MCP client connected');
}
// List available tools
app.get('/tools', async (req, res) => {
try {
const tools = await mcpClient.listTools();
res.json(tools);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// Call a tool
app.post('/tools/:toolName', async (req, res) => {
try {
const { toolName } = req.params;
const { arguments: toolArgs } = req.body;
const result = await mcpClient.callTool({
name: toolName,
arguments: toolArgs
});
res.json(result);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
// Health check
app.get('/health', (req, res) => {
res.json({ status: 'ok' });
});
// Start server
const PORT = process.env.PORT || 3000;
initMCPClient().then(() => {
app.listen(PORT, () => {
console.log(`MCP HTTP Bridge running on port ${PORT}`);
});
}).catch(err => {
console.error('Failed to initialize MCP client:', err);
process.exit(1);
});

Step 2: Create Dockerfile
Dockerfile
FROM node:18-alpine

WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install

# Copy application
COPY . .

# Expose port
EXPOSE 3000

# Run bridge
CMD ["node", "mcp-http-bridge.js"]
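The Dockerfile copies a package.json that hasn't been created yet, so the image build would fail without one. A minimal version for the bridge (the version ranges are illustrative; pin whatever your project actually uses):

```json
{
  "name": "mcp-http-bridge",
  "version": "1.0.0",
  "main": "mcp-http-bridge.js",
  "dependencies": {
    "express": "^4.18.0",
    "@modelcontextprotocol/sdk": "^1.0.0"
  }
}
```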
Step 3: Create docker-compose.yml
docker-compose.yml
version: '3.8'
services:
mcp-bridge:
build: .
ports:
- "3000:3000"
environment:
- GITHUB_TOKEN=${GITHUB_TOKEN}
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
      retries: 3

Step 4: Configure Environment
.env
GITHUB_TOKEN=ghp_your_token_here
Step 5: Start the Bridge
Terminal
# Build and start
docker-compose up -d

# Verify it's running
curl http://localhost:3000/health

# List available tools
curl http://localhost:3000/tools
Step 6: Create ChatGPT Plugin Manifest
plugin-manifest.json
{
"schema_version": "v1",
"name_for_model": "github_mcp",
"name_for_human": "GitHub MCP",
"description_for_model": "Access GitHub repositories via Model Context Protocol. Create issues, search code, manage PRs, and read files.",
"description_for_human": "Manage GitHub repositories through AI conversations",
"auth": {
"type": "none"
},
"api": {
"type": "openapi",
"url": "http://localhost:3000/openapi.json"
},
"logo_url": "http://localhost:3000/logo.png",
"contact_email": "support@example.com",
"legal_info_url": "http://localhost:3000/legal"
}

Limitation
ChatGPT plugins require publicly accessible URLs. For local development, use tools like ngrok to create a tunnel: ngrok http 3000
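The manifest above points at /openapi.json, but the bridge from Step 1 never serves that route. One way to close the gap is to generate a minimal spec from the MCP tool list. This is a sketch, not a complete implementation: it assumes each tool's inputSchema is a plain JSON Schema object, and the generated document should be validated against the OpenAPI 3 spec before you publish it.

```javascript
// Build a minimal OpenAPI document from MCP tool definitions.
// Sketch only: tool names and inputSchema shapes come from whichever
// MCP server you run, so validate the output before relying on it.
function buildOpenAPISpec(tools, baseUrl) {
  const paths = {};
  for (const tool of tools) {
    // Each tool maps to the bridge's POST /tools/:toolName endpoint
    paths[`/tools/${tool.name}`] = {
      post: {
        operationId: tool.name,
        summary: tool.description || tool.name,
        requestBody: {
          content: {
            'application/json': {
              schema: {
                type: 'object',
                properties: {
                  arguments: tool.inputSchema || { type: 'object' }
                }
              }
            }
          }
        },
        responses: { '200': { description: 'Tool result' } }
      }
    };
  }
  return {
    openapi: '3.0.1',
    info: { title: 'MCP HTTP Bridge', version: '1.0.0' },
    servers: [{ url: baseUrl }],
    paths
  };
}

module.exports = { buildOpenAPISpec };
```

Wire it into the bridge with a route along the lines of `app.get('/openapi.json', async (req, res) => { const { tools } = await mcpClient.listTools(); res.json(buildOpenAPISpec(tools, 'https://your-public-url')); })`, using your ngrok or deployed URL as the base.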
Method 2: OpenAI API Integration
This approach uses the OpenAI SDK to forward function calls from ChatGPT to MCP servers programmatically.
Prerequisites
- OpenAI API key: from platform.openai.com
- Python 3.8+ (or Node.js 18+)
Step 1: Install Dependencies
Python
pip install openai mcp
Step 2: Create MCP-to-OpenAI Bridge
chatgpt_mcp_bridge.py
import os
import json
import asyncio
from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
# Initialize OpenAI client
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
# MCP server configuration
MCP_SERVER_CONFIG = {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")
}
}
from contextlib import AsyncExitStack

class MCPChatGPTBridge:
    def __init__(self):
        self.mcp_session = None
        self.mcp_tools = {}
        self._exit_stack = AsyncExitStack()

    async def connect_mcp(self):
        """Initialize the MCP client connection and keep it open"""
        server_params = StdioServerParameters(
            command=MCP_SERVER_CONFIG["command"],
            args=MCP_SERVER_CONFIG["args"],
            env=MCP_SERVER_CONFIG["env"]
        )
        # Enter the transport and session contexts on an exit stack so the
        # connection outlives this method. A plain `async with` block here
        # would close the session as soon as connect_mcp returned, leaving
        # self.mcp_session unusable for later tool calls.
        read, write = await self._exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        self.mcp_session = await self._exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        # Initialize and list tools
        await self.mcp_session.initialize()
        tools_response = await self.mcp_session.list_tools()
        # Store tools
        for tool in tools_response.tools:
            self.mcp_tools[tool.name] = tool
        print(f"Connected to MCP. Available tools: {list(self.mcp_tools.keys())}")
def convert_mcp_tools_to_openai_format(self):
"""Convert MCP tool schemas to OpenAI function format"""
openai_tools = []
for tool_name, tool in self.mcp_tools.items():
openai_tool = {
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": {
"type": "object",
"properties": tool.inputSchema.get("properties", {}),
"required": tool.inputSchema.get("required", [])
}
}
}
openai_tools.append(openai_tool)
return openai_tools
async def call_mcp_tool(self, tool_name, arguments):
"""Execute MCP tool call"""
result = await self.mcp_session.call_tool(tool_name, arguments)
return result.content
async def chat_with_mcp(self, user_message, conversation_history=None):
"""Chat with GPT-4 using MCP tools"""
if conversation_history is None:
conversation_history = []
# Add user message
messages = conversation_history + [
{"role": "user", "content": user_message}
]
# Convert MCP tools to OpenAI format
openai_tools = self.convert_mcp_tools_to_openai_format()
# First API call
response = openai_client.chat.completions.create(
model="gpt-4-turbo-preview",
messages=messages,
tools=openai_tools,
tool_choice="auto"
)
response_message = response.choices[0].message
messages.append(response_message)
# Handle tool calls
if response_message.tool_calls:
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
function_args = json.loads(tool_call.function.arguments)
print(f"Calling MCP tool: {function_name}")
print(f"Arguments: {function_args}")
# Execute MCP tool
tool_result = await self.call_mcp_tool(
function_name,
function_args
)
# Add tool result to messages
messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": str(tool_result)
})
# Second API call with tool results
final_response = openai_client.chat.completions.create(
model="gpt-4-turbo-preview",
messages=messages
)
return final_response.choices[0].message.content, messages
return response_message.content, messages
# Example usage
async def main():
bridge = MCPChatGPTBridge()
await bridge.connect_mcp()
# Test queries
conversation = []
# Query 1: Search repositories
response, conversation = await bridge.chat_with_mcp(
"Search my GitHub repositories for ones related to machine learning",
conversation
)
print(f"\nAssistant: {response}\n")
# Query 2: Create an issue
response, conversation = await bridge.chat_with_mcp(
"Create a GitHub issue in my repo 'username/ml-project' titled 'Add model evaluation metrics' with description 'Need to add precision, recall, and F1 score calculations'",
conversation
)
print(f"\nAssistant: {response}\n")
if __name__ == "__main__":
    asyncio.run(main())

Step 3: TypeScript Version (Alternative)
chatgpt-mcp-bridge.ts
import OpenAI from 'openai';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
interface MCPTool {
name: string;
description: string;
inputSchema: any;
}
class MCPChatGPTBridge {
private mcpClient: Client | null = null;
private mcpTools: Map<string, MCPTool> = new Map();
async connectMCP() {
// Create MCP transport
const transport = new StdioClientTransport({
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-github'],
env: {
...process.env,
GITHUB_TOKEN: process.env.GITHUB_TOKEN
}
});
// Create MCP client
this.mcpClient = new Client({
name: 'chatgpt-mcp-bridge',
version: '1.0.0'
}, {
capabilities: {}
});
await this.mcpClient.connect(transport);
// List available tools
const { tools } = await this.mcpClient.listTools();
tools.forEach(tool => {
this.mcpTools.set(tool.name, tool);
});
console.log(`Connected to MCP. Tools: ${Array.from(this.mcpTools.keys()).join(', ')}`);
}
convertMCPToolsToOpenAI(): OpenAI.Chat.ChatCompletionTool[] {
return Array.from(this.mcpTools.values()).map(tool => ({
type: 'function',
function: {
name: tool.name,
description: tool.description,
parameters: tool.inputSchema
}
}));
}
async callMCPTool(toolName: string, args: any): Promise<any> {
if (!this.mcpClient) {
throw new Error('MCP client not connected');
}
const result = await this.mcpClient.callTool({
name: toolName,
arguments: args
});
return result.content;
}
async chat(userMessage: string, history: OpenAI.Chat.ChatCompletionMessageParam[] = []) {
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
...history,
{ role: 'user', content: userMessage }
];
const tools = this.convertMCPToolsToOpenAI();
// First API call
const response = await openai.chat.completions.create({
model: 'gpt-4-turbo-preview',
messages,
tools,
tool_choice: 'auto'
});
const responseMessage = response.choices[0].message;
messages.push(responseMessage);
// Handle tool calls
if (responseMessage.tool_calls) {
for (const toolCall of responseMessage.tool_calls) {
const functionName = toolCall.function.name;
const functionArgs = JSON.parse(toolCall.function.arguments);
console.log(`Calling MCP tool: ${functionName}`);
const toolResult = await this.callMCPTool(functionName, functionArgs);
messages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(toolResult)
});
}
// Second API call with results
const finalResponse = await openai.chat.completions.create({
model: 'gpt-4-turbo-preview',
messages
});
return {
response: finalResponse.choices[0].message.content,
messages
};
}
return {
response: responseMessage.content,
messages
};
}
}
// Example usage
async function main() {
const bridge = new MCPChatGPTBridge();
await bridge.connectMCP();
const { response } = await bridge.chat(
"Search my GitHub repositories for projects with TypeScript"
);
console.log(`\nAssistant: ${response}\n`);
}
main().catch(console.error);

Step 4: Run the Bridge
Terminal
# Set environment variables
export OPENAI_API_KEY="sk-..."
export GITHUB_TOKEN="ghp_..."

# Run Python version
python chatgpt_mcp_bridge.py

# Or TypeScript version
npm install
npx tsx chatgpt-mcp-bridge.ts
Complete Example: GitHub MCP with ChatGPT
Here's a fuller example that connects to multiple MCP servers at once and namespaces their tools:
multi_server_bridge.py
import os
import asyncio
from typing import Dict, List
from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from contextlib import AsyncExitStack

class MultiServerMCPBridge:
    """Bridge supporting multiple MCP servers"""

    def __init__(self):
        self.openai = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.servers = {}
        self.all_tools = {}
        self._exit_stack = AsyncExitStack()

    async def add_server(self, name: str, config: dict):
        """Add an MCP server to the bridge"""
        server_params = StdioServerParameters(
            command=config["command"],
            args=config["args"],
            env=config.get("env", {})
        )
        # Keep the transport and session on an exit stack so they stay
        # open after this method returns; an `async with` block here
        # would close the session immediately
        read, write = await self._exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        session = await self._exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await session.initialize()
        tools_response = await session.list_tools()
        self.servers[name] = {
            "session": session,
            "tools": {tool.name: tool for tool in tools_response.tools}
        }
        # Merge tools with a server prefix to avoid name collisions
        for tool_name, tool in self.servers[name]["tools"].items():
            prefixed_name = f"{name}_{tool_name}"
            self.all_tools[prefixed_name] = {
                "server": name,
                "original_name": tool_name,
                "tool": tool
            }
        print(f"Added server '{name}' with {len(tools_response.tools)} tools")
def get_openai_tools(self) -> List[dict]:
"""Convert all MCP tools to OpenAI format"""
openai_tools = []
for prefixed_name, tool_info in self.all_tools.items():
tool = tool_info["tool"]
openai_tool = {
"type": "function",
"function": {
"name": prefixed_name,
"description": f"[{tool_info['server']}] {tool.description}",
"parameters": tool.inputSchema
}
}
openai_tools.append(openai_tool)
return openai_tools
async def execute_tool(self, prefixed_name: str, arguments: dict):
"""Execute a tool on the appropriate MCP server"""
tool_info = self.all_tools[prefixed_name]
server_name = tool_info["server"]
original_name = tool_info["original_name"]
session = self.servers[server_name]["session"]
result = await session.call_tool(original_name, arguments)
return result.content
# Configuration
SERVERS = {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")}
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
"env": {}
}
}
async def main():
bridge = MultiServerMCPBridge()
# Connect to all servers
for name, config in SERVERS.items():
await bridge.add_server(name, config)
# Example conversation
messages = []
user_input = "List my GitHub repositories and save the names to a file"
messages.append({"role": "user", "content": user_input})
# Call OpenAI with all MCP tools
response = bridge.openai.chat.completions.create(
model="gpt-4-turbo-preview",
messages=messages,
tools=bridge.get_openai_tools(),
tool_choice="auto"
)
# Process response and tool calls...
print(response.choices[0].message.content)
if __name__ == "__main__":
    asyncio.run(main())

Supported MCP Servers
These MCP servers work well with ChatGPT integration:
| Server | Status | Notes |
|---|---|---|
| GitHub | ✓ Excellent | Issues, PRs, code search, and file reads all work |
| Filesystem | ✓ Excellent | Read/write local files |
| PostgreSQL | ✓ Good | Query execution works well |
| Notion | ✓ Good | Page creation and search |
| Slack | ⚠ Partial | Message reading works, webhooks need setup |
| Puppeteer | ⚠ Partial | Browser automation, Docker recommended |
Limitations vs Claude Desktop
While these integrations work, they have limitations compared to Claude Desktop's native MCP support:
| Feature | ChatGPT (Bridge) | Claude Desktop |
|---|---|---|
| Setup Complexity | High (custom code) | Low (JSON config) |
| Configuration | Manual bridge code | Simple JSON file |
| Maintenance | Update bridge regularly | Auto-updates |
| Cost | OpenAI API costs | Included in Pro |
| Performance | Extra latency (API calls) | Native integration |
| Tool Discovery | Manual mapping | Automatic |
| Error Handling | Custom logic needed | Built-in |
| State Management | Complex | Handled natively |
Troubleshooting Common Issues
1. "Module Not Found" Errors
Problem: MCP SDK or OpenAI packages not installed
Solution: Install dependencies:
# Python
pip install openai mcp

# Node.js
npm install @modelcontextprotocol/sdk openai
2. MCP Server Won't Start
Problem: Server process fails to spawn
Solutions:
- Verify npx is available: npx --version
- Check environment variables are set correctly
- Test the server manually: npx -y @modelcontextprotocol/server-github
- Check server logs for error messages
3. OpenAI API Rate Limits
Problem: "Rate limit exceeded" errors
Solution: Implement exponential backoff and request queuing:
import time
from openai import RateLimitError
def call_with_retry(func, max_retries=3):
for attempt in range(max_retries):
try:
return func()
except RateLimitError:
wait = 2 ** attempt
time.sleep(wait)
    raise Exception("Max retries exceeded")

4. Tool Schema Conversion Errors
Problem: MCP tool schemas don't match OpenAI format
Solution: Add schema validation and conversion:
def convert_schema(mcp_schema):
# Handle missing properties
if "properties" not in mcp_schema:
mcp_schema["properties"] = {}
# Ensure required is array
if "required" not in mcp_schema:
mcp_schema["required"] = []
    return mcp_schema

5. Docker Bridge Not Accessible
Problem: ChatGPT can't reach local bridge
Solutions:
- Use ngrok for a public URL: ngrok http 3000
- Deploy the bridge to a cloud host (Railway, Render, Fly.io)
- Configure proper CORS headers in the bridge
- Ensure the health check endpoint responds
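For the CORS point above, here is a minimal middleware sketch for the Express bridge. The allowed origin is an assumption; restrict it to wherever your plugin client actually connects from.

```javascript
// Minimal CORS middleware for the Express bridge.
// ALLOWED_ORIGIN is an assumption -- set it to your real client origin.
const ALLOWED_ORIGIN = 'https://chat.openai.com';

function corsMiddleware(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  // Answer preflight requests directly, without hitting the routes
  if (req.method === 'OPTIONS') {
    res.statusCode = 204;
    return res.end();
  }
  next();
}

module.exports = { corsMiddleware };
```

Register it before the tool routes in mcp-http-bridge.js with app.use(corsMiddleware).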
Performance Optimization
1. Connection Pooling
Reuse MCP connections instead of creating new ones for each request:
class ConnectionPool:
    def __init__(self):
        self._connections = {}

    async def get_connection(self, server_name):
        if server_name not in self._connections:
            # create_mcp_connection is a placeholder for your own
            # connection setup code
            conn = await create_mcp_connection(server_name)
            self._connections[server_name] = conn
        return self._connections[server_name]

pool = ConnectionPool()

2. Caching Tool Schemas
Cache MCP tool schemas to avoid repeated lookups:
from functools import lru_cache
@lru_cache(maxsize=128)
def get_tool_schemas(server_name):
# Cached tool schema lookup
    return fetch_schemas_from_server(server_name)

3. Parallel Tool Execution
When multiple tools are called, execute them in parallel:
import asyncio
async def execute_tools_parallel(tool_calls):
tasks = [
execute_tool(call.function.name, call.function.arguments)
for call in tool_calls
]
results = await asyncio.gather(*tasks)
    return results

Alternative: Use Claude Desktop
If you're evaluating AI assistants for MCP integration, consider Claude Desktop's native support:
WHY CLAUDE DESKTOP IS EASIER
- 5-minute setup: Just edit one JSON config file
- Zero code: No bridge or integration code needed
- Auto-discovery: Tools appear automatically
- Native performance: No API call overhead
- Built-in error handling: Robust connection management
- 70+ servers: Works with all MCP servers out of the box
See our GitHub MCP setup guide for Claude Desktop configuration (takes 5 minutes vs 1-2 hours for ChatGPT).
When to Use Each Approach
Use ChatGPT + MCP If:
- You're already heavily invested in OpenAI ecosystem
- You need programmatic access via OpenAI API
- You're building a custom application on top of ChatGPT
- You have development resources for bridge maintenance
Use Claude Desktop If:
- You want the simplest MCP setup experience
- You're not building a custom application
- You need reliable, production-ready MCP integration
- You want to use multiple MCP servers easily
What's Next?
Now that you understand MCP integration with ChatGPT, explore more: