This guide will help you quickly set up and start using openai-tool2mcp to bring OpenAI’s powerful built-in tools to your MCP-compatible models.
Before you begin, make sure you have Python 3 with pip installed and an OpenAI API key.

Install the package from PyPI:

pip install openai-tool2mcp

Or install the latest version from source:

git clone https://github.com/alohays/openai-tool2mcp.git
cd openai-tool2mcp
pip install -e .
You can set your API key in one of two ways:
Option 1: Environment Variable
# Linux/macOS
export OPENAI_API_KEY="your-api-key-here"
# Windows (Command Prompt)
set OPENAI_API_KEY=your-api-key-here
# Windows (PowerShell)
$env:OPENAI_API_KEY="your-api-key-here"
Option 2: Configuration File
Create a file named .env in your project directory:
OPENAI_API_KEY=your-api-key-here
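Under the hood, a .env file is just a list of KEY=VALUE pairs. A minimal, stdlib-only sketch of how such a file can be loaded into the environment (openai-tool2mcp may well use a library such as python-dotenv for this; the helper below is only to illustrate the format):

```python
import os
import tempfile

def load_dotenv(path):
    """Load KEY=VALUE pairs from a .env-style file into os.environ.
    Simplified sketch: skips blank lines and comments, strips quotes."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"').strip("'")

# Demo with a temporary file standing in for your project's .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('OPENAI_API_KEY="your-api-key-here"\n')
    path = f.name

load_dotenv(path)
print(os.environ["OPENAI_API_KEY"])  # → your-api-key-here
os.remove(path)
```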
The simplest way to start the server is using the command-line interface:
# Start with all tools enabled
openai-tool2mcp start
# Start with specific tools
openai-tool2mcp start --tools retrieval code_interpreter
The server will start on http://localhost:8000 by default.
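Once the server is running, you can sanity-check it from Python with a small stdlib-only helper (that the root path answers at all is an assumption; the server's exact routes are not documented here):

```python
import urllib.request
import urllib.error

def is_server_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status counts)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server answered, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, timeout, ...

print(is_server_up("http://localhost:8000"))
```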
Configure your MCP-compatible client to connect to your local server:
http://localhost:8000
Claude App supports the Model Context Protocol, making it a perfect client for openai-tool2mcp.
http://localhost:8000
Once configured, you'll see the new tools available in Claude.
Here’s how to use the tools in Claude:
Claude, can you search the web for the latest news about AI regulations?
Claude will use the OpenAI web search tool through your local MCP server to fetch the latest news.
You can also use openai-tool2mcp programmatically in your Python applications:
from openai_tool2mcp import MCPServer, ServerConfig
from openai_tool2mcp.tools import OpenAIBuiltInTools

# Configure the server
config = ServerConfig(
    openai_api_key="your-api-key-here",  # Optional if set in environment
    tools=[
        OpenAIBuiltInTools.WEB_SEARCH.value,
        OpenAIBuiltInTools.CODE_INTERPRETER.value,
    ],
)

# Create and start the server
server = MCPServer(config)
server.start(host="127.0.0.1", port=8000)
You can customize your server with these options:
config = ServerConfig(
    openai_api_key="your-api-key-here",
    tools=["retrieval", "code_interpreter"],  # Enable specific tools
    request_timeout=60,  # Timeout in seconds
    max_retries=5,       # Max retries for failed requests
)
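The request_timeout and max_retries options follow the usual retry-loop semantics: a failed request is retried up to max_retries times before the error is surfaced. A rough, self-contained sketch of that behavior (this is an illustration of the semantics, not openai-tool2mcp's actual implementation):

```python
import time

def call_with_retries(fn, max_retries=5, backoff=0.01):
    """Retry `fn` up to `max_retries` times with exponential backoff.
    Generic sketch illustrating what max_retries means."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_exc

# Demo: a flaky function that succeeds on its third call
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(call_with_retries(flaky, max_retries=5))  # → ok
```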
The CLI provides several configuration options:
openai-tool2mcp start --help
Available options:
--host: Host address to bind to (default: 127.0.0.1)
--port: Port to listen on (default: 8000)
--api-key: OpenAI API key (alternative to the environment variable)
--tools: Space-separated list of tools to enable
--timeout: Request timeout in seconds
--retries: Maximum number of retries for failed requests

You can also run openai-tool2mcp in Docker:
# Build the Docker image
docker build -t openai-tool2mcp .
# Run the container
docker run -p 8000:8000 -e OPENAI_API_KEY="your-api-key-here" openai-tool2mcp
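If you prefer Docker Compose, the equivalent setup looks roughly like this (the service name and the use of an .env file are illustrative, not part of the project):

```yaml
services:
  openai-tool2mcp:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env   # contains OPENAI_API_KEY=...
```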
If you run into problems:
- Make sure your OpenAI API key is set (via OPENAI_API_KEY or the --api-key option) and is valid.
- Check that the server is running and listening on the expected host and port (http://localhost:8000 by default).
- Verify that your MCP client is configured with the correct server URL.
For more detailed troubleshooting, enable debug logs:
openai-tool2mcp start --log-level debug
Now that you have openai-tool2mcp up and running, you can start calling OpenAI's built-in tools from your MCP-compatible client.
For any issues or contributions, visit our GitHub repository.