GPT-5.5 Setup Guide Across Dev Tools

Getting Started with GPT-5.5 Across Your Development Tools

OpenAI just released GPT-5.5, bringing significant improvements in reasoning, code generation, and real-time understanding. Whether you're building in Claude Code, Cursor, Zed, or integrating directly via API, Bedrock, or Vertex, we've got you covered. This guide walks you through the exact configuration changes needed for each platform.

Claude Code

Claude Code offers a seamless integration experience with GPT-5.5 through its model selector. Since Claude Code now supports cross-model inference, you can switch between providers directly in the UI.

Open your Claude Code settings and navigate to the model configuration panel. Select "GPT-5.5" from the dropdown menu under "Available Models." The platform will automatically handle authentication if you've previously connected your OpenAI account.

{
  "model_provider": "openai",
  "model_name": "gpt-5.5",
  "temperature": 0.7,
  "max_tokens": 4096,
  "top_p": 0.9
}

If you're using a custom API key, paste it into the "OpenAI API Key" field in settings. Claude Code will encrypt and store it securely. Once configured, all code generation, analysis, and debugging tasks will automatically route to GPT-5.5.

Cursor

Cursor's model management has been streamlined for GPT-5.5. Open your Cursor settings using Cmd+, (or Ctrl+, on Windows) and search for "model provider."

"models": {
  "default": "gpt-5.5",
  "fast": "gpt-4-turbo",
  "reasoning": "gpt-5.5"
},
"openai": {
  "apiKey": "sk-...",
  "organization": "org-..."
}

Update your `.cursorrules` file to specify GPT-5.5 for specific workflows. For instance, use GPT-5.5 for complex architectural decisions and code refactoring, while keeping GPT-4 Turbo for quick completions to manage costs.

Cursor's inline completions, chat features, and codebase analysis will all respect your model selection. The new reasoning capabilities in GPT-5.5 particularly shine during multi-file refactoring tasks and when analyzing unfamiliar codebases.

Zed

Zed's AI integration supports GPT-5.5 through its native settings. Access the settings file at `~/.config/zed/settings.json` on macOS/Linux or `%APPDATA%\Zed\settings.json` on Windows.

{
  "assistant": {
    "default_model": {
      "provider": "openai",
      "model": "gpt-5.5"
    },
    "openai": {
      "api_key": "sk-..."
    }
  }
}

After updating this configuration, restart Zed to load the changes. You can invoke the assistant using Ctrl+Enter in any file. The inline code suggestions and multi-file context awareness in Zed work exceptionally well with GPT-5.5's improved instruction following.

Zed's lightweight nature means faster context switching—GPT-5.5 handles the complex reasoning while the editor stays responsive. Test it first with a simple code generation task to verify your API key is working correctly.
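
If the assistant stays silent, you can rule out editor configuration by checking the key directly with the OpenAI Python client (a minimal sketch, assuming the openai v1.x package is installed and the key matches the one in Zed's settings):

from openai import OpenAI

client = OpenAI(api_key="sk-...")

# Listing available models is a cheap way to confirm the key authenticates.
for model in client.models.list().data[:5]:
    print(model.id)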

OpenAI API Direct Integration

For direct API calls, update your client to specify the GPT-5.5 model identifier in your requests.

import openai

openai.api_key = "sk-..."

response = openai.ChatCompletion.create(
  model="gpt-5.5",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a recursive function to find prime numbers."}
  ],
  temperature=0.7,
  max_tokens=2048
)

If you're using the newer OpenAI Python client (v1.0+), the syntax is slightly different:

from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
  model="gpt-5.5",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a recursive function to find prime numbers."}
  ]
)
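
The generated text sits on the first choice of the response object; with the v1.x client you can print it directly:

print(response.choices[0].message.content)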

Billing for GPT-5.5 API calls is straightforward—check OpenAI's pricing page for current per-token rates. The improved efficiency means you'll often need fewer tokens for complex tasks compared to earlier models.
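
To keep an eye on those per-token costs, inspect the usage object the v1.x client returns with each response (these are the standard field names in the current client; that they carry over unchanged to GPT-5.5 is an assumption):

usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)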

AWS Bedrock

Amazon Bedrock now includes GPT-5.5 through OpenAI's partnership. Access it via the Bedrock console or SDK.

import json

import boto3

bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')

response = bedrock.invoke_model(
  modelId='openai.gpt-5-5',
  body=json.dumps({
    "messages": [
      {"role": "user", "content": "Explain quantum entanglement"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  })
)

result = json.loads(response['body'].read())

Before running this, make sure GPT-5.5 is enabled in your AWS account by requesting access through the Bedrock Model Access page. Once approved, the model ID is `openai.gpt-5-5`. Use IAM roles to manage authentication securely in production environments.
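
On EC2, ECS, or Lambda, the default credential chain picks up the attached role automatically, so no keys need to appear in code. If you have to assume a specific role explicitly, here is a minimal STS sketch; the role ARN and session name are placeholders:

import boto3

# Exchange the caller's credentials for temporary ones scoped to the Bedrock role.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/bedrock-invoke-role",
    RoleSessionName="gpt55-bedrock-session",
)["Credentials"]

# Build the runtime client from the temporary credentials.
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)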

Google Vertex AI

Google Vertex AI offers GPT-5.5 access through a partnership model. Install the Vertex AI SDK (`pip install google-cloud-aiplatform`), then initialize it with your project and region.

import vertexai
from vertexai.generative_models import GenerativeModel

# Initialize the SDK with your GCP project and region (replace the placeholders).
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gpt-5.5")

response = model.generate_content(
  "Build a REST API in Python using FastAPI",
  generation_config={
    "temperature": 0.7,
    "top_p": 0.9,
    "max_output_tokens": 2048
  }
)

print(response.text)

Authenticate using Google Cloud credentials: set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to your service account JSON file. Vertex AI handles billing through your Google Cloud account, integrating seamlessly with other GCP services.
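
If you would rather set that variable from code (for example in a notebook), do it before the SDK resolves credentials; the path below is a placeholder:

import os

# Point Google's auth libraries at your service account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"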

Start with these configurations today and unlock GPT-5.5's enhanced reasoning and code generation capabilities across your entire development workflow.

Now you know more than 99% of people. — Sara Plaintext