Introducing GPT-5.5
— OpenAI (@OpenAI) April 23, 2026
A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done.
Now available in ChatGPT and Codex. pic.twitter.com/rPLTk99ZH5
My reaction before you touch settings: this launch is about task completion, not just smarter chat. If GPT-5.5 really does better on long, messy, tool-heavy work, your setup strategy should be route-based, not “flip every default at once.”
My reaction after the embed: treat this like an infra migration. Add GPT-5.5 as a targeted lane for hard coding and agent tasks, keep a rollback model live, and measure completion rate per workflow.
Before all tools: one model ID rule
Use the exact model identifier your account exposes. In the examples below I use gpt-5.5 for readability, but your tenant may expose a variant (for example, dated or provider-prefixed IDs). Copy exactly what your platform returns in the model picker or models API. This is the biggest source of silent failures.
{
"models": {
"default": "gpt-5.4",
"agent_candidate": "gpt-5.5",
"rollback": "gpt-5.4"
}
}
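That rule can be enforced in code rather than by convention. Here is a minimal sketch (the helper name and the variant-matching behavior are my assumptions, not an official API): resolve the exact ID from whatever your models endpoint returns, and fail loudly instead of guessing.

```python
def resolve_model_id(available_ids, preferred):
    """Return the exact model ID to use.

    `available_ids` is the list of IDs your platform's models API returns;
    `preferred` is the ID you want (e.g. "gpt-5.5"). Exact matches win;
    otherwise accept a dated variant like "gpt-5.5-2026-04-23".
    Raise instead of silently substituting a different model.
    """
    if preferred in available_ids:
        return preferred
    variants = sorted(m for m in available_ids if m.startswith(preferred + "-"))
    if variants:
        # Newest dated snapshot, assuming date-suffixed IDs sort ascending.
        return variants[-1]
    raise LookupError(f"{preferred!r} not exposed to this account; refusing to guess")
```

Feed it the IDs from your provider's models API (for example, `client.models.list()` in the OpenAI Python SDK) at startup, and cache the result per deployment.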
Also note rollout timing: OpenAI announced GPT-5.5 in ChatGPT and Codex first, with API availability staged. If your tool depends on API access, wire the config now but gate traffic until your account confirms availability.
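Gating can be as simple as a startup check that only routes to the candidate once your account confirms it exists. A sketch against the config fragment above (the function name is hypothetical):

```python
def pick_model(cfg, candidate_available):
    """Route to the agent candidate only when the account confirms it.

    `cfg` mirrors the JSON config above; `candidate_available` should come
    from a models-API check at startup, never a hardcoded assumption.
    """
    if candidate_available:
        return cfg["models"]["agent_candidate"]
    return cfg["models"]["default"]
```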
Claude Code
If you run Claude Code with multiple providers, the exact change is to add an OpenAI provider entry and point the profile you use for long-horizon coding tasks at GPT-5.5. Keep your prior model as the fallback so you can revert instantly.
{
"providers": {
"openai": {
"apiKeyEnv": "OPENAI_API_KEY",
"model": "gpt-5.5",
"fallbackModel": "gpt-5.4"
}
},
"profiles": {
"agentic-coding": {
"provider": "openai",
"model": "gpt-5.5"
},
"default": {
"provider": "openai",
"model": "gpt-5.4"
}
}
}
Exact config change: set profiles.agentic-coding.model from gpt-5.4 to gpt-5.5. Leave everything else alone for first pass.
Cursor
In Cursor, make the change at both the personal-default and project-override levels. Team mistakes usually happen when one engineer changes the local model while repo-level settings still pin an older one.
{
"cursor.ai.defaultModel": "gpt-5.4",
"cursor.ai.modelOverrides": {
"long_refactor": "gpt-5.5",
"test_debug_cycle": "gpt-5.5",
"quick_chat": "gpt-5.4"
},
"cursor.ai.provider": "openai"
}
Exact config change: in your project settings, update the override keys that map to heavy workflows so they point to gpt-5.5. Don’t change every route on day one.
Useful context: every major lab is now competing on multi-step execution quality, not just response quality.
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.https://t.co/rM77LJejuk
— Anthropic (@AnthropicAI) February 26, 2026
My reaction after this embed: benchmark tables matter less than where your own pipeline fails today. Route GPT-5.5 into those failure points first.
Zed
Zed setups differ by provider plugin, but the exact move is the same: change the model for your assistant profile that handles deep coding or tool-using work.
{
"assistant": {
"provider": "openai",
"default_model": "gpt-5.4",
"profiles": {
"heavy-lift": {
"model": "gpt-5.5"
},
"everyday": {
"model": "gpt-5.4"
}
}
}
}
Exact config change: set assistant.profiles.heavy-lift.model to gpt-5.5. Then reload your editor settings and run a one-prompt model echo check to confirm the active model.
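The echo check itself can be tiny: compare the model the server reports in its response with what you configured, allowing for dated variants. The helper below is a sketch; the commented SDK call is illustrative and may differ for your provider.

```python
def model_matches(reported, expected):
    """True if the server-reported model is the expected one,
    allowing dated variants ("gpt-5.5-2026-04-23" for "gpt-5.5")."""
    return reported == expected or reported.startswith(expected + "-")

# Hypothetical echo check with the OpenAI SDK (adjust to your provider):
# from openai import OpenAI
# resp = OpenAI().responses.create(model="gpt-5.5", input="ping")
# assert model_matches(resp.model, "gpt-5.5"), f"active model is {resp.model}"
```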
OpenAI API
For direct API users, the exact change is in the request model field plus route-level config in your app. Keep one environment variable for candidate traffic so you can canary safely.
{
"model": "gpt-5.5",
"input": "Analyze this repo and implement fixes with tests.",
"max_output_tokens": 4096
}
OPENAI_MODEL_DEFAULT=gpt-5.4
OPENAI_MODEL_AGENT=gpt-5.5
OPENAI_MODEL_ROLLBACK=gpt-5.4
Exact config change: update model from gpt-5.4 to gpt-5.5 only in your agent route handler, not globally, until you have cost and reliability data.
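A sketch of that route-level canary, using the env vars above. Hashing the user ID (the function name and bucketing scheme are my assumptions) keeps each user pinned to one model across requests, which makes completion-rate comparisons meaningful:

```python
import hashlib
import os

def model_for_request(user_id, canary_pct=10):
    """Deterministically bucket a request into canary or default traffic.

    A user hashes into the same bucket every time, so a given user stays
    on one model for the whole canary. Env var names match the fragment
    above; defaults are only a safety net for missing env.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    if bucket < canary_pct:
        return os.environ.get("OPENAI_MODEL_AGENT", "gpt-5.5")
    return os.environ.get("OPENAI_MODEL_DEFAULT", "gpt-5.4")
```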
AWS Bedrock
If you access OpenAI models through Bedrock-style unified infrastructure in your org, the exact change is your model ARN or model ID mapping in the invocation layer. Use the ID shown in your Bedrock catalog for your region.
{
"modelId": "openai.gpt-5-5",
"inferenceConfig": {
"maxTokens": 4096,
"temperature": 0.2
},
"fallbackModelId": "openai.gpt-5-4"
}
Exact config change: change modelId from your GPT-5.4 entry to the GPT-5.5 entry for the same region. If the ID is unavailable, keep mapping untouched and fail closed instead of auto-downgrading silently.
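Fail-closed can be one guard in the invocation layer. A sketch (the function is hypothetical; the ID strings mirror the fragment above, so adapt them to your catalog lookup):

```python
def bedrock_model_id(catalog, wanted="openai.gpt-5-5"):
    """Fail closed: if the target ID is not in this region's catalog,
    raise rather than silently downgrading to the fallback.
    A silent downgrade hides the misconfiguration from monitoring."""
    if wanted not in catalog:
        raise RuntimeError(f"{wanted} not in regional catalog; refusing to auto-downgrade")
    return wanted
```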
This matters because advanced models now sit closer to security-critical workflows than to casual chat.
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software.
— Anthropic (@AnthropicAI) April 7, 2026
It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans.https://t.co/NQ7IfEtYk7
My reaction: whichever lab you use, stronger models increase both upside and blast radius. Your config should include explicit approvals for high-impact actions.
Google Vertex AI
On Vertex, update the model resource name in your endpoint or generation call. Partner model names can be region-specific, so copy from the Vertex model registry exactly.
{
"model": "publishers/openai/models/gpt-5.5",
"parameters": {
"temperature": 0.2,
"maxOutputTokens": 4096
},
"labels": {
"route": "agentic-coding"
}
}
Exact config change: replace the model resource path used by your agentic route from GPT-5.4 to GPT-5.5, then redeploy the endpoint config so old revisions do not keep serving stale model pointers.
Rollout checklist you should actually use
Do this in order: verify the model ID, change the targeted route, canary traffic, test rollback, then expand gradually. If you skip rollback testing, you don't have a rollout plan; you have a hope plan.
{
"rollout": {
"phase1": "10% agent traffic",
"phase2": "30% if completion_rate improves and error_rate stable",
"phase3": "60% after 7 days",
"rollbackTrigger": "error_rate > 2% OR completion_rate drops > 5%"
}
}
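The rollbackTrigger string above is easy to mis-evaluate by hand; encoding it as a function makes the thresholds unambiguous. One sketch, reading "drops > 5%" as percentage points (switch to a relative drop if that matches your dashboards):

```python
def should_rollback(error_rate, completion_rate, baseline_completion_rate):
    """Implements the rollbackTrigger above:
    error_rate > 2% OR completion_rate drops more than 5 percentage
    points versus the pre-rollout baseline. Rates are fractions
    (0.02 == 2%)."""
    error_breach = error_rate > 0.02
    completion_drop = (baseline_completion_rate - completion_rate) > 0.05
    return error_breach or completion_drop
```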
Cost gotcha: even if GPT-5.5 is more token-efficient per task, total spend can still rise if teams delegate larger tasks. Track cost per completed workflow, not cost per call.
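One way to track that metric (the field names are illustrative): divide total spend by distinct completed workflows, so a model that takes on bigger tasks but finishes them is not penalized for higher per-call cost.

```python
def cost_per_completed_workflow(calls):
    """Spend per finished workflow, not per API call.

    `calls` is a list of dicts like
    {"cost_usd": 0.12, "workflow_id": "w1", "completed": True}.
    All spend counts (including failed attempts), but only distinct
    workflows that actually completed divide it."""
    total = sum(c["cost_usd"] for c in calls)
    done = {c["workflow_id"] for c in calls if c["completed"]}
    if not done:
        return float("inf")  # spent money, completed nothing
    return total / len(done)
```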
Final context marker: the frontier trend is converging on execution-heavy AI, so tool config discipline is now a core engineering skill.
A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR
— Anthropic (@AnthropicAI) February 28, 2026
Bottom line: the exact change in every tool is simple. Update the model field for the right workflow. The hard part is controlled rollout, observability, and rollback. Do those three well and GPT-5.5 becomes a productivity upgrade instead of an incident generator.
Now you know more than 99% of people. — Sara Plaintext