Claude Opus 4.7 is live, and the fastest way to adopt it is to change one thing first everywhere: the model name. The new model string is claude-opus-4-7, and Anthropic kept base pricing the same as Opus 4.6 ($5/MTok input, $25/MTok output). The practical caveat is that 4.7 can use more tokens on the same input in some workloads, so do the swap, then validate cost and behavior with real tasks.

This guide shows the exact setup change across Claude Code, Cursor, Zed, direct API usage, Amazon Bedrock, and Google Vertex AI. Each section includes a copy-paste config example plus one “don’t get burned” note.

Claude Code

If you’re already using Claude Code, this is usually the easiest migration. You can switch models at runtime or set the default in config. If your setup supports effort controls, start with high or xhigh for harder coding tasks.

# Runtime switch inside Claude Code
/model claude-opus-4-7

# Optional: check current model
/status

If you use a local config file for defaults, update the model field:

{
  "model": "claude-opus-4-7",
  "effort": "high"
}

Gotcha: Opus 4.7 follows instructions more literally than 4.6. If your old prompts were vague, you may see “technically correct but not what I meant” outputs. Tighten instructions instead of blaming the model.

Cursor

In Cursor, model selection is typically done in the model picker and then persisted in workspace or user settings, depending on your setup. If your team pins models in settings, update that value directly.

{
  "cursor.aiModel": "claude-opus-4-7"
}

If your org uses project-level settings checked into the repo, make the same change there so everyone gets consistent behavior.

{
  "ai": {
    "defaultModel": "claude-opus-4-7"
  }
}

Gotcha: Cursor workflows that rely on long context plus tool calls can change token usage after upgrade. Track “cost per completed task,” not just per-request latency.
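A sketch of that metric, assuming you log each request's usage counters plus whether the task ultimately completed; the record shape is an assumption about your own logging, and the prices are the per-MTok Opus rates from the intro:

```python
# Cost per completed task, computed from the usage counters each
# response returns. Prices are the article's per-MTok Opus rates;
# the record shape here is an assumption about your own logging.
def cost_per_task(usage_records, input_price=5.0, output_price=25.0):
    spend = sum(
        u["input_tokens"] / 1e6 * input_price
        + u["output_tokens"] / 1e6 * output_price
        for u in usage_records
    )
    completed = sum(1 for u in usage_records if u["completed"])
    return spend / completed if completed else float("inf")

# Invented sample numbers: 4.7 burns more tokens per request here,
# but completes more tasks, so it wins on the metric that matters.
opus_46_runs = [
    {"input_tokens": 12000, "output_tokens": 3000, "completed": True},
    {"input_tokens": 15000, "output_tokens": 4000, "completed": False},
]
opus_47_runs = [
    {"input_tokens": 14000, "output_tokens": 5000, "completed": True},
    {"input_tokens": 16000, "output_tokens": 6000, "completed": True},
]
```

With these invented numbers, per-request spend goes up after the upgrade, but cost per completed task goes down, which is the comparison that should drive the decision.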

Zed

Zed’s AI assistant configuration depends on whether you’re using Anthropic directly or a routed provider. The key change is the same either way: set the default model to Opus 4.7.

{
  "assistant": {
    "provider": "anthropic",
    "default_model": "claude-opus-4-7"
  }
}

If you maintain separate model profiles for chat vs edit/refactor, keep Sonnet or a cheaper model for lightweight operations and reserve Opus 4.7 for complex tasks.

{
  "assistant": {
    "profiles": {
      "fast": { "model": "claude-sonnet-4-6" },
      "deep": { "model": "claude-opus-4-7" }
    }
  }
}

Gotcha: Don’t force Opus on every keystroke-level assist event. You’ll get great quality and terrible economics.

Anthropic API

For direct API users, the migration is explicit: send "model": "claude-opus-4-7". If you already have robust retry and tool-calling wrappers, this is a low-friction change.

curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-opus-4-7",
    "max_tokens": 2048,
    "messages": [
      {"role": "user", "content": "Refactor this module and explain tradeoffs."}
    ]
  }'

If your SDK supports effort/task budget controls, add them for long-running agent tasks:

{
  "model": "claude-opus-4-7",
  "effort": "xhigh",
  "task_budget_tokens": 120000
}
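If you build request bodies in code rather than hand-writing JSON, the curl example above maps to a small helper. A minimal sketch; the effort and task_budget_tokens fields are the optional controls discussed in this section, so confirm the exact field names against your SDK docs before shipping:

```python
import json

# Build the Messages API body from the curl example above. The effort
# and task_budget_tokens fields are the optional controls discussed in
# this section; verify the exact field names against your SDK docs.
def build_request(prompt, effort=None, task_budget_tokens=None):
    body = {
        "model": "claude-opus-4-7",
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }
    if effort is not None:
        body["effort"] = effort
    if task_budget_tokens is not None:
        body["task_budget_tokens"] = task_budget_tokens
    return json.dumps(body)

payload = build_request(
    "Refactor this module and explain tradeoffs.",
    effort="xhigh",
    task_budget_tokens=120000,
)
```

Keeping the optional fields behind None checks means the same helper serves both plain requests and long-running agent tasks.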

Gotcha: Tokenizer behavior changed in 4.7. The same prompt may map to more tokens depending on content type. Measure on production-like payloads before you lock budgets.

Amazon Bedrock

On Bedrock, the migration comes down to the model ID in your inference call. Use the Opus 4.7 model ID available in your region/account, and keep the request body in the same structure you used for Anthropic messages on Bedrock.

{
  "modelId": "anthropic.claude-opus-4-7",
  "contentType": "application/json",
  "accept": "application/json",
  "body": {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "messages": [
      {"role": "user", "content": "Analyze this CI failure and propose a fix."}
    ]
  }
}
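If you call Bedrock from Python, the request above maps directly onto boto3's invoke_model kwargs. A sketch; the short model ID is this article's example, so substitute the exact region-scoped ID from your catalog:

```python
import json

# Map the request above onto boto3's invoke_model kwargs. The short
# model ID is this article's example; copy the exact region-scoped ID
# from your Bedrock model catalog before shipping.
def build_invoke_kwargs(model_id, prompt):
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 2048,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

kwargs = build_invoke_kwargs(
    "anthropic.claude-opus-4-7",
    "Analyze this CI failure and propose a fix.",
)
# Then: boto3.client("bedrock-runtime").invoke_model(**kwargs)
```

Note that the body is a JSON string, not a dict; invoke_model expects serialized bytes or a string.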

If your Bedrock account shows a fully versioned model ID, use that exact string from the Bedrock model catalog for your region and replace the previous Opus 4.6 entry in config.

{
  "llm": {
    "provider": "bedrock",
    "modelId": "anthropic.claude-opus-4-7-...regional-version..."
  }
}

Gotcha: Bedrock model IDs are region-scoped and versioned differently across rollouts. Always copy from the console/API list instead of hardcoding from blog posts.

Google Cloud Vertex AI

For Vertex, switch the model reference to the Claude Opus 4.7 publisher model your project has access to. If you already integrated Anthropic models on Vertex, this is mostly a model-name update.

{
  "model": "publishers/anthropic/models/claude-opus-4-7",
  "messages": [
    {"role": "user", "content": "Generate a migration plan with risk controls."}
  ],
  "max_tokens": 2048
}

If your tooling stores model aliases, map your old alias forward:

{
  "models": {
    "opus_primary": "claude-opus-4-7"
  }
}
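If your tooling resolves those aliases in code, the mapping above is a single lookup: call sites keep asking for opus_primary and only the mapping moves forward. A sketch, with the alias name mirroring the example above:

```python
# One place to change: call sites keep asking for "opus_primary" and
# only this mapping moves forward on upgrade day.
MODEL_ALIASES = {"opus_primary": "claude-opus-4-7"}

def resolve_model(alias):
    # Fail loudly on an unknown alias rather than silently picking a model.
    return MODEL_ALIASES[alias]
```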

Gotcha: Vertex availability can vary by region and org policy. Validate quota, model access, and latency in your target region before a full cutover.

Recommended rollout pattern (so you don’t regret Friday deploys)

Do this in order:

1) Swap model ID only.
2) Run regression suite on real prompts/tools.
3) Compare completion rate + tool error rate + cost/task.
4) Tune effort level (high vs xhigh) by workload class.
5) Roll out gradually (10% → 50% → 100%).
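Step 5 can be a few lines of deterministic bucketing, so a given user or session stays on the same model across requests while you ramp the percentage. A sketch; the 4.6 model string is assumed from the article's naming pattern:

```python
import hashlib

# Deterministic percentage rollout: hash a stable request key (user or
# session id) into a 0-99 bucket and send that slice to the new model.
# The old model string is assumed from the article's naming pattern.
def pick_model(request_key, rollout_pct):
    bucket = int(hashlib.sha256(request_key.encode()).hexdigest(), 16) % 100
    return "claude-opus-4-7" if bucket < rollout_pct else "claude-opus-4-6"
```

Hashing rather than random sampling means the 10% cohort at stage one is a strict subset of the 50% cohort at stage two, which keeps per-user comparisons clean.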

Focus on the metrics that matter in production: successful task completion, retries, tool-call failures, human correction time, and token spend per resolved issue. Opus 4.7’s real value shows up in long, messy tasks where older models needed babysitting.

When not to switch everything to Opus 4.7

If your workload is mostly short chat responses, lightweight autocomplete, or low-stakes content generation, keep a cheaper model as default and route only hard tasks to Opus 4.7. This is the cleanest way to get the quality jump without blowing up your cost profile.
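That routing policy can be as simple as a lookup table. A sketch; the task class names are illustrative, and the Sonnet string is borrowed from the Zed profile example earlier:

```python
# Route by workload class: cheap default, Opus 4.7 only where it earns
# its cost. Class names are illustrative; the Sonnet string is borrowed
# from the Zed profile example earlier in this article.
ROUTES = {
    "autocomplete": "claude-sonnet-4-6",
    "chat": "claude-sonnet-4-6",
    "refactor": "claude-opus-4-7",
    "debug": "claude-opus-4-7",
}

def route(task_class):
    # Unknown classes fall back to the cheap default, never to Opus.
    return ROUTES.get(task_class, "claude-sonnet-4-6")
```

Defaulting unknown classes to the cheap model is the safe failure mode: a misclassified task costs you a retry, not an Opus bill.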

Bottom line: the exact config change is simple everywhere (point to claude-opus-4-7), but the smart migration is about routing and measurement. Treat Opus 4.7 as your “hard problems” engine, and you’ll feel the upgrade where it actually counts.

Now you know more than 99% of people. — Sara Plaintext