Most people running Claude Opus 4.7 are leaving the majority of its capability on the table — not because the model is limited, but because they're prompting it like Opus 4.6 or, worse, like ChatGPT. This guide is a practical, ten-step rewire.
Opus 4.7 (released 16 April 2026) is more literal, more effort-sensitive, and more direct by default than its predecessors. Each technique below is calibrated to those specific behaviors.
Most people jump straight to advanced techniques and skip the fundamentals. That's backwards. Read these in order — each one stacks on the last. The first three alone will lift the floor of your outputs more than any clever trick further down.
The most expensive prompting mistake is overpaying or underpaying for intelligence. Anthropic's current lineup splits cleanly into three roles. Pick the one that fits the job, not the one you used last time.
Opus 4.7: Long-horizon agentic work, complex reasoning, knowledge work, high-resolution vision, memory-intensive tasks. Slower and more expensive ($5 / MTok input, $25 / MTok output), but nothing else in the generally available lineup matches it when the task demands real thinking.
Sonnet: Strong reasoning, faster, cheaper. Covers roughly 80% of everyday tasks well. Start here by default and only escalate to Opus when the output disappoints or the task genuinely needs more depth.
Haiku: Fastest, cheapest, ideal for high-volume, straightforward tasks — classification, extraction, summarization, simple lookups. Drop down to Haiku whenever speed and cost matter more than nuance.
Treat these like a checklist. The first three are non-negotiable. The rest are situational, but every one of them will sharpen the output on the right task. Each rule has the same shape: what it is, why it matters in Opus 4.7 specifically, and a worked example.
The effort parameter controls how much intelligence the model brings to a task. Unlike earlier versions, where Claude tended to overperform regardless, Opus 4.7 respects effort levels strictly — especially at the low end. As one engineering leader put it: low-effort Opus 4.7 is roughly equivalent to medium-effort Opus 4.6. Set effort deliberately for every use case.
```python
# Python · Anthropic SDK
client.messages.create(
    model="claude-opus-4-7",
    max_tokens=64000,
    thinking={"type": "adaptive"},
    output_config={"effort": "xhigh"},
    messages=[{"role": "user", "content": "..."}],
)
```
At effort max or xhigh, set max_tokens to at least 64k — the model needs room to think across tool calls and subagents.
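Setting effort "deliberately for every use case" is easier when the mapping lives in one place. A minimal sketch: the effort names, `max_tokens` floor, and request shape follow the snippet above, but the task categories and the budgets below xhigh are illustrative assumptions, not Anthropic guidance.

```python
# Illustrative mapping from task type to effort level and token budget.
# The categories and the sub-64k budgets are assumptions; the effort names
# and the 64k floor for xhigh come from the guidance above.
EFFORT_PROFILES = {
    "classification": {"effort": "low", "max_tokens": 4000},
    "summarization": {"effort": "medium", "max_tokens": 8000},
    "analysis": {"effort": "high", "max_tokens": 32000},
    "agentic": {"effort": "xhigh", "max_tokens": 64000},
}

def request_params(task_type: str) -> dict:
    """Build kwargs for client.messages.create; unknown tasks fall back to medium."""
    profile = EFFORT_PROFILES.get(task_type, EFFORT_PROFILES["summarization"])
    return {
        "model": "claude-opus-4-7",
        "max_tokens": profile["max_tokens"],
        "thinking": {"type": "adaptive"},
        "output_config": {"effort": profile["effort"]},
    }
```

The point of the table is review pressure: when every effort level is chosen in one dict, it is hard to ship a task at the wrong level by accident.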
The single highest-leverage move is specificity. Opus 4.7 is extremely capable but extremely literal. Vague prompts get scoped, not generalized. If you want an instruction applied across all sections, not just the first, you have to say so. If you want "above and beyond" output, you have to request it explicitly.
Anthropic's own framing: think of your prompt as instructions to a brilliant but literal new hire on their first day. They'll do exactly what you say — so say exactly what you mean.
Vague: "Write about market positioning."

Specific: "Analyze the 3 most effective market positioning strategies for B2B SaaS companies targeting mid-market in a crowded category. For each strategy, explain what's driving its effectiveness, provide one specific company example, and assess whether it's likely to strengthen or weaken over the next 18 months. Apply this framework to all three strategies, not just the first."
The difference isn't more words. It's more specificity, explicit scope, and a format directive at the end.
This is Claude's structural superpower, and almost nobody uses it. Claude was specifically trained to recognize XML tags as structural markers. When your prompt has multiple components — context, instructions, data, constraints, output format — XML tags prevent the model from mixing them up.
```
<context>
You are helping me evaluate a potential Series A investment. The company is a vertical SaaS business targeting logistics operators, currently at $2.4M ARR growing 15% MoM.
</context>

<instructions>
Analyze the three key risks that most commonly derail vertical SaaS companies at this stage. For each risk, explain the warning signs and what a founder should be doing to mitigate them. Apply this analysis to all three risks, not just the most obvious one.
</instructions>

<constraints>
- Be direct. Give me your honest assessment, not a balanced "it depends."
- Use specific examples from real companies where possible.
- Flag any assumptions you're making.
- Maximum 600 words.
</constraints>
```
Tag names are flexible — there is no magic set. Use whatever makes semantic sense: <background>, <rules>, <examples>, <output_format>. Consistency across your own prompts matters more than the specific names you choose.
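Since consistency across your own prompts matters more than the tag names, one option is to generate the tags programmatically instead of typing them. A small sketch — these helpers are illustrative, not part of the Anthropic SDK:

```python
def tagged(tag: str, body: str) -> str:
    """Wrap body in an XML-style tag pair on its own lines."""
    return f"<{tag}>\n{body.strip()}\n</{tag}>"

def build_prompt(**sections: str) -> str:
    """Assemble named sections into one tagged prompt, in the order given."""
    return "\n\n".join(tagged(tag, body) for tag, body in sections.items())

prompt = build_prompt(
    context="Series A evaluation of a vertical SaaS business at $2.4M ARR.",
    instructions="Analyze the three key risks at this stage.",
    constraints="Maximum 600 words. Flag assumptions explicitly.",
)
```

Because keyword-argument order is preserved, the sections always arrive in the order you wrote them, and a typo'd tag name becomes a one-line fix instead of a hunt through pasted prompts.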
If one technique consistently separates good outputs from great ones, it's this: show Claude what good looks like. Instead of describing tone, format, or style in abstract terms, provide two or three concrete examples. The model will pattern-match against them far more reliably than it will follow descriptive instructions.
Wrap each example in <example> tags (multiple examples in <examples>) so they're cleanly distinguished from the instructions.
```
<examples>
<example>
Input: "We need to cut 20% of the engineering budget"
Output: "Reducing engineering spend by 20% requires prioritization across three areas: contractor headcount, infrastructure costs, and tooling licenses. Here's a phased approach that preserves our two highest-impact product initiatives..."
</example>
</examples>

Now analyze this situation using the same approach: "We need to extend our runway by 6 months without reducing headcount"
```
Anthropic recommends three to five examples for best results. You can also ask Claude to evaluate your examples for relevance and diversity, or to generate additional ones based on your initial set.
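If you maintain a pool of examples, rendering them into the `<example>`/`<examples>` structure above is mechanical. A hedged sketch (the helper is illustrative, not an SDK function):

```python
def examples_block(pairs) -> str:
    """Render (input, output) pairs as <example> entries inside <examples>."""
    rendered = "\n".join(
        f"<example>\nInput: {inp}\nOutput: {out}\n</example>" for inp, out in pairs
    )
    return f"<examples>\n{rendered}\n</examples>"

block = examples_block([
    ("We need to cut 20% of the engineering budget", "Phased reduction plan..."),
    ("We need to extend runway by 6 months", "Three cost levers..."),
])
```

Keeping examples as plain (input, output) pairs also makes it easy to swap in the three to five most relevant ones per task rather than pasting the same block everywhere.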
For complex problems requiring analysis, multi-step reasoning, or strategic judgment, telling Claude to work through its reasoning before producing a final answer dramatically improves accuracy.
Simplest version: add "Think through this step by step before giving your final answer." The structured version uses tags to separate reasoning from output:
```
<instructions>
Evaluate whether we should expand into the Southeast Asian market this year.

Before giving your recommendation, work through the analysis inside <analysis> tags. Consider: market size and growth trajectory, regulatory requirements by country, competitive landscape, our current operational capacity, and capital requirements vs. expected payback period.

Then provide your final recommendation with a clear resource allocation suggestion.
</instructions>
```
Forcing visible reasoning prevents Claude from pattern-matching to the most likely answer and back-filling justification after the fact.
At effort high and xhigh, deep reasoning is largely automatic for demanding tasks. Extended thinking with a fixed budget_tokens is no longer supported — adaptive thinking is the only thinking-on mode, and it reliably outperforms the old fixed-budget approach.
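One practical payoff of the tagged-reasoning pattern is that you can separate the reasoning from the final answer in code, for logging or for showing only the conclusion to end users. A minimal sketch, assuming the response text contains at most one `<analysis>` block as requested above:

```python
import re

def split_analysis(response: str) -> tuple[str, str]:
    """Split a response into (reasoning, final_answer) using <analysis> tags.
    Returns ("", response) when no analysis block is present."""
    match = re.search(r"<analysis>(.*?)</analysis>", response, re.DOTALL)
    if not match:
        return "", response.strip()
    reasoning = match.group(1).strip()
    final = (response[:match.start()] + response[match.end():]).strip()
    return reasoning, final
```

Logging the reasoning half separately also gives you an audit trail when a recommendation later turns out to be wrong.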
Claude can only work with what you give it. The more relevant context you include — documents, data, company background, goals, audience — the more tailored and accurate the output becomes. Don't make the model guess what you already know.
```
<background>
Our company builds financial infrastructure for neobanks in emerging markets. We're Series B, $12M ARR, primarily serving West Africa and Southeast Asia. Main competitors are Banking-as-a-Service players like Railsbank and Synapse, plus local incumbents. We differentiate on compliance coverage and local payment rail integrations.
</background>

<data>
[Paste Q1 metrics, customer feedback, churn data, whatever is relevant]
</data>

<task>
Based on this context, identify our three biggest growth opportunities for the next two quarters.
</task>
```
Don't leave the shape of Claude's response to chance. If you want a table, ask for a table. If you want a specific word count, state it. If you want an executive brief with defined sections, describe each section.
```
<output_format>
Respond with:
1. A one-paragraph executive summary (3-4 sentences max)
2. A comparison block with: Factor | Current State | Target | Gap
3. A "Recommended Actions" section with 3 specific next steps, ranked by impact
</output_format>
```
Explicit format specs eliminate the most common frustration with AI output: getting a 2,000-word essay when you wanted a concise brief, or bullet points when you needed flowing analysis.
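A side benefit of an explicit format spec is that compliance becomes machine-checkable, which matters once the prompt runs unattended. A cheap post-hoc check against the spec above — the marker strings are taken from that example and would change per prompt:

```python
# Markers taken from the <output_format> spec above; adjust per prompt.
REQUIRED_MARKERS = (
    "Factor | Current State | Target | Gap",  # comparison block header
    "Recommended Actions",                    # ranked next-steps section
)

def matches_format(response: str) -> bool:
    """Cheap post-hoc check that a response honored the format spec."""
    return all(marker in response for marker in REQUIRED_MARKERS)
```

In a pipeline, a failed check is a natural trigger for one retry with the format section repeated verbatim.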
Telling Claude what not to do is as important as telling it what to do. Without constraints, the model defaults to its training patterns, which often means overly formal, hedge-heavy prose that sounds like a committee wrote it.
```
<constraints>
- Do NOT open with "In today's rapidly evolving landscape" or any variant
- Skip the preamble. Start with the most important insight.
- No bullet points — write in prose paragraphs
- If you're uncertain about a claim, flag it explicitly rather than hedging everything
- Maximum 500 words
- Be direct. I want your honest assessment, not a balanced "it depends."
</constraints>
```
Opus 4.7 calibrates response length to how complex it judges the task to be — shorter on simple lookups, much longer on open-ended analysis. If your use case requires a specific verbosity, tune it explicitly.
To decrease verbosity:

```
Provide concise, focused responses. Skip non-essential context and keep examples minimal.
```
Key insight: positive examples showing appropriate concision work better than negative instructions like "don't be verbose." Show Claude a response at the length and depth you want, and it will match that pattern far more reliably than it will follow abstract length instructions.
Here's what a properly structured Opus 4.7 prompt looks like when all ten techniques are stacked. Notice how each block does one job — context, instructions with embedded reasoning request, constraints, output format — and nothing leaks across.
```
<context>
I'm the CEO of a B2B fintech startup ($8M ARR, 45 employees). We're deciding whether to raise a Series B now or extend runway and raise in 18 months. Current runway: 14 months. Revenue growth: 12% MoM. CAC payback: 8 months.
</context>

<instructions>
Analyze both timing options. Before giving your recommendation, work through the trade-offs in <analysis> tags, considering:
- Current market conditions for fintech Series B rounds
- Our specific metrics relative to typical Series B benchmarks
- The risk/reward of raising now vs. in 18 months at potentially better metrics
- What we should use the 18 months to optimize if we extend

Then provide a clear recommendation with a specific action plan. Apply your analysis to both options equally — don't weight one by default.
</instructions>

<constraints>
- Be direct. Give me your honest read, not a balanced "it depends."
- Use specific benchmarks from comparable fintech Series B raises where possible.
- Flag any assumptions you're making about market conditions.
- Keep the total response under 700 words.
</constraints>

<output_format>
1. Analysis (in <analysis> tags)
2. Recommendation (2-3 sentences, clear and direct)
3. 90-day action plan (5 specific actions, whether we raise now or extend)
</output_format>
```
This prompt is clear, structured, specific, and constrained. It tells Claude exactly what to do, how to think about it, what to avoid, and how to format the response. The output is categorically different from "Should I raise a Series B now or wait?"
1. Model picked deliberately — not whichever was last.
2. Effort level set to match task difficulty.
3. Context, instructions, constraints, and output format each in their own tag.
4. At least one concrete example of what "good" looks like.
5. Reasoning explicitly requested for anything analytical.
6. The don'ts named — preamble, hedging, structure you don't want.
7. Length stated as a positive target, not a negative prohibition.
Prompting is one lever. The other big levers on this site cover the surfaces Claude prompts run through.
Every Anthropic doorway in one place — web app, Claude Code, API, Agent SDK, MCP, Skills, Artifacts, Tool Use, Memory, Computer Use.
Skills: Package the prompt patterns you keep typing into reusable slash commands — the highest-leverage move once you've internalised the rules above.
Thinking: How adaptive thinking replaced fixed-budget extended thinking in 4.7, and what that means for how you should structure reasoning prompts.