Table of Contents
  1. Why Claude for whitepaper teardowns
  2. Framework #1 — Six-axis whitepaper teardown
  3. Framework #2 — Tokenomics unlock-pressure model
  4. Framework #3 — Audit report vs. on-chain contract diff
  5. Framework #4 — Team reverse background check
  6. Framework #5 — Competitor matrix
  7. What Claude cannot do
  8. FAQ

Claude × Crypto — 5 Whitepaper Teardown Frameworks (200K Context Power Plays)

"200K context" has been marketing-deck-bullet-pointed to death. But in crypto due diligence the number actually changes the workflow — you can drop the whitepaper, the audit report, screenshots of the team's LinkedIn, and six months of archived tweets into the same conversation and let Claude cross-reference all of it in one pass. Below are five prompt frameworks I have iterated on for half a year.

2026-05-15 · By PromptDeck · ~10 min read
Disclaimer: the prompts below are for research scaffolding and information triage, not investment advice. Claude still hallucinates on the occasional number (unlock percentages, wallet balances) — every quantitative output has to be cross-checked against the source PDF or a block explorer before you act on it.

1. Why Claude for whitepaper teardowns

Over the past six months I ran the same task set against the same 68-page L2 whitepaper on four different models. Editor-graded scores:

Same 68-page whitepaper · same 5-task set · editor-team grading:

| Model | One-shot read? | Tokenomics accuracy | Appendix recall | Tone discipline | Overall |
|---|---|---|---|---|---|
| Claude Sonnet 4.5 | Yes (200K) | 9.2 | 9.0 | 9.0 | 9.1 |
| Claude Opus 4.5 | Yes (200K) | 9.4 | 9.3 | 9.0 | 9.2 |
| GPT-4o | Has to chunk | 7.8 | 7.0 | 7.5 | 7.4 |
| Gemini 2.5 Pro | Yes (1M) | 8.5 | 8.6 | 7.0 | 8.0 |

Gemini has a bigger context window but a shallower read on crypto-specific framing. GPT-4o has to be chunked, and tokenomics numbers across chunks often fail to reconcile. Claude's real edge is not "biggest context" — it is holding detail at full context length. That is what matters.

2. Framework #1 — Six-axis whitepaper teardown

This is the baseline. Every project starts here. Upload the PDF directly to Claude — Sonnet 4.5 is more than enough.

Prompt template:

The attached PDF is the whitepaper for [PROJECT NAME].
Tear it down along the six axes below.

For every axis, use the format: "Quote from the whitepaper → your read."
If something is not stated in the document, mark it "not stated in the whitepaper" — do NOT fill in the gap with general knowledge.

1. Problem statement: what does the project claim to solve? One sentence, in their words.
2. Technical approach: what tech do they use, and how is it different from what already exists?
3. Tokenomics: token utility, total supply, initial allocation.
4. Unlock curve: vesting schedules for team / investors / community.
5. Roadmap: key milestones in the next 12 months, and what has already been delivered.
6. Risk section: what risks does the whitepaper itself disclose?

Final step: list the five things the whitepaper SHOULD have addressed but did not.

That last line — "should have but didn't" — is where most of the alpha lives. In my experience, the same blind spots recur project after project.

3. Framework #2 — Tokenomics unlock-pressure model

Whitepaper unlock curves are almost always described in prose, which makes "which months actually hurt" impossible to eyeball. Pair Claude with Artifacts and it will draw the chart for you.

Prompt template:

Based on the vesting schedule on pages X–Y of the attached whitepaper:

Build a 36-month token-release table with these columns:
- Month (1–36)
- Team unlock (% of total supply)
- Investor unlock
- Ecosystem fund unlock
- Community rewards release
- Total new monthly float
- Cumulative circulating supply (%)

Then, using that table, answer:
1. Which 3 months carry the heaviest unlock pressure, and from which tranche?
2. Holding price flat, what USD value unlocks each month? (Assume FDV = $X.)
3. Which months see unlocks greater than 5× the token's recent average daily trading volume (i.e. mechanical sell pressure)?

Finish with a markdown table plus a 36-month bar chart (Artifacts).

I ran this on an AI-narrative project last year and Claude flagged that "6.2% of team tokens unlock as a single cliff in month 13." That cliff structure (fully locked for 12 months, then a one-shot release in month 13) was buried in a single sentence of the whitepaper, but the chart made it impossible to miss. The token bled 38% that month, on schedule.
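If you want to sanity-check the table Claude produces (per the disclaimer at the top, you should), the same schedule is a few lines of Python. This is a minimal sketch assuming a simple cliff-plus-linear vesting shape; every tranche parameter and the FDV are placeholders, not a real project's numbers:

```python
# Rebuild the release table yourself to verify Claude's arithmetic.
# All tranche parameters below are illustrative placeholders -- pull the
# real cliff/vesting numbers from the whitepaper, not from this sketch.

FDV = 500_000_000  # assumed fully diluted valuation in USD (placeholder)

# tranche -> (share of total supply, cliff in months, linear vesting months after cliff)
TRANCHES = {
    "team":      (0.20, 12, 24),
    "investors": (0.15, 6, 18),
    "ecosystem": (0.25, 0, 36),
    "community": (0.40, 0, 36),
}

def monthly_unlock(share, cliff, vest, month):
    """Fraction of total supply this tranche releases in a given month."""
    if month <= cliff or month > cliff + vest:
        return 0.0
    return share / vest  # simple linear release after the cliff

rows = []
cumulative = 0.0
for m in range(1, 37):
    month_total = sum(monthly_unlock(s, c, v, m) for s, c, v in TRANCHES.values())
    cumulative += month_total
    rows.append((m, month_total, cumulative))

# Flag the three heaviest months, mirroring question 1 of the prompt.
heaviest = sorted(rows, key=lambda r: r[1], reverse=True)[:3]
for m, unlock, cum in heaviest:
    print(f"month {m:>2}: {unlock:6.2%} of supply unlocks "
          f"(~${unlock * FDV:,.0f} at flat FDV), {cum:.1%} circulating")
```

A one-shot cliff like the month-13 release above shows up as a single spike once you model it; if Claude's chart and this script disagree, trust neither and reread the vesting pages.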

4. Framework #3 — Audit report vs. on-chain contract diff

This is where 200K context actually flexes: feed the whitepaper, the audit report, and the deployed .sol source from Etherscan into the same conversation. Drag the .sol files in directly.

Prompt template:

I am giving you three attachments:
- Attachment 1: the whitepaper PDF
- Attachment 2: the audit report PDF (CertiK / OpenZeppelin / Halborn)
- Attachment 3: the .sol source code actually deployed on mainnet

Run a three-layer diff:

Layer 1 — Contract permissions as described in the whitepaper vs. permissions reviewed in the audit report vs. modifiers in the actual source.
- Are they consistent? Where do they diverge?

Layer 2 — Audit issues marked "Acknowledged but not fixed."
- For each: title, severity, the team's response, and whether the issue still exists in the deployed version.

Layer 3 — Admin functions in the source.
- List every onlyOwner / onlyAdmin / emergency-pause function.
- Which ones have a timelock and which do not.
- Multi-sig N-of-M configuration, if visible in code.

Do NOT give me a "this project is safe / unsafe" verdict. I want the fact list. I will judge.

That last line matters. An AI verdict on "is this project safe?" is worthless — the value is in the fact list. You take the facts, you make the call.
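For Attachment 3, you do not have to copy-paste from the Etherscan UI: verified source is available through Etherscan's contract API. A sketch, with the contract address and API key as placeholders:

```python
# Fetch the verified source of a deployed contract from Etherscan so you
# can attach it to the Claude conversation alongside the audit PDF.
# CONTRACT and the API key are placeholders -- substitute your own.
import json
import requests

CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
API_KEY = "YOUR_ETHERSCAN_KEY"

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "contract",
        "action": "getsourcecode",
        "address": CONTRACT,
        "apikey": API_KEY,
    },
    timeout=30,
)
result = resp.json()["result"][0]

source = result["SourceCode"]
# Multi-file verifications come back as a JSON blob wrapped in extra {{ }}.
if source.startswith("{{"):
    files = json.loads(source[1:-1])["sources"]
    source = "\n\n".join(f"// --- {name} ---\n{f['content']}"
                         for name, f in files.items())

with open("deployed.sol", "w") as fh:
    fh.write(source)
print(f"{result['ContractName']}: {len(source):,} chars saved to deployed.sol")
```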

5. Framework #4 — Team reverse background check

Paste in the team's LinkedIn screenshots, their previous project pages, their Twitter history, their Crunchbase entries — whatever you can find. Remember Claude does not browse, so you have to gather the material first.

Prompt template:

I am giving you publicly available info on five core team members of this project (LinkedIn summaries + Twitter history + previous project pages).

Output:

1. Per-member timeline:
   - Publicly claimed history vs. independently verifiable history (flag any inconsistency).

2. Fate of previous projects:
   - Still running? Acquired? Shut down? Rugged?
   - Time from launch to shutdown / acquisition.

3. Red-flag list:
   - Past SEC actions, community complaints, contract exploits.
   - Members simultaneously listed on 2+ active crypto projects.
   - Visible contradictions between their LinkedIn and their Twitter self-description.

4. Suggested follow-up keywords I should search to cross-verify.

Do NOT conclude "the team is trustworthy / not trustworthy." Just give me the structured facts.

Claude's strength here is cross-referencing horizontally — five resumes laid side by side. It will catch things like "three of them claim to have worked at the same company but their tenures never overlap," which is exactly the kind of thing the human eye glides over.
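Mechanically, that tenure-overlap catch is just interval intersection, and you can pre-screen for it before involving Claude at all. A sketch with invented names and dates, purely to illustrate the check:

```python
# The "tenures never overlap" catch is plain interval intersection.
# Names, company, and dates below are invented for illustration.
from datetime import date
from itertools import combinations

claimed = {  # member -> (company, start, end) as publicly claimed
    "alice": ("AcmeChain", date(2019, 1, 1), date(2020, 6, 30)),
    "bob":   ("AcmeChain", date(2020, 9, 1), date(2022, 3, 31)),
    "carol": ("AcmeChain", date(2022, 5, 1), date(2023, 1, 31)),
}

for (a, (co_a, s_a, e_a)), (b, (co_b, s_b, e_b)) in combinations(claimed.items(), 2):
    # Same claimed employer but one tenure ends before the other begins.
    if co_a == co_b and (s_a > e_b or s_b > e_a):
        print(f"flag: {a} and {b} both list {co_a}, but their tenures never overlap")
```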

6. Framework #5 — Competitor matrix

Last one. Use this after you have finished single-project work — drop three to five whitepapers from the same sector into Claude in a single turn.

Prompt template (Opus recommended):

Attached are the whitepapers for four projects in the same sector (A / B / C / D).
Build a comparison matrix with these columns:

- Consensus mechanism
- Mainnet launch date
- Current TVL (I will paste DefiLlama numbers separately)
- Team token allocation + end of unlock
- Investor vs. community allocation ratio
- Development activity over the last 6 months (GitHub commit cadence — I will provide)
- Number of audits + audit firms
- Whitepaper page count and writing rigor (your subjective grade)
- One-line unique positioning

Then answer:

1. Of the four, which tokenomics design is most retail-friendly?
2. Which project shows the largest gap between roadmap promise and actual delivery?
3. Which whitepaper most obviously prioritizes marketing language over technical detail?

Finally, list the fields you could NOT fill in from the whitepapers alone — i.e. where you need external data.

That final line is critical. By default the model will force an answer into every field even when the source is silent. Forcing it to admit "insufficient data" is what keeps you from making decisions on values the AI invented.
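The external numbers the template asks you to paste (current TVL, for instance) are a two-line fetch. DefiLlama exposes current TVL per protocol as a bare number; the slugs below are placeholders you would look up on defillama.com:

```python
# Pull current TVL for each project so you can paste real numbers into
# the matrix prompt. The slugs are placeholders -- use the real ones
# from defillama.com.
import requests

SLUGS = ["project-a", "project-b", "project-c", "project-d"]  # placeholders

for slug in SLUGS:
    r = requests.get(f"https://api.llama.fi/tvl/{slug}", timeout=30)
    r.raise_for_status()
    print(f"{slug}: ${float(r.text):,.0f} TVL")  # endpoint returns a bare number
```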

7. What Claude cannot do

Six months of doing crypto DD with Claude, and here are the traps that bit me. Skip the lessons-learned-the-hard-way tax:

Limit 1 — Zero internet access. Any "current TVL," "last 7 days of price," "real-time contract balance" Claude gives you is either fabricated or a stale value from its training data. Unlike ChatGPT, Claude does not have Browse. Every piece of live data, you fetch it and paste it.

Limit 2 — It will still occasionally mis-align numbers inside a long document. One real example: page 23 of a whitepaper said the team's allocation was 22%; the vesting detail on page 47 said 18%. Claude went with 22% — it picked the first number it saw and never flagged the contradiction. The fix: cross-check critical numbers against the source yourself, and when in doubt ask Claude to list "every page where this number appears in the whitepaper."
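The "list every page where this number appears" check is also easy to run locally before you ask. A sketch using pypdf (pip install pypdf); the filename and the 22%/18% pattern reuse the example above:

```python
# Cross-check where a critical number appears in the source PDF before
# trusting the value Claude picked. Filename and pattern are placeholders.
import re
from pypdf import PdfReader

reader = PdfReader("whitepaper.pdf")
target = re.compile(r"\b(18|22)\s?%")  # the two conflicting allocations

for page_no, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    for match in target.finditer(text):
        # Print each hit with ~60 chars of surrounding context.
        snippet = text[max(0, match.start() - 60):match.end() + 60]
        print(f"page {page_no}: ...{snippet!r}...")
```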

Limit 3 — Anything that smells like leverage/derivatives triggers a wall of disclaimers. Asking "how do I run a 3x leveraged structured trade" gets you a paragraph of "this is not advice" that buries the signal. Reframe: instead of asking for a recommendation, hand Claude a position and ask for analysis — "assume I already hold position X; analyze its exposure if BTC drops 10%." Switch the model from "advisor" to "analyst."

Limit 4 — Free-tier context is heavily clipped. Free-plan Claude has nowhere near 200K usable context — call it ~100K, and that budget is shared with the conversation history. To feed in a whole whitepaper you need Pro. Otherwise expect the model to start "forgetting" earlier turns by message five or six.
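To check whether a document fits before burning a turn, Anthropic's Python SDK exposes a token-counting endpoint. A sketch; the model ID and input file are placeholders:

```python
# Count how many tokens a whitepaper actually consumes before uploading.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("whitepaper.txt") as fh:  # extracted text, e.g. via pypdf above
    paper = fh.read()

count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # placeholder model ID
    messages=[{"role": "user", "content": paper}],
)
print(f"{count.input_tokens:,} input tokens")
```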


8. FAQ

Q1 — Is Claude better than ChatGPT for crypto project analysis?

For long-document teardowns, yes — 200K context fits the whitepaper, the audit, and the team's tweets all at once. The trade-off is no web browsing, so all real-time data has to be pasted in by you. For ordinary back-and-forth chat the two models are roughly even.

Q2 — Sonnet or Opus for whitepaper analysis?

Sonnet 4.5 handles about 80% of the work at roughly one-fifth the price of Opus. Use Opus 4.5 for heavy multi-variable reasoning — e.g. five projects compared simultaneously with assigned tokenomics risk grades.

Q3 — Will Claude refuse to discuss crypto projects?

Structural questions, tokenomics, technical details — fine. What Claude refuses is "will token X pump?" and "recommend a leverage strategy." That refusal is a feature: it stops the model from generating confident-sounding noise.

Q4 — Can I run automated analysis through the Claude API?

Yes. Anthropic's API is priced per token. Sonnet 4.5 input runs roughly $3 per million tokens; a 200K input pass costs about $0.60. Batching 50 projects of due diligence overnight comes in under $50.
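A minimal batch loop under those pricing assumptions might look like the sketch below; the model ID, file layout, and prompt file are placeholders:

```python
# Overnight batch DD: one teardown call per project. Model ID and file
# layout are placeholders; the prompt file holds Framework #1 from above.
import pathlib
import anthropic

client = anthropic.Anthropic()
FRAMEWORK = pathlib.Path("framework1_prompt.txt").read_text()

for paper in sorted(pathlib.Path("whitepapers").glob("*.txt")):
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"{FRAMEWORK}\n\n--- WHITEPAPER ---\n{paper.read_text()}",
        }],
    )
    out = pathlib.Path("reports") / f"{paper.stem}.md"
    out.parent.mkdir(exist_ok=True)
    out.write_text(msg.content[0].text)
    print(f"done: {paper.stem}")
```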

Q5 — How do I build a crypto-specific workflow in Claude Projects?

In Claude.ai, create a Project and put your "crypto DD framework," your "blacklist of common project-team marketing phrases," and your own risk tolerance into the Project Instructions. Upload the whitepaper for each new project — the framework auto-applies. Reuse it across analyses.

Disclosure: this page contains affiliate referral links (Binance, tagged rel="sponsored"). Registering through them may earn us a commission — it adds no extra cost to you. Every prompt in this article was field-tested on Claude Sonnet 4.5 / Opus 4.5 (March 2026 to May 2026 versions). Model updates may shift output style.
