
Ask Claude for HTML, Not Markdown: The Case for Rich AI Outputs

Markdown became the default AI output format because of token constraints that no longer exist. Asking Claude for HTML instead unlocks SVG diagrams, interactive widgets, color-coded annotations, and in-page navigation. Here's why developers are switching and how to prompt for it.

There's a quiet shift happening in how developers use AI assistants, and it's not about which model writes the best Python. It's about output format. Thariq Shihipar, who works on the Claude Code team at Anthropic, recently made the case that HTML is an underused output format for AI-generated explanations and tools. Simon Willison picked it up and said it was enough to change the defaults he'd held through years of asking for Markdown.

That's worth paying attention to. Willison has been building with LLMs longer than most, and he doesn't change defaults lightly.

What HTML Actually Unlocks

When you ask an AI for an explanation in Markdown, you get headings, bullet points, and code blocks. That's fine. But when you ask for HTML, you're opening a much wider surface area. The model can generate SVG diagrams inline. It can add interactive widgets, expandable sections, and in-page navigation. It can use CSS to create visual hierarchy that Markdown simply can't express.

Willison put it plainly:

Asking Claude for an explanation in HTML means it can drop in SVG diagrams, interactive widgets, in-page navigation and all sorts of other neat ways of making the information more pleasant to navigate.

This matters most when the content is complex. A static wall of text with headers is fine for a quick answer. But for a PR review, a security exploit walkthrough, or an explanation of tricky streaming logic, a document that can show you a color-coded diff with margin annotations is genuinely more useful than the same information formatted in Markdown.

Real Examples: What This Looks Like in Practice

The PR review use case is one of the more compelling examples. Instead of asking Claude to summarize a pull request, you can prompt it to build an interactive HTML artifact. A prompt like this:

Help me review this PR by creating an HTML artifact that describes it. I'm not very familiar with the streaming/backpressure logic so focus on that. Render the actual diff with inline margin annotations, color-code findings by severity...

...produces something that a flat text response can't match. Color-coded severity levels let you triage at a glance. Inline annotations sit next to the relevant code, not three paragraphs away. The document is navigable, not just readable.
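One way to make that pattern repeatable is to wrap the article's prompt around an actual diff before sending it. The sketch below is a minimal illustration, not code from the post: the function name, the `<diff>` wrapper, and the idea of piping in `git diff` output are all my own assumptions.

```python
# Sketch: compose the PR-review prompt quoted above around a real diff,
# e.g. the output of `git diff main...feature`. Everything beyond the
# quoted prompt text is an illustrative assumption.
def pr_review_prompt(diff: str, focus: str) -> str:
    """Build a review prompt that asks for an interactive HTML artifact."""
    return (
        "Help me review this PR by creating an HTML artifact that "
        f"describes it. I'm not very familiar with the {focus} so focus "
        "on that. Render the actual diff with inline margin annotations, "
        "color-code findings by severity.\n\n"
        f"<diff>\n{diff}\n</diff>"
    )
```

Keeping the diff inside an explicit delimiter like `<diff>` makes it easy for the model to distinguish instructions from the code under review.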

The same logic applies to technical explanations. Willison references a Linux security exploit explanation as an example of what a well-structured HTML output can do: reformat inline code, highlight dangerous execution paths, and let you step through the logic interactively. That's a different category of output than a Markdown response.

Why Markdown Became the Default (and Why That's Changing)

Markdown's dominance as an AI output format wasn't arbitrary. Willison traces it back to the GPT-4 era, when context windows topped out at 8,192 tokens. At that scale, Markdown's token efficiency over HTML was a real constraint. Verbose HTML structure meant less room for actual content.

I've been defaulting to ask for most things in Markdown since the GPT-4 days, when the 8,192 token limit meant that Markdown's token-efficiency over HTML was extremely worthwhile.

Modern models don't have that problem. Context windows are orders of magnitude larger, and the extra tokens HTML requires are negligible in practice. The original reason to prefer Markdown for output is gone; what remains is habit.

Willison had already written about useful patterns for building HTML tools back in December 2025, but Thariq's piece pushed him to extend that thinking to explanatory output, not just tooling. The distinction matters: this isn't just about building quick interactive apps. It's about using HTML as the default rich format anytime the content benefits from structure, visual hierarchy, or interactivity.

How to Actually Prompt for This

The shift in practice is small but specific. Instead of asking Claude to explain something, ask it to generate an HTML document that explains it. You can be explicit about what you want:

A general framing that works: "Output HTML, neatly styled and using capabilities of HTML and CSS and JavaScript to make the explanation rich and interactive and as clear as possible." That single instruction opens up a lot of surface area the model wouldn't use otherwise.
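If you're calling the model programmatically rather than through a chat UI, that framing slots in naturally as a system prompt. Here's a rough sketch against the Anthropic Messages API; the helper name, model string, and token budget are assumptions, and only the system prompt text comes from the article.

```python
# Sketch: attach the rich-HTML framing as a system prompt for the
# Anthropic Messages API. RICH_HTML_SYSTEM is quoted from the article;
# build_request, the model name, and max_tokens are illustrative.
RICH_HTML_SYSTEM = (
    "Output HTML, neatly styled and using capabilities of HTML and CSS "
    "and JavaScript to make the explanation rich and interactive and as "
    "clear as possible."
)

def build_request(question: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble keyword arguments for client.messages.create(**kwargs)."""
    return {
        "model": model,
        "max_tokens": 8192,
        "system": RICH_HTML_SYSTEM,
        "messages": [{"role": "user", "content": question}],
    }

# Usage with the official SDK would look roughly like:
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.messages.create(
#       **build_request("Explain this PR's streaming logic"))
#   html = response.content[0].text
```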

The resulting files are completely self-contained. Drop them in a browser, share them as attachments, or serve them from anywhere. No dependencies, no build step.
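Handling that output is about as simple as file I/O gets. A minimal sketch, with the function name and default path being my own choices:

```python
import pathlib

def save_artifact(html: str, path: str = "explanation.html") -> str:
    """Write a self-contained HTML response to disk and return a file://
    URI any browser can open directly -- no server, no build step."""
    target = pathlib.Path(path).resolve()
    target.write_text(html, encoding="utf-8")
    return target.as_uri()
```

Paste the returned URI into a browser, or hand it to Python's standard-library `webbrowser.open`, and the artifact renders immediately.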

What This Says About Developer Workflows

There's a broader pattern here. Developers are increasingly using AI to produce finished artifacts, not just code snippets to integrate into existing projects. A self-contained HTML file that explains a PR, visualizes a dataset, or walks through a security vulnerability is a finished thing. You open it, you use it, you share it.

This is different from asking an AI to write a Python script you then have to run, debug, and maintain. It's closer to asking for a document that happens to be interactive. The speed-to-useful-output ratio is high, and the cognitive overhead is low. That's a sweet spot for the kinds of one-off tools developers reach for constantly.

Bottom Line

If you're still defaulting to Markdown for AI-generated explanations and analysis, it's worth reconsidering. The token-efficiency argument that made Markdown the sensible choice no longer applies at modern context window sizes. HTML gives the model room to produce something genuinely richer, and for complex technical content, that richness translates directly to usefulness. Start with PR reviews, code walkthroughs, or anything where a color-coded, navigable document would beat a flat text response. The prompt change is minimal. The output difference can be significant.
