A short primer on Answer Engine Optimization
What AEO is, how it differs from classic SEO, and the five levers that actually move citations in ChatGPT, Perplexity, and Gemini.
Answer Engine Optimization — AEO — is the practice of being the primary source that AI models cite when users ask a question in your category. It's the successor discipline to classic SEO, not a replacement for it.
The shift is simple to describe and large in consequence. Ten years ago, the winning move was ranking on page one. Five years ago, it was ranking in a rich result. Today, for a growing share of high-intent queries, it's being quoted — named, cited, paraphrased — in a synthesized answer. That answer appears at the top of Google, inside ChatGPT, across Perplexity, and in every AI surface that's maturing right now.
Why this is not just "new SEO"
Classic SEO assumes a list. You optimize to appear on the list and climb it. AI surfaces assume a conclusion. They read your page, and many others, and output one response.
That shift inverts the unit of value:
- On a list, you compete for position.
- In a synthesis, you compete for citation.
The second is harder. You don't just need to be among the top ten — you need to be the one the model decides is the most authoritative, most readable, most quotable source. That is a different content shape, a different structure, a different measurement frame.
Five levers that actually move citations
Every AEO engagement we've run converges on the same five levers. They're ordered by leverage — the ones at the top shape everything below.
1. Primary-source content
Models reach for the clearest statement of the truth. That means short, declarative explainers; comparison pages that don't hedge; definitions written by someone who understands the thing.
Volume farms don't work. Ten thousand listicles don't beat one well-written definition of your category.
2. Entity structure
The model has to know what you are. That means a clean Organization graph, unambiguous relationships between your products, services, and people, and consistent naming across the pages models actually read.
If your homepage calls you one thing and your Organization schema calls you another, you've just handed the model ambiguity — which models resolve by picking the competitor with a cleaner entity.
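The naming-consistency point above can be sketched concretely. Below is a minimal Organization JSON-LD block built in Python, using a hypothetical company name and URLs — the detail that matters is that the `name` string is the same one used in the page's visible copy and title tag:

```python
import json

# Hypothetical company -- "Acme Analytics" is a placeholder. The point is
# that this "name" must match the name used in the homepage copy, the
# title tag, and every other schema block on the site.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",  # one canonical name, everywhere
    "url": "https://www.example.com",
    "sameAs": [  # external profiles that confirm the entity
        "https://www.linkedin.com/company/acme-analytics",
        "https://github.com/acme-analytics",
    ],
}

# Serialize to the JSON-LD you would embed in a <script> tag.
print(json.dumps(organization, indent=2))
```

The `sameAs` links do the disambiguation work: they tie your entity to profiles the model has already seen, so it doesn't have to guess which "Acme" you are.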
3. Schema coverage
JSON-LD is not ornamental. It is the machine-readable overlay that lets a model understand your claims without having to guess. Every primary-source page should carry the type that matches its content — FAQPage, HowTo, Product, Article, DefinedTerm, SoftwareApplication.
The schema and the prose must agree. When they disagree, models typically trust neither.
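The agreement rule can even be checked mechanically. Here is a small sketch, using a hypothetical FAQ page and a hypothetical helper, that verifies every answer in the FAQPage markup appears verbatim in the visible prose:

```python
import json

# Hypothetical page copy and its FAQPage markup. The check below enforces
# the rule that schema answers must also appear in the visible prose.
page_prose = (
    "What is AEO? AEO is the practice of being the primary source "
    "that AI models cite."
)

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is AEO?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "AEO is the practice of being the primary source "
                    "that AI models cite.",
        },
    }],
}

def schema_agrees_with_prose(schema: dict, prose: str) -> bool:
    """True if every answer in the FAQPage markup appears in the prose."""
    return all(
        q["acceptedAnswer"]["text"] in prose
        for q in schema["mainEntity"]
    )

print(schema_agrees_with_prose(faq_schema, page_prose))  # → True
```

A check like this belongs in your publishing pipeline: if an editor rewrites the prose without touching the schema, the build flags the divergence before a model sees it.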
4. Technical substrate
None of the above matters if your pages don't render, don't get crawled, or are slow enough to be dropped. Rendering (SSR or static), crawlability (robots, sitemaps, llms.txt), and Core Web Vitals are all prerequisites. They compound silently: a bad substrate bleeds everything above.
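On the crawlability point: llms.txt is still an emerging convention rather than a ratified standard, but the proposed shape is simple — a markdown file at the site root with a title, a one-line summary, and curated links. A minimal sketch, assuming a hypothetical site:

```
# Acme Analytics

> Acme Analytics is an analytics platform for answer-engine visibility.

## Docs

- [What is AEO](https://www.example.com/what-is-aeo): category definition
- [Product overview](https://www.example.com/product): features and pricing
```

The goal is the same as a sitemap's: point crawlers — here, LLM-oriented ones — at the primary-source pages you most want read.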
5. Authority signals
The classical E-E-A-T dimensions — experience, expertise, authoritativeness, trustworthiness — still matter, arguably more than they did. Models weigh them heavily when choosing which source to cite. Byline real people with verifiable experience. Link to primary research. Say what you don't know.

How to measure
AEO is measurable if you stop measuring vanity. The metrics that matter:
- Citations in AI surfaces. Track what shows up when you run your category's top queries in ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews. Record the citing pages and the synthesized claims. Repeat weekly.
- Share of voice in AI answers. Of all the citations across a set of queries, what percentage are yours?
- Assisted conversions from AI surfaces. GA4 will show this as "AI assistant" or similar referrer traffic. It tends to be a small trickle with high intent.
- Branded-query lift. When you win the synthesized answer, you also tend to win the branded search that follows.
Rankings still matter, but they're downstream. Treat them as a sanity check, not a goal.
What to do on Monday
If you're reading this and want to move the needle in ninety days, here are three concrete next steps:
- Run the probe. Open your top fifteen buyer queries in ChatGPT, Perplexity, and Google AI Overviews. Record who gets cited. If you're not on the list, note who is and what they did.
- Pick one primary-source page. The single most important definition in your category. Rewrite it to be quoted — short declarative opener, named entities, numeric claims with citations. Ship it with matching JSON-LD.
- Close the gap that looks smallest. If competitors have a comparison page and you don't, ship one. If they have FAQPage markup and you don't, add it. Don't try to do everything at once.
AEO is compounding work. You don't need to boil the ocean. You need to ship one thing that deserves to be cited, and then the next.
If you want a fast pass on how your category's answer surface looks today — book a thirty-minute call. We'll probe it live.