Frontlines.io | Where B2B Founders Talk GTM.
The foundational models are holding their mechanics close. No LLM has published how its citation algorithm works, which means anyone claiming a definitive playbook is speculating. Kevin's take: build controlled tests yourself. If you hear FAQs drive citations, run 10 pages with FAQs against 10 without, monitor results, and let your own data inform strategy. Borrowed conviction in a fast-moving probabilistic space is a liability.
AI search is probabilistic — the same prompt asked twice can return different citations, different brands, different answers. Applying traditional SEO rank-tracking to this environment will make results look random and uncontrollable, because by that framework, they are. The right measurement model: track brand mention and citation trends across a defined set of high-intent prompts over a longer time horizon. Directional momentum — are you showing up more or less across that prompt set over time — is the signal that matters.
Reddit shows up heavily in top-of-funnel, high-volume AI queries. But when you isolate the prompts that matter — bottom-of-funnel, higher purchase intent — citations shift toward niche industry publications, many of which most marketers aren't tracking at all. The implication: chasing Reddit presence as an AI search strategy optimizes for the wrong part of the funnel. Find the authoritative niche sources that LLMs are pulling from for your category's high-intent prompts, and get your brand into those.
If buyers are researching inside AI platforms and only occasionally clicking through, the entity actually crawling your pages most often is an AI agent. These bots don't engage with animations, JavaScript, or heavy HTML — they want fast, token-light, structured content. Kevin's framing: AI exchanges value in tokens, so a token-heavy page is a slow, inefficient crawl. The practical path forward is serving two distinct experiences — a full human-facing site and a stripped-down, facts-forward version that AI bots can consume accurately and quickly. Scrunch does this by identifying AI bot traffic at the CDN edge layer via integrations with tools like Cloudflare and Akamai, then serving clean HTML to the bot in real time.
Most marketing teams start their AI search measurement at referral traffic from LLM platforms. That's the wrong starting point. The actual buyer journey involves multiple research and discovery interactions inside AI platforms before a single click ever happens — and when those users do click through, they convert at a higher rate because the AI has already done the education. If you're not tracking brand citation rates, prompt coverage, and third-party source authority in your space, you're blind to most of the funnel.
SEO teams often absorb AI search responsibility given the surface-level similarities. That works — with a caveat. The SEO mindset that optimizes for fixed, deterministic rankings is a direct liability in a probabilistic environment. The profile Kevin looks for is closer to a growth or performance marketer: someone oriented around running experiments, reading ambiguous signals, and iterating fast. The question isn't "where do we rank" — it's "what can we test to show up more, and are we seeing reciprocal traffic and conversions on the other end."
The Measurement Problem at the Heart of AI Search
Ask most B2B marketers how their brand is performing in AI search, and they’ll point to referral traffic from ChatGPT or Perplexity. It’s a reasonable instinct — and almost entirely the wrong place to look.
Kevin White, Head of Marketing at Scrunch, made this case in a recent episode of The Marketing Front Lines. His argument isn’t abstract. It’s grounded in what Scrunch has observed building measurement infrastructure for brands navigating how they show up inside large language models. The referral click, Kevin argues, is the last event in a long chain of interactions that most marketing teams never see.
“If you think about the user behavior before that,” he explains, “it’s someone searching on ChatGPT, they’re asking questions back and forth, they’re getting a lot of information and doing a lot of research — and then occasionally they’ll click on the tiny little gray pill to visit your site. And when they do, they typically convert at a higher rate because they have all of this research and discovery already built up.”
That invisible pre-click journey is where the real funnel lives. And most B2B marketing teams are measuring none of it.
Why SEO Measurement Logic Breaks Here
Traditional search gave marketers a deterministic feedback loop: rankings, click-through rates, position tracking. A tactic either moved the needle or it didn’t. The signal was clear.
LLMs break that loop entirely.
“We’re moving from a deterministic world to a probabilistic world,” Kevin says. “You can ask the same question over and over again, and you can get different responses, you can get different citations, you can get different brands mentioned.”
Teams that try to apply rank-tracking logic to this environment will find results that look random and uncontrollable — because by that measurement framework, they are. The fix isn’t better rank tracking. It’s a different model altogether: tracking brand mention and citation trends across a defined set of high-intent prompts over a longer time horizon. Direction of travel matters more than any single data point.
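That trend-based model is easy to sketch. The snippet below (illustrative only; the prompt set, sampling cadence, and rates are hypothetical, not Scrunch's actual methodology) samples the same set of high-intent prompts each week, records whether the brand was cited in each run, and fits a simple least-squares slope to the weekly citation rates. The sign of the slope is the directional momentum the article describes.

```python
from statistics import mean

def citation_rate(runs: list[bool]) -> float:
    """Share of prompt runs in which the brand was cited."""
    return sum(runs) / len(runs)

def trend_slope(weekly_rates: list[float]) -> float:
    """Least-squares slope of weekly citation rates.
    Positive = showing up more over time; negative = less."""
    n = len(weekly_rates)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(weekly_rates)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, weekly_rates))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Hypothetical: the same 20 high-intent prompts sampled weekly for 6 weeks.
weekly = [0.10, 0.15, 0.12, 0.20, 0.25, 0.30]
print(f"slope per week: {trend_slope(weekly):+.3f}")
```

Any single week can look random; the slope across the whole window is the signal.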
There’s also a deeper epistemological problem worth sitting with. No practitioner, agency, or thought leader actually knows how LLM citation algorithms work — because the foundational models don’t publish that information. “Any expert that’s saying this is exactly what works,” Kevin says plainly, “go do the experiment yourself.” Build controlled tests. Run 10 pages with a given tactic against 10 without. Let your own data lead.
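The "run the experiment yourself" advice maps directly to a two-proportion test: one group of pages with the tactic, one without, each sampled across many prompt runs. A minimal sketch, with entirely hypothetical counts (the FAQ example and the 42/200 vs 24/200 figures are assumptions for illustration, not data from the episode):

```python
from math import sqrt

def two_proportion_z(cited_a: int, n_a: int, cited_b: int, n_b: int) -> float:
    """z-statistic for the difference in citation rates between two
    page groups (e.g. 10 pages with FAQs vs 10 without), where each
    group is sampled across many prompt runs."""
    p_a, p_b = cited_a / n_a, cited_b / n_b
    p_pool = (cited_a + cited_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: FAQ pages cited in 42 of 200 runs, control in 24 of 200.
z = two_proportion_z(42, 200, 24, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 5% level
```

Because the environment is probabilistic, a handful of runs proves nothing; sample enough runs per group that the difference clears noise before you act on it.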
The Reddit Misconception — and What Actually Drives High-Intent Citations
The prevailing advice in AI search circles is to prioritize Reddit. The reasoning: LLMs train heavily on Reddit data, so Reddit presence should translate into citation presence. Kevin's data complicates that story significantly.
Reddit does show up — but predominantly in high-volume, top-of-funnel queries. When you isolate the prompts that carry genuine purchase intent, the citation sources shift.
“The prompts that are more bottom of the funnel or higher intent — we see the citations of those be more like niche publications,” Kevin explains. “Reddit captures more top of funnel, more volume. But the maybe more important prompts are citations that you maybe don’t know about.”
For B2B founders, the practical consequence is pointed: the third-party sources influencing your buyers at the bottom of the funnel are likely industry-specific publications you’ve never prioritized for outreach or contributed content. Identifying them requires measuring which sources LLMs actually cite when users ask your category’s high-intent questions — not assuming the answer is Reddit.
Your Website’s Actual Primary Visitor
Here is the architectural reality most B2B marketing teams haven’t fully reckoned with: as more buyers conduct research inside AI platforms without clicking through, the entity most consistently and consequentially visiting your website is an AI agent, not a human.
“Bot traffic is actually your primary visitor,” Kevin says. “This is a first-class type of visit.”
The problem is structural. B2B websites are built for human experiences — layered with JavaScript, animations, multimedia, and visual interactions that serve no purpose for an AI crawler. These bots process content in tokens. A token-heavy page is a slow, degraded crawl that returns less accurate results and fewer citations.
“If your page is token-light — it doesn’t have lots of JavaScript and media — AI is going to be able to crawl through it much faster,” Kevin explains.
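The token cost of a page is easy to estimate. This sketch uses Python's stdlib HTML parser to extract visible text and a rough heuristic of about four characters per token for English (a common approximation, not an exact tokenizer); comparing the raw page against the stripped text shows how much crawl budget the markup itself burns.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

page = ("<html><head><script>var a=1;var b=2;</script></head>"
        "<body><h1>Pricing</h1><p>Plans start at $49/mo.</p></body></html>")
extractor = TextExtractor()
extractor.feed(page)
raw = approx_tokens(page)
clean = approx_tokens(" ".join(extractor.parts))
print(raw, clean)  # the stripped version carries far fewer tokens
```

The gap between the two numbers only widens on real pages, where JavaScript bundles and inline styles dwarf the visible copy.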
Scrunch’s technical solution is to identify AI bot traffic at the CDN edge via integrations with Cloudflare, Akamai, and similar providers, then serve a stripped-down HTML version of the page — structured facts, no superfluous code — specifically optimized for bot consumption. The outcome is more accurate crawling and stronger citation rates. The honest trade-off Kevin acknowledges: optimizing for bots can degrade the human experience. These two audiences may need to be served with genuinely separate content layers.
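The routing decision itself can be sketched in a few lines. This is not Scrunch's implementation: production edge deployments on Cloudflare or Akamai also verify crawlers against published IP ranges, while this sketch only matches user-agent substrings for a few publicly documented AI crawlers.

```python
# Publicly documented AI crawler user-agent substrings (partial list).
# Real edge deployments should also verify published IP ranges, since
# user-agent strings are trivially spoofable.
AI_BOT_SIGNATURES = ("GPTBot", "ClaudeBot", "PerplexityBot",
                     "Google-Extended", "CCBot")

def is_ai_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_BOT_SIGNATURES)

def choose_variant(user_agent: str) -> str:
    """Pick the page variant: the full human site, or a token-light,
    facts-forward HTML build for AI crawlers."""
    return "bot_html" if is_ai_bot(user_agent) else "human_html"

print(choose_variant("Mozilla/5.0 (compatible; GPTBot/1.2)"))
print(choose_variant("Mozilla/5.0 (Macintosh) Safari/605.1.15"))
```

The same logic ports directly to an edge worker or CDN rule, where the variant is chosen before the request ever reaches your origin.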
The Hiring Question
When it comes to building the team to own this function, Kevin’s answer is precise. An SEO background is fine — with one non-negotiable caveat.
“I would hire someone with a curious mind. What experiments can we run to show up more in the search? And then are we seeing reciprocal traffic and conversions on the other end?”
The disqualifying trait isn’t an SEO background. It’s an inability to operate outside deterministic thinking — someone who reaches for rank tracking when the environment doesn’t support it, or who treats AI search as SEO with a different name. The skillset that actually fits is closer to a growth marketer: hypothesis-driven, experiment-oriented, comfortable drawing conclusions from ambiguous signals.
The tactics may rhyme with SEO. The operating mindset is fundamentally different.
Where to Start
The practical sequence Kevin recommends: measure first, act second. Get visibility into whether your brand is being cited for the prompts that matter to your buyers. Identify which third-party sources carry authority in your space within LLMs. Find the content gaps where competitors are showing up weakly and your brand isn’t showing up at all. Then build content and pursue placements with that data as your guide.
It’s not a complicated framework. But it requires accepting that the starting point isn’t publishing content or chasing Reddit — it’s understanding the citation landscape you’re actually operating in before deciding how to change it.
The marketers who figure out AI search won’t do it by following someone else’s playbook. They’ll do it by building one from their own data.