Run an experiment right now. Open ChatGPT and ask: "What are the best [your category] companies?" Then ask the same question on Perplexity. Then on Claude.
Are you in the answers? If you are, how are you described? If you aren't, who is — and why?
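If you'd rather run that comparison programmatically, here's a minimal sketch using the official `openai` and `anthropic` Python SDKs. Perplexity exposes an OpenAI-compatible endpoint, so the same client pattern covers it. The model names and endpoint URL are assumptions; check them against the current platform docs before running.

```python
# Ask the same category question on several AI platforms.
# Assumes the `openai` and `anthropic` SDKs are installed and API keys
# are set in the environment. Model names are illustrative.
import os

PROMPT = "What are the best {category} companies? Answer with a short list."

def build_prompt(category: str) -> str:
    return PROMPT.format(category=category)

def ask_openai_compatible(base_url, api_key, model, question):
    # ChatGPT and Perplexity both speak the OpenAI chat-completions
    # protocol; Perplexity's endpoint is https://api.perplexity.ai
    # (verify against their docs).
    from openai import OpenAI
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": question}]
    )
    return resp.choices[0].message.content

def ask_claude(model, question):
    import anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model=model, max_tokens=500,
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text

if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    question = build_prompt("CRM software")
    print(ask_openai_compatible(None, os.environ["OPENAI_API_KEY"],
                                "gpt-4o", question))
```

Run the same prompt a few times per platform: answers vary between runs, so a single query can mislead.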
Most marketers haven't done this. The ones who have are often surprised by what they find. Not necessarily because the answers are wrong, but because the pattern of who gets recommended is so different from the pattern of who ranks in Google that it reveals an entirely different competitive landscape.
The Mechanism: How It Actually Works
ChatGPT, Perplexity, Claude, and Gemini are large language models trained on vast datasets of text from the internet — news articles, research papers, blogs, forums, industry reports, social media. When you ask one of them to recommend brands in a category, it draws on patterns in that training data to synthesise an answer.
It's not retrieving a ranked list. It's generating a response based on which brands appear frequently and favourably in authoritative contexts across its training data. The brands that appear most consistently, in the most trusted sources, associated most clearly with the right category and solution attributes — those are the brands that get recommended.
"ChatGPT doesn't know your brand because of your website. It knows your brand because of what other credible sources have said about you — and how consistently they've said it."
This is why the pattern looks so different from search rankings. A site can rank on Google primarily through on-site optimisation and link acquisition. Getting recommended by an AI model requires genuine third-party authority — the kind that accumulates from press coverage, analyst reports, research citations, and expert mentions over time.
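The frequency-and-authority dynamic described above can be caricatured in a few lines. This is a deliberately crude toy, not how any model actually scores brands: it just shows that mentions in heavily weighted sources dominate the tally. The source weights and texts are invented.

```python
# Toy proxy for the signal described above: a brand's score tracks how
# often it appears in third-party sources, weighted by source authority.
# Weights and texts are invented examples.
from collections import defaultdict

def authority_weighted_mentions(corpus, brands):
    """corpus: list of (source_weight, text) pairs -> {brand: score}."""
    scores = defaultdict(float)
    for weight, text in corpus:
        lowered = text.lower()
        for brand in brands:
            scores[brand] += weight * lowered.count(brand.lower())
    return dict(scores)

corpus = [
    (5.0, "Acme tops the analyst report on CRM platforms."),    # analyst report
    (1.0, "I tried Acme and BetaCo; BetaCo felt clunky."),      # forum post
    (3.0, "BetaCo raises funding; Acme expands into Europe."),  # trade press
]
print(authority_weighted_mentions(corpus, ["Acme", "BetaCo"]))
# → {'Acme': 9.0, 'BetaCo': 5.0}
```

Note that BetaCo has more raw mentions (three to Acme's three, including two in one post) yet scores lower, because Acme's mentions sit in higher-authority sources. That is the asymmetry the paragraph above describes.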
The Six Signals That Drive AI Recommendations
Based on our testing across hundreds of brands and dozens of categories, here are the signals that most consistently drive AI recommendation frequency:
What Doesn't Work (And Why People Keep Trying It)
A lot of brands are trying to influence AI recommendations through tactics that simply don't work. The most common:
Publishing more content on your own site
Your website content contributes minimally to how AI models represent you. The training data that matters most is third-party content — what others say about you, not what you say about yourself. Increasing your blog output doesn't move the needle.
Keyword optimisation
AI models don't retrieve results based on keyword matching. They synthesise answers based on learned associations. Optimising for exact keywords is irrelevant to how AI models decide what to recommend.
Technical SEO improvements
Crawlability and structured data help ensure your site can be indexed, which is baseline good practice. But no amount of schema markup or page speed improvement will change how an AI model describes your brand or whether it recommends you.
Paid AI search advertising
Most AI platforms don't currently sell placement in their recommendations. Perplexity has some paid integrations, but organic recommendations aren't for sale; they reflect genuine authority signals.
There are no shortcuts to AI recommendations. The brands that get recommended consistently are the ones that have built genuine authority in their category over time. The playbook is to earn mentions from authoritative third-party sources, make your entity clear and consistent, and maintain that presence over time.
Platform Differences Matter
ChatGPT, Perplexity, Claude, and Gemini don't all recommend the same brands for the same queries. Training data, model architecture, and retrieval mechanisms differ. In our testing:
- ChatGPT tends to surface brands with strong Wikipedia/Wikidata presence and high press volume in major publications
- Perplexity gives heavier weight to current, indexed content — recency is a stronger signal here than on other platforms
- Claude (Anthropic) often surfaces brands that appear in research and analytical contexts; more likely to recommend specialist brands over generalist ones
- Gemini reflects Google's knowledge graph heavily — brands with strong Google entity signals (Knowledge Panel, Maps, etc.) tend to appear more frequently
This is why citation breadth — appearing consistently across multiple platforms — is so important. Single-platform visibility is fragile. When your authority signals are strong enough to drive recommendations on all four major platforms, you have a genuinely resilient position.
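Citation breadth is easy to make concrete: the fraction of major platforms on which your brand currently appears for its category query. A minimal sketch, with the four platforms named above and invented observation data:

```python
# Citation breadth: fraction of major AI platforms on which a brand is
# currently recommended for its category query. Observations are invented.
PLATFORMS = ["ChatGPT", "Perplexity", "Claude", "Gemini"]

def citation_breadth(observations):
    """observations: {platform: brand_was_mentioned (bool)} -> 0.0..1.0"""
    return sum(bool(observations.get(p)) for p in PLATFORMS) / len(PLATFORMS)

# A brand visible on three of the four platforms:
obs = {"ChatGPT": True, "Perplexity": True, "Claude": False, "Gemini": True}
print(citation_breadth(obs))  # → 0.75
```

A breadth of 1.0 is the resilient position the paragraph above describes; a brand at 0.25 is one training-data refresh away from invisibility.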
The Practical Implication
If you want to be recommended by ChatGPT, the programme looks like this:
- Establish your entity. Ensure Wikipedia, Wikidata, and Google's knowledge graph have accurate, rich information about your brand. This is the foundation.
- Earn press in indexed publications. Volume matters, but source quality matters more. Ten articles in Tier-1 publications outperform 1,000 articles in obscure blogs.
- Get analyst coverage. Gartner, Forrester, and category-specific analyst firms carry enormous weight in AI training data. Getting into analyst reports is one of the highest-ROI GEO activities for B2B brands.
- Create definitional content. Own specific intellectual territory with original research, frameworks, and data that other sources reference. This creates citation-worthy content that earns mentions even without outreach.
- Measure consistently. Track your citation frequency across platforms monthly. Watch for improvements after major citation campaigns. The feedback loop is slow — often 60–90 days — but it's measurable.
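The measurement step above can be as simple as a log of repeated monthly checks. Because answers vary between runs, you want a citation *rate* per platform per month, not a single yes/no. A sketch with invented data (the log format and helper are assumptions, not a specific tool's API):

```python
# Monthly tracking sketch: run the same prompts several times per platform,
# log whether the brand was named, and compare rates month over month.
# The log entries below are invented.
from collections import defaultdict

def citation_rates(log):
    """log: list of (month, platform, mentioned) -> {(month, platform): rate}"""
    hits, runs = defaultdict(int), defaultdict(int)
    for month, platform, mentioned in log:
        runs[(month, platform)] += 1
        hits[(month, platform)] += int(mentioned)
    return {key: hits[key] / runs[key] for key in runs}

log = [
    ("2025-01", "ChatGPT", False), ("2025-01", "ChatGPT", True),
    ("2025-03", "ChatGPT", True),  ("2025-03", "ChatGPT", True),
]
print(citation_rates(log))
# → {('2025-01', 'ChatGPT'): 0.5, ('2025-03', 'ChatGPT'): 1.0}
```

In this invented example the rate moves from 0.5 to 1.0 over two months, which is the shape of improvement you'd look for after a citation campaign, given the 60–90 day lag noted above.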
None of this is fast. Most brands running effective GEO programmes see meaningful citation improvements in 4–6 months, with substantial competitive advantage at 12–18 months. The brands that start now set a baseline their competitors will struggle to match later.
Understand the mechanism. Build the programme. The brands in the AI recommendations weren't lucky — they earned it.
See if ChatGPT is recommending you
Your AI visibility score tells you exactly which platforms cite your brand — and what you'd need to fix to improve your position.
Check your visibility score →