We used to think AI search was just "new Google." Optimise some keywords. Build some backlinks. Wait for results.
Then we ran a test.
We asked ChatGPT, Perplexity, Gemini, and Claude the same question: "Who's the best estate planning attorney in Austin?"
Four different answers. Four completely different firms recommended.
Different training data. Different retrieval methods. Different trust signals. Different winners.
That's when we realised: if you optimise for one platform, you might be invisible on three others. The businesses winning AI recommendations weren't the biggest or the oldest. They were the ones AI had learned to trust through signals most businesses don't even know exist.
So we started measuring those signals. We started with 50 checkpoints. After 100 audits, we added 30 more. After 200, we refined it to exactly 109.
109 is what actually predicts AI visibility. Rounding it to a tidier number would mean discarding signal.
Here's the reality of search in 2026: AI has already read the internet.
If your content says the same thing as everyone else (what we call "Consensus Content"), AI treats it as training data. It ingests your words, mixes them with 1,000 other sites, and gives the answer to the user without crediting you.
You become invisible. Not because you're wrong. Because you're not adding anything new.
To be cited, you need Information Gain. You need to offer a data point, a perspective, or a framework that exists nowhere else.
Think about it from the AI's perspective: why would it cite your article on "5 Tips for Estate Planning" when there are 47,000 other articles saying the same five things? It wouldn't. It would synthesise all of them into one answer and cite none of them.
But if your article says "Why the 2024 SECURE Act changes mean your existing trust needs review by March" with specific data on what happens if you don't? That's Information Gain. That gets cited.
The 109-Point Methodology measures two things: whether AI can understand who you are, and whether AI has a reason to recommend you.

Most businesses fail on both.
In late 2025, we audited two Melbourne accounting firms. On paper, they were nearly identical:
Both had 15+ years' experience. Both specialised in medical professionals. Both had good Google reviews. Both had decent websites with regular blog content.
Traditional SEO would have scored them the same.
But when we ran both through the 109-point analysis, the difference was stark.
Firm A: 34 new clients directly attributed to AI search referrals. Average client value: $8,500/year.
Firm B: 0 new clients from AI search.
The difference: $289,000 in annual recurring revenue. And Firm B doesn't even know they lost it, because they never knew the channel existed.
This is why 109 data points matter. They're the difference between being recommended and being invisible.
Traditional Google SEO was about ranking. You fought for a slot on Page 1. If you got position 3, you might get 10% of clicks.
AI search is about retrieval. You're fighting to be included in the answer. There's no position 3. There's "cited" or "not cited."
AI answer engines (ChatGPT, Perplexity, Google AI Overviews) operate on what's called RAG: Retrieval-Augmented Generation. In simplified form, they:

1. Receive the user's question.
2. Convert it into search queries.
3. Retrieve candidate sources from an index or the live web.
4. Synthesise an answer from what was retrieved.
5. Cite the sources that shaped the answer.

If you're not retrieved in step 3, you don't exist in step 5.
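The retrieval step can be sketched in a few lines. This is an illustrative toy using bag-of-words vectors and cosine similarity, not how any real engine is implemented (production systems use learned embeddings and far larger indexes), but it shows the core mechanic: only the top-ranked documents ever reach the generation step.

```python
import math
from collections import Counter

def vectorise(text):
    """Toy bag-of-words vector; real engines use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query.
    Anything outside the top k is invisible to the answer."""
    q = vectorise(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorise(d)), reverse=True)
    return ranked[:k]

docs = [
    "Estate planning attorney in Austin specialising in trusts",
    "Five generic tips for estate planning",
    "Melbourne accounting firm for medical professionals",
]
print(retrieve("best estate planning attorney in Austin", docs, k=1))
```

Note what the sketch implies: the second document is not wrong, it just overlaps the query less than the first, so it never gets quoted. That is consensus content losing to specific content in miniature.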
The signals that determine retrieval are different from the signals that determine ranking. Google rankings reward backlinks and keyword optimisation. AI retrieval rewards:

- Entity clarity (structured data that tells AI who you are)
- Information Gain (content that adds something new)
- Extractability (content AI can cleanly quote)
- Verifiable trust signals (E-E-A-T)
- Consistent presence across platforms
The 109 points measure these signals across 8 operational zones.
We organise 109 data points into 8 categories. Each zone answers a specific question about your AI visibility.
The Question: Can AI identify who you are?
Most websites are just words on a page. To AI, "John Smith, Accountant" is just a string of characters. Without structured data, AI has to guess what you are, who works there, and what you're qualified to do.
What we measure:
What we typically find: Most professional services have zero schema or only basic Organization markup. AI literally doesn't know they're a law firm vs a blog about law.
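As a concrete illustration of what "more than basic Organization markup" looks like, here is a minimal JSON-LD sketch for a hypothetical firm ("Example Legal" and all its details are placeholders, not a real client). Schema.org's `LegalService` type tells a crawler what the business is, instead of leaving it to guess from prose:

```python
import json

# Hypothetical firm used purely for illustration.
markup = {
    "@context": "https://schema.org",
    "@type": "LegalService",          # a law firm, not "a blog about law"
    "name": "Example Legal",
    "description": "Estate planning attorneys serving Austin, TX",
    "areaServed": "Austin, TX",
    "founder": {
        "@type": "Person",
        "name": "Jane Smith",
        "jobTitle": "Attorney",
    },
}

# Embedded in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

The point is not this exact snippet; it's that every field above is something AI would otherwise have to infer, and usually doesn't.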
The Question: Do you add new value, or just repeat what everyone else says?
Google's "Hidden Gems" algorithm update and AI citation logic both penalise derivative content. If your article on Estate Planning says the same thing as 10,000 other articles, AI has no reason to cite yours specifically.
What we measure:
What we typically find: 80% of professional service content is consensus content. It's not wrong, but it's not citable either.
The Question: Where is your expertise trapped?
This is the most common failure point for established experts. Your best thinking is locked in formats AI can't easily access:

- Podcast episodes
- PDF downloads
- Video content
We call this The Black Box Effect. You have a Ferrari engine under the hood, but it's wrapped in a chassis AI can't recognise.
What we measure:
What we typically find: The average expert has 70%+ of their best thinking in inaccessible formats.
The Question: When AI recommends someone in your category, is it you?
We run live queries across ChatGPT, Perplexity, Gemini, and Claude to calculate your Share of Answer.
What we measure:
What we typically find: Most businesses have 0% Share of Answer. They've never been recommended by AI because AI has never been given a reason to recommend them.
The Question: Can AI verify you're a legitimate expert?
Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) has been adopted by AI systems as a trust filter. The first E, Experience, is critical in 2026: AI looks for evidence that you've actually done the work, not just written about it.
What we measure:
What we typically find: Many firms have strong credentials but no machine-readable way for AI to verify them.
The Question: Can AI pull a clean answer from your content?
AI builds answers by extracting "knowledge chunks." If your expertise is buried in a 400-word paragraph with no clear structure, AI skips it. It prefers content it can cleanly quote.
What we measure:
What we typically find: Most content is written for humans to read linearly, not for AI to extract specific answers. Both matter.
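The idea of a "knowledge chunk" can be made concrete. A minimal sketch (our assumption of how a page gets segmented, not any platform's actual parser): split content at headings, so each heading plus its body becomes a quotable unit. A 400-word wall of text yields one oversized chunk; a well-structured page yields several clean ones.

```python
def chunk_by_headings(markdown_text):
    """Split markdown into (heading, body) chunks: the units an
    answer engine can quote cleanly. Unstructured text collapses
    into a single chunk with no heading."""
    chunks, heading, body = [], None, []
    for line in markdown_text.splitlines():
        if line.startswith("#"):
            if heading is not None or body:
                chunks.append((heading, " ".join(body).strip()))
            heading, body = line.lstrip("# ").strip(), []
        else:
            body.append(line)
    chunks.append((heading, " ".join(body).strip()))
    return chunks

page = (
    "# What is a revocable trust?\n"
    "A trust you can amend during your lifetime.\n"
    "# Who needs one?\n"
    "Anyone whose estate would otherwise go through probate."
)
for heading, body in chunk_by_headings(page):
    print(f"{heading} -> {body}")
```

Each printed pair is a self-contained question and answer, which is exactly the shape an engine can lift into a response.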
The Question: Do you own your geography?
For professional services, "best" usually means "best near me." We test your connection to specific geographic signals.
What we measure:
What we typically find: Firms serving multiple locations often have no clear geographic identity to AI.
The Question: Are you visible across all AI platforms, or just one?
A business can dominate Perplexity (which searches the live web) and be invisible on ChatGPT (which relies heavily on training data). You need presence across the ecosystem.
What we measure:
What we typically find: Most businesses are visible on at most one platform. Being strong on Perplexity means nothing if your clients use ChatGPT.
| Zone | Data Points | Weight | What It Measures |
|---|---|---|---|
| 1. Identity Layer | 17 | 16% | Schema, entity structure, machine-readability |
| 2. Information Gain | 13 | 12% | Content quality, originality, data density |
| 3. Black Box Audit | 10 | 9% | Trapped expertise, format accessibility |
| 4. Share of Answer | 20 | 18% | Citation rate across platforms |
| 5. E-E-A-T Trust | 14 | 13% | Experience signals, credential verification |
| 6. Extractability | 12 | 11% | Content structure, snippet-readiness |
| 7. Local Vector | 8 | 7% | Geographic signals, local authority |
| 8. Platform Diversification | 15 | 14% | Cross-platform presence, consistency |
| TOTAL | 109 | 100% | |
| Grade | Score | What It Means |
|---|---|---|
| A | 90-109 | Primary Source. AI treats you as a definitive authority. You're the "Wikipedia" of your niche. |
| B | 75-89 | Contender. You appear in the conversation but often as a secondary option. |
| C | 55-74 | Participant. You exist, but you're not driving the recommendation. |
| D | 35-54 | At Risk. You're relying on legacy SEO signals that are fading. |
| F | 0-34 | Invisible. To AI, you effectively don't exist. |
Industry average for professional services: 28 (F).
Not because they're bad at what they do. Because nobody told them AI search works differently.
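The two tables above can be paraphrased as a simple scoring function. This is an illustrative sketch of the published point totals and grade bands, not the proprietary tooling itself:

```python
# Maximum points per zone, from the methodology table (sums to 109).
ZONE_MAX = {
    "Identity Layer": 17,
    "Information Gain": 13,
    "Black Box Audit": 10,
    "Share of Answer": 20,
    "E-E-A-T Trust": 14,
    "Extractability": 12,
    "Local Vector": 8,
    "Platform Diversification": 15,
}

def total_score(zone_scores):
    """Sum earned points across zones, capping each at its maximum."""
    return sum(min(zone_scores.get(zone, 0), cap) for zone, cap in ZONE_MAX.items())

def grade(score):
    """Map a 0-109 score onto the letter grades in the table above."""
    for threshold, letter in [(90, "A"), (75, "B"), (55, "C"), (35, "D")]:
        if score >= threshold:
            return letter
    return "F"

print(grade(28))  # the industry average lands in the F band
```

Note the implication of the bands: a firm at 28 needs nearly to triple its score just to reach "Participant" territory.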
When we run your business through the 109-point analysis, you get:
1. Your AI Visibility Score (0-109)
A definitive number showing exactly where you stand. Broken down by zone so you can see where you're strong and where you're invisible.
2. Share of Answer Map
Your citation rate vs competitors across all tested platforms and queries. A visual breakdown of who's winning and where.
3. The Intent Gap
The high-value questions your ideal clients are asking that you're not positioned to answer. Each query includes estimated monthly volume, current winner, and revenue opportunity.
4. The Black Box Inventory
Where your expertise is trapped. Specific assets (podcasts, PDFs, videos) that should be converted to machine-readable content.
5. The Priority Roadmap
What to fix first, second, third — ranked by impact and effort. Not "improve your SEO." Specific actions: "Create Knowledge Entry answering [exact question] to capture [specific opportunity]."
We don't predict algorithm changes.
AI platforms evolve constantly. We measure current state and build for structural advantage, not tricks that break next month.
We don't guarantee specific rankings.
AI doesn't work like Google's "10 blue links." There's no position #1 to promise. We measure and improve Share of Answer.
We don't replace human judgment.
The diagnostic shows you what's broken. Fixing it still requires strategy — deciding which opportunities to pursue, what voice to use, how to position.
We don't pretend all platforms matter equally.
For your specific business, ChatGPT might matter more than Claude (or vice versa). The diagnostic shows you where your clients are asking questions.
We didn't start with 109. We started with 50.
After 100 audits, we added 30 more (things we missed that cost clients visibility).
After 200 audits, we refined it to 109 (removing redundancies, adding critical gaps).
109 is what 1,100+ projects and 20 years of positioning work taught us.
Could we make it 150 points? Sure. But we'd be adding noise, not signal.
Could competitors copy our 109 points? They could try. But they don't have the 1,100+ projects of applied experience behind them.
The 109 points aren't the secret. The experience applying them is.
The 109-Point Methodology is diagnostic. It tells you where you're invisible and why.
The treatment is the PG Identity Engine, the system we use to build what the diagnostic shows is missing.
The methodology ensures we know exactly what to build. The Engine ensures we build it at the quality AI requires.
Want to see how the 109-point analysis applies to your business?
The report is free and takes about an hour to present. You'll see your AI Visibility Score, your Share of Answer map, and your priority roadmap.
No obligation. No pitch disguised as an audit.
If the 109-point analysis shows you're already well-positioned, we'll tell you. If it shows this isn't the right fit, we'll tell you that too.
SEO audits measure Google rankings and technical health. The 109-Point Methodology measures whether AI platforms will recommend you — a fundamentally different question that requires different data.
We started with 50. After hundreds of audits, we added checks we were missing and removed redundancies. 109 is what actually predicts AI visibility; rounding it to a tidier number would mean discarding signal.
You can check some things manually — ask ChatGPT about your industry and see if you appear. But the full methodology requires testing across platforms, competitor analysis, schema validation, and opportunity modelling that would take 15-20 hours to replicate.
If you score above 75, you're ahead of 90% of businesses. We'll show you optimisation opportunities, but you may not need our ongoing program. We'll tell you honestly.
We recommend quarterly. AI platforms update constantly, competitors improve, and your own content ages. Quarterly audits catch drift before it costs you visibility.
Any high-trust industry where expertise matters: legal, financial services, healthcare, architecture, consulting, skilled trades. If your clients make high-stakes decisions based on your expertise, the methodology applies.
The 109-Point Methodology is proprietary to Probably Genius. It reflects our understanding of how AI platforms evaluate and recommend businesses — developed through 1,100+ digital builds and continuous testing across platforms.
Probably Genius LLC