THE 109-POINT METHODOLOGY

THE DISCOVERY THAT CHANGED EVERYTHING

We used to think AI search was just "new Google." Optimise some keywords. Build some backlinks. Wait for results.

Then we ran a test.

We asked ChatGPT, Perplexity, Gemini, and Claude the same question: "Who's the best estate planning attorney in Austin?"

Four different answers. Four completely different firms recommended.

Different training data. Different retrieval methods. Different trust signals. Different winners.

That's when we realised: if you optimise for one platform, you might be invisible on three others. The businesses winning AI recommendations weren't the biggest or the oldest. They were the ones AI had learned to trust through signals most businesses don't even know exist.

So we started measuring those signals, beginning with 50 checkpoints. After 100 audits, we added 30 more. After 200, we refined the list to exactly 109.

109 is the number that actually predicts AI visibility. We didn't round it to something neater, because rounding would mean throwing away signal.


WHY YOUR CONTENT ISN'T GETTING CITED

The Consensus Problem

Here's the reality of search in 2026: AI has already read the internet.

If your content says the same thing as everyone else (what we call "Consensus Content"), AI treats it as training data. It ingests your words, mixes them with 1,000 other sites, and gives the answer to the user without crediting you.

You become invisible. Not because you're wrong. Because you're not adding anything new.

To be cited, you need Information Gain. You need to offer a data point, a perspective, or a framework that exists nowhere else.

Think about it from the AI's perspective: why would it cite your article on "5 Tips for Estate Planning" when there are 47,000 other articles saying the same five things? It wouldn't. It would synthesise all of them into one answer and cite none of them.

But if your article says "Why the 2024 SECURE Act changes mean your existing trust needs review by March" with specific data on what happens if you don't? That's Information Gain. That gets cited.

The 109-Point Methodology measures two things:

  1. Does AI know you exist? (Entity Identity)
  2. Do you give AI a reason to cite you? (Information Gain)

Most businesses fail on both.



THE $289,000 LESSON

When Two "Identical" Firms Aren't

In late 2025, we audited two Melbourne accounting firms. On paper, they were nearly identical:

Both had 15+ years' experience. Both specialised in medical professionals. Both had good Google reviews. Both had decent websites with regular blog content.

Traditional SEO would have scored them the same.

But when we ran both through the 109-point analysis, the difference was stark.

FIRM A: AI Visibility score 71/109
  • Schema markup correctly identified them as a Professional Service
  • Founder had Person schema with credentials and knowsAbout properties
  • Content answered specific questions ("Tax implications of buying into a medical practice")
  • FAQs structured for AI extraction
  • Cited in 3 of 4 AI platforms when we tested "accountant for surgeons Melbourne"

FIRM B: AI Visibility score 23/109
  • No schema beyond basic Organization
  • No author attribution on articles
  • Content was generic ("Why you need a good accountant")
  • No structured FAQs
  • Cited in 0 of 4 AI platforms for the same query

The result over 12 months:

Firm A: 34 new clients directly attributed to AI search referrals. Average client value: $8,500/year.

Firm B: 0 new clients from AI search.

The difference: $289,000 in annual recurring revenue. And Firm B doesn't even know they lost it, because they never knew the channel existed.

This is why 109 data points matter. They're the difference between being recommended and being invisible.


FROM RANKING TO RETRIEVAL

The Shift You Need to Understand

Traditional Google SEO was about ranking. You fought for a slot on Page 1. If you got position 3, you might get 10% of clicks.

AI search is about retrieval. You're fighting to be included in the answer. There's no position 3. There's "cited" or "not cited."

AI answer engines (ChatGPT, Perplexity, Google AI Overviews) operate on what's called RAG: Retrieval-Augmented Generation. They:

  1. Receive a question
  2. Search their knowledge base and the web for relevant sources
  3. Retrieve the most trustworthy content
  4. Generate an answer that synthesises what they found
  5. Cite the sources that contributed

If you're not retrieved in step 3, you don't exist in step 5.
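To make the retrieval step concrete, here's a toy sketch of that five-step loop in Python. The corpus, URLs, and the crude keyword-overlap scorer are all invented for illustration; real platforms use far more sophisticated retrieval, but the structure is the same.

```python
# Toy sketch of the five-step RAG loop. The corpus and the
# keyword-overlap scorer are invented for illustration only.

def keyword_overlap(query, text):
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query, corpus, k=2):
    """Steps 2-3: rank sources by relevance, keep the top k that match at all."""
    ranked = sorted(corpus, key=lambda d: keyword_overlap(query, d["text"]), reverse=True)
    return [d for d in ranked[:k] if keyword_overlap(query, d["text"]) > 0]

def answer(query, corpus):
    """Steps 4-5: synthesise retrieved text and cite only contributing sources."""
    sources = retrieve(query, corpus)
    synthesis = " ".join(d["text"] for d in sources)
    return synthesis, [d["url"] for d in sources]

corpus = [
    {"url": "firm-a.example/secure-act",
     "text": "SECURE Act changes mean your trust needs review"},
    {"url": "firm-b.example/tips",
     "text": "five generic tips for hiring an accountant"},
]

text, cited = answer("does the SECURE Act mean my trust needs review", corpus)
# Firm B was never retrieved in step 3, so it cannot be cited in step 5.
```

Notice that Firm B's generic content scores zero relevance, never enters the answer, and earns no citation — the Consensus Problem in miniature.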

The signals that determine retrieval are different from the signals that determine ranking. Google rankings reward backlinks and keyword optimisation. AI retrieval rewards:

  • Entity clarity (does AI know what you are?)
  • Information Gain (do you add value beyond consensus?)
  • E-E-A-T signals (Experience, Expertise, Authoritativeness, Trust)
  • Extractability (can AI pull a clean answer from your content?)

The 109 points measure these signals across 8 operational zones.


THE 8 INSPECTION ZONES

We organise the 109 data points into 8 zones. Each zone answers a specific question about your AI visibility.

Zone 1: The Identity Layer (17 Points)


The Question: Can AI identify who you are?

Most websites are just words on a page. To AI, "John Smith, Accountant" is just a string of characters. Without structured data, AI has to guess what you are, who works there, and what you're qualified to do.

What we measure:

  • Schema architecture (Organization, Person, ProfessionalService, LocalBusiness)
  • Entity connections (founder → firm → expertise → location)
  • SameAs properties (does your website link to your LinkedIn, Crunchbase, Google Business?)
  • Entity disambiguation (does AI confuse you with another business of the same name?)
  • Knowledge Graph presence (are you in Google's Knowledge Graph?)

What we typically find: Most professional services have zero schema or only basic Organization markup. AI literally doesn't know they're a law firm vs a blog about law.
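As an illustration of what the Identity Layer checks for, here's a minimal sketch of a JSON-LD entity graph, built in Python: an organisation linked to its founder, with sameAs links for disambiguation. Every name, URL, and credential below is a placeholder, not a recommendation of specific values.

```python
import json

# Sketch of the JSON-LD entity graph Zone 1 looks for: an organisation
# linked to a founding Person, with sameAs links for disambiguation.
# All names, URLs, and credentials below are placeholders.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "ProfessionalService",
            "@id": "https://example-firm.com/#org",
            "name": "Example Accounting",
            "areaServed": "Melbourne",
            "founder": {"@id": "https://example-firm.com/#founder"},
            "sameAs": [
                "https://www.linkedin.com/company/example-accounting",
                "https://www.crunchbase.com/organization/example-accounting",
            ],
        },
        {
            "@type": "Person",
            "@id": "https://example-firm.com/#founder",
            "name": "Jane Citizen",
            "jobTitle": "Principal",
            "knowsAbout": ["Tax planning for medical professionals"],
        },
    ],
}

# Embedded in the page head as: <script type="application/ld+json">...</script>
print(json.dumps(entity_graph, indent=2))
```

The @id cross-reference is what turns two isolated blobs of markup into an entity connection (founder → firm) that a machine can follow.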

Zone 2: Information Gain & Content Quality (13 Points)


The Question: Do you add new value, or just repeat what everyone else says?

Google's "Hidden Gems" algorithm update and AI citation logic both penalise derivative content. If your article on Estate Planning says the same thing as 10,000 other articles, AI has no reason to cite yours specifically.

What we measure:

  • Original data (do you cite your own statistics, case studies, research?)
  • Specific claims (numbers, dates, outcomes — not vague generalisations)
  • Contrarian perspectives (do you challenge consensus where appropriate?)
  • Data density (ratio of facts to filler)
  • Recency (is this from 2025 or 2019?)
  • Author attribution (who wrote this, and why should we trust them?)

What we typically find: 80% of professional service content is consensus content. It's not wrong, but it's not citable either.

Zone 3: The Black Box Audit (10 Points)


The Question: Where is your expertise trapped?

This is the most common failure point for established experts. Your best thinking is locked in formats AI can't easily access:

  • 200 podcast episodes with no transcripts
  • Conference presentations as PDFs
  • Client workshops that were never documented
  • LinkedIn posts that your website doesn't reference
  • Videos without structured captions

We call this The Black Box Effect. You have a Ferrari engine under the hood, but it's sealed behind bodywork AI can't see through.

What we measure:

  • Transcript accessibility (are video/audio insights converted to text?)
  • PDF traps (is your best content locked in downloadable PDFs?)
  • Social silos (is your thought leadership stuck on LinkedIn?)
  • Expertise gap (the delta between what you know and what AI can see)

What we typically find: The average expert has 70%+ of their best thinking in inaccessible formats.

Zone 4: Share of Answer (20 Points)


The Question: When AI recommends someone in your category, is it you?

We run live queries across ChatGPT, Perplexity, Gemini, and Claude to calculate your Share of Answer.

What we measure:

  • Direct recommendation rate (how often are you the #1 answer?)
  • Competitor displacement (who's winning the queries you should own?)
  • Citation context (when mentioned, is it positive or neutral?)
  • Query coverage (across how many relevant questions do you appear?)
  • Platform variance (are you strong on Perplexity but invisible on ChatGPT?)

What we typically find: Most businesses have 0% Share of Answer. They've never been recommended by AI because AI has never been given a reason to recommend them.
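A simple way to picture the metric: Share of Answer is the fraction of platform/query tests in which you're cited at all. The sketch below uses invented results for a single query across four platforms.

```python
# Sketch of a Share of Answer calculation: the fraction of
# (platform, query) tests in which a business was cited at all.
# The results data below is invented for illustration.

def share_of_answer(results, business):
    """results: one dict per (platform, query) test, each with a
    'cited' list of the businesses that answer mentioned."""
    hits = sum(1 for r in results if business in r["cited"])
    return hits / len(results)

results = [
    {"platform": "ChatGPT",    "query": "accountant for surgeons Melbourne", "cited": ["Firm A"]},
    {"platform": "Perplexity", "query": "accountant for surgeons Melbourne", "cited": ["Firm A", "Firm C"]},
    {"platform": "Gemini",     "query": "accountant for surgeons Melbourne", "cited": ["Firm A"]},
    {"platform": "Claude",     "query": "accountant for surgeons Melbourne", "cited": ["Firm C"]},
]

share_of_answer(results, "Firm A")  # cited in 3 of 4 tests -> 0.75
share_of_answer(results, "Firm B")  # cited in 0 of 4 tests -> 0.0
```

The real analysis runs many queries per platform and also records citation context and position, but the headline number is this ratio.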

Zone 5: E-E-A-T Trust Signals (14 Points)


The Question: Can AI verify you're a legitimate expert?

Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trust) has been adopted by AI systems as a trust filter. The first E — Experience — is critical in 2026. AI looks for evidence that you've actually done the work, not just written about it.

What we measure:

  • Digital exhaust (conference photos, team pages, office location, client interactions)
  • Author authority (are your authors verified entities with cross-domain reputation?)
  • Credential verification (can AI confirm your qualifications?)
  • Review sentiment (not just stars — the keywords used: "solved my problem," "saved me money")
  • External citations (are you referenced by other trusted sources?)
  • Proof of life (signals that distinguish you from an AI-generated spam site)

What we typically find: Many firms have strong credentials but no machine-readable way for AI to verify them.

Zone 6: Answer Extractability (12 Points)


The Question: Can AI pull a clean answer from your content?

AI builds answers by extracting "knowledge chunks." If your expertise is buried in a 400-word paragraph with no clear structure, AI skips it. It prefers content it can cleanly quote.

What we measure:

  • NLP formatting (do your headers match actual questions people ask?)
  • Table structure (AI loves tables — they're easy to extract)
  • Direct answer positioning (is the answer in the first sentence or buried at the end?)
  • FAQ structure (are common questions answered in FAQ format with proper schema?)
  • List structure (is complex information organised in parseable lists?)

What we typically find: Most content is written for humans to read linearly, not for AI to extract specific answers. Both matter.
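For the FAQ checkpoint specifically, this is roughly the shape of FAQPage structured data we look for, sketched in Python. The question is borrowed from the Firm A example; the answer text is a placeholder showing the structure, not real advice.

```python
import json

# Sketch of FAQPage structured data for one question/answer pair.
# The answer text is a placeholder illustrating the shape AI can
# extract, not actual professional advice.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What are the tax implications of buying into a medical practice?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Direct answer in the first sentence, detail afterwards.
                "text": "A direct one-sentence answer goes first, followed by "
                        "the supporting detail AI can quote or skip.",
            },
        }
    ],
}

print(json.dumps(faq, indent=2))
```

The header-matches-a-real-question rule and the answer-first rule both show up here: the Question name is phrased the way people ask it, and the extractable answer leads.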

Zone 7: The Local Vector (8 Points)


The Question: Do you own your geography?

For professional services, "best" usually means "best near me." We test your connection to specific geographic signals.

What we measure:

  • Service area schema (have you explicitly defined where you work?)
  • Local relevance (do you reference local laws, landmarks, market conditions?)
  • Geographic consistency (does your address match across platforms?)
  • Local citations (are you listed in local directories with consistent NAP?)
  • Local authority (do you appear in local-specific queries?)

What we typically find: Firms serving multiple locations often have no clear geographic identity to AI.
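The NAP consistency checkpoint can be sketched as a small comparison after light normalisation. The listings below are invented, and this toy normaliser deliberately does not handle abbreviations like "St" vs "Street" — a real check needs to.

```python
# Sketch of a NAP (name, address, phone) consistency check across
# directory listings. Listings are invented for illustration.

def normalise(value):
    """Lowercase and drop punctuation/whitespace so trivial formatting
    differences don't count as mismatches."""
    return "".join(ch for ch in value.lower() if ch.isalnum())

def nap_consistent(listings):
    """True only if every listing agrees on all three NAP fields."""
    return all(
        len({normalise(listing[field]) for listing in listings}) == 1
        for field in ("name", "address", "phone")
    )

listings_ok = [
    {"name": "Example Accounting", "address": "1 Collins St, Melbourne", "phone": "(03) 9000 0000"},
    {"name": "example accounting", "address": "1 Collins St Melbourne",  "phone": "03 9000 0000"},
]
listings_bad = listings_ok + [
    {"name": "Example Accounting", "address": "1 Collins St, Melbourne", "phone": "03 9000 1111"},
]

nap_consistent(listings_ok)   # True  - same NAP, different formatting
nap_consistent(listings_bad)  # False - one listing has a stray phone number
```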

Zone 8: Platform Diversification (15 Points)


The Question: Are you visible across all AI platforms, or just one?

A business can dominate Perplexity (which searches the live web) and be invisible on ChatGPT (which relies heavily on training data). You need presence across the ecosystem.

What we measure:

  • Model variance (testing visibility across LLMs with different knowledge cutoffs)
  • Retrieval source analysis (are you found via Bing, Google, or direct training?)
  • Platform-specific signals (what each platform weights differently)
  • Training data presence (do you appear in datasets AI models likely trained on?)
  • Consistency across platforms (does AI say the same thing about you everywhere?)

What we typically find: Most businesses have visibility on 0-1 platforms. Being strong on Perplexity means nothing if your clients use ChatGPT.


THE SCORING FRAMEWORK

How We Grade

Zone                          Data Points   Weight   What It Measures
1. Identity Layer                  17        16%     Schema, entity structure, machine-readability
2. Information Gain                13        12%     Content quality, originality, data density
3. Black Box Audit                 10         9%     Trapped expertise, format accessibility
4. Share of Answer                 20        18%     Citation rate across platforms
5. E-E-A-T Trust                   14        13%     Experience signals, credential verification
6. Extractability                  12        11%     Content structure, snippet-readiness
7. Local Vector                     8         7%     Geographic signals, local authority
8. Platform Diversification        15        14%     Cross-platform presence, consistency
TOTAL                             109       100%

The Grading Scale

Grade   Score    What It Means
A       90-109   Primary Source. AI treats you as a definitive authority. You're the "Wikipedia" of your niche.
B       75-89    Contender. You appear in the conversation but often as a secondary option.
C       55-74    Participant. You exist, but you're not driving the recommendation.
D       35-54    At Risk. You're relying on legacy SEO signals that are fading.
F       0-34     Invisible. To AI, you effectively don't exist.

Industry average for professional services: 28 (F).

Not because they're bad at what they do. Because nobody told them AI search works differently.
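Putting the two tables together: a total score is the sum of eight zone subscores (maximum 109), and the letter grade follows the cutoffs above. The firm_a subscores below are invented for illustration; they happen to sum to Firm A's 71.

```python
# Grading sketch: total score is the sum of eight zone subscores
# (max 109); the letter grade follows the cutoffs in the scale above.
# The firm_a subscores are invented for illustration.

ZONE_MAX = {
    "identity": 17, "information_gain": 13, "black_box": 10,
    "share_of_answer": 20, "eeat": 14, "extractability": 12,
    "local": 8, "platforms": 15,
}  # sums to 109

GRADES = [(90, "A"), (75, "B"), (55, "C"), (35, "D"), (0, "F")]

def grade(subscores):
    """Return (total score, letter grade) for a set of zone subscores."""
    assert all(subscores[zone] <= cap for zone, cap in ZONE_MAX.items())
    total = sum(subscores.values())
    letter = next(g for cutoff, g in GRADES if total >= cutoff)
    return total, letter

firm_a = {
    "identity": 14, "information_gain": 9, "black_box": 6,
    "share_of_answer": 15, "eeat": 10, "extractability": 8,
    "local": 4, "platforms": 5,
}
grade(firm_a)  # (71, "C")
```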


WHAT THE 109 POINTS PRODUCE

The Free AI Visibility Report

When we run your business through the 109-point analysis, you get:

1. Your AI Visibility Score (0-109)

A definitive number showing exactly where you stand. Broken down by zone so you can see where you're strong and where you're invisible.

2. Share of Answer Map

Your citation rate vs competitors across all tested platforms and queries. A visual breakdown of who's winning and where.

3. The Intent Gap

The high-value questions your ideal clients are asking that you're not positioned to answer. Each query includes estimated monthly volume, current winner, and revenue opportunity.

4. The Black Box Inventory

Where your expertise is trapped. Specific assets (podcasts, PDFs, videos) that should be converted to machine-readable content.

5. The Priority Roadmap

What to fix first, second, third — ranked by impact and effort. Not "improve your SEO." Specific actions: "Create Knowledge Entry answering [exact question] to capture [specific opportunity]."


WHAT WE DON'T MEASURE (AND WHY)

We don't predict algorithm changes.

AI platforms evolve constantly. We measure current state and build for structural advantage, not tricks that break next month.

We don't guarantee specific rankings.

AI doesn't work like Google's "10 blue links." There's no position #1 to promise. We measure and improve Share of Answer.

We don't replace human judgment.

The diagnostic shows you what's broken. Fixing it still requires strategy — deciding which opportunities to pursue, what voice to use, how to position.

We don't pretend all platforms matter equally.

For your specific business, ChatGPT might matter more than Claude (or vice versa). The diagnostic shows you where your clients are asking questions.


WHY 109 POINTS? (THE HONEST ANSWER)

We didn't start with 109. We started with 50.

After 100 audits, we added 30 more (things we missed that cost clients visibility).

After 200 audits, we refined it to 109 (removing redundancies, adding critical gaps).

109 is what 1,100+ projects and 20 years of positioning work taught us.

Could we make it 150 points? Sure. But we'd be adding noise, not signal.

Could competitors copy our 109 points? They could try. But they don't have:

  • 20 years of brand positioning experience to understand why signals matter
  • 1,100+ project track record to verify what actually predicts visibility
  • The AI agent infrastructure to actually fix what we find
  • Quarterly testing across platforms to catch when signals change

The 109 points aren't the secret. The experience applying them is.


THE CONNECTION TO WHAT WE BUILD

The 109-Point Methodology is diagnostic. It tells you where you're invisible and why.

The treatment is the PG Identity Engine — the system we use to:

  • Install Entity Infrastructure (Zone 1 fixes)
  • Produce content that adds Information Gain (Zone 2 fixes)
  • Extract expertise from the Black Box (Zone 3 fixes)
  • Engineer answers for Intent Gap queries (Zone 4, 6 fixes)
  • Build the E-E-A-T signals AI needs (Zone 5 fixes)

The methodology ensures we know exactly what to build. The Engine ensures we build it at the quality AI requires.

Learn more about the PG Identity Engine →


GET YOUR FREE AI VISIBILITY REPORT

Want to see how the 109-point analysis applies to your business?

The report is free. Takes about an hour to present. You'll see:

  • Your current AI Visibility Score (0-109)
  • Which competitors are being recommended instead of you
  • The specific queries where you're invisible
  • What it's likely costing you in missed opportunities
  • The priority roadmap for fixing it

No obligation. No pitch disguised as an audit.

If the 109-point analysis shows you're already well-positioned, we'll tell you. If it shows this isn't the right fit, we'll tell you that too.

Get Your Free AI Visibility Report →


FREQUENTLY ASKED QUESTIONS

How is this different from an SEO audit?

SEO audits measure Google rankings and technical health. The 109-Point Methodology measures whether AI platforms will recommend you — a fundamentally different question that requires different data.

Why 109 points specifically?

We started with 50. After hundreds of audits, we added checks we were missing and removed redundancies. 109 is the number that actually predicts AI visibility; we didn't round it to something neater, because rounding would mean throwing away signal.

Can I run this myself?

You can check some things manually — ask ChatGPT about your industry and see if you appear. But the full methodology requires testing across platforms, competitor analysis, schema validation, and opportunity modelling that would take 15-20 hours to replicate.

What if I score well?

If you score above 75, you're ahead of 90% of businesses. We'll show you optimisation opportunities, but you may not need our ongoing program. We'll tell you honestly.

How often should this be run?

We recommend quarterly. AI platforms update constantly, competitors improve, and your own content ages. Quarterly audits catch drift before it costs you visibility.

What industries does this work for?

Any high-trust industry where expertise matters: legal, financial services, healthcare, architecture, consulting, skilled trades. If your clients make high-stakes decisions based on your expertise, the methodology applies.

The 109-Point Methodology is proprietary to Probably Genius. It reflects our understanding of how AI platforms evaluate and recommend businesses — developed through 1,100+ digital builds and continuous testing across platforms.