THE AI INTEGRITY STANDARD™

  1. What Is the AI Integrity Standard?
  2. Why Content Quality Scoring Matters
  3. The Five Dimensions
  4. The Scoring Process
  5. The 85+ Threshold
  6. Example Scorecard
  7. What Fails the Integrity Gate
  8. The Integrity Gate in Practice
  9. Connection to Content Quality Standard
  10. Frequently Asked Questions
  11. The Bottom Line
The Concept

What Is the AI Integrity Standard?

The AI Integrity Standard™ is a proprietary 100-point scoring system that measures content readiness for AI extraction across five dimensions: Intent Match, Information Gain, Entity Density, Citation Quality, and Extractability.

Every piece of content we produce is scored against this standard before publication. If it doesn't reach our threshold of 85/100, it doesn't publish. No exceptions.

This is how we produce at scale without producing slop.


Why Content Quality Scoring Matters

Here's the uncomfortable truth about AI-generated content:

50% of new web content is now AI-generated.

But only 18% of AI citations go to AI-generated content.

The other 82% goes to human-quality content that meets higher editorial standards.

Most agencies use AI to produce volume. They generate "consensus content" — generic text that repeats what's already on the internet. AI models absorb this as training data, but they don't cite it as a source.

The distinction:

  • Content AI learns from → Training data (no attribution)
  • Content AI cites → Source material (you get credit)

The AI Integrity Standard ensures every piece we produce is source-worthy, not just noise.


The Five Dimensions

We score content across five dimensions, each measuring a different aspect of AI-readiness.

1. Intent Match (20 points)


What it measures: Does this content actually answer the question someone asked?

AI doesn't reward content that's tangentially related to a query. It rewards content that directly addresses intent. If someone asks "Is a non-compete enforceable in Texas?" and your content talks generally about employment law, you've missed the intent.

Scoring criteria:

Score   Criteria
18-20   Directly answers the specific question in the first 100 words
14-17   Answers the question but buries the answer in the middle
10-13   Addresses the topic but doesn't answer the specific question
0-9     Tangentially related or off-topic

Why it matters: AI reformulates queries to find the best answer. Content that matches reformulated intent gets retrieved. Content that misses intent gets skipped.

2. Information Gain (25 points)


What it measures: Does this content add something new that AI doesn't already have?

This is the most heavily weighted dimension because it's the most important. AI already has access to millions of articles saying the same generic things. It doesn't need another "5 Tips for Better Marketing."

Information Gain means: specific data, original insights, real examples, proprietary frameworks, or expert perspectives that AI can't get elsewhere.

Scoring criteria:

Score   Criteria
22-25   Contains original data, case studies with specific numbers, or proprietary methodology
17-21   Contains expert insights or perspectives not widely published
12-16   Synthesizes existing information in a useful new way
6-11    Rephrases commonly available information
0-5     Generic content indistinguishable from hundreds of similar articles

Why it matters: AI cites sources that make its answers better. If your content doesn't improve AI's ability to answer questions, it has no reason to cite you.

3. Entity Density (20 points)


What it measures: Is this content properly attributed to a verified entity?

Content floating without clear authorship is less trustworthy than content clearly attributed to a known expert or organization. Entity Density measures how well the content connects to verified entities — the author, the organization, related experts, cited sources.

Scoring criteria:

Score   Criteria
18-20   Clear author with credentials, organization attribution, schema markup, cited experts
14-17   Author and organization clear, some schema, limited external citations
10-13   Basic attribution but no schema or credential verification
5-9     Unclear authorship or generic "admin" attribution
0-4     Anonymous or unattributed content

Why it matters: E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) signals depend on entity clarity. AI trusts attributed expertise more than anonymous content.
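The entity signals this dimension rewards are typically expressed as schema.org Article markup. Below is a minimal sketch of that markup built as a Python dict; every name and value (author, firm, URL) is a placeholder for illustration, not our actual schema.

```python
import json

# Minimal schema.org Article markup sketching the entity signals the
# Entity Density dimension looks for: a named author with credentials,
# a publishing organization, and cited sources. All values are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                  # clear authorship, not "admin"
        "jobTitle": "Employment Attorney",   # credential signal
        "worksFor": {"@type": "Organization", "name": "Example Firm"},
    },
    "publisher": {"@type": "Organization", "name": "Example Firm"},
    "citation": ["https://example.gov/primary-source"],  # cited experts/sources
}

print(json.dumps(article_schema, indent=2))
```

Content with this kind of markup scores in the 18-20 band; the same article with an anonymous byline and no schema would fall to single digits.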

4. Citation Quality (15 points)


What it measures: Are claims supported by credible, verifiable sources?

AI models are increasingly cautious about unsupported claims. Content that cites authoritative sources — government data, peer-reviewed research, established industry bodies — signals higher reliability than content making claims without evidence.

Scoring criteria:

Score   Criteria
14-15   Multiple authoritative citations (government, academic, primary sources)
11-13   Some authoritative citations plus industry sources
8-10    Industry sources and secondary references
4-7     Minimal citations, mostly self-referential
0-3     No citations or unreliable sources

Why it matters: AI cross-references claims. Content backed by authoritative sources is more likely to be cited than content making unsupported assertions.

5. Extractability (20 points)


What it measures: Can AI actually pull a clean answer from this content?

Brilliant insights buried in wandering prose are invisible to AI. Extractability measures whether the content is structured for AI to find and extract the key information — clear headings, direct statements, summary sections, FAQ formatting.

Scoring criteria:

Score   Criteria
18-20   Clear structure, direct answers, summary sections, FAQ elements, scannable format
14-17   Good structure with some direct statements, reasonably scannable
10-13   Moderate structure but key points buried in paragraphs
5-9     Wall of text with no clear structure
0-4     Completely unstructured, stream-of-consciousness

Why it matters: AI creates answers by extracting snippets. If your insights aren't extractable, they can't become the answer.

The Scoring Process

Every piece of content passes through our Integrity Gate — the quality checkpoint where scoring happens.

Step 1: Automated Pre-Score

Our system runs an initial assessment against the five dimensions, flagging potential issues before human review.

Step 2: Human Verification

A human reviewer validates the automated score, checking for nuances the system might miss — particularly around voice integrity and expertise accuracy.

Step 3: Threshold Check

The final score is calculated. If it's below 85, the content is flagged for revision.

Step 4: Revision or Rejection

Below-threshold content is either revised to meet the standard or rejected entirely. We don't publish content that doesn't pass.
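The four steps above reduce to a simple decision rule: sum the five dimension scores and compare against the thresholds. The sketch below shows that rule in Python; the dimension weights come straight from the five dimensions section, but the function and variable names are illustrative, not our actual system.

```python
# Illustrative sketch of the Integrity Gate threshold check.
# Weights match the five dimensions above; names are hypothetical.
DIMENSION_MAX = {
    "intent_match": 20,
    "information_gain": 25,
    "entity_density": 20,
    "citation_quality": 15,
    "extractability": 20,
}  # maximum points sum to 100

THRESHOLD = 85

def integrity_gate(scores: dict) -> str:
    """Return the gate decision for a set of dimension scores."""
    for dim, points in scores.items():
        if not 0 <= points <= DIMENSION_MAX[dim]:
            raise ValueError(f"{dim} score out of range: {points}")
    total = sum(scores.values())
    if total >= THRESHOLD:
        return "publish"
    if total >= 70:          # 70-84: fixable with targeted revision
        return "revise"
    return "reject"          # below 70: fundamental issues, not worth revising
```

With the dimension scores from the example scorecard (19, 23, 17, 14, 18), the total is 91 and the gate returns "publish".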


The 85+ Threshold

Why 85? Why not 70 or 90?

Below 70: Content likely has fundamental issues — wrong intent, no information gain, or unextractable structure. Not worth revising.

70-84: Content has potential but needs work. Usually fixable with targeted revision.

85-89: Solid content that meets professional standards. Publishable.

90-95: Excellent content with strong information gain and perfect structure. Citation-worthy.

96-100: Exceptional content that could become a primary source on the topic.

We set the threshold at 85 because that's where content transitions from "adequate" to "citation-worthy." We're not trying to publish adequate content. We're trying to publish content AI will cite.
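The bands above can be expressed as a small classifier, shown here as a hedged Python sketch (the function name and band labels are illustrative shorthand for the descriptions in this section):

```python
# Hypothetical helper mapping a total score to the quality bands
# described above; labels mirror the document's wording.
def score_band(total: int) -> str:
    if not 0 <= total <= 100:
        raise ValueError("score must be 0-100")
    if total >= 96:
        return "exceptional"       # could become a primary source
    if total >= 90:
        return "citation-worthy"   # excellent, strong information gain
    if total >= 85:
        return "publishable"       # meets professional standards
    if total >= 70:
        return "needs revision"    # potential, fixable with targeted work
    return "not worth revising"    # fundamental issues
```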


Example Scorecard

Here's what a real scorecard looks like for a Knowledge Entry we produced:

Article: "Is a Non-Compete Agreement Enforceable in Texas After the FTC Rule?"

Dimension          Score    Notes
Intent Match       19/20    Directly answers the question in first paragraph
Information Gain   23/25    Contains specific FTC rule analysis + Texas case law examples
Entity Density     17/20    Author attributed with credentials, organization linked, schema complete
Citation Quality   14/15    FTC primary source, Texas court citations, Bar Association reference
Extractability     18/20    Clear structure, FAQ section, summary box
TOTAL              91/100   ✓ Exceeds threshold — approved for publication

What made this score high:

  • Answered a specific, timely question (not generic "understanding non-competes")
  • Included original analysis of how new FTC rules interact with Texas state law
  • Cited primary sources (FTC rule, court cases)
  • Structured for extraction with clear headings and summary

What Fails the Integrity Gate

Common reasons content scores below 85:

  1. "Consensus content" — Restating what's already widely published without adding new insight. Information Gain score: 6-10.
  2. "Thought leadership fluff" — Opinion pieces without supporting evidence or specific examples. Citation Quality score: 3-5.
  3. "Keyword-stuffed prose" — Content written for search engines circa 2015, not for AI extraction. Extractability score: 5-8.
  4. "Anonymous expertise" — No clear author, no credentials, no entity connection. Entity Density score: 4-7.
  5. "Adjacent answers" — Content that talks around a topic but never directly answers the question. Intent Match score: 8-12.

When we reject content, we explain exactly which dimensions failed and why. Revision targets the specific gaps.


The Integrity Gate in Practice

The Integrity Gate isn't just a scoring system — it's a quality culture.

For clients: You can trust that nothing publishes under your name that doesn't meet professional standards. No AI slop. No hallucinated claims. No generic content.

For the production system: Every agent in our Multi-Model Swarm knows the standard they're working toward. The scoring criteria shape how content is created, not just how it's evaluated.

For AI visibility: Content that passes the Integrity Gate is designed for citation, not just publication. The five dimensions map directly to what AI evaluates when deciding what to cite.


Connection to Content Quality Standard

The AI Integrity Standard is the scoring methodology. The Content Quality Standard is the public-facing explanation of our quality commitment.

Every client site displays a Content Quality Standard badge linking to our methodology. This signals to both humans and AI that the content has been verified against a rigorous standard — not just generated and published.

The badge serves as:

  • Trust signal for visitors — "This content is verified"
  • Authority signal for AI — "This content meets editorial standards"
  • Differentiation signal — "This isn't AI slop"


FREQUENTLY ASKED QUESTIONS

Why these five dimensions specifically?

They map to what AI evaluates when deciding whether to cite content. Intent Match = relevance. Information Gain = value-add. Entity Density = trustworthiness. Citation Quality = verifiability. Extractability = accessibility. Together they cover the complete citation decision.

Who does the scoring?

Initial scoring is automated. Human review validates and adjusts. Final decisions are made by humans, not machines. This hybrid approach combines scale with judgment.

Can I see my content scores?

Yes. Monthly reports include aggregate scores and trends. We flag any content that scored below 90 so you understand where quality is strong versus adequate.

What happens to content that fails?

It doesn't publish. Below-threshold content either gets revised to meet the standard or is rejected entirely. We never publish content that fails the Integrity Gate.

How is this different from traditional editorial review?

Traditional review asks "Is this good enough?" The Integrity Gate asks "Will AI cite this?" Different questions, different criteria. Good writing isn't automatically AI-ready. AI-ready content isn't automatically good writing. We optimize for both.

Does a higher score guarantee AI citation?

No. Scoring measures readiness for citation, not guaranteed citation. A 95-scoring article on a low-volume topic may get fewer citations than an 86-scoring article on a high-demand topic. But the score determines citation-worthiness whenever the topic comes up.

Can I request a specific score threshold?

The 85 threshold is non-negotiable — it's what defines quality work. However, we can prioritize resources toward topics where you want to push toward 90+ scores.



The Bottom Line

Volume is easy. Quality is hard. Citation-worthy quality is rare.

Most agencies use AI to produce more content faster. They measure success by output volume. The result is a flood of generic content that AI absorbs but never cites.

We measure success by citation-worthiness. Every piece is scored. Every piece must pass. Every piece is designed to be the answer AI gives, not just another page it crawls.

The 85+ threshold isn't arbitrary. It's the line between "published" and "cited."


Want to see how your current content scores?

Book Your 109-Point Diagnostic →



Related Methodology

The Client Codex — Your verified knowledge base that the Integrity Gate checks against
Multi-Model Philosophy — How our AI agents are trained to meet the standard
The Golden Thread — Entity connections that boost Entity Density scores
Content Quality Standard — The public-facing quality commitment