The AI Integrity Standard™ is a proprietary 100-point scoring system that measures content readiness for AI extraction across five dimensions: Intent Match, Information Gain, Entity Density, Citation Quality, and Extractability.
Every piece of content we produce is scored against this standard before publication. If it doesn't reach our threshold of 85/100, it doesn't publish. No exceptions.
This is how we produce at scale without producing slop.
Here's the uncomfortable truth about AI-generated content:
50% of new web content is now AI-generated.
But only 18% of AI citations go to AI-generated content.
The other 82% goes to human-quality content that meets higher editorial standards.

Most agencies use AI to produce volume. They generate "consensus content" — generic text that repeats what's already on the internet. AI models absorb this as training data, but they don't cite it as a source.
The distinction: consensus content gets absorbed; source-worthy content gets cited. The AI Integrity Standard ensures every piece we produce is source-worthy, not just noise.
We score content across five dimensions, each measuring a different aspect of AI-readiness.
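To make the structure concrete, here is a minimal sketch of the five dimensions and their point weights, taken from the scoring tables below. The names and constants are illustrative, not our internal tooling.

```python
# Maximum points per dimension, per the scoring tables that follow.
DIMENSION_WEIGHTS = {
    "intent_match": 20,
    "information_gain": 25,  # the most heavily weighted dimension
    "entity_density": 20,
    "citation_quality": 15,
    "extractability": 20,
}

PUBLISH_THRESHOLD = 85  # minimum total required to publish

# The five dimensions together cover the full 100-point scale.
assert sum(DIMENSION_WEIGHTS.values()) == 100
```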
Dimension 1: Intent Match (20 points)
What it measures: Does this content actually answer the question someone asked?
AI doesn't reward content that's tangentially related to a query. It rewards content that directly addresses intent. If someone asks "Is a non-compete enforceable in Texas?" and your content talks generally about employment law, you've missed the intent.
Scoring criteria:
| Score | Criteria |
|---|---|
| 18-20 | Directly answers the specific question in the first 100 words |
| 14-17 | Answers the question but buries the answer in the middle |
| 10-13 | Addresses the topic but doesn't answer the specific question |
| 0-9 | Tangentially related or off-topic |
Why it matters: AI reformulates queries to find the best answer. Content that matches reformulated intent gets retrieved. Content that misses intent gets skipped.
Dimension 2: Information Gain (25 points)
What it measures: Does this content add something new that AI doesn't already have?
This is the most heavily weighted dimension because it's the most important. AI already has access to millions of articles saying the same generic things. It doesn't need another "5 Tips for Better Marketing."
Information Gain means: specific data, original insights, real examples, proprietary frameworks, or expert perspectives that AI can't get elsewhere.
Scoring criteria:
| Score | Criteria |
|---|---|
| 22-25 | Contains original data, case studies with specific numbers, or proprietary methodology |
| 17-21 | Contains expert insights or perspectives not widely published |
| 12-16 | Synthesizes existing information in a useful new way |
| 6-11 | Rephrases commonly available information |
| 0-5 | Generic content indistinguishable from hundreds of similar articles |
Why it matters: AI cites sources that make its answers better. If your content doesn't improve AI's ability to answer questions, it has no reason to cite you.
Dimension 3: Entity Density (20 points)
What it measures: Is this content properly attributed to a verified entity?
Content floating without clear authorship is less trustworthy than content clearly attributed to a known expert or organization. Entity Density measures how well the content connects to verified entities — the author, the organization, related experts, cited sources.
Scoring criteria:
| Score | Criteria |
|---|---|
| 18-20 | Clear author with credentials, organization attribution, schema markup, cited experts |
| 14-17 | Author and organization clear, some schema, limited external citations |
| 10-13 | Basic attribution but no schema or credential verification |
| 5-9 | Unclear authorship or generic "admin" attribution |
| 0-4 | Anonymous or unattributed content |
Why it matters: E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) signals depend on entity clarity. AI trusts attributed expertise more than anonymous content.
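To make "schema markup" concrete, the sketch below shows minimal article-level JSON-LD connecting a piece to a named author and organization, the kind of entity attribution this dimension rewards. The author and organization values are placeholders, not a real client.

```python
import json

# Minimal JSON-LD sketch tying an article to a credentialed author and
# a publishing organization. All names below are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Is a Non-Compete Agreement Enforceable in Texas After the FTC Rule?",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",          # placeholder author
        "jobTitle": "Employment Attorney",
    },
    "publisher": {
        "@type": "Organization",
        "name": "Example Law Firm",  # placeholder organization
    },
}

print(json.dumps(article_schema, indent=2))
```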
Dimension 4: Citation Quality (15 points)
What it measures: Are claims supported by credible, verifiable sources?
AI models are increasingly cautious about unsupported claims. Content that cites authoritative sources — government data, peer-reviewed research, established industry bodies — signals higher reliability than content making claims without evidence.
Scoring criteria:
| Score | Criteria |
|---|---|
| 14-15 | Multiple authoritative citations (government, academic, primary sources) |
| 11-13 | Some authoritative citations plus industry sources |
| 8-10 | Industry sources and secondary references |
| 4-7 | Minimal citations, mostly self-referential |
| 0-3 | No citations or unreliable sources |
Why it matters: AI cross-references claims. Content backed by authoritative sources is more likely to be cited than content making unsupported assertions.
Dimension 5: Extractability (20 points)
What it measures: Can AI actually pull a clean answer from this content?
Brilliant insights buried in wandering prose are invisible to AI. Extractability measures whether the content is structured for AI to find and extract the key information — clear headings, direct statements, summary sections, FAQ formatting.
Scoring criteria:
| Score | Criteria |
|---|---|
| 18-20 | Clear structure, direct answers, summary sections, FAQ elements, scannable format |
| 14-17 | Good structure with some direct statements, reasonably scannable |
| 10-13 | Moderate structure but key points buried in paragraphs |
| 5-9 | Wall of text with no clear structure |
| 0-4 | Completely unstructured, stream-of-consciousness |
Why it matters: AI creates answers by extracting snippets. If your insights aren't extractable, they can't become the answer.
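As a rough illustration of what an automated extractability pre-check can look for, the sketch below counts a few cheap structural signals in markdown. These heuristics are hypothetical, not our production scorer; the rubric above is applied by automated scoring plus human review.

```python
import re

def extractability_signals(markdown_text: str) -> dict:
    """Count simple structural signals a pre-check might use (illustrative only)."""
    paragraphs = [p for p in markdown_text.split("\n\n") if p.strip()]
    return {
        "headings": len(re.findall(r"^#{1,6} ", markdown_text, re.MULTILINE)),
        "list_items": len(re.findall(r"^\s*[-*] ", markdown_text, re.MULTILINE)),
        "has_faq": bool(re.search(r"\bFAQ\b", markdown_text, re.IGNORECASE)),
        "avg_paragraph_words": (
            sum(len(p.split()) for p in paragraphs) / len(paragraphs)
            if paragraphs
            else 0
        ),
    }
```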
Every piece of content passes through our Integrity Gate — the quality checkpoint where scoring happens.
Step 1: Automated Pre-Score
Our system runs an initial assessment against the five dimensions, flagging potential issues before human review.
Step 2: Human Verification
A human reviewer validates the automated score, checking for nuances the system might miss — particularly around voice integrity and expertise accuracy.
Step 3: Threshold Check
The final score is calculated. If it's below 85, the content is flagged for revision.
Step 4: Revision or Rejection
Below-threshold content is either revised to meet the standard or rejected entirely. We don't publish content that doesn't pass.
Why 85? Why not 70 or 90?
Below 70: Content likely has fundamental issues — wrong intent, no information gain, or unextractable structure. Not worth revising.
70-84: Content has potential but needs work. Usually fixable with targeted revision.
85-89: Solid content that meets professional standards. Publishable.
90-95: Excellent content with strong information gain and perfect structure. Citation-worthy.
96-100: Exceptional content that could become a primary source on the topic.
We set the threshold at 85 because that's where content transitions from "adequate" to "citation-worthy." We're not trying to publish adequate content. We're trying to publish content AI will cite.
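The bands above translate directly into a lookup; a minimal sketch:

```python
def score_band(total: int) -> str:
    """Map a 0-100 total to the bands described above."""
    if total >= 96:
        return "exceptional: could become a primary source"
    if total >= 90:
        return "excellent: citation-worthy"
    if total >= 85:
        return "solid: publishable"
    if total >= 70:
        return "has potential: targeted revision"
    return "fundamental issues: not worth revising"
```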
Here's what a real scorecard looks like for a Knowledge Entry we produced:
Article: "Is a Non-Compete Agreement Enforceable in Texas After the FTC Rule?"
| Dimension | Score | Notes |
|---|---|---|
| Intent Match | 19/20 | Directly answers the question in first paragraph |
| Information Gain | 23/25 | Contains specific FTC rule analysis + Texas case law examples |
| Entity Density | 17/20 | Author attributed with credentials, organization linked, schema complete |
| Citation Quality | 14/15 | FTC primary source, Texas court citations, Bar Association reference |
| Extractability | 18/20 | Clear structure, FAQ section, summary box |
| TOTAL | 91/100 | ✓ Exceeds threshold — approved for publication |
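The arithmetic behind this scorecard is a straight sum checked against the 85 threshold; using the numbers above:

```python
# Per-dimension scores from the scorecard above.
scores = {
    "intent_match": 19,      # /20
    "information_gain": 23,  # /25
    "entity_density": 17,    # /20
    "citation_quality": 14,  # /15
    "extractability": 18,    # /20
}

total = sum(scores.values())
assert total == 91
print(f"{total}/100:", "approved" if total >= 85 else "flagged for revision")
```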
What made this score high:
- The question is answered directly in the first paragraph (Intent Match 19/20)
- Specific FTC rule analysis plus Texas case law examples (Information Gain 23/25)
- Credentialed author, linked organization, complete schema (Entity Density 17/20)
- Primary sources: the FTC rule, Texas court citations, a Bar Association reference (Citation Quality 14/15)
- Clear structure with an FAQ section and summary box (Extractability 18/20)
Common reasons content scores below 85:
- The specific question is addressed only tangentially, or the answer is buried (low Intent Match)
- Generic text that rephrases commonly available information (low Information Gain)
- Anonymous or "admin" attribution with no schema or credential verification (low Entity Density)
- Claims made with minimal or self-referential citations (low Citation Quality)
- Key points buried in walls of unstructured text (low Extractability)
When we reject content, we explain exactly which dimensions failed and why. Revision targets the specific gaps.
The Integrity Gate isn't just a scoring system — it's a quality culture.
For clients: You can trust that nothing publishes under your name that doesn't meet professional standards. No AI slop. No hallucinated claims. No generic content.
For the production system: Every agent in our Multi-Model Swarm knows the standard they're working toward. The scoring criteria shape how content is created, not just how it's evaluated.
For AI visibility: Content that passes the Integrity Gate is designed for citation, not just publication. The five dimensions map directly to what AI evaluates when deciding what to cite.

The AI Integrity Standard is the scoring methodology. The Content Quality Standard is the public-facing explanation of our quality commitment.
Every client site displays a Content Quality Standard badge linking to our methodology. This signals to both humans and AI that the content has been verified against a rigorous standard — not just generated and published.
The badge serves as:
- A trust signal for human readers
- A verification marker for AI systems assessing the source
- A public link to the methodology every piece is scored against
Why these five dimensions?
They map to what AI evaluates when deciding whether to cite content. Intent Match = relevance. Information Gain = value-add. Entity Density = trustworthiness. Citation Quality = verifiability. Extractability = accessibility. Together they cover the complete citation decision.
Is scoring automated or human?
Initial scoring is automated. Human review validates and adjusts. Final decisions are made by humans, not machines. This hybrid approach combines scale with judgment.
Do clients see the scores?
Yes. Monthly reports include aggregate scores and trends. We flag any content that scored below 90 so you understand where quality is strong versus adequate.
What happens if content scores below 85?
It doesn't publish. Below-threshold content either gets revised to meet the standard or is rejected entirely. We never publish content that fails the Integrity Gate.
How is this different from traditional editorial review?
Traditional review asks "Is this good enough?" The Integrity Gate asks "Will AI cite this?" Different questions, different criteria. Good writing isn't automatically AI-ready. AI-ready content isn't automatically good writing. We optimize for both.
Does a high score guarantee AI citations?
No. Scoring measures readiness for citation, not guaranteed citation. A 95-scoring article on a low-volume topic may get fewer citations than an 86-scoring article on a high-demand topic. But score determines citation-worthiness when the topic comes up.
Can clients adjust the threshold?
The 85 threshold is non-negotiable: it's what defines quality work. However, we can prioritize resources toward topics where you want to push toward 90+ scores.
Volume is easy. Quality is hard. Citation-worthy quality is rare.
Most agencies use AI to produce more content faster. They measure success by output volume. The result is a flood of generic content that AI absorbs but never cites.
We measure success by citation-worthiness. Every piece is scored. Every piece must pass. Every piece is designed to be the answer AI gives, not just another page it crawls.
The 85+ threshold isn't arbitrary. It's the line between "published" and "cited."
Book Your 109-Point Diagnostic →
Related Methodology
The Client Codex — Your verified knowledge base that the Integrity Gate checks against
Multi-Model Philosophy — How our AI agents are trained to meet the standard
The Golden Thread — Entity connections that boost Entity Density scores
Content Quality Standard — The public-facing quality commitment