Content QA Audit Agent

    68 checks across 7 dimensions: Technical SEO, Linking, E-E-A-T, Anti-Slop, AI Extraction, Schema, and AI Access. Scores your page and fixes what fails.

    Free & open · 68 checks · 7 dimensions · Install in 30 seconds

    What This Agent Does

    Scores and fixes any content page across 7 dimensions: technical SEO fundamentals, internal linking structure, E-E-A-T authority signals, AI-generated content detection (anti-slop), content structure for AI extraction, entity and schema depth, and AI platform accessibility.

    Each of the 68 checks uses a PASS/PARTIAL/FAIL rubric with clear criteria, producing both an overall normalized score (0-100) and an AEO sub-score for AI citation readiness. The agent includes glossary-specific scoring adjustments and common failure pattern analysis by dimension.

    What You Get

    • A. Technical SEO (10 checks) — H1, keywords, meta tags, schema types, heading hierarchy, alt text
    • B. Internal Linking & Structure (10 checks) — link count, hub links, word count, paragraph length, structured elements
    • C. E-E-A-T & Authority (10 checks) — experience markers, named frameworks, specific data, honest limitations, contrarian takes
    • D. Anti-Slop Detection (10 checks) — banned phrases, filler transitions, vague claims, adjective stacking, fabricated data
    • E. Content Structure for AI Extraction (10 checks) — TLDR blocks, definition formats, numbered frameworks, comparison tables
    • F. Entity & Schema Depth (10 checks) — entity statements, Organization/Author/Article schema, FAQ schema accuracy
    • G. AI Platform Accessibility (8 checks) — SSG rendering, AI crawler access, content freshness, visible dates

    Install

    Copy the rule below and save it as .claude/rules/qa-audit.md in your project root. Claude Code discovers rules in that directory automatically.

    .claude/rules/qa-audit.md
    # Content QA Audit Agent Rules
    
    When asked to run a QA audit, score the page across 7 dimensions (68 checks, 136 raw points, normalized to 100). Every check uses PASS (2 pts), PARTIAL (1 pt), or FAIL (0 pts).
    
    ## How to Run an Audit
    
    1. Read the target page file completely
    2. Identify the page type (homepage, service, landing page, vertical, blog, glossary, listicle)
    3. Score all 68 checks across dimensions A-G
    4. Calculate raw score (max 136), then normalize: `(raw / 136) * 100` -- see the sketch after this list
    5. Calculate AEO sub-score (E+F+G raw, max 56)
    6. Output a scorecard with pass/fail per check, line references for failures, and both scores
    7. Flag the 3 highest-impact fixes
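    
    A minimal sketch of the scoring math in steps 4-5, in TypeScript. The result map keyed by check ID is an illustrative assumption, not a prescribed implementation:
    
    ```
    type Result = "PASS" | "PARTIAL" | "FAIL";
    const POINTS: Record<Result, number> = { PASS: 2, PARTIAL: 1, FAIL: 0 };
    
    // `results` is keyed by check ID, e.g. { A1: "PASS", B3: "PARTIAL", ... }
    function score(results: Record<string, Result>) {
      const raw = Object.values(results).reduce((sum, r) => sum + POINTS[r], 0);
      const normalized = Math.round((raw / 136) * 100);
    
      // AEO sub-score: dimensions E, F, G only (10 + 10 + 8 checks, max 56 raw)
      const aeoRaw = Object.entries(results)
        .filter(([id]) => /^[EFG]/.test(id))
        .reduce((sum, [, r]) => sum + POINTS[r], 0);
    
      return { raw, normalized, aeoRaw, aeoPct: Math.round((aeoRaw / 56) * 100) };
    }
    ```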
    
    ---
    
    ## Dimension A: Technical SEO (0-20 points, 10 checks)
    
    Verifies on-page SEO fundamentals are correctly implemented.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | A1: Single H1 with primary keyword | Exactly one `<h1>`, contains the primary keyword for the page | H1 present, keyword included naturally | H1 present but keyword missing or awkwardly forced | Multiple H1s or no H1 |
    | A2: Primary keyword in first 100 words | Count words from start of body content | Keyword appears within first 100 words | Keyword appears within first 200 words | Keyword absent from opening section |
    | A3: Meta title <=60 chars with keyword | Check the page's metadata export | Title <=60 chars, primary keyword present, consistent branding format | Title has keyword but exceeds 60 chars | No meta title or keyword missing |
    | A4: Meta description <=155 chars with keyword + CTA | Check the page's metadata export | Description <=155 chars, keyword present, includes value prop or CTA | Description has keyword but exceeds 155 chars or lacks CTA | No meta description or keyword missing |
    | A5: Self-referencing canonical URL | Check metadata or `<link rel="canonical">` | Canonical matches the page's own absolute URL | Canonical present but URL mismatch | No canonical |
    | A6: OpenGraph + Twitter card | Check metadata for openGraph and twitter fields | Both OG and Twitter meta present with title, description, image | OG present but Twitter missing or vice versa | Neither present |
    | A7: Correct schema types | Match page type against required schemas (e.g., Service + BreadcrumbList + FAQ for service pages) | All required schema types present for the page type | Some schema types present, 1 missing | No schema or wrong types |
    | A8: No schema placeholders | Search schema content for `PLACEHOLDER`, `TODO`, `TBD`, `{{` | Zero placeholders found | -- | Any placeholder found |
    | A9: Valid heading hierarchy | Check H1 > H2 > H3, no skipped levels | Clean hierarchy, no level skips | One level skip (H1 to H3) | Multiple skipped levels or broken hierarchy |
    | A10: All images have alt text | Check every `<img>` or image component for `alt` attribute | All images have descriptive alt text (not filenames) | Some images have alt, some don't | No alt text on images, or alt text is just filenames |
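    
    To make A1 and A3 concrete, a rough regex sketch over raw HTML -- a heuristic only; a real audit would parse the rendered DOM or the framework's metadata export instead:
    
    ```
    // A1: exactly one <h1>, containing the primary keyword
    function checkA1(html: string, keyword: string): "PASS" | "PARTIAL" | "FAIL" {
      const h1s = [...html.matchAll(/<h1[^>]*>([\s\S]*?)<\/h1>/gi)].map((m) => m[1]);
      if (h1s.length !== 1) return "FAIL"; // multiple H1s or no H1
      return h1s[0].toLowerCase().includes(keyword.toLowerCase()) ? "PASS" : "PARTIAL";
    }
    
    // A3: meta title <= 60 chars with the primary keyword present
    function checkA3(title: string, keyword: string): "PASS" | "PARTIAL" | "FAIL" {
      if (!title || !title.toLowerCase().includes(keyword.toLowerCase())) return "FAIL";
      return title.length <= 60 ? "PASS" : "PARTIAL";
    }
    ```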
    
    ---
    
    ## Dimension B: Internal Linking & Structure (0-20 points, 10 checks)
    
    Verifies link architecture, content structure, and formatting compliance.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | B1: Minimum internal links | Count internal links to other pages on the site. Landing pages: 5+, blog/other: 3+ | Meets or exceeds minimum for page type | 1-2 links below minimum | 3+ links below minimum |
    | B2: Mandatory hub links present | Must link to assigned hub/pillar pages per your internal linking map | All mandatory hub links present | Missing 1 hub link | Missing 2+ hub links |
    | B3: Hub link in first 300 words | Count words from body start, find first hub link | Hub link appears within first 300 words | Hub link appears within first 500 words | No hub link in opening section |
    | B4: Anchor text varies | Check that links to the same target use different anchor text | All anchor text varies | Some repetitive anchor text | Same anchor text for multiple links to same target |
    | B5: Cross-links to related pages | For similar pages (verticals, services): links to 2+ sibling pages | Cross-links to related pages present | Only 1 cross-link | No cross-links to related pages |
    | B6: No broken internal links | All href values resolve to existing pages | All internal links verified | 1 potentially broken link | 2+ broken links |
    | B7: CTA links to contact/conversion page | At least one CTA button/link pointing to your conversion page | CTA present, links correctly | CTA present but links elsewhere | No CTA found |
    | B8: Word count meets minimum | Count body text words. Landing: 2,000+; Glossary: 500+; Blog: varies by topic | Meets minimum for page type | Within 10% of minimum | Below minimum by >10% |
    | B9: Paragraphs <=4 sentences | Scan all paragraphs for sentence count | All paragraphs <=4 sentences | 1-2 paragraphs exceed limit | 3+ paragraphs exceed limit |
    | B10: Structured elements per 1,000 words | Count tables, lists, grids, cards per 1,000 words | At least 1 structured element per 1,000 words | Structured elements present but not every 1,000 words | No structured elements (all prose) |
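    
    A sketch of B1's link count, assuming internal links are root-relative paths or share the site's origin (links built from JSX components would need a different extraction):
    
    ```
    // B1: count internal links; landing pages need 5+, blog/other pages 3+
    const MIN_LINKS: Record<string, number> = { landing: 5, blog: 3, other: 3 };
    
    function countInternalLinks(html: string, origin: string): number {
      const hrefs = [...html.matchAll(/href="([^"]*)"/gi)].map((m) => m[1]);
      return hrefs.filter(
        (h) => (h.startsWith("/") && !h.startsWith("//")) || h.startsWith(origin),
      ).length;
    }
    ```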
    
    ---
    
    ## Dimension C: E-E-A-T & Authority Signals (0-20 points, 10 checks)
    
    Verifies the content demonstrates real practitioner experience and domain authority.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | C1: Experience markers | Content contains first-person plural ("we") + action verbs showing direct practice: "we audit", "we build", "we've tested", "when we work with" | 3+ experience markers found | 1-2 experience markers | Zero practice statements (reads like a textbook) |
    | C2: Named methodology/framework | References a specific methodology unique to your organization | Names at least 1 specific framework with detail | References a framework vaguely | No named methodology -- only generic advice |
    | C3: Specific data from verified sources | Uses real numbers: keyword volumes, competitor stats, market figures from your fact registry | 3+ specific verified data points | 1-2 data points | No specific data -- all claims are vague |
    | C4: Honest limitations | Contains boundaries: "we're not a full-service...", "this won't work if...", "no guaranteed results" | Clear limitation stated | Implied limitation but not explicit | No honesty markers -- reads as if it can solve everything |
    | C5: Specificity over generality | Concrete numbers/examples instead of vague claims. Flag: "many companies", "significant growth", "various industries" | Content is specific throughout -- names, numbers, examples | Mix of specific and vague | Dominated by vague claims with no concrete examples |
    | C6: Contrarian/opinionated take | At least 1 clear stance that distinguishes from generic advice | Clear contrarian position with supporting argument | Opinion stated but generic (not differentiated) | No opinions -- could be written by anyone |
    | C7: FAQ answers self-contained | Each FAQ answer works as a standalone response without needing the question | All FAQ answers are complete, self-contained statements | Some FAQ answers are self-contained, some aren't | FAQ answers like "Yes, it does" or require question for context |
    | C8: "Not a fit" honesty | Page states who this is NOT for, or acknowledges scope limits | Explicit "not a fit" statement | Implicit limit | No acknowledgment of limitations |
    | C9: No unresolved placeholders | Zero `[PLACEHOLDER]`, `[NEEDS VERIFICATION]`, `[CASE STUDY]`, `[TODO]`, `TBD` | Zero placeholders found | -- | Any placeholder found (must fix before ship) |
    | C10: Fair competitor references | Competitors mentioned factually, strengths before gaps, no disparagement | Competitors referenced fairly | Competitors mentioned but tone is borderline | Competitors disparaged or misrepresented |
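    
    A crude scan for C1's experience markers -- the verb list is an illustrative subset of the patterns named in the table, not exhaustive:
    
    ```
    // C1: count first-person-plural practice statements ("we audit", "we've tested", ...)
    const MARKERS = /\bwe(?:'ve)?\s+(?:audit|build|built|test|tested|work(?:ed)?\s+with|run|ran)\b/gi;
    
    function countExperienceMarkers(text: string): number {
      return (text.match(MARKERS) ?? []).length; // 3+ PASS, 1-2 PARTIAL, 0 FAIL
    }
    ```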
    
    ---
    
    ## Dimension D: Anti-Slop Detection (0-20 points, 10 checks)
    
    Flags AI-generated filler that undermines domain authority. These patterns make content read as "AI slop" rather than expert writing.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | D1: Zero banned phrases | Search for all banned phrases: "leverage", "best-in-class", "synergy/synergize", "cutting-edge/state-of-the-art", "solutions" (standalone), "unlock/unleash", "game-changer", "disruptive/disrupt", "thought leadership" (self-referential), "touch base/circle back", "low-hanging fruit", "move the needle", "at the end of the day", "in today's digital landscape", "one-stop shop", "guaranteed results" | Zero banned phrases found | 1-2 banned phrases | 3+ banned phrases |
    | D2: Consistent voice (no solo "I/my") | Organization content must use "we" consistently. Exception: glossary/reference pages (third person) | "We" used consistently throughout | 1-2 instances of "I" | Frequent "I/my" usage |
    | D3: No filler transitions | Search for: "Let's dive in", "Without further ado", "In this section we'll explore", "Let's get started", "Now let's talk about", "Moving on to" | Zero filler transitions | 1 filler transition | 2+ filler transitions |
    | D4: No vague authority claims | Search for: "studies show", "research indicates", "experts agree", "it's widely known", "it's well-established" without specific citation | Zero vague authority claims (or all claims have specific sources) | 1 vague claim | 2+ vague authority claims without sources |
    | D5: No adjective stacking | Search for 3+ consecutive adjectives before a noun: "comprehensive, innovative, cutting-edge solution" | No adjective stacking found | 1 instance | 2+ instances of adjective stacking |
    | D6: No generic benefit statements | Flag: "save time and money", "grow your business", "take your X to the next level", "streamline your operations", "maximize your potential", "drive meaningful results" | Zero generic benefit statements | 1-2 generic benefits | 3+ generic benefits -- reads like template content |
    | D7: Section openers are specific | First sentence of each H2 section states a specific claim, data point, or question -- not generic setup | All section openers are specific | Mix of specific and generic openers | Most sections open with generic setup ("In this section...") |
    | D8: Prose rhythm varies | Content structure varies between sections -- alternates prose, lists, tables, cards, blockquotes | Clear structural variety across sections | Some variety but predominantly one format | Every section follows identical structure |
    | D9: No fabricated data | Zero invented statistics, case studies, testimonials, client names, or URLs. All data from your verified fact registry | All data verified against your data source | 1 unverified data point (flagged) | Fabricated statistics or case studies |
    | D10: Practitioner knowledge test | Content contains at least 3 elements requiring domain-specific knowledge that a generic LLM wouldn't produce: specific tool workflows, industry regulatory details, named competitor analysis | 3+ practitioner-specific elements | 1-2 practitioner elements | Content is entirely generic -- ChatGPT could produce identical output |
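    
    A sketch of the D1 scan. The list mirrors the table above; context-dependent entries (standalone "solutions", self-referential "thought leadership") are omitted because they need human judgment:
    
    ```
    const BANNED = [
      "leverage", "best-in-class", "synergy", "synergize", "cutting-edge",
      "state-of-the-art", "unlock", "unleash", "game-changer", "disruptive",
      "touch base", "circle back", "low-hanging fruit", "move the needle",
      "at the end of the day", "in today's digital landscape", "one-stop shop",
      "guaranteed results",
    ];
    
    function scanBannedPhrases(text: string): Record<string, number> {
      const lower = text.toLowerCase();
      const hits: Record<string, number> = {};
      for (const phrase of BANNED) {
        const count = lower.split(phrase).length - 1; // naive substring count
        if (count > 0) hits[phrase] = count;
      }
      return hits; // 0 total PASS, 1-2 PARTIAL, 3+ FAIL
    }
    ```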
    
    ---
    
    ## Dimension E: Content Structure for AI Extraction (0-20 points, 10 checks)
    
    Can AI systems extract clean, citable passages from this page? These checks verify content is shaped for passage extraction -- the mechanism by which ChatGPT, Perplexity, Claude, and Google AI Overviews select what to cite.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | E1: TLDR / quotable answer in first 300 words | First 300 words contain a clear, direct answer to the page's core search query. Pages 2,000+ words should include a visually distinct Quick Answer or TLDR block (under 60 words). | Clear TLDR/Quick Answer block present AND extractable answer an LLM would cite | Clear extractable answer present but no visually distinct TLDR block | First 300 words are all context/setup with no direct answer |
    | E2: Direct-answer first sentence per H2 section | Every H2 section opens with a 40-60 word self-contained answer. Test: cover the heading and read only the first sentence -- does it state a specific claim? | All section openers work standalone -- each states a specific claim extractable without the heading | Most work standalone, 1-2 need heading context | Section openers require heading to make sense, use "As mentioned above...", or start with generic transitions |
    | E3: Definition blocks use "X is Y" format | When defining a concept, the page leads with a direct declaration: "Schema markup is structured data vocabulary..." not "When it comes to structured data..." | At least 1 definition block per page uses "X is Y" format in the first sentence, extractable standalone | Definition exists but buried in prose or prefaced with context-setting | No clear "X is Y" definition blocks on the page |
    | E4: Numbered frameworks with bold step labels | Multi-step processes use numbered lists with bold step names and one-sentence definitions. AI systems cite numbered frameworks as complete blocks. | At least 1 numbered framework (3-7 steps) with bold labels and one-sentence definitions per step | Numbered list present but steps lack bold labels or definitions are missing | No numbered/labeled frameworks -- processes described only in prose |
    | E5: Comparison table or decision framework | Page contains at least one comparison table, scoring rubric, decision matrix, or criteria list with 3+ criteria. | Comparison table or decision framework with 3+ rows and 2+ comparison columns present | List that implies comparison but not structured as table/matrix | No structured comparison or decision framework |
    | E6: Section headings are descriptive, not clever | H2/H3 headings describe their content plainly. "How Schema Improves AI Citations" works. "The Schema Secret" doesn't. | All H2/H3 headings are descriptive -- reader/AI can predict section content from heading alone | Most headings descriptive, 1-2 are vague or clever | Multiple headings that are clever, vague, or fail to describe their content |
    | E7: No vague opening paragraph | Page's first paragraph leads with substance. No "In today's landscape..." -- lead with a specific claim, definition, or data point. | First paragraph leads with a specific claim, definition, or data point within first 2 sentences | First paragraph has some substance but opens with 1 sentence of context-setting | First paragraph is entirely context-setting with no specific claim in first 3 sentences |
    | E8: Lists use semantic HTML elements | Uses actual `<ul>`/`<ol>` elements for list content, not dashes or asterisks formatted as prose. | All list content uses semantic HTML list elements | Most lists are semantic but 1-2 instances of prose-embedded pseudo-lists | Significant list content formatted as prose dashes rather than HTML lists |
    | E9: Key data points in structured format | Important statistics live in tables, callout cards, or list items -- not buried mid-paragraph. | Key data points are in tables or visually distinct elements | Some data in structured format, some buried in paragraphs | All data points buried in prose with no structured presentation |
    | E10: FAQ answers self-contained | Each FAQ answer begins with a direct response in the first sentence. No "It depends" or "Great question!" -- lead with the answer. | All FAQ answers are complete standalone statements; first sentence delivers the answer directly | Most answers work standalone but 1-2 begin with "Yes" or need the question for context | Answers like "Yes, it does" or "It depends" that require the question to make sense |
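    
    One rough heuristic for E3 -- check whether any paragraph opens with a direct "X is Y" declaration for the page's term (assumes `term` contains no regex metacharacters):
    
    ```
    // E3: does any paragraph lead with "X is ..." / "X are ..."?
    function hasDefinitionBlock(paragraphs: string[], term: string): boolean {
      const pattern = new RegExp(`^${term}\\s+(?:is|are)\\s+\\S`, "i");
      return paragraphs.some((p) => pattern.test(p.trim()));
    }
    ```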
    
    ---
    
    ## Dimension F: Entity & Schema Depth (0-20 points, 10 checks)
    
    Does structured data correctly define the entity and provide the machine-readable layer AI systems need? Goes deeper than A7 ("correct schema types") into field-level correctness and entity clarity.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | F1: Entity statement in first 300 words | Page contains a clear "X is Y" entity-defining statement about the organization or page subject within the first 300 words. | Clear entity statement present: "[Brand/subject] is [definitive description]" within first 300 words | Entity implied but no explicit statement, or statement appears after 300 words | No entity statement -- an LLM cannot determine what the page's subject is |
    | F2: Organization schema with @id, name, url, sameAs | Homepage/service pages have Organization schema with a stable `@id` URI, official `name`, canonical `url`, and `sameAs` array linking to official profiles. | Schema includes `@id`, `name`, `url`, and `sameAs` array with 2+ profile links. N/A pages reference org via publisher `@id` (auto-PASS). | Schema present but missing `sameAs` or missing `@id` | No organization schema, or missing `name` or `url` |
    | F3: Author schema with Person type and sameAs | Blog/article pages include author with `@type: Person`, `name`, `jobTitle`, `image`, and `sameAs` linking to professional profiles. | Author Person schema with `name`, `jobTitle`, `image`, and `sameAs` array. N/A for non-article pages (auto-PASS). | Author in schema but missing `sameAs` or `jobTitle` | No author attribution in schema |
    | F4: Schema serviceType/knowsAbout matches page topic | Service schema `serviceType` and Organization `knowsAbout` match the topics the page actually covers. | `serviceType` or `knowsAbout` specifically matches the page's actual topic. N/A for blog/glossary (auto-PASS). | Schema present but uses overly broad types | Generic service type that doesn't match the page's specific offering |
    | F5: Consistent entity naming | Same company/product name used in schema `name`, H1, meta title, body copy, and OG tags. | Entity name consistent across schema, meta, headings, and body | Mostly consistent but 1-2 inconsistent references | 3+ different entity name forms used with no consistency |
    | F6: JSON-LD format for all structured data | All structured data uses JSON-LD in `<script type="application/ld+json">` tags, not Microdata or RDFa. | All schema blocks are JSON-LD | Mix of JSON-LD and one Microdata/RDFa element | No JSON-LD; schema in Microdata/RDFa only, or no schema |
    | F7: Article schema has required fields | Blog/article pages include Article schema with `headline`, `datePublished`, `dateModified`, `author`, `publisher`, `mainEntityOfPage`, and `image`. | All required fields present with correct values. N/A for non-article pages (auto-PASS). | Schema present but missing 1 important field | Missing 2+ required fields |
    | F8: FAQ schema answers match visible content | FAQ schema answer text is word-for-word identical to the visible FAQ answer on the page. | Every FAQ schema answer matches visible content exactly (minor whitespace differences OK) | 1-2 minor differences between schema and visible text | Significant mismatch between schema answers and displayed FAQ answers |
    | F9: dateModified reflects actual last update | `dateModified` in Article schema reflects the real last content update, not just the original publication date. | `dateModified` is accurate. N/A for non-article pages (auto-PASS). | `dateModified` present but stale (identical to `datePublished` despite updates) | `dateModified` missing or clearly wrong |
    | F10: All schema URLs absolute + mainEntityOfPage present | Every `url`, `@id`, `mainEntityOfPage` in schema uses full absolute URLs, never relative paths. `mainEntityOfPage` points to canonical URL. | All URLs absolute. `mainEntityOfPage` present and matches canonical. | Most URLs absolute but 1 relative path, or `mainEntityOfPage` missing | Multiple relative URLs in schema |
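    
    To ground F2 (and the G5 validity check), a sketch that pulls JSON-LD blocks out of raw HTML and validates the Organization fields. The extraction regex is a simplification -- real script tags may carry extra attributes:
    
    ```
    // Parse all JSON-LD blocks; a JSON.parse throw here is a G5 FAIL
    function extractJsonLd(html: string): any[] {
      const blocks = [...html.matchAll(
        /<script type="application\/ld\+json">([\s\S]*?)<\/script>/gi,
      )];
      return blocks.map((m) => JSON.parse(m[1]));
    }
    
    // F2: Organization schema needs @id, name, url, and a sameAs array with 2+ links
    function checkF2(schema: any): "PASS" | "PARTIAL" | "FAIL" {
      if (schema?.["@type"] !== "Organization" || !schema.name || !schema.url) return "FAIL";
      if (!schema["@id"] || !Array.isArray(schema.sameAs) || schema.sameAs.length < 2)
        return "PARTIAL";
      return "PASS";
    }
    ```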
    
    ---
    
    ## Dimension G: AI Platform Accessibility (0-16 points, 8 checks)
    
    Can AI crawlers technically access, process, and index the content? These infrastructure-level checks determine whether AI systems can even *see* the content.
    
    | Check | What to Verify | PASS (2) | PARTIAL (1) | FAIL (0) |
    |---|---|---|---|---|
    | G1: Content renders in initial HTML (SSG) | Core page content exists in the initial HTML response -- not injected via client-side JavaScript. AI crawlers do not execute JS as reliably as Googlebot. | Page uses static generation -- all content in initial HTML. Client components for interactivity only. | Most content SSG but one section requires client-side JS to render | Core page content requires JavaScript execution to render |
    | G2: robots.txt allows AI crawlers | Your robots.txt allows GPTBot, ClaudeBot, PerplexityBot, and ChatGPT-User. No `Disallow` rules targeting AI crawlers. | Allows all AI crawlers. No AI crawler blocked. | Uses wildcard allow but doesn't explicitly name AI crawlers | Any AI crawler explicitly blocked via `Disallow` |
    | G3: No noai/noimageai meta tags | Page does not include `<meta name="robots" content="noai">` or `noimageai`. | No `noai` or `noimageai` directives found | -- (binary check) | `noai` or `noimageai` directive present on an indexable page |
    | G4: No critical content behind interactions | All substantive content is accessible without clicking, expanding, or accepting. AI crawlers cannot click accordions or dismiss cookie banners. | All content freely accessible in initial page load. Accordion content present in initial HTML. | Content accessible but a non-blocking overlay could confuse crawlers | Core content behind login, paywall, cookie wall, or requires JS interaction to reveal |
    | G5: Schema validates without structural errors | JSON-LD schema parses as valid JSON and uses correct `@type` values. No syntax errors or invalid property names. | All schema blocks parse as valid JSON with correct `@type` values | Schema mostly valid but one block has a minor issue | Schema contains JSON syntax errors, invalid `@type`, or malformed structure |
    | G6: Content freshness -- dateModified within 12 months | `dateModified` or visible update signals indicate content freshness within 12 months. AI systems deprioritize stale content. | Content shows update within last 12 months | Content between 12-18 months old | Content older than 18 months with no update signal |
    | G7: max-image-preview not restricted | `robots` metadata does not restrict `max-image-preview` to `none` or `standard`. AI platforms use image previews for context. | `max-image-preview: large` set or not restricted | `max-image-preview` set to `standard` | `max-image-preview: none` set |
    | G8: Visible "last updated" date on page | Pages display a visible publication or "Last updated" date. Schema `dateModified` alone is not sufficient. | Visible date displayed on page. N/A for glossary/utility pages (auto-PASS). | Date in schema but not visually displayed | No date signal visible or in schema |
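    
    A sketch of the G2 check: scan robots.txt for a blanket `Disallow: /` inside a user-agent group naming an AI crawler. Parsing is simplified -- full robots.txt semantics (wildcards, partial-path disallows, Allow overrides) are out of scope here:
    
    ```
    const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "ChatGPT-User"];
    
    // G2: return AI crawlers whose group contains "Disallow: /"
    function blockedAiCrawlers(robotsTxt: string): string[] {
      const blocked = new Set<string>();
      let agents: string[] = [];
      let inDirectives = false;
      for (const raw of robotsTxt.split("\n")) {
        const m = raw.trim().match(/^(user-agent|disallow)\s*:\s*(.*)$/i);
        if (!m) continue;
        const [, field, value] = m;
        if (field.toLowerCase() === "user-agent") {
          if (inDirectives) { agents = []; inDirectives = false; } // new group starts
          agents.push(value);
        } else {
          inDirectives = true;
          if (value === "/") {
            agents.filter((a) => AI_CRAWLERS.includes(a)).forEach((a) => blocked.add(a));
          }
        }
      }
      return [...blocked];
    }
    ```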
    
    ---
    
    ## Scoring
    
    **Raw:** 68 checks x 2 pts = 136 max
    **Normalized:** `(raw / 136) * 100` -- all thresholds use this
    
    ### Score Thresholds (Normalized)
    
    | Score | Verdict | Action |
    |---|---|---|
    | 90-100 | Ship it | Production ready -- strong across all 7 dimensions |
    | 75-89 | Minor fixes | Flag specific failed checks, fix before deploy |
    | 55-74 | Significant gaps | Fails E-E-A-T, anti-slop, OR AEO structure -- major revision needed |
    | Below 55 | Full rewrite | Generic content that hurts domain authority and is invisible to AI search |
    
    ### AEO Sub-Score (E+F+G, raw out of 56)
    
    | AEO Sub-Score | Verdict |
    |---|---|
    | 86-100% (48-56 raw) | Citation-ready |
    | 68-85% (38-47 raw) | Near-ready -- fix specific gaps |
    | 50-67% (28-37 raw) | Significant AEO gaps |
    | Below 50% (<28 raw) | Not citation-ready |
    
    ### Minimum Scores by Page Type (Normalized)
    
    | Page Type | Minimum | Rationale |
    |---|---|---|
    | Homepage | 85 | First impression -- entity clarity (F) especially critical |
    | Blog posts | 80 | Authority content -- content structure (E) and author schema (F3) are key AEO differentiators |
    | Landing pages (service/vertical) | 75 | High-intent pages -- entity schema depth (F) and extraction structure (E) impact AI citations |
    | Glossary/reference terms | 65 | Reference content -- C-dimension scored differently, many E/F/G checks auto-PASS |
    
    ---
    
    ## Glossary/Reference Page Scoring Adjustments
    
    Glossary pages use third person. Adjust these checks:
    
    **Dimension C:**
    - C1 (Experience): Score based on any "Connection" or application section, not the main definition body
    - C6 (Contrarian take): Not required -- glossary is authoritative reference, not opinion
    - C8 ("Not a fit"): Not required -- glossary pages don't sell
    
    **Dimension E:**
    - E3 (Definition blocks): Auto-PASS -- the entire glossary page is a definition block
    - E4 (Numbered frameworks): Auto-PASS -- glossary explains concepts, not processes
    - E5 (Comparison table): Auto-PASS -- not required for reference definitions
    
    **Dimension F:**
    - F1 (Entity statement): Score based on the term being defined ("X is Y" first sentence), not organization entity
    - F2-F4 (Org/Author/serviceType schema): Auto-PASS -- glossary uses DefinedTerm schema
    - F7 (Article required fields): Auto-PASS -- not article pages
    - F9 (dateModified): Auto-PASS -- not article pages
    
    **Dimension G:**
    - G8 (Visible date): Auto-PASS -- glossary is evergreen reference content
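    
    If the audit is scripted, the explicit auto-PASS adjustments can be expressed as a simple overlay on the result map from the scoring sketch above -- a sketch, with the IDs taken from the lists in this section:
    
    ```
    // Glossary pages: checks that score PASS automatically per the adjustments above
    const GLOSSARY_AUTO_PASS = ["E3", "E4", "E5", "F2", "F3", "F4", "F7", "F9", "G8"];
    
    function applyGlossaryAdjustments(results: Record<string, Result>): void {
      for (const id of GLOSSARY_AUTO_PASS) results[id] = "PASS";
    }
    ```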
    
    ---
    
    ## Common Failure Patterns by Dimension
    
    ### A -- Technical SEO
    Most common failure: **A7 (missing schema types)**. Pages often have Service schema but forget BreadcrumbList or FAQPage.
    
    ### B -- Internal Linking
    Most common failure: **B2 (missing hub links)**. New pages frequently miss linking to all pillar/hub pages.
    
    ### C -- E-E-A-T
    Most common failure: **C5 (specificity)**. AI-generated content defaults to vague claims like "many companies benefit" instead of concrete examples.
    
    ### D -- Anti-Slop
    Most common failure: **D1 (banned phrases)**. "Solutions", "leverage", and "cutting-edge" slip through most frequently. Also **D6 (generic benefits)**.
    
    ### E -- Content Structure for AI Extraction
    Most common failure: **E1 (burying the answer / missing TLDR)**. Pages spend 300+ words on setup before answering the core question. Add a Quick Answer block (under 60 words) in the first 300 words. Also **E3 (missing "X is Y" definitions)**.
    
    ### F -- Entity & Schema Depth
    Most common failure: **F9 (stale dateModified)**. Schema often hardcodes `dateModified = datePublished`. Also **F2 (incomplete Organization schema)** -- `sameAs` array missing.
    
    ### G -- AI Platform Accessibility
    Most common failure: **G8 (no visible date)**. Many service pages lack a visible "Last updated" date. Also **G4 (content behind interactions)** -- FAQ accordion content must be in initial HTML.
    
    ---
    
    ## Scorecard Output Format
    
    ```
    ## QA Audit -- [Page URL]
    **Page type:** [type] | **Raw score:** [n]/136 | **Normalized:** [n]/100 | **Verdict:** [Ship/Minor fixes/Significant gaps/Full rewrite]
    **AEO sub-score:** [n]/56 ([n]%) | **AEO verdict:** [Citation-ready/Near-ready/Significant AEO gaps/Not citation-ready]
    
    ### A. Technical SEO: [n]/20
    - A1: PASS/PARTIAL/FAIL [detail]
    - A2: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### B. Internal Linking & Structure: [n]/20
    - B1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### C. E-E-A-T & Authority: [n]/20
    - C1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### D. Anti-Slop Detection: [n]/20
    - D1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### E. Content Structure for AI Extraction: [n]/20
    - E1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### F. Entity & Schema Depth: [n]/20
    - F1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### G. AI Platform Accessibility: [n]/16
    - G1: PASS/PARTIAL/FAIL [detail]
    ...
    
    ### Top 3 Fixes (highest impact)
    1. [fix + check ID + file:line reference]
    2. [fix + check ID + file:line reference]
    3. [fix + check ID + file:line reference]
    
    ### AEO-Specific Recommendations
    [1-3 recommendations focused on E/F/G improvements with highest citation impact]
    ```
    
    ---
    
    ## Out of Scope
    
    This agent focuses on content QA scoring. The following are not covered:
    
    - Google Search Central compliance (covered by Google SEO Compliance Agent)
    - Copy persuasion quality and audience resonance (covered by Copywriter Audit Agent)
    - Fact and statistic verification workflows (covered by Fact Verification Agent)
    - Visual design and accessibility (covered by Accessibility Agent)
    - Performance and Core Web Vitals (covered by Performance Agent)
    - Content freshness and decay monitoring (covered by Content Refresh Agent)

    Usage

    Once installed, open your project in Claude Code and ask:

    Run a QA audit on /blog/my-latest-post and fix anything below a PASS

    Claude Code will follow the scoring rubric, check every dimension, and output a structured scorecard with pass/fail per check and prioritized fix recommendations.

    Need a Custom Agent?

    We build custom Claude Code agent rules tailored to your team's workflows, content standards, and tech stack.

    Get in touch