# wcag-scan — Product Overview

## wcag-scan

**Automated WCAG 2.1/2.2 Accessibility Auditing Platform**

Last updated: 23 March 2026

***

### Table of Contents

1. What is WCAG?
2. Why wcag-scan?
3. Market Context
4. Product Overview
5. Detection Architecture
6. Scoring Methodology
7. Plan Structure
8. Competitive Position
9. Product Roadmap

***

### 1. What is WCAG?

#### The Standard

The **Web Content Accessibility Guidelines (WCAG)** are a set of technical standards published by the World Wide Web Consortium (W3C) that define how to make web content accessible to people with disabilities. **WCAG 2.2** (October 2023) is the current W3C Recommendation and extends **WCAG 2.1**, whose Level AA remains the baseline referenced by most regulation.

The guidelines are built around four principles — known as **POUR**:

| Principle          | Meaning                                                                   | Example requirements                                                       |
| ------------------ | ------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| **Perceivable**    | Information must be presentable in ways users can perceive                | Alt text for images, captions for video, sufficient colour contrast        |
| **Operable**       | Interface components must be operable by all users                        | Full keyboard navigation, no seizure-inducing content, enough time to read |
| **Understandable** | Information and operation must be understandable                          | Consistent navigation, clear error messages, readable language             |
| **Robust**         | Content must be robust enough to be interpreted by assistive technologies | Valid HTML, ARIA roles, compatible with current and future user agents     |

Each principle contains **guidelines**, and each guideline contains measurable **success criteria** at three conformance levels:

| Level   | Meaning                                                          | Who requires it                                      |
| ------- | ---------------------------------------------------------------- | ---------------------------------------------------- |
| **A**   | Minimum accessibility — failure here causes outright exclusion   | Baseline for all regulations                         |
| **AA**  | Standard accessibility — the legal and procurement floor         | EAA, ADA, EN 301 549, AODA, Equality Act             |
| **AAA** | Enhanced accessibility — best practice, not universally required | Target for public services and high-priority content |

#### Who is Affected

Approximately **1.3 billion people worldwide** — 16% of the global population — live with some form of disability (WHO, 2023). Web accessibility barriers affect:

* **Blind and low-vision users** relying on screen readers (NVDA, JAWS, VoiceOver)
* **Deaf and hard-of-hearing users** who need captions and transcripts
* **Motor-impaired users** who navigate entirely by keyboard or switch device
* **Cognitive and learning disability users** who depend on consistent layout and clear language
* **Older adults**, who disproportionately experience combinations of the above

This is not a niche audience. In the US alone, people with disabilities control an estimated **$490 billion in disposable income** (American Institutes for Research).

#### Consequences of Non-Compliance

**Legal exposure** is the primary driver of urgency:

* The **European Accessibility Act (EAA)**, in force since June 2025, mandates WCAG 2.1 AA for all private-sector digital products sold in the EU. Non-compliance exposes businesses to administrative enforcement, market access restrictions, and civil liability across all 27 member states.
* **ADA Title III** in the United States has no safe harbour for websites. Federal courts have consistently held that commercial websites are "places of public accommodation." Over 4,000 ADA website lawsuits were filed in 2024 alone, with settlements typically ranging from $25,000 to $150,000.
* The **Equality Act 2010** (UK), **AODA** (Canada), and equivalent national laws carry similar risk.

**Remediation cost** compounds over time. Accessibility issues found during design cost roughly **1x** to fix. The same issue found during development costs **6x**. Found after deployment: **100x** (IBM Systems Sciences Institute). Automated scanning is the only cost-effective way to catch issues continuously.

**Reputational damage** is the less-quantified but real third risk. Accessibility failures are public — screen reader users report issues on social media, and accessibility audits are increasingly required in procurement processes.

***

### 2. Why wcag-scan?

Most accessibility tools are either too narrow (single engine, one source of truth) or too expensive for SMBs and development teams (enterprise-only pricing, complex onboarding). wcag-scan is built to close both gaps.

#### Core Differentiators

**1. Multi-engine corroboration — fewer false positives, broader coverage**

wcag-scan runs four independent detection sources in a single scan and cross-references their findings. Violations confirmed by multiple engines receive a confidence boost; violations unique to a single heuristic engine are down-weighted. No other tool in the SMB/mid-market segment does this.

**2. Real keyboard simulation — catches what static analysis cannot**

wcag-scan physically simulates Tab and Escape key presses to detect focus traps, hidden-but-focusable elements, and modal dismiss failures. These map to WCAG 2.1.2 (No Keyboard Trap) and 2.4.3 (Focus Order), both Level A criteria that static DOM analysis cannot evaluate.

**3. Deterministic, AI-independent scoring**

The numeric score and conformance level are computed algorithmically from raw violation data. The AI writes fix descriptions but never controls the verdict. Two identical pages always receive identical scores — a prerequisite for regression tracking and compliance documentation.

**4. Third-party exemption — you're only penalised for what you control**

Cloudflare challenge pages, cookie consent overlays, live chat widgets, and CDN error pages are identified and excluded from the pass/fail determination. Sites are not penalised for third-party infrastructure they cannot modify.

**5. GitHub Agent — automated fix pull requests**

The Team plan includes wcag-agent: an AI agent that reads your scan results and your codebase, then opens pull requests with element-level code fixes. No other tool in this price segment automates the remediation step.

**6. Time to value — under 30 seconds, no sign-in**

A full 4-engine scan completes in under 30 seconds with no account required. The free tier delivers a complete scored report with actionable fixes immediately.

***

### 3. Market Context

#### Regulatory Drivers

| Regulation                              | Jurisdiction                | Scope                                                | Status                                      |
| --------------------------------------- | --------------------------- | ---------------------------------------------------- | ------------------------------------------- |
| European Accessibility Act (EAA)        | EU (27 member states)       | All private-sector digital products and services     | **In force since 28 June 2025**             |
| ADA Title III                           | United States               | All public-facing commercial websites                | Active; litigation volume increasing yearly |
| EN 301 549 v3.2.1                       | EU public sector            | Government and publicly-funded websites              | In force since 2021                         |
| AODA                                    | Canada (Ontario)            | Organisations with 50+ employees                     | Phase-in through 2025                       |
| Equality Act 2010                       | United Kingdom              | All businesses offering goods/services to the public | In force                                    |
| Barrierefreiheitsstärkungsgesetz (BFSG) | Germany (EAA transposition) | All B2C digital products                             | In force since 28 June 2025                 |

#### Market Data

| Metric                                          | Value                      | Source                                   |
| ----------------------------------------------- | -------------------------- | ---------------------------------------- |
| Websites failing basic WCAG 2.1 AA              | 95.9% of the top 1,000,000 | WebAIM Million Report, February 2025     |
| US ADA website lawsuits filed (2024)            | 4,061                      | UsableNet Mid-Year ADA Litigation Report |
| Average ADA lawsuit settlement range            | $25,000 – $150,000         | Seyfarth Shaw ADA Title III Summary      |
| Global web accessibility software market (2024) | $590 M                     | Grand View Research                      |
| Projected market (2030)                         | $2.0 B (CAGR 22.4%)        | Grand View Research                      |

#### Why This Matters Now

1. **Compliance is no longer optional.** The EAA creates a legal floor. Every company selling into the EU must comply.
2. **Automated tooling catches 30–50% of WCAG issues** (W3C WAI estimate). The rest requires manual review — but automated scanning is the essential first step and the only scalable approach for continuous monitoring.
3. **Developer tooling is the wedge.** The current market leaders (Deque, Siteimprove) charge $10,000–$50,000+/year for enterprise seats, leaving a wide gap for SMB and mid-market.

***

### 4. Product Overview

#### What a Scan Produces

| Output                                        | Free        | Pro                         | Team                        |
| --------------------------------------------- | ----------- | --------------------------- | --------------------------- |
| Compliance score (0–100, deterministic)       | ✓           | ✓                           | ✓                           |
| WCAG conformance level (AAA / AA / A / Fail)  | ✓           | ✓                           | ✓                           |
| Prioritised violation list with code snippets | ✓ (capped)  | ✓                           | ✓                           |
| Developer-actionable fix for each violation   | ✓           | ✓ (includes corrected code) | ✓ (includes corrected code) |
| Effort estimate per issue                     | —           | ✓ (low / medium / high)     | ✓                           |
| Affected user groups per issue                | —           | ✓                           | ✓                           |
| PDF report                                    | Watermarked | Clean                       | Clean                       |
| CSV export                                    | —           | ✓                           | ✓                           |
| Summary and quick wins                        | ✓           | ✓                           | ✓                           |
| Manual check recommendations                  | ✓           | ✓                           | ✓                           |

#### Customer Segments

| Segment                      | Use case                                             | Plan                       |
| ---------------------------- | ---------------------------------------------------- | -------------------------- |
| Individual developer         | One-off audits, personal projects                    | Free                       |
| Freelance developer / agency | Client deliverable, audit report PDF                 | Pro                        |
| In-house engineering team    | Continuous monitoring, regression tracking           | Pro / Team                 |
| Legal / compliance team      | Documentation for EAA / ADA compliance evidence      | Team                       |
| Enterprise buyer             | Multi-domain monitoring, CI/CD integration (roadmap) | Team + upcoming Enterprise |

***

### 5. Detection Architecture

#### Principle: Multi-Source Corroboration

A single testing engine always has blind spots. axe-core, the industry standard, covers approximately 57 rules for WCAG 2.1 A/AA. It deliberately skips heuristic checks (vague alt text, placeholder-as-label patterns, autocomplete attributes). Static DOM analysis cannot evaluate the computed accessibility tree that assistive technology actually reads. And no static tool simulates keyboard interaction.

wcag-scan addresses this by running **four independent detection sources** in a single headless browser session:

1. **axe-core** — the industry-standard WCAG rule engine (~57 rules, wcag2a/wcag2aa/best-practice tags)
2. **wcag-engine (proprietary)** — 27 gap-filling rules across DOM heuristics, accessibility tree analysis, and keyboard simulation
3. **IBM Equal Access (ACE)** — IBM's accessibility checker (~163 rules)
4. **W3C Nu HTML Checker** — W3C's official structural HTML validator

All four run in a single browser session. Results are cross-referenced: violations confirmed by multiple engines are boosted in confidence; single-engine heuristic findings are down-weighted.

#### Proprietary Rule Inventory (wcag-engine)

**Layer 1 — DOM Heuristic Rules (21 rules)**

| #  | Rule ID                | WCAG Criterion | Level | What it detects                                                                     |
| -- | ---------------------- | -------------- | ----- | ----------------------------------------------------------------------------------- |
| 1  | `alt-text`             | 1.1.1          | A     | Generic or filename-based alt text                                                  |
| 2  | `form-labels`          | 1.3.1          | A     | Placeholder-as-label anti-pattern                                                   |
| 3  | `aria`                 | 4.1.2          | A     | Invalid WAI-ARIA 1.2 roles; aria-hidden on focusable elements (self or descendants) |
| 4  | `heading-hierarchy`    | 2.4.6          | AA    | Zero headings; missing h1; skipped heading levels                                   |
| 5  | `skip-links`           | 2.4.1          | A     | Missing or broken skip links                                                        |
| 6  | `landmark-regions`     | 1.3.6          | AA    | Missing main/nav/footer; duplicate landmarks without unique labels                  |
| 7  | `link-purpose`         | 2.4.4          | A     | Vague link text ("click here", "read more")                                         |
| 8  | `error-identification` | 3.3.1          | A     | Forms without visible error identification                                          |
| 9  | `target-size`          | 2.5.8          | AA    | Tap targets below 24×24 px (WCAG 2.2 — axe has not implemented this)                |
| 10 | `input-type`           | 1.3.5          | AA    | Missing autocomplete attributes on personal data fields                             |
| 11 | `fieldset-legend`      | 1.3.1          | A     | Radio/checkbox groups without a fieldset/legend or group label                      |
| 12 | `tabindex-positive`    | 2.4.3          | A     | Elements with tabindex > 0 disrupting natural focus order                           |
| 13 | `label-in-name`        | 2.5.3          | A     | Accessible name that replaces (rather than contains) the visible text               |
| 14 | `reduced-motion`       | 2.3.3          | AAA   | Animations without prefers-reduced-motion support                                   |
| 15 | `new-window`           | 3.2.5          | AAA   | Links opening a new tab/window without warning                                      |
| 16 | `pdf-links`            | 2.4.4          | A     | Links to PDFs or documents without a file-type label                                |
| 17 | `placeholder-contrast` | 1.4.3          | AA    | Placeholder text with insufficient contrast                                         |
| 18 | `focus-contrast`       | 2.4.13         | AAA   | Focus indicator with contrast ratio < 3:1 (WCAG 2.2 Focus Appearance)               |
| 19 | `carousel`             | 2.2.2          | A     | Auto-advancing carousels without a pause mechanism                                  |
| 20 | `text-spacing`         | 1.4.12         | AA    | Content clipped when WCAG text-spacing overrides are applied                        |
| 21 | `reflow`               | 1.4.10         | AA    | Horizontal scroll at 320px viewport (fails 400% zoom requirement)                   |
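To illustrate how a Layer 1 heuristic operates, here is a minimal sketch of a `link-purpose`-style check (WCAG 2.4.4). The function name, phrase list, and violation shape are illustrative assumptions, not wcag-scan's shipped rule:

```typescript
// Hypothetical sketch of a link-purpose heuristic (WCAG 2.4.4).
// The phrase list and types are illustrative, not the actual rule.
const VAGUE_PHRASES = new Set([
  "click here", "read more", "learn more", "more", "here", "link",
]);

interface LinkViolation {
  text: string;
  reason: string;
}

function checkLinkPurpose(linkTexts: string[]): LinkViolation[] {
  const violations: LinkViolation[] = [];
  for (const raw of linkTexts) {
    const text = raw.trim().toLowerCase();
    if (text.length === 0) {
      // An empty accessible name gives screen reader users no context.
      violations.push({ text: raw, reason: "empty link text" });
    } else if (VAGUE_PHRASES.has(text)) {
      violations.push({ text: raw, reason: "vague link text" });
    }
  }
  return violations;
}
```

In practice a rule like this runs against the rendered DOM, not a string list, but the heuristic itself is a plain text-matching pass.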

**Layer 2 — Accessibility Tree Rules (3 rules)**

Uses the browser's computed accessibility tree — the same data structure screen readers consume.

| #  | Rule ID                  | WCAG Criterion | Level | What it detects                                            |
| -- | ------------------------ | -------------- | ----- | ---------------------------------------------------------- |
| 22 | `tree-interactive-names` | 4.1.2          | A     | Interactive elements with no computed accessible name      |
| 23 | `tree-landmark-coverage` | 1.3.6          | AA    | Missing main landmark (handles React/Next.js deep nesting) |
| 24 | `tree-live-regions`      | 4.1.3          | AA    | Dynamic UI with no aria-live region for announcements      |
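A tree-layer rule like `tree-interactive-names` can be sketched as a walk over the computed accessibility tree. The `AXNode` shape and role list below are illustrative assumptions about the snapshot format, not the actual engine internals:

```typescript
// Hypothetical sketch of tree-interactive-names (WCAG 4.1.2): walk a
// computed accessibility tree and flag interactive nodes whose
// accessible name is empty. The AXNode shape is an assumption.
interface AXNode {
  role: string;
  name: string;
  children?: AXNode[];
}

const INTERACTIVE_ROLES = new Set([
  "button", "link", "textbox", "checkbox", "combobox", "menuitem",
]);

function findUnnamedInteractive(node: AXNode, out: AXNode[] = []): AXNode[] {
  if (INTERACTIVE_ROLES.has(node.role) && node.name.trim() === "") {
    out.push(node);
  }
  for (const child of node.children ?? []) {
    findUnnamedInteractive(child, out);
  }
  return out;
}
```

Because the check reads the computed tree rather than the raw DOM, it sees the same names a screen reader would announce, including names produced by `aria-labelledby` resolution.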

**Layer 3 — Keyboard Simulation Rules (3 rules)**

Uses real Tab/Escape key simulation. No other commercial automated tool performs this.

| #  | Rule ID                     | WCAG Criterion | Level | What it detects                                |
| -- | --------------------------- | -------------- | ----- | ---------------------------------------------- |
| 25 | `keyboard-hidden-focusable` | 2.4.3          | A     | Elements in the tab order that are not visible |
| 26 | `keyboard-focus-trap`       | 2.1.2          | A     | Modal dialogs that fail to trap keyboard focus |
| 27 | `keyboard-escape-modal`     | 2.1.2          | A     | Modal dialogs that don't close on Escape key   |
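The analysis step after simulation can be sketched as pure functions over recorded focus stops. This assumes the browser session has already pressed Tab repeatedly and captured each focused element's selector and visibility; the `FocusStop` shape and the trap heuristic thresholds are illustrative:

```typescript
// Hypothetical post-processing of recorded Tab stops. The FocusStop
// shape and thresholds are assumptions, not the shipped engine.
interface FocusStop {
  selector: string;
  visible: boolean; // computed from bounding box + CSS at capture time
}

// keyboard-hidden-focusable (WCAG 2.4.3): elements that receive focus
// but are not visible to sighted keyboard users.
function findHiddenFocusable(stops: FocusStop[]): string[] {
  return stops.filter((s) => !s.visible).map((s) => s.selector);
}

// keyboard-focus-trap heuristic (WCAG 2.1.2): many Tab presses cycling
// among a small set of elements while the page has more focusable
// elements suggests focus is trapped.
function hasFocusTrap(stops: FocusStop[], pageFocusableCount: number): boolean {
  const unique = new Set(stops.map((s) => s.selector));
  return stops.length > unique.size * 2 && unique.size < pageFocusableCount;
}
```

Static analysis cannot produce this data at all: only driving a real browser reveals where focus actually lands.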

#### Deduplication

When multiple engines detect the same issue, the findings are merged into a single unified violation: the higher-severity classification and the more detailed fix text are retained, and confidence is boosted when independent sources agree.
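The merge can be sketched as keying findings by WCAG criterion plus target selector. The key format, boost amount, and `Finding` shape below are illustrative assumptions, not the production logic:

```typescript
// Hypothetical sketch of cross-engine deduplication. Findings are
// keyed by criterion + selector; duplicates corroborate each other.
interface Finding {
  engine: string;
  criterion: string; // e.g. "1.1.1"
  selector: string;
  confidence: number; // 0..1
  fix: string;
}

interface MergedViolation extends Finding {
  engines: string[];
}

function mergeFindings(findings: Finding[]): MergedViolation[] {
  const byKey = new Map<string, MergedViolation>();
  for (const f of findings) {
    const key = `${f.criterion}|${f.selector}`;
    const existing = byKey.get(key);
    if (!existing) {
      byKey.set(key, { ...f, engines: [f.engine] });
    } else {
      existing.engines.push(f.engine);
      // Corroboration boost: agreement raises confidence, capped at 1.
      existing.confidence = Math.min(1, existing.confidence + 0.2);
      // Keep the longer (more detailed) fix text.
      if (f.fix.length > existing.fix.length) existing.fix = f.fix;
    }
  }
  return [...byKey.values()];
}
```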

***

### 6. Scoring Methodology

wcag-scan produces two compliance outputs per scan: a **numeric score** (0–100) and a **conformance level** (AAA / AA / A / Fail). Both are computed deterministically. The AI does not influence the score or level.

#### Numeric Score

The score starts at 100 and subtracts penalties per violation, weighted by severity:

| Severity | Base penalty |
| -------- | ------------ |
| Critical | −20          |
| Serious  | −10          |
| Moderate | −5           |
| Minor    | −2           |

Penalties from the proprietary engine are scaled by confidence. Double-penalising the same WCAG criterion across engines is prevented.
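The computation above can be sketched directly. The penalty values come from the table; treating confidence as a linear multiplier and flooring at zero are illustrative assumptions about the exact formula:

```typescript
// Hypothetical sketch of the deterministic score. Penalties match the
// table above; the confidence scaling is an assumed linear multiplier.
type Severity = "critical" | "serious" | "moderate" | "minor";

const PENALTY: Record<Severity, number> = {
  critical: 20,
  serious: 10,
  moderate: 5,
  minor: 2,
};

interface ScoredViolation {
  severity: Severity;
  confidence: number; // 1.0 for rule-engine findings; <1 for heuristics
}

function computeScore(violations: ScoredViolation[]): number {
  let score = 100;
  for (const v of violations) {
    score -= PENALTY[v.severity] * v.confidence;
  }
  return Math.max(0, Math.round(score));
}
```

Because no random or model-driven input enters the formula, two identical scans always yield the same score.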

#### Conformance Level

| Verdict  | Condition                                                      |
| -------- | -------------------------------------------------------------- |
| **Fail** | Any Level A criterion is violated                              |
| **A**    | Level A passes; one or more Level AA criteria violated         |
| **AA**   | Both A and AA pass — meets EAA / ADA / EN 301 549 requirements |
| **AAA**  | Zero violations detected across all engines                    |
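The verdict table translates into a short decision function. Representing the input as the list of violated conformance levels is an illustrative assumption:

```typescript
// Hypothetical sketch of the conformance verdict from the table above.
type Level = "A" | "AA" | "AAA";
type Verdict = "Fail" | "A" | "AA" | "AAA";

function conformance(violatedLevels: Level[]): Verdict {
  if (violatedLevels.includes("A")) return "Fail";   // any Level A failure
  if (violatedLevels.includes("AA")) return "A";     // A passes, AA violated
  if (violatedLevels.length > 0) return "AA";        // only AAA criteria violated
  return "AAA";                                      // zero violations
}
```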

#### Third-Party Exemption

Issues from infrastructure the site owner cannot control — Cloudflare, cookie banners, live chat widgets, CDN error pages — are identified and excluded from the pass/fail determination.
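Conceptually this is a filter over violations before the pass/fail step. The pattern list and selector-matching approach below are illustrative assumptions; the real classifier is more involved:

```typescript
// Hypothetical sketch of the third-party exemption filter. The
// pattern list is illustrative, not the production classifier.
const THIRD_PARTY_PATTERNS: RegExp[] = [
  /cf-challenge|cloudflare/i,          // Cloudflare challenge pages
  /cookie(-|_)?(banner|consent)/i,     // cookie consent overlays
  /intercom|crisp|livechat/i,          // live chat widgets
];

function isThirdParty(selector: string): boolean {
  return THIRD_PARTY_PATTERNS.some((p) => p.test(selector));
}

// Exempted violations are excluded from the pass/fail determination.
function exemptThirdParty<T extends { selector: string }>(violations: T[]): T[] {
  return violations.filter((v) => !isThirdParty(v.selector));
}
```

Exempted issues can still be reported for transparency; they simply do not count against the score or conformance level.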

***

### 7. Plan Structure

#### Tier Comparison

| Capability             | Free        | Pro ($12/mo) | Team ($49/mo)       |
| ---------------------- | ----------- | ------------ | ------------------- |
| Sign-in required       | No          | Yes          | Yes                 |
| Scan type              | Single page | Single page  | Single + multi-page |
| Tracked domains        | —           | 3            | 10                  |
| Team seats             | —           | 1            | 5                   |
| Violations analysed    | Capped      | Up to 35     | Up to 35            |
| Corrected code fixes   | —           | ✓            | ✓                   |
| Effort estimates       | —           | ✓            | ✓                   |
| PDF report             | Watermarked | Clean        | Clean               |
| CSV export             | —           | ✓            | ✓                   |
| Scan history           | —           | ✓            | ✓                   |
| Auto-scan monitoring   | —           | ✓            | ✓                   |
| Email digests          | —           | ✓            | ✓                   |
| Multi-page crawl       | —           | —            | Up to 10 pages/run  |
| wcag-agent (GitHub PR) | —           | 1 run/month  | 5 runs/month        |

***

### 8. Competitive Position

|                                 | wcag-scan             | axe DevTools (Deque)    | Siteimprove     | WAVE (WebAIM)   | Lighthouse         |
| ------------------------------- | --------------------- | ----------------------- | --------------- | --------------- | ------------------ |
| **Detection sources**           | 4 independent engines | 1 (axe)                 | 1 (proprietary) | 1 (proprietary) | Partial axe subset |
| **Real keyboard simulation**    | ✓                     | ✗                       | ✗               | ✗               | ✗                  |
| **Accessibility tree analysis** | ✓                     | ✗                       | Unknown         | ✗               | ✗                  |
| **AI-generated code fixes**     | ✓                     | Partial (Deque AI)      | ✗               | ✗               | ✗                  |
| **Automated fix PRs**           | ✓ (Team)              | ✗                       | ✗               | ✗               | ✗                  |
| **Deterministic scoring**       | ✓                     | ✓                       | ✓               | N/A             | ✓                  |
| **Continuous monitoring**       | ✓                     | ✓                       | ✓               | ✗               | ✗                  |
| **Multi-page crawl**            | ✓                     | ✓                       | ✓               | ✗               | ✗                  |
| **Free tier (no sign-in)**      | ✓                     | Limited                 | ✗               | ✓               | ✓ (CLI)            |
| **Pricing segment**             | SMB / mid-market      | Mid-market / enterprise | Enterprise      | Free            | Free               |

***

### 9. Product Roadmap

#### Shipped

| Feature                                                    | Plan availability |
| ---------------------------------------------------------- | ----------------- |
| 4-engine WCAG scan (axe + wcag-engine + IBM ACE + W3C)     | All               |
| 27 proprietary gap-filling rules across 3 detection layers | All               |
| Deterministic scoring with confidence weighting            | All               |
| Third-party issue exemption                                | All               |
| PDF report (watermarked free / clean paid)                 | All               |
| CSV export                                                 | Pro, Team         |
| AI-powered fix descriptions and corrected code             | Pro, Team         |
| Scan history and regression tracking                       | Pro, Team         |
| Automated weekly monitoring with score-change alerts       | Pro, Team         |
| Weekly / monthly email digest                              | Pro, Team         |
| Multi-page crawl with sitemap-aware route discovery        | Team              |
| GitHub Agent — automated fix pull requests                 | Team              |
| Team seats with invitation workflow                        | Team              |

#### Planned — Near-Term (next 6 months)

| Feature                                                             | Impact                                        |
| ------------------------------------------------------------------- | --------------------------------------------- |
| **CI/CD integration** (GitHub Action + CLI binary)                  | Shift-left testing in PRs                     |
| **Remediation tracking** — mark issues as fixed, re-verify          | Closes the remediation loop                   |
| **White-label PDF** — custom branding and logo                      | Agency and consultancy revenue                |
| **Slack / Teams / webhook alerts**                                  | Enterprise notification workflows             |
| **Screenshot annotations** — violations overlaid on page screenshot | Visual context for non-technical stakeholders |

#### Planned — Mid-Term (6–12 months)

| Feature                                              | Impact                                      |
| ---------------------------------------------------- | ------------------------------------------- |
| **REST API access** (with webhooks)                  | Enables integrations and reseller workflows |
| **WCAG 2.2 full coverage**                           | Complete coverage of the latest standard    |
| **Component-level scanning** (Storybook integration) | Catch issues before deployment              |
| **Scan diff** — changes between consecutive scans    | Regression analysis and progress tracking   |

#### Planned — Long-Term (12–18 months)

| Feature                                                 | Impact                                         |
| ------------------------------------------------------- | ---------------------------------------------- |
| **Enterprise SSO** (SAML 2.0, SCIM provisioning)        | Enterprise procurement requirement             |
| **Assistive technology simulation** (NVDA / VoiceOver)  | Real screen reader output testing              |
| **Legal audit trail** — timestamped, signed PDF reports | Litigation defence and EAA compliance evidence |
| **Managed remediation service**                         | Professional services layer                    |

***

*Market data points include attributed sources. Roadmap features reflect current planning and are subject to change.*

