APCA Is Now in Chrome DevTools. It Matters More Than You Think.

Accessibility & Standards · 2026

The contrast algorithm baked into WCAG 2.x was calibrated for CRT monitors. A better one just landed directly in your browser inspector. Here is what changed, why the math is actually different, and what it means for how we build.

Inspector · Contrast · APCA Mode

| Sample                  | APCA Lc | Verdict    |
|-------------------------|---------|------------|
| White on near-black     | Lc 106  | PASS       |
| Amber on warm white     | Lc 52   | REVIEW     |
| Orange bold heading     | Lc 49   | WCAG: PASS |
| Light blue on navy      | Lc 38   | WCAG: PASS |
| Small gray caption text | Lc 43   | WCAG: PASS |

Lc values via APCA. WCAG column shows legacy verdict. Disagreements are the whole point.

Open Chrome. Open DevTools. Click any element with text. Look at the Styles pane. Click the color swatch. You will see a new line: a contrast score labeled APCA. It is not experimental anymore. It is right there, live, rendering a number that tells you something fundamentally different from the 4.5:1 ratio you have been using since 2008.

This is a big deal. Not because Google shipped a new UI. Because the algorithm behind it is genuinely better science. Understanding why it is better science will change how you specify colors for every interface you build.

What APCA Actually Stands For

APCA is the Accessible Perceptual Contrast Algorithm. It was developed by Andrew Somers at Myndex Research under the umbrella of a larger color science model called SACAM (S-Luv Accessible Color Appearance Model, formerly SAPC). It is the official candidate contrast method for WCAG 3.0, currently being developed as "Project Silver" by the W3C Accessibility Guidelines Working Group.

The acronym tells you what matters: Perceptual. Not mathematical. The entire premise is that human beings do not see contrast the way a calculator measures it, and that a formula calibrated against 1990s research on CRT monitors is something we can finally improve on, directly in our workflow.

Origin Context

WCAG 2.0 was finalized in 2008, built on research from the 1990s, calibrated for CRT monitors that barely exist anymore. APCA was built specifically for self-illuminated LCD and OLED displays, modern font rendering, and the actual spatial contrast sensitivity curve of the human visual system. These are not the same thing.

The WCAG 2.x Contrast Formula: What Changed

The contrast formula WCAG 2.x has used since 2008 works like this: convert each color to relative luminance using the sRGB transfer function, then divide the lighter value by the darker one, adding 0.05 to each as a fixed allowance for ambient screen flare (which also keeps the denominator from reaching zero). If that ratio is 4.5 or higher for body text and 3.0 or higher for large text, you pass.

WCAG 2.x Contrast Ratio
CR = (L1 + 0.05) / (L2 + 0.05) where L1 is the lighter luminance, L2 is the darker

Symmetric. Polarity-blind. Size-blind. Font-weight-blind. Calibrated for a display technology from 1995.
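The whole pipeline is easy to reproduce end to end. Here is a minimal Python sketch of the standard WCAG 2.x computation (sRGB linearization, relative luminance, ratio); the function names are my own:

```python
def srgb_to_linear(channel: int) -> float:
    """Apply the sRGB transfer function to one 0-255 channel."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r: int, g: int, b: int) -> float:
    """Relative luminance per the WCAG 2.x definition."""
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def wcag_contrast_ratio(fg: tuple, bg: tuple) -> float:
    """(L1 + 0.05) / (L2 + 0.05), lighter luminance over darker."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(wcag_contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, the maximum
print(wcag_contrast_ratio((118, 118, 118), (255, 255, 255)))  # ~4.54, #767676: the classic AA edge case
```

Note that swapping fg and bg returns the exact same number. That is the polarity blindness described above, expressed in three lines of code.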

The gaps in this formula are predictable, well-documented, and now solvable.

Dark color pairs. Near-black text on a dark background can score at 4.5:1 while being genuinely difficult to read. The formula overstates contrast for dark color pairs. APCA's lead developer calls this the "wrongtrast" problem. Dark mode design using APCA scoring gives you a far more accurate readout.

Orange and saturated colors. Orange text on white can mathematically reach 4.5:1 while appearing thin and uncomfortable to read in practice. APCA catches this because it accounts for how the luminance distribution interacts with the polarity of the text-background relationship.

Font weight is ignored entirely. A 300-weight 12px caption and a 700-weight 24px heading receive the exact same minimum contrast threshold. Thin strokes need more contrast headroom. Bold strokes can work with less. APCA accounts for this distinction directly.

86%

of websites fall short of WCAG 2.x contrast requirements. Myndex Research notes that a meaningful portion of those gaps are the result of the formula itself flagging genuinely readable designs as non-compliant. APCA scoring resolves this category of false positives.

APCA Research Documentation, Myndex / Inclusive Reading Technologies

What APCA Does Differently

APCA does not calculate a ratio. It calculates an Lc value: Lightness Contrast. The output is a signed score whose magnitude runs from 0 to roughly Lc 106 for dark text on light backgrounds (light-on-dark pairs score negative, down to about Lc -108), and it is perceptually uniform. That last part is what makes it valuable as engineering guidance rather than a compliance checkbox.

APCA Core Operation (Simplified)
Lc ~= (Ybg^0.56 - Ytxt^0.57) * 1.14 * 100 for dark text on light backgrounds; the exponents shift to 0.65 and 0.62 for light-on-dark, with a soft black-level clamp, a low-contrast offset, and a spatial frequency adjustment for font size and weight applied on top

Asymmetric. Polarity-aware (dark-on-light vs light-on-dark score differently). Typography-aware.

Perceptual uniformity means that doubling the Lc value corresponds to roughly doubling the perceived contrast, across the full range of available colors. WCAG 2.x has no such property. A ratio of 4.5:1 near gray represents a completely different perceptual experience than a ratio of 4.5:1 near black. APCA normalizes this so the number means the same thing everywhere on the color wheel.
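The base Lc computation can be sketched in a few lines. The sketch below uses the publicly documented APCA-W3 constants (the 0.0.98G series) as I understand them, and deliberately omits the font size/weight lookup stage; treat it as illustrative, not as a conformant implementation:

```python
def screen_y(r: int, g: int, b: int) -> float:
    """Estimated screen luminance using the APCA-W3 coefficients and simple 2.4 gamma."""
    return (0.2126729 * (r / 255) ** 2.4
            + 0.7151522 * (g / 255) ** 2.4
            + 0.0721750 * (b / 255) ** 2.4)

def apca_lc(text: tuple, background: tuple) -> float:
    """Simplified base Lc. Positive = dark-on-light, negative = light-on-dark."""
    ytx, ybg = screen_y(*text), screen_y(*background)
    # Soft clamp near black, modeling flare and real display black levels
    if ytx < 0.022:
        ytx += (0.022 - ytx) ** 1.414
    if ybg < 0.022:
        ybg += (0.022 - ybg) ** 1.414
    if abs(ybg - ytx) < 0.0005:
        return 0.0
    if ybg > ytx:  # normal polarity: dark text on light background
        sapc = (ybg ** 0.56 - ytx ** 0.57) * 1.14
        return 0.0 if sapc < 0.1 else (sapc - 0.027) * 100
    else:          # reverse polarity: light text on dark background
        sapc = (ybg ** 0.65 - ytx ** 0.62) * 1.14
        return 0.0 if sapc > -0.1 else (sapc + 0.027) * 100

print(round(apca_lc((0, 0, 0), (255, 255, 255)), 1))      # ~106.1, black on white
print(round(apca_lc((255, 255, 255), (0, 0, 0)), 1))      # ~-107.9, white on black
```

The sign carries the polarity, and the same two colors swapped do not merely flip sign: the magnitudes differ. That asymmetry is exactly what the WCAG 2.x ratio cannot express.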

The Lc Scale in Practice

Lc Threshold Reference (APCA-RC Bronze)
Lc 90
Preferred minimum for body text, small or thin typefaces, long-form reading. The gold standard.
Lc 75
Minimum for standard body text in most UI contexts. Adequate for normal-weight fonts at readable sizes.
Lc 60
Suitable for large or bold text, prominent headings, UI labels with strong visual weight. Not appropriate for running body copy.
Lc 45
Placeholder or very large decorative text only. Still readable for large bold display use.
Lc 15
Approximate point of invisibility for many users. Useful only for purely decorative elements with no informational content.

This is what makes the Chrome DevTools implementation useful in practice. Rather than a binary pass at a single threshold, APCA gives you a score you can interpret by context. Lc 62 on a 700-weight 28px heading is solid. Lc 62 on a 300-weight 14px paragraph is not. The score plus the context gives you real design guidance.
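That contextual reading can be encoded directly. The breakpoints below are a toy lookup loosely paraphrasing the Bronze thresholds above; they are my own simplification, not the official APCA font lookup table:

```python
def min_lc(px: float, weight: int) -> float:
    """Hypothetical minimum Lc by font size and weight (simplified, unofficial)."""
    if px >= 24 and weight >= 700:
        return 45  # large bold display text
    if px >= 24 or weight >= 700:
        return 60  # large or bold, but not both
    if px >= 16 and weight >= 400:
        return 75  # standard body text
    return 90      # small or thin text needs the most headroom

def verdict(lc: float, px: float, weight: int) -> str:
    """Interpret an Lc score in the context of the element's typography."""
    return "PASS" if abs(lc) >= min_lc(px, weight) else "REVIEW"

print(verdict(62, px=28, weight=700))  # PASS: Lc 62 is solid for a bold heading
print(verdict(62, px=14, weight=300))  # REVIEW: the same Lc 62 is not enough for thin small text
```

The same Lc 62 produces two different verdicts, which is the entire point of typography-aware scoring.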

Where WCAG 2.x and APCA Diverge

The disagreements between the two systems cluster around predictable scenarios where the old formula has always been a rough approximation.

| Scenario                                           | WCAG 2.x Verdict | APCA Verdict | Ground Truth                     |
|----------------------------------------------------|------------------|--------------|----------------------------------|
| Medium gray text on black (#767676 on #000000)     | PASS at ~4.6:1   | FAIL ~Lc 34  | Hard to read at body sizes       |
| White text on dark navy (#fff on #1e3a5f)          | PASS at 7.2:1    | PASS Lc 78   | Readable                         |
| Orange bold heading (#ff6600 on white)             | PASS at 3.05:1   | REVIEW Lc 49 | Uncomfortable, size-dependent    |
| Light blue on navy (#a8c8e8 on #1e3a5f)            | PASS at 4.7:1    | FAIL Lc 38   | Too low for body text            |
| Light gray caption text (#767676 on #f5f5f5)       | PASS at 4.5:1    | LOW Lc 43    | Insufficient for small thin text |
| Near-white on dark background (#e8e8e8 on #2a2a2a) | FAIL at 3.8:1    | PASS Lc 71   | Readable dark mode pair          |

That last row matters for anyone building dark mode interfaces. APCA was explicitly engineered to handle polarity correctly. Light-on-dark and dark-on-light are treated as distinct contrast scenarios. The result is far fewer false negatives on dark mode color pairs that are visually comfortable and readable.

How to Enable It in Chrome DevTools

The APCA contrast score is now visible directly in the inspector without enabling any experiment flag. Here is the exact path.

  1. Open DevTools. Press F12 or right-click any element and select Inspect, then go to the Elements panel.
  2. Inspect a text element. Click any text node in the DOM tree. In the Styles pane on the right, find the color property; you will see a small color swatch next to the value.
  3. Click the color swatch. The color picker opens. Below the hex or RGB inputs you will see a Contrast Ratio section. If APCA is available, the Lc value is displayed here alongside or replacing the old ratio.
  4. Read the score in context. The score reflects the specific element you are inspecting, with its computed background. Use the Lc scale above to interpret it based on the font size and weight of that element.
  5. Use the autocorrect feature. If the score is low, expand the contrast section to see AA and AAA targets. Chrome shows the nearest passing color via an eyedropper and a refresh icon that suggests the minimum-change compliant value.
For Experimental Full APCA Mode

To access the full APCA scoring interface including the lookup table and typography-aware recommendations, go to Settings, then Experiments in DevTools, and enable "Apply APCA algorithm." This replaces the WCAG ratio display entirely with Lc values and gives you font-size-aware scoring throughout the inspector.

The Real Coverage Numbers for Automated Accessibility Scanning

You have probably seen the claim that automated tools only catch around 30% of accessibility issues. It gets repeated constantly. It is not wrong, but it is not the complete picture either, and using the wrong framing leads to misallocated testing resources.

Here is the accurate breakdown based on current research.

Deque axe-core (best-in-class automated)
57%
Of total accessibility issues by volume, based on 13,000+ pages and 300,000 issues. This is not the percentage of WCAG criteria covered. It is the percentage of actual real-world bugs caught. High-frequency issues like missing alt text and contrast errors dominate raw issue counts.
WAVE / IBM Equal Access
30-40%
Solid automated scanners with different rule sets. The 30% figure many people quote typically comes from measuring the percentage of individual WCAG Success Criteria that can be fully tested by automation, not the volume of issues found.
Lighthouse (Google)
~30%
Strong integration into CI and CD pipelines. Runs on the rendered DOM. Good for catching the most frequent structural issues. Treat it as a reliable baseline layer that captures the high-frequency issues quickly.
Fully automated across all tools combined
13-57%
The honest range. 13% is the share of WCAG criteria flagged with high accuracy and near-zero false positives. 57% is the upper bound when measuring by issue volume with a best-in-class tool on a wide representative dataset. The gap between those numbers is real and important to understand.
Source note: Deque Systems "Automated Accessibility Coverage Report" (2021, 13,000+ pages). Accessible.org analysis of WCAG criteria coverage (2025). GovWebworks M-Enabling Summit summary (2026). Numbers represent ranges across methodologies. WCAG criteria coverage and issue-volume coverage are different measurements.

What the "30% coverage" figure is actually measuring is how many WCAG 2.x Success Criteria can be fully confirmed by automated static analysis alone. That number is roughly 13 to 15 of the 50 Level A and AA criteria in WCAG 2.1, or about 26 to 30%. The important part: those 13 to 15 criteria include the most common, highest-frequency issues. Missing alt attributes, contrast, missing form labels, invalid ARIA. So in terms of the actual bugs that show up most often across real sites, automated tools catch far more than 30% of the total defect volume.

The categories automated tools cannot reach are real and significant. Whether alt text is accurate and meaningful (not just present). Whether error messages are clear and actionable. Whether keyboard navigation follows a logical order. Whether screen reader announcements are correct for dynamic content. Whether interactive components behave correctly with actual assistive technologies at the OS layer. None of that is detectable by reading HTML. It requires a human.

What This Means for Your Testing Strategy

Automated scanning is your first line. It is fast, reproducible, and catches the high-frequency issues that dominate ADA litigation risk. Use axe or WAVE in every development pipeline. A clean automated scan means you have handled the easy stuff. The remaining 40% by issue volume requires human evaluation, keyboard testing, and in many cases actual screen reader users. That is the work that requires dedicated expertise.

Why APCA in the Inspector Is a Bigger Deal for Scanning

Contrast is one of the issues automated tools can theoretically catch. It is a mathematical check. WCAG 2.x's contrast math generated false positives and false negatives at a significant rate. Tools running it were flagging readable designs and approving designs that were difficult to read.

APCA in Chrome DevTools raises the accuracy of every contrast check you run during development. Not just for WCAG 3.0. For anyone trying to build things people can actually read. The score is more honest. It accounts for font size and weight. It handles dark mode correctly. It stops penalizing good dark-on-dark pairings the way the old formula did.

The practical result: fewer false positives during development, which means fewer "this looks fine to everyone but the scanner says it fails" conversations with clients. That alone is worth learning the new scale.

What to Do Right Now

WCAG 3.0 is not law yet. APCA is not a legal requirement yet. ADA enforcement still references WCAG 2.1 in most contexts. None of that means you should wait.

Designing to APCA now means designing to a more honest model of readability. When WCAG 3.0 lands in enforcement frameworks, your design system will already be ahead of it. More importantly, the things APCA requires you to get right (typography-aware contrast, proper dark mode handling, polarity-correct scoring) are exactly the things that make interfaces genuinely readable for people with low vision, cataracts, and processing differences. That has always been the goal. APCA is the math that finally matches it.

Enable APCA in Chrome DevTools today. Add it to your development standards. Build color palettes that score against Lc values. The browser already speaks the language.

ADA Accessibility Audits for Regulated Industries

We run full-stack accessibility audits combining automated scanning, manual keyboard and screen reader testing, and contrast analysis using both WCAG 2.x and APCA scoring. If your site is in healthcare, legal, or financial services, contrast is a known litigation risk. Let us look at the full picture.

Get an Audit

References

  1. Myndex Research / Andrew Somers, "Why APCA as a New Contrast Method?" — git.apcacontrast.com
  2. APCA in a Nutshell — git.apcacontrast.com/documentation/APCA_in_a_Nutshell
  3. W3C Silver / WCAG 3.0 Visual Contrast Subgroup — w3.org/WAI/GL/task-forces/silver/wiki
  4. Google web.dev, "Testing Web Design Color Contrast" (2022) — web.dev/articles/testing-web-design-color-contrast
  5. Deque Systems, "The Automated Accessibility Coverage Report" (2021) — deque.com/automated-accessibility-testing-coverage/
  6. Accessible.org, "Accessibility Scans Reliably Flag 13% of WCAG Criteria" (2025)
  7. GovWebworks, "Accessibility for April 2026 and Beyond" (February 2026)
  8. accessibilitychecker.org, "Understanding APCA" (December 2025)
  9. Chrome DevTools Team, APCA tracking issue — bugs.chromium.org/p/chromium/issues/detail?id=1121900