Cookie Banner Dark Patterns in 2026: How They Work, Why Regulators Are Cracking Down, and How to Build Symmetric Consent


I scanned ten popular consumer websites — household names in retail, travel, social, and news — for dark patterns in their consent flows. Every single one failed at least one criterion that the European Data Protection Board has been explicit about since 2023. Half of them shipped a reject or withdrawal path that visibly does not work: trackers fire, cookies persist, and the only thing that changes is that the banner disappears. None offered a withdrawal mechanism that satisfies GDPR Art. 7(3) — "It shall be as easy to withdraw consent as to give it."

This is not a story about bad designers. It is a story about engineering decisions. Every dark pattern in a cookie banner exists because someone wrote a Jira ticket, a CSS class, an event listener, or a tag-management rule that produced exactly the behaviour you see. The reject button is a <a> tag instead of a <button> because a stakeholder wanted to "reduce friction on opt-in." The accept button is filled and the reject button is outlined because the design system's "primary action" token won the argument. The settings page reopens but the cookies stay because nobody wired the revocation handler to a deletion routine. These are features, shipped on purpose.

This post walks through the four dark-pattern categories the EDPB recognises, shows what each one looks like in code, what regulators (CNIL especially, with €475M in cookie-related fines in September 2025 alone) are now treating as automatic violations, and what a symmetric, GDPR-defensible consent implementation actually requires from your engineering team. It is aimed at the people who actually build and ship these flows — frontend engineers, tag-management owners, product managers, and the privacy-engineering function that increasingly sits between them.

The Short Version#

What Counts as a Dark Pattern (Under GDPR)#

A dark pattern in a consent flow is a UI choice that systematically biases the user toward sharing more data than they would if presented with a neutral interface. The EDPB's Guidelines 03/2022 on Deceptive Design Patterns (version 2.0, adopted 14 February 2023) are the canonical European definition. The CNIL, the AEPD, the Garante, and most other DPAs cite them directly.

The legal hook is GDPR Art. 4(11), which defines consent as freely given, specific, informed, and unambiguous. A dark pattern undermines at least one of those four words. If the reject button is buried, consent is not freely given. If "manage preferences" hides what categories actually do, it is not informed. If the only available option is "Accept All," it is not specific. If the banner styling makes the user think they are dismissing an ad, it is not unambiguous.

Four GDPR provisions are the operative regulatory anchors:

Every dark pattern below violates at least one of these — usually two or three simultaneously.

The Four Categories — In Code#

I'll cover the four categories most useful for engineers, mapping each to the EDPB's underlying definitions: what the pattern is, the code that produces it, what I observed in the 10-site scan, and what the legal violation actually is.

1. Accept/Reject Asymmetry#

This is the most common and the most easily detected. The "accept" affirmation is rendered as a high-contrast filled button, prominently placed in the user's natural eye path. The "reject" option is a smaller, lower-contrast, often outlined or text-only element placed below or to the side. The CNIL fined Google €150M in 2022 specifically for this pattern in the YouTube and Google Search consent banners. The 2025 €325M follow-on fine cited continuing asymmetry as one of the violations.

The CSS that produces it is, almost always, a single class swap:

<!-- ASYMMETRIC — what most sites ship -->
<div class="cmp-banner">
  <button class="btn btn-primary">Accept All Cookies</button>
  <a href="#" class="link-secondary">Reject</a>
</div>

btn-primary is the design system's high-emphasis token: filled background, brand colour, larger padding, often an icon. link-secondary is a text link with no background, smaller font, often grey. The visual weight delta is 8:1 to 20:1 depending on the design system. The user's eye lands on the filled button. The mouse follows. The reject path technically exists, but users do not find it at anything like the same rate.
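To make that delta concrete, here is a toy scoring sketch. Everything in it (the style fields, the fill multiplier, the example values) is invented for illustration; it is not any regulator's formula, just the kind of heuristic an automated scanner could apply to computed styles.

```javascript
// Toy heuristic: score the visual weight of a consent button from a few
// computed-style properties, then compare the accept path to the reject path.
function visualWeight({ fontSizePx, paddingPx, filled, contrastRatio }) {
  const area = (fontSizePx + 2 * paddingPx) ** 2; // rough hit-target area
  const fillBoost = filled ? 4 : 1;               // a filled background dominates the eye
  return area * fillBoost * contrastRatio;
}

function asymmetryRatio(acceptStyle, rejectStyle) {
  return visualWeight(acceptStyle) / visualWeight(rejectStyle);
}

// A typical btn-primary vs link-secondary pairing (illustrative values):
const accept = { fontSizePx: 16, paddingPx: 12, filled: true,  contrastRatio: 7 };
const reject = { fontSizePx: 13, paddingPx: 0,  filled: false, contrastRatio: 3 };
asymmetryRatio(accept, reject); // far above anything a symmetric banner would score
```

A symmetric banner scores exactly 1 under any weighting, which is what makes this class of check scriptable regardless of which heuristic a scanner uses.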

In my scan corpus, this pattern was visible in five of the ten sites. The specific manifestations:

The fix is the smallest line of CSS in this entire post:

<!-- SYMMETRIC — both buttons identical -->
<div class="cmp-banner" role="dialog" aria-labelledby="cmp-title">
  <h2 id="cmp-title">Cookie consent</h2>
  <p>We use cookies for analytics and personalisation. You can change this anytime.</p>
  <div class="cmp-actions">
    <button class="btn btn-primary" data-action="reject">Reject all</button>
    <button class="btn btn-primary" data-action="accept">Accept all</button>
    <button class="btn btn-secondary" data-action="settings">Manage preferences</button>
  </div>
</div>

Both primary actions use the same class. Same height, same padding, same contrast. The settings button is the secondary, because granular configuration is a less common path. CNIL's December 2024 formal notice made this explicit: equal visibility and accessibility means equal colour, size, and placement. Anything else is, in wording CNIL uses and that DPAs in Belgium, Germany, Italy, and the Netherlands have since adopted, a violation of Article 82 of the French Data Protection Act in France and of Art. 7 GDPR everywhere.

2. Pre-Selection (Pre-Ticked Boxes)#

A toggle, checkbox, or radio button is set to the consent-granting state by default. The user must actively un-tick to refuse. The CJEU settled this conclusively in Planet49 (C-673/17, October 2019): pre-ticked boxes do not constitute valid consent under GDPR or the ePrivacy Directive. The case was about a German online lottery, but the holding applies to every consent UI on the web.

The code looks like:

<!-- INVALID — pre-ticked, fails Planet49 -->
<label>
  <input type="checkbox" name="analytics" checked>
  Allow analytics cookies
</label>
<label>
  <input type="checkbox" name="marketing" checked>
  Allow marketing cookies
</label>

It also appears in less obvious forms:

// Tag-management config — equally invalid
const consentDefaults = {
  analytics_storage: 'granted',     // wrong — must default to 'denied'
  ad_storage: 'granted',            // wrong
  ad_user_data: 'granted',          // wrong
  ad_personalization: 'granted',    // wrong
};

Google Consent Mode v2 requires an explicit default for each of its four consent signals (analytics_storage, ad_storage, ad_user_data, ad_personalization), and under GDPR that default must be 'denied' for anything not strictly necessary. A site that initialises the dataLayer with granted defaults and then narrows the scope after the user clicks something is not collecting valid consent — it is post-rationalising tracking that already happened.

The fix is structural:

// Default to denied, upgrade only on explicit consent
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  wait_for_update: 500,
});
 
// On the user clicking "Accept all":
gtag('consent', 'update', {
  analytics_storage: 'granted',
  ad_storage: 'granted',
  ad_user_data: 'granted',
  ad_personalization: 'granted',
});

In my scan corpus, pre-ticked checkboxes were uncommon — most sites have moved past them after a decade of enforcement. The modern equivalent is the silent default: the consent state is "granted" before the banner ever renders, the page already loaded the trackers, and the banner is functional theatre. I observed this on seven of ten sites: trackers fired before the consent SDK initialised. NOYB's 2021 audit found similar patterns, and remediation rates were notably high here — 68% of sites NOYB challenged removed pre-ticked boxes, the highest fix rate of any category.
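The silent default is itself scriptable. A sketch of the check, assuming you have a captured request log with timestamps and the moment the consent SDK initialised — the domain list and event shape here are illustrative, not taken from any real CMP or scanner:

```javascript
// List tracker requests that fired before any consent decision could exist.
// In practice the log would come from a browser-automation harness; here it
// is a plain array of { url, timestamp } records.
const TRACKER_DOMAINS = ['google-analytics.com', 'doubleclick.net', 'facebook.net'];

function preConsentTrackers(requests, consentInitTs) {
  return requests.filter(r =>
    r.timestamp < consentInitTs &&
    TRACKER_DOMAINS.some(d => r.url.includes(d))
  );
}

const log = [
  { url: 'https://example.com/app.js',                 timestamp: 100 },
  { url: 'https://www.google-analytics.com/g/collect', timestamp: 150 },
  { url: 'https://cmp.example.com/sdk.js',             timestamp: 200 },
];
preConsentTrackers(log, 200); // → the analytics hit at t=150, fired pre-consent
```

Any non-empty result means the banner is functional theatre: the consent decision arrives after the data has already left.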

3. Concealment (Hidden Reject, Buried Withdrawal)#

Concealment is the dark pattern that survives the asymmetry crackdown. Once regulators forced "Reject all" onto the first layer, the energy moved to making the withdrawal path inaccessible. The user accepts in two seconds. Withdrawing requires three clicks, a fresh page, a settings menu hidden in a footer, and — in the worst cases — a re-acceptance step where the default is, again, accept.

NOYB's data on this is unambiguous: only 18% of the sites they challenged added an easy withdrawal mechanism after complaint. This was the most-resisted change of all the categories they tracked. The reason is structural: withdrawal is the one consent operation where the controller has nothing to gain and everything to lose. There is no metric that improves when you make withdrawal one click instead of four.

What concealment looks like in markup:

<!-- The banner is in your face -->
<div class="cmp-banner" data-cmp="initial-layer">...</div>
 
<!-- The withdrawal link is in the footer, in 11px grey text, behind a generic label -->
<footer>
  <div class="legal-links">
    <a href="/privacy">Privacy</a>
    <a href="/terms">Terms</a>
    <a href="#" onclick="OneTrust.ToggleInfoDisplay()">Cookie settings</a>
  </div>
</footer>

OneTrust, Cookiebot, Didomi, and the in-house equivalents at most large platforms all expose a JavaScript handle that re-opens the consent UI. Whether that handle is wired to a discoverable button is a frontend choice. Most sites in my corpus chose not to wire it discoverably. Two sites had no withdrawal mechanism at all — the user could only opt back out by manually clearing browser cookies.

Concealment also takes the form of fragmented choice architecture. The first layer offers two clear options. "Manage preferences" leads to a second layer where the granular toggles are split across five tabs. The "Save" button is at the bottom of the last tab. Many users, having clicked through three tabs and seen 47 vendor toggles, click "Accept all" on the way out just to escape. The EDPB explicitly identifies this as a deceptive pattern under the "Overloading" category.
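Art. 7(3) symmetry reduces to a number you can test: interactions needed to grant consent versus interactions needed to withdraw it. The path encoding below is invented for this sketch; a real audit would drive the actual UI with a browser-automation tool rather than a hand-written list.

```javascript
// Compare the click depth of the grant path and the withdrawal path.
// "Symmetric" here is a working definition (within one click), not a
// threshold any regulator has formally published.
function clickSymmetry(grantPath, withdrawPath) {
  return {
    grantClicks: grantPath.length,
    withdrawClicks: withdrawPath.length,
    symmetric: withdrawPath.length <= grantPath.length + 1,
  };
}

// Typical observed flows (illustrative step names):
const grant = ['banner:accept-all'];
const withdraw = [
  'footer:cookie-settings',
  'tab:purposes',
  'toggle:all-off',
  'button:save',
];
clickSymmetry(grant, withdraw); // → { grantClicks: 1, withdrawClicks: 4, symmetric: false }
```

One click to accept, four to withdraw: that ratio, not the existence of a withdrawal path, is what Art. 7(3) is about.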

Defence is operational. The withdrawal link must be:

<!-- Discoverable, named, symmetric -->
<footer>
  <button type="button" class="btn-link"
          aria-label="Withdraw or change cookie consent"
          onclick="cmp.openWithdrawalDialog()">
    Withdraw cookie consent
  </button>
</footer>

4. Hierarchy Manipulation (Default Permissive, Decoy Choices)#

The fourth category is the most subtle. The interface presents choices, but the architecture of the choice itself is weighted. Defaults favour data sharing. Privacy-protective options are framed as data loss ("we won't be able to personalise your experience"). Decoy options crowd the user toward the controller's preferred outcome. The CMP renders three buttons: "Accept all," "Manage preferences," and "Continue without accepting" — but "Continue without accepting" is in 11px grey text against a beige background, while the other two are filled and prominent. Technically there is a reject path. Behaviourally there is one.

Hierarchy manipulation also lives in the language layer. The EDPB calls this out specifically: "Outside our control" (suggesting the controller has no choice but to deploy the trackers); "Required for the best experience" (implying degraded service for refusal); "Help us improve" (framing tracking as a moral act). The button itself may say "Yes, I support free content" instead of "Accept tracking cookies."

I observed this in two cases that were not in my main four categories:

The fix here is at the level of product design rather than CSS. The consent UI must be dismissible without consenting, the page underneath must remain functional during the choice, and the language must be neutral.
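The language layer is also lintable. A crude check, with the phrase list drawn from the EDPB examples above — matching is naive substring search, so this is a first-pass flag for human review, not a verdict:

```javascript
// Flag the manipulative framings the EDPB calls out in consent copy.
const FLAGGED_PHRASES = [
  'outside our control',
  'required for the best experience',
  'help us improve',
  'i support free content',
];

function lintConsentCopy(text) {
  const lower = text.toLowerCase();
  return FLAGGED_PHRASES.filter(p => lower.includes(p));
}

lintConsentCopy('Yes, I support free content'); // → ['i support free content']
lintConsentCopy('Accept tracking cookies');     // → []
```

A check like this belongs in the same CI job that diffs the CMP configuration: copy changes are shipped by content teams who have never read the EDPB guidelines.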

What "Working Reject" Actually Means in Code#

The most damaging finding from my scan is not asymmetry — it is that the reject or withdrawal path, when present, often does nothing. In half of the ten sites, either the initial reject set tracking cookies and fired tracker requests, or the withdrawal flow re-opened the consent UI but never deleted the cookies that were already set. The banner disappeared. The user believed they had refused or revoked. The data flowed.

This is the most legally exposed dark pattern of all because it constitutes both an Art. 7 violation (consent obtained was not valid) and an Art. 5(1)(a) violation (processing was not lawful, fair, or transparent). The CNIL fines of September 2025 against Google and Shein cited the disjoint between the reject UI and the actual tag-firing behaviour as a primary aggravating factor.

What working reject looks like, end-to-end:

// 1. The page boots with all non-essential trackers gated.
//    No GTM container loads, no analytics SDK initialises,
//    no fingerprinting calls run. Nothing. Consent state = 'denied'.
 
// 2. The user clicks "Reject all".
function onReject() {
  // 2a. Persist the decision so it survives reloads.
  cmp.storeConsent({
    state: 'rejected',
    timestamp: Date.now(),
    version: CONSENT_VERSION,
  });
 
  // 2b. Update Google Consent Mode (no upgrades sent).
  gtag('consent', 'update', {
    analytics_storage: 'denied',
    ad_storage: 'denied',
    ad_user_data: 'denied',
    ad_personalization: 'denied',
  });
 
  // 2c. Critically: actively delete any cookies that exist.
  //     This includes those a previous accept set, or that
  //     a misconfigured tracker set pre-consent.
  deleteCookies(NON_ESSENTIAL_COOKIES);
  clearLocalStorage(NON_ESSENTIAL_KEYS);
 
  // 2d. Propagate the rejection to the CMP layer so the stored
  //     TC String / IAB Multi-State Privacy String is updated.
  //     Note: the command name below is CMP-specific. The TCF API
  //     itself only standardises read-side commands ('ping',
  //     'getTCData', 'addEventListener'); check your CMP's docs
  //     for its write/update entry point.
  __tcfapi('setConsent', 2, { purposes: {}, vendors: {} });
 
  // 2e. Hide the banner.
  cmp.hideBanner();
}
 
// 3. On every subsequent page load, the stored decision
//    is read FIRST, before any tracker initialisation, and
//    used to gate the entire stack.

Step 2c is the one that fails most often in the wild. Sites store the consent decision but never wire it to a deletion routine. Tracking cookies set during accept survive a subsequent reject. The CNIL's December 2024 formal notice specifically called this out: rejection must result in the deletion or anonymisation of data already collected on the basis of withdrawn consent.
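For reference, a minimal sketch of what a deleteCookies routine actually has to do. A browser only deletes a cookie when it is re-set, already expired, with a matching Path and Domain, so the routine has to enumerate the plausible combinations. The names, domain, and variant list here are illustrative; a real implementation would derive them from the CMP's cookie inventory.

```javascript
// Build the expiry strings needed to delete each non-essential cookie.
// Each string would be assigned to document.cookie in the browser.
function buildExpiryStrings(names, { domain, paths = ['/'] }) {
  const expired = 'expires=Thu, 01 Jan 1970 00:00:00 GMT';
  const out = [];
  for (const name of names) {
    for (const path of paths) {
      out.push(`${name}=; ${expired}; path=${path}`);                      // host-only cookie
      out.push(`${name}=; ${expired}; path=${path}; domain=${domain}`);    // exact domain
      out.push(`${name}=; ${expired}; path=${path}; domain=.${domain}`);   // parent-domain cookie
    }
  }
  return out;
}

buildExpiryStrings(['_ga', '_gid'], { domain: 'example.com' });
// 2 names × 1 path × 3 domain variants = 6 expiry strings
```

The three domain variants are the point: analytics cookies are routinely set on the parent domain, and an expiry string that omits the Domain attribute silently fails to delete them while reporting no error.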

Why This Happens (A Short Detour Into Engineering Reality)#

Dark patterns in consent flows do not appear by accident. They are produced by predictable organisational forces:

  1. A/B testing optimises for the wrong metric. The engineering team is asked to maximise opt-in rate. The PM ships an experiment. The asymmetric variant wins. Nobody runs the experiment to optimise informed consent rate, because that is not a metric anyone has dashboards for.
  2. Tag management runs ahead of consent management. GTM is configured by the marketing team. The CMP is configured by the privacy team. The two teams discover at integration time that the tags have been firing all along. Fixing this requires breaking the dataLayer architecture that powered every product launch for the past three years.
  3. The CMP is a checkbox vendor. Many in-house teams treat OneTrust/Cookiebot/Didomi as a compliance box. The vendor ships defaults that are arguably defensible in some jurisdictions. Nobody re-tests them quarterly. Regulators are increasingly explicit that defending against dark patterns is the controller's responsibility, not the CMP vendor's — and a CMP misconfiguration is the controller's GDPR violation, not the vendor's.
  4. Withdrawal is invisible to product metrics. No dashboard tracks withdrawal rate. No quarterly review penalises a hard-to-find withdrawal flow. The team that owns the footer link is not the team that owns the banner. Withdrawal degrades and nobody notices until a regulator does.

The structural fix is not a better CMP. It is privacy engineering as a first-class discipline: someone whose job description includes auditing the consent stack quarterly, owning the dataLayer defaults, and defining a withdrawal-rate metric that gets reviewed alongside opt-in rate. Most large EU-facing sites do not have this role yet. Most regulator findings in 2024–2025 explicitly recommend creating it.

What Symmetric Implementation Costs#

This is the part that, after a decade of consultancy reports about how complicated GDPR is, reads as almost flippant: a fully symmetric, GDPR-defensible consent flow is roughly one engineering day plus a tag-management audit.

The work is:

The reason this work doesn't get done is not engineering complexity. It is that most product organisations are not structured to make a one-day project worth doing if it costs measurable opt-in rate. The privacy-engineering function exists, in part, to make that trade-off explicit at the org level — not to argue it on individual code reviews.

Enforcement: 2024–2026 Is the Inflection Point#

The CNIL's enforcement curve on cookie banners makes the regulatory direction unambiguous:

NOYB (Max Schrems' organisation) has filed over 500 GDPR complaints specifically targeting cookie-banner dark patterns. Their 2021 baseline data is the most cited industry benchmark: 81% of audited sites had no first-layer reject, 73% used deceptive colours, and only 18% subsequently added a working withdrawal mechanism after being challenged. Mathur et al.'s 2019 ACM CSCW paper Dark Patterns at Scale established the academic taxonomy that the EDPB now operationalises in regulatory practice — and the empirical finding that prevalence is concentrated in the most popular sites, not the obscure ones, is consistent with what I observed in my own ten-site corpus.

Enforcement attention is also moving to measurable, scriptable criteria. CNIL's December 2024 notice referenced visual asymmetry in colour, size, and placement — these are testable in code. The pattern over the next 18 months will be regulators commissioning their own scanning tools (CNIL has already prototyped one) and issuing automated, evidence-backed notices at scale. The age of the discretionary, complaint-driven enforcement model is ending. The age of compliance-as-code is beginning.

What to Actually Do, This Quarter#

If you operate a consumer-facing site in the EU or serving EU users:

  1. Audit your first-layer consent UI — accept and reject buttons must use the same visual class. If they don't, this is a 15-minute fix and a meaningful regulatory risk reduction.
  2. Audit your default consent state — gtag('consent', 'default', {...}) must initialise every non-essential storage key to 'denied'. Anything else is invalid consent regardless of what the user later clicks.
  3. Audit your withdrawal flow — single-click from every page footer, named "Withdraw cookie consent" or its non-English equivalent. The re-opened banner must be symmetric.
  4. Test that reject actually works — script a Playwright run that clicks Reject, then checks that no non-essential cookies are set, no tracker requests fire, and any pre-existing non-essential cookies have been deleted. If your reject path leaves cookies behind, your CMP is misconfigured — and "misconfigured" is the controller's violation, not the vendor's.
  5. Define and track withdrawal-rate — a metric that does not exist will not be defended in any product review. Make it visible.
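Item 5 needs only a handful of lines once the events exist. The event names and record shape below are hypothetical; the point is that the ratio must be computed and reviewed, not which analytics stack computes it.

```javascript
// Derive opt-in rate and withdrawal rate from a consent event stream.
function consentMetrics(events) {
  const count = type => events.filter(e => e.type === type).length;
  const accepts = count('consent_accept');
  const rejects = count('consent_reject');
  const withdrawals = count('consent_withdraw');
  return {
    optInRate: accepts / (accepts + rejects),
    withdrawalRate: withdrawals / accepts, // withdrawals per prior accept
  };
}

const events = [
  ...Array(4).fill({ type: 'consent_accept' }),
  { type: 'consent_reject' },
  { type: 'consent_withdraw' },
  { type: 'consent_withdraw' },
];
consentMetrics(events); // → { optInRate: 0.8, withdrawalRate: 0.5 }
```

A withdrawal rate that is structurally near zero is not evidence of satisfied users; it is evidence that nobody can find the withdrawal path.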

Dark patterns in consent flows are not a UX failure. They are an engineering pattern, deliberately shipped, and increasingly enforced against. Symmetric consent is cheap to build; the hard part is defending it inside an organisation measured on opt-in rate. The 2024–2025 enforcement wave is the moment when the regulatory cost of dark patterns starts to dominate the opt-in rate they buy.

If your privacy programme ends at "we have a banner," it ends in the wrong place.


This post is part of an ongoing series on web privacy, built on a structured knowledge base of GDPR and ePrivacy jurisprudence and grounded in scans of real consumer websites. Related reading: Browser Fingerprinting in 2026, the Privacy Audit series.

The 10-site scan referenced throughout was performed by an AI-powered privacy auditor I built — Playwright, Firefox, three-variant scanning (ignore / accept / reject), and automated scoring against the EDPB criteria above. Site identities are anonymised in this post; the published scan-by-scan series does name them, with full methodology and raw data.

Related#

Browser Fingerprinting in 2026: How It Works, Why Regulators Are Cracking Down, and How to Defend Against It

The tracking method that survives when cookies die. A technical guide to canvas, WebGL, AudioContext, and WebGPU fingerprinting — what GDPR and ePrivacy actually say, and what defenses hold up.


I Built an AI That Audits Websites for Privacy. Here's What It Found.

A data engineer who builds tracking infrastructure by day built an AI that audits everyone else's. 10 major websites scanned. Average score: 4.9 out of 10.
