The Responsibility Gap and Why It Matters for TA Leaders
It's been a hectic few weeks. Half term with the family, then submitting the first two essays of my MA in AI, Ethics and Society.
But I'm back with something I've been working on for months.
Over the next few weeks, I'm going to share frameworks that will change how you think about AI in hiring.
Not more vendor promises.
Not more "trust us, we're ethical."
But practical tools to evaluate whether your AI hiring systems are actually fair and who's accountable when they're not.
Three frameworks. Three critical questions most organizations can't answer.
Let me introduce them.
FRAMEWORK 1: THE RESPONSIBILITY GAP
This is the space between what AI vendors claim their systems do and what organizations deploying those systems can actually verify.
When I ask TA leaders: "Can you explain how your ATS ranks candidates?"
I get: "It uses our requirements and matches based on qualifications."
When I push deeper: "What algorithm? What weights? What definition of quality is it optimizing for?"
Silence.
Not because they're incompetent.
Because vendors optimize for passing audits, not for transparency.
And TA leaders accept high-level assurances because they don't have frameworks to demand more.
This gap is where discrimination happens.
Derek Mobley applied for 80+ jobs through Workday's AI screening. Rejected every time. No one could explain why—not the companies, not the vendor, not the algorithm.
In May 2025, a court allowed his case to move forward as a nationwide collective action that could affect millions of applicants.
"The vendor said it was fine" didn't protect those employers.
The responsibility gap is structural, not accidental.
And it's about to become very expensive.
FRAMEWORK 2: THE GROUP AGENT OF HIRING
When discrimination happens through an AI hiring system, who's responsible?
→ The vendor: "We built according to spec. The employer controls the inputs."
→ The employer: "We trusted the vendor's audit. They're the AI experts."
→ The recruiter: "I just used the system's recommendations."
→ HR: "The system is neutral. It's how people use it."
Result: No one is accountable.
Philosophers call this "the problem of many hands": responsibility becomes so diffused across multiple actors that no single party can be held accountable.
In hiring AI, you have a group agent: multiple actors (vendor data scientists, product managers, TA leaders, recruiters, hiring managers) collectively producing an outcome that none individually controls.
When that outcome is discriminatory, accountability dissolves.
This is what I'm researching in my MA in AI, Ethics and Society.
And it's what organizations need to address before the EU AI Act deadline: August 2, 2026.
Because "accountability diffusion" won't be a defense.
FRAMEWORK 3: LEVEL 1 VS LEVEL 2 ASSESSMENT
Most organizations focus on Level 1: Technical Compliance.
→ Does the AI explicitly use race, gender, age as inputs?
→ Does it pass bias tests?
→ Can it generate explanations?
→ Is it GDPR compliant?
Vendors pass Level 1 audits. TA leaders trust those audits. Systems get deployed.
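To make "passing a bias test" concrete, here's a minimal sketch in Python of what a typical Level 1 check looks like: compare selection rates between two groups against the four-fifths rule. The numbers and function names are hypothetical, and real audits involve more than this, but the underlying logic is this simple.

# Level 1-style check: selection-rate comparison against the four-fifths rule.
# All numbers are hypothetical and for illustration only.

def selection_rate(selected, applicants):
    # Share of applicants the system advanced
    return selected / applicants

def passes_four_fifths(rate_a, rate_b):
    # Adverse-impact ratio: lower rate / higher rate must be at least 0.8
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher >= 0.8

rate_men = selection_rate(selected=120, applicants=400)    # 0.30
rate_women = selection_rate(selected=100, applicants=400)  # 0.25

print(passes_four_fifths(rate_men, rate_women))  # True: 0.25 / 0.30 is about 0.83, the audit "passes"

A system can clear this check and still be optimizing for a definition of "quality" nobody has examined.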
But Level 2: Structural Legitimacy is where fairness actually lives.
Level 2 asks:
→ What definition of "qualified" or "high performer" is the algorithm optimizing for?
→ Who decided that definition? How was it validated?
→ What proxy indicators does it use (education, communication style, career trajectory)?
→ How do those correlate with protected characteristics?
→ If it learns from past hiring, how does it prevent encoding historical bias?
→ Can you explain specific decisions to rejected candidates?
Example: The Elite University Problem
Your ATS learns from 5 years of hiring data. You hired heavily from elite universities: not explicit bias, just better networks and reach.
The AI learns: "Elite degree = quality."
It starts ranking elite degree candidates higher.
Level 1 audit:
✅ No explicit use of race or socioeconomic status
Level 2 analysis:
❓ Elite universities correlate heavily with privilege, which correlates with race. You're using education as a proxy for characteristics you're not supposed to consider.
Most audits only examine Level 1.
The gap between Level 1 and Level 2 is where organizations get exposed.
To litigation. To regulatory penalties. To candidates they've systematically excluded.
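To see why the Level 1 result and the Level 2 analysis can point in opposite directions, here's a minimal sketch on synthetic data (the group sizes, degree rates, and score weight are all invented for illustration): a ranking score that rewards the elite-degree proxy reproduces a group disparity even though the protected attribute never appears as an input.

# Sketch of the elite-university problem on synthetic data.
# The protected attribute is never an input, but the proxy carries it anyway.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected / privilege attribute (never shown to the model)
group = rng.integers(0, 2, n)   # 0 = less privileged, 1 = more privileged

# Proxy feature: elite degrees are far more common in the privileged group
elite_degree = rng.random(n) < np.where(group == 1, 0.40, 0.05)

# A score "learned from past hires" that rewards the proxy
skill = rng.normal(0, 1, n)            # genuine signal, identical across groups
score = skill + 1.5 * elite_degree     # the proxy gets heavy weight

# Screen in the top 20% by score, the way a ranking ATS would
cutoff = np.quantile(score, 0.80)
selected = score >= cutoff

for g in (0, 1):
    print(f"group {g}: selection rate {selected[group == g].mean():.2%}")

# Typical output: the privileged group is selected at more than twice the rate,
# even though 'group' never entered the model.

A Level 1 audit sees no protected inputs. A Level 2 analysis asks where the score came from, and that is the question most organizations can't answer.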
WHY THIS MATTERS NOW
Two forces are closing these gaps whether you're ready or not:
- Litigation: The Workday case is just the beginning. Every major ATS vendor uses similar technology.
- Regulation: the EU AI Act deadline is 9 months away. Penalties for non-compliance run up to €35M or 7% of global revenue for the most serious breaches.
Most organizations can't demonstrate:
→ Transparency (they don't understand vendor methodology)
→ Explainability (they can't explain decisions to candidates)
→ Meaningful human oversight (they don't understand AI recommendations well enough)
→ Continuous bias monitoring (they can't audit independently)
The responsibility gap prevents compliance.
WHAT'S COMING
Over the next few weeks, I'll break down each framework in depth:
→ Week 2: The Responsibility Gap - why it exists, how it's maintained, what closes it
→ Week 2: The Level 2 Readiness Framework - assess your organization's capability
→ Week 3: FTSE 100 research - how prepared are UK's largest companies?
→ Week 3: Practical tools to close the gap
This isn't about perfection.
It's about accountability.
It's about TA leaders being able to answer: "Why was this candidate rejected?"
And not having to say: "The algorithm decided, and I can't explain how."
If you're a TA, HR, or DEI leader deploying AI in hiring…
If you suspect passing audits isn't the same as being fair…
If you've felt stuck in the responsibility gap…
Follow along.
Let's build the frameworks that should exist but don't.
Dan Gallagher
#ResponsibleAI #AIinHiring #TalentAcquisition #EUAIAct #VeritynIndex