The Interpreter Who Cannot Be Replaced
What Spotify's Engineers Teach Us About the Future of Language Access
What Actually Happened at Spotify
Last month, Spotify's co-CEO made an announcement that stopped a lot of people mid-scroll.
The company's most senior engineers have not written a single line of code since December 2025. Not because they burned out. Not because the company downsized. Because an AI system called Honk, built on Anthropic's Claude Code, does that part now. The engineers direct it from Slack on their morning commute. They review. They approve. They merge. The AI executes.
Spotify shipped more than fifty new features in 2025. Not despite that shift. Because of it.
If you are a professional interpreter, I want you to sit with that for a moment. Not because it is scary. Because it is clarifying.
Spotify did not replace its engineers. It elevated them.
The people who spent their careers writing code now spend their time on the work that requires human judgment: designing systems, catching what the AI gets wrong, making decisions about direction and quality that require deep expertise and contextual awareness. They need to understand their craft more deeply than before, because directing an AI system well requires knowing exactly what good output looks like—and recognizing the moment when something is quietly, dangerously off.
Their role shifted from execution to orchestration.
And that shift is already beginning in language access. Right now.
The Honest Truth About Where AI Is Heading in Interpreting
Real-time translation tools are improving every quarter. AI captioning is becoming more accurate. Research on automated recognition of signed languages is moving faster than most people in the field realize.
The question is not whether these tools will affect the interpreting profession. They will. The question is which parts of interpreting they can handle—and which parts they fundamentally cannot.
What AI Can Do Reasonably Well
- Transfer words
- Match vocabulary
- Produce linguistic output at speed
What AI Cannot Do
- Walk into a room and read what is happening before a single word is spoken
- Notice that a Deaf patient is not asking questions because they are afraid, not because they understand
- Recognize that a doctor's clinical tone is landing differently than they intend
- Hold the emotional weight of a terminal diagnosis and deliver it with both accuracy and human dignity
That is the work that will define the interpreter of the future. And it is work that no algorithm is close to replicating.
What Deaf Communities Have Always Been Asking For
Here is what gets lost in conversations about AI and the future of interpreting: Deaf communities have been articulating these needs for a long time.
Not just accurate words. Interpreters who understand the cultural context they are walking into. Practitioners who recognize power dynamics and do not unconsciously reinforce them. People who can be trusted to carry meaning across two languages and two cultural worlds in a way that leaves everyone more connected, not just more informed.
That has always been the standard. AI is not raising the bar. It is simply making clearer where the bar actually is.
The interpreters who will thrive in the next decade are the ones who have been building toward that standard all along: the ones who reflect on their practice, who examine their cultural assumptions, who have developed the emotional regulation to stay present in the hardest rooms, and who understand that their role is not to be invisible but to be trustworthy.
The Infrastructure Gap
Spotify invested heavily in building the tools and workflows that allowed their engineers to operate at a higher level. They did not just tell people to “level up.” They built the infrastructure to make it possible.
The interpreting profession has not done that yet.
CEU requirements still lean heavily on linguistic competency. Professional development often treats emotional intelligence and cultural mediation as electives rather than core skills. There is very little structured infrastructure for helping interpreters develop the judgment, reflective capacity, and emotional attunement that will define their value in an AI-augmented world.
That gap is not an abstract problem. It shows up in practice every day. Interpreters who want to grow beyond linguistic accuracy have few structured ways to do it. The ones who develop real expertise in reading rooms, navigating cultural complexity, and processing emotional weight often do so informally, through trial and error, without feedback loops that help them see their own patterns.
Closing that gap requires infrastructure that treats the high-judgment dimensions of interpreting—emotional intelligence, cultural awareness, ethical reasoning, reflective practice—as core professional competencies worthy of the same structured development we give to language skills. It requires tools that help interpreters examine their own decision-making patterns, practice in complex scenarios, and track their growth over time. And it requires a framework specific enough to generate real insight—not generic wellness advice, but targeted, evidence-based professional development.
That is what the ECCI framework was designed to provide. And it is what InterpretReflect was built to put into daily practice—so that the shift from execution to orchestration does not happen to interpreters, but happens with them, on their terms.
From Code Writers to System Architects. From Interpreters to Communication Orchestrators.
The trajectory is the same across both professions.
Spotify's best engineers did not become less valuable when AI started writing code. They became more valuable, because their judgment, expertise, and contextual awareness became the thing the system depended on.
For interpreters, communication orchestration means something specific. It means the work expands beyond the words to encompass the entire communicative environment. Consider what that looks like day to day:
Before the Assignment Starts
You are not just reviewing vocabulary lists. You are researching the cultural context of the interaction, anticipating where power dynamics might affect access, preparing for the emotional register of the content, and coordinating with your team about switching strategy and role positioning. You are designing the conditions for communication to succeed.
During the Assignment
You are monitoring multiple streams simultaneously: the linguistic content, the emotional subtext, the cultural framing, the room dynamics, your own internal state. When something shifts—a participant shuts down, a power imbalance surfaces, the emotional weight spikes—you make real-time adjustments that no automated system could recognize, let alone execute. You are not transferring words. You are conducting an entire communicative experience.
After the Assignment Ends
You are processing what happened, identifying where the meaning got complicated, examining your own choices under pressure, and extracting learning that makes you sharper for the next one. You are not just accumulating experience. You are building expertise.
The communication orchestrator does not do less than the traditional interpreter. They do more. They see more, hold more, and navigate more. And that expanded scope is precisely what makes them irreplaceable.
The interpreters who will be most sought after in five years are not the ones who can keep pace with a machine on vocabulary and speed. They are the ones who bring what a machine cannot: the ability to read a room, navigate cultural complexity, make ethical calls in real time, and hold the full humanity of an interaction while it is happening.
That is not a soft skill. That is the entire value proposition.
A Note to the Interpreter Reading This
You have probably had moments in your career where you knew, in the middle of an assignment, that something more than language was happening. Where the words were technically accurate but something about the meaning was at risk. Where you made a choice that required judgment no training program explicitly prepared you for.
That instinct is not a bonus feature of your practice. It is the center of it.
The future of language access does not belong to the interpreter who can process words the fastest. It belongs to the interpreter who has done the work to develop genuine emotional intelligence, real cultural humility, and the kind of reflective practice that turns difficult assignments into professional growth.
Deaf communities deserve that interpreter. And honestly? You deserve the tools and support to become that interpreter fully.
That is the work. And it starts with taking the human dimensions of this profession as seriously as the linguistic ones.
Source: Spotify Q4 2025 earnings call, February 10, 2026, as reported by TechCrunch and Fast Company.
About the Author
Sarah Wheeler, M.Ed., M.S.
Sarah is the founder of HuVia Technologies and creator of the ECCI Model. A CODA with 20+ years of interpreting experience across medical, legal, VRS, and educational settings, she holds graduate degrees in Interpreter Pedagogy and Psychology and is an Air Force veteran. InterpretReflect is available at www.interpretreflect.com.
Ready to build the skills that cannot be automated?
5 quick questions. Instant results across all five domains. Find out where you're strong and where you can grow.