Whose English gets to be default?
Have you ever been misunderstood while speaking to a voice interface because of your accent? How did these subtle barriers get encoded into automated speech recognition technology?
Accent bias is, unfortunately, one of the last socially acceptable forms of prejudice. Its influence is woven into voice interfaces and its effects increase the potential for harm through emerging use cases like emotion or sentiment detection.
This three-part talk explores how history, culture, and missing data have converged to produce the conditions for biasing outcomes and reproducing exclusion in voice interfaces:
- In the first part, we look at the human fascination with robots from the thirteenth century to today. We move on to examine how these technological beings reproduce culturally constructed ideas of linguistic authority while reinforcing problematic ideas about gender roles and accent prestige. So we ask, whose language gets to take control?
- In the second, our attention turns to datasets, because it is in the data that is missing or was never collected that we identify the people whose speech may be marginalised, and how they might be at risk of calculative or automated harms. So we ask, whose missing language is weaponised against them?
- We finish by thinking about the ways we conceptualise nationality, and how this collective definition is full of contradictions and tensions for linguistic identity. So we conclude, asking whose voices are recognised as English?