Day 2
11:25 am – 1:25 pm
workshop
Skill level: senior
sign-up only

When AI Can’t See You: Designing Against Algorithmic Cruelty

As AI and data-driven systems become embedded in everyday products, designers are increasingly shaping decisions that affect people’s emotional wellbeing, access, and safety. Yet many of these systems work well only for the people they were implicitly designed around — and quietly fail everyone else.

This workshop explores how algorithmic cruelty emerges: not through malicious intent, but through everyday design decisions built on incomplete data, narrow assumptions, or unexamined defaults. From recommendation engines that resurface painful memories, to biometric AI that fails on certain skin tones, to automated systems that expose or exclude vulnerable users — the consequences are often invisible until they cause harm.

Using examples from various digital products, this session shows where things typically go wrong, and how UX practitioners can intervene earlier to design more humane, inclusive systems.

Key Takeaways

  • A clearer understanding of how “well-functioning” AI systems can still cause harm
  • The ability to spot exclusion and risk earlier in the design process
  • Practical methods to design more inclusive, emotionally aware AI-driven experiences
  • A repeatable workshop activity you can run with your own product teams
  • A stronger UX role in shaping ethical, trustworthy AI systems


This session is for you if you are a UX designer, researcher, or product professional working on (or moving towards) AI or data-driven features and want to design them more responsibly.

It’s especially relevant if you’ve ever:

  • Shipped a feature that “worked” but didn’t feel quite right
  • Struggled to account for edge cases or vulnerable users
  • Relied on data or automation without fully trusting the outcome
  • Wanted a clearer way to raise ethical concerns within your team

Participants should be comfortable with core UX concepts and open to interrogating assumptions, working through ambiguity, and engaging in group discussion.

You’ll get the most out of this session if you’re looking for practical tools and frameworks to identify hidden risks early and design more inclusive, human-centered AI systems — even within real-world constraints.


Sessions you might also like

  • The messy science of conversion rate optimization – Marcella Sullivan, Creative CX
  • When AI Can’t See You: Designing Against Algorithmic Cruelty – Lizette Spangenberg, Allegra
  • For 30 years, websites worked the same way – until now. – Oliver Kartak, University of Applied Arts Vienna