
Lizette Spangenberg
As AI and data-driven systems become embedded in everyday products, designers are increasingly shaping decisions that affect people’s emotional wellbeing, access, and safety. Yet many of these systems work well only for the people they were implicitly designed around — and quietly fail everyone else.
This workshop explores how algorithmic cruelty emerges: not through malicious intent, but through everyday design decisions built on incomplete data, narrow assumptions, or unexamined defaults. From recommendation engines that resurface painful memories, to biometric AI that fails on certain skin tones, to automated systems that expose or exclude vulnerable users — the consequences are often invisible until they cause harm.
Drawing on examples from a range of digital products, this session shows where things typically go wrong and how UX practitioners can intervene earlier to design more humane, inclusive systems.
Key Takeaways
This session is for you if you are a UX designer, researcher, or product professional working on (or moving towards) AI or data-driven features and want to design them more responsibly.
It’s especially relevant if you’ve ever:
Participants should be comfortable with core UX concepts and open to interrogating assumptions, working through ambiguity, and engaging in group discussion.
You’ll get the most out of this session if you’re looking for practical tools and frameworks to identify hidden risks early and design more inclusive, human-centered AI systems — even within real-world constraints.
