Gamifying AI speech bias education through kid-centered design
project intro
Role
UX design, visual design, content design, user testing
Timeline
4 weeks
with
1 developer/designer
tools
Figma, Illustrator, Hugging Face
Impact highlight
The only team (out of 5) to deliver beyond the requirements
From image-to-text to speech classification
problem
📦
Technical
Building a fully functional demo, even though one wasn't required
👯♀️
Team
A team of 2 handling both design and development
⏱️
Time
A tight 4-week timeline to go from concept to launch
⚙️
Testing
No access to kid testers until the final testing day
solution
01
Three distinct characters help children experience different types of AI speech bias—emotion, language, and accent—through voice-based inputs.
02
Children speak directly to each character and get immediate feedback from AI, making bias visible and learning hands-on.
03
After each interaction, kids are guided to think critically about the AI’s response and discover how bias may have influenced the outcome.
research
insight 1
Kids are ready for AI bias in other modalities
Existing research focuses on image-to-text bias, leaving other modalities open to explore.
insight 2
Kids have short attention spans
Process should be easy to follow and balance learning and playing.
insight 3
Age 6-10 brings different learning needs
Given the timeline, we designed for age 6 first; a flow that works for the youngest users is also usable for older kids.
insight 4
Kid-friendly visuals and interactivity
Playful visuals and interactive elements are key to holding kids' attention.
project direction
After testing several Hugging Face APIs, we started with a speech-emotion classifier that offered both accuracy and speed, giving us a solid foundation to build on.
Image-to-text → Speech classification
Speech-emotion classification API on Hugging Face
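A minimal sketch of the kind of speech-emotion check we prototyped. The checkpoint name (`superb/hubert-large-superb-er`) and the audio file path are illustrative, not the exact setup from the project; any Hugging Face audio-classification checkpoint with emotion labels would work the same way.

```python
def top_emotion(predictions):
    """Pick the highest-scoring label from pipeline output
    (a list of {"label": ..., "score": ...} dicts)."""
    return max(predictions, key=lambda p: p["score"])["label"]

def classify_clip(path, model="superb/hubert-large-superb-er"):
    """Classify the emotion in a recorded voice clip.

    Lazy import keeps top_emotion() dependency-free; the model
    downloads on first run.
    """
    from transformers import pipeline
    classifier = pipeline("audio-classification", model=model)
    return top_emotion(classifier(path))

if __name__ == "__main__":
    # Hypothetical recording of a kid speaking to a character.
    print(classify_clip("kid_recording.wav"))
```

Keeping `top_emotion` separate from the pipeline call made it easy to swap models while we compared speed and accuracy across checkpoints.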
user flow & ideation
I began by sketching out the core learning flow, then distilled it into the key information that shaped the structure of the main screens.
Iteration 1: Kid-centered design

Dynamic, conversational guidance
Simple, intuitive interactions with hierarchy
Match aesthetics to popular kid games
Based on the feedback, I adjusted the visual hierarchy with elements like conversation bubbles, buttons, and instructions to make the page more engaging.
I created a set of pixel cartoon characters that evolve visually based on the AI’s response. Inspired by popular kids’ media, I landed on a palette and style that felt lively, tech-forward, and accessible to ages 6–10.
At the end of each voice interaction, we added a pause to prompt kids to reflect on AI’s response, encouraging critical thinking about the process.
To make the experience more engaging, I scripted all character dialogue and trained AI voices to give each one a distinct personality—since we didn’t have time for human voiceovers.
Iteration 2: Usability
We showed our iterations to the researcher, tested a few more times, and made three major changes before launch.
Evaluation
🎮
Gamified interaction kept them engaged
The gamification made it easy and fun for kids to stay involved.
🎨
Pixel visuals felt familiar and exciting
Kids loved the visuals, which reminded them of trending games they play.
🤖
“Tricking the AI” helped them reflect and learn
Kids learned by trying to trick the AI into giving the answer they wanted.
🗣️
Voices still felt a bit robotic and unnatural
AI-trained voices lacked the expressiveness kids expected.
📚
Reflection steps were too text-heavy
Kids found the text too long, and there was no way to skip steps when starting over.
🙋
Kids needed prompt scaffolds when they were stuck
Offering prompt suggestions with constraints would keep kids moving while keeping the experience safe.
reflection