
Gamify AI speech bias education through kid-centered design

Launched

AI

0-1

EdTech

project intro

Launched from scratch in 4 weeks, EmoTune is a tool that helps children understand AI speech bias through interactive, voice-based gameplay while supporting research at the UW LED Lab. I led end-to-end design and conducted user testing to generate insights for the lab’s ongoing studies.

Role

UX design, visual design, content design, user testing

Timeline

4 weeks

With

1 developer/designer

Tools

Figma, Illustrator, Hugging Face

Impact highlight

Launched a 0-to-1 working website in 4 weeks

Only team (out of 5) to deliver beyond the requirements

🎯

Helped expand the research and collected insights

From image-to-text to speech classification

💬

Sparked strong engagement and curiosity in kids

Kids loved our AI bias learning tool during testing

problem

As AI becomes part of children’s lives, the UW LED Lab seeks interactive tools that help kids aged 6-10 understand AI bias and generate research insights through testing.

We were one of five teams tasked with this challenge; here are the constraints we faced:

Technical constraint 📦

Build a fully functional prototype, though not required

Team constraint 👯‍♀️

A team of 2 handling both design and development

Time constraint ⏱️

A tight 4-week timeline to go from concept to launch

Testing constraint ⚙️

No access to kid testers until the final testing day

solution

EmoTune is a gamified, AI-powered tool that helps children recognize and reflect on AI speech bias.

01

Explore speech bias through three intelligent AI characters

Three distinct characters help children experience different types of AI speech bias—emotion, language, and accent—through voice-based inputs.

02

Real-time voice interaction with AI

Children speak directly to each character and get immediate feedback from the AI, making bias visible and learning hands-on (see the capture sketch after this overview).

03

Reflection and explanation

After each interaction, kids are guided to think critically about the AI’s response and discover how bias may have influenced the outcome.
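For a sense of the plumbing behind (02), here is a minimal sketch of the browser-side capture step using the standard MediaRecorder API; classifyClip() is a hypothetical helper standing in for the model call shown later in this case study.

```ts
// Record a short voice clip in the browser and hand it to a classifier.
// classifyClip() is a hypothetical helper standing in for the model call.
declare function classifyClip(clip: Blob): Promise<string>;

async function recordAndClassify(durationMs = 3000): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    stream.getTracks().forEach((t) => t.stop()); // release the microphone
    const clip = new Blob(chunks, { type: recorder.mimeType });
    console.log("AI heard:", await classifyClip(clip));
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs); // keep clips short for kids
}
```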

Due to the tight timeline, we started with...

research

What could we learn from the lab's existing research?

Before jumping into design, we leveraged key insights from the lab's previous tests with 6–10-year-olds:

Insight 1 📊

Kids are ready for AI bias in other modalities

The lab's current research focused on image-to-text bias; we could explore other modalities.

Insight 2 🧠

Kids have short attention spans

The process should be easy to follow and balance learning with play.

Insight 3 📏

Ages 6-10 bring different learning needs

Given the time constraints, the design should work for age 6 first; it would then also be usable for older kids.

Insight 4 🎨

Kid-friendly visuals and interactivity

Engaging visuals and interactive elements are key to keeping kids involved.

project direction

We decided to expand the lab’s research by designing for AI speech bias through voice interaction.

After testing several Hugging Face APIs, we started with a speech emotion classifier that offered both accuracy and speed—providing a solid foundation to build upon.

Image-to-text → Speech classification

Speech-emotion classification API on Hugging Face
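As a rough sketch of what calling that classifier looks like from the web app: the model ID below (superb/hubert-large-superb-er, a public speech-emotion model on Hugging Face) and the token handling are illustrative assumptions, not necessarily our exact setup.

```ts
// Send a recorded audio blob to a speech-emotion model via the Hugging Face
// Inference API. Model ID and token handling are illustrative assumptions.
const HF_MODEL = "superb/hubert-large-superb-er";

interface EmotionScore {
  label: string; // e.g. "hap", "sad", "ang", "neu"
  score: number; // confidence between 0 and 1
}

async function classifyEmotion(clip: Blob, token: string): Promise<EmotionScore[]> {
  const res = await fetch(`https://api-inference.huggingface.co/models/${HF_MODEL}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: clip, // raw audio bytes, e.g. the webm/wav blob from MediaRecorder
  });
  if (!res.ok) throw new Error(`Inference failed: ${res.status}`);
  return res.json(); // ranked list of { label, score } predictions
}
```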

user flow & ideation

Through a simple, intuitive flow, kids will interact with characters that represent AI and learn bias through voice-based play.

I began by mapping out the core learning flow, then distilled it into the key information that shaped the structure of each screen. With the big idea and some sketches, we developed the user flow via a storyboard that narrates the interaction with our envisioned product.

Iteration 1: Kid-centered design

After a quick test of the concept, I transformed the early wireframes into a game-like, kid-centered experience.

Quick testing showed a lack of engagement

After the engineer coded the basic flow as a lo-fi prototype, I tested it with my 7-year-old cousin and noticed:

👀 Lost focus easily on static text

😕 Didn't know where to look

🎮 Expected fun visual and interactive styles

To address this, I did more research on the games he and his friends played and redesigned the screens with:


💬 Dynamic, conversational guidance

🎯 Simple, intuitive interactions with hierarchy

🧱 Aesthetics matched to popular kids' games

Game-inspired interfaces with hierarchy and dynamics

Based on the feedback, I adjusted the visual hierarchy with elements like conversation bubbles, buttons, and instructions to make the page more engaging.

Playful visual design & aesthetics

I created a set of pixel cartoon characters that evolve visually based on the AI’s response. Inspired by popular kids’ media, I landed on a palette and style that felt lively, tech-forward, and accessible to ages 6–10.

Reflection point to nudge proactive learning

At the end of each voice interaction, we added a pause that prompts kids to reflect on the AI's response, encouraging critical thinking about the process.

AI-powered content design & voiceover

To make the experience more engaging, I scripted all character dialogue and trained AI voices to give each one a distinct personality—since we didn’t have time for human voiceovers.

Iteration 2: Usability

Making the experience deeper, clearer, and more usable through improved content, system feedback, and API diversity.

We showed our iterations to the researcher and tested a few more times. Then we made three major changes before launch.

Original information architecture

We initially used one API (speech emotion) for all three characters.

Change 1: diverse and character-specific APIs

After consulting with the researcher, we added language and accent classification APIs, assigning one to each character.
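Structurally, this just means each character carries its own model ID. A minimal sketch, where the character names and the language/accent model IDs are placeholders:

```ts
// One classifier per character. The emotion model is the one we started with;
// the language and accent IDs below are placeholders, not real model names.
const CHARACTER_MODELS: Record<string, string> = {
  emotionBot: "superb/hubert-large-superb-er", // speech-emotion classification
  languageBot: "example-org/spoken-language-id", // placeholder: language ID
  accentBot: "example-org/english-accent-classifier", // placeholder: accent
};

async function classifyForCharacter(character: string, clip: Blob, token: string) {
  const model = CHARACTER_MODELS[character];
  const res = await fetch(`https://api-inference.huggingface.co/models/${model}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: clip,
  });
  if (!res.ok) throw new Error(`Inference failed for ${character}: ${res.status}`);
  return res.json();
}
```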

Change 2: added explanation pages for clarity

Simple visuals and text explain how models work and where bias comes in.

Change 3: built-in error handling

We found during testing that the APIs may fail when analyzing short voice clips, so we added a re-record button and helpful reminders.
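A sketch of that guard, assuming a minimum clip length and a hypothetical showReminder() UI helper; classifyEmotion() is the call from the earlier sketch:

```ts
// Reject clips that are too short for the classifier, and surface a
// kid-friendly re-record prompt when the API call still fails.
// MIN_CLIP_MS and showReminder() are illustrative assumptions.
const MIN_CLIP_MS = 1500;

declare function showReminder(message: string): void; // hypothetical UI helper
declare function classifyEmotion(clip: Blob, token: string): Promise<unknown>;

async function safeClassify(clip: Blob, recordedMs: number, token: string) {
  if (recordedMs < MIN_CLIP_MS) {
    showReminder("That was a little short! Try holding the button longer.");
    return null;
  }
  try {
    return await classifyEmotion(clip, token); // call from the earlier sketch
  } catch {
    showReminder("Hmm, I couldn't hear that. Tap re-record and try again!");
    return null;
  }
}
```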


evaluation

Kids engaged, explored, and reflected—with strong research value.

Final testing day

We tested with three kids (ages 6, 8, and 10) through UW KidsTeam. The kids enjoyed the experience, learned through exploration, and gave us valuable feedback that we reported to the researchers.

What kids loved…

🎮

Gamified interaction kept them engaged

The gamification made it easy and fun for kids to stay involved.

🎨

Pixel visuals felt familiar and exciting

Kids really liked the visuals, which resemble some of the trending games they play.

🤖

“Tricking the AI” helped them reflect and learn

Kids learned by trying to trick the AI into giving the answer they wanted.

What could be improved…

🗣️

Voices still felt a bit robotic and unnatural

AI-trained voices lacked the expressiveness kids expected.

📚

Reflection steps were too text-heavy

Kids found the text too long, and there was no way to skip steps when starting over.

🙋

Kids needed prompt scaffolds when they were stuck

Prompts with constraints should be provided to ensure a safe, supportive experience.

reflection

Next: design for a safer and more diverse experience…

01

Design more characters, powered by diverse APIs

Future versions should introduce more characters using varied APIs for language, accent, and emotion—to better reflect diverse backgrounds and experiences.

02

Design structured prompts to encourage safe and supportive interaction

During testing, some kids tried to tease or trick the AI, showing the need for clearer, age-appropriate prompts to encourage respectful and guided interaction.

03

Expand access for older and less tech-savvy children on other devices

To reach a broader audience, future designs should support other devices and offer a more intuitive experience for older or less tech-savvy kids.


Thank you for your time ☺

Next Project

Hue (link to slide)

An MR wearable that helps you adapt who you are

Based in Seattle

© 2025 Chloe Yu ☕️