EmoTune gamifies AI bias education with children's voices.

Launched

0-1

EdTech

EmoTune is a web platform shipped from scratch in just four weeks. Along the way, I learned to work within constraints, learn from kids, and make decisions quickly.


Role

UX Design, Content Design, Visual Design

Timeline

4 weeks

With

1 designer/developer

Tools

Figma, JavaScript

project background

Developing an MVP for AI bias education.


The UW Learning, Epistemology, and Design (LED) Lab was conducting research to promote socio-critical understanding of AI bias among children aged 6 through 12.

Our goal was to launch an MVP (a kid-friendly AI interaction system) from scratch. In collaboration with KidsTeam UW (an organization that co-designs new technology with children), we would run a user test with kids at the end, and the results would help inspire the lab's next stage of research.

challenge

0 to 1, launched in 4 weeks.


Because of the project's pre-scheduled timeline and the kids' availability for the user testing session, our team of two, with limited resources, had only 4 weeks to take the project from the drawing board to launch.

design question

How might we quickly create a fun learning
experience for children to explore AI bias?


Core experience

Launching an interactive, kid-friendly platform to learn about AI speech bias.


EmoTune is a gamified, web-based application designed to help kids build a comprehensive understanding of AI speech bias.

01

Various APIs for comprehensive learning


Three characters represent three APIs, each covering a different type of speech bias: emotion, language, and accent.

02

Direct voice interaction with AI


Each character guides children through a complete flow of speaking directly to the AI system, helping them better understand AI bias (a minimal browser-recording sketch follows this list).

03

Reflection and explanation


In the later phase, we designed steps that ask the kids to reflect on the results, followed by an explanation of why that exact result happened.
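As an illustration of how the voice-interaction step can work in the browser (a minimal sketch, not the project's production code; the recordClip function and the fixed duration are assumptions):

```js
// Minimal sketch: record a short voice clip in the browser.
// recordClip and the fixed 5-second duration are illustrative assumptions.
async function recordClip(durationMs = 5000) {
  // Ask the browser for microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);

  // Resolve with the recorded audio once the recorder stops.
  const done = new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((track) => track.stop()); // release the microphone
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs); // stop after a fixed duration
  return done;
}
```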

research

What if we explore a new modality for AI bias education?


Current research focuses on the visual / image-generation side of AI bias, so we wanted to explore other opportunities. While looking for suitable APIs on Hugging Face, we noticed a "speech emotion" API that looked educational and intuitive to interact with, so we decided to give this new modality a shot.
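For illustration, querying a speech-emotion model through the Hugging Face Inference API from JavaScript could look like the sketch below; the model id and access token are placeholders, not the project's actual configuration:

```js
// Minimal sketch: send a recorded clip to a speech-emotion model on Hugging Face.
// MODEL_ID and HF_TOKEN are placeholders, not the project's actual configuration.
const MODEL_ID = "your-speech-emotion-model"; // placeholder model id
const HF_TOKEN = "hf_...";                    // placeholder access token

async function classifyEmotion(audioBlob) {
  const response = await fetch(
    `https://api-inference.huggingface.co/models/${MODEL_ID}`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${HF_TOKEN}` },
      body: audioBlob, // raw audio bytes (e.g., the recorded Blob)
    }
  );

  if (!response.ok) {
    throw new Error(`Inference request failed: ${response.status}`);
  }

  // Audio-classification models typically return [{ label, score }, ...].
  return response.json();
}
```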

👁️

ideation

How to design a fun and intuitive flow for children to learn about AI speech bias?


User Flow & Information Architecture


Step 1: Write down core steps for a straightforward child-friendly flow

Step 2: Develop more details for the actual interaction

After testing the flow ourselves and talking to the researcher in the lab, we made three revisions:

Character-specific functions:

  • The current three characters run on the same API and vary only in personality for kids' enjoyment. Introduce two additional APIs so each character plays a distinct role, and add a step at the end that encourages starting over.

Enhanced explainability:

  • Add more information on how the model works, how the tool could be biased, what to expect in the design, etc.

Error correction:

  • We noticed the selected APIs may fail when analyzing short voice clips, so we will include a re-record button and a text reminder (a minimal safeguard sketch follows this list).
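A minimal sketch of that safeguard, assuming a rough minimum-length threshold and hypothetical helpers (showReminder, showReRecordButton) for the reminder text and re-record button:

```js
// Minimal sketch: guard against clips that are too short for the API to analyze.
// MIN_DURATION_MS, showReminder, and showReRecordButton are illustrative assumptions.
const MIN_DURATION_MS = 2000;

async function analyzeOrRetry(audioBlob, durationMs) {
  if (durationMs < MIN_DURATION_MS) {
    // Text reminder plus a re-record button instead of a silent failure.
    showReminder("That was a little short! Try recording again.");
    showReRecordButton();
    return null;
  }
  try {
    return await classifyEmotion(audioBlob); // see the Hugging Face sketch earlier
  } catch (error) {
    // The API can still fail; fall back to the same re-record path.
    showReminder("Hmm, the AI couldn't hear that. Let's try again!");
    showReRecordButton();
    return null;
  }
}
```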


Step 3: Develop different speech APIs for comprehensive learning

iteration

Gamifying the learning experience!


Mid-Fidelity Iterations


The initial wireframes were intentionally simple, offering a basic layout for children to record their voices and receive AI-analyzed text outputs. After a guerrilla test with my brother, I noticed a lack of engagement in our design, so we drew inspiration from games and introduced interactive characters with conversation bubbles that guide children through prompts, provide feedback, and track progress.

With the big idea and some sketches, we developed the user flow via a storyboard that narrates the interaction with our envisioned product.

Visual Design Inspiration


Meanwhile, we also did some research on kid-friendly visual styles and color palettes.


Writing Content & AI Character Voice Training


I was responsible for drafting, editing, and iterating the script for the whole design. I was also fully in charge of selecting suitable AI voices and training them for our model.
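The shipped characters use trained AI voices; purely as an illustration of how a character line can be voiced on the web, here is a minimal sketch using the browser's built-in Web Speech API (the voice name and the sample line are placeholders):

```js
// Illustrative stand-in only: the shipped design uses trained AI voices, not the
// browser's built-in speech synthesis. voiceName and the sample line are placeholders.
function speakAsCharacter(line, voiceName) {
  const utterance = new SpeechSynthesisUtterance(line);
  const voice = speechSynthesis.getVoices().find((v) => v.name === voiceName);
  if (voice) utterance.voice = voice; // otherwise fall back to the default voice
  utterance.rate = 0.95; // slightly slower pacing for young listeners
  speechSynthesis.speak(utterance);
}

// Example: one character's guiding prompt.
speakAsCharacter("Hi! Can you tell me about your day in a happy voice?");
```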


deliverable

A guided journey to help children understand AI speech bias, and advocate for inclusive AI systems.


visual design

Pixel Art arouses children's interest.


I designed a series of pixel-art characters for the three APIs, based on the different states/results they show throughout the experience. After research, we decided on a color palette with a young, playful vibe that matches our target audience.

evaluation & impact

EmoTune fostered high levels of engagement and enthusiasm for AI bias learning among kids.


Testing Day


During a session with UW KidsTeam, we had a chance to test our launched website with a few kids (a 6-year-old girl, an 8-year-old boy, and a 10-year-old boy). We received valuable feedback from them and were glad that they enjoyed our design :)

Areas they enjoyed…


🎮


Fun Gamified Mode

The gamification made the platform easy for children to engage with.


🎨


Colorful, Pixelated Visual


Kids really liked the visuals, as they relate to some of the trending games they play.


👻


Tricking the AI to learn


Kids learned by tricking the AI into giving the answers they wanted.


Areas that can be improved…


🗣️


More Natural Voiceover

Kids felt the current voiceover still sounds a bit too robotic (due to insufficient AI voice training).


📚


More Concise Learning Progress

Kids found the texts too long, and there was no button to skip multiple steps at once when starting over.


🙋


Give Constraints to Prompts

Constrained prompts should be provided to help kids brainstorm when they are stuck and to ensure safety.


reflection

What have I learned? What is next?


01

More Characters/APIs

In the future, avoid being limited by specific APIs and design more characters that cover more languages, accents, and emotions for kids from various backgrounds.

02

Give Kids Prompts for Safety

Sometimes kids reacted unexpectedly, being mean to the AI or trying to manipulate it during the process. We should consider clearer guidelines and structured prompts to create a secure and supportive environment.

03

Make Design More Inclusive

The kids who joined our testing session were aged 6-10 and experienced with technologies like AI. As a next step, we should consider making our design more accessible and inclusive to kids in older age groups and those who are less tech-savvy.


Thank you for your time ☺

Next Project

InSync

An AI-powered reimagination of your online management


Based in Seattle

© 2024 Chloe Yu ☕️