BiasedBeans Project Hero

Project Name

BiasedBeans

Role

UI/UX Designer, UX Researcher, Project Lead

Team: Jei Park (Solo)

Duration

Fall 2024 (12 weeks)

Objective

Algorithmic biases are not abstract technical issues; they have real-world consequences that perpetuate systemic inequities and harm marginalized communities.

In this project, I aim to design an engaging and user-centric platform that empowers individuals to identify, report, and address generative AI biases, fostering transparency, accountability, and meaningful social impact.

Problem

Users often feel discouraged from actively reporting generative AI biases due to a lack of transparency, clear incentives, and a meaningful connection to real-world impact.

Reporting AI biases is deeply personal: it is driven by the same emotions users feel when confronting broader societal issues like gender inequality or unequal access to education. However, current reporting platforms fail to leverage this emotional connection, providing no clear pathway from bias reporting to tangible social impact.

Venn Diagram

Needs

"I want to see how my efforts to report AI biases actually help make a difference, so I feel like I'm contributing to something meaningful."

"I want a simple and intuitive platform that helps me identify biases and makes reporting them easy and rewarding."

"I want to support causes I care about and feel like my actions have a real-world impact."

Opportunity Gap

There is an opportunity to create a platform that transforms AI bias reporting into a meaningful, empowering experience by connecting users' emotional motivation to tangible social impact.

This creates a self-reinforcing cycle in which users feel valued, see their contributions driving change, and are motivated to continue engaging.

How Might We

"How might we design an engaging and rewarding platform that motivates users to identify and report generative AI biases, while helping them feel their contributions make a meaningful impact?"

Methods

I adopted a human-centered design approach to deeply understand user needs and rapidly iterate on prototypes.

My goal was to uncover the emotional and practical barriers users face when reporting AI biases and to design a solution that motivates engagement and fosters meaningful impact.

Through a blend of qualitative research, empathy-driven synthesis, and iterative testing, I gathered insights, identified key pain points, and refined my concepts.

Think Aloud + Data Synthesis + Affinity Clustering

I conducted usability testing with four participants using the think-aloud method, encouraging them to verbalize their thoughts while interacting with the low-fidelity version of the platform. I analyzed the insights through affinity clustering, which surfaced recurring patterns and refined my understanding of the key user challenges.

Data Synthesis Process
Affinity Clustering Process

Empathy Map + Concept Model

The Empathy Map captures users' thoughts, feelings, and actions around AI bias reporting, revealing key challenges like lack of clarity, trust, and transparency, while highlighting motivations such as tangible outcomes and meaningful impact.

The Concept Model maps relationships between key stakeholders (developers, AI companies, hiring teams, and the public), emphasizing shared goals of fairness, transparency, and ease of reporting.

Empathy Map
Concept Model

Crazy 8s + Speed Dating with Storyboards

The goal of Speed Dating was to rapidly test multiple ideas through quick storyboard validation. Using insights from the Crazy 8s ideation method, where I brainstormed novel and creative solutions, I selected three promising storyboards for testing. This process revealed a key insight: users are most motivated when they see their actions contributing to a larger goal or mission.

Storyboard
Crazy 8s Exercise

Solution

BiasedBeans Prototype Testing

Who

Tested with 4 CMU students majoring in AI, Data Science, and Statistics, chosen for their potential future impact in the tech industry.

What

Observed how participants scrolled social media and engaged with the bias reporting process, measuring time taken, number of attempts, and qualitative behaviors like hesitation or confidence.

Why

To evaluate if the app's interactive and gamified design motivated users to report biases, simplified bias identification, and encouraged consistent engagement through rewards.

How

Participants scrolled social media feeds on their phones, identified biases, and used a shortcut icon to access the BiasedBeans prototype. They selected bias categories, described their observations, and viewed a manually created rewards dashboard showing earnings, rankings, and community feedback. User reactions, ease of use, and engagement levels were recorded throughout the process.

Insights

Insight 1: Users Want to Understand AI Bias

The "Learn about GenAI Biases" section with real-world cases and articles helps users identify and report biases more confidently.

Insight 2: Impact Transparency Drives Engagement

Status updates showing report outcomes significantly increased user trust and motivation to continue reporting, as users could see their direct contribution to AI improvement.

Insight 3: Bean Collection Motivates Action

Users were excited to earn beans for each validated report, likening their growing collection to a rewards program.

Insight 4: Users Value Decision-Making Power

Users felt empowered by the authority to decide which social causes to support with their earned beans. This sense of agency transforms them from passive reporters into active contributors to social change.

Honest Signals - Success & Failure

Success:

  • Engaging Gamification: Users expressed genuine interest in earning beans, checking the leaderboard, and redeeming rewards.
  • Seamless Integration: Participants smoothly transitioned from social media to the app using the shortcut bean icon.
  • Clarity in Bias Categorization: Clear prompts made identifying and reporting biases easier, reducing hesitation.

Failure:

  • Difficulty in Classification: Some biases were hard to categorize accurately.
  • Overemphasis on Rewards: Users seemed more focused on perks than on meaningful reporting.
  • Contextual Gaps: Finding generative AI biases on social media required multiple searches, slowing down the process.

Final Summary

BiasedBeans transforms AI bias reporting into an engaging and purpose-driven experience through gamified rewards, seamless integration, and transparent contribution tracking. Users who reported gender-related biases felt more motivated when they could contribute their rewards to causes like "Empowering minority women in STEM" and "Advocating for equal pay policies."

The interactive dashboard further enhanced engagement by providing clear visibility into the impact of their contributions, fostering trust and sustained participation.

By aligning user motivations with meaningful outcomes, BiasedBeans makes bias reporting intuitive, impactful, and emotionally rewarding.