Features

For You

A feature that gives Instagram users the ability to override post recommendations from the algorithm and customize their feed to their own tastes. This tackles information apathy, the root problem behind why users don't audit algorithmic bias. View Figma Prototype here.

Access your customizations

when you're scrolling through your feed or changing your account settings.

Adjust content frequencies

by creating a new customization for certain post hashtags and categories.

Modify existing changes

by editing frequencies or removing the customization completely.

Restore defaults

with a click of a button if you decide that the changes are unnecessary.

Catching algorithmic bias —

For You, a feature on Instagram that lets users customize what they see on their feed, aiming to increase auditing of algorithmic bias.


By letting people choose their own content, we encourage them to care more about the information they consume. This tackles information apathy, the root problem behind why users don't audit algorithmic bias.


View the Figma Prototype here.


role

UX Designer

duration

3 months

What would you like to see?

Search any word or hashtag

In what frequency?

Not at all

Frequent

You would see [more] posts like:

What would you like to see?

Animals

In what frequency?

Not at all

Frequent

You would see [more] posts like:



context

Algorithms have already decided that I don't look like a UX designer.

Or at least, not what people picture when they imagine a UX designer. A quick search of "ux designer stock photos" shows many different people, just not many [redacted]-year-old Asian males like me.

But what if I could use my personal perspective to challenge that?

Leveraging millions of users' unique perspectives can help mitigate bias quickly.

problem

How might we encourage Instagram users to report harmful content?

Why Instagram?

A diverse population of frequent users we can access.

Why reporting?

A quantifiable metric to measure user auditing.

understand

How do users currently interact with algorithmic bias?

I conducted think-aloud contextual interviews with 3 users to get some insight.

… They don't.

Reporting was intuitive, but users didn't even give harmful ads a second glance.

redefine

Let's reframe the problem to encourage Instagram users to care about content.

We realized that before asking users to audit content, we first need to make sure that they care about the content they are consuming.

dive

So… what do users care about?

Our team had a hunch that people interacted with content that they actually liked. But testing this required users' feeds to be populated with content they enjoyed.

Thus, we decided to conduct a week-long journal study that tasked 5 users with changing their current recommended content to their desired target content. I then followed up with a thirty-minute contextual interview about their journal data.

People interact with things they like,

Our suspicions were confirmed. All users were able to successfully change their feed content to their target content, but the bottom line was:

All users needed to pursue some sort of action to enact this change.

User Journal

Post-journal interviews also revealed that users would:

  1. not only spend more time on Instagram now that their feed was more tailored to their tastes,

  2. but also continue maintaining this level of interaction to preserve the content on their feed.

but the algorithm is still a black box.

users said…

Their feed "one day seemed to just change." It wasn't clear when the algorithm was actually changing to match their preferences.

It wasn't clear what actions influenced the algorithm the most. Users all pursued vastly different actions to achieve the targeted change in content.

Some irrelevant posts were still being recommended. Though they understood this was the nature of the algorithm, the users didn't care for this content.

ideate

Meet Andy,

who helped us go from 40 to 3 ideas,

My team started ideating broadly by conducting a Crazy 8s activity. Our persona helped constrain our ideas and narrow them down to a few plausible options.

3 ideas stood out to us in particular:

  1. Streamlining the “not interested” process. Marking a post as “not interested” is almost as important as liking it, and so the option should live alongside the other post interactions.

  2. Customization. Tailoring feed content to a user’s personal preferences, since our study found that users would continue maintaining higher levels of interaction to preserve the content on their feed.

  3. Pop-ups to increase algorithm transparency. These pop-ups would notify users of how the recommendation algorithm would change based on an action they are about to take.

before coming to our final solution: customization.

factors considered

Addressing user needs. Customization addressed problems mentioned by users, like confusing content shifts, in addition to our goal of diminishing information apathy.

Addressing business needs. Customization would also increase user engagement with the platform, making for a high financial incentive to implement the feature.

Medium-risk solution. Streamlining was too passive and safe an option, whereas pop-ups might be too invasive for users.

create

On paper, it works

Our team started with a rough paper prototype that contained only the bare minimum, because we only wanted to test whether the locations were intuitive and whether the features were sufficient.

Paper Prototype

note: this lo-fi prototype was created by my teammate, Zhichun.

Locations based on use cases:

Home Nav Bar

While users were browsing and exploring their feeds.

Settings Page

While users were adjusting other account preferences.

Each team member then used contextual inquiry to interview 3 users (for a total of 15 users), seeking honest signals by collecting users' thoughts using a post-interview System Usability Scale (SUS) Test and Satisfaction Survey.


Both surveys contained statements where users would express a level of agreement, from 0 to 5 (0 denoting strong disagreement, and 5 being strong agreement).

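For readers unfamiliar with how SUS responses become a single number: the standard published SUS uses ten items rated 1–5 and converts them to a 0–100 score. The sketch below shows that classic formula only as background; it is not our team's exact survey, which used a 0–5 agreement scale and a separate satisfaction questionnaire.

```python
def sus_score(responses):
    """Standard SUS scoring for 10 responses on a 1-5 scale, item 1 first.

    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 responses, each in 1..5")
    contributions = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return contributions * 2.5

# A respondent who strongly agrees with every positive item and strongly
# disagrees with every negative item reaches the maximum score:
print(sus_score([5, 1] * 5))  # 100.0
```

Neutral answers across the board (all 3s) land at 50.0, which is why SUS scores are usually read against a benchmark rather than as a percentage.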

consensus

Easy and valuable to use. 100% of users agreed that it was easy to use, and would continue using it while on Instagram.

Inconsistent. >25% of users found the feature inconsistent due to the high number of choices on the frequency slider.

With a few notable points to change

feedback → iteration

Decision paralysis → slider labels follow a Likert scale of three options: not frequent, same, very frequent.

Ambiguous end results → providing examples of posts that will be altered in frequency.

Centralizing existing customizations → adding a page that shows all current modified customizations.

reflect

Results

Would this feature influence you to report more or less often?

Next steps

There are other considerations that I want to think about when designing in the future.

Unintended Consequences

Doom-scrolling is already a problem, and we would be encouraging this.

Edge Cases

What if posts aren't tagged correctly? Or users don't add hashtags to their posts?

26.7% of users thought they would report content more often, but this is not a large share, and it is mainly speculation from users.

The more worrying number: 20% of users think they would actually report less.

further investigations

Identification rate of biased ads. Does this number change if users are familiar with the content they are consuming?

Frequency of use. How often would users use this feature when they are actually browsing or surfing media?

Adding other features. Would pairing this current idea with our previous ideas be more effective in encouraging auditing?

Learnings

Problems behind problems.

Thinking of "auditing algorithmic bias" immediately made me think of report rates. However, more research told me that this wasn't the actual problem to solve. Digging deeper revealed a set of other problems that we could solve in hopes of solving our initial problem.

Algorithmic experiences are hard to replicate.

I wanted to mock up the customization feature in Figma, but since each user’s experience on Instagram is personalized, we would have needed to know users’ preferences before building the prototype. I still need to find better ways to prototype algorithmic experiences.