Computing Bias Homework

Completion of popcorn hacks and homework for the computing bias lesson

Popcorn Hack #1

One well-known example of a biased computer system is the facial recognition software used by law enforcement agencies. Studies have shown that these systems often have higher error rates when identifying people of color, especially Black individuals, compared to white individuals; NIST's 2019 Face Recognition Vendor Test, for example, found that many algorithms produced substantially higher false positive rates for Black and Asian faces than for white faces.

Type of Bias:

This is an example of Pre-existing Social Bias. The bias stems from inequalities already present in society, which are then reflected in the training data used to build the facial recognition models. If the datasets are not diverse and are skewed toward lighter-skinned individuals, the system learns to perform better on those groups and worse on others.

Way to Reduce or Fix the Bias:

One way to reduce this bias is to improve the diversity and representativeness of the training data. By including more images of people from different racial, ethnic, and gender backgrounds, the system can be trained to perform more fairly across all demographics. Additionally, regular bias audits and transparency about how these systems are used can help ensure accountability.
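
One concrete form such an audit can take is measuring the system's accuracy separately for each demographic group and comparing the results. Below is a minimal sketch in Python; the record format, group labels, and sample numbers are all hypothetical, and a real audit would use a large, carefully labeled evaluation set.

```python
# Minimal per-group accuracy audit (hypothetical data and labels).
# Each evaluation record is (demographic_group, predicted_id, true_id).
from collections import defaultdict

def audit_by_group(records):
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Accuracy per group; a large gap between groups signals bias.
    return {group: correct[group] / total[group] for group in total}

# Toy evaluation set: the system is right 2/2 times on group_a
# but only 1/2 times on group_b.
eval_set = [
    ("group_a", "id_1", "id_1"),
    ("group_a", "id_2", "id_2"),
    ("group_b", "id_3", "id_9"),
    ("group_b", "id_4", "id_4"),
]
print(audit_by_group(eval_set))  # {'group_a': 1.0, 'group_b': 0.5}
```

An audit like this turns a vague suspicion of bias into a measurable disparity, which is the precondition for fixing the training data and for holding vendors accountable.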

Popcorn Hack #2

Two ways to mitigate the bias in the AI loan approval system:

Use Fair and Representative Training Data: The system should be retrained using a dataset that is balanced across gender and other demographic factors. This helps prevent the AI from learning and reinforcing historical biases present in the original data.

Implement Bias Detection and Fairness Audits: Regularly test the AI model for biased outcomes using fairness metrics. If bias is detected, adjustments can be made, such as tweaking the algorithm or adjusting decision thresholds, so that all applicants are evaluated fairly (see the sketch after this list).
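
As a rough illustration of the second mitigation, the sketch below computes a simple fairness metric, the gap in approval rates between groups (sometimes called the demographic parity gap), and shows how adjusting a decision threshold can close it. The applicant data, score values, and thresholds are all hypothetical.

```python
# Hypothetical fairness audit for a loan model that outputs a score
# in [0, 1]; an applicant is approved if score >= their group's threshold.

def approval_rate(applicants, threshold):
    approved = sum(1 for a in applicants if a["score"] >= threshold)
    return approved / len(applicants)

def parity_gap(applicants, thresholds):
    """Approval rate per group, plus the max-min gap (0 means parity)."""
    groups = {}
    for a in applicants:
        groups.setdefault(a["gender"], []).append(a)
    rates = {g: approval_rate(members, thresholds[g])
             for g, members in groups.items()}
    return rates, max(rates.values()) - min(rates.values())

applicants = [
    {"gender": "female", "score": 0.62},
    {"gender": "female", "score": 0.55},
    {"gender": "male",   "score": 0.70},
    {"gender": "male",   "score": 0.65},
]

# Audit with one shared threshold: a 50-point gap in approval rates.
print(parity_gap(applicants, {"female": 0.60, "male": 0.60}))
# ({'female': 0.5, 'male': 1.0}, 0.5)

# Adjust the threshold for the disadvantaged group to close the gap.
print(parity_gap(applicants, {"female": 0.55, "male": 0.60}))
# ({'female': 1.0, 'male': 1.0}, 0.0)
```

Group-specific thresholds are only one possible correction, and whether they are appropriate depends on the legal and policy context; where possible, the first mitigation (fixing the training data itself) is preferable to post hoc threshold tweaks.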

Homework Hack

System: Instagram – a social media app many people use daily to share photos and videos, and to discover content through the Explore page and algorithm-driven feed.

Identified Bias and Explanation: Instagram’s algorithm tends to promote certain types of content more than others, often favoring conventionally attractive people, specific body types, or lifestyle content that aligns with popular trends. This can reflect a Pre-existing Social Bias, since the algorithm learns from user engagement patterns that are shaped by societal preferences and stereotypes.

For example, posts from marginalized communities or those that don’t fit mainstream beauty standards may receive less visibility—not because the content is lower quality, but because of how the algorithm prioritizes engagement and past patterns.

Way to Reduce or Fix the Bias: Instagram could implement algorithmic fairness adjustments by tweaking the recommendation algorithm to actively promote a more diverse range of creators and content (one possible approach is sketched below). They could also allow users to customize their feed preferences more transparently, giving people more control over what they see rather than relying solely on automated ranking.
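
One way such an adjustment could work is a quota-based re-ranking pass over the engagement-sorted feed, guaranteeing underrepresented creators a minimum share of the top slots. This sketch is purely illustrative: the `underrepresented` flag, the quota value, and the post format are assumptions, not Instagram's actual system.

```python
# Hypothetical diversity-aware re-ranking of an engagement-sorted feed.
# `posts` is already ordered by predicted engagement, best first.

def rerank_with_quota(posts, quota=0.3):
    """Interleave posts so that at every prefix of the feed, at least
    `quota` of the slots go to creators flagged as underrepresented."""
    majority = [p for p in posts if not p["underrepresented"]]
    minority = [p for p in posts if p["underrepresented"]]
    feed = []
    while majority or minority:
        shown = sum(1 for p in feed if p["underrepresented"])
        # Pull from the underrepresented pool whenever the running
        # share would otherwise fall below the quota.
        want_minority = minority and shown < quota * (len(feed) + 1)
        if want_minority or not majority:
            feed.append(minority.pop(0))
        else:
            feed.append(majority.pop(0))
    return feed

ranked = [
    {"id": 1, "underrepresented": False},
    {"id": 2, "underrepresented": False},
    {"id": 3, "underrepresented": True},
    {"id": 4, "underrepresented": False},
    {"id": 5, "underrepresented": True},
]
print([p["id"] for p in rerank_with_quota(ranked)])  # [3, 1, 2, 5, 4]
```

A design note: running the fairness pass after the existing engagement ranker, rather than inside it, leaves the ranker untouched, so the quota can be tuned or rolled back independently of the rest of the system.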