Emily Pondaven

Emily Pondaven responds to the question: Is bias inevitable in the production of knowledge?

Object 1: A world map in my living room (photo taken by me)

This 2D world map (30×150”) in my living room, produced by Rand McNally in 2003, is based on the Gall stereographic projection. It has taught me about world geography at home since I was four years old. As a child I accepted the knowledge gained from the map as truth, but newer technology reveals the inaccuracies of this projection, so the creator’s biases were passed on to me.

This object enriches the exhibition by depicting how biases in how we view the world are inevitable due to the limits of technology. Despite worldwide usage of the Gall projection map, more advanced technology reveals its biases, including distorted country sizes, exposing widespread inherent misconceptions. For example, Africa is heavily distorted: in reality it is approximately 14.5 times bigger than Greenland and can fit the United States, India, and the majority of Europe (Taylor, 2015). These inaccuracies made me further question why, for instance, Europe is positioned at the centre of the map and Australia at the bottom. When considering these distortions and country positions, biases about a country’s importance are built into our unconscious minds as we grow up. As no flat map can be truly accurate, we must question whether even new maps pronounced as ‘real’ are perfectly unbiased.
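The distortion can even be quantified. Here is a minimal Python sketch, assuming the standard Gall stereographic equations x = Rλ/√2 and y = R(1 + √2/2)·tan(φ/2); it estimates how much more paper area a unit of land receives at Greenland’s latitude (about 72°N) than at the equator:

```python
import math

def gall_area_scale(lat_deg):
    """Local area inflation of the Gall stereographic projection,
    x = R*lon/sqrt(2), y = R*(1 + sqrt(2)/2)*tan(lat/2),
    relative to the true area on the globe."""
    lat = math.radians(lat_deg)
    k = (1 / math.sqrt(2)) / math.cos(lat)                     # scale along a parallel
    h = (1 + math.sqrt(2) / 2) * 0.5 / math.cos(lat / 2) ** 2  # scale along a meridian
    return k * h  # parallels and meridians are orthogonal, so area scale = k * h

# Greenland sits near 72 deg N; most of Africa straddles the equator.
print(gall_area_scale(72) / gall_area_scale(0))  # ~4.9
```

Under these assumptions, land near 72°N receives roughly five times the paper area of land at the equator, which goes a long way toward making a landmass about fourteen times smaller than Africa look far closer to it in size than it really is.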

It also raises the problem of the persistence of memory. Because the map shaped our unconscious biases, it is difficult to revise our mental map even when newer, more accurate maps are presented to us, because they do not agree with our pre-existing conceptions. Persistent unconscious cognitive biases are based on whichever knowledge reached our brains first, whether learnt in childhood or passed on through evolution, like our negativity bias. These unconscious biases, and our tendency to seek to confirm them, cause initially learnt knowledge to act as a powerful lens that distorts our consumption and production of new information, often bringing about a blissful ignorance of contradictory ideas. Therefore, the biases created and imprinted on our brains tend to persist and are inevitable.

Object 2: A photograph of two people who were arrested and their scores from the AI COMPAS recidivism algorithm, Image taken from (Angwin, Larson, Mattu, and Kirchner, 2016)

This photograph shows two arrested people and their scores from the COMPAS recidivism algorithm, a machine learning (ML) model used in the US justice system to predict reoffending based on a 137-question survey and defendants’ criminal records. Borden, an 18-year-old Black girl, was rated high-risk for future crimes after taking a child’s scooter but did not re-offend, while Prater, a white man with prior armed robbery convictions, was rated low-risk and did re-offend. The model thus perpetuated the racial biases of its creators and of the US justice system.

The object shows how ML models are influenced by their creators’ biases and instincts and by misrepresentative historical data. For example, although racial groups consume marijuana at roughly equal rates, Black Americans have historically been convicted of marijuana possession at higher rates (Chohlas-Wood, 2020). A model trained on these historical records would unfairly assign higher recidivism scores to Black Americans found in possession of marijuana. Due to our inherent authority bias towards machines, we tend to believe they are objective and overcome human biases. Instead, they can exacerbate their creators’ prejudices while allowing law-makers to shift responsibility onto algorithms.
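This feedback loop can be illustrated with a minimal, purely synthetic Python sketch (the groups, offence rates, and recording probabilities below are hypothetical, not COMPAS data): both groups offend at the same underlying rate, but because one group’s offences are recorded more often, a model trained on the records scores that group as far riskier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B (both hypothetical)
offended = rng.random(n) < 0.30     # true offence rate is identical for both groups

# Biased record-keeping: group B's offences are far more likely to end in a
# recorded conviction, mirroring the unequal marijuana convictions cited above.
caught = rng.random(n) < np.where(group == 1, 0.9, 0.3)
recorded = offended & caught

# Train on the *recorded* labels; group membership is the only feature.
model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
print(model.predict_proba([[0], [1]])[:, 1])  # risk scores ~0.09 vs ~0.27
```

The model is perfectly ‘accurate’ with respect to the biased records, which is precisely why its bias can pass as objectivity.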

It also demonstrates the inevitability of bias in theoretically ‘fair’ machine algorithms due to inherent biases and factors beyond our control. The COMPAS survey includes questions such as “Were your parents ever sent to jail?”, which appears fair on the basis of statistical correlations but actually perpetuates stereotypes and the racial hierarchy ingrained in the American justice system, as minorities may be more likely to be deemed high-risk. In theory, AI models could be made fair by diversifying the training data, for example by including all target groups in a facial recognition dataset, since the algorithms themselves hold no intent. However, biases still seep into ‘fair’ models through structural factors such as the racial disparities described above. Some sources of bias in a dataset are unknowable, and the definition of ‘fairness’ is itself subjective, depending on the creators and on society’s location and time period. Biases in machines and AI may be mitigated but are inevitable.
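The subjectivity of ‘fairness’ is not just rhetorical: common fairness criteria conflict mathematically. A score can be equally well calibrated for two groups yet yield very different false-positive rates when the groups’ base rates differ, which was the crux of the dispute over COMPAS (Angwin et al., 2016). A minimal sketch with synthetic, hypothetical groups:

```python
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(p_high_score, n=100_000):
    """One hypothetical group: a fraction p_high_score receives score 0.7,
    the rest score 0.2. Scores are perfectly calibrated for the group:
    the chance of reoffending equals the score."""
    score = np.where(rng.random(n) < p_high_score, 0.7, 0.2)
    reoffend = rng.random(n) < score
    flagged = score >= 0.5  # labelled "high-risk"
    return np.mean(flagged & ~reoffend) / np.mean(~reoffend)

# The groups differ only in how many members receive the high score.
print(f"group A FPR: {false_positive_rate(0.2):.0%}")  # ~9%
print(f"group B FPR: {false_positive_rate(0.6):.0%}")  # ~36%
```

Both groups receive identically calibrated scores, yet one group’s non-reoffenders are wrongly flagged roughly four times as often; deciding which of those two measures counts as ‘fair’ is a human judgement, not a mathematical one.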


Object 3: The cover of a 1967 Battleship board game, Image taken from (Grinberg, 2015)

The object is a sexist 1967 cover for the board game Battleship. It depicts a father and son playing the game while the mother and daughter wash dishes in the background. This advertisement plants unconscious gender biases in children’s minds from a young age by normalising traditional views of women and men.

This object contributes to the exhibition by depicting how gender stereotyping in daily life, such as in advertisements, develops and normalises unconscious gender biases from childhood, causing those biases to become inevitable. On this cover, for example, the woman’s role is to do the housework while smiling approvingly at the enjoyment of the father and son. Advertisements about science and war often target boys, while dolls or toy cooking sets target girls. By ingraining these gender biases into children’s unconscious minds, such advertising indoctrinates them into their social roles and influences the perspectives of future generations, since stereotypes are passed down whenever they are viewed as the “correct” gender roles. Therefore, the object exhibits how advertising and social media can perpetuate gender biases, making them inevitable.

Conversely, it also enriches this exhibition by illustrating that biases may not be inevitable: my own awareness of the gender stereotyping in this Battleship cover and other advertisements of the period suggests they can be avoided by learning about our gender biases and questioning the gender roles exacerbated by the media. However, the continued use of gender stereotypes in children’s advertising suggests their unavoidability. For example, as a child I would play with pink Barbies while my brother played with dark-themed Lego sets. Advertisements have normalised gender stereotyping to the point where gender prejudices are built into our unconscious minds, showing the inevitability of biases in daily life.


Works Cited

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. “Machine Bias.” ProPublica. Accessed February 1, 2022. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Chohlas-Wood, Alex. 2020. “Understanding Risk Assessment Instruments in Criminal Justice.” Brookings Institution. Accessed February 2, 2022. https://www.brookings.edu/research/understanding-risk-assessment-instruments-in-criminal-justice/.

Grinberg, Emanuella. 2015. “The Science Supporting Gender-Neutral Marketing.” CNN. Cable News Network. Accessed February 9, 2022. https://edition.cnn.com/2015/09/24/living/gender-neutral-toys-marketing-feat/index.html.

Taylor, Adam. 2015. “This Interactive Map Shows How ‘Wrong’ Other Maps Are.” The Washington Post. WP Company. Accessed February 10, 2022. https://www.washingtonpost.com/news/worldviews/wp/2015/08/18/this-interactive-map-shows-how-wrong-other-maps-are/.