Coded Bias examines the biases present in machine learning algorithms. While working on a project at the MIT Media Lab, computer scientist Joy Buolamwini discovers that facial-recognition software is unable to identify her face as an African-American woman unless she puts on a white mask. Upon further examination, Buolamwini learns that the software's training dataset consisted primarily of white males. This discovery kicks off a larger discussion about the unconscious biases of people, which end up embedded in the technology they build.
Coded Bias is a documentary by Shalini Kantayya (Catching the Sun) that acts as a cautionary tale about the flaws in machine learning algorithms caused by biased data. In addition to its main subject, Joy Buolamwini, who goes on to found the Algorithmic Justice League to raise awareness about data bias, the film also follows Silkie Carlo, who leads a UK group called Big Brother Watch that is concerned by police use of highly inaccurate facial-recognition systems. Then there is China, which heavily uses facial recognition for its social credit system.
While the concerns raised in Coded Bias are nothing new, the film does raise some warning flags about the future use of algorithmic technology. The situation in China is presented as the worst possible outcome, with facial-recognition technology deployed as a form of mass surveillance. With technology companies having more access to user data than ever before, Coded Bias asks how that data should be used and whether machine learning algorithms should be subject to federal regulation.