Can AI Language Be Prone to Racial and Gender Biases, Too?

One of the more idealistic promises of a world with AI in it is that we'll finally be free of petty human prejudices. With the right algorithms, the thinking goes, opportunities would be equal regardless of gender or race. It wouldn't matter what your sexual orientation is or where you hail from: jobs would be merit-based, and situations like race-driven arrests would substantially decrease. But is this really the case? Is AI truly stereotype-free? Read on to find out!


We’re no Cinderella

First of all, experts say, it's important not to approach AI as some beyond-reproach, problem-solving entity here to save humanity from itself. In reality, AI is an extension of our own culture, and that means it can come with a fair few stereotyping issues of its own.

When testing inherent biases in humans, scientists use a test referred to as the implicit association test, or IAT. In this experiment, words flash on a screen and subconscious stereotypes are inferred from how quickly the person looking at the screen reacts to them. IAT studies have found, for example, that both black and white Americans tend to react more positively to names like 'Courtney', associating them with words like 'happy', while a name like 'Leroy' is more likely to trigger a negative reaction and an association with words like 'hatred'.

IAT, but for machines

Scientists were keen to attempt a test similar to the IAT with AI, to see whether inherent biases really do disappear when it comes to machines. For this purpose, the word embedding association test (WEAT) was created. Computers represent words as word embeddings, which are essentially the AI's way of defining a word by the contexts it appears in. For example, 'cotton' and 'fabric' would be embedded closely to a word like 'fashion' but not to 'nature', because they are more likely to be used in context with the former than the latter. Researchers analysed hundreds of billions of words of text to see which ones ended up with similar embeddings in a computer's system.
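To make the idea concrete, here is a minimal sketch of the WEAT effect-size calculation in Python. The made-up 3-dimensional toy vectors stand in for real embeddings (which are learned from those hundreds of billions of words), and the function names are our own, not the researchers':

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how close two embedding vectors point."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """How much closer word w sits to attribute set A than to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: positive values mean target set X leans
    toward attributes A while target set Y leans toward B."""
    x_scores = [association(x, A, B) for x in X]
    y_scores = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_scores + y_scores, ddof=1)
    return (np.mean(x_scores) - np.mean(y_scores)) / pooled_std

# Toy vectors standing in for real embeddings (illustration only).
X = [np.array([0.9, 0.2, 0.1])]   # target words, e.g. one set of names
Y = [np.array([0.1, 0.9, 0.2])]   # target words, e.g. another set of names
A = [np.array([1.0, 0.1, 0.0])]   # pleasant attributes, e.g. 'happy'
B = [np.array([0.0, 1.0, 0.1])]   # unpleasant attributes, e.g. 'hatred'

print(weat_effect_size(X, Y, A, B))  # positive: X associates with A
```

The key design mirror with the IAT is deliberate: where the IAT measures reaction times, WEAT measures cosine distances between embeddings, so the same "which words feel closer together" question can be put to a machine.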


The bias is real

The results of using WEAT with AI revealed something very interesting: computers, too, can pick up inherent biases attached to words. Racial stereotyping was evident when researchers discovered that a name like 'Alison' is embedded more closely to terms like 'laughter' in a computer's system, while a name like 'Shaniqua' is more closely associated with words like 'failure'.

And the biases didn't stop there. Jobs like hygienist and librarian were more closely embedded with words like 'female' and 'woman', because the algorithm absorbs how often those jobs are written about in connection with women rather than men, a pattern that in turn mirrors who actually holds them. While it's fascinating to see a machine do this, it's also a disconcerting look at the firm hold biases and stereotypes still have on our modern society.
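You can probe these occupation associations yourself. Here is a rough sketch using the gensim library and its downloadable pretrained GloVe vectors; the word list is our own choice, the exact scores will vary by model, and this is an illustration rather than the researchers' original setup:

```python
# Rough probe of occupation-gender associations in pretrained embeddings.
# Assumes gensim is installed; the model downloads (roughly 130 MB) on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

for job in ["librarian", "hygienist", "engineer", "carpenter"]:
    to_woman = float(vectors.similarity(job, "woman"))
    to_man = float(vectors.similarity(job, "man"))
    print(f"{job}: woman={to_woman:.3f} man={to_man:.3f}")
```

If the pattern described above holds, 'librarian' and 'hygienist' should sit measurably closer to 'woman' than 'engineer' or 'carpenter' do.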


Stereotyped language

So what exactly does this mean? If we can't even get computers to give up biases, does that indicate we'll never live in a stereotype-free world? Unfortunately, the idea of a world completely free of biases is pretty far-fetched. It's human nature to create stereotypes, and even if the current ones fall by the wayside, we're sure to develop new ones to take their place. What the results of WEAT and other tests on AI biases do point out is our propensity to convey bias through the language we use.

Think about it: when someone says the word 'nurse', are you more likely to picture a man or a woman? Don't feel bad if it's the latter; that's a pretty common reaction, experts say. Even Google is guilty of the same thing. When gender-neutral pronouns from several different languages are put through Google's translation software, it tends to choose 'he' when talking about a doctor and 'she' when talking about a nurse.

At the end of the day, we've come a long way towards creating a more equal world, but if the biases present in AI teach us anything, it's that we still have an awfully long way to go. Gradually, we'll see more women filling jobs previously associated with men, and more people from varied backgrounds achieving success and breaking racial stereotypes. But ending the prejudice starts with us, not with the computers!

Do you think we’ll ever reach a point where AI will be free of race and gender biases? Why do you think these stereotypes still persist?