19 July 2022

This is a Feature, Not a Bug

Yet another story about how AIs become racist as they are trained.

Call me a cynic, but I do not believe that this is an accident.

When one looks at algorithms, like the ones Facebook used to let advertisers exclude minorities from seeing employment and real estate ads, it is impossible not to attribute this to malice.

These mechanisms, whether AI or more generic algorithms, are racist by design.

Pandering to racists is good business:

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Those virtual robots, which were programmed with a popular artificial intelligence algorithm, were sorting through billions of images and associated captions to respond to that question and others, and may represent the first empirical evidence that robots can be sexist and racist, according to researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

I may be donning a tinfoil hat here, but I do not think it is accidental: "Tell Mike it was only business. I always liked him."
