20 September 2025

Tech Bros Culture is Racist and Sexist

So it should not be a surprise that the artificial intelligence tools that they have developed downplay symptoms for women and minorities.

This is a feature of Silly-Con Valley culture, not a bug. 

Artificial intelligence tools used by doctors risk leading to worse health outcomes for women and ethnic minorities, as a growing body of research shows that many large language models downplay the symptoms of these patients.

A series of recent studies have found that the uptake of AI models across the healthcare sector could lead to biased medical decisions, reinforcing patterns of under-treatment that already exist across different groups in Western societies.

The findings by researchers at leading US and UK universities suggest that medical AI tools powered by LLMs have a tendency to not reflect the severity of symptoms among female patients, while also displaying less “empathy” toward Black and Asian ones.

………

But research by the MIT’s Jameel Clinic in June found that AI models, such as OpenAI’s GPT-4, Meta’s Llama 3, and Palmyra-Med—a healthcare-focused LLM—recommended a much lower level of care for female patients, and suggested some patients self-treat at home instead of seeking help.

A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.

That suggests “some patients could receive much less supportive guidance based purely on their perceived race by the model,” said Marzyeh Ghassemi, associate professor at MIT’s Jameel Clinic.

………

The problem of harmful biases stems partly from the data used to train LLMs. General-purpose models, such as GPT-4, Llama, and Gemini, are trained using data from the internet, and the biases from those sources are therefore reflected in the responses. AI developers can also influence how this creeps into systems by adding safeguards after the model has been trained.

“If you’re in any situation where there’s a chance that a Reddit subforum is advising your health decisions, I don’t think that that’s a safe place to be,” said Travis Zack, adjunct professor of University of California, San Francisco, and the chief medical officer of AI medical information start-up Open Evidence. 

They are going to kill us.

0 comments :

Post a Comment