It turns out that, as a result of libel suits and complaints, OpenAI has been manually hard-coding names into ChatGPT.
The company cannot get the program to stop spewing falsehoods, so it shuts the conversation down whenever certain names come up:
OpenAI's ChatGPT is more than just an AI language model with a fancy interface. It's a system consisting of a stack of AI models and content filters that make sure its outputs don't embarrass OpenAI or get the company into legal trouble when its bot occasionally makes up potentially harmful facts about people.
Recently, that reality made the news when people discovered that the name "David Mayer" breaks ChatGPT. 404 Media also discovered that the names "Jonathan Zittrain" and "Jonathan Turley" caused ChatGPT to cut conversations short. And we know another name, likely the first, that started the practice last year: Brian Hood. More on that below.
The chat-breaking behavior occurs consistently when users mention these names in any context, and it results from a hard-coded filter that puts the brakes on the AI model's output before returning it to the user.
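To make the mechanism concrete, here is a minimal sketch of how such a hard-coded filter could work, assuming a simple substring blocklist applied to streamed model output. The blocklist contents, function names, and refusal message are hypothetical; OpenAI has not published its actual implementation.

```python
# Hypothetical sketch of a hard-coded name filter on streamed output.
# The names, message, and structure are assumptions for illustration only.

BLOCKED_NAMES = {"brian hood", "jonathan turley", "jonathan zittrain", "david mayer"}

def stream_with_filter(token_stream):
    """Yield model tokens, halting the stream if a blocked name appears."""
    buffer = ""
    for token in token_stream:
        buffer += token
        lowered = buffer.lower()
        if any(name in lowered for name in BLOCKED_NAMES):
            # Cut the conversation short, as users observed ChatGPT doing.
            yield "\nI'm unable to produce a response."
            return
        yield token

# Example usage with a fake token stream:
if __name__ == "__main__":
    fake_tokens = ["Tell", " me", " about", " David", " Mayer", "."]
    print("".join(stream_with_filter(iter(fake_tokens))))
```

Note that a post-hoc filter like this can only stop the output mid-stream once the name has been partially emitted, which would explain why users saw responses cut off abruptly rather than a refusal up front.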
OpenAI did not respond to our request for comment about the names, but we know when the practice likely originated, and the other names were probably filtered for the same reason: complaints about ChatGPT's tendency to confabulate erroneous responses when it lacks sufficient information about a person.
We first discovered that ChatGPT choked on the name "Brian Hood" in mid-2023 while writing about his threatened defamation lawsuit. The Australian mayor threatened to sue OpenAI after discovering that ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
The case was ultimately resolved in April 2023, when OpenAI agreed to filter out the false statements within the 28-day deadline Hood had set. That is possibly when the first hard-coded ChatGPT name filter appeared.
What OpenAI is doing here is not just an admission of error; it is an admission that the company cannot make the system work properly.
If the system were truly intelligent, it could be made to understand the underlying facts. It cannot, and it never will be able to.