It turns out that OpenAI has a tool to detect the content that its various large language models generate, but they have refused to release it.
This is not a surprise. If they released this tool, people could no longer use ChatGPT to deceive other people, and without deception, there is no product:
OpenAI has a method to reliably detect when someone uses ChatGPT to write an essay or research paper. The company hasn’t released it despite widespread concerns about students using artificial intelligence to cheat.
The project has been mired in internal debate at OpenAI for roughly two years and has been ready to be released for about a year, according to people familiar with the matter and internal documents viewed by The Wall Street Journal. “It’s just a matter of pressing a button,” one of the people said.
In trying to decide what to do, OpenAI employees have wavered between the startup’s stated commitment to transparency and their desire to attract and retain users. One survey the company conducted of loyal ChatGPT users found nearly a third would be turned off by the anticheating technology.
An OpenAI spokeswoman said the company is concerned the tool could disproportionately affect groups such as non-native English speakers. “The text watermarking method we’re developing is technically promising but has important risks we’re weighing while we research alternatives,” she said. “We believe the deliberate approach we’ve taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.”
Yeah, they are using DEI as an alibi.
Oh, those poor non-English speakers.
Bullsh%$. The only value that ChatGPT offers IS cheating. Of course they won't release the tool.
………
ChatGPT is powered by an AI system that predicts what word or word fragment, known as a token, should come next in a sentence. The anticheating tool under discussion at OpenAI would slightly change how the tokens are selected. Those changes would leave a pattern called a watermark.
The watermarks would be unnoticeable to the human eye but could be found with OpenAI’s detection technology. The detector provides a score of how likely the entire document or a portion of it was written by ChatGPT. The watermarks are 99.9% effective when enough new text is created by ChatGPT, according to the internal documents.
“It is more likely that the sun evaporates tomorrow than this term paper wasn’t watermarked,” said John Thickstun, a Stanford researcher who is part of a team that has developed a similar watermarking method for AI text.
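OpenAI has not published how its watermark works, but the description above (biasing which tokens get selected, then checking the text for that statistical pattern) matches published academic schemes like the "green list" method from Thickstun's field of research. Here is a toy sketch of that general idea, assuming a made-up vocabulary and a hash-seeded partition; none of this is OpenAI's actual implementation:

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set (an assumption).
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step


def green_list(prev_token):
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    Anyone who knows the hashing scheme can recompute this partition later,
    which is what makes detection possible -- and why handing the detector
    to too many people could let bad actors strip or spoof the watermark.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])


def watermarked_sample(prev_token, candidates, rng):
    """Slightly change token selection: prefer 'green' tokens when available."""
    greens = [t for t in candidates if t in green_list(prev_token)]
    return rng.choice(greens) if greens else rng.choice(candidates)


def detect(tokens):
    """Score a document: fraction of tokens landing in their predecessor's
    green list. Unwatermarked text hovers near GREEN_FRACTION; watermarked
    text scores far higher, and with enough tokens the gap is overwhelming --
    which is the statistical sense behind the 99.9% claim above."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

A quick comparison shows the gap: a 100-token watermarked sequence scores near 1.0 on `detect`, while ordinary random text scores near 0.5, so even short documents separate cleanly.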
………
There is broad agreement within the company that determining who can use this detector would be a challenge. If too few people have it, the tool wouldn’t be useful. If too many get access, bad actors might decipher the company’s watermarking technique.
Again, bullsh%$. Just like the prior tech-bro obsession, cryptocurrency, the only application is corruption, because all that LLM AI is good for is bullsh%$ting.
One does hope that this bubble pops sooner rather than later, because the longer it goes on, the more the rest of us will have to pay to bail out the Silly-Con Valley bunco artists.