19 July 2024

Headline of the Day

ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting!
Scientific American

Yes, I know I did not Bowdlerize the swear word, even though it's not January, but it is too good not to quote completely, particularly as a headline for Scientific American.

It's also accurate. The failures of LLM artificial intelligence are not some sort of bizarre artifact; they are the result of a deliberate decision by the makers of ChatGPT and its competitors to bullsh%$ in the hope of creating the illusion of actual useful intelligence:

Right now artificial intelligence is everywhere. When you write a document, you’ll probably be asked whether you need your “AI assistant.” Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you’re probably familiar with a certain problem—it makes stuff up, causing people to view things it says with suspicion.

It has become common to describe these errors as “hallucinations.” But talking about ChatGPT this way is misleading and potentially damaging. Instead call it bullshit.

We don’t say this lightly. Among philosophers, “bullshit” has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.

Frankfurt's essay was On Bullsh%$, and David Graeber's Bullsh%$ Jobs further explored the concept.

………

This isn’t rare or anomalous. To understand why, it’s worth thinking a bit about how these programs work. OpenAI’s ChatGPT, Google’s Gemini chatbot and Meta’s Llama all work in structurally similar ways. At their core is an LLM—a large language model. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of enormous amounts of text (its “training data”). In ChatGPT’s case, the initial training data included billions of pages of text from the Internet.

From those training data, the LLM predicts, from some text fragment or prompt, what should come next. It will arrive at a list of the most likely words (technically, linguistic tokens) to come next, then select one of the leading candidates. Allowing for it not to choose the most likely word each time allows for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the “temperature.” Later in the process, human trainers refine predictions by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as ChatGPT saying racist things), but this token-by-token prediction is the idea that underlies all of this technology.
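An aside from your humble blogger: to make that "temperature" business concrete, here is a rough sketch of token-by-token sampling with a temperature knob. This is my own illustration, not anything from the article or from any vendor's actual code, and the candidate words and scores are made up:

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Pick the next token from a dict of {token: raw score}.

    Lower temperature concentrates probability on the top candidates
    (more predictable text); higher temperature flattens the
    distribution (more "creative," and more likely to wander).
    """
    tokens = list(scores.keys())
    # Scale the raw scores by temperature, then softmax into probabilities.
    scaled = [scores[t] / temperature for t in tokens]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token according to those probabilities.
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for the word after "The cat sat on the".
candidate_scores = {"mat": 5.0, "sofa": 4.2, "roof": 3.8, "moon": 1.0}

print(sample_next_token(candidate_scores, temperature=0.2))  # almost always "mat"
print(sample_next_token(candidate_scores, temperature=1.5))  # sometimes "roof" or even "moon"
```

Notice that nothing in that loop ever asks whether "mat" is true; it only asks whether it is likely. Which is exactly the authors' point.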

Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think that the outputs are connected to any sort of internal representation at all. A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true, which is why we strongly doubt an LLM really understands what it says.

It's bullsh%$, and they know that it's bullsh%$, but these snollygosters also know that they can exploit the AI mania before it all collapses like a bunch of broccoli.

If the US government were to actually prosecute tech bro fraud, we'd see 80% of the giants of Silly-Con valley in the dock.
