ChatGPT is Bullsh%$—Ethics and Information Technology (%$# mine)
Normally, this would not be a particularly notable headline, but Ethics and Information Technology is a prestigious peer-reviewed journal.
One does not expect this in an academic publication.
Interestingly, there is a fairly substantive discussion as to whether ChatGPT produces "Hard" (deliberate) or "Soft" (negligent) bullsh%$.
Their conclusion is that it is both:
………
The structure of the paper is as follows: in the first section, we outline how ChatGPT and similar LLMs operate. Next, we consider the view that when they make factual errors, they are lying or hallucinating: that is, deliberately uttering falsehoods, or blamelessly uttering them on the basis of misleading input information. We argue that neither of these ways of thinking are accurate, insofar as both lying and hallucinating require some concern with the truth of their statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing. This, we suggest, is very close to at least one way that Frankfurt talks about bullsh%$. We draw a distinction between two sorts of bullsh%$, which we call ‘hard’ and ‘soft’ bullsh%$, where the former requires an active attempt to deceive the reader or listener as to the nature of the enterprise, and the latter only requires a lack of concern for truth. We argue that at minimum, the outputs of LLMs like ChatGPT are soft bullsh%$: bullsh%$–that is, speech or text produced without concern for its truth–that is produced without any intent to mislead the audience about the utterer’s attitude towards truth. We also suggest, more controversially, that ChatGPT may indeed produce hard bullsh%$: if we view it as having intentions (for example, in virtue of how it is designed), then the fact that it is designed to give the impression of concern for truth qualifies it as attempting to mislead the audience about its aims, goals, or agenda. So, with the caveat that the particular kind of bullsh%$ ChatGPT outputs is dependent on particular views of mind or meaning, we conclude that it is appropriate to talk about ChatGPT-generated text as bullsh%$, and flag up why it matters that – rather than thinking of its untrue claims as lies or hallucinations – we call bullsh%$ on ChatGPT.
(%$# mine)
There is no "there" there.