17 August 2024

Hoocoodanode?

With all of the studies showing that large language model artificial intelligence programs (LLMs) are doing little more than bullsh%$ting, and not bullsh%$ting particularly well, a study from the University of Bath indicates that LLMs cannot learn independently, and so do not pose an existential threat to society.

I am not sure about that whole "no existential threat" bit.  LLMs are being applied to things like customer service, and while this might not destroy the world, it will render any sort of responsibility by consumer-facing organizations a myth, and that destroys quite a lot: 

AI Lacks Independent Learning, Poses No Existential Threat

Summary: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention.

Key facts:

  • LLMs are unable to master new skills without explicit instruction.
  • The study finds no evidence of emergent complex reasoning in LLMs.
  • Concerns should focus on AI misuse rather than existential threats.

Translated into English, it basically means that it's all a humbug, being pushed by snollygosters, much like cryptocurrency and the Dotcom boom at the turn of the century.

The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at proficiency in language; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

………

With growth, these models are likely to generate more sophisticated language and become better at following explicit and detailed prompts, but they are highly unlikely to gain complex reasoning skills.

………

As an illustration, LLMs can answer questions about social situations without ever having been explicitly trained or programmed to do so. While previous research suggested this was a product of models ‘knowing’ about social situations, the researchers showed that it was in fact the result of models using a well-known ability of LLMs to complete tasks based on a few examples presented to them, known as ‘in-context learning’ (ICL).

Through thousands of experiments, the team demonstrated that a combination of LLMs' ability to follow instructions (ICL), memory, and linguistic proficiency can account for both the capabilities and limitations exhibited by LLMs.
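
For those who don't speak the jargon, "in-context learning" is nothing more exotic than stuffing a few worked examples into the prompt and letting the model pattern-match its way to an answer.  No new skill is acquired, and nothing is stored.  Here is a minimal sketch in Python of what the paper is describing (the sentiment-labeling task is my own illustration, not taken from the study):

    # In-context learning (ICL) sketch: a few worked examples go into the
    # prompt itself, and the model just completes the pattern. Nothing is
    # "learned"; the model's weights never change.
    examples = [
        ("The service was quick and friendly.", "positive"),
        ("My order arrived broken and late.", "negative"),
    ]
    query = "The staff ignored me for twenty minutes."

    # Build the few-shot prompt: demonstrations first, then the new case.
    parts = ["Label the sentiment of each review."]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}")
    parts.append(f"Review: {query}\nSentiment:")
    prompt = "\n\n".join(parts)

    print(prompt)  # this string, as-is, is what gets sent to the LLM

Strip the examples out of that prompt and the model has nothing to go on, which is the Bath team's whole point.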

You know, an aggressive program of pursuing these mooks for fraud, in the same way that prosecutors pursued the bunco artists at Theranos, seems to me to be an increasingly good option.

Also, making sure that the creators of AI are civilly and criminally liable for the harms they cause (consider, for example, the use of an LLM at a 911 response center) would go a long way toward taking some air out of this bubble.
