Why can't neural networks ever learn sine waves?
— henry (@arithmoquine) May 18, 2024
So, for context, Stephen Wolfram wrote "Can AI Solve Science?" a few months ago arguing that neural networks can't generalize beyond their own training due to the constraints of architecture.
A particularly interesting example… pic.twitter.com/67VRMWAD8c
The original Ecch (Tweet) thread is quite long.
The short version is that current artificial intelligence cannot generalize the most basic periodic function out there: the sine wave is impossible for AI in general, and for LLMs and neural networks in particular, to learn in the manner that humans do, because outside its training range the network's output simply stops oscillating.
I'm not sure how much this matters for what AI will actually be assigned to do, derivative crap art and spam robots, but it does appear to me to be a major issue for the business.
If AI cannot "learn" a sine wave, something that even the most mathematically illiterate person out there can recognize and repeat, it means that AI is not learning.
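The failure described above is easy to reproduce. The sketch below uses my own illustrative setup, not Wolfram's code: a one-hidden-layer tanh network is fit to sin(x) on [-π, π], then evaluated two periods to the right. For reproducibility, only the output weights are fit (a random-features ridge fit with an arbitrary 1e-6 regularizer); a fully gradient-trained network of this shape fails the same way, because the tanh units saturate outside the data and the output goes flat instead of continuing to oscillate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden layer: 64 random tanh features of a scalar input
H = 64
W1 = rng.normal(size=(1, H))
b1 = rng.normal(size=H)

def features(x):
    return np.tanh(x.reshape(-1, 1) @ W1 + b1)

# Fit the output weights on sine sampled inside [-pi, pi]
x_train = np.linspace(-np.pi, np.pi, 200)
y_train = np.sin(x_train)
A = features(x_train)
# Ridge-regularized least squares for the output layer
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(H), A.T @ y_train)

def predict(x):
    return features(x) @ w

# Inside the training interval the fit is excellent...
x_in = np.linspace(-np.pi, np.pi, 100)
rmse_in = np.sqrt(np.mean((predict(x_in) - np.sin(x_in)) ** 2))

# ...but on [2*pi, 4*pi] the saturated units are nearly constant,
# so the network cannot keep oscillating while sine does
x_out = np.linspace(2 * np.pi, 4 * np.pi, 100)
rmse_out = np.sqrt(np.mean((predict(x_out) - np.sin(x_out)) ** 2))

print(f"in-range RMSE:      {rmse_in:.4f}")
print(f"extrapolation RMSE: {rmse_out:.4f}")
```

Swapping tanh for ReLU does not help: a ReLU network is piecewise linear, so outside the training interval it extrapolates as a straight line, which is equally incapable of oscillating.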
If AI is not learning, then what it does with content covered by IP protection is not learning either, but rather just repeating slightly modified output in an attempt to evade the exclusive licenses that are the defining feature of intellectual property.
The short version of this is that most, if not all, of the activity in AI is less an exercise in developing a new technology than an attempt by early funders of these programs to sell out before the general lack of utility becomes apparent.
An even shorter version, to quote the great John Kenneth Galbraith, is that this is "the bezzle," the interval between when a scam is initiated and when the victims recognize it.