12 October 2021

Good Point

Award-winning science fiction writer Ted Chiang has an interesting perspective on the threat that many folks in tech think is posed by the development of artificial intelligence.

Specifically, he notes that the behavior of a rogue murderous AI is indistinguishable from that of a Silicon Valley startup.

When one looks at the characteristics of the T-800 from The Terminator, they do indeed match those of a Silicon Valley startup:

This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

[…]

Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

We already have soulless automata that will pursue their twisted agendas with no regard for human life, and they are called corporations.

I think that they might be a more immediate threat.

2 comments:

Tim Boudreau said...

That’s not exclusive to Silicon Valley startups. Cf. eBay v. Newmark: shareholder value theory is the law of the land. Any business FAILING to behave in exactly this way is making itself liable to a lawsuit from shareholders, who will happily replace the management with one without a pesky moral compass.

Silicon Valley startups are just slightly better at this game than average.

Stephen Montsaroff said...

If only all these folks would follow the Three Laws of Robotics.
