01 April 2024

Nice Lede

I’m fond of effective altruists. When you meet one, ask them how many people they’ve killed.
—Leif Wenar in Wired

That is a wonderful start to an article about the now-disgraced philosophy of Effective Altruism, whose most famous (now infamous) adherent was the incompetent bunco artist Samuel Bankman-Fried.

The philosophical foundation of EA begins with modern Utilitarianism, where right and wrong are defined (roughly; I am summarizing) by what produces the greatest good for the greatest number of people.

EA is not just that: it looks at people as human capital, and it decrees that people in the future are more valuable than people today, so an effective altruist is not merely justified but required to accumulate as much wealth as possible in order to save those future people, even though doing so might result in suffering and death today.

For anyone with even the vaguest grasp of finance, this is incoherent. Current capital is always (dollar for dollar) more valuable than future capital, because you can use capital you have today to do something that might create a benefit, while you cannot do anything with capital you will only have in the future.

This, along with risk pricing, is why you pay interest on loans.
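
To make that concrete, here is a minimal sketch of the standard present-value calculation in Python. The 5 percent annual discount rate is my own illustrative assumption, not a figure from finance scripture or from EA.

    # Time value of money: a dollar today is worth more than a dollar later,
    # because today's dollar can be put to work earning a return in the meantime.

    def present_value(future_amount: float, annual_rate: float, years: float) -> float:
        """Discount a future sum back to what it is worth today."""
        return future_amount / (1 + annual_rate) ** years

    rate = 0.05  # assumed 5% annual discount rate, purely for illustration

    print(present_value(100, rate, 10))   # ~61.39: $100 in 10 years is ~$61 today
    print(present_value(100, rate, 100))  # ~0.76: $100 in a century is pocket change

The further out the promised benefit, the less it is worth today. That is exactly the opposite of the weighting EA's longtermists assign to it.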

It is the John Kenneth Galbraith quote, "The modern conservative is engaged in one of man's oldest exercises in moral philosophy; that is, the search for a superior moral justification for selfishness," made manifest.

………

Before the fall of SBF, the philosophers who founded EA glowed in his glory. Then SBF’s crypto empire crumbled, and his EA employees turned witness against him. The philosopher-founders of EA scrambled to frame Bankman-Fried as a sinner who strayed from their faith.

Yet Sam Bankman-Fried is the perfect prophet of EA, the epitome of its moral bankruptcy. The EA saga is not just a modern fable of corruption by money and fame, told in exaflops of computing power. This is a stranger story of how some small-time philosophers captured some big-bet billionaires, who in turn captured the philosophers—and how the two groups spun themselves into an opulent vortex that has sucked up thousands of bright minds worldwide.

The real difference between the philosophers and SBF is that SBF is now facing accountability, which is what EA’s founders have always struggled to escape.

Ouch. Very well-deserved shade, but ouch.

………

The core of EA’s philosophy is a mental tool that venture capitalists use every day. The tool is “expected value” thinking, which you may remember from Economics 101. Say you’re trying to maximize returns on $100. You guess that some stock has a 60 percent chance of gaining $10 and a 40 percent chance of losing $10. Multiplying through, you find that this stock has an expected value of $102. You then repeat these calculations for the other stocks you could buy. The most rational gamble is to buy the stock with the highest expected value.
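
(For the record, the arithmetic there checks out. Here is the same calculation as a minimal sketch in Python; the stock figures are the article's own hypotheticals.)

    # The article's hypothetical bet: start with $100;
    # 60% chance the stock gains $10, 40% chance it loses $10.
    outcomes = [(0.60, 100 + 10), (0.40, 100 - 10)]

    # Expected value = sum of (probability x payoff) over all outcomes.
    expected_value = sum(prob * payoff for prob, payoff in outcomes)
    print(expected_value)  # 102.0 -- the $102 quoted above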

………

Expected value thinking can be applied to any choice, and it needn’t be selfish. You can aim at the most value-for-you, or the most value-for-the-universe—the method is the same. What EA pushes is expected value as a life hack for morality. Want to make the world better? GiveWell has done the calculations on how to rescue poor humans. A few clicks and you’re done: Move fast and save people. EA even set up a guidance counseling service that championed the earning-to-give strategy. EA showed math-talented students that getting really rich in finance would let them donate lots of money to EA. A huge income for yourself can SAVE HUNDREDS OF LIVES.


Expected value thinking can be useful in finance. But what if someone actually hacked their whole life with it? What if someone tried to calculate the expected value of every choice they made, every minute of every day?

That’s Sam Bankman-Fried. SBF was easy for EA to recruit because its life hack was already his mindset. As Michael Lewis writes, with every choice SBF thinks in terms of maximizing expected value. And his calculations are “rational” in being free from emotional connections to other people. EA encourages people to be more “rational” like this. SBF is a natural.

It should be noted that SBF was literally raised from birth to engage in this sort of absurd moral reductionism by his parents, both of whom are notably extreme adherents of utilitarianism. (They are also unindicted co-conspirators with their son, but that is beyond the scope of this analysis.)

………

When SBF’s frauds were first revealed, the EA philosophers tried mightily to put as much distance as they could between themselves and him. Yet young EAs should think carefully about how much their leaders’ reasoning shared the flaws of Sam Bankman-Fried’s.

With the philosophers too, good-for-others kept merging into good-for-me. SBF bought his team extravagant condos in the Bahamas; the EAs bought their crew a medieval English manor as well as a Czech castle. The EAs flew private and ate $500-a-head dinners, and MacAskill’s second book had a lavish promotion campaign that got him the cover story of Time magazine. For philosophers who thought themselves bringing such extraordinary goodness to the world, these personal benefits must have seemed easy to justify. Philosophy gives no immunity against self-deceit.

While SBF’s money was still coming in, EA greatly expanded its recruitment of college students. GiveWell’s Karnofsky moved to an EA philanthropy that gives out hundreds of millions of dollars a year and staffed up institutes with portentous names like Global Priorities and The Future of Humanity. Effective altruism started to synergize with adjacent subcultures, like the transhumanists (wannabe cyborgs) and the rationalists (think “Mensa with orgies”). EAs filled the board of one of the Big Tech companies (the one they later almost crashed). Suddenly, EA was everywhere. It was a thing.

This was also a paradoxical time for EA’s leadership. While their following was growing ever larger, their thinking was looking ever soggier.

………

At some point, the money took over. The philosophers invited the tech billionaires to dance. Then the billionaires started calling the tunes.

I suspect that the tech billionaires didn’t want to be heroes merely by saving individual lives. That’s just what firemen do; that’s just what Spider-Man does. The billionaires wanted to be heroes who saved the whole human race. That’s what Iron Man does. SBF was never really interested in bed nets. His all-in commitment was to a philosophy of creating maximum value for the universe, until it ends.

So that became the philosophers’ goal too. They started to emphasize a philosophy, longtermism, that holds the moral worth of unborn generations equal to the worth of living ones. And actually, unborn generations could be worth a lot more than we are today, given population growth. What’s the point in deworming a few hundred kids in Tanzania when you could pour that money into astronautical research instead and help millions of unborn souls to escape Earth and live joyfully among the stars?

Conveniently for both the billionaires and the philosophers, longtermism removed any risk of being proved wrong. No more annoyances like bad press for the charities you’ve recommended. Now they could brag that they were guarding the galaxy, with no fear of any back talk from the people of the 22nd century.

Once again, we see both Galbraith's "superior moral justification for selfishness" and the financial and temporal incoherence of EA.

I would argue that this is a feature of Effective Altruism, and not a bug.

By going all in on extreme longtermism, and by creating a moral justification for the mindless accumulation of excess wealth, they have built a perfect philosophy for the hyper-rich to extract even more resources from the rest of us.

………

In other words, we’re still in the land of precise guesses built on weak evidence, but now the stakes are higher and the numbers are distant probabilities. Longtermism lays bare that the EAs’ method is really a way to maximize on looking clever while minimizing on expertise and accountability. Even if the thing you gave a 57 percent chance of happening never happens, you can still claim you were right. These expected value pronouncements thus fit the most philosophically rigorous definition of bullsh%$.

(%$ mine)

EA is dangerous, incoherent garbage, but it allows people like Elon Musk to see themselves, and represent themselves, as heroes for hoarding money the way that the Collyer brothers hoarded ……… well ……… everything, and so it gets funding from rich hypocrites.

Simply put, it justifies psychopathic greed.
