A new story from Crypto Critics' Corner


And Lifeboat was around before EA existed, but it shares many of the same philosophies, and crypto people were super into that as well. So a certain brand of person, a fear-mongering individual, seems to be quite attracted to these. That's not necessarily to suggest, as you said, that it was founded with those principles top of mind. I do want to talk about, though, how much this affected SBF and the way he did business. In case you're unaware, SBF and Tara Mac Aulay, who co-founded Alameda Research, are both effective altruists. And so was Caroline Ellison, or she sort of was. Sam Trabucco too; a bunch of the people involved in this were effective altruists, and they took it very, very seriously. There's a document shown in one of these lawsuits where Sam Bankman-Fried made a list of the harm he was doing versus the good he was doing, or the future good he could do, and how he was trying to weigh those concerns, and weigh them monetarily. It wasn't just some weird system he invented; he was actually weighing every dollar and cent as a good or bad thing. To me, that is proof positive that this philosophy played a key role in how he ended up committing fraud. Yes. But I think you should also detail how effective altruists were very specifically involved right at the founding of Alameda Research, because that points at some of these issues. Tara and some of the other people they were able to connect Sam with brought the money that got Alameda Research to where it was, right? Yeah. No, I mean, that's it exactly. And the EA community in general, a lot of them were the cheerleaders for SBF, right? The benefits they were receiving were millions of dollars in donations, and ultimately that is how they measure whether their goal is succeeding or not.
So again, the reason I want to talk about this is what came after acknowledging it. If you go back, William MacAskill, and sorry, who's the other gentleman involved at the founding of EA? Well, there are several people who come before MacAskill. I would point towards Toby Ord at Oxford and Peter Singer, both effective altruists and stronger philosophers and writers than MacAskill. But MacAskill's particular place of prominence in this story comes from the fact that, in the United States, he acted as what friend of the show David Z. Morris called an effective altruism power broker: he was connected to so many of these groups, funders, and organizations that even in 2018, when effective altruism leaders were warned about some of Sam Bankman-Fried's improprieties, including his lies and sleeping with subordinates, it was MacAskill who pushed to keep Sam Bankman-Fried centered in the effective altruism community, under the belief that Sam Bankman-Fried would eventually be able to bring all of these other benefits to their broader community, and that it was worth sacrificing some of these principles in order to achieve that bigger goal. And that, I think, is the rot at the heart of effective altruism, right? The reason it can be really hard to create a coherent utilitarian moral philosophy is that it becomes really easy to use the ends to justify the means, in a way that allows the means to just keep getting worse and worse. And that is compounded if you effectively place an infinity on one side of your balancing scale by saying, no, this will kill all humans who will ever live in the future, so that it would be okay to kill 90% of humans now if it meant saving the entirety of humanity going forward, right?
It's possible to concoct that kind of justification when you get really into the long-termist EA stances, and that has obvious harms. Well, it's funny that you say this, because, just to be clear to everyone, MacAskill did address this right around when FTX collapsed back in November. And what he specifically said speaks to exactly what you're suggesting. He says: "For years, the EA community has emphasized the importance of integrity, honesty, and the respect of common sense moral constraints. If customer funds were misused, then Sam did not listen. He must have thought he was above such considerations. A clear thinking EA should strongly oppose ends justify means reasoning. I hope to write more about this soon. In the meantime, here are some links to writings produced over the years." So he specifically says that ends-justify-the-means reasoning isn't a healthy EA perspective. And yet, right? It's hard for me to fathom EA existing without some level of the ends justifying the means. You have to weigh out the good versus the bad; that's part of the whole thing. And that has to entail some ends-justify-the-means reasoning, because otherwise you don't really have a philosophy at all. So it's a confusing, kind of backwards statement to me. I feel confident that that statement from MacAskill was written not by a philosopher but by a public relations executive. Because, as I kind of already alluded to, common sense moral constraints aren't something that exist a priori, right? The fundamental basis of the philosophy driving this is that you determine what's moral by assessing, on balance, its good and its bad. And there are variations of utilitarianism, like rule utilitarianism, where you modify that statement to say this type of action, as a rule, is more good than bad.
And that gets around some of the issues, but still, to presume that there was an obvious set of moral actions that the EA community agreed on, and that Sam Bankman-Fried was clearly in violation of, is particularly suspicious in light of the fact that Time's reporting suggests that William MacAskill personally advocated to keep Sam Bankman-Fried in the community after he knew he was violating common sense moral frameworks by lying and sleeping with his subordinates. Again, my issue with this statement, now that he's back, is that he has not addressed any of it. At the end of that statement in November, he specifically says, "I was probably wrong. I will be reflecting on this in the days and months to come and thinking through what should change." He also says, "If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill, and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed." So if he's so ashamed, and he feels like his ideas about this philosophy indeed helped create this fraud, which is kind of what's being suggested, and I don't think anyone at this point denies that the philosophy helped drive their decision making, why has he not said anything about it? Why is he not saying, look, I was wrong? I mean, I know why, right? Like you said, there are legal reasons and other things. He was an unpaid advisor to the FTX Future Fund, and he was involved in other ways. I'm sure they've accepted who knows how many donations. So I'm sure there's some legal reason for this, but that's such a bullshit excuse, especially when your whole shtick is your philosophy. I'm not surprised. I'm not even disappointed. Why would a cult address allegations against the cult?
I do just want to ensure that people keep this perspective in mind as he tries to come back into the public fold and public sentiment. We're talking about the philosophy of effective altruism, and taken just as a philosophy, as a thing you're thinking about and perhaps using personally to guide yourself, it may have somewhat limited harms. What's particularly interesting in this case is how some of these social dynamics have developed around effective altruism. There's another community that calls themselves the rationalists, and many of the people in this community, which congregates in places like LessWrong, consider themselves effective altruists. These communities often have a type of norm where you're expected to present your ideas in a certain way, address other people's ideas in a certain way, and maintain a sort of epistemic detachment when you're interacting with them.