> OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.
This is so tangled. I don't mean it as a criticism; I'm sure a lot of SV investments would have much longer Relationship Disclosure sections. So props to them for including this.
Well, I would use it as a criticism. This is a tangled web of like-minded people giving money to each other and calling it charity.
Some people conflate this with Effective Altruism, which I think sucks. Compared to the rigorous work done by GiveWell, there's no way to tell if this is effective, or even altruistic.
It's just people assuming that the world will be a better place if more people who think like them have money, an assumption held by basically everyone everywhere.
Holden's dedicated a huge chunk of his career to moving hundreds of millions of dollars to alleviating poverty and disease. He's one of the founders of the effective altruism movement. You may disagree with his decision here, but to dismiss his efforts, saying it "sucks" and isn't "effective, or even altruistic", while ignoring the extensive public writing he's done on the subject that led him to these views and strawmanning his position as nepotism, is just awful.
"I disagree with the arguments presented, for these reasons" - cool.
If you think the grant isn't a good idea, make an argument for that.
"This person who has dedicated their life to doing as much good as possible is close to other people who also want to do as much good as possible, and their work has led to convergent viewpoints, therefore this isn't altruism" is cheap character assassination.
So, you obviously feel strongly about this, but let me explain why your comments are less persuasive for those of us outside this subculture:
The non-profit they donated to is (by any reading of their mission statement) an organization designed to create new technology that "will be the most significant technology ever created by humans" according to their own statements. It doesn't disburse cash or benefits to _anyone_, and actually pledges to keep some of its research secret: "we expect to create formal processes for keeping technologies private when there are safety concerns" -- a situation the organization claims will happen, presumably regularly!
Creating influential technology is typically done for-profit, and research is typically funded in ways much less open to individual favoritism (review boards are a great anti-corruption tool), and the results of that research are typically available to (among others) the people that fund it. There is a lot about this situation that a reasonable person would describe as unusual.
In addition, all of these changes -- introducing more direct funding with less oversight, lack of access to results, lack of expectation of benefit to the targets of the charity -- all lend themselves to obscuring a fraud. That doesn't mean a fraud is present, but I'd be extremely aggressive about oversight.
What kind of oversight are we getting? Well, right now they list one of their major goals as the "tricky" goal of figuring out if they're making any progress at all.
I would not give this organization money. Dismissing these critiques as "character assassination" ignores the fact that I've only described aspects of the organization, not of the people involved, whom I have little information about.
Moreover, to add further context, the whole basis of Holden's effective altruism work has been around the idea that philanthropic dollars ought to be focused on charities with extremely rigorous proof behind how much they improve people's lives per dollar donated, and how much they need the money.
That context makes advising a donor to direct an "unusually large" sum to an organisation with an extremely vague goal, no tangible measure of progress towards it, little of the transparency demanded of other charities, and existing funding commitments well in excess of its spending plans look like an extremely strange decision, long before you read the disclosure statement.
> Moreover, to add further context, the whole basis of Holden's effective altruism work has been around the idea that philanthropic dollars ought to be focused on charities with extremely rigorous proof behind how much they improve people's lives per dollar donated, and how much they need the money.
This isn't quite true; SCI, a charity that treats parasitic disease in the third world, is the subject of massive uncertainty and conflicting reports of effectiveness. It might turn out that it has very little impact at all. But it's still a recommended EA charity because it looks like there's a decent chance they're doing a ton of good. GW has written extensively about this.
Technological advancement has been responsible for most of the increase in human welfare, most notably allowing us to (at least temporarily) escape the Malthusian condition. It's not implausible that technical research likely to be neglected by markets and academia could be a better use of money than even the most efficient African charity. Then again, I drink the AI-is-very-likely-to-change-everything-we-really-mean-it-this-time Koolaid.
I don't want to pick a side in this, but OpenAI does seem to have published some open source software projects on GitHub, all of those I've checked under the MIT license (https://github.com/openai?type=source). This is not a high bar to clear, as even Microsoft and Facebook do that, but I have also not found any evidence on their website of projects they have not released as free software. Their plan not to publish future technologies when there are "safety concerns" is a different matter, but given their claims, that is exactly what they should be doing. One could argue that in that potential future "OpenAI" will become a misnomer (disregarding the question of what actually should be done), but at the moment it appears that all of OpenAI still _is_ open source.
I'm sure what they do falls within the rather broad field of AI. It's just not AGI, which is the thing they talk about to get people to give money to their AI projects.
It seems objectively better for some people to have access to money -- in this case the GiveWell founder. He's done great work there and now he's helping OpenAI all via a Facebook founder's billions.
"It seems objectively better for some people to have access to money" is classism in one sentence.
And I won't give people a pass because of the thing they co-founded and then forked. I can appreciate Wikipedia and believe that Larry Sanger's fork of it was dumb. I can appreciate GiveWell and think that Open Philanthropy is corrupt.
And your suggestion about classism is moral relativism.
Who is corrupt? The guy giving away $8 billion or the one who founded GiveWell? And one of their focuses is criminal justice reform and trying to improve prison conditions.
If you want to look at something truly corrupt, look at the criminal justice system in the U.S.