2015-09-03

This post is a slightly cleaned-up version of an email conversation I had with the brilliant and friendly Kelsey Piper, after reading her blog post last July on budgeting as an EA. Since it was originally an email discussion, some parts might be vague or unclear, for which I apologize in advance. Published with permission.

In summary: Earning-to-give breaks down into #1, having a large income; #2, giving a large proportion of that income; and #3, choosing an effective and underfunded place to give to. The current connotations of ETG emphasize #2 > #1 > #3, but the order of problem difficulty is #3 > #1 > #2. A large portion of the US population, especially the wealthy, already has #2 down (US charitable giving is ~2.5% of GDP). But >99.9% fail at #3 – either donating only to non-weird things that rapidly become overfunded, or donating to scams/cults/pseudoscience and other ripoffs.

Me: Hi Kelsey, I just read your Tumblr post on budgeting. I mostly agree with what you’ve said, but I’m pretty sure it’s counterproductive to talk about going on a budget in order to donate more – there are already huge piles of money sitting around unused everywhere, which nobody really knows what to do with. That may sound weird, but let me give some examples of people in our social network.

As you’ve probably heard, Dustin Moskovitz is worth about $9 billion, and he plans to donate the large majority of that to charity through Good Ventures/Open Philanthropy Project. Private foundations are legally required to donate 5% of their assets each year, so for an $8 billion endowment that would be annual spending of ~$400 million or more. It would take ~20,000 people getting high-paid jobs in Silicon Valley and donating 10% of their income every year, for the rest of their lives, to equal one Dustin Moskovitz. And there’s every reason to think that Dustin isn’t a one-off special case: at least 137 billionaires have signed Bill Gates’s pledge to donate half their wealth, there are several other billionaires with whom people within EA are already discussing philanthropy, and the EA idea is in the middle of a big publicity/coolness boom that shows no signs of slowing down.
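To make the arithmetic behind that comparison explicit, here’s a quick back-of-envelope sketch in Python (the $200,000 salary is my own stand-in for a “high-paid job in Silicon Valley”; the other numbers are the ones above):

    # Back-of-envelope: how many 10%-pledgers equal one Dustin Moskovitz?
    endowment = 8_000_000_000       # ~$8 billion placed in a private foundation
    payout_rate = 0.05              # legally mandated minimum annual payout
    annual_giving = endowment * payout_rate   # ~$400 million per year

    salary = 200_000                # assumed high-paid Silicon Valley salary
    pledge_fraction = 0.10          # donating 10% of income
    per_person = salary * pledge_fraction     # $20,000 per person per year

    print(round(annual_giving / per_person))  # -> 20000 people, matching the estimate above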

But even forgetting about the ultra-wealthy… last week, I was talking to one of my friends at Google, whom I’ve known since he started working there some years ago. Within the company, he isn’t especially important or famous or anything; he just has a normal Google job with a normal Google salary. He mentioned, to my surprise, that since he started working he hadn’t bothered to cash in any of his Google stock, which by now must be worth half a million dollars or more. He has a family, he lives in the Bay Area, and as far as I know he isn’t especially frugal; he just never had a good enough reason to spend any of it. After you reach a certain point, there just isn’t that much to spend more money on.

And even within the realm of spending less to donate more… doing some rough math, there are at least half a million people in the Bay Area who own houses worth over $1,000 per square foot. If one of them sold their house and moved into a new one that was just 100 square feet smaller – a barely noticeable change – they’d have over $100,000 to donate. That’s the same donation size as someone making the US median income of ~$40,000 taking the 10% giving pledge and then sticking to it every year for the next two and a half decades. (Of course, you wouldn’t literally switch houses over a mere 100 square feet, because of transaction costs; I’m just trying to illustrate the general point.)
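For concreteness, here’s the same comparison worked out, using the round numbers above (with 25 years standing in for “two and a half decades”):

    # One-time downsizing: a $1,000/sqft Bay Area house, 100 square feet smaller
    from_downsizing = 1_000 * 100         # $100,000, available immediately

    # The 10% pledge: a US-median earner (~$40k/year) donating for 25 years
    annual_donation = 0.10 * 40_000       # $4,000 per year
    from_pledge = annual_donation * 25    # $100,000, spread over 25 years

    print(from_downsizing, from_pledge)   # -> 100000 100000.0

And note that the downsizing lump sum is available immediately rather than spread over 25 years, which only strengthens the comparison.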

In spite of all that, I do think there are cases where donating makes sense even if you aren’t that wealthy. In particular, if what you donate to is so strange or so new or so unpopular that virtually nobody else would be willing to fund it, then donating is likely a reasonable idea (and I have donated several times on that basis). But overall, given the ginormous wealth of the Bay Area, it seems likely that for anyone who has any use for marginal dollars beyond buying luxuries they don’t care much about, budgeting in order to give more is penny-wise and pound-foolish.

Kelsey: I think that people who make significant lifestyle changes consonant with their identity as an EA are likelier to get the right answers to hard effectiveness questions (giving a painful amount of money away made me value rationality much more and in a much more direct, immediate, pressing way; there’s a fair amount of evidence that people are less biased when there’s money on the line). I don’t want people who don’t give identifying as EAs. I think it turns the movement into virtue-signaling faster than almost anything else could. If I were in charge I’d set an actual “if you don’t give at least this much you’re not an effective altruist” threshold, because I really do think that our most likely failure mode is becoming a movement as meaningless as the “green!” label on food, and an expectation of giving (from everyone middle-class and up) prevents that.

Me: I largely agree, but I think the situation is more complicated than the world-model implied by what you’re saying.

Forgive me for the silly metaphor, but say there’s a big asteroid hurtling towards the Earth. (I’m using this as a metaphor for x-risk, but also for “ordinary” bad things like poverty, disease, aging, and so on.) We need to build lots and lots and lots of nukes to blow up the asteroid before it hits us. There are basically two components to a simple nuclear weapon: there’s the highly enriched uranium (HEU), and then there’s a casing with explosive charges, which slams two pieces of HEU into each other at high speed. Making the casing is far from trivial, but a competent team of electrical and mechanical engineers can probably figure it out, at least well enough. On the other hand, making the HEU is enormously difficult. Even national governments with huge laboratories and thousands of scientists and multi-billion-dollar budgets often fall flat on their faces. Metaphorically, the casing represents raising money for improving the world, and the HEU represents the ability to convert money into utilons with reasonably high efficiency.

In this metaphor, GiveWell’s original model corresponds to a group that hires two teams: one of them starts work on making casings, and the other goes around to all the world’s nuclear research labs, calls them up, and asks if they have any HEU just lying around which they aren’t using for anything. On the one hand, this is certainly a good idea. On the other, it’s not really tackling the hard part of the problem; you’re just piggybacking off other people’s existing solutions. I’m hugely impressed by how Holden recognized this and took OPP in a different direction, and how they’re now tackling the hard part of the problem head-on (at least, as head-on as anyone else has).

In the metaphor, you seem to be saying that the project to stop the asteroid should be composed 100% of experts on casings, who are actively helping to manufacture them. That’s certainly better than a project which does nothing, which (as you said) is the default failure mode. But it puts the emphasis – and therefore things like social-status rewards – on the half of the problem that’s by far the easier to solve. It’s also (breaking the metaphor) dangerously close in memespace to making the movement about showing off self-sacrifice, which is a failure mode that humans are probably evolutionarily adapted to fall into. I think this is why people keep suggesting things like giving blood, donating kidneys, and so on despite their not being plausibly effective.

The flip side is that, unfortunately, I think you’re right when you say that not having a donate-to-enter threshold makes it easier for the movement to degenerate into meaninglessness. It’s easy to judge whether someone is donating, and then not award status to people who don’t; no one really knows how to award status to [figuring out how to turn money into utilons in a non-domain-specific way] in a way that’s resistant to cheating. But I also think that, if we’re aiming to solve a significant fraction of the world’s major problems, we should kinda expect to have to tackle murky, difficult problems that nobody really knows how to handle yet.

Kelsey: Hmm. I hadn’t thought about that before (finding ways to convert money into utilons being the Hard Problem) – I guess because I’ve always sort of thought of the economy as being a fairly efficient money-to-utilons machine. But I agree that we need more people doing research about what is effective; maybe they should all be people like Holden, who first earned a lot of money he wanted to give away and then pivoted to figuring out how to give it away? This admittedly involves wasting person-years of work doing something that, in your model, is mostly signaling. But I don’t think it’s totally signaling – the research isn’t actually a limiting reagent on the good money can do, just a multiplier – and it might also involve learning skills that make one a better researcher. Maybe we should point people towards careers that involve making high-stakes decisions with tight feedback loops, to hone the skills we eventually want them to use on figuring out multipliers.

And suffering is an attractive failure mode because it’s costly signaling of commitment, and you can’t actually do without costly signaling of commitment, if commitment is important. You can at least demand that the costly signaling not compromise future ability to do good? I hope? If someone donated a kidney I’d trust them more with my money (well, with the lives of currently existing humans). I wonder if that emotion is justified.

Me: I think “the economy” is mostly just a bad category – it takes a huge number of dissimilar things and throws them together in the same box, to the point where measurements of “the economy” (GDP, unemployment, inflation, etc.) are at best rough guesses and at worst outright lies. Economics contains a fair amount of useful knowledge, but IMO about half of its ontology needs an overhaul. This isn’t really that surprising for a science at such an early stage – you could think of it like, say, chemistry in the 17th century. There are lots of observations and rules and procedures that basically work, but there are still central concepts like “transmutation” that need to be thrown out, and other ones like “valence electron” that haven’t been discovered yet. (Not that I know how to do that – I have guesses, of course, but this’ll be a major decades-long project, just like the invention of modern chemistry was.)

I think a better metaphor is to see the world as a collection of machines. A “machine” isn’t a literal mechanical device, but a collection of devices, procedures, memes, writings, traditions, institutions, Schelling points, and so on that operate together to reliably produce certain results. Some machines work well; others work surprisingly badly; and a great many simply fail to exist or haven’t been invented yet. You could say that entrepreneurship, in a broad sense, is the creation of a new machine; FDR and Florence Nightingale were entrepreneurs by that definition. Machines can also be destroyed, and of course they constantly evolve in response to the forces around them.

The way you produce happy lives for a large number of people – a larger number than you could help directly with your own muscles – is to build a set of machines that, taken as a whole, reliably give people what they want. (What exactly they do want is a whole other complex topic, and a central question for, e.g., MIRI’s FAI theory. But for now, we can just say that, e.g., no one ever wants to get infected with malaria.) In some cases, these machines already exist, and you can freely make use of them when setting up your own stuff. E.g., if your plan is to help people by setting up a gold-mining operation in Kenya, there already exists a very efficient machine to buy, sell, transport, refine, distribute, and price gold that you can take advantage of. You can more-or-less just bring big sacks of gold dust to downtown Nairobi and hand them off there – you can trust that someone else will take care of utilizing them in the most efficient known way. However, this machine only exists because of a number of background conditions:

– fungibility: one ounce of gold is the same as any other ounce

– perfect information: it’s easy to tell whether a given bar is actually made of gold

– cheap shipping and distribution: the cost of transporting and distributing an ounce of gold is far less than the value of the gold itself

– practical contract enforcement: there exist organizations which would be meaningfully punished if they just stole all your gold, so they don’t do so

– (a bunch of others I won’t get into)

By contrast, if tomorrow you discovered a cure for cancer, by itself that would be more-or-less useless. There’s no machine for evaluating and pricing and manufacturing and distributing cancer cures. You’d have to build one yourself, and that’s a huge amount of work and requires lots of different skills – dealing with bureaucracies, hiring and managing employees, raising funding, conducting human trials, and on and on and on. If you don’t happen to have those skills, then people will keep dying of cancer. (One example I have personal familiarity with is Dr. Eric Lagasse’s work on liver regeneration – we tried to build a machine for distributing this to patients, and fell flat on our faces, despite being IMO smart and capable in other domains.)

There isn’t any limit on how powerful a machine can be – the easiest historical example is Gutenberg’s printing press, the important part of which wasn’t really a “press” so much as a new set of techniques for making and using metallic type. On the other hand, trying to build an arbitrarily powerful one faces two fundamental constraints. The first is that, to be very powerful, it has to be fundamentally dissimilar from anything that many other people are trying to do. If it were similar to machines that tons of other people were already building – e.g. a better lithium-ion battery – odds are someone else would have built it already. The second constraint is that the vast majority of really original ideas are terrible; if you just naively disregard existing constraints, then you’ll probably fail, because reversed stupidity is not intelligence. (Paul Graham and Peter Thiel talk about this at length in Startup Ideas and “Zero to One”, respectively, though it’s a counterintuitive enough idea that you have to sort of see it from many angles to understand it well, kinda like the proverbial blind men and the elephant.) So to succeed, you have to know something that other people don’t; to do that, you have to know how to recognize which things you don’t know; and knowing how to recognize which things you don’t know is just really, really hard. Eliezer’s Sequences are the best attempt I’ve seen so far to teach it (Artificial Addition is one particularly good example), and I like to think I’m pretty smart, and even so I don’t think I really understood it until I’d read them three or four times over about six years.

In keeping with the machine analogy, any given machine, once built, usually only works within a given set of operating parameters. You can make your car put out 100 kW instead of 50 kW by pressing the gas harder, but you’ll never make it produce 10,000 kW, because it’s designed to top out at 200 kW or thereabouts. Similarly, any given charity or type of charity can only handle so much money before it conks out. And charities (or machines of any other kind) that can operate productively under a load of even one percent of the money the developed world has – tens or hundreds of billions of dollars a year – are more-or-less nonexistent, because of various scaling issues. You’ve probably read that humans are evolutionarily adapted to work in small groups, from a handful up to 100 or so; the further you go beyond that, the more you’re stretching the cognitive abilities of the poor saps who have to run the thing beyond their natural design limits. One of the very few well-understood ways around this is to avoid tackling the scaling problem yourself, by just redistributing the money to others in some simple, well-defined way. But precisely because this is one of a very few well-known ways around a critical bottleneck, it’s extremely popular, and you’d therefore need a huge amount of resources to substantially add to what’s already being done (IIRC, even ignoring existing aid altogether, there’s already over $300 billion per year in direct remittances to the very poor from friends and family).

Hence, under this framework, the two largest ways to contribute at the margin are:

– to build a new machine where the type of machine is relatively well-understood, and the bottleneck is that the existing machines can’t scale well and the type of labor required to build new ones is scarce; this covers creating new charities to address tropical diseases, most “ordinary” software entrepreneurship, and many other things besides

– to build a new machine where the type of machine isn’t well-understood, and the bottleneck is the skill and background knowledge to have the required insights into what blanks need filling in; Eliezer is one example of someone we know who’s AFAICT succeeded at this, but successes here are necessarily much rarer than in the first category

By “build”, what I really mean is “contribute to building in a relatively non-replaceable way”; there are usually many different types of skills required, hence many opportunities to contribute. And it’s certainly true that one opportunity is “provide the initial rounds of funding”. However, in order for your financial contribution to be non-replaceable, you yourself must have the same types of unusual cognitive abilities as the people running the organization – the ones that make them able to succeed when most others couldn’t. If you yourself only have ordinary-programmer cognitive abilities, and not (for example) figure-out-which-organizations-aren’t-likely-to-get-torn-apart-by-internal-conflict abilities, then on average your funding will just go to the same place as the ordinary programmer’s. And so either you won’t fund the organization at all, or lots of ordinary programmers will fund it too and your funding won’t mean much on the margin.

And you can’t outsource your judgment to an organization-evaluator – because if your ability to judge the judgment of organization-evaluators is the same as an ordinary programmer’s, then lots of ordinary programmers will follow the recommendations of the organization-evaluator and you get the same problem. The ability to contribute by offering funding is, to a first approximation, only valuable insofar as the funder personally has unusual abilities – not possessed by any billionaire, or by more than a small fraction of Silicon Valley career software developers – to judge which things need more money and which need less. (And if you do have that ability – not meaning Kelsey-Piper-you here, but hypothetical-abstract-you – and don’t already have a good chunk of change to contribute, why not become an accountant? All the important-to-humanity organizations I’ve been closely involved with have been in desperate need of good accountants. Again, it’s not accounting itself that’s valuable here, but accounting combined with highly-unusual-for-accountants-judgment-of-which-organizations-to-contribute-to.)
