What Peter Hurford is Doing and Why: interviewing an everyday utilitarian

Peter Hurford is an unusually thoughtful and productive EA currently working with .impact, Animal Charity Evaluators, and the Greatest Good Foundation. On Tuesday, March 11, I talked with Peter to learn about his current strategy. To explore a new interview format, we conducted the entire conversation over instant messenger. The interview is in two parts; this one is all about Peter.

RC: Hi Peter. I know you've long been interested in ethics. Do you think you've changed your course much because of the effective altruism movement?

PH: From a meta-ethical standpoint, I think I've remained constant. I've always held a sort of anti-realist meta-ethics with a utilitarian practical ethics. This has been reinforced by finding out that many effective altruists seem to hold very similar views, though my particular views on meta-ethics may be less common.

On matters of what a utilitarian ought to do, however, I've been very informed by the effective altruist movement. I considered myself utilitarian before I learned about effective altruism -- I was originally concerned with things like present-day US politics. It was only after I read the works of Peter Singer that I realized I could be doing so much individually to live out my utilitarian values and make the world a better place. When I learned that there was an entire movement dedicated to this, I was really excited!

RC: Are you tempted to drop your bags in Oxford to build the EA movement? After all, if you find one person to replace you, then you will have made a lifetime’s difference!

PH: I definitely liked my time in Oxford when I was an intern for Giving What We Can over the summer of 2013. And I do think community building is important. I just also suspect it is hard. The people willing to be really dedicated EAs are few and far between, so it's not very likely I'll find many people who could replace me.

That's not to say I've given up on community building. I'm trying to spend some time doing research on marketing and doing some trial-and-error stuff to learn how to make more EAs and pitch our message. For example, just this last month I gave a TEDx talk at my university where I introduced the topic of charity choice and GiveWell to an audience of about 200 people. I've also been working with some researchers from the University of Texas to study the impact of Facebook ads on vegetarianism.

But maybe I'm not the best person to do community building. Perhaps there's someone else better suited to find people to be like me! So I'm also trying to learn more about what else I can do to be useful, instead of focusing solely on community building.

RC: Learning about and trying marketing sounds pretty valuable. Have you any tips so far?

PH: Not a whole lot yet. I think the biggest problem effective altruists face in marketing EA is that they don't think like normal people — they think like EAs. I'm guilty of this too, and I've found it hard to rethink the EA message in a way that's easy to grasp for people who haven't yet been exposed to it. For example, the fact that charities may differ in effectiveness by 100x to 1000x is powerful, but very hard to grasp. People aren't very good at comparing things at that magnitude.

RC: It's like what Eliezer Yudkowsky has said about scope insensitivity: “The human brain cannot release enough neurotransmitters to feel emotion a thousand times as strong as the grief of one funeral. A prospective risk going from 10,000,000 deaths to 100,000,000 deaths does not multiply by ten the strength of our determination to stop it. It adds one more zero on paper for our eyes to glaze over.”

PH: Exactly. It certainly doesn't come naturally — otherwise there would be a lot more EAs. You can't just draw one dot and then one thousand dots and expect people to really grasp the difference on an intuitive level. I can't do it, and I've been thinking like an EA for a while now.

RC: So marketing EA involves building a bridge between EA thinking and normal thinking?

PH: I think so. Another example is that for many people giving is very personal. They donate to cancer research not because of a rational calculation that cancer is the most tractable source of suffering in the world, but because their grandmother died of cancer. That's hard to overcome, and can result in an angry and defensive audience if you don't approach it right.

RC: Has this been reflected in your experiences promoting EA?

PH: Yeah. When I was putting together my TEDx talk, I made sure to emphasize how giving was personal and that any sort of giving is a good thing to be doing. I framed my suggestion of donating overseas to global poverty as a suggestion I was excited about, not as a moral imperative. I think people were really receptive to this.

I also made sure to get a lot of feedback from people who weren't EAs on how to sharpen the message. I'm excited for when it comes out on YouTube, because I think it will be a decent step forward for marketing the EA message in a focused and "normal" way.

RC: Cool, and are the animal advertisements getting some traction?

PH: I don't know yet. There's a lot of anecdotal evidence that advertising videos of factory farming on Facebook is an inexpensive way to get someone to go vegetarian or vegan, at least for a while. However, we still don't really have any good data on this. We're currently in the first pilot stage of a study to learn more. I've written about the study's progress on the .impact page.

My hope is that we'll not only settle the question of whether this method works for creating vegetarians, but also learn broader lessons about how to engage people on important issues, which could help us market EA. Also, I hope to learn more about research methodology. Maybe I'll figure out some ways to measure advertising for other EA activities, or for EA generally.

RC: Who would you try to attract to EA activities?

PH: I'm not sure yet. My current intuition is that people either get EA immediately or they don't, and an unpublished survey that Joey Savoie did for Giving What We Can seems to back this up. So that would suggest that I should market to as many people as possible, in the hopes that it will stick for some. Though others have pushed back against me on this, suggesting that a more tailored message could reach those we are not currently reaching.

That's not to say that I don't think some groups are more promising than others. I'd definitely say that we should reach out to young people (like college students), as they are generally more open to ideas and are more willing to carry out major life changes. Not to mention that you get more EA years by focusing on those who are young.

Another group I'm really excited about as a target market is atheists. Atheists are already skeptical, already tend to think in terms of logic and rationality like most EAs, already tend to be consequentialist in their ethics, and already don't think that society is automatically correct about things. I think a message of "skeptical altruism" and "skepticism toward charities" could go really far in the atheist/skeptical movement, but I (and other people) haven't really done the work to make it happen yet.

RC: Atheism seems to be how Luke Muehlhauser got into EA and he's certainly one of the most effective.

PH: Yeah, and it's probably no coincidence that the majority of EAs are atheists. I was an atheist before I became an EA too.

RC: Sure, although there's the problem that you don't want to exclude people who are theists, or who hold non-utilitarian views.

PH: That's very true. Perhaps ironically, I also think that the devoutly religious could be a very good target market too, as they already tend to tithe their income and are very dedicated to altruism. Maybe we could convince a church to do a fundraiser for a GiveWell top charity, for example? A local church here just had a fundraiser that raised $700,000, so there's certainly no lack of willingness to move money to charity. It's just that the current charities aren't effective. But I don't know how receptive churches would be to this.

RC: The other thing is that it depends on which area you want to focus on. For example, if you think it's more important to donate to GiveWell itself, that's going to be much harder to fundraise for.

PH: The idea is that even if raising money for GiveWell is, say, 10x better than raising money for SCI or something, we'd reach more than 10x as many people with a pro-SCI message as with a pro-GiveWell message. But maybe that's wrongheaded, and it's certainly really sensitive to just how much better the best charity is.
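To spell out the arithmetic of that trade-off (a minimal sketch with illustrative symbols, not figures Peter actually cited): suppose a pitch reaching $N$ people converts a fraction $p$ of them into donors giving $d$ dollars each to a charity that produces $e$ units of good per dollar. Then total impact is roughly

$$\text{Impact} \approx N \cdot p \cdot d \cdot e$$

If GiveWell-directed dollars were ten times as effective ($e_{\mathrm{GW}} = 10\,e_{\mathrm{SCI}}$) but a pro-SCI pitch reached more than ten times the audience ($N_{\mathrm{SCI}} > 10\,N_{\mathrm{GW}}$) at similar $p$ and $d$, the SCI pitch would produce more total good. The conclusion flips whenever the effectiveness gap exceeds the reach gap, which is why the answer is so sensitive to how much better the best charity really is.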

RC: Right. Do you see any other projects that are neglected in the space of fundraising and recruitment?

PH: Right now, I do think there aren't enough EA resources invested in some things, like learning more about community building and fundraising. Or really just learning through trial-and-error in addition to learning through writing research papers.

RC: Where would you put $10,000 for an experiment in EA community building?

PH: That's a tough question. I'd be tempted to fund a large study of EA marketing materials, though I'm not sure what such a study would look like. I'd also consider hiring a few different professional marketing firms to see what sort of advice they'd offer. Though I've heard that CEA has had bad experiences hiring marketers in the past: the marketers just didn't really get EA.

I hope my Facebook ad study will help inform what EA community-building studies could look like.

RC: What do you think the EA community should do with all the people and money they gather together?

PH: That's an even harder question. I like the current portfolio approach of trying multiple different projects and seeing what works and what doesn't. The current culture of extreme transparency and a frequent, self-skeptical review cycle makes this very useful. I'm not sure I'd want to change all that much — I'd just encourage some people to try new things and go after some low-hanging fruit.

RC: Do you think we have any good reasons yet to think that some of these projects will turn out more effective than the others?

PH: Right now it's hard to say. Certain causes seem more important than others, so we might be tempted to think that orgs working on the more important causes will turn out to be more effective. But tractability is another important issue that's not easily taken into account.

I'm personally fine with letting all the EA orgs run as they are for another two years or so before making judgments about which should shut down. But orgs that can't demonstrate clear impact after a couple of years should face a strong burden of proof against continuing.

RC: Do you think that position might make things hard for organizations working to improve the far future?

PH: Yes and no. Obviously organizations working toward the far future aren't going to be able to demonstrate their impact in traditional ways, and people will be understanding about that. But they still need accountability, as it's all too easy to spin your wheels in research without knowing it. I suspect these organizations can demonstrate their worth by engaging with external experts and getting lots of external reviews. I also think they can pay careful attention to their progress on issues over time and to the impact of individual articles, given their goals. But I don't have any experience with these institutions, so I could be very wrongheaded in this approach.

RC: What do you think of the argument that we should work on reducing existential risk because of the potential scope of future civilization?

PH: I find it very persuasive, but I'm concerned that work on reducing existential risk is intractable. So far I've worked on more general learning about effective altruism, but I wouldn't be too surprised if I were focused more particularly on x-risk in a few years' time.

RC: What do you think would convince you to be more or less focused on x-risk?

PH: I think reading up more on x-risk would be very helpful. But right now I'd want to be convinced that there are tractable things I could be doing to knowably reduce x-risk with some rigor above that of pure speculation.

RC: You have argued in a post that we should favor exploration over exploitation at this stage, which must play into your thinking here. Concretely, how do you think we can best explore our impact on the far future?

PH: Yeah, that article basically sums up everything I've been saying so far. Obviously exploring our impact on the future is hard, because we can't check the future to see whether what we do has an effect. This leaves us without easy feedback loops, which is very troubling. I think this is an issue that needs to be thought through more.

Right now, I'd suggest trying to engage a broad expert community on our far future research, but I don't know how feasible that is.

I'd also suggest doing things in the here-and-now that have narrow and easy feedback loops so that we can learn more about what we are capable of, and then slowly begin to translate this to the far future. This is the approach I am taking, and I think the approach that GiveWell favors as well.

A similar idea is to work a lot more on improving decision making and predictive abilities, through stuff like DAGGRE and the Good Judgment Project.

Though that's another example of using here-and-now feedback to improve our ability to forecast the farther future.

RC: If you had to put money on it, which of these three projects would you back?

PH: Right now, the second one — doing more work on tractable EA stuff, like community building. But I'm pretty uncertain about that choice.

RC: Holden Karnofsky said in a recent conversation with MIRI, roughly: "I change my mind when an argument reaches a certain strength threshold, and x-risk arguments have not reached that level in terms of credibility, track record of the speaker, intellectual methodology, and outside views." Do you endorse this epistemology?

PH: Yes. In that interview, Holden said that he had Knightian uncertainty with regard to x-risk. I feel much the same way. In my essay Why I'm Skeptical About Unproven Causes, I characterized the issue as playing a lottery with no stated odds.

RC: But Knightian uncertainty is apparently immeasurable. If that's how you truly felt about speculative causes, it would disprove utilitarianism.

PH: I don't think of it as immeasurable, just an estimate with incredibly large error bars. The uncertainty is in my mind, not in the real world. I basically just don't trust myself to evaluate the impact of x-risk reduction yet.

RC: Ok, so back to the rather nearer future: where do you see yourself in five years?

PH: I'm not quite sure. Right now I'm an undergraduate, but I'll be graduating this May. In June, I'll be going to work at a start-up doing web development and statistics. It sounds like a cool job with a lot of useful skill development. I imagine I'll either stay in web development and earn to give while doing EA volunteering on the side, or I'll save up money and retire early so I can work on EA stuff full-time.

RC: And how is .impact going?

PH: It's going well. Consistent with the culture of transparency and self-skepticism, we're aiming to publish a candid review of our impact within the next couple of weeks. Overall, I think we've accomplished a lot for an organization of just volunteers. But we've definitely faced some limitations and setbacks.

RC: What have you learned about coordinating EAs?

PH: Well, we've found it very difficult to recruit committed volunteers beyond the original four people who made up .impact from the start, though we have gotten a few people involved to do a few things.

Overall, we've found that the demand for EA projects generally outpaces the supply of people willing to do those projects by a noticeable margin.

RC: What projects does .impact have in the pipeline?

PH: Right now we have fourteen projects in active development, which I find kind of impressive. Our biggest project is called "Skillshare", and it's a place where EAs gather to request and share things, like skills, advice, ebooks, whatever. It's ended up being a good community for advice so far, and it hasn't even launched yet. We intend to launch it any moment now.

I've been working on the veg ad study which we talked about earlier. I've also been working on making a database of jobs relevant to EAs. We've also been trying to develop some other useful infrastructure for EA, like an EA reddit and an EA Wiki.

Some people outside of the four founding members are also working on projects. Brian Tomasik is working on expanding Google Ads for non-profits, Tom Ash is working on a survey of EAs, and Pablo Stafforini and Vipul Naik are working on improving the EA presence on Wikipedia. I imagine these projects would have happened anyway, but .impact provides a convenient meeting place that helps move them forward.

RC: Would .impact ask for financial backing now, or at some point in the future?

PH: Not now, but at some point in the future, maybe. Our policy so far is to look for funding on a project-by-project basis rather than seek funding for .impact as a whole. Right now none of our projects need more funding than we can provide on our own, though it seems likely that my veg study will need a lot of backing sometime in the future.

Another possibility is that people might consider quitting their jobs to work on .impact full-time, and then we'd seek funding to cover their cost of living. But we're not anywhere near this point yet.

RC: Where might EAs be able to meet you to continue this conversation?

PH: I’d consider going to a CFAR camp or the EA Summit. I’d like to visit the Bay Area soon. I’ve also thought of maybe trying to land a web development job there if I want to keep on earning. For now, I encourage people to reach out to me via Facebook or email, or read my blog. I’m usually very quick to respond and love talking to new people!

RC: Thanks very much for your time and for sharing information about your projects. I look forward to hearing more from you!

Photo credit: K. Lowry