People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This is useful for building concrete, reliable knowledge that others can build on. However, the catastrophe-specific approach has disadvantages:
- Catastrophe researchers think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.
- Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.
One analogy for this is that an intrusion (or hack) into a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of what kind of attack would be attempted. Being able to power down and reboot a system, storing a backup, and being able to run the system in a "safe" offline mode are all standard resilience measures for software systems. These measures aren't necessarily the first thing that would come to mind if you were modelling a specific risk like a stolen password or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for a system's proper operation - in the case of software, preservation of its clean code - and the avenues through which it is vulnerable - such as the internet.
So what would be good robustness measures for human civilisation? I have a bunch of proposals:
- Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)
- Run prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)
- Elicit and aggregate expert judgement
- Build research labs to plan risk-mitigation measures (including the Centre for Study of Existential Risk)
- Improve political systems to respond to new risks
- Lobby for mitigation measures
- Build a culture of prudence in groups that run risky scientific experiments
- Build systems for disaster notification
- Improve the foresight and clear thinking of relevant decision-makers
Preventing large-scale violence
- Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)
- Improve cooperation between nations and large institutions
Preventing catastrophic errors
- Legislate for individuals to be held more accountable for large-scale catastrophic errors they may make (including by requiring insurance for risky activities)
Surviving and recovering from catastrophes
- Build underground bomb shelters
- Provide sheltered places for people to live, with air and water
- Provide (or store) food and farming technologies (cf Dave Denkenberger's Feeding Everyone No Matter What)
- Store energy and energy-generators
- Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)
- Store information about building the above
- Store information about building a stable political system, and about mitigating future catastrophes
- Store other useful scientific and technological knowledge, as well as basics such as reading and writing
- (maybe) store biodiversity
- Grow (or replicate) the International Space Station
- Improve humanity's capacity to travel to the Moon and Mars
- Build sustainable settlements on the Moon and Mars
Of course, some caveats are in order.
To begin with, one could argue that surveilling terrorists is a measure designed specifically to reduce the risk from terrorism. But there are many different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granting that there is some subjectivity to all of this. If you knew absolutely nothing about the risks you might face, or about the structures in society that are to be preserved, the exercise would be futile. So some measures on this list will mitigate a smaller subset of risks than others; that is unavoidable. Still, the list is quite different from the one people produce using a risk-specific paradigm, and that is the point of the exercise.
Additionally, I'll note that some of these measures are already well funded, and others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.
Additional suggestions for this list are welcome on the Effective Altruism Forum, as are proposals for their implementation.