In Defense of Totalism

The repugnant conclusion (RC) is a commonly discussed problem in population ethics. If our ethical theory merely aggregates the total happiness of a population, then scenarios with many slightly-happy people can be preferred to scenarios with a few very-happy people. Why does this matter? Because making decisions about future populations is hugely important, and if we don’t have a good approach even in theory, what hope do we have of facing the messy realities of the situation?

Unfortunately, there is no possible theory which avoids the repugnant conclusion without having another, even worse property. This paper provides a nice summary of different population approaches and their flaws.

It turns out that every impossibility theorem relies on the axiom “population ethics must avoid the RC”; this suggests that the RC itself is at the heart of the issue. It makes me wonder: is the repugnant conclusion really so bad?

After thinking about it, I don’t think there is anything wrong with the RC. In fact, the “repugnance” goes away once we consider the problem in a different framing.

Let’s look at the choice between two possible worlds:

  1. A world of 1 million people, where each citizen has a happiness level of 0.1.
  2. A world of 1000 people, where each citizen has a happiness level of 1.

Of course, the standard repugnant conclusion arises when we prefer scenario 1 over scenario 2.
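To make the comparison concrete, here is a minimal sketch in Python, using the numbers from the two worlds above, of how the total and average views rank them:

```python
# Compare the two worlds from the example above under total and
# average utilitarianism.
world_1 = {"population": 1_000_000, "happiness": 0.1}
world_2 = {"population": 1_000, "happiness": 1.0}

def total_happiness(world):
    # Total view: sum of happiness across the whole population.
    return world["population"] * world["happiness"]

def average_happiness(world):
    # Average view: per-person happiness (everyone is identical here).
    return world["happiness"]

print(total_happiness(world_1))    # about 100,000: totalism prefers world 1
print(total_happiness(world_2))    # 1,000
print(average_happiness(world_1))  # 0.1: averagism prefers world 2
print(average_happiness(world_2))  # 1.0
```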

We can make this problem more exciting by asking if we should switch between 1 and 2. Put this way, the RC becomes “If we lived in world 2, we should grow the population in order to reach world 1”. Perhaps this still seems wrong to you, but let’s look at the alternative, what I call the reversed RC: “If we lived in world 1, we should eliminate most of the population in order to get to world 2”. This should make you at least as uncomfortable as the repugnant conclusion itself! While the RC advocates for more births, the reversed RC advocates that we cull the population to achieve higher average happiness.

Notice also that in the supposedly “repugnant” scenario with a high population but low average happiness, every single person is still happy to be alive. Put this way, it doesn’t sound bad to me at all. This touches on the neutrality of happy lives. Should we be indifferent to the possibility of creating new, happy beings? I hope to write more on this topic, but it seems to me that creating happy beings is clearly a good thing. I think this piece makes the point well.

You may still dislike the idea of totalism, but consider some of the issues that alternative systems of ethics face. For example, the Sadistic Conclusion is a problem which plagues approaches that seek to maximize average happiness rather than total happiness. It is a situation where adding a few people with negative welfare is preferred to adding many people with positive welfare. This seems significantly worse than the repugnant conclusion itself.

In a pragmatic sense, totalism also seems preferable to other approaches since it directly chooses to maximize the total good rather than impose additional constraints. In a direct competition, totalism will always produce more happiness and outgrow the other approaches, leaving one to wonder if it might be instrumentally better to follow a totalist approach despite believing something else.

In all, I believe totalism presents a simple, reasonable approach to population ethics. Even if you find the repugnant conclusion strange, all other approaches to population ethics have similar or worse implications while being competitively disfavored. Indeed, the supposed repugnance of totalism comes from a misleading framing of the problem; upon closer inspection, the repugnant conclusion is not nearly as bad as it seems.

Links #1

First links post! Inspired by Tommy Collison.

I intend for these to be “lists of interesting things I am reading” rather than links which are particularly current. As usual, my focus is on interesting ideas rather than beautiful presentation.

What Do We Do With the Science of Terrible Men?

A Soul’s View of the Optimal Population Problem, note the similarities to the short story The Egg.

Progress made towards artificial wombs? Babies from bone marrow? Sperm on the moon?

Against neutrality about creating happy lives. I was hoping to write something along these lines but this says it better than I could.

Politics is way too meta

Social Penumbras seem like an important concept to consider when thinking about social dynamics and movement-building.

How to end stagnation?

Maps of Matter

Experimental evolution of bet hedging. Could this type of evolution be relevant to viruses?

Handling the Next Pandemic

Now is a great time for deep introspection and technical analysis on how to handle the next pandemic by a group of qualified people in relevant leadership positions.

But unfortunately, I’m all you’ve got.

Why now? Because society’s memory is short, and people’s memories are shorter. Now is absolutely a good time to review what worked, what didn’t, and build new institutions before the next pandemic.

So first, what worked? Some ideas:

  • vaccines
  • masks
  • remote work
  • lockdowns (EDIT: maybe? I need to review the evidence on this)

What didn’t work? Actually ask yourself, what seemed like it would work but didn’t? These things are important to remember as we prepare for the next pandemic. Some suggestions:

  • Travel bans. Perhaps these would work better if they were stricter, but the actual travel allowances for citizens allowed the new strains in, even though we saw them coming.
  • Individual compliance with guidelines
  • Mass testing
  • Contact tracing
  • Top-down governance and regulations
  • Fast vaccine authorization
  • Fast vaccine rollout

So what should we do next? What are some ideas you wish were tried? What worked in some places even if it wasn’t widespread? Some suggestions:

  • Supply chain management. Creating stockpiles of masks, disinfectant and vaccine inputs is a good start, but more sophisticated policy is needed. Subsidizing companies which maintain more robust supply chains might also help.
  • Testing new possibilities with immunizations: mix and match vaccines, half doses of vaccines, variolation, intradermal delivery of vaccines, and first doses first should all be seriously considered in the future (EDIT 4/22: Convalescent Plasma Therapy also seems valuable)
  • Human challenge trials
  • Much less regulation in pandemic-related medicine. Existing regulations slowed down every step of the response to the pandemic, from medical devices to masks, testing, and vaccines.
  • More and better public information. There need to be robust systems in place to outline local laws, list local advisories, and report case counts, deaths, hospital capacity, current research, and activity-based risk factors so that people can make informed decisions.
  • Better handling of misinformation
  • Right-to-try legislation. People should be allowed to try out plausible treatments. Everyone gains from risk-takers willing to give an experimental therapy a shot.
  • More individualized laws. By giving localities more flexibility in choosing laws to deal with the pandemic, we can better adapt policy to the local environment and increase local compliance by ensuring that there is buy-in from key groups.
  • More effective, cheaper, and better-supplied masks.
  • Contact tracing
  • Cheap, frequent, and less-regulated testing
  • Global surveillance of existing pathogens and potentially zoonotic strains. The Emerging Pandemic Threats program should be improved on and expanded.
  • National guard of medics. Why not have many people with basic medical training? This team could be called upon during a pandemic to monitor patients, distribute medical supplies, administer vaccines, disinfect surfaces, run testing centers, set up field hospitals, and administer basic treatments.
  • Advance market commitments for vaccines, antimicrobials, and other treatments.
  • Market-supporting legislation in all medical supplies and services.
  • Better handling of national economies using targeted, automatic stimulus. Studying the legacy of this pandemic will help identify which forms of stimulus worked, and which were superfluous.
  • Supported markets for pandemic insurance to allow individuals, companies, and governments to better cover their risk.
  • Country-level reparations for pandemic damages. This is more speculative, but it seems like there is room for a sort of international tort law where countries can demand payment from the countries whose public health practices increase global pandemic risks or cause pandemics. International agreements of this sort would give countries better financial incentives to prevent the spread of diseases outside their borders.
  • Fight vaccine hesitancy.
  • Airflow management. By better controlling the way air flows through a building, we can reduce the likelihood of aerosol transmission, especially in hospitals.
  • Better prediction. Currently, pandemic modelling is not very good. But I still think it is worth much more research. Even a slight edge in predicting outbreaks could pay for all the research put in. Some gains could be had from working on predictions beyond simple spread. Understanding supply chains, understanding the conditions for herd immunity, doing inference on R0, predicting virulence, or predicting mutations would all help.
  • Consider what additional preparations are needed if an engineered pandemic or bioweapon is used.

There are certainly other good ideas out there which I couldn’t list here, such as the countless, ground-level improvements made in hospitals, factories, and other places. Recording the details of these innovations will help people adapt more quickly the next time around.

There are also important questions which need to be answered by looking carefully at the data in the future. How much will a loss of schooling affect children’s wellbeing and future earnings? What are the long-term health effects of COVID? Precisely how much did lockdowns work? The pandemic changed our world in so many ways, and it is important to learn from all of them.

As politicians say, “don’t let a good crisis go to waste”. This is our chance to virtually eliminate pandemics as a catastrophic risk by learning from our collective mistakes. Building robust safety measures and following through on some of the many good ideas that have been proposed could save millions of lives in the future. The time to implement these changes is now, not during the middle of the next pandemic.

Charity Prediction Markets

Prediction markets are pretty cool, but not uh, technically speaking, legal right now because of similarities to gambling.

Charity prediction markets are a tweak on normal prediction markets which might be able to provide similar benefits while sidestepping legality concerns and donating money to charitable causes.

A charity prediction market (CPM) operates exactly like a prediction market, but all of the profits individuals make go to charities of their choice.

It seems to me that since all the proceeds go to charity, these prediction markets might be able to sidestep the legality issues which plague ordinary prediction markets while encouraging charity.

Let’s examine this in a little more detail.

Like a normal prediction market, traders place bets on the outcomes of different events. They can reinvest their winnings as much as they want on future bets within the market. However, when it comes time to cash out, instead of receiving a check, traders choose from a list of charities to donate to, and the CPM simply transfers money from their account directly to a participating charity. Essentially, when someone puts money into the market, they are donating to charity, but can increase the amount of money they give if they make accurate predictions.
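A minimal toy sketch of this flow in Python (all class and method names are hypothetical, and this ignores odds, order matching, and fees): money deposited into the market can only ever leave it as a donation.

```python
# Toy ledger for the mechanism described above (hypothetical names).
# Deposits and winnings stay inside the market; cashing out sends the
# balance straight to a charity, never back to the trader.

class CharityPredictionMarket:
    def __init__(self):
        self.balances = {}   # trader -> charity-bound dollars
        self.donations = {}  # charity -> dollars received

    def deposit(self, trader, amount):
        self.balances[trader] = self.balances.get(trader, 0.0) + amount

    def settle_bet(self, winner, loser, stake):
        # Move the stake from the losing trader to the winning trader;
        # the money remains inside the market until someone cashes out.
        stake = min(stake, self.balances.get(loser, 0.0))
        self.balances[loser] = self.balances.get(loser, 0.0) - stake
        self.balances[winner] = self.balances.get(winner, 0.0) + stake

    def cash_out(self, trader, charity):
        # The trader never receives a check: the balance is donated.
        amount = self.balances.pop(trader, 0.0)
        self.donations[charity] = self.donations.get(charity, 0.0) + amount
        return amount

market = CharityPredictionMarket()
market.deposit("alice", 100)
market.deposit("bob", 100)
market.settle_bet(winner="alice", loser="bob", stake=40)
print(market.cash_out("alice", "Charity A"))  # 140.0
print(market.cash_out("bob", "Charity B"))    # 60.0
```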

At first glance, this might seem like a zero sum game, since every dollar you make takes away from someone else’s preferred charity. However, charity prediction markets can be mutually beneficial despite this. First, people may choose to donate to the prediction market itself, which can be given as a bonus to the best traders, drawing more activity to the market and multiplying the amount individuals donate. Second, charity prediction markets can elicit useful information, a public good in its own right. Third, charity prediction markets can act as a sort of donor lottery. Fourth, prediction markets move charitable dollars into the hands of those with more accurate beliefs, which hopefully means that the money gets better spent. Finally, by joining a charity prediction market, people send a signal that they are committing to donating to charitable causes; as the size of the market grows, it becomes more valuable to trade on, and can induce others to commit funds to the CPM as well.

I think that a prediction market of this sort could attract a significant amount of interest. The opportunity to grow your donation adds to the draw of prediction markets as a form of entertainment, and might induce more overall spending on charity.

But, there is an important objection I want to consider. Prediction markets rely on rewarding those with more accurate beliefs with more buying power in the future, but that reward is weakened in a CPM. First, though people with more accurate beliefs do get more influence in the market, they may periodically donate their winnings to charity, removing that influence. I am not sure to what degree this will affect the quality of the prediction market results1. Second, growing a donation may not offer as strong an incentive to succeed as the cash payouts of normal prediction markets, which could also affect the quality of the results. One way to sidestep this issue is to hire others to predict on your behalf. You could pay highly accurate forecasters a commission to grow your donation, thus creating a direct financial incentive to succeed. I am not sure that this fixes the issue entirely, so the importance of this effect will have to be considered more carefully if charity prediction markets ever become a reality.

There is another detail which seems crucial to get right before a charity prediction market is opened. The same combination of altruism, entertainment, and rewards for accuracy that might draw people to a CPM would also encourage a significant amount of partisan signalling behavior.

To give an example, imagine if hordes of U.S. Democrats placed bets on the outcome of the election, and hordes of U.S. Republicans did as well, with members of both sides pledging to donate to a charity their opponents would hate.

These kinds of dynamics would be a problem for several reasons. First, groups may have a strong emotional incentive to influence the outcome of real world events in order to prove their tribe right. Second, partisan signalling could draw money away from other places and into a zero sum game where groups donate to charities which directly oppose one another, cancelling out the value of their donations. Third, when people attach partisan value to the outcomes of market events, they may begin to attack the institution itself when things don’t go their way (sound familiar?).

Normal prediction markets will face some of these problems, but not to the same extent. In normal prediction markets, the ability to siphon money away from groups that are trying to signal partisan loyalty prevents this from being much of a problem. Perhaps this would happen in CPMs as well, but I am not sure2.

To mitigate this problem, I believe the charity prediction market should make all betting and donation decisions private so that it is hard for people to publicly demonstrate how they are betting and donating. This way, partisans can publicly root for their tribe while privately betting outside the party line and donating to their preferred charities. Making sure that the set of allowed charities is relatively apolitical will also help reduce strife. To prevent partisans attacking the validity of the market itself, a credibly neutral approach to decide outcomes is crucial. Finally, there must be a way to punish those who contest results in the market frivolously.

I have to think more about these issues, but CPMs still seem worth trying.

  1. Of course, traditional prediction markets have an analogous situation since traders can always go into cash.
  2. In any case, this kind of behavior would likely draw in a lot more activity, which may raise donations to non-partisan charities despite everything.

A Short Note on Ideas

You might notice that I post a lot of crazy ideas on this blog. This is a short note about what it means when I spend the time to write about an idea.

The act of considering new ideas is entertaining for me. This is why I strive to generate new ones as much as possible. The goal is to work my “idea muscle”, and so far this strategy has been successful for me, as I have seen my rate of idea generation grow steadily over the years. I have written before about the simple approach I use here.

Because of this quantity-oriented approach, I don’t take myself too seriously. I understand that there are many reasons why something might look good on paper, but fail in reality. This is probably true of most of my ideas, but I don’t mind if they mostly end up being misguided; worrying about being unconditionally correct is not a good way to be creative or have fun.

Despite this, I still take ideas seriously in general. Even a few good thoughts have the potential to make a positive impact, and the simple act of thinking through new ideas might encourage others to write about their own. This is why I spend time writing about them, even if I am not entirely confident they will work.

For these reasons, I try to keep an open mind towards all ideas, while always considering how they might be improved. I hope you do the same.

A Formula for the Value of Existential Risk Reduction

How important is existential risk reduction (XRR)? Here, I present a model to estimate the value of eliminating existential risks.

Note that this model is exceedingly simple; it is actually contained within other studies of existential risk, but I feel the implications have not been fully considered.

The Model

Let’s imagine that, if we did nothing, there is a certain per-century probability of human extinction p . This means that the discount rate (survival probability) is 1 - p = \gamma . This would mean that the total value of the future, discounted by the probability of extinction each century, is1:

V_{total} = \sum\limits_{t=0}^{\infty} \gamma^t*V(t)

Where V(t) is the expected value (however that might be defined) of each century t .

For the purposes of this post, our goal is to maximize V_{total} via XRR.

To do this, imagine we eliminate a specific existential risk (e.g. get rid of all nuclear weapons). This has the effect of raising \gamma by a multiplicative factor r , making the new discount rate r\gamma (note that 1 \leq r \leq \frac{1}{\gamma} since the overall discount rate can’t be larger than one).

Now, the new value of the future V_{total}^{'} is:

V_{total}^{'} = \sum\limits_{t=0}^{\infty} (r\gamma)^t*V(t)

Using this “risk reduction coefficient” r is a natural representation of XRR because different risks multiply to produce the overall discount rate.

For example, if there was a per-century probability of extinction from asteroids p_{a} = 1 - \gamma_{a} and a per-century probability of extinction from nuclear war p_{n} = 1 - \gamma_{n} , then the overall discount rate is \gamma_{a}*\gamma_{n} . So, if we eliminated the risk of nuclear extinction, the new discount rate is now \gamma_{a} . In this case, r = \frac{1}{\gamma_{n}} .

Let’s assume V(t) is a constant V each century2; then we obtain a simple formula for V_{total}^{'}:

V_{total}^{'} = \frac{V}{1-r\gamma}
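As a quick numerical sanity check on this closed form, here is a small Python sketch with arbitrary illustrative values ( \gamma = 0.99 , r = 1.005 , V = 1 , chosen so that r\gamma < 1 ):

```python
# Check V'_total = V / (1 - r*gamma) against a long truncation of the
# series sum_{t=0}^{inf} (r*gamma)^t * V. Values are illustrative.
gamma, r, V = 0.99, 1.005, 1.0
assert 1 <= r <= 1 / gamma  # r cannot push the discount rate above 1

closed_form = V / (1 - r * gamma)
truncated_sum = sum((r * gamma) ** t * V for t in range(10_000))

print(closed_form)    # about 198.02
print(truncated_sum)  # agrees to within floating-point error
```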

What happens to V_{total}^{'} when we reduce existential risks in a world where X-risk is high, versus a world where X-risk is low?

Imagine two scenarios:

  1. A world with nuclear weapons which also has a low level of existential risk from other sources (e.g. not many dangerous asteroids in its solar system, the AI alignment problem has been solved, etc.). \gamma = 0.99 in this world (that is, there is a 99% chance of surviving each century).
  2. A world with nuclear weapons which also has a high level of existential risk from other sources (e.g. many dangerous asteroids, unaligned AI, etc.). \gamma  = 0.98 in this world (that is, there is a 98% chance of surviving each century).

It is easy to calculate the value of the future in each scenario if nothing is done to reduce existential risk:

V_{1} = \frac{1}{1 - 0.99} = 100

V_{2}= \frac{1}{1 - 0.98} = 50

Next, let’s posit that the existential risk from nuclear war is 0.5% each century in both worlds. An action which completely eliminated the risk of nuclear war would increase \gamma to 0.995 and 0.985 in scenario 1 and 2 respectively. V_{total}^{'} in each case is:

V_{1}^{'} = \frac{1}{1 - 0.995} = 200

V_{2}^{'}= \frac{1}{1 - 0.985} = 66.\overline{6}

So in this example, the removal of the same existential risk is more valuable to the “safe” world in both an absolute and relative sense.
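These worked numbers are easy to reproduce; here is a small sketch using the constant- V formula from above, with V = 1 :

```python
# Value of the future before and after eliminating a 0.5%/century risk,
# in the low-risk (scenario 1) and high-risk (scenario 2) worlds.
def future_value(gamma, V=1.0):
    return V / (1 - gamma)

print(round(future_value(0.99), 2))   # 100.0  (scenario 1, baseline)
print(round(future_value(0.98), 2))   # 50.0   (scenario 2, baseline)
print(round(future_value(0.995), 2))  # 200.0  (scenario 1, risk removed)
print(round(future_value(0.985), 2))  # 66.67  (scenario 2, risk removed)
```

The safer world gains more in both an absolute sense (100 versus about 16.67) and a relative sense (2x versus about 1.33x).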

Let’s examine this in a more general way.

What happens to the value of the future in each world when we change the risk reduction parameter r from 1 (no reduction in risk) to 1.005 (completely eliminate a 0.5%/century risk) or higher? Let’s plot V_{total}^{'} versus r with two different initial discount rates 0.99 and 0.98 :

Once again, the trend is very different depending on the baseline level of risk!

The Implications

There are several interesting implications of this model.

First, as the previous example illustrates, the higher the level of X-risk, the lower the benefit of the same constant reduction in X-risk.

The second implication follows from the first: the more you have reduced X-risk, the more valuable further reductions are. You can imagine sliding along the upper blue line; as you go further right, the slope increases, meaning that the marginal X-risk reduction has an even higher value than the last.

What does this model imply for the strategy of existential risk reduction? With the accelerating nature of returns, it might be a good idea to find quick, early ways to reduce risk. Potentially, initial X-risk reductions would increase the value of subsequent reductions and create a snowball effect, where more and more investment occurs as the value increases.

There is a flip side to this effect. Innovation in certain areas can increase existential risk (e.g. developments in AI) and thus lower \gamma , reducing the value of XRR. This means that in order to maintain the value of the future, efforts to prevent human extinction have to keep pace with technological growth. This further increases the urgency of finding ways to reduce existential risk before technology advances too quickly.

Do the increasing returns and high urgency imply that some sort of “hail Mary” with regards to XRR would be a good idea? That is, should we look for unlikely-to-succeed but potentially large reductions in X-risk? Since the value of the future increases so quickly with increases in r , even a plan with a small chance of success would have a high value. I am hesitant to endorse this strategy since the model overestimates the upside of XRR by assuming that the lifetime of the universe is infinite3; but it seems like something which deserves more thought.
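To make the “hail Mary” intuition concrete, here is a toy expected-value comparison (all numbers are illustrative assumptions, not estimates of real interventions): a guaranteed modest risk reduction versus a 10% chance at a large one, starting from \gamma = 0.99 and V = 1 .

```python
# Compare a guaranteed modest risk reduction against a long shot at a
# large one, using V'_total = V / (1 - r*gamma). Numbers are illustrative.
gamma, V = 0.99, 1.0

def future_value(r):
    return V / (1 - r * gamma)

baseline = future_value(1.0)  # no risk reduction: about 100

# Plan A: certain, modest reduction
plan_a = future_value(1.002)

# Plan B: 10% chance of a large reduction, 90% chance of nothing
plan_b = 0.1 * future_value(1.009) + 0.9 * baseline

print(round(plan_a, 1))  # about 124.7
print(round(plan_b, 1))  # about 181.7: the long shot wins in expectation
```

Because V_{total}^{'} is convex in r , the unlikely-but-large plan can beat the sure thing even at a low success probability.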

Even this trivial formula has implications for XRR strategy, and I am interested to see if these considerations hold up in more complete treatments of existential risk. Aschenbrenner 2020 and Trammell 2021 present more sophisticated models of the tradeoff between growth, consumption, and safety. I hope to consider these in more depth, but that will be the focus of a future post.

This model may also be useful as a pedagogical tool by demonstrating the value of eliminating small risks to our existence while highlighting different strategic concerns.

  1. If you have issues with the fact that I am using an infinite series when the lifetime of the universe is probably finite, you can consider this formula to be an upper bound on the value of XRR.
  2. What happens in the more realistic case that V(t) is not constant? More concretely, we can expect that the value of the future will increase significantly with time. An increasing V(t) means that XRR is more valuable than the constant V(t) case. Since most of the value will occur in the future, it becomes even more important to survive longer. If V(t) peaks early or falls in the future, then XRR is less valuable since there is less reason to be concerned about surviving long term. Note that the other implications hold as long as V(t) is independent of XRR efforts.
  3. To see how this assumption raises the value of a “hail Mary”, note that V_{total}^{'} goes to infinity when r = \frac{1}{\gamma} .

The Disregarded

For every person breathing tainted air.

For every aspiring immigrant who can’t move to prosperity.

For every child who inherits an imperfect world.

For every animal suffering from a lack of compassion.

For all of the Disregarded, named and nameless, I hope to build a better world for you.

Who are the Disregarded? You are most certainly part of the Disregarded to some degree, but unfortunately, many of the others have it worse than you. This concept unites all of these problems, and more.

But what connects all of this suffering? What defines “the Disregarded”?

The Disregarded are the set of beings who are negatively affected by a decision, but who are not considered when making the decision, and hence, are disregarded1. The totality of these negative effects can be enormous, and warrants further consideration of how we might better represent these groups in our decision-making processes.

But first, let’s look at some examples of who the Disregarded are; it’s easiest to see by contrasting them with their opposite, The Regarded.

Meet Bob, who likes listening to loud music. He decides to buy a gigantic speaker and listen to Metallica through the night. Unfortunately, Bob’s neighbor Alice cannot sleep because of the loud music coming from next door. In this situation, Bob’s decision to buy a large speaker and play loud music came about by considering Bob’s preferences and acting on them (almost by definition). Alice, however, was not consulted when Bob decided to play loud music, and Bob bore none of the costs he imposed on Alice. This makes Bob a member of the ‘Regarded’ in this scenario, while Alice is a member of the ‘Disregarded’.

For another example, consider buying a meal. You go to a restaurant, look at the menu items and their prices, and order a hamburger. You and the restaurant owner make a mutually agreeable exchange of food for money. In order for the exchange to happen, your needs must be satisfied and the owner’s needs must be satisfied. Both of you are ‘Regarded’ in this case since your needs determine the outcome of the decision-making process. But what about the cow that was raised for your burger? Regardless of whether you believe the cow’s needs matter, they were certainly not considered when this transaction occurred, and neither party felt the repercussions of their decision on the cow2. Additionally, neither party feels the full costs of the greenhouse emissions produced by raising the cow. In my terminology, the cow and the people affected by climate change are ‘Disregarded’ in this case.

To be fair, the examples I give are somewhat contrived. Very few decisions are made which account for the needs of everyone involved, and this is usually okay! In many transactions, the needs of others are disregarded for good reason, since the consequences are often mild. Additionally, considering everyone’s needs imposes a cost which simply is not worth it in most cases.

Though the harm done by disregarding others is small in most situations, there are a few cases where ignoring others’ needs is very consequential.

Externalities are a classic example of a situation where disregarding others’ needs can have harmful effects. For example, a company may choose to pollute in order to make a product more cheaply. This company and its customers do not consider the people harmed by the pollution this produces. However, the harms from pollution (and other externalities) are large, which is why it is important for governments to regulate or tax these activities for the public good.

Government decisions weigh heavily on the Disregarded. For example, dictatorial regimes frequently ignore the needs of citizens when making choices.

But even in a democracy, it is staggering how many Disregarded there are. Current immigrants, potential immigrants, future generations, prisoners, citizens of other countries, and non-human animals are all heavily impacted by the decisions made by governments they cannot vote in. This is especially true for large, wealthy countries like the U.S., which influence trade, international treaties, and migration patterns.

If the U.S. government started taking into account the needs of its future citizens3, how would it change its approach to climate policy today? Or debt?

To state this more clearly, countries which choose to exclude potential immigrants, act against the interests of future citizens, and ignore the suffering of non-human animals are directly harming all of these members of the Disregarded. In all three cases, these choices have large, negative repercussions which the decision-makers themselves cannot feel.

I am not claiming that governments are beholden to these groups; the needs of their citizens are still important. But increasing people’s consideration for the Disregarded would lead to significant moral progress.

So what do we do?

From a policy standpoint, the needs of the Disregarded strongly favor climate change mitigation, reductions in pollution (and other negative externalities), looser immigration restrictions, animal welfare legislation, and long-termist policy.

From a more abstract standpoint, tools for thought such as counterfactual contracts can help us reason about making better decisions on behalf of the Disregarded. Considering the history of our expanding moral circle might help us empathize with distant people and animals. Old concepts like the golden rule should be reapplied in the context of all the ways our decisions can harm others without our knowledge.

People are becoming more aware of how their actions affect the individuals, animals, and ecosystems around us. But for things to improve, this awareness must translate into better decision-making. Fully considering the needs of the Disregarded can ameliorate their suffering, and better align collective decisions with collective desires. Simply seeing them is the first step.

  1. Essentially, this definition refers to the victims of negative externalities, but extends it to include non-economic cases, non-human animals, and future people. I find that having an actual name for this group helps connect many related problems and make them more visceral.
  2. Or, more precisely, the repercussions of their decision on the future cow which is created by the increased demand for hamburger.
  3. A common response to the idea that potential people matter is: “Why worry about future citizens? They don’t exist”. Debating this point is not the focus of this post, but I think there is a clear, appealing response. If we are entirely concerned with the welfare of current people, why worry about climate change, ecological damage, economic growth, innovation, or stable institutions? Or, for that matter, save money? Societies which ignore the needs of future citizens fall prey to the same problems myopic individuals do, and fail in the long term. I hope to elaborate on the importance of taking future generations into account in a later post.

States Should Try Harder To Draw Talent

Attracting scientific, engineering, and business talent from across the world has been a key component of the past success of the U.S. and other developed nations. Today, high earning opportunities and a prestigious higher-education system make the U.S. an attractive destination for high-skilled immigrants.

Having more talent accelerates economic growth, increases innovation, and helps support an aging population, which means that encouraging high-skilled immigration is good policy for these reasons alone.

But attracting talent is also a peaceful way to handle authoritarian regimes. The U.S. already draws top talent from rivals like China, Russia, and Iran to its universities. But oftentimes these high-skilled immigrants return to their country of origin, not because they prefer to leave the U.S., but because they cannot get a work visa. Doesn’t it seem strange that the U.S. trains talented students and then forces them to return to its geopolitical rivals?

By actively drawing and retaining talent from these countries, democratic nations can significantly increase their relative rate of innovation (for example, moving one genius from China means that China has one less genius and the U.S. has one more, a 2-genius difference overall). The U.S. would effectively force these regimes to find ways to retain talent, which may come in the form of better public goods, higher salaries, or more legal rights. Note that this incentive runs both ways, as countries like the U.S. will need to compete with other countries to retain talent.

These considerations suggest that the U.S. and other developed countries should make it significantly easier for high skill workers to immigrate, and push for international accords where states mutually agree to allow their citizens to leave.

But I care about states attracting talent for other reasons.

First, because the free movement of people seems like a good thing in and of itself.

Second, because allowing people to move to the places they are best suited can accelerate innovation and entrepreneurship, thus growing the world economy.

Third, competition over talent naturally lends itself to reducing immigration restrictions, which helps move the world towards open borders.

Fourth, competition would also make governments more responsive to people voting with their feet. They might design policies which make finding a job easier, or build more public parks. This is the Tiebout model in a nutshell, and a major motivation for my interest in the Archipelago.

But under this system, won’t governments only be responsive to high-skill workers’ needs while ignoring their citizens? No. First, note that the number of high-skill immigrants will remain small relative to the population of a country. Second, high-skill workers do not exist in a vacuum; they are complementary to medium- and low-skill labor. For example, a doctor would be useless without nurses to provide care, managers to run the hospital, construction workers to build the hospital, and restaurants to feed everyone. In order to create a nation that a doctor would want to move to, the state needs to create an inviting society for the entire network of people the doctor depends on, most of which is low- and medium-skill labor. Because of this complementarity, governments will naturally want to draw and retain low- and medium-skill labor as well.

Clearly, accepting high-skilled immigration in developed countries is a key step towards reducing the power of authoritarian regimes, accelerating economic growth, and producing competitive governance. It is important to find avenues to effect this change and secure buy-in from groups which have traditionally been skeptical of immigration in developed countries.

A Note on the Organization of the Blog

A quick note on what I have written and plan to write.

So far, I have tended to focus on three things:

  1. Outlining my worldview
  2. Identifying important, overlooked areas of innovation
  3. Highlighting major problems to solve in the near term and long term

Most of this writing has been in the form of “pointing to things that are interesting” rather than actually tackling questions in a concrete or rigorous way.

My goal has been to paint in broad strokes my point of view and highlight why certain areas are essential to progress. Once that is accomplished, I will write more detailed posts examining specific ideas, research, or policies in addition to the kind of stuff I have been writing so far.

Stay tuned!

Challenges of Interstellar Expansion

We rightfully focus on near term innovations which will make the world better, but today I want to zoom out and look at what problems have to be solved for humanity1 to survive and expand into space. I propose a broad plan for how this might be done, examine the steps needed to get there, and sort the existing challenges into those I consider mostly solved (or trending towards solved) and those I consider unsolved.

Note that by focusing on interstellar expansion in the long term, I am ignoring many near-term, critical problems. This is not because I am dismissive of these problems’ importance! However, when thinking on the scale of millennia, the priorities change significantly.

How did I categorize things? I decided to group different challenges as “mostly solved” versus “mostly unsolved”. I did this by asking the question: “If no special effort is made, does this problem seem like it will be solved ‘by default’ as part of a larger effort to explore the solar system?”. For example, though the cost to launch objects into space is pretty high right now, recent competition in space launch has made prices much more reasonable, and suggests that this problem is trending towards being solved on its own. This distinction seemed natural to me, since it is important to know which fields might limit interstellar expansion in the long term if nothing is done to accelerate them.

Determining if a challenge seemed solvable also depended on whether the task required fundamentally new technology, or simply a scaled up, sophisticated version of existing technology. In this framing, building (say) large information processing systems looks tractable since this task only requires scaled up versions of the supercomputers we use today. This is not to say that it will be easy to create these systems, but rather, that humanity already has completed important milestones and is on the path to reaching these goals.

These are not objective categorizations, and I encourage others to complete a similar analysis. Determining where there has been inadequate progress can help better direct efforts towards expanding into space.

The Challenges

It seems to me that humanity will need to complete the following challenges in order to expand beyond our solar system (they are approximately chronological):

  1. Manage existential risks
  2. Develop affordable transport into space
  3. Develop industry and communications in space
  4. Build self-sustaining colonies within our solar system
  5. Capture a large percentage of the sun’s energy
  6. Store large amounts of energy
  7. Build massive information processing systems
  8. Invent efficient means of interstellar travel
  9. Create self-sustaining spaceships for interstellar travel
  10. Develop general purpose techniques to adapt to new solar systems
  11. Agree on a unified system of ethics and apply it to our situation (this should happen sooner rather than later, but could, in theory, come at the end)

Some other things which would be nice to have but are not strictly necessary:

  • Highly sophisticated science and engineering
  • Effective coordination and governance
  • Cryonics
  • Brain Emulations (Em’s)
  • Controlled fusion

For the remainder of this post, I divide the listed problems into challenges we are moving towards solving and those which are not close to being solved.

Mostly Solved Problems

I label these problems “mostly solved” because the path to success is pretty clear, there are not too many unknowns, and there are no fundamental limitations in our way. Despite this, I am aware that an extraordinary amount of work still needs to go into these areas! In fact, we may not see some of these problems solved in our lifetimes. But these challenges look tractable given that we will have centuries to work on them.

Manage Most Existential Risks

See my post on existential risks. The overall conclusion I draw is that only AI risk and the risk of global totalitarianism seem like major, unsolved risks. More on these later. The other existential risks seem unlikely, solvable, or both.

Develop Affordable Transport into Space

Space transport has become much, much cheaper. As the technology continues to develop over centuries, space tethers, renewable fuels, and manufacturing in space will drastically reduce launch costs.

Develop Industry and Communications in Space

Satellites already support a substantial amount of communications in space, but these will need to expand throughout the solar system. Developing general purpose industry in space is a much larger step. This would require advances in robotics and the creation of space colonies. There will be a lot of new challenges to doing this in space, but the problems seem tractable with enough trial and error. Casey Handmer’s blog has more qualified, technical discussion about how this might be done than I could possibly provide.

Build Self-Sustaining Colonies Within Our Solar System

Space manufacture might already require colonies in space, but taking the extra step of developing resource independence from Earth is crucial. This is because self-sufficient colonies can serve as a trial run for interstellar ships. Developing this capability will require solar energy, space mining, space manufacture, fuel-from-air, and yeast bioreactors to eliminate reliance on terrestrial support. Like with space manufacture, I do not see any major reason this can’t be achieved given enough time.

Capture a Large Percentage of the Sun’s Energy

To reach the next level on the Kardashev scale, we need to harvest much more of the sun’s energy. To do this, we can simply repurpose solar panel technology to create a Dyson swarm. Controlling the flight of the swarm or inventing better systems for energy capture are hard, but not infeasible.
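To get a feel for the scale involved, here is a rough back-of-the-envelope sketch (using the standard figures of roughly 3.8 × 10²⁶ W for solar luminosity and roughly 18 TW for current world power consumption; the perfect-efficiency assumption is mine) showing how tiny a fraction of the sun’s output humanity uses today:

```python
import math

SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the sun, watts
AU_M = 1.496e11               # Earth-sun distance, meters
WORLD_POWER_W = 1.8e13        # rough current world power consumption (~18 TW)

# Solar flux at Earth's orbital distance (this recovers the familiar
# "solar constant" of ~1360 W/m^2, a good sanity check)
flux_w_m2 = SOLAR_LUMINOSITY_W / (4 * math.pi * AU_M**2)

# Collector area at 1 AU needed to match all of humanity's current power
# use, assuming (unrealistically) perfect conversion efficiency
area_m2 = WORLD_POWER_W / flux_w_m2

print(f"Flux at 1 AU: {flux_w_m2:.0f} W/m^2")
print(f"Collector area to match world consumption: {area_m2 / 1e6:.0f} km^2")
print(f"Fraction of total solar output used today: "
      f"{WORLD_POWER_W / SOLAR_LUMINOSITY_W:.1e}")
```

The numbers that fall out (a collector roughly the size of a small country already matches all of civilization’s current power use, while humanity captures on the order of 10⁻¹⁴ of the sun’s output) illustrate why a Dyson swarm represents such an enormous jump up the Kardashev scale.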

Build Massive Information Processing Systems

Computers on Earth already have a large information processing capacity. Simply scaling up existing systems on Earth is fine for now, but in the very long term there may need to be a way to compute on more common substrates such as hydrogen or black holes. This was hard to categorize, since scaling up existing computer systems for space will be viable for a long time, while creating something which can compute using a black hole is much harder. I ended up deciding to put it in the mostly solved category since it is possible that scaling existing approaches to computation might provide all the capacity we need.

Create Self-Sustaining Spaceships for Interstellar Travel

Projects like the ISS are a good start but ships will need more protection from radiation and flying objects during interstellar travel. Lessons learned from developing self-sufficient colonies in our solar system will be essential to this project. However, this challenge is significantly harder than making independent colonies, since interstellar ships will need to protect against impacts and radiation while functioning for centuries. Regardless, this still feels like a solvable problem.

Highly Sophisticated Science and Engineering

I expect that science and engineering will keep progressing, and I am confident that, with centuries of work and solutions to the other problems on this list, this will not become a major binding constraint.

Effective Coordination and Governance

Yes, I know, lots of people have complaints about modern governance (often for opposing reasons). But some places actually can govern well. Centuries of building better institutions and testing out different forms of governance will produce better systems of cooperation than we have today.

Problems Still to Solve

Here are the problems I placed in a distinct category from the “mostly solved” ones. These seem much harder, either because there is not currently a clear path to success, or because we have so little direct experience to learn from.


Agree on a Unified System of Ethics

There are many important questions in ethics which we haven’t even begun to answer, let alone come to consensus on. A few examples:

  • What moral value should be assigned to animals, AI’s, and Em’s?
  • How should we make decisions regarding population ethics?
  • Is it moral for society to continue and grow?
  • Should we seed new life elsewhere?
  • If we discover alien life, should we leave it alone?

These questions are more slippery than the engineering problems of the previous section. It is hard to determine which sub-problems need to be solved or if any progress has been (or will be) made, putting this problem into the unsolved category.

Cryonics


Here I am considering the difficulty of developing a system which can preserve a person and successfully reanimate them. This is different from the modern practice of cryonics, which focuses on preserving people while relying on future medical technology in order to revive preserved patients.

Cryonics strongly complements the development of Em’s. New scanning techniques, better understanding of the brain, and experimentation on different fixation techniques are all needed. Designing better legal infrastructures to support cryonics seems important too. The fact that there are so many unknowns in cryonics, and the fact that ethical experimentation is extremely hard (who wants to be the first trial run for resurrection?), suggest that progress in this area could take much longer than the “mostly solved” challenges.

Brain Emulations (Em’s)

Brain emulations would be a huge innovation, allowing us to create many happy lives with fewer resources and provide something akin to artificial intelligence. Unfortunately, it seems like Em’s are very far off; there are many research questions which need to be answered before they become a reality. We know so little about how the brain works, and it is uncertain what degree of detail is required for an accurate simulation.

Even if they are built, they raise several important questions. First, how do Em’s change our systems of governance and ethics? Second, how can we prevent massive amounts of suffering from occurring once brain emulations exist?

The technical, legal, and ethical challenges brain emulations present indicate that progress towards creating Em’s will be very hard, and their invention will present new challenges.

Store Large Amounts of Energy

Though Dyson swarms can quickly generate lots of energy from a star, it might be hard to utilize all of it immediately. Instead, there should be a system to store the captured energy so that it can be used later, like a massive battery. It may be possible to store energy as chemical fuel, but this approach is limited by the total mass of chemicals we have access to. Rotating celestial bodies could potentially store large amounts of energy, but once again, this approach has scale limitations. The ultimate battery might be a black hole, but we have no way of determining if this is viable until we get near a black hole that we can experiment with.

Invent Efficient Means of Interstellar Travel

Relativistic time dilation means that passengers could reach the edges of the galaxy in a few decades of experienced time, but the fuel requirements and collision dangers are enormous. Methods of propulsion need to become more efficient at converting energy into thrust. Additionally, complicated route planning will need to be used to hasten the colonization of new worlds. Is it better to fly straight to a destination with a large amount of fuel? Or is it better to stop at many intermediate stars and refuel along the way? Is there an efficient way to slingshot around different stars to reduce fuel requirements?
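To make the time-dilation point concrete, here is a small sketch using the standard relativistic rocket equation for a one-way trip at constant 1g acceleration (accelerate for the first half, decelerate for the second half); the trip profile and the example distances are illustrative assumptions on my part:

```python
import math

C = 2.998e8       # speed of light, m/s
G = 9.81          # 1 g acceleration, m/s^2
LY_M = 9.461e15   # one light-year, meters
YEAR_S = 3.156e7  # one year, seconds

def proper_time_years(distance_ly: float, accel: float = G) -> float:
    """Shipboard (proper) time for a one-way trip: accelerate at `accel`
    to the midpoint, then decelerate to arrive at rest. This is the
    standard relativistic rocket result:
        tau = (2c/a) * arccosh(1 + a*d / (2c^2))
    """
    d = distance_ly * LY_M
    tau_s = (2 * C / accel) * math.acosh(1 + accel * d / (2 * C**2))
    return tau_s / YEAR_S

# Rough astronomical distances, for illustration
for name, d in [("Alpha Centauri", 4.4),
                ("Galactic center", 26_000),
                ("Across the galaxy", 100_000)]:
    print(f"{name} (~{d:,} ly): {proper_time_years(d):.1f} years aboard")
```

Because proper time grows only logarithmically with distance, a sustained 1g ship reaches nearby stars in a few shipboard years and crosses the entire galaxy in a couple of shipboard decades; the catch, as noted above, is that the energy required to sustain that acceleration is staggering.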

Additionally, the longevity of the passengers is an important constraint, which is why the development of cryonics and Em’s would be very useful for interstellar travel.

At this stage, there are too many unknowns for us to make much progress on the logistics problem and many existing propulsion methods are too inefficient for interstellar travel.

Develop General Purpose Techniques to Adapt to New Solar Systems

It is hard to prepare for living in new solar systems when we haven’t fully adapted to our own solar system. Given our inability to prepare, progress on this challenge will have to wait on the solutions to many other problems.

AI Risk

The difficulty of mitigating AI risk has been thoroughly detailed in other places so I won’t reiterate here. My impression is that we do not have a good handle on what needs to be done or what success looks like (though there has been a lot of amazing work in this field). This is a particularly tricky problem because we must get it right the first time without complete knowledge of what an artificial general intelligence looks like.

Risk of Global Totalitarianism

In a previous post on existential risk, I noted that some form of global totalitarianism could prevent humanity from going into space and thus curtail society’s potential. How do we prevent this sort of thing from happening? Like AI risk, we have little direct experience with this problem and cannot afford even a single failure. Adding to the complexity, certain forms of global governance and coordination seem essential. But how do we prevent global systems of governance from becoming too powerful? What does a transition to global totalitarianism even look like? Similar to other hard problems, the number of unknowns makes this a distinctly difficult problem.

Controlled Fusion

Controlled fusion is nice because it might provide cheaper energy and better rockets. But more interesting is the possibility of using fusion for nucleosynthesis (e.g. making metals from hydrogen) since we probably will need heavier elements than hydrogen and helium to build anything interesting in the universe. Unfortunately, we don’t have any way of experimenting with the conditions needed to make heavy elements in the near term (which are much higher than the conditions required for simply producing energy), so this idea will remain on the back burner for quite a while.

Bonus: Unknown-unknowns

This analysis assumes certain basic facts about reality. But what if we discover something fundamentally new? For example, what if we really are in a simulation? Or figure out how to travel between multiple universes? Or travel into the past? Or harvest dark energy? Though these results are unlikely, they would entirely change the set of problems which need to be solved. We should be on the lookout for these kinds of paradigm shifts and be ready to change plans. Determining where and how to search for these revelations is quite hard.


Having identified some key challenges, which ones (if any) are the most important to work on right now?

Work on many of these problems is not possible in the present. For example, we can’t figure out how to store energy in a black hole until we can actually experiment on one (the nearest known black hole is approximately 1000 light-years away). However, these hard problems seem like they could benefit from more contemporary effort2:

  • AI risk
  • Global totalitarianism risk
  • Ethics

All of the problems I place in the “mostly solved” category would also benefit from further study and are very important to work on. But I find this categorization interesting because it suggests that some problems might be solved on their own as humanity works to explore the solar system, while others may need an additional push.

But let’s step back from the details for now, since I am sure that others will have different opinions on which problems present the largest challenge.

The most important takeaway from this post is that others should identify concrete challenges facing civilization in the long term. Having people consider and debate which problems are truly difficult (as opposed to problems which “merely” require massive engineering effort) will better direct our collective efforts.


1. Though I use the word “humanity”, I don’t imagine this process involving only modern humans. I think it is likely that AI’s and Em’s will be part of this project as well.

2. In addition, the other hard problems might benefit from more theoretical work.