Mapping the Good

For the 50th post, I thought it would be good to illustrate my worldview a little more directly. Instead of writing this out, I made a map:

(You may want to open the image in a new tab to see it more clearly)

Let’s walk through what I’m showing here. The idea is to demonstrate how different concepts flow into each other to achieve some overall goal. In this case, I am illustrating how I think important concepts contribute to overall ethical value (without trying too hard to pin down what that means for now).

For example, there is a bubble for “reducing AI risk” which points to “reducing x-risk” because I believe that reducing AI risk is a key way we can reduce overall existential risk. The “reducing x-risk” bubble then points to “ethical value” because I believe reducing x-risk leads to more happy lives in the future (in expectation).

Some notes:

  • This approach doesn’t cover everything of value, but I tried to include the major concepts, connections, and examples here
  • All arrows have a “conversion efficiency” which is also important for creating ethical value
  • Good ideas and implementation can benefit virtually every bubble
  • For clarity, not all connections between concepts are shown
  • Not all important bubbles are shown, just major examples

Colors group similar bubbles as follows:

  • White for specific examples
  • Blue for consumption goods
  • Green for major societal inputs
  • Orange for concepts

I want to stress that this map is not fixed in my mind. I anticipate finding new topics of importance and new connections as time goes on. I also expect that others will have quite different maps of how to achieve the good. They might even have entirely different ways of communicating their mental model of how to make a better world. Regardless, I encourage you to try something similar; I would love to see what you come up with.

Three Paths to Existential Catastrophe from AI

[EDIT 6/17: A previous version of this post had the title “Three Paths to Existential Catastrophe from AI”. I changed it to more clearly convey my point]

In the past, I have pointed out that AI risk is one of the most significant near-term problems we face as a society.

It is particularly important to focus on how AI might present an existential (civilization-ending) risk. As I have noted before, this is because civilization can recover from catastrophic events, while existential events are fundamentally irreversible, preventing potentially billions of years of future society.

Of course, AI also presents a catastrophic risk and significant efforts should be made to mitigate this. But focusing on existential risk can give us a clearer view of the largest dangers of AI and how to deal with them.

Since intelligence is not itself an existential risk, a malicious AI would either need to compose several catastrophic events together to create an x-risk (this is exemplified by the Skynet Scenario below) or use its intelligence to bring about an existential event.

Given this, I see three important ways an AI can induce an existential catastrophe:

The Skynet Scenario: This is the most vivid route by which AI could become an x-risk. Essentially, like in the Terminator movies, AI gains control of autonomous weapons and physically attacks humans, using superior strategy to eliminate the human race. This includes the use of drones, cyber-attacks, nuclear weapons, asteroids, engineered pandemics, and political tactics designed to incite wars.

This scenario will become technically possible soon and could occur over a very short timescale; both of these factors raise the importance of this threat. Autonomous weaponry is already being developed, and states may face competitive pressures to deploy it. However, I personally believe this scenario is pretty implausible. It seems that it would require an extreme degree of military automation combined with an omnicidal AI which is significantly more powerful than its opponents.

Regardless, this problem has a relatively straightforward set of solutions. First, AI-controlled weaponry should be banned by international treaty (especially automated nuclear weaponry). Second, protection against various catastrophic risks can make it harder for malicious AIs to attack humanity.

The AI-Totalitarian Scenario: This is a path by which AI could present an x-risk in a much more subtle manner. Rather than attacking humanity, the AI gains our trust and slowly takes over the functions of government. Since an AI may be extremely patient, and multiple AIs can engage in value handshakes, one can imagine many AIs agglomerating power over centuries as they slowly take control of government.

How would this present an x-risk? Once AIs gain significant control of government, they might form a world government which can enforce a ban on space travel, severely curtailing human expansion. Note that controlling the functions of government also gives an AI a better platform from which to physically attack humanity.

This would constitute a much slower catastrophe and would be harder to notice until it is too late. This scenario seems relatively plausible as we can imagine ceding more and more of our decisions to an AI as we grow to trust it.

I hope to write more on how to deal with this AI-totalitarianism problem, but for now, we can take steps to prevent global government from forming and protect people’s right to access space.

The Space-Race Scenario: Even if an AI never attacks humanity and never prevents human space exploration, it can still greatly limit our future prospects. This is because an AI might quickly expand into space and vastly reduce the galactic resources available to us. The timeline for this sort of catastrophe depends on how soon we develop space-faring technologies and how feasible interstellar travel is in general.

International regulation on AI control of space technologies can help mitigate this risk. Additionally, carefully observing the behaviors and innovations of AI space explorers might allow us to “strategy-steal” and expand into space as fast as the AI can (though the link includes good points on why strategy stealing might not work).


Focusing on the routes by which AI increases existential risk gives a clearer understanding of what to do to prevent a civilization-ending event.

But I am not entirely sure which issue deserves the most attention. Though it seems unlikely, the Skynet Scenario is also the most near-term problem and thus may be the most important to work on. The AI-Totalitarian Scenario seems more probable and harder to fix, but may take centuries to occur, indicating that this problem might be better suited to future generations of AI-safety researchers. In some sense, the Space-Race Scenario seems like the least bad option. Even if a competing AI somewhat curtails our prospects in space, the possibility of stealing their innovations and thus accelerating our own expansion might be a net benefit. But this hinges on how feasible it is to copy the AI’s strategy.

Though this list seems to cover the major possibilities, it is dangerous to believe that we have everything figured out. To be safe, we should always assume that there are unanticipated ways AI might pose an x-risk. In addition to working on the problems above, we should continue developing AI safety techniques, create general purpose failsafes, and mitigate other sources of catastrophic and existential risk.

Wealth Fraction Dominant Assurance Contracts

Dominant Assurance Contracts (DACs) are a means of funding public goods between individuals without the need for government intervention.

However, DACs make everyone pay the same amount into a public good. This is a problem because it means that DACs don’t account for the strength of people’s preferences. For example, even if one person values a public park at $100 while another person values it at $10, both would pay the same amount on a dominant assurance contract to build a park. This means that DACs leave money on the table which could have gone towards building the good.

It also strikes me as unfair that extremely wealthy individuals would pay the same amount into a public good as poor individuals. Finding an approach which feels fair to participants is a crucial step towards designing mechanisms with legitimacy.

Fortunately, these problems can be fixed with a small tweak: instead of paying the same dollar amount, people might pay the same fraction of their wealth to the public good. I call this system a wealth-fraction dominant assurance contract (WF-DAC). Here, individuals with more wealth, and thus higher willingness-to-pay, will spend more for the same good1. This makes things feel more fair, as wealthier individuals pay proportionally more for the good.
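As a rough illustration of the payment rule (this is my own sketch, not an existing implementation; the helper name and the dollar figures are hypothetical), the contract can pick the smallest common wealth fraction that covers the good’s cost, so payments scale with wealth:

```python
def wf_dac_payments(wealths, cost):
    """Payments under a wealth-fraction dominant assurance contract:
    everyone pays the same fraction f of their wealth, with f chosen
    just high enough that total payments cover the good's cost."""
    total_wealth = sum(wealths)
    if cost > total_wealth:
        raise ValueError("cost exceeds total wealth; the good cannot be funded")
    f = cost / total_wealth  # common wealth fraction
    return f, [f * w for w in wealths]

# Hypothetical example: three people fund a $1,000 park.
fraction, payments = wf_dac_payments([100_000, 20_000, 5_000], 1_000)
# Each pays 0.8% of wealth, so payments are roughly $800, $160, and $40.
```

Under a plain DAC all three would pay the same dollar amount; here the wealthiest participant contributes twenty times as much as the poorest for the same good.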

But there is a problem with this system. How exactly do we measure people’s wealth? This can be quite tricky. For example, how do we consistently assess the value of different assets such as investments, art, or real estate? If someone has high income but an empty bank account, should they really pay nothing, or should future income be factored into wealth? Worse, people will have an incentive to under-report their wealth in order to pay less into public goods.

There are a couple of possible solutions to this problem. For example, it may be possible to create contracts which require people to reveal their wealth as stated on their tax returns. Or, instead of relying on tax returns, one might imagine a smart contract which uses people’s current cryptocurrency holdings or net income as a measure of wealth.

These solutions are somewhat ad hoc, since there are still ways for people to misrepresent their true wealth. But there is a key case where the issue of wealth reporting becomes less significant: public goods finance between countries.

As I have noted before, public goods finance between countries is an essential way we might tackle global problems. Applying the WF-DAC to the country level, there might be an international agreement which requires all countries to devote one millionth of their GDP to pandemic prevention efforts.

But wouldn’t countries still have an incentive to under-report their GDP? They would; however, it is much harder for a country to hide trillions of dollars from other countries than it is for a wealthy individual to hide assets from peers. Overall, state efforts to hide wealth in order to under-fund public goods would amount to a rounding error; this system would still vastly increase global public goods spending despite attempts to hide true GDP.

Ideally, simple extensions of dominant assurance contracts like this would have been thoroughly examined by now. The fact that (to my knowledge) WF-DACs have not been discussed illustrates that there are many unexplored possibilities in the field of public goods finance, and more experimentation is needed to find robust ways of providing for the common good at different scales.

  1. This also has the nice property that if everyone has utility logarithmic in wealth, individuals sacrifice the same amount of utility in order to fund the public good.
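The footnote’s claim can be checked directly (a sketch under the footnote’s assumption of logarithmic utility; the function name is mine): the utility an agent sacrifices by paying a fraction f of wealth W is ln(W) − ln((1−f)W) = −ln(1−f), which does not depend on W.

```python
import math

def log_utility_loss(wealth, fraction):
    """Utility sacrificed by an agent with u(W) = ln(W) when paying
    `fraction` of their wealth: ln(W) - ln((1 - fraction) * W)."""
    return math.log(wealth) - math.log((1 - fraction) * wealth)

# The loss equals -ln(1 - fraction) at every wealth level, so equal
# wealth fractions mean equal utility sacrifice across participants.
losses = [log_utility_loss(w, 0.01) for w in (1_000, 1_000_000, 10**9)]
```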

Links #3

Freezing ovarian tissue to extend fertility.

Cash Transfers as a Simple First Argument

SoGive is not updating charity recommendations in light of the recent malaria vaccine news. Since I have made the point before that malaria vaccines reduce the value of bednets, it is worth considering the arguments here. Essentially, bednets still have value right now since it takes a while to roll out vaccines, so we shouldn’t neglect the people we can help today by waiting for the vaccine.

Global Public Goods: A Survey

Things Marius Hobbhahn has changed their mind about.

Does Technology Drive The Growth of Government? Related to one of the arguments I made about growing state power in Against World Government.

A course on “The Great Problems”. Perhaps this is a class all people should take in high school.

A website which computes the probability of achieving AGI by certain dates.

I enjoyed this book review of George’s Progress and Poverty. I think a lot more work needs to go into how to practically implement a land value tax and how the principle applies to other domains (For example, George considered solar energy to be part of “land”. How might this change interstellar colonization?).

Is power-seeking AI an existential risk?

C4 rice’s first wobbly steps towards reality

I went through a lot of posts on the blog Reflective Disequilibrium recently; here are some I particularly enjoyed:

Minimal Education for Modern Citizens

Education is long overdue for a re-prioritization of what should be studied; modern citizens need to start with a better understanding of today’s problems, technologies, and major policy debates.

Democratic systems in particular rely on the knowledge of citizens to make good decisions. Education is a key institution for giving people the tools they need to make informed choices at the ballot box. The point is not to turn every citizen into an expert, but rather to ensure that every person is equipped with the basic ideas needed to get up to speed. On top of this, schooling must provide a framework for argumentation that encourages the best ideas to spread through the population.

Beyond voting, knowing how the modern world works can help individuals take steps to make it better. A population needs to be equipped with both an understanding of modern problems as well as diverse technical knowledge in order to solve the complicated issues we face today.

Though there are a lot of things it might be useful for students to learn, I want to focus on the minimal set of classes a high-school student would need in order to understand the technology and issues of the day. Setting minimum standards of education also makes it more feasible to teach every citizen these essential topics. It also avoids forcing students to learn excessive amounts and leaves time in their schedules so they may participate in additional courses of their choosing.

Given these desiderata, I propose a new list of required classes in the high school curriculum, in roughly the order they should be taught:

  • English: Lots of reading and writing are crucial for students to participate in discourse, communicate, and learn new things.
  • Statistics: This underpins the understanding of data from all fields and helps people identify how numbers might be manipulated for political ends.
  • Linear Algebra: Specifically, all students should learn how to solve equations of the form “mx + b = c” in addition to simple systems of linear equations along with a graphical interpretation (this is significantly less than current high school algebra, but it is the most important bit for everyone to know). In general, linear algebra underpins machine learning, quantum physics, economics, and many other fields, so giving students an introduction to it gives them the opportunity to contribute to these areas in the future.
  • Derivative Calculus: This can be significantly pared down compared to the existing curriculum. Showing students how to compute derivatives of polynomials, along with the “slope of the tangent line” interpretation of the derivative is the key idea of calculus. This is a good starting point for learning lots of useful mathematics, without the baggage of memorizing the derivatives of special functions.
  • Biology: This sits at the heart of key debates today over creationism, vaccines, and healthcare. A basic understanding of biology can help citizens better understand medical advice and biotechnology.
  • Physical, Mental, and Relationship Health: Though some form of health education is required in most developed countries, existing education is quite limited and there are many missed opportunities. First aid, nutrition, sex-ed, good sleep habits, exercise habits, mental health education, and social health are all crucial components of living a good life and should be taught to everyone.
  • Modern History: While studying other periods of history is useful, modern history is most relevant to international relations and political science. Understanding the political and economic changes in the last few centuries highlights the amazing progress that has been made and can help students understand modern international relations.
  • Economics: Of course, economic thinking is crucial to policy making. All citizens should know where markets work and don’t work and understand the role of policy in creating equitable markets which serve everyone. Part of this course should also teach financial literacy.
  • Comparative Civics: Current civics seems to focus heavily on how the existing government works along with justifications for the current system. Focusing on one form of government can reinforce the existing system even if the course content itself is somewhat critical of the status quo. Spending time comparing different political systems across the world, pointing to what outcomes they have achieved, and highlighting key debates in political science can give students a more holistic view of policy debates.
  • Programming and Data Science: Programming is a highly valuable tool that virtually everyone could use to improve their lives in some way. General programming knowledge can increase the supply of programmers and help citizens better understand tech policy. This class could be taught in Python, which is easy to learn and widely useful.
  • Speech and Debate: With students now versed in the basics of modern science, technology, and statecraft, they should have a safe space to explore those ideas and consider how to improve upon them. Speech and debate can help all citizens analyze arguments, produce new ideas, and improve communication, all while reinforcing norms of free speech critical to the functioning of a democracy.

With the exception of English (which might take three years), each item could be adequately taught in a one-year class. Note that the total number of classes (13 class-years) is much less than the typical high school curriculum (~24 class-years). The remaining time does not have to be filled, but it might be best to allow students to choose their own electives for the remainder or spend the time exercising.

The curriculum is only a small part of how to make education better, but it seems like there are clear ways to improve schooling by refocusing on modern issues. The value of most of these classes is apolitical and can significantly improve our discourse while better preparing students to inherit global challenges.

Kittens and Counterfactuals

Alice just adopted a kitten. It’s fun having a new pet, but there is one problem: he keeps clawing up Alice’s couch! Despite the scratching posts, new toys, and play, he still likes to scratch her sofa.

Her kitten loves sleeping on the couch as well; it’s perfectly positioned next to the window to catch some afternoon sun.

She tried putting a cover over the sofa to protect it, and though it prevented her cat from scratching, he no longer enjoys curling up on it like he used to.

Alice is a reasonable person; there might be a way to get the best of both worlds. She can just talk to her cat and explain the situation, right? Alice says “Kitty, I know you love napping on the sofa without the cover, but the scratching has to stop. If you promise not to scratch, I will remove the cover.” But the cat just looks at Alice like he doesn’t understand! She tries removing the cover to see if he will choose not to scratch, but the kitten goes right back to scratching. She decides to keep the cover on indefinitely.

Alice shakes her head and thinks, “if only kittens understood counterfactuals, we wouldn’t be in this situation”.

Why write this story? Because it highlights an overlooked fact: unlike humans, most animals do not have the ability to connect their behavior with long-term or distant consequences.

In other words, the kitten does not understand the counterfactual “If I don’t scratch, the sofa will be comfy”. In the case of cats, this would be true even if you put the couch cover on soon after they scratched, because they have a hard time relating the two events. Other animals exhibit higher intelligence and might understand this, but there are other simple situations which animals cannot learn. If you delay the time between an action and a reward by a few hours, a dog will not be able to learn the action, even though it is trivial for humans to relate actions now with results hours later. We demonstrate this ability every time we pack a lunch.

Who cares? I’m not writing this to insult animals’ intelligence; they clearly exhibit extraordinary abilities despite not being able to prepare lunch. Rather, I want to consider the possibility that people are missing important counterfactuals too!

What if we are missing an opportunity to change our behavior in order to get what we want? Perhaps, compared to more intelligent beings, we are like the kitten in the story, ignoring a better reality that we could have reached if only we changed how we act.

Let me give an example of what this might look like. Cats have trouble connecting action and reward over a period of minutes, while humans can plan decades ahead; we call this “saving for retirement”. But why only consider actions over decades? In theory, we should be able to consider the consequences of actions indefinitely long into the future1. While this is intuitively the right thing to do, as I have written before, many of our institutions neglect the long term future!

In addition to shortsightedness, I want to explore some of the other ways we might be missing an opportunity to act differently. I do not think that all of these are “right” in some sense, but they are worth considering. Some are already accepted by decision theorists.

Here are a couple of starting points:

  • Missing a causal connection: This is essentially the goal of science, we need to find out what causes what and use it to our advantage. The cat in the story missed the fact that scratching made the sofa worse.
  • Missing a logical fact: This is the goal of mathematics, finding things which are logically true can advance many endeavors (e.g. P=NP)
  • Failing to consider acausal trade
  • Failing to consider possible people
  • Failing to consider non-existent people
  • Failing to consider future people
  • Failing to consider past people
  • Situations similar to Newcomb’s problem
  • Anthropics and simulations: How did we come to be? Should we condition on the fact that we exist? If we determine we are in a simulation, how should we act?
  • Counterfactual contracts
  • Reputation and pre-commitment: We need to think carefully about the value of an institution’s reputation. Oftentimes it is important to establish a consistent way of doing things and precommit to that course of action. For example, this is why governments “don’t negotiate with terrorists”.
  • Meta-counterfactuals: Perhaps the way we consider counterfactuals changes certain outcomes.

A lot of this is probably covered by carefully applying something like logical decision theory. But what else would you add to the list? What are some “known unknowns” and where might we look for “unknown unknowns”? It is worth considering how embedded agency changes our treatment of counterfactuals as well.

This examination of counterfactuals is pretty abstract, but important to consider as we try to develop better systems of institutional decision-making.

  1. Of course, there are practical limitations to this.

Against World Government

Imagine if someone told you that there was a right way to make music, and advocated for creating the best possible song, abolishing other music, and having everyone enjoy the same tune. Or imagine a person proposing that there was a best TV show and working to make sure everyone watched the same show, establishing a “one-world TV channel”. You would rightly call them crazy! People’s needs are diverse, and not everyone has to like the same things. Additionally, when it comes to subjective experiences like music or TV, there really is not a “right” answer!

But why do we seem to think this about government? Ideas about world government have a long history and many people seem to hold it as an ideal. World government is also a common trope in science fiction, where it acts as an unquestioned backdrop for fantastical adventures. This uncritical inclusion of world government in futuristic fiction has played a role in making it the default expectation for the future of governance.

But advocates of world government seem to be committing the same mistakes as advocates of “world music” or “world TV” by believing that there is a “right” way to govern and that all people have the same tastes. In truth, world government is a highly paternalistic way to approach public choice which implicitly disregards others’ preferences. It would profoundly reduce people’s options when deciding how they are governed. I have previously advocated for the opposite of this system, the Archipelago, where each person chooses their citizenship and a diverse set of governments are allowed to flourish. Contrasting the Archipelago with world government highlights the key benefit of competitive governance: it prevents the concentration of power into a handful of states and creates stronger incentives for states to serve their people.

It also seems implausible that scaling up governance will lead to better outcomes. Most problems in public choice become massively more difficult as the size of the population increases. Additionally, some of the most successful governments today are highly localized. This makes sense given that many of the problems government is suited to solve have to do with local public goods problems. By reducing local control, the overall quality of governance may become much worse.

But one of the biggest problems with world government is that it constitutes a new kind of existential risk. If we determine how to effectively govern the entire world, what happens if we fall into a totalitarian state? The tools for global control may work just as well for a dictator as a democracy. I highlighted global totalitarianism as an existential risk previously because a malign world government could prevent expansion into space and create massive amounts of suffering indefinitely. This scenario may seem implausible, but a transition to global dictatorship may be irreversible; given enough time, an irreversible outcome becomes inevitable. If this is the case, it becomes critical to find a way to prevent the formation of a world state entirely.

But doesn’t world government have some benefits? For example, it should allow people to coordinate globally in ways they couldn’t otherwise. In particular, having a global state might be a solution to the problem of financing global public goods. Coordinating on these problems is critical, but do we really need a single government in order to coordinate? Already, international accords and negotiations between countries are sufficient to create global coordination on many issues. Additionally, it seems like there is plenty of possibility for inter-state finance of global public goods1.

While international agreements seem sufficient, I worry the body of international law and cooperation will continue to grow until there is effectively a single state. Technological improvements in administering a state will help countries govern larger domains, accelerating this trend2. It seems to me that if we do nothing, world government is a foregone conclusion.

So how do we prevent the formation of a single state? As I have already discussed, the Archipelago can prevent many of the ills of global government by fostering numerous independent states. Finding ways to finance global public goods between states can provide many of the benefits we might want from a global state. Beyond independent states, there must be systems for dealing with interstate conflict which avoid using violence or coercion. For example, if burning wood is essential to a local culture, the citizens should pay taxes for their increased pollution rather than be entirely prohibited by other states in the Archipelago.

But my solutions are somewhat hypocritical: In place of world government, I am proposing to create mechanisms for public goods finance and an overarching “archipelago government” which ensures fair competition between states. It is hard to see how this is fundamentally different from some sort of world government! Perhaps it is an impossible task, and, like Nozick, we must recognize that some form of government will occur, and focus instead on creating the minimal functional world government; a sort of minarchy for states. The same ideas apply to designing a world minarchy: strong norms for independent governance and freedom are critical, and important checks are needed to prevent further growth of the global government.

World government is one of many political ideals which are seductive but deeply flawed. Unfortunately, I believe some form of global state will eventually grow out of efforts to coordinate individual states. The goal should not be to prevent global government, but rather, to shape its development into a stable organization which fosters global collaboration while respecting the independence and diversity of its citizens.


  1. This is a problem that the mechanism design literature has not even begun to examine.
  2. Technology cuts both ways, making governance more scalable, but also making it easier to engage in more covert, uncontrollable activities.

Global Public Goods

In economics, externalities and public goods get introduced as distinct concepts, but it can be useful to consider them as a single concept. Public goods, public bads, positive externalities, and negative externalities all have similar characteristics.

In the case of public goods and positive externalities, society is best off when spending money to fund public goods or subsidize positive externalities. The opposite is true for public bads and negative externalities, where the social optimum is achieved when public bads are prevented and negative externalities are taxed.

Some problems of this type only involve a few individuals and may be solved through negotiation. However, there are many problems which affect a large number of individuals and require a different approach.

With a large population, many externalities can be understood as public goods problems, where society “funds” a positive externality by subsidizing it and “defunds” a negative externality by taxing it. These problems have the same strategic difficulties as funding public goods, where freeriders avoid paying the appropriate costs.

Given the similarity of these coordination problems, I will refer to all of them as “public goods” for the remainder of this post.

Now, notice that the more localized a public good, the easier it is to provide, since the overall size of the problem is smaller, and individuals can negotiate directly. Additionally, all of the gains from a local public good accrue to the people in that location, providing a direct incentive to appropriately provide the good. Local public goods are something that government is well suited to providing, and under a system of competitive governance, they will be appropriately funded, as in the Tiebout model.

Global public goods present a distinctly different problem from more localized public goods. Since there is no overarching state to provide global public goods, the original coordination problem returns. Global public goods will not be provided even under an efficient global market and effective local governance. In this sense, global public goods problems are harder and more neglected than most issues. For this reason, they deserve our attention.

With this motivation, here is a list of important global public goods:

  • Increasing population growth
  • Ideas
  • New technologies and innovation
  • Entertainment (Music, writing, TV, etc.)
  • Individual and group rationality
  • Information (news, prediction markets, statistics, etc.)
  • Effective global markets
  • Global monetary policy
  • Handling of global market failures (e.g. global monopoly)
  • International law
  • Ecosystem and historical site preservation
  • Eliminating pollution (CO2 emissions, water pollution, space debris, etc.)
  • Preventing existential catastrophe (e.g. AI risk)
  • Mitigating global catastrophic events (e.g. preventing pandemics)
  • Mechanisms for producing global public goods (a mechanism for global public goods finance is itself a global public good!)

We can ask how the world should coordinate in all of these cases. There are several characteristic scales of cooperation, and important work to be done at each: coordination between individuals, coordination within communities, coordination within governments, and coordination between governments.

So far, lots of theoretical work has examined the financing of public goods between individuals (see for example dominant assurance contracts and quadratic funding). At the community level, Elinor Ostrom won a Nobel prize for her work on community management of the commons. Previously, I have looked at the problem of financing public goods within governments themselves (though my solution is a bit impractical) and I hope to offer some approaches to between-government public goods finance as well. More theoretical and experimental work on financing mechanisms at all of these levels is crucial.
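One of the individual-level mechanisms mentioned above, quadratic funding, can be sketched in a few lines. The formula is the one proposed by Buterin, Hitzig, and Weyl (a project's total funding equals the square of the sum of the square roots of its contributions, with a matching pool covering the gap); the dollar amounts are illustrative:

```python
from math import sqrt

def quadratic_funding(contributions):
    """Return (total funding, required match) under quadratic funding.

    Funding = (sum of sqrt(c_i))^2; the matching pool pays the difference
    between that and what contributors gave directly.
    """
    direct = sum(contributions)
    funded = sum(sqrt(c) for c in contributions) ** 2
    return funded, funded - direct

# Breadth of support, not total dollars, is what gets amplified:
broad, broad_match = quadratic_funding([1] * 100)   # 100 donors giving $1 each
narrow, narrow_match = quadratic_funding([100])     # 1 donor giving $100

print(broad, narrow)  # 10000.0 100.0
```

The same $100 attracts a $9,900 match when it comes from a hundred small donors and no match at all when it comes from one large donor, which is the sense in which the mechanism targets goods with broad public value.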

Global public goods problems present a unique challenge compared to other issues since they are underprovided relative to local public goods or private goods. This indicates that methods for global public goods finance could have a huge impact by catalyzing cooperation on the biggest problems facing our society.

Links #2

Air-based DNA sampling reminds me of the movie GATTACA. DNA privacy issues are arguably more important than a lot of the privacy issues we worry about today.

Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages (note: pretty long).

Predictive Coding has been Unified with Backpropagation

Given that I just spent a lot of time talking about which things should matter in ethics, The importance of how you weigh it is a good reminder that often, the “mattering” part isn’t contested! Instead, the question becomes, how much should different things matter? This is part of what makes utilitarianism so flexible, but at some point, someone is going to have to do the hard work of deciding how much different things matter relative to one another.

People are worried about population decline, I am too.

Related: Artificial wombs may come about through advances in incubators.

More on how avoiding the repugnant conclusion is not a good desideratum in population ethics. Many philosophers are apparently in agreement about this now.

Twitter thread on immigration and cash transfers.

Interesting lab page Sculpting Evolution.

Mechanism Design Lectures

The top 150 intellectuals, selected competitively

Beings Other than People Matter

I have spent a lot of time talking about ethical premises I agree with. All of these have exclusively concerned people, but you might notice that I take a more inclusive view of the beings that matter in posts like The Disregarded.

In fact, while human beings are important, it is clear to me that other beings matter as well.

Already, the realization that animals are moral patients seems to be gaining steam. Pet owners have an intuitive sense of this, and many consumers already treat animal welfare as an important consideration when deciding which products to buy.

It seems that the idea that animals matter is generally agreed-upon. While some people hold the position that animals matter less than humans, I have never seen a principled position claiming that animals don’t matter at all.

Put simply, we have no good reason to create an ethical boundary between people and animals. Animals display a range of complex behaviors and truly seem to experience joy and suffering much as we do. Even if you are skeptical of this, most would agree that, other things being equal, it is better to be kind towards other beings. Even if it seems unlikely that animals have moral value, as long as we are uncertain about this, it is better to assume that they have at least some value.

But my moral circle expands past people and animals.

If you agree with the previous point, why stop at animals? All of the previous principles can apply just as well to other beings. Insects, plants, microbes, and even physical systems might have moral value, and we should keep an open mind and potentially include them in our ethics.

A more clear-cut example involves brain emulations (Ems). If we had the ability to perfectly simulate a mind, this mind should be a moral patient in its own right; it should not matter what hardware minds are being run on. If an Em had the same feelings, relationships, and behaviors as a person, what right would we have to deny their sentience?

The fact that simulated minds matter morally extends to artificial intelligence. If we create new minds as sophisticated as our own, or more so, it seems natural to extend to them the same kindness that we would to Ems. Alan Turing foresaw the importance of artificial sentience when he formulated the Turing Test. In the modern day, organizations such as PETRL (People for the Ethical Treatment of Reinforcement Learners) are already advocating for the fair treatment of artificial minds. Once again, if a program displays the nuanced behaviors of sentient beings, and truly seems to feel pleasure and pain, what right do we have to deny it moral value?

Of course, I am not arguing that all these beings matter as much as people, but rather, that their importance is not identically zero. I think it is possible that some of these beings matter much less than people, but to assume that they all have no significance seems foolish. As such, we should consider how our ethics change when we take other beings into account.

If you accept these ideas, all of the previous posts about people apply to other beings as well. Future, potential beings matter, and making good decisions on their behalf becomes crucial.

This raises many important questions: How do we infer the needs of beings we cannot communicate with, and how do we cooperate with them? Can we balance the needs of many different beings? Is it morally wrong to create new beings? Should we make decisions on behalf of future beings?

These questions seem extremely hard to answer, but crucial to consider. I hope to offer some preliminary approaches to these problems in the future, but the majority of this work will have to be done by decision makers far into the future.