I have spent a lot of time talking about ethical premises I agree with. All of these have exclusively concerned people, but you might notice that I take a more inclusive view of the beings that matter in posts like The Disregarded.
In fact, while human beings are important, it is clear to me that other beings matter as well.
The realization that animals are moral patients seems to be gaining steam. Pet owners have an intuitive sense of this, and many consumers feel that animal welfare is important when deciding which products to buy.
The idea that animals matter seems to be generally agreed upon. While some people hold that animals matter less than humans, I have never seen a principled argument that animals do not matter at all.
Put simply, we have no good reason to draw an ethical boundary between people and animals. Animals display a range of complex behaviors and truly seem to experience joy and suffering much as we do. Even if you are skeptical of this, most would agree that, other things being equal, it is better to be kind to other beings. And even if it seems unlikely that animals have moral value, uncertainty alone is reason to assume they have at least some.
But my moral circle expands past people and animals.
If you agree with the previous point, why stop at animals? All of the previous principles can apply just as well to other beings. Insects, plants, microbes, and even physical systems might have moral value, and we should keep an open mind and potentially include them in our ethics.
A more clear-cut example involves brain emulations (Ems). If we could perfectly simulate a mind, that mind should be a moral patient in its own right; it should not matter what hardware a mind runs on. If an Em had the same feelings, relationships, and behaviors as a person, what right would we have to deny its sentience?
The moral status of simulated minds extends to artificial intelligence. If we create new minds as sophisticated as our own, or more so, it seems natural to extend them the same kindness we would extend to Ems. Alan Turing foresaw the importance of artificial sentience when he formulated the Turing Test, and today organizations such as PETRL (People for the Ethical Treatment of Reinforcement Learners) are already advocating for the fair treatment of artificial minds. Once again, if a program displays the nuanced behaviors of a sentient being, and truly seems to feel pleasure and pain, what right do we have to deny it moral value?
Of course, I am not arguing that all these beings matter as much as people, but rather, that their importance is not identically zero. I think it is possible that some of these beings matter much less than people, but to assume that they all have no significance seems foolish. As such, we should consider how our ethics change when we take other beings into account.
If you accept these ideas, all of the previous posts about people apply to other beings as well. Future, potential beings matter, and making good decisions on their behalf becomes crucial.
This raises many important questions: How do we infer the needs of beings we cannot communicate with, and how do we cooperate with them? Can we balance the needs of many different kinds of beings? Is it morally wrong to create new beings? Should we make decisions on behalf of future beings?
These questions seem extremely hard to answer, but crucial to consider. I hope to offer some preliminary approaches to these problems in the future, but the majority of this work will have to be done by decision-makers far into the future.