Alice just adopted a kitten. It’s fun having a new pet, but there is one problem: he keeps clawing up Alice’s couch! Despite the scratching posts, new toys, and play, he still likes to scratch her sofa.
Her kitten loves sleeping on the couch as well; it’s perfectly positioned next to the window to catch some afternoon sun.
She tried putting a cover over the sofa to protect it, and though it stopped the scratching, her kitten no longer enjoys curling up on it like he used to.
Alice is a reasonable person; there might be a way to get the best of both worlds. She can just talk to her cat and explain the situation, right? Alice says, “Kitty, I know you love napping on the sofa without the cover, but the scratching has to stop. If you promise not to scratch, I will remove the cover.” But the cat just looks at Alice like he doesn’t understand! She tries removing the cover to see if he will choose not to scratch, but the kitten goes right back to scratching. She decides to keep the cover on indefinitely.
Alice shakes her head and thinks, “If only kittens understood counterfactuals, we wouldn’t be in this situation.”
Why write this story? Because it highlights an overlooked fact: unlike humans, most animals cannot connect their behavior with long-term or distant consequences.
In other words, the kitten does not understand the counterfactual “If I don’t scratch, the sofa will be comfy”. For cats, this would be true even if you put the couch cover on soon after they scratched, because they have a hard time relating the two events. Other animals exhibit higher intelligence and might understand this, but there are other simple situations that animals cannot learn. If you delay the reward for an action by a few hours, a dog will not be able to learn the action, even though it is trivial for humans to relate actions now with results hours later. We demonstrate this ability every time we pack a lunch.
Who cares? I’m not writing this to insult animals’ intelligence; they clearly exhibit extraordinary abilities despite not being able to prepare lunch. Rather, I want to consider the possibility that people are missing important counterfactuals too!
What if we are missing an opportunity to change our behavior in order to get what we want? Perhaps, compared to more intelligent beings, we are like the kitten in the story, ignoring a better reality that we could have reached if only we changed how we act.
Let me give an example of what this might look like. Cats have trouble connecting action and reward over a period of minutes, while humans can plan decades ahead; we call this “saving for retirement”. But why stop at decades? In theory, we should be able to consider the consequences of actions indefinitely far into the future¹. While this is intuitively the right thing to do, as I have written before, many of our institutions neglect the long-term future!
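To make the idea of planning horizons concrete, here is a toy sketch of exponential discounting, a standard way of modeling how much an agent values delayed rewards. The discount factors and numbers are illustrative, not empirical claims about cats or people:

```python
# Toy model of planning horizons via exponential discounting: a reward
# received `steps` time steps from now is valued at reward * gamma**steps.
def present_value(reward, steps, gamma):
    return reward * (gamma ** steps)

# Illustrative (made-up) discount factors: a steep discounter barely
# values a reward 10 steps away, while a patient one still mostly does.
impatient = present_value(100, steps=10, gamma=0.5)   # ≈ 0.1
patient = present_value(100, steps=10, gamma=0.99)    # ≈ 90.4

print(impatient, patient)
```

An agent with a steep enough discount is effectively blind to distant consequences, no matter how large they are, which is one simple way of modeling the kitten’s problem.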
In addition to shortsightedness, I want to explore some of the other ways we might be missing an opportunity to act differently. I do not think all of these are “right” in some sense, but they are worth considering. Some are already accepted by decision theorists.
Here are a couple of starting points:
- Missing a causal connection: This is essentially the goal of science; we need to find out what causes what and use it to our advantage. The cat in the story missed the fact that scratching made the sofa worse.
- Missing a logical fact: This is the goal of mathematics; discovering what is logically true can advance many endeavors (e.g., whether P=NP).
- Failing to consider acausal trade
- Failing to consider possible people
- Failing to consider non-existent people
- Failing to consider future people
- Failing to consider past people
- Situations similar to Newcomb’s problem
- Anthropics and simulations: How did we come to be? Should we condition on the fact that we exist? If we determine we are in a simulation, how should we act?
- Counterfactual contracts
- Reputation and pre-commitment: We need to think carefully about the value of an institution’s reputation. Oftentimes it is important to establish a consistent way of doing things and precommit to that course of action. For example, this is why governments “don’t negotiate with terrorists”.
- Meta-counterfactuals: Perhaps the way we consider counterfactuals changes certain outcomes.
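To give a taste of why Newcomb-style cases on the list above are contested, here is a toy expected-value sketch of Newcomb’s problem, using the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and an assumed predictor accuracy p. This is the straightforward conditional calculation, which evidential decision theory endorses:

```python
# Toy Newcomb's problem: a transparent box holds $1,000; an opaque box
# holds $1,000,000 if the predictor foresaw you taking only the opaque
# box, and $0 otherwise. The predictor is correct with probability p.

def one_box_ev(p):
    # If the predictor is right (probability p), the opaque box is full.
    return p * 1_000_000

def two_box_ev(p):
    # If the predictor is right, the opaque box is empty: you get $1,000.
    # If the predictor is wrong, you get both boxes: $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# With a 99%-accurate predictor, conditioning on the prediction makes
# one-boxing look far better (≈ $990,000 vs ≈ $11,000).
print(one_box_ev(0.99), two_box_ev(0.99))
```

Causal decision theorists reject this calculation on the grounds that your choice cannot cause the box’s contents to change; that disagreement is exactly why Newcomb-style cases earn a place on the list.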
A lot of this is probably covered by carefully applying something like logical decision theory. But what else would you add to the list? What are some “known unknowns” and where might we look for “unknown unknowns”? It is worth considering how embedded agency changes our treatment of counterfactuals as well.
This examination of counterfactuals is fairly abstract, but it is important to consider as we try to develop better systems of institutional decision-making.
¹ Of course, there are practical limitations to this.