AI Alignment Applies to Everyone

Some people do AI alignment research as a way to reduce existential risk. The risk of unaligned AI is important, but few seem to appreciate the full scope of the problem.

As I pointed out previously, deterring any actor from attacking another is key to maintaining peace. Making it difficult for any agent to cause global catastrophe is a step in this direction.

But AI systems are not the only intelligent agents that can pose a risk to society. It is critical that any method we use to prevent AI from causing an existential catastrophe also apply to everyone else. It would be pretty embarrassing to spend so much effort on AI risk only to have some supervillain end the world anyway.

This point extends to AI alignment techniques more broadly. Any restraint we apply to AI should also be applied to people, companies, and states, to ensure that their actions are roughly aligned with the interests of society as a whole.

This argument reframes the alignment problem. The goal is not to fully control intelligent agents, but to ensure that all members of society can coexist peacefully without being overly restricted.
