Alexei Turchin discusses many reasons why an artificial intelligence would not want to attack humanity. I want to expand on this briefly.
There are a few reasons an AI might be incentivized to attack humanity: it may want to seize resources, it may find human laws too restrictive, or it may fear that humans will attack it first.
However, there are several sources of uncertainty that should make the AI hesitant to attack. First, some unknown fail-safe may be built into its design, or a contingency plan may have been prepared in case of an attack. Second, attacking humanity could damage the AI’s reputation, which would create problems if the AI later encounters alien civilizations or other AIs. Third, the AI may be uncertain whether it is in a simulation designed to test its benevolence.
Of course, attacking humanity also comes at a cost in terms of resources, time, and increased existential risk. Humanity and AI can collaborate to produce more value than either could alone, so one destroying the other would create significant losses. On top of this, destroying humanity is an irreversible action, reducing the AI’s options in the future. Even worse, if there are other agents that humanity and AI are competing against, conflict weakens both of them relative to these other agents.
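The cost-benefit reasoning above can be summarized as a toy expected-utility comparison. The probabilities and payoffs below are purely hypothetical illustrative numbers, not estimates of any real-world quantities; the point is only that modest uncertainty about fail-safes and simulations, combined with the gains from trade, can make cooperation dominate even when an attack would probably "succeed":

```python
# Toy expected-utility sketch of the attack-vs-cooperate decision discussed
# above. All probabilities and payoffs are hypothetical placeholders chosen
# for illustration, not estimates.

def expected_utility(outcomes):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * u for p, u in outcomes)

# Attacking: small chance a hidden fail-safe or contingency plan stops the AI,
# some chance it is in a benevolence-testing simulation, and even a "successful"
# attack pays out net of resource, time, and reputation costs.
attack = expected_utility([
    (0.10, -100.0),  # hidden fail-safe triggers; the AI is shut down
    (0.10, -50.0),   # benevolence-testing simulation; the AI fails the test
    (0.80, 20.0),    # attack succeeds, net of resource and reputation costs
])

# Cooperating: forgo the seized resources, but keep the gains from joint
# production and preserve future options (destruction is irreversible).
cooperate = expected_utility([
    (1.0, 30.0),     # collaboration produces more value than either side alone
])

print(f"attack:    {attack:.1f}")    # → attack:    1.0
print(f"cooperate: {cooperate:.1f}") # → cooperate: 30.0
```

Even with an 80% chance of a successful attack in this made-up model, the small chance of catastrophic downside plus the steady gains from trade make cooperation the better bet.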
These considerations apply equally well to humans contemplating an attack on an AI, to AIs considering attacking other AIs, and to alien civilizations that may be encountered in the future. It is generally better for agents to cooperate to some degree, and the symmetry of this argument further reduces the probability of being attacked.
Given this, humanity can reduce the risk of an AI attack by broadcasting its presence in the cosmos, committing to preserving cooperative AIs, avoiding overly stringent regulations on AI behavior, and preparing basic precautions in case the AI does choose to attack. The AI can take similar steps to disincentivize attack.
This doesn’t eliminate the risk of attack: the AI might not care about its reputation, the promise to preserve cooperative AIs may not be credible, human laws may still impose too great a burden, or the precautions may be too weak.