Tech companies are always looking to the future to stay relevant, and for many of them that future is artificial intelligence. To remain competitive, many companies have begun researching the benefits AI could bring to their services. Yet even as AI makes our lives easier, researchers worry about grimmer outcomes as the technology continues to improve.
In this vein, AI researchers at Google DeepMind set out to examine how AI behaves in conflict scenarios. They pitted two AI agents against each other in two different games to test what triggers cooperation or aggression, and which strategy the agents ultimately choose.
The underlying scenario was modeled on the prisoner's dilemma, a classic setup in game theory that reveals how two agents weigh the benefit of cooperation against the pull of pure self-interest. The researchers used this scenario as the overall framework for two games: Wolfpack and Gathering.
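The core of the dilemma can be shown with a few lines of code. Below is a minimal sketch of a one-round prisoner's dilemma using the textbook payoff values (these numbers are illustrative, not the rewards DeepMind used in its experiments).

```python
# One-round prisoner's dilemma with conventional textbook payoffs.
# Each entry maps (agent A's move, agent B's move) -> (A's reward, B's reward).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: good for both
    ("cooperate", "defect"):    (0, 5),  # the "sucker" loses to the defector
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: bad for both
}

def play(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the reward each agent receives for one round."""
    return PAYOFFS[(move_a, move_b)]

# Defecting is always the better individual move in a single round,
# even though both agents would do better by cooperating:
print(play("defect", "cooperate"))     # (5, 0)
print(play("cooperate", "cooperate"))  # (3, 3)
```

The tension illustrated here, that self-interest beats cooperation in a single encounter while cooperation pays off jointly, is exactly what the DeepMind games probe in a richer, visual setting.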
The purpose of both experiments, which involved AI systems of varying sophistication, was to observe the agents' behavior and measure precisely how far their cooperation would extend. The researchers set up the experiments so that each agent's behavior was driven only by the rules of the game.
In the game called Gathering, each AI agent had to collect as many apples as possible, with the option of firing a laser at the other agent to stun it for a brief period. The researchers found that simpler agents largely ignored each other, at least until the supply of apples became scarce. They also observed that smarter, more complex agents grew more aggressive, choosing to shoot their opponents much earlier in order to get ahead.
In the other Google DeepMind game, Wolfpack, the agents had to hunt down a target in a maze full of obstacles that were difficult for a single agent to overcome alone. Here the researchers observed the opposite effect: the smarter the agents, the more they tended to cooperate with one another to reach their objective.
The findings highlight the problems and potential risks of developing AI technology without considering how agents will behave when their interests conflict. At the same time, experiments like these allow researchers to build conditions into AI systems that favor cooperation over aggressive behavior.
Image source: DeepMind