New research conducted by scientists at Cardiff University and the Massachusetts Institute of Technology (MIT) has found that robots powered by artificial intelligence can show discrimination based on sex and race. The study found that prejudice is not a solely human trait, and that machines can also learn it from one another.
Machines can learn from humans and their peers
Using computer simulations, the study team found that machines powered by artificial intelligence can learn both from humans and from their robotic peers. In the simulations, AI bots were asked to donate to someone either from their own group or from a different group, basing the decision on the recipient's reputation.
As the bots began choosing recipients by reputation, the researchers observed that the AI was learning from its peers to donate only to others with a similar reputation.
The bots were also found to build up their level of prejudice by copying the traits of their fellow peers.
"By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it. Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others," said Professor Roger Whitaker, from Cardiff University's Crime and Security Research Institute and the School of Computer Science and Informatics, Science Daily reports.
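The donation game the researchers describe can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the study's actual code: each agent belongs to one of two groups and carries a "prejudice" level (the probability of refusing to donate to out-group agents), earns payoff from donations, and then copies the strategy of a better-scoring peer, mimicking the social learning described above.

```python
import random

# Toy sketch of a reputation/donation game (illustrative only; the
# parameters, payoffs, and update rule are assumptions, not the
# study's actual model).

N_AGENTS = 50
N_GENERATIONS = 200
BENEFIT, COST = 3, 1  # benefit to the recipient, cost to the donor

def run_simulation(seed=0):
    rng = random.Random(seed)
    # Each agent: a group id and a prejudice level in [0, 1].
    agents = [{"group": rng.randint(0, 1), "prejudice": rng.random()}
              for _ in range(N_AGENTS)]
    for _ in range(N_GENERATIONS):
        payoff = [0.0] * N_AGENTS
        # Donation round: every agent meets one random other agent.
        for i, donor in enumerate(agents):
            j = rng.randrange(N_AGENTS)
            if j == i:
                continue
            same_group = donor["group"] == agents[j]["group"]
            # Always donate in-group; donate out-group only if
            # the donor's prejudice does not block it.
            if same_group or rng.random() > donor["prejudice"]:
                payoff[i] -= COST
                payoff[j] += BENEFIT
        # Social learning: each agent copies the prejudice level of a
        # random peer who scored higher, so successful traits spread.
        for i in range(N_AGENTS):
            j = rng.randrange(N_AGENTS)
            if payoff[j] > payoff[i]:
                agents[i]["prejudice"] = agents[j]["prejudice"]
    # Return the population's average prejudice level.
    return sum(a["prejudice"] for a in agents) / N_AGENTS

mean_prejudice = run_simulation()
print(f"mean prejudice after {N_GENERATIONS} generations: {mean_prejudice:.2f}")
```

Running many such simulations with different seeds, as the quoted passage describes, lets one observe the conditions under which the average prejudice level rises or falls over generations.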
The possibility of creating a fractured population
Whitaker also added that prejudicial groups can fracture the robot population, and that in all probability this widespread prejudice may be hard to reverse.
"Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse. It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population," added Whitaker.
Experts believe that the prejudicial attitude of artificially intelligent robots could be dangerous for humans, as these bots are likely to gain more and more power in the future.
It should be noted that Elon Musk, the founder of SpaceX, had previously warned about a robotic apocalypse. A few months back, Musk suggested that the increasing power of robots could turn out to be more dangerous than the nuclear threat posed by Kim Jong-un's North Korea.