China and Russia Developing AI Drones That Can Launch Attacks Without Human Approval

A high-ranking US defense official says that China and Russia are developing autonomous drones capable of carrying out kill missions.

Ahead of the 2020 US election, a high-ranking defense official has warned that countries like China and Russia are developing autonomous drones with the potential to kill without any human involvement or approval from any government body.

Nand Mulchandani, the Acting Director of the Joint Artificial Intelligence Center, claimed that both China and Russia are ignoring ethical limits and considerations regarding the use of military drones.

ASAT
China conducted a successful test of an anti-satellite weapon from the Xichang Space Center in Sichuan province. The missile successfully engaged an aging Chinese weather satellite in a polar low-earth orbit, 537 miles above Earth. Air Force Space Command

"We know that China and Russia are developing and exporting AI-enabled surveillance technologies and autonomous combat systems without providing any evidence of adequate technical or ethical safeguards and policies," he told the reporters, according to the Daily Star.

The United States and other countries have put in place detailed rules and regulations for unmanned aircraft (UA). With advances in science and technology, several countries have incorporated artificial intelligence into these unmanned aircraft, making them far deadlier. These unmanned combat aerial vehicles have drawn considerable outrage from civil rights activists, as such drones are usually under real-time human control and can be operated from miles away.

Nand Mulchandani further highlighted China's use of artificial intelligence for domestic policing purposes such as censorship and facial recognition. It was reported earlier this month that China has deployed drones to fight locust swarms over farmland. Mulchandani warned that such AI-enabled drones have many potential military applications.

Functions such as detecting a suspect, tracking it, and then destroying a target can all be performed by a machine. Under Pentagon doctrine, however, every decision regarding the use of lethal force must be made by a human being.
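The division of labor described above can be pictured as a simple human-in-the-loop gate: the machine may detect and track, but engagement requires an explicit human decision. The sketch below is purely illustrative, with hypothetical function and field names; it is not drawn from any actual Pentagon or vendor system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical detected and tracked object."""
    track_id: str
    confidence: float

def machine_detect_and_track() -> list[Track]:
    """Stand-in for onboard perception: detection and tracking are automated."""
    return [Track(track_id="T-001", confidence=0.93)]

def human_authorizes(track: Track) -> bool:
    """Stand-in for a remote operator's explicit decision.

    In a human-in-the-loop design this step is never automated away:
    the machine proposes, a person decides.
    """
    answer = input(f"Authorize engagement of {track.track_id} "
                   f"(confidence {track.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop() -> None:
    for track in machine_detect_and_track():
        # Lethal action is gated behind an explicit human decision.
        if human_authorizes(track):
            print(f"Engagement of {track.track_id} authorized by operator.")
        else:
            print(f"Engagement of {track.track_id} withheld; no human approval.")

if __name__ == "__main__":
    engagement_loop()
```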

Predator Drone
An MQ-1 Predator drone on a combat mission over southern Afghanistan. Wikimedia Commons

According to Mulchandani, Russian and Chinese weapons developers may not apply the ethical constraints needed when building such lethal weapons, and could leverage various kinds of AI-enabled robots. These developers are reportedly building algorithms that learn enemy behavior, which could help them craft integrated "spoofable" imagery to deceive and defeat an adversary.

Experts now argue that Russian and Chinese drones could eventually operate with complete autonomy, and that any error could bring catastrophic consequences.
