Business magnate Elon Musk could not keep his qualms over artificial intelligence to himself, warning on Friday at the National Governors Association Summer Meeting in Rhode Island of the imminent harms of AI-reliant technologies. Musk said the surging popularity of AI technologies is a "fundamental risk" that humankind will face in the coming years.

Musk's pronouncement carries a sense of urgency, since the technology industry sees the Tesla chief as someone who fears little: from leading SpaceX's venture to send humans to Mars, to Tesla's vision of making every car on the road autonomous within 10 years. At the event, Musk spoke about the looming dangers "we face as a civilisation" as machines, robots, and electronic systems come to dominate the ecosystem.

"AI is a fundamental risk to the existence of human civilisation," Musk declared.

Musk urged the attending governors to establish a strong foundation of AI regulations before it is too late. He stressed that the prospect of "robots going down the street killing people" may seem "so ethereal" to people today, until they witness it for themselves.

"AI is a rare case where I think we need to be proactive in regulation instead of reactive," Musk said. "Because I think by the time we are reactive in AI regulation, it's too late."

The SpaceX and Tesla founder warned the authorities of a recurring mistake in policy-making: regulations are usually set up only after "bad things happen" and there is a "public outcry".

This is not the first time Musk has spoken publicly about the harms of artificial intelligence. In April, he announced Neuralink, a San Francisco, California-based startup building devices that connect human brains to computers, in hopes of protecting humanity from the perils of artificial intelligence. Musk first declared that AI is a threat to humanity in October 2014, sparking a global discussion.

Recently, Microsoft Research managing director Eric Horvitz voiced his fears about "the potential misuse of this technology by malevolent forces, by people with ill will, by state and non-state actors who can gain strong powers with these technologies." Horvitz nevertheless stressed that AI is also the solution to the problem.

Google, on the other hand, is seeking to develop human-centred AI technologies in an effort to steer clear of AI's untoward effects.