In March, Google introduced a new artificial-intelligence system that can classify videos, and it took only one month for researchers at the University of Washington to defeat it. They developed an easy trick to fool the system: periodically inserting a still photo into the video is all it takes to mislead the "deep-learning" classifier.

Google had previously described the technology as a deep-learning classifier built with frameworks such as TensorFlow and deployed at scale on platforms like YouTube. Nevertheless, the university researchers were able to manipulate its results with ease.

In their research paper, the researchers explained how the manipulation fools the supposedly intelligent AI. The team altered Google's demo video by inserting a picture of an Audi every couple of seconds; as a result, although the video is actually about tigers, Google's AI classified it as being about cars. That is not all: the researchers also experimented with a video of primatologist Jane Goodall and some apes. Inserting a picture of a bowl of pasta into the video at regular intervals caused Google's AI to classify it under a spaghetti theme instead of gorillas.
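The attack described above can be sketched as a toy simulation. Everything here is an illustrative assumption, not the actual API's internals: frames are stood in for by their true labels, the "classifier" is a majority vote over sparsely sampled frames, and the attacker inserts stills so they land exactly on the sampled positions.

```python
# Toy sketch of the frame-insertion attack (hypothetical model, not
# Google's real pipeline): a classifier that samples one frame per
# "second" can be flipped by a handful of well-placed still images.
from collections import Counter

SAMPLE_EVERY = 25  # assumed sampling rate: one frame per 25-frame "second"

def classify(frames):
    """Label the video by majority vote over sparsely sampled frames."""
    sampled = frames[::SAMPLE_EVERY]
    return Counter(sampled).most_common(1)[0][0]

def insert_still(frames, still, period):
    """Insert `still` so one copy lands on every index the classifier
    samples, i.e. every `period`-th frame of the doctored output."""
    out = []
    for i, frame in enumerate(frames):
        if i % (period - 1) == 0:
            out.append(still)
        out.append(frame)
    return out

video = ["tiger"] * 100                          # original footage
doctored = insert_still(video, "audi", SAMPLE_EVERY)

print(classify(video))     # -> tiger
print(classify(doctored))  # -> audi
```

In this toy, the doctored video is still about 95% tiger frames, yet the label flips, mirroring how a sparse sampler can be dominated by a few inserted images.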

This research shows that an adversary could get away with posting illegal content simply by incorporating benign images into the media.

As Digital Trends notes, what is particularly disturbing is that fooling an AI developed by Google requires no knowledge of AI or its algorithms at all.

Finally, the research paper noted that AI still has a long way to go to match human capabilities in areas such as determining what a video is about. Subliminal messages may affect the human psyche, but humans are far less likely to conclude that a video with a clear topic is actually about something entirely unrelated just because of a few pictures inserted into it.