Months after bringing augmented reality to mobile cameras with its Lens app, Google is now developing technology to enhance your photos even as you frame the shot in the viewfinder. The search giant has collaborated with MIT to build the new system, which uses artificial intelligence (AI) to improve on existing image-processing algorithms.
At Siggraph, the annual digital graphics conference in Los Angeles this week, researchers from Google and MIT's Computer Science and Artificial Intelligence Laboratory will showcase the latest system, which can automatically retouch images just like a professional photographer. The new technology is touted to produce results "indistinguishable" from those of a recent Google algorithm used to generate high-dynamic-range (HDR) images on Android devices. Further, the AI techniques behind the system deliver the enhanced images in about one-tenth the time of existing approaches.
The research team trained the new system's neural networks on as many as 5,000 images, each of which was retouched by five different photographers. The group also used thousands of pairs of images produced by traditional image-processing algorithms, such as those created by HDR mode. Together, this training allows the system to enhance shots in a short span of time while consuming considerably less storage than conventional technology.
'Potential to be very useful'
The researchers then compared the new system with a machine-learning system that processes images at full resolution. The results were quite promising: the new technology needed only about 100 megabytes of memory to execute its operations, roughly one-hundredth of the nearly 12 gigabytes required by the full-resolution version.
"This technology has the potential to be very useful for real-time image enhancement on mobile platforms," Google's senior research scientist Jon Barron said in a statement.
Barron participated in the development with Frédo Durand, MIT professor of electrical engineering and computer science, and two fellow researchers at Google, Jiawen Chen and Sam Hasinoff. The innovation was originally conceived by Michaël Gharbi, an MIT graduate student in electrical engineering and computer science. Gharbi had earlier built a project around a "transform recipe" that retouched an image by sending its low-resolution version to a web server, which computed a compact recipe of edits to apply back to the full-resolution original.
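As a loose illustration of that "transform recipe" idea (a minimal sketch, not the actual Google/MIT code), the toy NumPy example below fits a single affine color transform on a low-resolution before/after pair and then applies the same transform to the full-resolution image. The function names and the use of a global affine fit are assumptions made for illustration; the real system learns far richer, spatially varying transforms.

```python
import numpy as np

def fit_affine_color_transform(low_src, low_dst):
    # Fit one 3x4 affine color transform mapping low_src -> low_dst
    # by least squares: a toy stand-in for the "transform recipe".
    pixels = low_src.reshape(-1, 3)
    design = np.hstack([pixels, np.ones((pixels.shape[0], 1))])  # (N, 4)
    targets = low_dst.reshape(-1, 3)                             # (N, 3)
    M, *_ = np.linalg.lstsq(design, targets, rcond=None)         # (4, 3)
    return M

def apply_transform(full_src, M):
    # Apply the recipe fitted on the cheap low-res pair to the
    # full-resolution image.
    pixels = full_src.reshape(-1, 3)
    design = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    return (design @ M).reshape(full_src.shape)

# Toy demo: pretend the "retouching" is a known affine color change.
rng = np.random.default_rng(0)
full = rng.random((64, 64, 3))
low = full[::8, ::8]  # cheap 8x downsample of the original

true_M = np.array([[1.10, 0.00, 0.00],
                   [0.00, 0.90, 0.00],
                   [0.00, 0.00, 1.05],
                   [0.02, -0.01, 0.00]])
low_pixels = low.reshape(-1, 3)
low_retouched = (np.hstack([low_pixels, np.ones((low_pixels.shape[0], 1))])
                 @ true_M).reshape(low.shape)

M = fit_affine_color_transform(low, low_retouched)
enhanced = apply_transform(full, M)  # full-res image with the fitted recipe
```

The point of the sketch is the cost asymmetry the article describes: the fit runs on only the small low-resolution pair, while the full-resolution image is touched just once, when the recipe is applied.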
Neither Google nor MIT has announced a timeline for the new technology. Nonetheless, given its fast, storage-conscious processing, the new system could soon become part of Android.