Google neural net can figure out where photos are taken

Google computer vision specialist Tobias Weyand and his team have trained a deep-learning machine to work out where almost any photo was taken, the MIT Technology Review reported.

The machine outperforms humans at this task, and can also estimate the location of indoor images and pictures of specific subjects such as pets and food.

The team began by dividing the world into a grid of more than 26,000 squares of varying sizes, with the size of each square depending on how many images were taken in that area.

Places where more images are available, such as big cities, get a finer-grained grid, while areas such as the oceans and the polar regions are ignored completely.
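As a rough illustration of this adaptive grid (the article does not describe Google's actual implementation), one could recursively split any cell that contains too many photos and drop cells that contain almost none. The thresholds and the quadtree-style splitting below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of an adaptive grid: cells with many photos are split
# further, cells with almost none (oceans, polar regions) are dropped.
def build_grid(photos, bounds, max_photos=10000, min_photos=50):
    """photos: list of (lat, lon); bounds: (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bounds
    inside = [(la, lo) for la, lo in photos
              if lat_min <= la < lat_max and lon_min <= lo < lon_max]

    if len(inside) < min_photos:      # too few photos: ignore this area
        return []
    if len(inside) <= max_photos:     # dense enough, but not too dense: keep the cell
        return [bounds]

    # Too many photos: split the cell into four quadrants and recurse, which
    # yields smaller cells in photo-rich places such as big cities.
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    cells = []
    for lat_lo, lat_hi in ((lat_min, lat_mid), (lat_mid, lat_max)):
        for lon_lo, lon_hi in ((lon_min, lon_mid), (lon_mid, lon_max)):
            cells += build_grid(inside, (lat_lo, lat_hi, lon_lo, lon_hi),
                                max_photos, min_photos)
    return cells
```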

After this, the team built a database of geo-located images from the web and used the location data to determine the grid square in which each image was taken.

This data set is huge, containing around 126 million images and their location data.

Roughly 91 million images were used to train the neural network, while the remaining images were used to validate the system.
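In sketch form, and under the simplifying assumption that each grid cell is an axis-aligned box as in the snippet above, labelling the images with their cells and splitting them into training and validation sets might look like this (the function names and the random split are illustrative):

```python
import random

def cell_index(lat, lon, cells):
    """Return the index of the grid cell containing a photo's coordinates."""
    for i, (lat_min, lat_max, lon_min, lon_max) in enumerate(cells):
        if lat_min <= lat < lat_max and lon_min <= lon < lon_max:
            return i
    return None  # photo falls in an ignored region, e.g. the open ocean

def label_and_split(images, cells, train_fraction=91 / 126, seed=0):
    """images: list of (image_path, lat, lon).
    Returns (train, validation) lists of (image_path, cell_index) pairs."""
    labelled = []
    for path, lat, lon in images:
        idx = cell_index(lat, lon, cells)
        if idx is not None:
            labelled.append((path, idx))
    random.Random(seed).shuffle(labelled)
    cut = int(len(labelled) * train_fraction)  # roughly 91M of 126M images for training
    return labelled[:cut], labelled[cut:]
```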

The network, called PlaNet, takes an image as input and outputs the grid cell in which it was most likely taken, or a set of likely candidate cells.
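The article does not give PlaNet's architecture, but the final step of treating geolocation as classification over grid cells can be sketched as a softmax over the network's per-cell scores followed by picking the top candidates (the use of NumPy and the function name here are assumptions for illustration):

```python
import numpy as np

def top_candidate_cells(logits, cells, k=5):
    """logits: 1-D NumPy array of raw network scores, one per grid cell.
    Returns the k most likely cells with their probabilities."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over all grid cells
    best = np.argsort(probs)[::-1][:k]      # indices of the k highest-probability cells
    return [(cells[i], float(probs[i])) for i in best]

# Usage (assuming `model` is the trained network and `photo` a preprocessed image):
# logits = model(photo)
# for cell, p in top_candidate_cells(logits, cells):
#     print(cell, p)
```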
