|A Neural Net Colorizes Photos|
|Written by David Conrad|
|Thursday, 07 April 2016|
We have looked at the problem of auto colorization before, but this new solution produces bright results rather than unsaturated colors. It is almost good enough for real use.
When Google's TensorFlow system was made available, Ryan Dahl had fun seeing how well it could colorize black and white images - see Automatic Colorizing Photos With A Neural Net. The results were surprisingly good given the limited resources thrown at the problem.
Now a team from the University of California at Berkeley has used a similar technique to produce results that fool a human 20% of the time. This may not sound impressive, but it is a big advance on previous methods.
The basic approach is to use a neural network to learn what colors things are. Notice that this is slightly different from the majority of classification problems in that, when it comes to color, some things have more than one correct answer - what color is a shirt, for example? The problem is that with the standard loss functions the network tends to learn the average color, and this is why previous attempts have produced unsaturated results. In general, a network trained in the usual way learns that an object that comes in many colors should be a sludge brown.
To get brighter colors, this new network is trained using a more suitable measure of its error - one based on the probability of each particular color being correct, so colorization is treated as classification over possible colors rather than regression to a single color.
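The difference between the two losses can be sketched with a toy example. Assume the color plane has been quantized into a small set of bins (the paper uses a few hundred; the two bins and the probabilities here are purely illustrative):

```python
import numpy as np

def l2_prediction(probs, bin_colors):
    """Regression-style prediction: the probability-weighted average color.
    With a multimodal distribution this lands between the modes."""
    return probs @ bin_colors

def classification_prediction(probs, bin_colors):
    """Classification-style prediction: pick the most probable color bin,
    so a vivid mode wins instead of being averaged away."""
    return bin_colors[np.argmax(probs)]

# Toy distribution for "shirt" pixels: half are red, half are blue.
bin_colors = np.array([[80.0, 60.0],     # a vivid red in (a, b) coordinates
                       [-60.0, -80.0]])  # a vivid blue
probs = np.array([0.5, 0.5])

print(l2_prediction(probs, bin_colors))              # averages to a dull [10, -10]
print(classification_prediction(probs, bin_colors))  # commits to one vivid color
```

The averaged prediction is exactly the "sludge" that plagued earlier systems; committing to a mode of the distribution is what keeps the output saturated.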
Another problem is that the pixels in most natural scenes are desaturated because of the presence of large areas of background color. Bright colors tend to occur in small localized regions and are hence outnumbered by washed-out colors. This makes the neural network treat washed-out colors as the norm, and so it learns not to be adventurous and bold with color. The solution is to weight the loss function by the rarity of each color, making it more important to match saturated colors.
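One way to implement such rebalancing, sketched here with made-up frequencies, is to weight each color bin by the inverse of a smoothed version of how often it occurs in the training data - the paper's exact scheme differs in detail, but the effect is the same:

```python
import numpy as np

def rebalancing_weights(bin_frequencies, lam=0.5):
    """Weight each color bin by the inverse of a smoothed empirical
    frequency, so rare saturated colors count more in the loss.
    lam blends the true distribution with a uniform one, a common
    smoothing trick that stops very rare bins getting huge weights."""
    q = len(bin_frequencies)
    smoothed = (1 - lam) * bin_frequencies + lam / q
    w = 1.0 / smoothed
    w /= (bin_frequencies * w).sum()  # normalize: expected weight is 1
    return w

# Made-up frequencies: desaturated bins dominate the training pixels.
freqs = np.array([0.90, 0.07, 0.03])  # [grey-ish, mild color, vivid color]
w = rebalancing_weights(freqs)
print(w)  # the rare vivid bin gets the largest weight
```

With these weights, getting a rare saturated pixel wrong costs the network roughly three times as much as getting a common grey one wrong, which is what pushes it toward bolder colorings.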
The neural network is initialized from an existing trained vision network and then trained to predict pixel color using 1.3 million images from ImageNet. As pointed out in the paper, the good thing about the colorization task is that you can get training samples just by reducing any color photo to grey scale.
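That self-supervision trick is easy to sketch: any color image yields a training pair for free, the greyscale version as network input and the original colors as target. The paper actually works in Lab color space, feeding in the L channel and predicting ab; a plain RGB-to-luminance conversion is used here to keep the example dependency-free:

```python
import numpy as np

def make_training_pair(rgb):
    """Turn one color image (H x W x 3, values in [0, 1]) into a
    (greyscale input, color target) training pair - no labels needed."""
    # ITU-R BT.601 luminance weights, a standard RGB-to-grey conversion.
    grey = rgb @ np.array([0.299, 0.587, 0.114])
    return grey, rgb  # network input, ground-truth colors

# Any color photo becomes a training sample:
rgb = np.random.rand(4, 4, 3)
grey, target = make_training_pair(rgb)
print(grey.shape, target.shape)  # (4, 4) (4, 4, 3)
```

This is why a dataset like ImageNet, never intended for colorization, can supply 1.3 million training examples without any manual annotation.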
The method performed well against alternatives and when it got something wrong it was usually because of its tendency to prefer bright colors when none originally existed.
To see how well the network was doing in an objective way, the originals and the network's colorings were shown to test subjects who had to pick which was the original. In 20% of cases the network's reconstruction was preferred. It seems sometimes reality just isn't colorful enough:
Finally, the technique was tried on some classic photos. Perhaps the most impressive - and, if you like the photography of Ansel Adams, disturbing - are these colorized landscapes:
It is interesting to note that Adams tended to use a red filter, making blue much darker in his pictures.
In many ways the network did a lot better than the 20% figure suggests, as the majority of the colorings were reasonable and comparing them to the ground truth is too strict a test for many purposes. This seems good enough to be used for real tasks.
Colorful Image Colorization by Richard Zhang, Phillip Isola and Alexei A. Efros
Automatic Colorizing Photos With A Neural Net
TensorFlow - Google's Open Source AI And Computation Engine
Microsoft Wins ImageNet Using Extremely Deep Neural Networks
Baidu AI Team Caught Cheating - Banned For A Year From ImageNet Competition
The Allen Institute's Semantic Scholar
Removing Reflections And Obstructions From Photos
See Invisible Motion, Hear Silent Sounds Cool? Creepy?
Computational Camouflage Hides Things In Plain Sight
Google Has Software To Make Cameras Worse