|DeOldify - Auto Colorization|
|Written by David Conrad|
|Saturday, 03 November 2018|
Yes, it's a neural network and it colors old black-and-white shots to make them look good - to deoldify them. What is amazing about this particular effort is not just that it seems to work well, but that it's an amateur (in the best sense of the word) effort.
You don't need to be an academic or have the support of a big company to get into AI. All you need is know-how and the application of a lot of effort. Jason Antic, who describes himself as:
Software guy, currently digging deep into GANs to do some cool photo colorization and restoration!
has been doing the sort of work that usually needs a team of people. His only help seems to be a 1080 Ti GPU, and even then it takes two or three days to train a new model. The description of the design of the neural network for his DeOldify project is a bit vague, but it seems to be a GAN (Generative Adversarial Network) with lots of customization. A lot of the fun in this project seems to come from finding out what really works.
Before you get too critical of the results, you need to keep in mind that there is no human intervention in this process. It's all up to the neural network and, what is more, there is no right answer, in the sense that for these old photos we really don't know what color anything actually was - only what sorts of colors objects of particular types tend to be. This is presumably what the neural network is learning, and it is a difficult task.
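To give a flavor of the idea, here is a minimal sketch of adversarial training for colorization - a generator maps a grayscale image to a color one, while a discriminator learns to tell real color photos from generated ones. This is a generic toy in PyTorch, not DeOldify's actual architecture, which involves far more customization; the network sizes and random tensors standing in for photos are purely illustrative.

```python
import torch
import torch.nn as nn

# Toy generator: maps a 1-channel grayscale image to a 3-channel color image.
G = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
# Toy discriminator: scores an image as real (1) or generated (0).
D = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

gray = torch.rand(4, 1, 32, 32)   # stand-in for black-and-white photos
real = torch.rand(4, 3, 32, 32)   # stand-in for real color photos

# Discriminator step: real images labeled 1, generated images labeled 0.
fake = G(gray).detach()
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label its output as real.
g_loss = bce(D(G(gray)), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(d_loss), float(g_loss))
```

The two networks improve against each other: the discriminator supplies the "does this look like a real color photo?" signal that no hand-written loss function could easily provide.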
OK, the fairy's wing color has bled into the surroundings - but what color should a fairy be?
There are lots of examples available on GitHub and it is freely admitted that these are some of the best results. That is, the output of the network is being cherry-picked. Is this so bad? In many areas of AI research there is a sneaking suspicion that researchers are making it seem that their network is doing better than it really is by only showing the successes. In this case this isn't a problem.
This is great - but notice the red hand. What caused the neural network to see the hand as something that wasn't a hand - a purse perhaps?
The object of the exercise is to create good-looking photos - photos that pass the human "looks good" test. If some don't, then perhaps that's just the way it is and we can try again. It is amazing that a neural network can do this job at all, and the errors are instructive.
At the moment it seems that the model loves to color clothes blue. Perhaps one reason is that it has been trained on panchromatic black-and-white images (where colors render with their true relative brightness) while the old images are orthochromatic (where red renders darker than blue of the same brightness).
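The effect of the two film types can be sketched with grayscale conversion weights. The panchromatic weights below are the standard luminance coefficients; the orthochromatic weights are illustrative assumptions, chosen only to show red-insensitive film recording red as much darker:

```python
import numpy as np

# Two pixels of equal channel intensity: one pure red, one pure blue.
red  = np.array([200.0, 0.0, 0.0])
blue = np.array([0.0, 0.0, 200.0])

# Panchromatic film responds across the visible spectrum, roughly like
# the standard ITU-R BT.601 luminance weights used for grayscale today.
pan_weights = np.array([0.299, 0.587, 0.114])

# Orthochromatic film is nearly blind to red, so red contributes very
# little to the recorded brightness (weights here are illustrative).
ortho_weights = np.array([0.05, 0.70, 0.25])

def to_gray(pixel, weights):
    """Record a color pixel as a single brightness value."""
    return float(pixel @ weights)

print(to_gray(red, pan_weights), to_gray(blue, pan_weights))
print(to_gray(red, ortho_weights), to_gray(blue, ortho_weights))
```

On the orthochromatic weights the red pixel comes out much darker than the blue one, the reverse of the panchromatic case - so a network trained on modern conversions sees brightness-to-color relationships that old photographs don't obey.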
Here is one of my favourites:
And for the future the aim is to not just colorize but to enhance:
You can download the code, which has an MIT licence, and get started on your own colorizations, but you will have to train the network yourself. It is promised that the coefficients of the model will be published at some point in the future, but for the moment you can't just play without doing some work.
The whole question of releasing the coefficients of a model is a difficult one, given the recent case of the painting generated by a GAN that sold for nearly half a million dollars. The network was borrowed from another programmer and then simply used to make money and reputation.
Let's not end on a sour note. You can do worthwhile AI without a team and with just a good GPU. Without open source this would be much more difficult.