Google has an algorithm that can take a photo with a large depth of field and create something that looks as if it had been taken with a wide aperture on a professional camera. Exactly how this is done is an amazing application of advanced image processing using a 3D model.
One of the wonders of the modern mobile phone camera is that it has a huge depth of field. But apparently blurry bits are better, because they look like the output of old analog or high-end digital cameras. So while one end of the computational photography team labours to create photos with infinite depth of field, Google has found a way to restrict it - and all of the computation can be performed on the mobile device as part of the Google Camera app.
Depth of field roughly translates to the range of distances over which a lens produces a sharp image. If you focus on, say, 10ft, you can't expect everything from 0 to infinity to be in focus - if it was, why bother focusing at all? In practice, with real lenses, only a small interval around the distance you have focused on is sharp.
One of the things that the budding photographer has to learn is how lenses produce different depths of field depending on how small the aperture is. The primary purpose of changing the aperture, the size of the hole that the light passes through, is to control the amount of light passing through the lens, but for serious photographers the secondary effect is almost as important: changing the aperture changes the depth of field. A pinhole lens has a theoretically infinite depth of field. As the size of the aperture increases, the depth of field decreases.
The only complication is that the size of the aperture is measured by the f-number, which gets bigger as the aperture gets smaller. So small f-numbers, like f3 or f2, give a very small depth of field, and big f-numbers, like f11 and f22, give a very big depth of field.
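You can put numbers on this using the standard thin-lens depth-of-field formulas. The sketch below assumes a 0.03 mm circle of confusion, a common convention for full-frame sensors; the function name and the example figures are just for illustration:

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (thin-lens model).

    coc_mm is the 'circle of confusion' - 0.03 mm is a common assumption
    for a full-frame sensor; a phone sensor would use a much smaller value.
    """
    f = focal_mm
    s = subject_m * 1000.0                   # subject distance in mm
    H = f * f / (f_number * coc_mm) + f      # hyperfocal distance in mm
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near / 1000.0, far / 1000.0       # back to metres

# A 50 mm lens focused at 3 m: wide open at f2 less than half a metre
# is sharp; stopped down to f22 the sharp zone stretches for many metres.
print(depth_of_field(50, 2, 3))
print(depth_of_field(50, 22, 3))
```

Running the two example calls shows exactly the trade-off described above: the f22 shot keeps an enormous range in focus, while the f2 shot isolates a thin slice around the subject.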
Traditional analog film cameras have lenses that have a range of f-numbers that typically go from f2 to f22 and this means that you can take a photo of a person using an aperture of f2 and have the person in focus and the background out of focus. This is a desirable effect and being able to control depth of field is the mark of a good and creative photographer.
Enter the phone camera, which makes use of a very small sensor and hence lenses with very large f-numbers. This is great for the not-so-keen photographer because you don't have to focus much. The very large depth of field means that most things are in focus - and this is also the problem if you do care about photography.
There isn't much you can do physically to make a camera have a smaller depth of field - other than change the lens, and for a phone camera this isn't usually an option. This is why Google has invented an algorithm that puts the blur back into the picture.
Applying a blur is very easy - it is a simple filtering operation, and a Gaussian filter closely mimics the action of an out-of-focus lens. The problem isn't applying a blur, it is where to apply it. To simulate depth of field you need to estimate how far away each pixel in the image is and apply a blur whose strength grows with that pixel's distance from the focal plane.
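The blur really is the easy part. As a minimal sketch, a single call to scipy's Gaussian filter softens a synthetic test image, with a larger sigma standing in for a wider simulated aperture:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A sharp synthetic "image": a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# One filter call stands in for defocus; increasing sigma simulates
# opening the aperture wider.
soft = gaussian_filter(img, sigma=3.0)
```

The filter spreads each pixel's brightness over its neighbours without creating or destroying light, which is what makes it a reasonable stand-in for an out-of-focus lens. The hard part, as the article says, is knowing how much of this to apply where.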
Now you can see the problem - how do you work out the depth of each pixel in the image when what you have is a 2D camera that has no depth information?
The answer is that the camera takes more than one image, and the way that objects move between frames is used to infer the camera's position and orientation. Then the multiple images are fed to a standard Multi-View Stereo algorithm, which works out the depth at each position in the image by triangulation.
This is how you perceive depth using two slightly offset cameras - i.e. your eyes. Finally the whole image is processed to optimize the depth map so that pixels that are similar are assigned similar depths.
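The triangulation at the heart of this is simple to state. For a rectified pair of views, a point that shifts d pixels between them - its disparity - lies at depth Z = fB/d, where f is the focal length in pixels and B is the baseline between the two camera positions. The numbers below are purely illustrative, not Google's actual pipeline:

```python
# Depth by triangulation from two offset views: nearby objects shift
# more between the frames than distant ones, so depth is inversely
# proportional to the observed disparity.
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.01):
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(20.0))   # a nearby point: large shift
print(depth_from_disparity(2.0))    # a distant point: small shift
```

With the assumed 1 cm baseline and 1000-pixel focal length, a 20-pixel shift puts a point at half a metre while a 2-pixel shift puts it at five metres - the same geometry your eyes exploit.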
Photo and its depth map - darker is closer.
The depth map that results is stored in the photo's metadata so that it can be used by post-processing software. It is then fairly easy to apply a blur to each pixel according to its inferred depth.
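One common way to do this - a sketch of the idea only, not Google's actual implementation, and `simulated_aperture` is a hypothetical name - is to quantize the depth map into layers, blur each layer by an amount that grows with its distance from the chosen focal plane, and blend the results:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulated_aperture(img, depth, focal_depth, strength=4.0):
    """Blend per-depth Gaussian blurs: pixels far from the focal plane
    get a stronger blur. Illustrative sketch, not the app's algorithm."""
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for d in np.unique(np.round(depth, 1)):      # quantized depth layers
        sigma = strength * abs(d - focal_depth)  # blur grows with defocus
        mask = (np.round(depth, 1) == d).astype(float)
        if sigma < 1e-3:
            out += img * mask                    # in-focus layer: untouched
            weight += mask
        else:
            out += gaussian_filter(img * mask, sigma)
            weight += gaussian_filter(mask, sigma)
    return out / np.maximum(weight, 1e-6)        # normalize the blend
```

In these terms, tapping the image to set the focal plane corresponds to choosing `focal_depth`, and the aperture slider corresponds to `strength`.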
It is slightly amazing that this algorithm can be packaged into a mobile app and provide the user with the ability to modify the simulated aperture using a slider and set the focal plane by tapping on the image.
So now a mobile phone camera can produce results that look like an upmarket Digital SLR.
It is a great piece of work, but there is something strange about using computational photography to restore the physical defects of real-world lenses. It is almost on a par with the digital recreation of effects like lens flare and Kodachrome colors.
In the world of photography better is not always desirable.