//No Comment - Lensless Photography, Automatic Studio & Better Astro Shots
Written by David Conrad   
Friday, 24 February 2017

• Lensless Photography with only an image sensor

• StyleShoots Live smart studio doesn’t need a photographer

• Neural Networks Improve Astronomy Pics


Sometimes the news is reported well enough elsewhere and we have little to add other than to bring it to your attention.

No Comment is a format where we present original source information, lightly edited, so that you can decide if you want to follow it up. 


Lensless Photography with only an image sensor

Computational photography is replacing many physical components of cameras with image manipulation, but surely the final step is to remove the need for a lens. Why bother focusing all that light when you can simply compute the image:

Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras.

Here, we report a simple demonstration of imaging using a bare CMOS sensor that utilizes computation. The technique relies on the space variant point-spread functions resulting from the interaction of a point source in the field of view with the image sensor.

These space-variant point-spread functions are combined with a reconstruction algorithm in order to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging at the native frame rate of the sensor. Finally, we performed experiments to analyze the parametric impact of the object distance. Improving the sensor designs and reconstruction algorithms can lead to useful cameras without optics.
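The reconstruction idea in the abstract can be sketched as a linear inverse problem: if each scene point produces its own distinct sensor pattern (its space-variant PSF), a calibration matrix built from those patterns lets you recover the scene from a bare-sensor measurement. The following is a minimal toy sketch, not the paper's actual method; the calibration matrix here is a random stand-in and all sizes and parameters are illustrative.

```python
# Toy sketch of PSF-based lensless reconstruction.
# Column i of the calibration matrix A holds the sensor pattern (PSF)
# produced by scene point i, so a measurement is y = A @ x and the scene
# x is recovered by regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

n_scene = 16    # scene points, e.g. a 4x4 LED array
n_sensor = 64   # sensor pixels

# Calibration: one measured PSF per scene point (random stand-ins here)
A = rng.normal(size=(n_sensor, n_scene))

# A simple test scene: two lit LEDs
x_true = np.zeros(n_scene)
x_true[[3, 10]] = 1.0

# Sensor measurement with a little noise
y = A @ x_true + 0.01 * rng.normal(size=n_sensor)

# Tikhonov-regularized least squares: x = (A^T A + lam*I)^-1 A^T y
lam = 1e-3
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print(np.argsort(x_rec)[-2:])  # indices of the brightest recovered points
```

In the real system the columns of A come from measuring the sensor response to point sources across the field of view, which is why the space-variant character of the PSFs matters: identical columns would make the problem unsolvable.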




StyleShoots Live smart studio doesn’t need a photographer

This is another level of computational photography. Usually we see the technology helping the photographer to do better, but this automated studio can do away with the need for a photographer:



StyleShoots Live is an all-in-one “smart studio” designed to provide both stills and video of brands shooting their latest apparel on models in one large steel enclosure. With advanced robotics and AI technology, the machine handles all of the technical duties that would usually be performed by a camera crew - such as setting up shots and lighting. It allows for instant review of stills and video with incredible production speed.



A motorized camera head with three axis movement uses a 4K capable Canon 1DX Mk II and a 3D depth sensor, controlled by the system’s Style Engine™. The proprietary software controls the movements, camera and lights to produce the desired footage based on fully customizable styles.


Neural Networks Improve Astronomy Pics

Neural networks can be trained to fill in missing details in images, but what about improving astronomical imaging?

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data.

Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon–Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results.

We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

Basically the network is trained to improve images using a training set that has been artificially degraded by adding the sort of noise you find in a telescope/atmosphere system. Is this crossing the line of objectivity? With deconvolution or speckle interferometry the astronomer knows what the process is, but in the case of a neural network it is necessary to trust that performance on the test set transfers to the real data. This is a problem common to the introduction of neural networks as a statistical technique: there are no confidence intervals or significance levels.
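The artificial degradation step might look something like the following sketch: take a clean galaxy image, apply a Gaussian blur to simulate worse seeing, and add detector-like noise, yielding a (degraded, clean) pair for the network to learn from. The blur width and noise level here are illustrative guesses, not the paper's actual parameters, and the "galaxy" is a toy Gaussian blob.

```python
# Hedged sketch of building training pairs for an image-recovery network:
# degrade each clean image with a Gaussian blur (worse "seeing") plus
# additive noise, then train the network to map degraded -> clean.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def degrade(img, seeing_sigma=2.0, noise_sigma=0.05):
    """Simulate worse atmospheric seeing and detector noise."""
    blurred = gaussian_filter(img, sigma=seeing_sigma)  # PSF broadening
    return blurred + rng.normal(scale=noise_sigma, size=img.shape)

# A toy "galaxy": a bright Gaussian blob on a dark background
yy, xx = np.mgrid[0:64, 0:64]
clean = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)

pair = (degrade(clean), clean)  # one (input, target) training example
print(pair[0].shape, pair[1].shape)
```

The point made above still stands: the network only ever sees degradations the experimenter chose to simulate, so its behaviour on real telescope data has to be taken on trust.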








To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook, Google+ or Linkedin.



Email your comment to: comments@i-programmer.info






Last Updated ( Saturday, 25 February 2017 )