Taking a look around his nose and mouth, we see that our method doesn’t have the magenta blocks and noise in the middle of the image as seen in JPEG. This is due to the blocking artifacts produced by JPEG, whereas our compression network works on the entire image at once. However, there is a tradeoff: in our model the details of the whiskers and texture are lost, but the system shows great promise in reducing artifacts.
|//No Comment - Cardboard Camera, Image Compression With NN & Anemia Detection|
|Written by David Conrad|
|Sunday, 09 October 2016|
• Capture and share VR photos with Cardboard Camera, now on iOS
• Image Compression with Neural Networks
• How to Make a Smartphone Detect Anemia
Sometimes the news is reported well enough elsewhere and we have little to add other than to bring it to your attention.
No Comment is a format where we present original source information, lightly edited, so that you can decide if you want to follow it up.
With Cardboard Camera—now available on iOS as well as Android—you can capture 3D 360-degree virtual reality photos. Just like Google Cardboard, it works with the phone you already have with you.
VR photos taken with Cardboard Camera are three-dimensional panoramas that can transport you right back to the moment. Near things look near and far things look far. You can look around to explore the image in all directions, and even hear sound recorded while you took the photo to hear the moment exactly as it happened. To capture a VR photo, hold your phone vertically, tap record, then turn around as though you’re taking a panorama.
Google researchers have been hard at work finding ways to compress images with neural networks:
In "Full Resolution Image Compression with Recurrent Neural Networks", we expand on our previous research on data compression using neural networks, exploring whether machine learning can provide better results for image compression like it has for image recognition and text summarization. Furthermore, we are releasing our compression model via TensorFlow so you can experiment with compressing your own images with our network.
Our system works by iteratively refining a reconstruction of the original image, with both the encoder and decoder using Residual GRU layers so that additional information can pass from one iteration to the next. Each iteration adds more bits to the encoding, which allows for a higher quality reconstruction.
The basic idea is that the network learns to reproduce the encoding errors so that it can make the encoding more accurate at the next pass. Eventually the data capacity of the network is reached and the result cannot be improved.
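The idea of iteratively encoding what the previous pass got wrong can be illustrated without a neural network at all. The toy sketch below (my own illustration, not the paper's GRU model) quantizes the residual more and more finely on each pass, so every added "code" improves the reconstruction, mirroring the coarse-to-fine refinement described above:

```python
import numpy as np

def iterative_residual_code(image, iterations=4):
    """Toy coarse-to-fine residual coder (illustrative only).

    Each pass encodes a quantized version of the current residual;
    summing the passes gives a progressively better reconstruction,
    analogous to how each GRU iteration adds bits to the encoding.
    """
    reconstruction = np.zeros_like(image, dtype=float)
    codes = []
    for step in range(iterations):
        step_size = 2 ** (iterations - step - 1)     # 8, 4, 2, 1 for 4 passes
        residual = image - reconstruction            # what is still missing
        quantized = np.round(residual / step_size) * step_size
        codes.append(quantized)                      # the "bits" sent this pass
        reconstruction += quantized                  # refine the estimate
    return codes, reconstruction

img = np.array([[12.0, 7.0], [3.0, 9.0]])
codes, recon = iterative_residual_code(img)
# after the final (step size 1) pass the integer image is recovered exactly
```

In the real system the quantizer is replaced by learned Residual GRU layers, and the capacity limit the article mentions corresponds to the point where further passes no longer shrink the residual.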
So how good is it?
To demonstrate file size and quality differences, we can take a photo of Vash, a Japanese Chin, and generate two compressed images, one JPEG and one Residual GRU.
Both images target a perceptual similarity of 0.9 MS-SSIM, a perceptual quality metric that reaches 1.0 for identical images. The image generated by our learned model results in a file 25% smaller than JPEG.
Left: Original image (1419 KB PNG) at ~1.0 MS-SSIM. Center: JPEG (33 KB) at ~0.9 MS-SSIM. Right: Residual GRU (24 KB) at ~0.9 MS-SSIM. This is 25% smaller for comparable image quality.
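To make the 0.9 MS-SSIM target concrete: MS-SSIM is built from single-scale SSIM scores computed at several resolutions. The sketch below implements a simplified, global-statistics version of single-scale SSIM (the standard metric uses local windows and, for MS-SSIM, a weighted product over scales), which is enough to see why identical images score 1.0:

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Simplified single-scale SSIM using whole-image statistics.

    The full MS-SSIM metric averages windowed SSIM over multiple
    downsampled scales; this global version just shows the formula:
    luminance, contrast, and structure terms with stabilizers c1, c2.
    """
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
noisy = img + rng.normal(0, 25, img.shape)
# an image compared with itself scores exactly 1.0; a degraded copy scores lower
```

A score of ~0.9, as in the comparison above, therefore means both compressed images are perceptually close to, but measurably different from, the original.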
A new way of detecting anemia, a condition caused by a lack of oxygen-carrying red blood cells, using a smartphone camera hints at how such devices might be used to provide early warning of an illness without the need for expensive equipment or a hospital visit.
Researchers at the University of Washington will present a simple anemia-tracking technique using a smartphone and a light source at a conference later this month. Their tests suggest the device’s accuracy rivals that of an off-the-shelf, FDA-approved anemia test. The technology was developed in the lab of Shwetak Patel, a professor in the university’s electrical engineering department.
|Last Updated ( Monday, 10 October 2016 )|