Google has open-sourced the AI-based camera technology powering the Pixel 2's Portrait mode. The new Pixel phones use Google's semantic image segmentation model, DeepLab-v3+, which has now been released as part of TensorFlow, Google's open-source machine learning library.
The search giant announced the release in a blog post: "This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment."
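Because the models ship in TensorFlow's open-source repository, trying one out takes only a short script. Below is a minimal sketch, assuming a TensorFlow 1.x environment and one of the released frozen inference graphs; the file and photo names are placeholders, and the tensor names follow the project's demo notebook but should be treated as assumptions that may differ between checkpoints.

```python
# Minimal sketch: run a frozen DeepLab-v3+ graph on one image.
# Assumptions: TensorFlow 1.x; a downloaded frozen inference graph;
# tensor names as used in the official demo notebook.
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PB = "deeplabv3_frozen_inference_graph.pb"  # placeholder path

# Load the frozen graph into a fresh TensorFlow graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# The model expects a uint8 RGB image with a leading batch dimension.
image = np.asarray(Image.open("photo.jpg").convert("RGB"))

with tf.Session(graph=graph) as sess:
    # Output is a (1, height, width) map of integer class labels,
    # one label per pixel ("person", "dog", "sky", ...).
    seg_map = sess.run(
        "SemanticPredictions:0",
        feed_dict={"ImageTensor:0": image[np.newaxis, ...]},
    )[0]

print(seg_map.shape, np.unique(seg_map))  # which classes appear in the photo
```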
How does it work?
As seen on the Pixel 2, the technology helps the camera produce the synthetic shallow depth-of-field effect of Portrait mode. The blog post explains how a semantic image segmentation model enables a phone camera to achieve a DSLR-like bokeh effect: it assigns a semantic label, such as "road", "sky", "person" or "dog", to every pixel in an image. Assigning these labels requires pinpointing the outlines of objects, so the model must meet strict localisation accuracy requirements, which is crucial for a good portrait shot.
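To make the link between per-pixel labels and bokeh concrete, here is a toy Python sketch, not Google's production pipeline: given a label map such as the one produced above, keep the pixels labelled as the subject sharp and blur everything else. The PERSON_CLASS index (15, as in the PASCAL VOC label set) and the helper name are illustrative assumptions.

```python
# Toy approximation of Portrait mode's background blur, assuming a
# (height, width) integer label map whose size matches the image.
import numpy as np
from PIL import Image, ImageFilter

PERSON_CLASS = 15  # "person" in the PASCAL VOC label set (assumption)

def synthetic_bokeh(image: Image.Image, seg_map: np.ndarray) -> Image.Image:
    """Blur everything except pixels labelled as the subject."""
    # Binary mask: 255 where the pixel is the subject, 0 elsewhere.
    mask = Image.fromarray(((seg_map == PERSON_CLASS) * 255).astype(np.uint8))
    # Blur the whole frame, then composite the sharp subject back in,
    # approximating the background blur of a wide-aperture lens.
    blurred = image.filter(ImageFilter.GaussianBlur(radius=12))
    return Image.composite(image, blurred, mask)
```

This is why the post stresses localisation accuracy: any pixels mislabelled along the subject's outline end up on the wrong side of the mask, and the blur boundary betrays the effect.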
The technology is powerful enough to compensate for the lack of a secondary depth sensor: the single-camera Pixel 2 has held its own against phones that ship with dual cameras at producing natural-looking bokeh pictures.
The post also notes that image segmentation systems have improved drastically over the last couple of years thanks to newer methods, hardware and datasets, and that by open-sourcing DeepLab-v3+, Google hopes academia and industry will build on it and develop new applications.