The new potential of smartphone images
Can you only take good pictures with a good camera? Sure, the equipment plays an important role. But a good camera alone is no guarantee of a good photo; knowledge of its functions and creativity matter just as much.
But good photos with a smartphone camera? That is material for many discussions, and one disadvantage is mentioned repeatedly: smartphone images lack depth of field. Almost all image areas are sharp, so the photos look flat and the famous bokeh effect is impossible to create. Traditionally, shallow depth of field requires large lenses as well as control over the aperture.
In the age of computational photography, modern technologies like dual lenses are combined with machine learning and artificial intelligence to achieve with smartphones and apps what was previously only possible through optical processes, for example in DSLR cameras.
Depth data plays an important role here. Many smartphones and apps no longer just take pictures; they also collect additional data and use it for digital image processing. Depth information is crucial because the blur of an image area depends on its distance from the camera lens.
A so-called depth map is created from the collected data. With a depth map, it is possible to tell what is in the foreground and what is in the background of an image, and whether one object is positioned in front of or behind another.
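To make this concrete, here is a minimal sketch of how a depth map can separate foreground from background. The depth values and the threshold are invented for illustration; real cameras produce dense per-pixel depth, not a tiny grid like this.

```python
# Hypothetical sketch: a depth map as a 2D grid of distances (in metres).
# Thresholding it yields a foreground/background mask. The values and the
# threshold here are made up for illustration, not from any real camera API.

def foreground_mask(depth_map, threshold):
    """Return True wherever a pixel is closer to the lens than `threshold`."""
    return [[d < threshold for d in row] for row in depth_map]

depth_map = [
    [0.8, 0.9, 4.0],   # a close subject on the left, background on the right
    [0.7, 1.0, 5.2],
]

mask = foreground_mask(depth_map, threshold=2.0)
# Pixels closer than 2 m count as foreground, the rest as background.
```

With such a mask, an app can treat the two regions differently, for example keeping the foreground sharp while processing only the background.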
This makes it possible to mimic the effect of depth of field after the fact: to blur distant objects and the background, and even to control the amount of blur. The focus point or the lighting of the scene can also be changed after the photo is taken. Instead of flat 2D images, images with a 3D effect can be achieved.
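The idea of depth-dependent blur can be sketched in a few lines. This simplified 1-D example (real images are 2-D, and real apps use smoother blur kernels) grows a box-blur radius with each pixel's distance from a chosen focal plane; the pixel values, depths and `strength` parameter are assumptions for illustration only.

```python
def synthetic_bokeh(pixels, depths, focus_depth, strength=1.0):
    """Blur each pixel by an amount that grows with its distance from the
    focal plane: a simple 1-D box blur with a per-pixel radius."""
    out = []
    for i, (p, d) in enumerate(zip(pixels, depths)):
        radius = int(abs(d - focus_depth) * strength)  # blur radius per pixel
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]                         # neighbours to average
        out.append(sum(window) / len(window))
    return out

pixels = [10, 10, 10, 200, 200, 200]     # brightness values of a 1-D "image"
depths = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]  # subject at 1 m, background at 5 m
result = synthetic_bokeh(pixels, depths, focus_depth=1.0)
# Pixels at the focal plane get radius 0 and stay sharp; background pixels
# are averaged with their neighbours and soften.
```

Because the radius is computed per pixel from the depth map, changing `focus_depth` after the fact shifts which region stays sharp, which is exactly the post-capture refocusing described above.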
Many smartphones and apps already use depth map technology with promising results, such as the latest iPhones (7 Plus, 8 Plus and X), the Google Pixel 2, and apps like Portrait, Google Camera, img.ly and Focos.
However, the technique also has limitations. If objects are too far from the camera, their depth cannot be determined. How far the focus can be changed afterwards depends heavily on the image and is nowhere near as flexible as at the moment of shooting: if the focus is on a subject in the foreground, for example, shifting it to the background is only possible to a limited extent. And with the far more complex lenses of a DSLR, you can still get better and smoother results in terms of depth of field and focus.
The potential behind these technologies is definitely great, and we are curious what further benefits the new depth AIs will bring in the future.
Whether you take your pictures with a camera or a smartphone, professional image editing and retouching should not be missing.
Br24 – Your partner for image editing and more in the highest quality!
Edit images anytime and anywhere? Adobe is planning a full Photoshop version for the iPad: find out more.