Apple launches Deep Fusion feature in beta on iPhone 11 and iPhone 11 Pro

Apple is launching an early look at its new Deep Fusion feature on iOS today with a software update for beta users. Deep Fusion is a technique that blends multiple exposures together at the pixel level to give users a higher level of detail than is possible using standard HDR imaging — especially in images with very complicated textures like skin, clothing or foliage.

The developer beta released today supports the iPhone 11, where Deep Fusion will improve photos taken on the wide camera, and the iPhone 11 Pro and Pro Max, where it will kick in on the telephoto and wide-angle lenses but not the ultra-wide.

According to Apple, Deep Fusion requires the A13 and will not be available on any older iPhones. 

As I discussed extensively in my review of the iPhone 11 Pro, Apple’s ‘camera’ in the iPhone is really a collection of lenses and sensors whose output is aggressively processed by dedicated machine learning software running on specialized hardware. Effectively, a machine learning camera.

Deep Fusion is a fascinating technique that extends Apple’s philosophy of photography as a computational process out to its next logical frontier. Since the iPhone 7 Plus, Apple has been blending output from the wide and telephoto lenses to provide the best result, a process that happens without the user ever being aware of it.

Deep Fusion continues in this vein. It will automatically take effect on images that are taken in specific situations.

On wide-lens shots, Deep Fusion starts to be active just above the roughly 10 lux floor where Night Mode kicks in; the top of the range where it stays active varies depending on the light source. On the telephoto lens, it is active in all but the brightest situations, where Smart HDR takes over and provides a better result thanks to the abundance of highlights.
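To make that decision logic concrete, here is a minimal sketch in Swift. The mode and lens names mirror Apple's descriptions, but the lux thresholds, types and function are assumptions invented for illustration; Apple has not published the actual cutoffs, which also vary with the light source.

```swift
// A minimal sketch of the capture-mode selection described above.
// The lux thresholds below are invented placeholders, not Apple's values.

enum Lens { case ultraWide, wide, telephoto }
enum CaptureMode { case nightMode, deepFusion, smartHDR }

func captureMode(for lens: Lens, sceneLux: Double) -> CaptureMode {
    switch lens {
    case .ultraWide:
        // Deep Fusion does not use the ultra-wide lens at all.
        return .smartHDR
    case .wide:
        // Night Mode below the ~10 lux floor, Deep Fusion just above it,
        // Smart HDR once the scene is bright (upper bound varies by source).
        if sceneLux < 10 { return .nightMode }
        return sceneLux < 600 ? .deepFusion : .smartHDR   // 600 is a guess
    case .telephoto:
        // Active in all but the brightest scenes, where Smart HDR takes over.
        return sceneLux < 2_000 ? .deepFusion : .smartHDR // 2_000 is a guess
    }
}
```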

Apple provided a couple of sample images showing Deep Fusion in action, which I’ve embedded here. It has not provided any non-Deep Fusion comparison shots yet, but we’ll see those as soon as the beta gets out and people install it.

Deep Fusion works this way:

The camera shoots a ‘short’ frame at a negative EV value, basically a slightly darker image than you’d like, and pulls sharpness from this frame. It then shoots three regular EV0 photos and a ‘long’ EV+ frame, registers them for alignment and blends them all together.
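Modeled in code, the bracket Apple describes looks roughly like this. The frame counts match Apple's description; the `Frame` type, the exact EV offsets and the role labels are assumptions made for illustration.

```swift
// Illustrative model of the Deep Fusion exposure bracket described above.
// The exact EV offsets are not published by Apple; -1.0 and +1.0 are guesses.

struct Frame {
    let evOffset: Double   // exposure bias relative to the metered EV0
    let role: String
}

let deepFusionBracket: [Frame] = [
    Frame(evOffset: -1.0, role: "short: darker frame, source of sharpness"),
    Frame(evOffset: 0.0, role: "regular EV0 frame"),
    Frame(evOffset: 0.0, role: "regular EV0 frame"),
    Frame(evOffset: 0.0, role: "regular EV0 frame"),
    Frame(evOffset: 1.0, role: "long: brighter frame, source of tonality"),
]
```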

This produces two 12MP photos, which are combined into one 24MP photo. The combination is done using four separate neural networks that take into account the noise characteristics of Apple’s camera sensors as well as the subject matter in the image.

This combination is done on a pixel-by-pixel basis, pulling one pixel at a time to produce the best combination for the overall image. The machine learning models look at the context of the image to determine where each region belongs on the image frequency spectrum: sky and other broadly uniform areas at the low-frequency end, skin tones in the medium-frequency zone, and fine-detail subjects like clothing and foliage at the high-frequency end.

The system then pulls structure and tonality from one image or the other in ratios that depend on those frequency bands.
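A toy version of that per-pixel, frequency-weighted blend might look like the following. The variance thresholds and blend ratios are invented for illustration; Apple's actual four-network pipeline is far more sophisticated and works on full sensor data, not single luminance values.

```swift
// Toy per-pixel blend in the spirit of Apple's description: classify each
// pixel by local image frequency, then mix structure (from the sharp,
// short frame) and tonality (from the long frame) in a band-dependent
// ratio. All thresholds and ratios below are invented for illustration.

enum FrequencyBand { case low, medium, high }

func band(forLocalVariance v: Double) -> FrequencyBand {
    // Smooth regions like sky have low local variance; fine textures
    // like fabric and foliage have high local variance.
    if v < 0.01 { return .low }
    if v < 0.10 { return .medium }
    return .high
}

func blendedPixel(short: Double, long: Double, localVariance: Double) -> Double {
    // Ratio of structure (short frame) to tonality (long frame).
    let structureWeight: Double
    switch band(forLocalVariance: localVariance) {
    case .low:    structureWeight = 0.2  // sky: favor smooth tonality
    case .medium: structureWeight = 0.5  // skin: balance the two
    case .high:   structureWeight = 0.8  // clothing, foliage: favor detail
    }
    return structureWeight * short + (1 - structureWeight) * long
}
```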

The overall result, Apple says, is better skin transitions, better clothing detail and better crispness at the edges of moving subjects.

There is currently no way to turn off the Deep Fusion process. But because the ‘over crop’ feature of the new cameras uses the ultra-wide lens, a small ‘hack’ to see the difference between the images is to turn that feature on: doing so disables Deep Fusion, which does not use the ultra-wide lens.

The Deep Fusion process takes around one second. If you quickly shoot and then tap a preview of the image, it could take around half a second for the image to update to the finished version. Most people won’t notice the process happening at all.
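That silent swap is a classic proxy-then-final pattern: show a quickly available preview immediately, then replace it when the full-quality render lands. A minimal sketch of the idea, with function names and timings that are illustrative assumptions rather than Apple's actual API:

```swift
import Dispatch
import Foundation

// Illustrative proxy-then-final pattern: hand back a fast preview right
// away, then deliver the fully processed result about a second later.

func capturePhoto(onPreview: @escaping (String) -> Void,
                  onFinal: @escaping (String) -> Void) {
    onPreview("proxy.jpg")  // shown immediately in the camera UI
    // Deep Fusion processing takes roughly a second in the background.
    DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
        onFinal("deep-fusion.jpg")  // silently swapped in when ready
    }
}

capturePhoto(
    onPreview: { print("showing \($0)") },
    onFinal: { print("updated to \($0)") }
)

// Keep the script alive long enough for the final callback to fire.
RunLoop.main.run(until: Date().addingTimeInterval(2))
```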

As to how it works IRL? We’ll test it and report back as Deep Fusion becomes available.

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.