
The iPhone 11 and iPhone 11 Pro (Max) have been on sale for two weeks now, but they still lack one of their most interesting features: Deep Fusion. According to the latest reports, however, Apple has the feature ready and will soon offer it in an upcoming iOS 13 beta, most likely iOS 13.2.

Deep Fusion is the name of the new image-processing system for iPhone 11 (Pro) photography, which makes full use of the capabilities of the A13 Bionic processor, specifically its Neural Engine. With the help of machine learning, the captured photo is processed pixel by pixel, optimizing textures, details and noise in each part of the image. The feature thus comes in handy especially when shooting inside buildings or in medium lighting. It activates completely automatically and cannot be turned off; in practice, the user won't even know that Deep Fusion was active in a given situation.

The process of taking a photo is no different with Deep Fusion: the user simply presses the shutter button and waits a short while for the image to be created (similar to Smart HDR). Although the whole thing takes only about a second, the phone, or rather its processor, performs a number of complex operations in that time.

The whole process is as follows (see the sketch after the list):

  1. Before you even press the shutter button, three frames are captured in the background with a short exposure time.
  2. When the shutter button is then pressed, three more classic photos are taken in the background.
  3. Immediately afterwards, the phone takes one more photo with a long exposure to capture all the details.
  4. The three classic photos and the long-exposure photo are combined into a single image, which Apple calls a "synthetic long".
  5. Deep Fusion selects the single best short-exposure shot from the three captured before the shutter was pressed.
  6. The selected frame is then combined with the "synthetic long", so only two frames are merged.
  7. The merge itself runs as a four-step process: the image is built pixel by pixel, details are emphasized, and the A13 chip receives instructions on exactly how the two photos should be combined.
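
To make the sequence easier to follow, here is a minimal sketch of such a pipeline in Swift. Everything in it is hypothetical: the Frame type, the quality metric and the simple averaging are stand-ins for Apple's unpublished, Neural Engine-based processing.

```swift
// Hypothetical sketch of the Deep Fusion capture pipeline. None of these
// names come from Apple; the real processing runs on the Neural Engine
// and is far more sophisticated than the simple math used here.
struct Frame {
    var exposure: Double   // exposure time in seconds
    var pixels: [Double]   // simplified single-channel pixel values
    var sharpness: Double  // stand-in quality metric
}

// Stubbed capture; a real pipeline would read raw sensor buffers.
func captureFrame(exposure: Double) -> Frame {
    Frame(exposure: exposure,
          pixels: (0..<4).map { _ in Double.random(in: 0...1) },
          sharpness: Double.random(in: 0...1))
}

// Step 4: fold the three classic frames and the long exposure into one
// "synthetic long" frame (plain averaging stands in for the real merge).
func makeSyntheticLong(classic: [Frame], long: Frame) -> Frame {
    let all = classic + [long]
    let merged = (0..<long.pixels.count).map { i in
        all.map { $0.pixels[i] }.reduce(0, +) / Double(all.count)
    }
    return Frame(exposure: long.exposure, pixels: merged, sharpness: long.sharpness)
}

// Steps 6-7: merge the chosen short frame with the synthetic long, pixel by
// pixel. A 50/50 blend stands in for the four-step, detail-weighted process.
func mergePixelByPixel(_ short: Frame, _ long: Frame) -> Frame {
    Frame(exposure: long.exposure,
          pixels: zip(short.pixels, long.pixels).map { ($0 + $1) / 2 },
          sharpness: max(short.sharpness, long.sharpness))
}

// Steps 1-3: three short frames are buffered before the shutter press;
// the press itself adds three classic frames and one long exposure.
let shortFrames = (0..<3).map { _ in captureFrame(exposure: 1 / 1000) }
let classicFrames = (0..<3).map { _ in captureFrame(exposure: 1 / 60) }
let longFrame = captureFrame(exposure: 1 / 4)

// Step 4: build the synthetic long.
let syntheticLong = makeSyntheticLong(classic: classicFrames, long: longFrame)

// Step 5: pick the sharpest of the three pre-shutter short frames
// (force unwrap is safe, the array is never empty here).
let bestShort = shortFrames.max { $0.sharpness < $1.sharpness }!

// Steps 6-7: produce the final image.
let result = mergePixelByPixel(bestShort, syntheticLong)
print("Merged \(result.pixels.count) pixels into the final frame")
```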

Although the process is quite complex and may seem time-consuming, overall it takes only slightly longer than capturing an image with Smart HDR. Immediately after pressing the shutter button, the user is first shown a classic photo, but it is replaced shortly afterwards by the detailed Deep Fusion image.
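
Purely as an illustration, this swap maps onto a familiar asynchronous pattern: show the quick proxy immediately, then replace it when the heavier processing finishes. The names and the one-second delay below are assumptions, not Apple's code.

```swift
import Foundation

// Hypothetical sketch: display the quick classic photo right away and
// swap in the Deep Fusion result once background processing is done.
func presentPhoto(update: @escaping (String) -> Void) {
    update("classic proxy image")    // shown immediately after the shutter press
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
        update("Deep Fusion image")  // replaces the proxy about a second later
    }
}

presentPhoto { image in print("Displaying: \(image)") }
RunLoop.main.run(until: Date().addingTimeInterval(2))  // keep the demo script alive
```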

Samples of Apple's Deep Fusion (and Smart HDR) photos.

It should be noted that Deep Fusion will mainly benefit the telephoto lens, although the feature will also come in handy when shooting with the standard wide lens. In contrast, the new ultra-wide lens won't support Deep Fusion at all (it doesn't support night photography either) and will use Smart HDR instead.

The new iPhone 11 will thus offer three different modes that activate under different conditions. If the scene is bright enough, the phone uses Smart HDR. Deep Fusion kicks in when shooting indoors and in moderately low light. And as soon as you shoot in the evening or at night in low light, Night Mode takes over.
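
A compact way to picture this switching logic is a simple brightness threshold, sketched below in Swift. The 0-to-1 scale and the cutoff values are invented for illustration; Apple hasn't published the real criteria, and the ultra-wide lens never reaches the Deep Fusion branch at all.

```swift
enum CaptureMode {
    case smartHDR, deepFusion, nightMode
}

// Hypothetical selection by scene brightness on a made-up 0...1 scale
// (0 = dark, 1 = bright); the thresholds are illustrative guesses.
func selectMode(sceneBrightness: Double) -> CaptureMode {
    switch sceneBrightness {
    case ..<0.2: return .nightMode   // evening or night, very low light
    case ..<0.6: return .deepFusion  // indoors, moderately low light
    default:     return .smartHDR    // bright scenes
    }
}

print(selectMode(sceneBrightness: 0.8))  // smartHDR
print(selectMode(sceneBrightness: 0.4))  // deepFusion
print(selectMode(sceneBrightness: 0.1))  // nightMode
```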


source: The Verge
