The iPhone XS Max, iPhone XS and iPhone XR are truly fascinating devices. Featuring intelligent and dynamic digital signal processing, the latest generation iPhones can generate images that appear to be beyond the ability of the optics and sensor alone. Of course, that’s exactly the beauty and promise of computational imaging.
I don’t have any information about the camera or image processing beyond what is public. What follows are my own assumptions based on the behavior of the camera and observation of the recorded images.
Early iPhone XS Max Low Light Video
On September 21st, 2018, I arrived in Barcelona to join the FiLMiC team for some testing and brainstorming. The iPhone XS Max hit the shelves in Apple stores the same day. Of course, the FiLMiC team had pre-ordered one. For the next few days we tested the iPhone XS Max around Barcelona.
I shot the video below handheld on my last night in Barcelona with the FiLMiC team. I hadn’t planned to shoot anything that night, only to enjoy some drinks and good food with friends.
Of course, when I picked up the iPhone XS Max we had all been testing the previous days, I couldn’t put it down. So, I shot what was happening around me.
There are two versions in this video. The first is color graded using FilmConvert in DaVinci Resolve, with some of my own adjustments. In retrospect, I feel the look is a bit heavy-handed, and there are shots that aren’t correctly matched. The ungraded version follows, showing the original uncorrected clips as recorded with FiLMiC Pro. This may actually be of more interest and value.
More Than Meets The Eye
While this may have “just” been an “S” year, the significance and impact of Apple’s clear direction towards sophisticated real-time computational image processing should not be underestimated. Apple clearly believe that software is the future of the most popular camera in the world. I couldn’t agree more.
With the iPhone XS Max I was able to capture clean video in very low light conditions. Furthermore, the footage retained more color information in the shadows than footage from any previous iPhone I have shot with.
Here’s what I think is going on.
Dynamic Tone Mapping
Judging from the behavior of the camera in bright and normal lighting conditions, it appears that the luminance values of the recorded image are not determined by global gain (ISO) or a fixed gamma transform.
Something else is at play, and it is dynamic. I believe that luminance values are determined according to a combination of variables linked to a real-time analysis of the scene. Furthermore, this algorithm may be making localized adjustments to different parts of the image. This would be extremely impressive if true. It may be manipulating color and saturation dynamically as well.
I am not sure how sophisticated this algorithm is. I can only observe the results, and infer the likely underlying mechanics.
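To make the idea concrete, here is a minimal sketch of what a locally adaptive tone map could look like. Apple’s actual pipeline is not public, so every function name and parameter below is my own illustration, not a description of their implementation.

```python
# Conceptual sketch only: a locally adaptive tone map of the kind I suspect
# is happening. Apple's pipeline is not public; every name and parameter
# here is my own illustration.
import numpy as np

def local_tone_map(luma, tile=64, target=0.45, strength=0.6):
    """Adjust each region of the frame toward a target mean luminance.

    luma: 2D array of linear luminance values in [0, 1].
    tile: size of the local analysis window in pixels.
    target: mid-grey level each region is pulled toward.
    strength: 0 = no adjustment (global look), 1 = fully local.
    """
    h, w = luma.shape
    gains = np.ones_like(luma)

    for y in range(0, h, tile):
        for x in range(0, w, tile):
            region = luma[y:y + tile, x:x + tile]
            mean = max(region.mean(), 1e-4)           # avoid divide-by-zero
            local_gain = (target / mean) ** strength  # push region toward target
            gains[y:y + tile, x:x + tile] = local_gain

    # A real implementation would smooth the gain map to avoid tile seams.
    return np.clip(luma * gains, 0.0, 1.0)

# Example: a frame that is dark on the left and bright on the right
frame = np.concatenate(
    [np.full((128, 128), 0.05), np.full((128, 128), 0.7)], axis=1)
mapped = local_tone_map(frame)
print(mapped[:, :128].mean(), mapped[:, 128:].mean())  # both pulled toward mid-grey
```

The key point is that the mapping from scene luminance to recorded value would depend on the content of each region rather than on a fixed curve, which is what I suspect I am seeing in the footage.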
Noise Reduction
It goes without saying that Apple have implemented noise reduction, and noise reduction usually results in some very obvious artifacts. Yet the artifacts I would expect to see are not apparent. I need to spend more time pixel peeping images captured in more varied conditions.
Whatever combination of spatial and temporal noise reduction is at work, it’s very good, and probably applied quite early in the signal chain. As with dynamic tone mapping, the application of noise reduction may be localized.
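For illustration, a toy combination of the two mechanisms I suspect are at work might look something like the sketch below. Again, this is my own guesswork, not Apple’s algorithm, and the parameters are arbitrary.

```python
# Illustrative sketch of spatial + temporal noise reduction applied early in
# the chain. This is not Apple's algorithm; it only shows the two mechanisms
# I suspect are combined.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(frames, spatial_sigma=1.0, temporal_blend=0.5):
    """frames: list of 2D luminance arrays from consecutive video frames."""
    out = []
    previous = None
    for frame in frames:
        # Spatial NR: suppress per-pixel noise within the frame.
        spatial = gaussian_filter(frame, sigma=spatial_sigma)
        # Temporal NR: blend with the previous denoised frame.
        # A real pipeline would motion-compensate before blending.
        if previous is None:
            result = spatial
        else:
            result = temporal_blend * previous + (1 - temporal_blend) * spatial
        out.append(result)
        previous = result
    return out

# Example: three noisy frames of the same static scene
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.2)
noisy = [clean + rng.normal(0, 0.05, clean.shape) for _ in range(3)]
denoised = denoise(noisy)
print(noisy[-1].std(), denoised[-1].std())  # noise level drops
```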
Apple have incorporated a larger sensor with larger photosites. I speculate there could also be architectural innovations at the chip level that help make all of this possible.
Better Imaging for Consumers. Challenges for Professionals
I am an active and vocal proponent of the kind of computational imaging technology Apple have clearly employed, but the way they have employed it has brought some consequences I didn’t expect.
I expected the target output of any image processing chain (augmented computationally or otherwise) to maintain a relationship between real-world scene values and recorded image values. This has been the target of all photographic technologies and methods since the first latent images were captured.
Instead, Apple have prioritized automatically generating an output that is likely to look good to the average viewer. This comes at the expense of the relationship between recorded image values and real-world scene values.
The consequence is that conventional post-production processes which depend on consistent, accurate, scene-referred source image information can no longer be applied in the same way.
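A toy example (my own arithmetic, not Apple’s math) shows why this matters in post. With a fixed transform the same subject records to the same value in every shot, so a single correction can be trusted across a scene. With an adaptive transform the recorded value depends on what else is in the frame, and no single inverse transform can recover consistent scene values.

```python
# Toy illustration (my own, not Apple's math) of why an adaptive tone map
# breaks scene-referred workflows: the same scene luminance records to
# different values depending on what else is in the frame.

def fixed_encode(scene_value, gamma=1.0 / 2.2):
    # Fixed transform: identical scene values always record identically.
    return scene_value ** gamma

def adaptive_encode(scene_value, frame_mean, target=0.45):
    # Adaptive transform: gain depends on the rest of the frame.
    gain = target / frame_mean
    return min(scene_value * gain, 1.0) ** (1.0 / 2.2)

face = 0.18  # the same mid-grey subject appearing in two different shots

print(fixed_encode(face), fixed_encode(face))       # identical in both shots
print(adaptive_encode(face, frame_mean=0.05),       # dark alley shot
      adaptive_encode(face, frame_mean=0.40))       # bright cafe shot
```

That inconsistency between shots is precisely what a conventional grading workflow is not designed to absorb.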
As the iPhone is a consumer device, of course this makes sense. Most users are not technical, do not take their video into post for color correction, and want to achieve the best-looking result automatically from the camera. With the type of dynamic tone mapping I am observing, it’s obvious this is exactly what Apple have prioritized.
The Next Steps
I will share more insights about the iPhone XS Max, iPhone XS and iPhone XR for filmmakers as testing continues. A color-managed workflow is required to neutralize the inconsistencies introduced by Apple’s dynamic tone mapping. You’ll find more information right here on my website, and on my YouTube channel.
iPhone XS Max Low Light Video Still Frames
Check out a few color-graded still frames. Some of these shots didn’t make it into the video for various reasons but work well as stills.