I was fortunate to test the iPhone XS Max with the FiLMiC Pro team in Barcelona. Here’s an iPhone XS Max low light video test and my initial thoughts.
The iPhone XS Max and its brethren, the XS and XR, are fascinating devices. Employing intelligent and dynamic digital signal processing, the latest generation iPhones can generate images that seem to be beyond the ability of the optics and sensor alone. Of course, that’s exactly the beauty and promise of computational imaging.
I don’t know anything that Apple are doing in terms of the camera beyond the general information that is public or was presented in the keynote. So these are my own assumptions and speculation based on what I am seeing in the recorded images. I could be on the right track, or tragically wrong.
If any Apple engineer happens to come across this article and wants to reach out personally and talk shop, really, you’ve got my head spinning trying to figure out what kind of voodoo you’re pulling off in this phone.
Of course, I know that conversation is not going to happen; commercial secrets are secrets for a reason.
iPhone XS Max Low Light Video
The video below was shot off the cuff, all handheld on my last night in Barcelona with the FiLMiC team. I wasn’t planning to shoot anything that night, and left my own beloved iPhone 7 Plus, Moondog Labs anamorphic lens and gimbal behind.
It just so happens I was handed the iPhone XS Max we had all been testing the previous days, and so I just shot what was happening around me. There are two versions of the video: the first is color graded with FilmConvert (plus my own adjustments) in DaVinci Resolve. In retrospect I feel the look is a little heavy handed, and there are a couple of shots that aren’t 100% matched, but it’s too late now. The ungraded version follows, showing the original uncorrected clips as recorded with FiLMiC Pro.
More Than Meets The Eye
The iPhone XS Max, XS and XR camera system seems to generate images that are beyond the sum of its parts.
While this may have “just” been an “S” year, the significance and impact of Apple’s clear direction towards sophisticated real time computational image processing should not be underestimated. Apple clearly believe that software is the future of the most popular camera in the world, and I couldn’t agree more.
With the iPhone XS Max I was able to capture clean video in very low light conditions. Not only was it clean, it had more color information in dark parts of the image than any previous generation iPhone I have shot with.
Here’s what I think is going on.
Dynamic Tone Mapping
Judging from the behavior of the camera in bright and normal lighting conditions, it appears that the recorded luminance values are not purely determined by a fixed gain value (ISO) or a fixed gamma transform, as would be the case with a “traditional” (I’ll call it “dumb”) camera.
Something else is at play, and it is dynamic, changing according to some combination of variables linked to a real time analysis of the scene. It may even be making separate localised adjustments to different parts of the image, which would be extremely impressive if true.
Exactly how sophisticated this all is, I am not sure. At this point I can only observe the results; I don’t yet understand the mechanics, but I am keen to perform some objective tests to try and determine exactly what is going on.
Some form of fairly aggressive dynamic tone mapping is definitely occurring, probably affecting not only encoded luminance values but color as well.
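To make the distinction concrete, here’s a minimal numerical sketch of the difference between a fixed gamma transform and a scene-dependent tone map. The adaptive curve and the “scene analysis” below are entirely my own invention for illustration; I have no knowledge of what Apple actually computes.

```python
import numpy as np

def fixed_gamma(linear, gamma=2.2):
    """A 'dumb' camera: one fixed transform, the same for every scene."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def dynamic_tone_map(linear):
    """A toy scene-adaptive curve (my own sketch, not Apple's pipeline):
    the darker the scene average, the harder the shadows are lifted."""
    strength = np.interp(linear.mean(), [0.05, 0.5], [4.0, 1.0])
    return (strength * linear) / (1.0 + strength * linear)  # Reinhard-style knee

dark_scene = np.array([0.01, 0.05, 0.2])   # mostly shadows
bright_scene = np.array([0.2, 0.5, 0.8])   # well lit

# With the fixed curve, a scene value of 0.2 always encodes identically.
# With the dynamic curve, the same 0.2 comes out brighter in the dark scene,
# because the mapping depends on an analysis of the whole frame.
print(fixed_gamma(dark_scene)[2], fixed_gamma(bright_scene)[0])
print(dynamic_tone_map(dark_scene)[2], dynamic_tone_map(bright_scene)[0])
```

The key point is that with the dynamic curve, identical scene values no longer map to identical encoded values; the output depends on context.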
iPhone XS Video Noise Reduction
It goes without saying that Apple have implemented some noise reduction. I see only a hint of the normal telltale signs, but not enough to be sure exactly how noise reduction is being implemented. I need to spend more time pixel peeping images captured in more varied conditions. Noise reduction usually results in some very obvious artifacts, and the fact it isn’t obvious is actually very good.
Whatever combination of spatial and temporal analysis is at work, it’s very very good, and probably applied quite early in the signal chain. I don’t think it’s being applied globally to the whole image either. It seems like it could be localised to just the areas of the image that will benefit.
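As a toy illustration of why temporal analysis is so effective, averaging N aligned frames reduces random sensor noise by roughly the square root of N. The snippet below simulates that on synthetic data; it ignores motion compensation and everything else a real pipeline must handle, so treat it as a sketch of the principle only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated low-light capture: a flat grey patch plus random sensor
# noise, recorded over 8 consecutive (perfectly aligned) frames.
true_value = 0.1
frames = true_value + rng.normal(0.0, 0.05, size=(8, 64, 64))

# Temporal NR: average across frames. With N frames of independent
# noise, the noise standard deviation falls by roughly sqrt(N).
temporal = frames.mean(axis=0)

print(frames[0].std())   # noise in a single frame (~0.05)
print(temporal.std())    # noise after averaging (~0.05 / sqrt(8))
```

A real implementation would have to align frames against camera and subject motion before averaging, which is where the heavy spatial analysis comes in.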
Is the image processing really sophisticated enough to make contextually aware, localised adjustments and tailor processing to specific parts of the image dynamically in real time? I could be overthinking this; maybe the truth is much simpler. Who knows. Only an Apple engineer involved would be able to confirm or deny anything.
What surprises me the most is that low light areas of the image where I would normally expect a lack of detail and texture actually retain decent detail and texture. Go figure. I’m not quite sure how they are pulling this off.
To be totally honest, I think a lot of the apparent success on the computational side of things is down to having better image data to begin with. I think the system is getting better images from the sensor. It’s a larger sensor with larger photosites, but I speculate there could even be some other architectural innovations at the chip level that help make all of this possible.
I really have no concrete data to base any of these guesses on. I’m just describing what I would consider as possible and feasible methods to achieve the results I am seeing given the combination of increased processing power, GPU capability and neural engine.
As an active and vocal proponent of this computational future, I really didn’t expect some of the consequences the new iPhone XS generation’s image processing has brought.
I expected the target output of any image processing chain (augmented computationally or otherwise) to remain an objectively sampled image, where the relationship between real-world values and encoded values was determined by some fixed, fundamental transform curve, such as a typical gamma curve.
Instead it seems Apple have prioritised generating an output that looks good at the expense of an objective “scene referred” relationship of encoded data to real-world scene values.
For a consumer device, perhaps this makes sense, as most users are not technical, do not take their video into post for color correction and want to achieve the best looking result automatically from the camera. With the type of dynamic tone mapping I am observing, this is exactly what they are able to achieve.
The Next Steps
I definitely want to spend more time with the iPhone XS Max. I don’t yet know how to get the absolute best from it, and I feel the one video I’ve shot is far from that. The camera has more to give and I want to find the limits.
The image behaves very differently to previous iPhones in an app providing full manual control such as FiLMiC Pro. This is all down to the very active and dynamic image processing. It may not be possible for camera apps to lock or control this behavior entirely, or at all. I can tell after a few hours of use that the iPhone XS Max requires a very different approach when using FiLMiC Pro, and of course color correction.
I don’t see much reason to shoot with the flat or log gamma profiles. In fact I think the best results could very well come from letting the iPhone do its own thing. It may be doing a better job of maximizing recorded dynamic range through its own intelligent tone mapping than can be achieved manually.
As a complete control freak, it was interesting to see the iPhone camera system fight back against some of my attempts to lock it down. It attempts to automatically compensate for FiLMiC’s custom gamma curves and exposure control. I have a feeling the iPhone XS Max wants me to just let go, and to be honest there’s something quite liberating in that, which I’m keen to explore further.
More time shooting and color grading is needed.
Watch this space.
iPhone XS Max Low Light Video Still Frames
Check out a few color graded still frames. A few of these shots didn’t make it into the video for various reasons but work well as stills.