I was fortunate to test the iPhone XS Max with the FiLMiC Pro team in Barcelona. Here’s an iPhone XS Max low light video test and my initial thoughts.

The iPhone XS Max and its brethren, the XS and XR, are fascinating devices. Employing intelligent and dynamic digital signal processing, the latest generation iPhones can generate images that seem to be beyond the ability of the optics and sensor alone. Of course, that’s exactly the beauty and promise of computational imaging.

I don’t know anything that Apple are doing in terms of the camera beyond the general information that is public or was presented in the keynote. So these are my own assumptions and speculation based on what I am seeing in the recorded images. I could be on the right track, or tragically wrong.

If any Apple engineer happens to come across this article and wants to reach out personally and talk shop, really, you’ve got my head spinning trying to figure out what kind of voodoo you’re pulling off in this phone.

Of course, I know that conversation is not going to happen; commercial secrets are secrets for a reason.

iPhone XS Max Low Light Video

The video below was shot off the cuff, all handheld on my last night in Barcelona with the FiLMiC team. I wasn’t planning to shoot anything that night, and left my own beloved iPhone 7 Plus, Moondog Labs anamorphic lens and gimbal behind.

It just so happens I was handed the iPhone XS Max we had all been testing the previous days, and so I just shot what was happening around me. There are two versions of the video. The first is color graded with FilmConvert (plus my own adjustments) in DaVinci Resolve. In retrospect I feel the look is a little heavy-handed, and there are a couple of shots that aren’t 100% matched, but it’s too late now. The ungraded version follows, which shows the original uncorrected video clips as recorded with FiLMiC Pro.

More Than Meets The Eye

The iPhone XS Max, XS and XR camera system seems to generate images that are beyond the sum of its parts.

While this may have “just” been an “S” year, the significance and impact of Apple’s clear direction towards sophisticated real time computational image processing should not be underestimated. Apple clearly believe that software is the future of the most popular camera in the world, and I couldn’t agree more.

With the iPhone XS Max I was able to capture clean video in very low light conditions. Not only was it clean, it had more color information in dark parts of the image than any previous generation iPhone I have shot with.

Here’s what I think is going on.

Dynamic Tone Mapping

Judging from the behavior of the camera in bright and normal lighting conditions, it appears that the luminance values recorded are not purely determined by a fixed gain value (ISO) or a fixed gamma transform, as would be the case with a “traditional” (I’ll call it “dumb”) camera.

Something else is at play, and it is dynamic, changing according to some combination of variables linked to a real time analysis of the scene. It may even be making separate localised adjustments to different parts of the image, which would be extremely impressive if true.

Exactly how sophisticated this all is, I am not sure. At this point I can only observe the results; I don’t yet understand the mechanics, but I am keen to perform some objective tests to try and determine exactly what is going on.

Some form of fairly aggressive dynamic tone mapping is definitely occurring, probably not only affecting encoded luminance values, but color as well.
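To illustrate the difference I’m describing, here’s a purely conceptual sketch in Swift. It is not Apple’s pipeline, and the heuristic is entirely made up; it just shows how a scene-adaptive mapping differs from a fixed gamma transform, with the same pixel value ending up encoded differently depending on an analysis of the rest of the frame.

```swift
import Foundation

// Conceptual sketch only: NOT Apple's pipeline, just an illustration of the
// difference between a fixed transform and a scene-adaptive one.

// A "dumb" camera applies the same transfer curve to every pixel of every frame.
func fixedGammaEncode(_ linear: Double, gamma: Double = 1.0 / 2.2) -> Double {
    return pow(max(0, min(1, linear)), gamma)
}

// A scene-adaptive mapper first analyses the frame, then decides how strongly
// to lift shadows based on how dark the scene actually is (hypothetical heuristic).
func adaptiveToneMap(frame: [Double]) -> [Double] {
    let mean = frame.reduce(0, +) / Double(frame.count)
    let shadowLift = max(0, 0.18 - mean) * 2.0          // darker scene -> stronger lift
    return frame.map { linear in
        let lifted = linear + shadowLift * (1.0 - linear) // lift shadows, protect highlights
        return fixedGammaEncode(lifted)
    }
}

// The same dark pixel encodes differently depending on the rest of the scene.
let nightScene: [Double] = [0.02, 0.03, 0.05, 0.30]   // mostly dark frame
let dayScene: [Double]   = [0.02, 0.40, 0.60, 0.90]   // same dark pixel, brighter context
print(adaptiveToneMap(frame: nightScene)[0])           // dark pixel lifted noticeably
print(adaptiveToneMap(frame: dayScene)[0])             // same pixel left close to fixed gamma
```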

iPhone XS Video Noise Reduction

It goes without saying that Apple have implemented some noise reduction. I see only a hint of the normal telltale signs, but not enough to be sure exactly how noise reduction is being implemented. I need to spend more time pixel peeping images captured in more varied conditions. Noise reduction usually results in some very obvious artifacts, and the fact it isn’t obvious is actually very good.

Whatever combination of spatial and temporal analysis is at work, it’s very very good, and probably applied quite early in the signal chain. I don’t think it’s being applied globally to the whole image either. It seems like it could be localised to just the areas of the image that will benefit.
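As a purely illustrative sketch of what I mean by “localised”, and very much my own assumption rather than Apple’s actual method, a temporal blend could gate itself to dark, flat, stable regions and leave moving or detailed areas untouched:

```swift
import Foundation

// Illustrative sketch of localised temporal noise reduction, not Apple's method.
// Blend the current frame towards the previous frame, but only where the pixel
// is dark and stable, i.e. where noise is visible and detail loss matters least.
func temporalDenoise(current: [Double], previous: [Double],
                     strength: Double = 0.5) -> [Double] {
    precondition(current.count == previous.count)
    return zip(current, previous).map { cur, prev in
        let difference = abs(cur - prev)
        // Hypothetical gating: motion or bright detail -> leave the pixel alone.
        let isDarkAndStable = cur < 0.2 && difference < 0.05
        let blend = isDarkAndStable ? strength : 0.0
        return cur * (1.0 - blend) + prev * blend
    }
}

let previousFrame = [0.10, 0.12, 0.80, 0.11]
let currentFrame  = [0.12, 0.10, 0.82, 0.50]   // last pixel changed a lot (motion)
print(temporalDenoise(current: currentFrame, previous: previousFrame))
// Dark, stable pixels get averaged; the bright pixel and the moving pixel pass through untouched.
```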

Is the image processing really sophisticated enough to make contextually aware, localised adjustments and tailor processing to specific parts of the image dynamically in real time? I could be overthinking this; maybe the truth is much simpler. Who knows. Only an Apple engineer involved would be able to confirm or deny anything.

What surprises me the most is that areas of the image in low light where I would normally expect a lack of detail and texture actually hold decent detail and texture. Go figure. I’m not quite sure how they are pulling this off.

Sensor Improvements

To be totally honest, I think a lot of the apparent success on the computational side of things is down to having better image data to begin with. I think the system is getting better images from the sensor. It’s a larger sensor with larger photosites, but I speculate there could even be some other architectural innovations at the chip level that help make all of this possible.

I really have no concrete data to base any of these guesses on. I’m just describing what I would consider possible and feasible methods of achieving the results I am seeing, given the combination of increased processing power, GPU capability and the neural engine.

Unforeseen Consequences

As an active and vocal proponent of this computational future, I really didn’t expect some of the consequences the new iPhone XS generation’s image processing has brought.

I expected the target output of any image processing chain (augmented computationally or otherwise) to remain an objectively sampled image where the relationship between real-world values and encoded values was determined by some fixed, fundamental transform curve, such as a typical gamma curve.
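For reference, this is the kind of fixed transform I mean. The standard Rec.709 transfer function, for example, maps a given scene luminance to the same encoded value every time, no matter what else is in the frame:

```swift
import Foundation

// The fixed relationship I expected: the same scene luminance always maps to the
// same encoded value, regardless of the rest of the scene.
// The Rec.709 OETF is one example of such a fixed transform.
func rec709Encode(_ sceneLinear: Double) -> Double {
    let l = max(0, min(1, sceneLinear))
    return l < 0.018 ? 4.5 * l : 1.099 * pow(l, 0.45) - 0.099
}

// With a fixed curve you can always reason backwards from encoded values to
// scene values in post. With a dynamic, scene-dependent mapping you cannot.
print(rec709Encode(0.18))   // 18% grey always encodes to the same value (~0.41)
```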

Instead it seems Apple have prioritised generating an output that looks good at the expense of an objective “scene referred” relationship of encoded data to real-world scene values.

For a consumer device, perhaps this makes sense, as most users are not technical, do not take their video into post for color correction and want to achieve the best looking result automatically from the camera. With the type of dynamic tone mapping I am observing, this is exactly what they are able to achieve.

The Next Steps

I definitely want to spend more time with the iPhone XS Max. I don’t yet know how to get the absolute best from it, and I feel the one video I’ve shot is far from that. The camera has more to give and I want to find the limits.

The image behaves very differently to previous iPhones in an app providing full manual control such as FiLMiC Pro. This is all down to the very active and dynamic image processing. It may not be possible for camera apps to lock or control this behavior entirely, or at all. I can tell after a few hours of use that the iPhone XS Max requires a very different approach when using FiLMiC Pro, and of course when color correcting.
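For context, the manual controls a third-party app actually has are the standard AVFoundation ones: shutter duration, ISO and white balance. The sketch below is not FiLMiC Pro’s code, just the generic AVCaptureDevice calls; my working assumption is that the new tone mapping happens downstream of these locked values, which would explain why the image still shifts even when they are pinned.

```swift
import AVFoundation

// Minimal sketch of the manual controls a third-party camera app has via AVFoundation.
// Not FiLMiC Pro's actual code. My assumption is that the dynamic tone mapping
// operates downstream of (and despite) these locked values.
func lockExposure(on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // Lock shutter to 1/48s and ISO to the sensor's minimum for the active format.
        // (A production app would first check isExposureModeSupported(.custom).)
        let shutter = CMTime(value: 1, timescale: 48)
        let minISO = device.activeFormat.minISO
        device.setExposureModeCustom(duration: shutter, iso: minISO, completionHandler: nil)
        // White balance can also be locked to its current gains.
        device.whiteBalanceMode = .locked
        device.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}
```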

I don’t see much reason to shoot with the flat or log gamma profiles. In fact I think the best results could very well come from letting the iPhone do its own thing. It may be doing a better job of maximizing recorded dynamic range through its own intelligent tone mapping than can be achieved manually.

As a complete control freak, it was interesting to see the iPhone camera system fight back against some of my attempts to lock it down. It attempts to automatically compensate for FiLMiC’s custom gamma curves and exposure control. I have a feeling the iPhone XS Max wants me to just let go, and to be honest there’s something quite liberating in that, which I’m keen to explore further.

More time shooting and color grading is needed.

Watch this space.

iPhone XS Max Low Light Video Still Frames

Check out a few color graded still frames. A few of these shots didn’t make it into the video for various reasons but work well as stills.

Comments

  5. Richard, which of the two versions demonstrated looks more like what you were actually seeing with your own eyes? That kind of additional information would be most helpful.

    • Hi Jeff, definitely the ungraded version is closer although all the shots needed some technical correction to balance by the scopes before any kind of stylistic grade. I’m now regretting the grade a bit. I should have just gone with a basic color correction, matched everything and left it at that.

  2. Alexander Lüthi

    Great post! What profile were you shooting in? Did you have the noise reduction turned on in FiLMiC Pro? I get way more noise when recording in FiLMiC Pro on my XS…

    • Hi Alex, this was in FiLMiC flat profile, with no noise reduction turned on in FiLMiC. I kept noise to a minimum just by making sure I wasn’t underexposing in the first place.
      In low light I always want to protect highlights, so I make sure bright things like street lights, signs etc. are not clipping, and I just let the mid tones and shadows fall where they fall. So for street scenes, this usually means I end up around 1/24th or 1/48th sec shutter speed at minimum ISO. This gives minimal noise, and my bright light sources are not over-exposed, retaining detail in and around the lights; however, the trade-off is that mid-tones and shadows usually get very dark.
      To be honest, I leave them darker than some people would, and I get some criticism for it, but it’s the way I do low light. I’d rather work with the bright spots, look for areas where bright specular light sources reflect in surfaces or windows, or throw pools of light on walls or pavements, and let the rest just be black. For me, and the style I like, that approach is exposed just fine, but for others it would be considered too dark. They want more shadow and mid-tone detail, but that means increasing ISO, which means noise, plus over-exposing bright light sources, which screams “phone video”. It’s always going to be a compromise until we get much higher dynamic range out of phone cameras.
      A professional approach to these situations, with any camera that can’t see in the dark, is to light night-time exteriors: you bring in big light sources, diffuse them over a large area and raise the ambient light level. This decreases the lighting ratio, or difference in brightness between shadows in the scene and the brightest areas, such as interior-lit windows, signage, street lights etc. However, most of us don’t have access to a truck full of HMI lights to light a night-time intersection like a Hollywood film set.

      • I see. I’m pretty sure I was shooting in flat as well, with ISO under 50, but I still got quite some noise in the shadows when shooting outside, daytime, cloudy… Maybe I was shooting LOG. Well well…
        I totally agree with you regarding sacrificing some detail in the darks and mids in exchange for low noise and a more filmic look.
        I think you have a very interesting point which is worth some more discussion: the thing you mention about computational photography. I too feel like the software is fighting against me when trying to tweak the settings to my preferences, which makes me wonder whether it’s such a bad thing to let go of the manual controls. Sure, I would like to be able to shoot in a flat profile to make the colour grading easier, but I can live without it if I get better exposed footage. What I can’t live without, though, is the ability to change the shutter speed. Too fast a shutter speed screams “phone video” just as much as blown-out highlights and noisy shadows, I think. The problem is that the stock camera app doesn’t let me change the shutter speed, so I need to go to third party apps (FiLMiC Pro and Moment are my favourites). But then I’m afraid I don’t get all the photo magic enabled by the A12 chip, like Smart HDR in 24 and 30fps… Is the computational part of the photograph/video made “inside” the sensor, or in the camera app? This leaves me wondering when to use which app…
