Indeed, of the four biggest changes this year, the 48MP sensor is the least important to me. But please bear with me, as there's a lot to unpack before I can explain why I think the 48MP sensor matters much less than:
- Sensor size
- Pixel binning
- Photonic Engine
One 48MP sensor, two 12MP
Colloquially, we talk about the iPhone camera, singular, and then refer to three different lenses: main, ultra-wide, and telephoto. We do it because it’s familiar – that’s how digital SLRs and mirrorless cameras work, with one sensor and multiple (interchangeable) lenses – and because it’s an illusion Apple creates in the camera app for the sake of simplicity.
The reality, of course, is different. The iPhone actually has three camera modules. Each camera module is separate, and each has its own sensor. When you tap, say, the 3x button, you not only select the telephoto lens but switch to a different sensor. As you move the zoom, the camera app automatically and invisibly selects the appropriate camera module and then performs any necessary cropping.
Only the main camera module has a 48MP sensor; the other two modules are still 12MP.
Apple has been completely open about this since the new models were introduced, but it’s an important detail that some may have overlooked (emphasis ours):
For the first time ever, the Pro line includes a new 48MP main camera with a quad-pixel sensor that adapts to the photo being captured, and features second-generation sensor-shift optical image stabilization.
The 48MP sensor works part-time
Even when you are using the main camera with its 48MP sensor, you still take 12MP photos by default. Apple again:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
You only take 48-megapixel photos when:
- You are using the main camera (not telephoto or wide angle)
- You shoot in ProRAW (off by default)
- You shoot in decent light
If you want to do that, here’s how. But most of the time, you won’t …
Apple’s approach makes sense
You may ask: why give us a 48MP sensor and then mostly not use it?
Apple’s approach makes sense because there really are very few occasions when shooting at 48MP is better than shooting at 12MP. And since 48MP shots produce much larger files, gobbling up storage with a voracious appetite, it doesn’t make sense for that to be the default.
I can only think of two scenarios where taking a 48MP photo is useful:
- You are going to print the photo in a large size
- You need to crop the image very hard
The latter reason is also a bit questionable, because if you need to crop that aggressively, you might be better off using the 3x camera.
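The arithmetic behind that trade-off is simple. The 48MP, 12MP, and 3x figures come from the article itself; the calculation below is just an illustration:

```python
# Cropping a 48MP main-camera photo down to the telephoto's 3x field of view
# keeps 1/3 of the width and 1/3 of the height, i.e. 1/9 of the pixels.
main_mp = 48
zoom_factor = 3  # linear magnification of the telephoto camera

cropped_mp = main_mp / zoom_factor ** 2
print(f"48MP cropped to a 3x view: {cropped_mp:.1f}MP")  # about 5.3MP

# The native 3x telephoto delivers its full 12MP over that same field of view.
telephoto_mp = 12
print(f"Native 3x telephoto: {telephoto_mp}MP")
```

In other words, a hard 3x crop from the 48MP sensor leaves fewer pixels than the telephoto module captures natively.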
Now let’s talk about sensor size
When comparing any smartphone camera to a DSLR or a high-quality mirrorless camera, there are two big differences.
One of them is the quality of the lenses. Standalone cameras can have much better lenses, both in terms of physical size and cost. It is not uncommon for a professional or avid amateur photographer to spend four figures on a single lens. Smartphone cameras, of course, cannot compete with this.
The second is the size of the sensor. All other things being equal, the larger the sensor, the better the image quality. Smartphones, given their size and all the other technology they have to fit in, have much smaller sensors than standalone cameras. (They are also limited in depth, which places another significant constraint on sensor size, but we don’t need to worry about that here.)
The smartphone’s sensor size limits image quality and also makes it difficult to achieve a shallow depth of field – which is why the iPhone does it artificially, with Portrait mode and Cinematic mode.
Apple’s large-sensor, limited-megapixel approach
While there are obvious and less obvious limits on the size of sensor that can be used in a smartphone, Apple has long used larger sensors than other smartphone brands – which is one reason the iPhone has long been seen as the phone of choice in terms of camera quality. (Samsung later started doing this as well.)
But there is a second reason. If you want the best possible photo quality from your smartphone, you want the pixels to be as big as possible.
That’s why Apple religiously stuck to 12MP while brands like Samsung stuffed a whopping 108MP into a sensor of the same size. Squeezing that many pixels into a tiny sensor significantly increases noise, which is especially noticeable in low-light photos.
Ok, it took me a while to get there, but now I can finally tell you why I think the bigger sensor, pixel binning, and the Photonic Engine matter much more than the 48MP sensor …
#1: The iPhone 14 Pro/Max sensor is 65% larger
This year, the main camera sensor in the iPhone 14 Pro/Max is 65% larger than that of last year’s model. It’s still nothing compared to a standalone camera, of course, but for a smartphone camera it’s (pun intended) huge!
But, as mentioned above, if Apple were to cram four times as many pixels into a sensor that is only 65% larger, the result would actually be worse quality! That’s why you’ll mostly still take 12MP photos. And that’s thanks to …
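A quick sanity check of that claim, using only the numbers above (a 65% larger sensor, four times the pixels); the calculation is mine, not Apple’s:

```python
# If sensor area grows by 65% but pixel count quadruples,
# each individual pixel gets a smaller share of the sensor.
sensor_area_ratio = 1.65  # iPhone 14 Pro/Max main sensor vs last year's
pixel_count_ratio = 4     # 48MP vs 12MP

pixel_area_ratio = sensor_area_ratio / pixel_count_ratio
print(f"Each 48MP pixel has {pixel_area_ratio:.0%} of last year's pixel area")
# Roughly 41% - smaller pixels gather less light, hence more noise.
```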
#2: Pixel binning
To take 12MP photos with the main camera, Apple uses a technique called pixel binning. Data from four adjacent pixels is combined into one virtual pixel (by averaging their values), so the 48MP sensor is most often used as a 12MP sensor with much larger pixels.
This illustration is simplified but gives the basic idea:
What does this mean in practice? Pixel size is measured in microns (millionths of a meter). Most premium Android smartphones have pixels somewhere in the range of 1.1 to 1.8 microns. The iPhone 14 Pro/Max effectively has 2.44-micron pixels when using the sensor in 12MP mode. That is a really significant improvement.
Without pixel binning, the 48MP sensor would be a downgrade most of the time.
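As a minimal sketch of what 2x2 binning does to the data (using NumPy on a toy array; this illustrates the general technique, not Apple’s actual pipeline):

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one larger virtual pixel."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy 8x8 "high-resolution" readout becomes a 4x4 "binned" image:
raw = np.arange(64, dtype=float).reshape(8, 8)
binned = bin_2x2(raw)
print(raw.shape, "->", binned.shape)  # (8, 8) -> (4, 4)

# Each binned value is the mean of four neighbors, and the effective
# pixel pitch doubles: e.g. 1.22-micron pixels bin to a 2.44-micron pitch.
print(binned[0, 0])  # mean of 0, 1, 8, 9 = 4.5
```

The resolution drops by a factor of four, but each virtual pixel now represents four times the light-gathering area.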
#3: The Photonic Engine
We know that smartphone cameras obviously cannot compete with standalone cameras on optics and physics, but they can compete on computational photography.
Computational photography has been used in SLR cameras for literally decades. For example, switching metering modes means instructing the computer inside the DSLR to interpret the raw sensor data differently. Similarly, on consumer digital SLRs and all mirrorless cameras, you can choose from a variety of shooting modes that instruct the processor to adjust the sensor data to achieve the desired effect.
So computational photography already plays a much bigger role in standalone cameras than many realize. And Apple is very, very good at computational photography. (Ok, it’s not good at Cinematic mode yet, but give it a few years …)
The Photonic Engine is a dedicated chip that drives Apple’s Deep Fusion approach to computational photography, and I can already see a huge difference in dynamic range in photos. (Examples to follow in the iPhone 14 diary next week.) It’s not just the raw range, but the smart decisions about which shadows to lift and which highlights to tame.
The result is much better photos that have as much to do with software as they do with hardware.
The dramatically larger sensor (in terms of smartphones) is a really big deal when it comes to image quality.
Pixel binning means Apple has effectively created a 12MP sensor with much larger pixels for most photos, allowing the benefits of the larger sensor to be exploited.
Photonic Engine is a dedicated image processing chip. I can already see the real benefits of this.
More to follow in the iPhone 14 diary as I put the camera to a more detailed test over the next few days.
Check out 9to5Mac on YouTube for more Apple news: