A Question of Realism - The Fallacy of High Resolution
When I was a kid, there used to be an instructional television program on illustration hosted by the illustrator Yardley Jones. On one of the episodes, Mr. Jones invited Ben Wicks, a well-known cartoonist of the day. Anyway, I remember Mr. Jones commenting on the way Mr. Wicks typically illustrated hands and fingers. Essentially, they looked like banana bunches hanging off the cuff. You see, Mr. Wicks’ adopted style of illustration was highly simplified, in that he did away with all unnecessary details.
According to Mr. Wicks, if one were to include details like fingernails, knuckles, and skin folds in the illustration of a hand, then one must also include shoelaces on the shoes when illustrating a foot. Otherwise, the illustration will look unbalanced. Of course, there’s nothing intrinsically wrong with increased detail in any illustration. But at the same time, there’s nothing intrinsically right about it either. And therein lies the question in need of addressing: how much detail is needed to make a picture appear complete?
Speaking conventionally, one can argue that increased detail in a picture portrays greater realism. By that standard, greater realism from increased visual detail must invariably make a picture appear more complete. For this reason, the mainstream trends in consumer electronics have gravitated towards the manufacturing of higher resolution displays and capture devices. I mean, why not? The more resolution, the better. But in the pursuit of more resolution, I believe manufacturers have completely lost sight of what makes a picture complete.
I’m not going to name any names, because it’s not in my interest to out the elephant in the room. This is not to say that I am reluctant for fear of any negative repercussions against me. I mean, I am a selfish person. I really want this elephant to succeed - in the way the profitability of iPods and iPhones funded the development of more reliable Macs a decade earlier. But just because this elephant is not for me does not mean that others should not be footing the bill for better research and development.
Still, a sense of duty compels me to go against my better judgment. High resolution sensors make little sense to me. Over the last seven years, since the introduction of the Nikon D800, I have owned every high resolution camera ever made. And in those seven years, I rarely used any of them for more than a couple of weeks. The novelty of a high resolution sensor quickly fades, because under normal use there’s no reason for anyone to capture any image beyond 24 megapixels, or 12 megapixels, or even 6 megapixels.
If you’re one of those special individuals who relish seeing your image captures at high magnification on screen, then high resolution sensors are definitely for you. Or if you happen to print large format hardcopies, then high resolution sensors are also for you. But that isn’t the description of normal use. For most of us today, in the era of social media, almost all our photos are shared and viewed online at mobile or desktop resolution. So what exactly is the point of more resolution for everyone else?
High resolution offers the promise of more flexibility in image manipulation. Hence, if one wants to crop and downscale an image, more megapixels in documentation can better retain the definition of fine details - even at much tighter crops. So if one were to take a 47 megapixel image at the 28mm focal length, one has the wiggle room to zero in on - let us say - the 75mm equivalent, and still retain sufficient integrity in image definition for a conventional 1200px x 1800px image, despite discarding roughly 86% of sensor coverage.
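For the curious, the arithmetic behind that wiggle room is easy to verify. Below is a minimal sketch of the crop math in Python; the figures are illustrative assumptions (a 47 megapixel capture, and a field of view that scales roughly inversely with focal length), not measurements from any particular camera.

```python
# Rough crop arithmetic behind the wiggle room described above.
# All figures are illustrative assumptions, not measurements
# from any specific camera or sensor.

sensor_mp = 47_000_000      # assumed 47 MP capture
native_focal = 28.0         # capture focal length in mm
cropped_focal = 75.0        # desired equivalent focal length in mm

# Field of view scales roughly inversely with focal length, so the
# crop keeps native/cropped of each linear dimension of the frame.
linear_fraction = native_focal / cropped_focal
area_fraction = linear_fraction ** 2

remaining_px = sensor_mp * area_fraction
target_px = 1200 * 1800     # a conventional on-screen image

print(f"Frame area kept:      {area_fraction:.1%}")          # ~13.9%
print(f"Frame area discarded: {1 - area_fraction:.1%}")      # ~86.1%
print(f"Pixels left in crop:  {remaining_px / 1e6:.1f} MP")  # ~6.6 MP
print(f"Pixels needed:        {target_px / 1e6:.2f} MP")     # 2.16 MP
```

Even after discarding roughly 86% of the frame, the crop still holds about three times the pixels a 1200px x 1800px image needs - which is precisely the wiggle room in question.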
What is there not to like about downscaling 47 megapixels to a little more than 2 megapixels for most on-screen use? To be fair, there is nothing inherently inappropriate about cropping or downscaling from the perspective of documentation. But high resolution sensors mask from plain view other material considerations. That is to say, when the overall standard of documentary optimization is stacked towards quantifiable factors like image resolution, falling for the distraction of large numbers blinds us to all else.
And it isn’t just the large numbers from high resolution. There are also high ISO, high dynamic range, and high color depth (for those who are truly initiated) - all technological innovations developed to ease and further the manipulability of digital image files in post-processing. That said, do any of these innovations actually improve the image capture in a fundamentally meaningful way? Thing is, the more we chase after these superlatives in numbers, the more we disconnect ourselves from the subtlety that completes an image capture.
The notion that these superlatives in manipulability are necessary to optimize the presentation of realism is misplaced. We really do not need to be hit over the head with the sledgehammer of increased resolution, ISO, or dynamic range in order to find an image capture convincing of real life. What may not be immediately clear to those who have been won over by the mainstream marketing narrative of expanded functionality is that our eyes do not need an intervention of innovation to convince us of reality.
The addition of higher resolved details from a high resolution image capture, downscaled for practical viewing, will not make the captured image appear more real than one without the benefit of technological intervention. Likewise, increased tonal gradation in shadows and highlights will not make the captured image appear more lifelike either. Beyond a threshold of detail, the naked eye no longer differentiates between what is already good in practical terms and what is exceptionally good in technical terms.
In the real world, when one sees the subject’s eye in a photograph, one knows it’s an eye. The fact that the iris or pupil can be appreciated at high magnification does not materially add to the presentation of realism. Viewers generally know an eye is an eye because of its placement relative to the subject’s other facial features. So as long as the magnitude of detail is good enough in practical terms, the viewer will be sufficiently convinced of the image capture’s presentation of realism.
That said, technological innovation has its moments. When optimized in use for a desired result, it can be very effective. However, that doesn’t mean more details by themselves improve an image. In fact, too much focus on details can materially diminish the visual narrative of an image. That is to say, if the range of details is consistent throughout the image frame, from edge to edge and corner to corner, then all variables from the subject to the background, and all distractions in between, are weighted the same.
But then again, isn’t that the nature of realism, wherein every detail in view is weighted the same? Fingernails, knuckles, and skin folds become just as important as shoelaces, and eyelashes, and buttonholes. When every detail is treated the same, the captured image becomes visually cluttered, without any point of focus drawing the viewer’s attention. Because of that, I find high resolution, and high ISO, and high dynamic range, and even high color depth to be superfluous in practice - unless hyperrealism is the ultimate goal.
Fortunately, it is not. For the most part, interpreting reality, as opposed to capturing it, is the photographer’s core objective. Consequently, capturing more details with high resolution, dynamic range, or color depth is not important. Interpretation only requires the presentation of realism to be convincing, which depends on a lower range of details across the image frame. I mean, as long as the viewers know what they are looking at, does it really matter how much more real than real the image capture appears to them?
And you wonder why digital photographers obsess over shooting wide open. It’s the only time they can sufficiently isolate a subject from the clutter of background distraction. Film, on the other hand, doesn’t obsess over realism. Rather, film favors aesthetic considerations over reality - which makes it visually more appealing to the eye than digital capture. That said, it does not mean that film does not strive to be real. To be frank, film is already detailed enough to be perceived as real. I mean, how real must real be?
The images I shared in this blog entry were shot on film and digitized to no more than 6 megapixels. I believe the extent of resolved details was convincing enough to make the images look sufficiently real. Freed from a preoccupation with realism, other qualitative considerations like the aesthetics of texture, color bias, and tonality can be fully appreciated. As such, less detail makes the final image captures appear less distracting and more complete in overall rendering.
I just believe it’s easier on the eyes. Besides, keeping it too real can really go wrong, to quote Dave Chappelle. Still, someone has got to feed the elephant in the room. If everyone were like this current version of me, the entire photo industry would ultimately collapse.
All images were tweaked in Adobe Lightroom and digitized with a Fujifilm S5 Pro + Nikon AF-S DX Micro 40mm f/2.8G + Bolt VM-210 + Nikon ES-2. Some images were leveled and cropped for the sake of presentation.