I am not an influencer. I am not a fashion model. I am not an interior designer. I don't use my cellphone camera to generate "content". I use it to document things. I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching and editing of material by the camera. This is dangerous. Pictures taken by such cameras can no longer be considered accurate representations. Correction of shadows, the replacement of dull color with vibrant, the smoothing of textures ... every photo is now a crafted work of art by the machine. They are distorted representations. This will come back to haunt us.
Think of this: a cop body camera that auto-adjusts faces to display them more clearly at night. Sounds like a good idea. Then something happens. The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day. Yes, the camera did take a more clear and useful photo, but it is not a proper depiction of the reality experienced by the officer.
> every photo is now a crafted work of art by the machine.
This was always the case. Unless you have a very specific camera setup where you're trying to avoid this, there have always been certain characteristics that come through in photos from cameras. In fact, it's the main selling point of some cameras. Hasselblad, Polaroid, Canon, and Sony all have their own 'looks' when it comes to output.
> The cop says "I couldn't see the guy's face" but the body camera shows the face clear as day.
I'll use a similar but opposite argument here. Ever since iPhones came out, they have never really been able to capture dark-skinned people as we see them through our eyes. Unless you had perfect lighting, you could clearly see issues with the sensor catching the contrast in their faces. With all the retouching you speak of, iPhones have gotten much better at showing some people more closely to how we see them in reality. So when that cop claims "I couldn't see the guy's face, the damn camera is too good!", I'd be very hesitant to believe him.
Yup. And the move from a camera to a cinema camera incorporating cinematography trickery represents a marked change in what a personal camera is and does.
This isn't any trickery, this is a file format. One that actually avoids the uncontrollable computational photography that is applied by default and that changes the nature of the picture without the user's input. It helps preserve what we think of a picture as: a digital capture of light. Rather than what it's becoming, which is the product of details hallucinated by a neural network.
> I need it to take clear pictures that accurately represent things that I see. We are now moving away from auto-focus and auto shutter speeds toward on-the-fly retouching and editing of material by the camera. This is dangerous.
You could argue that up until now you were not able to take photos or video that accurately represented the world you see, only the world as seen through the rose-colored lenses of the device manufacturer. The photos and videos that you take today with your phone or camera have distortions applied automatically, based on presets provided by the software used to capture the media. Sometimes you get options like Vibrant, Indoor, Portrait, and Landscape mode to choose how the images or video are manipulated. You don't get to see what the camera actually saw, only what the device manufacturer wants you to see.
Log video is like Raw photos. As this capability becomes more prevalent, I could see it becoming a requirement for criminal investigators and others to capture evidence using a Log or Raw mode.
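To illustrate why Log/Raw capture preserves evidence that a display-ready encoding loses, here's a rough sketch using a generic log curve (not any vendor's actual transfer function): linear 8-bit quantization crushes nearby shadow intensities into the same code value, while a log curve spreads the shadows out so the detail survives for later regrading.

```python
import math

def linear_8bit(x: float) -> int:
    """Scene intensity x in [0, 1] -> 8-bit code value, linear encoding."""
    return round(x * 255)

def log_8bit(x: float, base: float = 100.0) -> int:
    """Scene intensity x in [0, 1] -> 8-bit code value, generic log curve.

    The base is an illustrative assumption; real transfer functions
    (e.g. S-Log, V-Log) are more elaborate.
    """
    return round(math.log(1 + x * (base - 1), base) * 255)

# Two nearby shadow intensities:
a, b = 0.010, 0.013
assert linear_8bit(a) == linear_8bit(b)  # linear collapses them into one value
assert log_8bit(a) != log_8bit(b)        # the log curve keeps them distinct
```

The point for evidence: once the linear encoding has merged those shadow values, no amount of later grading can separate them again, which is exactly why a Log or Raw capture mode retains more of what the sensor saw.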
What I would argue is that, if it's not there already, we need signatures and metadata stored in the EXIF of captured photos and video that record how the image was captured. With that you could determine to what extent the media has been manipulated.
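A minimal sketch of that idea in Python's stdlib, assuming the camera signs the image bytes together with a record of the processing it applied at capture time. Everything here is illustrative: real provenance schemes (e.g. C2PA) use public-key signatures embedded in the file, not a shared HMAC key, and the processing-log format is made up.

```python
import hashlib
import hmac

# Assumption for the sketch: a key baked into the camera hardware.
DEVICE_KEY = b"device-private-key"

def sign_capture(image_bytes: bytes, processing_log: str) -> str:
    """Sign the pixels plus a record of the processing applied at capture."""
    payload = hashlib.sha256(image_bytes).digest() + processing_log.encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, processing_log: str, signature: str) -> bool:
    """Check that neither the pixels nor the claimed processing changed."""
    expected = sign_capture(image_bytes, processing_log)
    return hmac.compare_digest(expected, signature)

raw = b"\x00\x01\x02"  # stand-in for sensor data
log = "mode=log;denoise=off;face_enhance=off"
sig = sign_capture(raw, log)

assert verify_capture(raw, log, sig)
# Editing the pixels, or lying about the processing, breaks verification:
assert not verify_capture(raw, "mode=log;denoise=off;face_enhance=on", sig)
```

The useful property is the second assertion: the signature binds the image to an honest statement of how it was produced, so "no face enhancement was applied" becomes a verifiable claim rather than a trust-me one.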
Rose-colored filtering across an image is one thing, something we all understand. Nobody would say that a black-and-white photo is an accurate depiction, but we all understand what a black-and-white photo is. Alterations to specific aspects of a scene, per-pixel changes that are not applied uniformly across the image, are something else. A camera that detects and alters images where people close their eyes or fail to smile, that will not be recognized. A camera that corrects a scene to make it look as if it were in daylight rather than interior office lights, that too will not be recognized by the vast majority of viewers.
> A camera that detects and alters images where people close their eyes or fail to smile, that will not be recognized. A camera that corrects a scene to make it look as if it were in daylight rather than interior office lights, that too will not be recognized by the vast majority of viewers.
Those are two very different things and neither is new. Of the two, the latter is closer to taking a sepia or black-and-white photo. It's simple grading that's been done for decades. Log video is mostly an extension of Raw photography, which has been available to consumers for decades. The former is the more concerning technology, and it's been available for at least five years on consumer-grade devices.
Your camera (including film cameras) never could take a fully accurate picture representing what you see. Digital sensors and film don't perceive what our eyes do. It's always been up to you, the photographer, to ensure that. If you choose to shoot on auto, that's your choice to let the camera guess at the accuracy. Most people don't like what is actual reality, so they under-, over-, long- and short-expose to choose what reality they represent. They light things artificially and they put makeup on. They might even stage scenes. Even in pure film days, humans were altering the output. Whether for realism or artistic purposes, dodging and burning were effectively retouching practices in film.
Yes, smartphone cameras are using computation to get a more "correct" output, unless it's marketed as a feature to alter the image, such as face smoothing. Camera makers (and film makers) are always trying to make their sensors better perceive the range our human eyes can, or at least give us the choice, through data, to make the decision between realism and art.
Your bit about the police officer is 100% irrelevant to your main point.
I was trying to take a passport photo, and one of the requirements is "has not been touched up". But when I took the photo with my phone, I noticed that it had very helpfully touched up my face, removing almost all of my wrinkles and making my skin nice and soft. Even with all "enhancements" off. This was on a Samsung S10. I tried with an iPhone SE, which was slightly less visibly touched up, so I used that, but it still definitely had a "beauty" filter built in. It's probably implemented in the ASIC, so you basically can't turn it fully off.