Hacker News

Okay, but can’t I take 8 raw images and merge them into one image with 8 times higher dynamic range?


Of course - that's the current standard way to produce HDR photos. The advantage here is being able to do it all in a single image, which prevents mismatches between images due to subject or photographer movement.


Yes, but if I understand correctly, unless those images occur at the same time (or really, really close) you run into issues involving subject and camera motion.


Yes, that's what modern cameras with the built-in feature do (although I'm not sure about 8 frames). The article talks about this feature.


Yes, but why can't the camera do it continuously? Film at 240 fps and provide a 60 fps output with 10-bit color depth.


Because the images would look funny. Imagine you're capturing someone running: at frame 1 they'd be at position x, and at frame 2 at position x + 1. If you stack those frames together, you'll get a weird ghosting effect.
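The ghosting is easy to see in a toy sketch (plain NumPy, with a one-dimensional "image" and a one-pixel "subject" standing in for the runner):

```python
import numpy as np

# A bright one-pixel "subject" on a dark background, two frames apart.
frame1 = np.zeros(8, dtype=np.uint16)
frame1[2] = 255          # subject at position x
frame2 = np.zeros(8, dtype=np.uint16)
frame2[3] = 255          # subject has moved to x + 1

stacked = frame1 + frame2
# The subject now shows up at both x and x + 1: ghosting.
print(stacked)
```

With more frames and a faster subject, the single bright pixel smears into a trail, which is exactly the ghosting/motion-blur distinction discussed below.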


Well, that ghosting effect is also called motion blur. If I integrate continuously, that is.


Motion blur, yes, but different parts of the image would be exposed differently due to the nature of HDR, resulting in an odd look. Imagine, instead of a runner, that you're panning from a dark scene to a light one.


Not necessarily.

Assume an 8-bit image is taken every 0.1 seconds; if I merge 4 of them, wouldn't I just get a 10-bit image with a 0.4-second shutter time?

Because I'd essentially just take the RAW images, load them into a 10-bit color space, add them up, and then store the result.
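The arithmetic here does check out, which a quick sketch can confirm (NumPy; the frames are random stand-ins for real RAW sensor data): summing four 8-bit frames gives values up to 4 × 255 = 1020, which fits in 10 bits, and the effective exposure is four frame intervals, i.e. 0.4 s.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four simulated 8-bit RAW frames captured 0.1 s apart.
frames = [rng.integers(0, 256, size=(4, 6), dtype=np.uint8) for _ in range(4)]

# Accumulate in a wider integer type so the sum cannot overflow uint8.
summed = np.zeros((4, 6), dtype=np.uint16)
for f in frames:
    summed += f

# Summing four 8-bit values yields 0..1020, which fits in 10 bits,
# and corresponds to a 4 * 0.1 s = 0.4 s effective shutter time.
assert summed.max() <= 1023
```

This only holds if nothing in the scene moves between frames; otherwise the ghosting issue raised earlier in the thread applies.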



