
Can someone explain why alignment can't be done in software? i.e. collect data from each mirror separately, then stitch it together in software?


Because sensors in the IR and visible range of the EM spectrum can only capture magnitude, not phase information. At radio frequencies you could do this (in fact it's pretty common).

Without phase information, you can combine different captures to improve SNR, but it won't improve the resolution. To improve resolution you need light interference, which requires phase information to be preserved.
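
A minimal sketch of that distinction in Python (a toy 1-D model with made-up aperture sizes, not any real instrument): summing the complex fields before detection produces narrow, baseline-set fringes, while summing detected intensities just gives the broad single-aperture pattern.

```python
import numpy as np

# Toy 1-D model: the focal-plane field of an aperture is the Fourier
# transform of the aperture function; a detector records |field|^2.
n = 4096
x = np.arange(n)

def sub_aperture(center, half_width=16):
    return (np.abs(x - center) < half_width).astype(float)

a1 = sub_aperture(n // 2 - 200)   # two small apertures,
a2 = sub_aperture(n // 2 + 200)   # 400-sample baseline

f1 = np.fft.fftshift(np.fft.fft(a1))   # complex fields (phase preserved)
f2 = np.fft.fftshift(np.fft.fft(a2))

coherent = np.abs(f1 + f2) ** 2                  # interfere, then detect
incoherent = np.abs(f1) ** 2 + np.abs(f2) ** 2   # detect, then add

def central_lobe_width(img):
    """Half-power width of the central peak, in samples."""
    c = int(np.argmax(img))
    r = c
    while img[r] > img[c] / 2:
        r += 1
    return 2 * (r - c)

print("coherent sum:  ", central_lobe_width(coherent))    # narrow fringe
print("incoherent sum:", central_lobe_width(incoherent))  # broad lobe
```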


I'm not certain, but I believe that to get high resolution you need a large aperture, i.e. photons must physically interact across locations that are far apart from each other.

Observing each mirror separately would measure the photons before their wave functions combine, so it would be equivalent to many small, low-resolution cameras instead of one big, high-resolution one. It defeats the purpose of a large mirror.
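
For a sense of scale, here is the Rayleigh diffraction limit, theta ~ 1.22 * lambda / D, worked through in Python (the wavelength and mirror diameters below are illustrative only, not any particular telescope's):

```python
# Rough diffraction-limited angular resolution via the Rayleigh criterion.
ARCSEC_PER_RAD = 206265
wavelength = 2e-6                 # metres (near/mid infrared, illustrative)
for diameter in (0.1, 6.5):       # a small sub-mirror vs. one big mirror
    theta = 1.22 * wavelength / diameter
    print(f"D = {diameter:>4} m -> {theta * ARCSEC_PER_RAD:.3f} arcsec")
# The big mirror resolves ~65x finer detail than the small one.
```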


Doesn’t that imply that the incoming photons are spread out over the entire mirror, several meters?


Yeah I was also surprised to find that photons can be really big: https://youtu.be/SDtAh9IwG-I

Before I saw the experiments in that video, I assumed photons were about as wide as their wavelength.


That guy is hilarious.[1][2][3] I'd like to know what would happen if the split beam were sent through fibre optics to Pluto and back, and whether the interference pattern would still show up instantly.

[1] "It's a golden oldie..."

[2] "... just to give the setup a nice high tech look and feel."

[3] "The physics behind this is pretty hefty, and not, like, youtube video material."


From what little I know about quantum mechanics, the statement about changing the light source and expecting to not see an interference pattern is very surprising. I’d expect to see one.


Different light sources have different coherence lengths. A continuous-wave laser has a long coherence length, while a light source that isn't a laser at all has a short coherence length even if it's monochromatic, a sodium lamp for example.

The laser used by the presenter has a coherence length longer than (or in the same ballpark as) the difference in optical paths in their experiment, so they get a clear interference pattern.

The Wikipedia article explains it in more detail: https://en.wikipedia.org/wiki/Coherence_length

Since you can measure coherence length (and higher-order temporal and spatial coherence statistics), it is part of the information carried by light from a luminous object that is available for imaging by a suitably designed camera.
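
For a feel for the numbers, here's a quick back-of-the-envelope sketch in Python. It uses the standard estimate L_c ~ lambda^2 / delta_lambda (order-one prefactors depend on the spectral line shape), and the linewidths are rough illustrative values, not measurements:

```python
def coherence_length_m(wavelength_nm: float, linewidth_nm: float) -> float:
    # L_c ~ lambda^2 / delta_lambda, converted from nm to metres.
    return wavelength_nm**2 / linewidth_nm * 1e-9

print(coherence_length_m(589, 0.02))   # low-pressure sodium lamp: ~17 mm
print(coherence_length_m(633, 1e-6))   # frequency-stabilized HeNe: ~400 m
print(coherence_length_m(550, 300))    # white light: ~1 micrometre
```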


Yes, a pity that is not in the video.


Woah. Thank you. This is nuts.


amazing video. thanks.


Hm? Yes, that’s how telescopes (and camera lenses) work!


Information theory. You can't recover information you don't have.

Think of the CSI 'enhance' meme and why that is physically impossible without introducing potentially fake information.


As user 4ad explains, one of the reasons is that phase information isn't collected by the sensors, and is needed to reconstruct all the image detail.

In radio astronomy the phase information is actually collected and sometimes recorded, which is why you can have arrays of radio telescopes far apart that combine their signals. The most extreme version of this is Very Long Baseline Interferometry (https://en.wikipedia.org/wiki/Very-long-baseline_interferometry), which was used to image the black hole at the centre of our galaxy (https://en.wikipedia.org/wiki/Event_Horizon_Telescope).

At infrared and visible wavelengths it is technically possible to collect some phase information, though subject to noise. So in principle you could capture images, including some phase, at each mirror location instead of using a mirror, and then stitch them together in a similar way to how it's done with radio. However, collecting the phase would be difficult and complex, especially with current technology, and would likely degrade the image so much that it's not worth doing. Using mirrors is better.

In the future, it is plausible that this will be done to combine images from optical telescopes far apart in space, giving a very wide aperture. But it seems just as likely that they will use mirrors far apart in space, directing the incoming light to a small number of focal locations to combine it in the optical domain first, before converting it to image data.
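
A hypothetical sketch of the core trick (toy numbers, not a real correlator): because each station records complex voltages, amplitude and phase together, cross-correlating two streams recovers the geometric delay between them, which is the raw material of aperture synthesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sky = rng.normal(size=n) + 1j * rng.normal(size=n)  # noise-like sky signal

delay = 37  # true geometric delay between stations, in samples
noise = lambda: 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
station_a = sky + noise()
station_b = np.roll(sky, delay) + noise()

# FFT-based circular cross-correlation; the peak locates the delay.
corr = np.fft.ifft(np.fft.fft(station_b) * np.conj(np.fft.fft(station_a)))
print("recovered delay:", int(np.argmax(np.abs(corr))))  # should print 37
```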




The other answers are great, but even if those things weren't an issue, I think you'd also need one sensor per mirror, and a sensor is bigger, heavier, and more complex than a single mirror. Or you could just watch through the same mirror for X times longer, where X is the number of mirrors, but then you'd get 1/X as many observations.


Yes. It's the same reason you can't take 10 out-of-focus pictures of yourself and get one in-focus picture. You need actual good data to combine.


You can, with enough processing (and knowledge of how out of focus it is). It worked for Hubble. But you get more good pictures if you get it in focus in the first place.


Yup

An engineering friend of mine was working on hardware related to military satellite imagery, and was sent on a course that covered all the post-processing techniques they had for improving resolution, and what could help those techniques upstream. He said that at the end of the course, the instructor's bottom line was this: while they could do all kinds of 'magic' to improve and enhance the photos, the best input to all their techniques, the one that would yield the best end result, was a better photo in the first place.

So, yes, there really is no substitute to a good original image.


Deconvolution. I haven't followed that in a while, but recovering the original function was next to impossible. Recording a calibration image and/or recent AI developments might have turned that into something awesome. I'll go catch up on the material!


We really need to be careful about trusting AI.

While my example here isn't photo recognition, the same principle applies. I recently sat for a deposition where the stenographer used an "AI" transcription system. The result was literally pages of errata (vs. the standard errata sheet, which has space for about a dozen lines).

The consistent error I noticed was that each erroneous word was (probably) the word most expected in that position, and NOT the word I actually said.

So, at a glance, it seemed like a really good transcription. In fact, many errors were barely noticeable to me and I had to go back to the audio recording to confirm. And these were errors that substantially changed the meaning, or even inverted it.

This is not merely information loss — the least surprising/lowest information item was inserted instead of the real item — this is actual information CORRUPTION.

I'd fully expect parallel phenomena from image "AI": filling in the item most expected from the training set, and actively corrupting the data by stripping out the highest-value bits of information and replacing them with the most expected ones.

Beware


Yeah, it's called signal reconstruction for a reason. There are classes of it that are verifiable and decent enough, however.


Yes, I'm sure that with properly constrained and well-tested data spaces, it could produce outstanding and very helpful results.

But accurate reconstruction in the wild is just sooo far away. And for good reason: it would need insane amounts of experience, and exposure to every bit of unusual data that exists in the world, to get it right...


As long as you know the point spread function (PSF), you can focus out-of-focus images through deconvolution: https://imagej.net/imaging/deconvolution Basically, take a picture of a well-known object that should look like a single pixel, measure how spread out it is, and then use that to back-convert images taken with the same camera.
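
A toy sketch of that workflow using scikit-image's Richardson-Lucy deconvolution (synthetic data and a Gaussian stand-in for the measured PSF; this is not the linked ImageJ pipeline):

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

rng = np.random.default_rng(1)
truth = np.zeros((64, 64))
truth[rng.integers(8, 56, 20), rng.integers(8, 56, 20)] = 1.0  # point sources

# Stand-in for the measured PSF, e.g. the image of a sub-resolution bead.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode="same")  # the "out-of-focus" shot
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)
# `restored` is visibly sharper than `blurred`; noise and PSF error limit
# how far this can be pushed, which is why a good original image still wins.
```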


Would you prefer this approach over actually having the target in focus?


It depends. Personally, I'd apply the technique even to focused images, after using a set of fluorescently labelled beads of known diameter to calibrate the PSF.



