
One of the interesting things was the 'white noise' that was identified as various animals. It reminded me of people looking at noise and "seeing" data, which suggests to me that at some level this isn't completely an artifact. If algorithms so closely modeled on human perception are susceptible to this sort of thing, humans probably are too. Perhaps that explains reports of people seeing things in the electronic 'snow' pattern of a disconnected TV?


The difference is that humans know they are seeing noise; they don't claim 99% confidence in what they see, but rather report very low confidence.
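A toy illustration of why that gap exists (all numbers made up): softmax forces the class probabilities to sum to 1, so even on pure noise the class whose logit happens to be a bit larger than the rest can come out near 99%.

    import math

    def softmax(logits):
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical logits for a noise image: nothing really fits, but one
    # class scores a few points higher than the others.
    logits = [1.0, 0.5, 6.0, 0.2, 0.8]
    print(max(softmax(logits)))  # ~0.98, reported as "98% confident"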


It's probably very naive, but that makes me wonder whether these neural nets are trained to recognize noise or meaningless images as such. If we train a system to tell us what an image represents, the system will do its best to classify it into one of the existing categories. But having low confidence in what an image represents is not the same as having high confidence that it doesn't represent anything. So maybe we should train the networks to give negative answers, like "I'm totally confident that this image is just noise."
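A minimal sketch of that idea (a PyTorch-style setup is assumed; the tiny model, image size, and uniform-noise generator are placeholders, not anything from the article): reserve one extra output for "just noise" and train on synthetic noise alongside the real data, so the network can put its confidence on "this is nothing."

    import torch
    import torch.nn as nn

    NUM_REAL_CLASSES = 10
    NOISE_CLASS = NUM_REAL_CLASSES          # index of the extra "just noise" class

    model = nn.Sequential(                  # toy classifier for 32x32 RGB images
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 256),
        nn.ReLU(),
        nn.Linear(256, NUM_REAL_CLASSES + 1),  # +1 output for the noise class
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(real_images, real_labels):
        # Mix a batch of labelled images with an equal-sized batch of pure
        # noise labelled as NOISE_CLASS, then optimise ordinary cross-entropy.
        noise_images = torch.rand_like(real_images)
        noise_labels = torch.full((real_images.size(0),), NOISE_CLASS,
                                  dtype=torch.long)
        images = torch.cat([real_images, noise_images])
        labels = torch.cat([real_labels, noise_labels])

        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example usage with random stand-in data:
    dummy_images = torch.rand(8, 3, 32, 32)
    dummy_labels = torch.randint(0, NUM_REAL_CLASSES, (8,))
    print(training_step(dummy_images, dummy_labels))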



