Hacker News

How can an algorithm create more resolution (i.e. information) from less information?

Do you mean a wavering 4k stream output as 1080p?



Video Upscaling via Spatio-Temporal Self-Similarity: http://vision.ucla.edu/papers/ayvaciJLCS12.pdf

The Freedman and Fattal paper they mention can be found here: https://pdfs.semanticscholar.org/7df0/39049948d54fd1f4d75526...


We all laughed at the ridiculous "Zoom and Enhance" bits on TV crime shows, but it's become much more plausible in the past couple of years.

It's called super-resolution, or upsampling. Here is a good overview of techniques: http://www.robots.ox.ac.uk/~vgg/publications/papers/pickup08...

More recently, Google's RAISR: https://research.googleblog.com/2016/11/enhance-raisr-sharp-...

This repo pulls together techniques from several papers with impressive results: https://github.com/alexjc/neural-enhance

Anyway, it's an area of active research; there are already four dozen relevant papers in 2017 alone: https://scholar.google.com/scholar?q="machine+learning"+"sup...


It's using information from adjacent frames to add extra detail.
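A minimal sketch of that multi-frame idea, using a naive shift-and-add scheme (all names here are hypothetical, and the sub-pixel shifts between frames are assumed known; real systems estimate them with motion estimation or optical flow):

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive multi-frame super-resolution by shift-and-add.

    frames: low-res 2D arrays of identical shape
    shifts: per-frame (dy, dx) sub-pixel offsets relative to the
            first frame, in low-res pixel units (assumed known here;
            real pipelines estimate them via motion/optical flow)
    factor: integer upscaling factor
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(hi)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for frame, (dy, dx) in zip(frames, shifts):
        # Each low-res sample lands at a sub-pixel position on the
        # high-res grid; round it to the nearest high-res cell.
        yi = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        xi = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(hi, (yi, xi), frame)
        np.add.at(hits, (yi, xi), 1)
    # Average where multiple frames contributed; cells no frame
    # reached stay zero (a real method would interpolate them).
    return np.where(hits > 0, hi / np.maximum(hits, 1), hi)
```

With four frames offset by half a low-res pixel in each direction, every cell of the 2x grid gets its own genuine sample, which is why camera jitter between frames actually supplies the "extra" information.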


The general term is "video super-resolution". There's software available off the shelf to do it, IIRC.




