
> Actually, most video frame interpolation programs on the market use two-frame interpolation.

> Theoretically, you can do a better job with multiple frames, but this doesn't add much value outside of some extreme cases.

Edge cases that require more information than is present in two frames are very common. That's why most frame interpolation methods also have an "artifact masking" feature.
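To make the limitation concrete, here is a minimal sketch of the naive two-frame approach: blending the two neighboring frames 50/50. (Real interpolators warp pixels along estimated optical flow rather than blending directly, but the blend illustrates the failure mode: content that moves or is occluded between the two frames turns into ghosts, which is exactly what artifact masking exists to hide. The function name and toy frames are my own, for illustration.)

```python
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Naive two-frame interpolation: a 50/50 linear blend of the
    neighboring frames. Fast-moving or occluded content ghosts,
    because two frames alone don't say where a pixel was in between."""
    blend = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return blend.astype(frame_a.dtype)

# A single bright pixel moving two columns to the right between frames:
a = np.zeros((1, 5), dtype=np.uint8); a[0, 1] = 255
b = np.zeros((1, 5), dtype=np.uint8); b[0, 3] = 255

mid = interpolate_midframe(a, b)
# The true midframe would put the pixel at column 2; the blend instead
# leaves column 2 black and produces two half-bright ghosts at columns 1 and 3.
```

A flow-based method would estimate the motion and place the pixel at column 2, but when motion estimation fails (occlusion, large displacement), it degrades toward this same ghosting.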

But what if we did use information from the surrounding frames? That would probably be too complicated for traditional frame interpolation, but traditional methods aren't what we're talking about here.

What if we used a model trained on the entire video file - or even a collection of similar video files - to fill in the gaps?


