
It's related to double buffering and vsync together. If the pure rendering performance is 45fps, then once a frame is finished the next one can't start until the buffer currently being displayed becomes available at the next vsync - hence the rate degrades to 30fps.

http://www.anandtech.com/show/2794/2
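
A toy sketch of why that snaps to 30fps (hypothetical numbers; assumes a 60Hz display and a renderer that can't start the next frame until the swap lands on a vsync):

    import math

    # Toy model of double-buffered rendering with vsync on a 60 Hz display.
    # The numbers are illustrative, not measurements from any real device.
    REFRESH_MS = 1000 / 60   # ~16.67 ms between refreshes
    RENDER_MS = 1000 / 45    # "pure" rendering performance of 45 fps (~22.2 ms)

    def effective_fps(frames: int = 1000) -> float:
        """Effective frame rate when each swap has to wait for the next vsync."""
        t = 0.0
        for _ in range(frames):
            t += RENDER_MS
            # With only two buffers the renderer stalls until the refresh that
            # frees the other buffer, so every swap snaps to a vsync boundary.
            t = math.ceil(t / REFRESH_MS) * REFRESH_MS
        return frames / (t / 1000.0)

    print(round(effective_fps()))  # -> 30, not 45

Every ~22ms frame misses one vsync and waits for the next, so frames get presented every two refreshes (33.3ms), which is exactly 30fps.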



Ah OK. I was assuming a world in which triple-buffering was a given; in other words, that vsync in practice was a rendering implementation detail for avoiding tearing, rather than something that blocked the renderer. I can see how things are different on a resource-constrained device that can't afford triple-buffering.

I'll add that you don't usually see a constant rate of rendering performance. Render time for a frame wanders above or below 16.67ms (with the design target being under that, if 60fps is the aim) depending on scene complexity or how busy other tasks are. It's not usually the case that every frame takes 22ms (i.e. 45fps) to produce, so it would be very unusual to see a game / app flip from 60fps straight to a locked 30fps. Rather, the odd frame will take slightly more than 16.67ms, with most falling under. The statistical distribution gives an FPS rate somewhere between 30 and 60.
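
Rough sketch of that last point (same toy double-buffered vsync model as above, with made-up frame times drawn from a distribution centred just under 16.67ms):

    import math
    import random

    REFRESH_MS = 1000 / 60  # 60 Hz display

    def effective_fps(render_times_ms):
        """Effective fps when each swap waits for the next vsync (double buffering)."""
        t = 0.0
        for rt in render_times_ms:
            t += rt
            t = math.ceil(t / REFRESH_MS) * REFRESH_MS  # snap the swap to a vsync
        return len(render_times_ms) / (t / 1000.0)

    random.seed(0)
    # Hypothetical workload: most frames come in just under 16.67 ms,
    # the odd one takes a little longer and misses a vsync.
    frame_times = [random.gauss(15.5, 2.0) for _ in range(10000)]
    print(round(effective_fps(frame_times)))  # lands somewhere between 30 and 60

Frames that finish in time present on the next refresh (the 60fps case), the ones that miss slip to the refresh after (the 30fps case), and the mix averages out somewhere in between.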



