Actually, no, there is no fundamental problem with this approach.
Search engines face a similar problem: they estimate how useful a search result is by counting the number of people who click its link. To do this, they need to account for the fact that people tend to click only the topmost results.

But they handle it just fine, without hiding any search results. Instead, they simply count clicks on each result link and apply a correction based on the probability that a user clicks a link at a given distance from the top. This is similar to the approach I suggested in example 1, which relies on clicks on the vote up/down buttons instead of search result links.
(If people don't interact with the page at all, never upvoting anything, I think an algorithm could simply disregard them.)
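To make the search-engine analogy concrete, here is a minimal sketch of that kind of position-bias correction using inverse propensity weighting. The examination probabilities, function names, and numbers are all illustrative assumptions on my part, not anything a real search engine publishes; in practice the probabilities would be estimated from click logs.

```python
# Illustrative examination probabilities: the assumed chance that a user
# even looks at a result shown at a given rank. These numbers are made up
# for the sketch; real systems estimate them from logged click data.
EXAMINATION_PROB = {1: 0.68, 2: 0.40, 3: 0.25, 4: 0.17, 5: 0.12}

def debiased_score(clicks_by_position):
    """Estimate an item's usefulness from (position, clicks, impressions) records.

    Each raw click-through rate is divided by the probability that a user
    examined that position, so items shown lower on the page are not
    unfairly penalized for receiving fewer clicks.
    """
    total = 0.0
    count = 0
    for position, clicks, impressions in clicks_by_position:
        prob = EXAMINATION_PROB.get(position, 0.05)  # floor for deep ranks
        total += (clicks / impressions) / prob
        count += 1
    return total / count if count else 0.0

# Two items with identical true appeal, shown at different ranks:
top_item = [(1, 68, 100)]   # shown at rank 1: 68 clicks per 100 views
deep_item = [(4, 17, 100)]  # shown at rank 4: only 17 clicks per 100 views

print(debiased_score(top_item))   # both recover the same score, 1.0
print(debiased_score(deep_item))
```

The same idea carries over to votes: instead of weighting clicks on a result link by its rank, you would weight clicks on the up/down buttons by the probability that the voter scrolled far enough to see the item at all.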