Or you delegate. Grant trusted individuals power to moderate sub-communities.
How do you ensure you select trusted individuals? Create institutional mechanisms with accountability and checks and balances: e.g. community voting to bestow time-limited power on individuals, appeal boards as a check on abuses of that power, and perhaps recall votes to remove someone.
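To make that concrete, the machinery involved is small. Here's a toy sketch in Python (entirely hypothetical; the names and the 180-day term are made up, and no real platform's API is implied) of time-limited grants with recall:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class ModeratorGrant:
        user: str
        granted_at: datetime
        term: timedelta          # power expires by default
        revoked: bool = False    # set early by a recall vote or an upheld appeal

        def is_active(self, now: datetime) -> bool:
            return not self.revoked and now < self.granted_at + self.term

    @dataclass
    class Community:
        grants: list[ModeratorGrant] = field(default_factory=list)

        def elect(self, user: str, votes_for: int, votes_against: int,
                  now: datetime, term_days: int = 180) -> bool:
            # A community vote bestows time-limited power on an individual.
            if votes_for > votes_against:
                self.grants.append(ModeratorGrant(user, now, timedelta(days=term_days)))
                return True
            return False

        def recall(self, user: str) -> None:
            # A recall vote (or an appeal board finding abuse) revokes power early.
            for g in self.grants:
                if g.user == user:
                    g.revoked = True

        def active_moderators(self, now: datetime) -> list[str]:
            return [g.user for g in self.grants if g.is_active(now)]

    # Elect someone for a term, then recall them:
    c = Community()
    c.elect("alice", votes_for=120, votes_against=30, now=datetime.now())
    print(c.active_moderators(datetime.now()))   # ['alice']
    c.recall("alice")
    print(c.active_moderators(datetime.now()))   # []

The point isn't the code; it's that expiry-by-default and revocation are first-class, rather than power being permanent until a scandal forces someone's hand.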
Opaqueness isn't the answer. Opaqueness leads to injustice.
You certainly don't have a chance of keeping up if you don't use the technology we've developed over thousands of years specifically to deal with people gaming systems. The technology of government.
Are there sites that have done this at scale? Reddit and Wikipedia come to mind as counterexamples, where the people with the most time and the strongest desire for power end up doing almost all the moderation.
I'd say the issue with Reddit, in the context of this discussion, is that it doesn't work. For whatever reason you like, Reddit doesn't delegate the community to its moderators. It delegates the responsibilities, but not the rights. Reddit still has an overarching set of "community standards" and is thus, at that level, still just one big community, with the accompanying failures. The attempt to solve that with subreddits was a good try, and it solved some things, but it doesn't get around the sorts of problems being discussed in this overall thread.
(I'm only talking about what the case is, in this context, not why, nor judging it at the moment.)
Reddit is an inconsistent mess because it's fundamentally reactive at the very top. Someone Who Matters notices they host racist communities? They ban the most obvious ones, but not communities like /r/Sino, which are blatantly racist in ways the Western media finds difficult to complain about. Someone Who Matters notices they have transphobic communities? They ban the most obvious ones, except ones like /r/ShitRedditSays, which hide behind a twisted interpretation of feminism and could play a more effective card were they to be banned.
And so on and so forth. It's administration by reacting to bad press.
That may be somewhat due to policy changes. Reddit did try to be more hands-off in the past.
But perhaps a better example than Reddit is the internet as a whole. ISPs and hosting providers are dumb carriers, and moderation is mostly left to the individual sites. Depending on your host, it may take a court order to get them to do anything at all.
In other words, we need more platforms that avoid putting themselves front and center; avoiding that also avoids the reputation problems that come with a hands-off approach.
Many city, state, and even country subreddits have been taken over by right-wing moderators. This has been a coordinated process over many years. And while you can easily create a new subreddit for your country, you still can't be /r/canada.
Moderation is labor, and you get what you pay for. Given how notoriously damaging Facebook moderation has turned out to be to moderators' mental health, adding unpaid moderators with no health care benefits, and especially no mental health protections, seems like exactly the wrong direction for Facebook.
That said, I absolutely agree that the answer likely lies in checks, balances, and as much transparency in the adjudication process as possible, and yes, Facebook has a lot to learn there from existing governmental technology.
That's certainly a great strategy if you have those options available. However, if you can't maintain a certain moderator-to-user ratio, it's going to be impossible.