
> it will strain Wikipedia's governance model and become more of a single point of failure

Why can't the Wikipedia model be adapted to a federated, directly community-run approach? This works well enough for services such as email, Matrix, and the fediverse. There's gravitation towards centralized hosting services, but that's largely behavioral - the model itself works perfectly fine with lots of small players.

Heavyweight multimedia can be a challenge but text content itself is quite easy to serve up from very small devices.



Wikidata and Wikibase, the software it runs on, are expanding into a "federated" network of knowledge stores. You can, for example, link from Wikidata to other instances of the software and query them transparently. It's used by a few museums that want to keep control over the descriptions of their art, but link to Wikidata for, say, an artist's place of birth. Then you can use their query interface (SPARQL etc.) to get all the art they have from "artists born in a city that had a commercial port in 1960" without the museum ever having to enter more information than "this is a van Gogh".
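A rough sketch of what such a federated query looks like, using SPARQL 1.1's SERVICE keyword (the museum endpoint and its use of Wikidata property IDs here are assumptions for illustration; P170 is "creator" and P19 is "place of birth" in Wikidata):

```sparql
# Run against a hypothetical museum Wikibase: list its paintings,
# pulling each artist's birthplace live from Wikidata via federation.
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?painting ?artist ?birthplace WHERE {
  ?painting wdt:P170 ?artist .           # creator, stored locally
  SERVICE <https://query.wikidata.org/sparql> {
    ?artist wdt:P19 ?birthplace .        # place of birth, fetched remotely
  }
}
```

The museum only records "this painting, this creator"; everything about the creator comes from the remote store at query time.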

EDIT: Here's an example from the EU, which has their complete budget in a triple store: https://query.linkedopendata.eu/#SELECT%20%3Fproject%20%3Fpr...

(you have to click the blue "play" button to run the query.)

I'm not sure if they are federating with wikidata or just importing the data, but the result is similar either way.

(For budgets, always go for the treemap: https://query.linkedopendata.eu/#%23defaultView%3ATreeMap%0A...)


I'm not sure such a complex system of content moderation & process could so easily be federated; I'd love to see a federated system equivalent to Wikipedia out there or one that has successfully transitioned governance like that. Email spam, by comparison, is far far less nuanced. Regardless, it'd be a new effort and wouldn't just work or be trusted year one. It'd need to be tested and refined over years and years, like Wikipedia has.

I could see several nonprofits and news brands, along with Wikipedia, shifting to become a set of sources of truth for different and likely overlapping topics. That shift could happen gradually, as part of a mix of monetization incentive changes and more explicit 'here's how you participate' coercion (medium-is-message stuff; see FB's pivot to video, or the YouTube algorithm changing how content creators create). The generated result that Google spits out could reference those and note them as inputs, including noting where they disagree or choose to include or exclude certain context.

I still don't see how these ideas get funded without Google directly funding them, where algorithm transparency comes in play, etc.


The problems have absolutely nothing to do with technology in the streaming-video sense of the word. It's about trust, versioning, truth, reality, and similar concepts.

Maybe Linux development is a good example, with some centralisation but other power centers of varying size connected, such as distributions or non-kernel software projects.

But then again, Wikipedia is already federated into hundreds of local communities and horizontal projects like Commons or Wikidata, and it works less terribly than one would think.



