> No military or aid interference of any kind, no taxes or laws but also no import/export commerce, communication or organizing of any government or militia/military will be allowed
A multi-national/global treaty. They become military target practice, courtesy of whoever is available or wants to train their soldiers, if they arm themselves or try to import/export.
Everyone else is going to spontaneously force rules onto others? And how exactly do you see that working when the party breaking some rule resists?
It's exactly this kind of magical thinking that has resulted in essentially no state-equivalent anarchist societies managing to function and exist in modern times; the few that pop up get slammed down by states, because they really suck at resisting. The closest you seem to get are situations like the Zapatistas, which are libsoc, not anarchist, since they still have a state.
> And how exactly do you see that working when the party breaking some rule resists?
They get killed/droned. Have you seen Escape from NY/LA? This would be like a modern Australia, except people can volunteer to go there, and we have the tech and resources to enforce a perimeter and monitor the goings-on in the place.
Anarchism really isn't the goal. It is to have the equivalent of the old west, where while it is brutal and you don't enjoy the rights and benefits of civilization, you also get to live a life, however short, that is truly yours. Anyone can flee global persecution and go there; Snowden could have, for example. Entering the place is considered the equivalent of dying to society, and you can never come out of it. Whether the disruption is technology, disease, ideology or something else, this would be the redundancy: a controlled environment that is immune to the changes of the outside global world. A place for those who have no place anywhere else. A place criminals can volunteer to go to for certain crimes (especially victimless ones) instead of spending their lives in a cage. Where deserters from war and those who abandon their lives can go, at the risk of their safety and with no guarantees of wellbeing other than that which they could fight for. It would be more Mad Max than Somalia.
I'm using Authentik as part of my selfhosted setup and have mostly positive things to say. I tried Keycloak first but had too much trouble getting the Docker image to work, so I switched to Authentik.
I also checked out some other options along the way, and ultimately realized that pretty much all of the options come with enterprise-oriented features that are just added complexity for the self-hosting use case.
At this point I've gotten at least somewhat familiar with all the complexities of Authentik, so I'd have a hard time switching away from it. Would definitely love to see a solution geared towards selfhosting that's more barebones, though.
Getting a Reddit API key seems to be a more complex process than usual, with the first step being submitting a service ticket [0]. I imagine this would also help Reddit prevent people from doing what you're suggesting, if it were to become popular.
No, but it does mean that GPT can't be smarter than the people writing the published content on the web (generally the very smartest humans, if you consider high-quality published works alone).
If you could do a perfect job of training a GPT on the entirety of published academic literature, the total of what that GPT could spit out would be limited by the knowledge contained in academic literature. At the same time, you'd have created a tool that is cheap and does a good job of synthesizing knowledge/answering questions across all disciplines. The model will never replace the scientists who are working at the very bounds of their fields, but it doesn't have to in order to be extremely useful, even useful enough to replace a majority of knowledge workers.
Just because GPTs can't be smarter than the smartest humans doesn't mean they can't be smarter than most humans.
Whilst that is probably true, it is not necessarily true. Nothing fundamentally prevents GPT from synthesizing new and novel insights. If a novel insight is trivial to notice when combining knowledge from 4 disciplines, it could be nigh impossible for humans to find, yet obvious to GPT.
Or perhaps the insight isn't trivial but follows analogous reasoning from some other obscure result. Or perhaps a million other things.
I don't mean to claim GPT will do this. I just mean to point out that it can't be fully excluded that GPT is able to.
Google has a whole bunch of products/tools in the healthcare space, and it seems like their contribution there is only growing. I've been working with FHIR/EHR-adjacent tooling lately for a personal project, and a good number of both the open source resources and SaaS products I've seen have been from Google.
More broadly, all big-3 cloud providers (Azure, AWS, and Google) have offerings for FHIR data storage and API access, as well as common NLP-based healthcare data analysis workflows. Many of these seem relatively new, or as if they have had a lot of recent attention focused on them. I'm definitely interested in how/why these companies (as well as some other VC-funded ones, like Medplum) are entering this space with products that are not directly sellable, but are rather things that other tools would have to build upon. It seems like AWS works directly with end customers to use their APIs to build products, but I'm not sure what Azure and Google are doing.
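Since these FHIR offerings all speak the same REST shape, here's a minimal sketch of what "FHIR data storage and API access" looks like from the client side. This is my own illustration, not any vendor's SDK; the base URL points at HL7's public HAPI test server purely as a placeholder, and real cloud deployments would sit behind the provider's auth (OAuth2 service accounts, etc.).

```python
import requests

# Placeholder endpoint: HL7's public HAPI test server. A real deployment
# would use the vendor's FHIR endpoint plus its auth mechanism.
base = "https://hapi.fhir.org/baseR4"

resp = requests.get(
    f"{base}/Patient",
    params={"_count": 1},                        # standard FHIR search parameter
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
bundle = resp.json()                             # FHIR searches return a Bundle
print(bundle["resourceType"], len(bundle.get("entry", [])))
```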
This one's probably too complex for my use case, but I thought the concept looked very neat and wanted to share.
I agree that it's probably not anything more than statistics, but why can't statistics alone generate emergent phenomena? What convinces you that the human brain isn't also just statistics at a massive scale?
Because there is no generative mechanism in the definition of "statistics" with which to generate anything.
> What convinces you that the human brain isn't also just statistics at a massive scale?
Because the human brain created the concept of "statistics" so if statistics created the human brain this would mean statistics created statistics, leading to infinite regress.
But the idea that the brain functions on a basis of complex statistical processes doesn't imply that statistics created the brain itself; it just suggests that the brain's processes can be modeled/understood through the lens of statistical methods. Statistics is just a tool we invented to analyze observable data; the brain could be a similar tool, and it doesn't need to be the same tool.
It's akin to saying that we created the concept of "physics". Yes, we created it, and physics governs, for example, how a car moves, but that doesn't mean our concept of physics created the car or physics itself; we just use physics to describe and understand the car's movement.
Maybe statistics can't generate anything, but if you imagine everything we can do as unimaginably complex, multi-dimensional functions that generate outputs from inputs, we can use statistics to find functions that fit any real (ground truth) function in the observable universe.
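To make that concrete, here's a toy sketch of statistics recovering a "ground truth" function from noisy observations. The choice of function, noise level, and polynomial degree are all arbitrary on my part; the point is only that a purely statistical fit tracks the underlying function.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y_true = np.sin(x)                              # the "ground truth" function
y_obs = y_true + rng.normal(0, 0.1, x.shape)    # noisy observations of it

coeffs = np.polyfit(x, y_obs, deg=7)            # least-squares polynomial fit
y_fit = np.polyval(coeffs, x)
print("max abs error vs ground truth:", np.max(np.abs(y_fit - y_true)))
```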
> the idea that the brain functions on a basis of complex statistical processes doesn't imply that statistics created the brain itself
Agreed. However, the phrasing and context of the question did imply the brain is "just statistics" and somehow emerged from statistics. If we are to interpret this as "the brain functions on just statistics", then the answer is still "it does not", because the brain can be said to function on countless different systems simultaneously, such as pure counting, algebra, calculus, etc., which would mean that it's not "just statistics."
> It's akin to saying that we created the concept of "physics"
This will boil down to our exact definitions, but most people conceive of physics as having a generative mechanism. If something were to ever be "created", like an atom or a new car, we would retroactively declare it to have been created in accordance with "the laws of physics." We wouldn't make the same retroactive assessment with something like "the rules of chess" because there is nothing in the rules of chess justifying such a creation. So we choose to give physics a special status.
> we can use statistics to find functions that fit any real (ground truth) function
A given statistical model might fit a function of the universe, but so might other models. Physics describes a function of the universe, chemistry describes a function of the universe, biology describes a function of the universe, politics describes a function of the universe. Describing a ground truth is one thing; elevating the description itself to the status of ground truth is another.
I've been working in systems neuroscience for a few years (something of a combination lab tech/student, so full disclosure, not an actual expert).
Based on my experience with model organisms (flies & rats, primarily), it is actually pretty amazing how analogous the techniques and goals used in this sort of research are to those we use in systems neuroscience. At a very basic level, the primary task of correlating neuron activation to a given behavior is exactly the same. However, ML researchers benefit from data being trivial to generate and entire brains being analyzable in one shot as a result, whereas in animal research elucidating the role of neurons in a single circuit costs millions of dollars and many researcher-years.
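As a rough illustration of that shared core task (not any lab's actual pipeline), correlating units against a behavioral variable can be as simple as the sketch below. The data here are random placeholders standing in for recordings (or a model's hidden activations) and experiment logs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_timepoints = 50, 1000
activations = rng.normal(size=(n_neurons, n_timepoints))  # stand-in recordings
behavior = rng.normal(size=n_timepoints)                  # e.g. running speed per frame

# Pearson correlation of every unit's trace against the behavior trace.
corrs = np.array([np.corrcoef(a, behavior)[0, 1] for a in activations])
top = np.argsort(-np.abs(corrs))[:5]
print("most behavior-correlated units:", top)
```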
The similarities between the two are so clear that in its Microscope tool [1], OpenAI even refers to the models it studies as "model organisms", an anthropomorphization which I find very apt. Another article I saw a while back on HN that I thought was very cool was [2], which describes the task of identifying the role of a neuron responsible for a particular token of output. This one is especially analogous because it operates on such a small scale, much closer to what systems neuroscientists studying model organisms do.
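For a flavor of what that kind of single-neuron analysis involves, here's a toy version of the "which tokens does this neuron promote" question: project one neuron's output direction through the unembedding matrix and rank tokens by the resulting logit contribution. The weights below are random placeholders, not a real model's; actual analyses use the trained matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, vocab = 64, 1000
neuron_out = rng.normal(size=d_model)           # the neuron's output direction
W_unembed = rng.normal(size=(d_model, vocab))   # maps residual stream to logits

logit_contrib = neuron_out @ W_unembed          # per-token logit contribution
print("tokens this neuron pushes hardest:", np.argsort(-logit_contrib)[:5])
```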
Oh sure! So the OCR is a statically linked build of tesseract based on [1] and pytesseract [2], which is a super thin wrapper but easier than writing it yourself. Then I stole/modified the prompt from [3] to get the bot to write Python programs that do date calculations. Then I used [4] to parse the output in case the LLM didn't use the format I asked for. I run the Python code it generates in a rootless container that uses the Lambda RIE [5], because I was too lazy to make my own thing. So I mildly lied about the v8 isolate, because everyone gets the implication and who wants to hear about RIE and seccomp profiles.
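For the curious, a stripped-down sketch of the OCR and sandboxed-execution steps. Assumptions: pytesseract needs a tesseract binary on PATH, "screenshot.png" is a placeholder path, and rootless podman stands in here for the Lambda RIE setup described above.

```python
import subprocess
from PIL import Image
import pytesseract

# 1. OCR the screenshot; pytesseract is the thin tesseract wrapper.
text = pytesseract.image_to_string(Image.open("screenshot.png"))

# 2. Run the (LLM-generated) date-math program in a throwaway rootless
#    container, feeding the script over stdin so nothing touches the host.
generated_code = "print('placeholder for LLM-generated date math')"
subprocess.run(
    ["podman", "run", "--rm", "-i", "python:3.11-alpine", "python3", "-"],
    input=generated_code.encode(),
    check=True,
)
```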
Who is going to enforce this?