Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:
As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.
History has shown that humans are overwhelmingly chauvinistic in their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw the culinary tradition of boiling lobsters alive).
But it seems that some parties/actors are willing to subvert (i.e., benefit from subverting) this long-standing convention of prioritizing human interests in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...).
So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?
I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.
Thank you for your new substack, very illuminating (and scary) reading.
It's as if someone were using the show "Person of Interest" as a guidebook for how to build (and weaponize) the all-seeing Eye of Sauron that we have today.
It could make it worse. IP from companies that got chopped up and sold for parts can be a nightmare. You may have to do deals with multiple parties, and it can be unclear who owns what (even to the potential owners themselves).
There is debate as to whether the ZFS license (CDDL) is compatible with the GPL, which is why OpenZFS is not part of the Linux kernel. Some distros are baking it in, but there has long been concern about whether merging it violates the license or not.
Even if Oracle evaporated and their contemporary ZFS source became unencumbered, I doubt OpenZFS would want to try to merge significant parts of it. They already have their own encryption implementation, for example.
There are a lot of things that are "perfectly legal" to do. Doesn't mean that, in practice, law enforcement would necessarily take the most extreme legal action possible.
This is a new development (well, new-ish for many communities... I imagine predominantly Black communities have always experienced this) whereby LE is explicitly instructed to look for any legal avenue to punitively enforce the law (rather than using a more judicious interpretation, which was more in line with the spirit of many of these regulations).
So yes, technically, law-abiding citizens should always xxxx. Does that mean that, in real life, folks always do this? Only if they are in a paranoid state whereby LE maliciously enforces the law for any minor violation and enforces overwhelming (often illegal) responses to these infractions.
> Only if they are in a paranoid state whereby LE maliciously enforces the law for any minor violation and enforces overwhelming (often illegal) responses to these infractions.
It sounds like the current administration is specifically targeting Democratic-stronghold states, but I don't know if that means you would be any safer in Republican-majority states.
Generally no, because those are more likely to have state or local government employees working on ICE's behalf. For instance, in Florida, a weigh-station employee said they were calling ICE on people who looked (and sounded?) Hispanic.
Reading that article, it sounds more like the person skipped the weigh station and got pulled over by the highway patrol cop usually stationed there. Weigh stations will often have a patrol car sitting by the highway that pulls over anyone speeding or anyone who needed to stop at the weigh station but didn't. So CBP (not ICE) was called by a highway officer.
Still to your point of a state government authority calling the feds because someone they interacted with looked Hispanic.
That is an implementation detail. What matters is the outcome:
Notion leadership has signed off on this being opt-out.
The calculus here, as you indicated, was that opt-in has little buy-in.
What leadership didn't take into account was the risk of this being publicized, and the blowback from this awareness.
That, or leadership has already calculated that not enough people will care (possibly true).
I suppose it's then up to those who do care to make more noise about this and tilt the odds, so this specific calculus (also known as enshittification) doesn't keep occurring (i.e., so the blowback costs become disproportionate to the value provided by default opt-out...).
The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) to legal/moral choice determination.
The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...
My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.
By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, that intentionally amplify the biases).
to build on your point, we only need to look at another type of entity that has a binary reward system and is inherently amoral: the corporation. Though it has many of the same rights as a human (in the US), the corporation itself is amoral, and we rely upon the humans within it to retain a moral compass, to their own detriment, which is a foolish endeavor.
even further, AI has only learned through what we've articulated and recorded, and so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.