I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). Choosing personal gain over ideals is a very tempting, heavily incentivized decision. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. Ironically, the types who succumb to that reasoning usually end up doing the most harm.
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be co-opted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
Yes. There will always be people who see opportunity in using it destructively. Best case scenario is that others will use it to counter that. But it is usually easier to destroy than to protect. So we could have a constant AI war going on somewhere in the clouds, occasionally leaking new disasters into the human world.
I keep hearing this word "progress". We've been stuck here on Earth for 1.5 billion years; we're not progressing, we haven't gone anywhere. We're not going anywhere. There is nowhere better for light-years in any direction. Don't delude yourself with that narcissistic bunk, and don't play with fire.
I cherish my UCSD education and (equally, if not more) all the socializing that came with it.
The concept (and aims) of university faces the same headwinds as any business based on intellectual property in America: artificial moats of IP law.
China's manufacturing success partially stems from the government's inability to enforce IP law.
Thankfully, ideas want to be free, and LLMs give us the best-yet UI to information.
To me, Hacker News is a university. It's a place where I come to learn (from thousands of "teachers") and these "teachers" are actually also students, learning as well.
I don’t know if it is “unstoppable” or a “force,” but nepotism is a natural behavior, selected for in humans by kin selection.
Likewise, I think public choice theory would probably argue that corruption is a predictable outcome in politics that has to be constantly guarded against.
> corruption+nepotism are unstoppable forces of nature
History suggests it's the other way round. They're awfully prevalent (what is a hereditary monarchy but nepotism?), but the value of meritocracy over nepotism enables such better governance that it tends to win handily in proxy or actual conflicts. Similarly, if your society is too corrupt when you go to war, you discover that someone has sold the tyres off all your stored vehicles, or suchlike.
You also can't have a complex society without a complex government. This goes all the way back to Qin dynasty vs. "barbarians".
A strong system of checks and balances, free from the influence of bias, relationships, and politics, can be implemented using a two-way blind system where:
1. Decision makers (of sound judgement) are not aware of any identifiable information related to any users on whom the decision will be made, nor of each other.
2. Users are not aware of the decision makers who will decide on them, nor of each other.
Possibly AI can play a role here, but a strong system of checks & balances would be a prerequisite for this.
The justice system would definitely benefit from this.
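The two-way blind pairing described above can be sketched in code. This is a minimal illustration, assuming a trusted intermediary holds the pseudonym keys; the function and names are hypothetical, not any real system's API.

```python
import uuid
import random

def two_way_blind_assign(cases, reviewers):
    """Pair each case with a decision maker so neither side learns
    the other's identity.

    `cases` maps user IDs to case details; `reviewers` is a list of
    decision-maker IDs. Returns (assignments, case_key, reviewer_key):
    `assignments` links only pseudonyms to pseudonyms, while the two
    key tables stay with the trusted intermediary.
    """
    # Replace every real identity with an opaque pseudonym.
    case_key = {uid: str(uuid.uuid4()) for uid in cases}
    reviewer_key = {r: str(uuid.uuid4()) for r in reviewers}

    # Shuffle so the assignment order leaks nothing about identity.
    pseudonymous_reviewers = list(reviewer_key.values())
    random.shuffle(pseudonymous_reviewers)

    assignments = {
        case_key[uid]: pseudonymous_reviewers[i % len(pseudonymous_reviewers)]
        for i, uid in enumerate(cases)
    }
    return assignments, case_key, reviewer_key
```

Decision makers see only the case pseudonym and its (scrubbed) details; users see only a decision, never the reviewer. The real weight of such a scheme lies in scrubbing identifying information from the case details themselves, which this sketch does not attempt.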
I don't know how any AI system would not eventually determine that humans are the problem. Sci-fi uses this as a plot device numerous times for a reason. What humans are doing is not logical, and better choices could be made if it weren't so damn profitable for some to keep going as is.
Because unlike natural life, which has evolved to be highly competitive and self-interested, we would explicitly set the AI's objectives to always benefit society.
That will definitely be a problem, but I suspect and hope that there will be governing AI models that can be "prompted" with clear and concise instructions that will be demonstrably free of bias towards any group, either by a direct reading or by evaluation with trusted 3rd party models.
If the public does not trust the fairness of the AI prompt, that will hopefully lead to revolution and replacement of the prompt with something more principled, similar to how rigged elections (sometimes) trigger revolutions.
I think this paragraph from the wikipedia article captures it nicely:
>Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.
So many websites and apps are still broken in so many little ways. Maybe broken isn't the right word. But all kinds of annoyances and breaches still happen all the time.
I generally don't complain/review, and just learn the workarounds/shortcuts, but I very much welcome the increased (albeit perhaps less skilled) workforce leverage, because I think in a year or so we'll see steady improvements accumulating.
I appreciate the fun, but he's clearly either messing with them or has Asperger's. You can definitely reduce the hoops by knowing the bins, which they helped him with.