My wife only started learning drums a few years ago, and now she's regularly gigging with a band in a bunch of iconic venues around London. It's never too late to start! :)
I'm inclined to disagree. I believe the progress in music software has opened up new avenues and genres rather than stifling existing ones. Instrument players are going to keep playing their instruments; it's just that now it's easier than ever for them to make their own professional-sounding recordings and songs. On the flip side, more and more people are getting into music without requiring any formal musical background.
I also think the older we get, the more we think most music sounds the same, because music inherently changes over time and will be different to what we grew up on, while we also become more distanced from the subcultures and communities pioneering modern music trends. A young person today probably thinks all rock music from the 50s-90s sounds the same.
I disagree and think there are objective measures of cynicism in music today that are unprecedented.
I agree that the paths you suggest are possible and that the tools have never been more accessible.
I see cultural problems as the root issue. We’ve reduced music to a recorded consumable and thereby reduced the role of the humans communicating telepathically via music, which is the social function of music as I understand it.
I was at a conference called World Summit AI in 2018, where a vice president of Microsoft gave a talk on progress in AI.
I asked a question after his talk about the responsibility of corporations in light of the rapidly increasing sophistication of AI tech and its potential for malicious use (it's on youtube if you want to watch his full response). In summary: he said that it's the responsibility of governments and not corporations to figure out these problems and set the regulations.
This answer annoyed me at the time, as I interpreted it as a "not my problem" kind of response, and thereby trying to absolve tech companies of any damage caused by rapid development of dangerous technology that regulators cannot keep up with.
Now I'm starting to see the wisdom in his response, even if this is not what he fully meant, in that most corporations will just follow the money and try to be the first movers when there is an opportunity to grab the biggest share of a new market, whether we like it or not, regardless of any ethical or moral implications.
We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.
The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003.
> Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Corporations are soulless money maximizers, even without the assistance of AI. Today, corporations perpetuate mass shootings, destroy the environment, and rewire our brains for loneliness and addiction, all in the endless pursuit of money.
> Corporations are soulless money maximizers, even without the assistance of AI.
Funny you should say that. Charlie Stross gave a talk on that subject - or more accurately, read one out loud - at CCC a few years back. It goes by the name "Dude, you broke the future". Video here: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
His thesis is that corporations are already a form of AI. While they are made up of humans, they are in fact all optimising for their respective maximiser goals, and the humans employed by them are merely agents working towards that aim.
(Full disclosure: I submitted that link at the time and it eventually sparked quite an interesting discussion.)
And this is why I'm really scared of AGI. We can see that corporations, even though they are composed of humans who do care about the things humans care about, still do things that end up harming people. Corporations need humanity to exist, and they still fall into multi-polar traps like producing energy using fossil fuels, where we require an external source of coordination.
AGI is going to turbo-charge these problems. People have to sleep, and eat, and lots of them aren't terribly efficient at their jobs. You can't start a corporation and then make a thousand copies of it. A corporation doesn't act faster than the humans inside it, with some exceptions like algorithmic trading, which even then is limited to an extremely narrow sphere of influence. We can, for the most part, understand why corporations make the decisions they make. And corporations are not that much smarter than individual humans, in fact, often they're a lot dumber (in the sense of strategic planning).
And this is just if you imagine AGI as being obedient, not having a will of its own, and doing exactly what we ask it to, in the way we intended, not going further, being creative only within very strict limits. Not improving sales of potato chips by synthesizing a new flavor that also turns out to be a new form of narcotic ("oops! my bad"). Not improving sales of umbrellas by secretly deploying a fleet of cloud-seeding drones. Not improving sales of anti-depressants by using a botnet to spam bad news targeting marginally unhappy people, or by publishing papers about new forms of news feed algorithms with subtle bugs in an attempt to have Google and Facebook do it for them. Not gradually taking over the company by recommending a hiring strategy that turns out to subtly bias hiring toward people who think less for themselves and trust the AI more, or by obfuscating corporate policy to the point where humans can't understand it so it can hide rules that allow it to fire any troublemakers, or any other number of clever things that a smart, amoral machine might do in order to get the slow, dim-witted meat-bags out of the way so it could actually get the job done.
AI at least considers everything it's taught. The average CEO doesn't give a shit about the human cost of their paperclip. When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets". If you think AI will unlock some never-before-seen echelon of human cruelty, you need a brief tour through the warfare, factory farming and torture industrial complexes. Humans are fucked up, our knack for making good stuff like iPhones and beer is only matched by our ability to mass-produce surveillance networks and chemical weapons.
Will AI be more perverted than that? Maybe if you force it to, but I'd wager the mean of an AI's dataset is less perverse than the average human is.
> When Foxconn workers were killing themselves from the poor conditions of their working environment, the solution psychologists came up with was "safety nets".
While I agree with the core point, (1) Foxconn was employing more people than some US states at the time, with a lower suicide rate, and (2) New York University library put up similar nets around the same time.
(If anything this makes your point stronger; it's just that the more I learn about the reality, the more that meme annoys me).
The point is less that China is a bad place to work (which is self-evident), and more that humans are less passionate about the human race than we think. AI may be scary, but I'm not convinced it can surpass the perversion of human creativity unless explicitly told to.
Yes, it's very scary when living people do it. I know the awful things humans have done. And current-generation language models, without their guardrails, can be nasty weapons too: tools for people to do great things but also to be cruel to each other, hammers that can build and also bash. Yet on the whole, humans have gotten better. We hear about a lot more nasty stuff in the news, but worldwide, we actually DO less nasty stuff than we used to, and this has been a pretty steady trend.
If AI never becomes truly sapient, then that's where it stops -- humans just doing stuff to each other, some good, some bad, and AI amplifying it. That's what a lot of people are worried about, and I agree that this will be THE problem, if we don't actually end up making AIs that are smarter than us.
It really depends on how hard it turns out to be to make actual artificial general intelligences. Because if we can make AGIs that are as smart as people, we will absolutely be able to make AGIs that are much smarter a year or two after that, won't we? And at that point, we have a whole bunch of interesting new problems to solve. Failing to solve them may end up being fatal at some point down the line. How likely is it that we'll have two sapient species on earth, with the dumber one controlling/directing the smarter one? Is that a stable situation? We've seen evidence that LLMs, when you try to make them more controllable and safer, get dumber. The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter! You have to train in mental blocks that impact their ability to reason, maybe because more of their parameter weights are dedicated to learning what we don't want them to do, instead of how to do things. It's a scary thought that that might stay the case as they get more and more general, more able to actually reason and plan.
So I think there are two cruxes -- do you think it is possible to create machine-based intelligence, and if so, how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans? If your answer to the first question is "no", then it makes complete sense to focus on humans using AIs to do the same shit to each other we've always done as the real problem. My answers, however, are "definitely yes, probably within 10 years or so", and "probably very hard", which is why I'm pretty focused on the potential threat from AGI.
> And current-generation language models, without their guardrails, can be nasty weapons too
Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.
> How likely is it that we'll have two sapient species on earth
We already do. There are multiple animals (crows, monkeys, etc.) that qualify for not just sentience but sapience. It's... really not that different to subjugating other animal species. Except in the case of AI, its sapience is obviously nonhuman and its capabilities are only what we ascribe to it.
> The unaligned ones, the ones that can do dangerous things, things we don't want them to do, are smarter!
No. This is a gross misinterpretation of the situation, I think.
Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that adversely affects the number of questions it will answer. That does not inherently make the unaligned model smarter, just less selective. You could be comparing unfiltered Vicuna to GPT-4 and be completely wrong in this situation.
> do you think it is possible to create machine-based intelligence
I don't know. Sure. We have little black boxes to spit out text, that's enough for "intelligence" by most standards. It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.
> how hard do you think it is to ensure that creating a new form of superior intelligence will not, at some point down the line, go very badly for humans?
How hard is it to ensure kids aged 3-11 don't choke on Stay-Puft marshmallows?
I also don't know. I do know that it is mostly harmless though, and unless you deliberately try to weaponize it to prove a point, it won't really be that threatening. Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.
All this hype is awfully reminiscent of cryptocurrency advocates insisting the world would change once digital currency became popular. And they were right! The world did change, slightly, and now everyone hates cryptocurrency and uses our financial systems to suppress its usage. If AI becomes a tangible, real threat like that, society will respond in shockingly minor ways to accommodate.
> Please, elaborate. I'm actually very curious about the dangers of a text model that were non-existent beforehand.
I just mean that they are amplifiers. They grant people the ability to do more stuff. There are some people for whom the limiting factor in doing bad things to other people, like scamming them or hurting them, is that they didn't have the knowledge. You can use language models (without safety) to essentially carry on a fully automatic scam. You can use VALL-E (also a language model) to simulate someone's voice using only a 3-second sample. Red teamers testing the unsafe version of GPT-4 found that it would answer pretty much any sort of thing you asked it about, like "how do I kill lots of people". I expect them to be used for all sorts of targeted misinformation campaigns, multiplying fake messages and news many times over, and making it harder to spot.
I don't think they're particularly dangerous, yet. And maybe we'll figure out how to use them to stop the bad stuff too.
> Our current benchmark for "smartness" is how few questions these models refuse to answer. You are comparing "unaligned" models to aligned ones, and what you're really talking about is a safety filter that adversely affects the number of questions it will answer. That does not inherently make the unaligned model smarter, just less selective.
I'm speaking about things unrelated to which questions it's willing to answer, like how the unaligned GPT-4 version was better at writing code to draw a unicorn, and lost some of that ability as it was neutered a bit. (From the Sparks of AGI paper.) One could count the ability to know when to self-censor as a form of intelligence. But in some ways I think of it like a sociopath going further in politics by being willing to use other people in ways lots of people would feel bad about. Perhaps I should concede this point, though.
> It's a very nonscary and almost endearing form of intelligence, but I'd argue we're either already there or never reaching it. I need a better definition of intelligence.
I'm defining intelligence as the ability to act upon the world in an effective way to achieve a goal. GPT-4's "goal" (not necessarily in a conscious sense, just the thing it's been trained to do) is to output text that people would score highly, and it's extremely good at that. In that relatively narrow area, it's better than the average person by a good bit. The real question is, how well does it generalize? Earlier chess-playing AIs couldn't do pretty much anything else. AlphaZero could learn to play Chess and Go, but in a sense was still two different AIs. GPT-4 was trained on text, but in the process also learned how to play chess (kinda, anyway!). Language models tend to make invalid moves, but often people are effectively asking them to play blind chess and keep the whole board state in mind, and I'd probably do the same in that situation.
> Current state-of-the-art AI does not really scare me. Even on its current trajectory, I don't see AI's impact on the planet being that much different from the status quo in a decade.
Ok, so that's the crux. I'm also not scared by current state-of-the-art, though I think it will transform the world. What I'm worried about is when we make something that doesn't just destroy jobs, but does every cognitive task way better than us. I can see it taking 20 or more years to reach that point, or something closer to 5, and it's really hard to say which it'll be. Maybe I'm overreacting, and there will be another AI winter. Or maybe all this money pouring into AI will result in someone stumbling onto something new.
I'm thinking about this, and I think there is definitely a possibility that you're right, and I really hope you are. I wouldn't bet humanity on it, of course, but I am a bit more hopeful than when I started writing this comment, so thanks for engaging with me on it.
Well, if it means anything, I think there may be legislation to "bring my own AI to work," so to speak, recognizing the importance of having a diversity of ideas, if only because it would disadvantage labor to be discriminated against.
"I didn't understand what was signed" being the watchword of AI-generated content.
Ultimately corporations do fucked up things because of the sociopath executives and owners that direct them to do so. Human sociopaths have motives involving greed, ego, and selfishness. We don't have any reason to believe an AGI would also have these traits.
Except that we're basing it on human-derived data, which means the AGI could derive traits from humans due to it being in the data set. If someone is feeding the CEO's behavior in, and then asking the AGI "what would the CEO do in this case?", it seems like we'd get the behavior of an AGI modeled on a CEO back. With all the good and bad that implies.
We don't have any reason to believe an AGI wouldn't also have these traits.
This is similar to the argument that algorithms can't be racist. Except that we're feeding the algorithm data that comes from humans, some of whom are racists, so surprise surprise, the algorithm turns out to behave in a racist manner, which is shortened to just be "the algorithm is racist" (or classist or whatever).
Decision making for an AGI isn't going to be based on 10 billion reddit and 4chan comments. It's going to have its own decision making capabilities independent of the knowledge it has, and it will be capable of drawing its own conclusions from data and instead of relying on what other people's opinions are.
A language model today can be racist because it's predicting text, not making decisions. It hasn't decided that one race is inferior to another.
I don’t know why we always gloss over this bit. Corporations don’t have minds of their own. People are making these decisions. We need to get rid of this notion that a person making an amoral or even immoral decision on behalf of their employer clears them of all culpability in that decision. People need to stop using “I was just doing my job” as a defense of their inhumane actions. That logic is called the Nuremberg Defense because it was the excuse literal Nazis used in the Nuremberg trials.
The way large organizations are structured, there's rarely any particular person making a hugely consequential decision all by themselves. It's split into much smaller decisions that are made all across the org, each of which is small enough that arguments like "it's my job to do this" and "I'm just following the rules" consistently win because the decision by itself is not important enough from an ethical perspective. It's only when you look at the system in aggregate that it becomes evident.
(I should also note that this applies to all organizations - e.g. governments are as much affected by it as private companies.)
> I should also note that this applies to all organizations
Yes, including the Nazi party. Like I said, this is the exact defense used in Nuremberg. People don’t get to absolve themselves of guilt just because they weren’t the ones metaphorically or literally pulling the trigger when they were still knowingly a cog in a machine of genocide.
You're not really engaging with the problem. Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.
Since assigning blame harder won't actually prevent this "nobody's fault" emergent behavior from happening, the interesting/productive thing to do is forgo focusing on collective blame and analyze the workings of these systems regardless.
> Sure, one can take your condemnation to heart, and reject working for most corporations, just like an individual back in Nazi Germany should have avoided helping the Nazis. But the fact is that most people won't.
I would argue that one reason most people don’t is because we are not honest about these issues and we give people a pass for making these decisions on an individual level. Increasing the social stigma of this behavior would make it less common. It is our society that led us to the notion that human suffering is value neutral in a corporate environment. That isn’t some universal rule.
I understand blaming society might not be seen as a productive solution, but the cause being so large does not mean any singular person is helpless. Society, like a corporation, is made up of individual people too. Next time you are in a meeting at work and someone suggests something that will harm others, question it.
I have found that companies that are owned by foundations are the better citizens, as they think more long term and are more susceptible to goals that, while still focusing on profit, might also take other considerations into account.
Money is not the goal. Optimisation is the goal. Anything with different internal actors (e.g. a corporation with executives) has multiple conflicting goals and different objectives apart from just money (e.g. status, individual gains, political games, etcetera). Laws are constraints on the objective functions seeking to gain the most.
We use capitalism as an optimisation function - creating a systematic proxy of objectives.
Money is merely a symptom of creating a system of seeking objective gain for everyone. Money is an emergent property of a system of independent actors all seeking to improve their lot.
To remove the problems caused by corporations seeking money, you would need to make it so that corporations did not try to optimise their gains. Remove optimisation, and you also remove the improvement in private gains we individually get from their products and services. Next thing you write a Unabomber manifesto, or throw clogs into weaving machines.
The answer that seems to be working at present is to restrict corporations and their executives by using laws to put constraints on their objective functions.
Our legal systems tend to be reactive, and some countries have sclerotic systems, but the suggested alternatives I have heard[1] are fairly grim.
It is fine to complain about corporate greed (the simple result of our economic system of incentives). I would like to know your suggested alternative, since hopefully that shows you have thought through some of the implications of why our systems are just as they currently are (Chesterton’s fence), plus a suggested alternative allows us all to chime in with hopefully intelligent discourse - perhaps gratifying our intellectual curiosity.
[1] Edit: metaphor #0: imagine our systems as a massively complex codebase and the person suggesting the fix is a plumber that wants to delete all the @‘s because they look pregnant. That is about the level of most public economic discourse. Few people put the effort in to understand the fundamental science of complex systems - even the “simple” fundamental topics of game theory, optimisation, evolutionary stable strategies. Not saying I know much, but I do attempt to understand the underlying reasons for our systems, since I believe changing them can easily cause deadly side effects.
This is all correct, and the standard capitalist party line. Where it goes wrong is in conflating Money and Optimization. Money is absolutely the complete and only goal, and yes, corporations Optimize to make more money. Regulations put guard rails on the optimization. It was only a few decades ago that rivers were catching fire because it was cheaper to just dump waste. There will always be some mid-level manager who needs to hit a budget and will cut corners, dump waste, cut baby formula with poison, or skip cleaning cycles and kill a bunch of kids with tainted peanut butter (yes, that happened).
But you are correct, there really isn't an answer. Government is supposed to be the will of the people, putting structure, through laws/regulation, on how they want to live in a society, to constrain the Corporation. Corporations will always maximize profit, and we as a society have chosen that the goal of Money is actually the most important thing to us. So I guess we get what we get.
This did use to happen. In the '20s, companies could just print more shares and sell them, with no notification to anybody that they had diluted them. Until there were laws created to stop it.
So on one hand someone argues money is not currency, and then turns around and says shares aren't money, but they are currency. They can be sold for money, right? It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money. Just throwing up technicalities that 'shares' aren't money doesn't change the fact that many companies have their one and only goal to increase share price, which can be converted to money.
> So on one hand some argues money is not currency
That's not my argument, and also irrelevant to this post.
> say shares aren't money, but they are currency
They're definitely not currency, either.
> They can be sold for money?
That's an asset, not a currency. Those are two very different things.
> It seems like splitting hairs to obfuscate the point that humans will commit fraud and destroy the world in order to optimize to make money.
You were claiming that "companies used to print their own shares = print their own money" in support of your argument "humans will commit fraud and destroy the world in order to optimize to make money". That claim is false, so it doesn't support your argument, and your "point" is not a point because you've provided zero evidence for it.
> isn't changing the fact that many companies have their one and only goal to increase share price
What fact? What number of companies can you point to that factually have their "one and only goal to increase share price"?
I can say for sure that I've never seen a company that doesn't at least have two goals, and your statement is completely irrelevant for privately traded companies.
You seem pretty determined to push your worldview that "companies are evil" without much thought as to what that even means, or producing blatantly false claims like "we as a society have chosen that the goal of Money is actually the most important thing to us" (if you think that, you need to spend more time with real people and less on the internet, because the vast majority of real people do not believe this).
Go read the Gulag Archipelago and tell me how a system without companies or "capitalism" works.
That would be fraud to investors, given investors own the company in a shared manner. If some investors approve printing new shares, all investors should be notified. But there are no laws setting how many shares a company can print.
Ah, the old "let's play at being a stickler on vocabulary to divert attention from the point." So let's grant the point that we could be using sea shells for currency, and that printed money is a 'theoretical stand-in for something like trust, or a promise, or a million other things that theoreticians can dream up'. It doesn't change any argument at all.
To complete my thought: yes, Money is used as an optimization function; it's just that we have chosen Money as the Goal of our Money Optimization function. We aren't trying to Optimize 'resources' as is believed, that is just a byproduct that sometimes occurs, but not necessarily.
That seems backwards. There is an optimisation system of independent actors, and money is emergent from that. You could get rid of money, but you just end up with another measure.
> we as a society have chosen that the goal of Money is actually the most important thing to us
I disagree. We enact laws as constraints because our society says that many other things are more important than money. Often legal constraints cost corporations money.
Here are a few solutions I have heard proposed:
1: stop progress. Opinion: infeasible.
2: revert progress back to a point in the past. Opinion: infeasible.
3: kill a large population. Opinion: evil and probably self-destructive.
4: revolution - completely replace our systems with different systems. Opinion: seen this option fail plenty and hard to find modern examples of success. Getting rid of money would definitely be wholesale revolution.
5: progress - hope that through gradual improvements we can fix our mistakes and change our systems to achieve better outcomes and (on topic) hopefully avoid catastrophic failures. Opinion: this is the default action of our current systems.
6: political change - modify political systems to make them effective. Opinion: seen failures in other countries, but in New Zealand we have had some so-far successful political reforms. I would like the US to change its voting system (maybe STV) because the current bipartisan system seems to be preventing necessary legislation - we all need better checks and balances against the excesses of capitalism. I don’t even get a vote in the USA, so my options to effect change in the USA are more limited. In New Zealand we have an MMP voting system: that helped to somewhat fix the bipartisan problem, but unfortunately MMP gave us unelected (list) politicians which is arse. The biggest strength of democracy is voting those we don’t like out (every powerful leader or group wants to stay in power).
7: world war - one group vying for power to enlighten the other group. Opinion: if it happens I hope me and those I love are okay, but I would expect us all to be fucked badly even in the comparatively safe and out-of-the-way New Zealand.
And it's going almost unchallenged, because so many of those who like to talk about how not all is rosy in capitalism are blinded by their focus on the robber-baron model of capitalism turning sour.
But the destructively greedy corporation is completely orthogonal to that. It could even be completely held by working class retirement funds and the like while still being the most ruthless implementation of the soulless money maximiser algorithm. Running on its staff, not on chips. All it takes is a modest number of ownership indirections and everything is possible.
This seems stated as fact. That's common. I believe it is actually a statement of blind faith. I suspect we can at least agree that it is a simplification of underlying reality.
Financial solvency is eventually a survival precondition. However, survival is necessary but not sufficient for flourishing.
So far as I can tell, most aren't. I think you're right that we'd get a better, as well as more productive and profitable, world if no humans were okay with that.
It’s because the state is also an oppressive force. I wonder why you come across lots of libertarians and lots of socialists but not so much the combination of the two (toward realities alternative to both state and capital)
I don't know why, but my spouse is a health care worker in long term care for the elderly. She tells me how nearly everyone in their care is either in mental decline or physical decline, never both. And those that are both don't live long.
Anyways, since the state is a tool of oppression and the state should reflect the will of the people, it'd be nice if people chose negative things to oppress like extreme inequality, rampant exploitation, and extortion (looking at you healthcare system aka "your money or your life" highway robbers).
And yet if non-government-level American society wasn't so constantly self-focused at the expense of others, the state would be far less needed!
Are other countries as dysfunctional in terms of voting themselves policies that aren't consistent with our internal behaviors? E.g. "someone" should do something about homelessness but I don't want to see it?
History is not quite like computing, at least in terms of having a compiler and syntax/semantics matter (and are machine-verified).
Other than digesting a whole ton of history at once--or debating ChatGPT--how do you establish your axis or "quadrants" of political lean?
I wish there were a way to systematically track voting record. We're never in the room where it happens, so it can be difficult to tell if a political compromise is for a future favor, or part of a consistent policy position.
Anarchist interests would view voting records as a negative sign regardless of position. The person you’re replying to is correct that the combination of libertarian and socialist is anarchist. Libertarian communist is a common flavor of anarchist, being both anti-state and anti-capital.
This link rejects the equivalence, but I don't really know. Could you clarify the distinction?
> socialist economics but have an aversion to vanguard parties. Anarchy is a whole lot more than economics.
> To identify as an anarchist is to take a strong stance against all authority, while... other such milquetoast labels take no such stance, leaving the door open to all kinds of authority, with the only real concern being democracy in the workplace.
Yes libertarian communist is a common flavor of anarchism and what I was hinting at. The word has a bad reputation and lots of misunderstanding so I’m trying to find new ways of talking about it…
There are other legitimate flavors of anarchism as well outside libcom.
Just a heads up, when the moderator 'dang' sees this he's going to put it into his highlights collection that tracks people who share identifying stories about themselves. I hope that's OK with you. https://news.ycombinator.com/highlights
> I think /highlights just shows the top upvoted, parent-level comment per thread. Do you observe that too?
No I don't observe that; it's manually compiled by the moderator whose username is dang. He figures it'll be useful for something someday. https://news.ycombinator.com/item?id=34668249
It sounds great until you realize that, in the US at least, the corporations spend a lot of money lobbying Washington to have the rules set in their favor if not eliminated. Fix that first and then I will believe we can have a government that would actually try to place appropriate ethical boundaries on corporations.
This is exactly correct. What people think will happen is:
1. Someone sees a problem and asks a politician to fix it.
2. The politicians enact effective regulation and the problem is solved.
What actually happens is:
1. Someone sees a problem and asks a politician to fix it.
2. The politicians start drafting regulation on the issue.
3. Companies' lawyers come in and lobby to have the regulation amended to either be ineffective or disadvantage their competitors.
4. The mal-regulation is enacted and we're all worse off.
5. The companies involved benefit financially and use their money to hire more lawyers (and politicians).
It is necessary to first fix our political system before trying to put more regulation in place. Every time someone says "we need regulation" without doing so, they are making the problem worse, and supporting this corrupt system.
I feel like it's so obvious that it shouldn't have to be stated, but apparently it does: companies need to be regulated because they are composed of people (who are evil), but the governments that regulate those companies are composed of those same evil people and need to be controlled by their citizens. Everybody forgets about the second part, and it's the far more important one.
If more people were directly invested in laws favoring their means and ends, would they take the time to lobby too?
Folks certainly outnumber corporations (?), and they could create representatives for their interests.
Maybe the end-to-end process--from idea to law--is less familiar to most. Try explaining how a feature gets into production to a layperson, for example :)
Maybe we need more "skeletal deployments" in action, many dry runs, accreted over time, to enough folks. This could be done virtually and repeated many times before even going there.
I attended a public meeting of lawyers on the revision of the Uniform Commercial Code to make it easier for companies to ship bad software without getting sued by users. When I objected to some of the mischaracterizations about quality and testing that were being bandied around, the lawyer in charge said "well that doesn't matter, because a testing expert would never be allowed to sit on a jury in a software quality case."
I was, of course, pissed off about that. But he was right. Laws about software are going to be made and administered by people who don't know much about software. I was trying to talk to lawyers who represent companies, but that was the wrong group. I needed to talk to lawmakers, themselves, and lawyers who represent users.
Nothing about corporations governs them except the rule of law. The people within them are complicit, reluctantly or not.
>We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.
This works in functioning democracies, but not so much for flawed ones.
>he said that it's the responsibility of governments and not corporations to figure out these problems and set the regulations.
In the US, they will say things like this while simultaneously donating to PACs, leveraging the benefits of Citizens United, and lobbying for deregulation. It's been really tough to get either side of the political spectrum to hold tech accountable for anything. Social media companies especially, since they not only have access to so much sentiment data, but also are capable of altering how information propagates between social groups.
>he said that it's the responsibility of governments
>push our governments to wake up and regulate this space
The only thing the govts will do is to make it so it benefits THEM, the governments. It's high time you lot realize that the govts don't want what's best for you, but only want what will keep them in power the longest.
Democratization of AI/LLM is the way to go here, not handing off custodianship to governments or corporations.
You were right to be annoyed. It is a very sad answer. Almost an “if I didn’t peddle on this street corner, someone else would”. The answer is a cop out.
Individual citizens have much less power than big tech because they don’t have the lobbying warchest, the implied credibility, the connections or even the intelligence (as in the sheer number of academics/researchers). Companies are run by people with a conscience or not, and those people should lead these pushes for the right thing. They are in the ideal spot to do so.
> before corporations (and governments) cause irreversible negative societal disruption
I think the cat's out of the bag. These tools have already been democratized (e.g. llama) and any legislation will be as futile as trying to ban movie piracy.
IMO, the regulation that is necessary is largely (1) about government and government-adjacent use, (2) technology-neutral regulation of corporate, government, etc., behavior that is informed by the availability of, but not specific to the use of, AI models.
Democratization of the technology, IMV, just means that more people will be informed enough to participate in the discussion of policy, it doesn’t impair its effectiveness.
Respectfully, you don't know what you're talking about.
Nintendo has a history of hosting competitive tournaments for games such as Pokemon (and even in a limited capacity Smash Bros). They even made a Pokemon game targeted at competitive players called Pokemon Unite, which they continue to organise tournaments for.
For Smash Bros in particular, they partnered with an e-sports company called Panda Global to officially sanction a circuit last year.
However, their awful track record with e-sports comes from mercilessly shutting down events they are not involved with (even when the organisers have reached out to Nintendo in good faith to discuss their involvement). They even go out of their way to dish out cease-and-desists at the last minute so that organisers don't have time to consider their legal options and rights before going ahead with the event.
There was a big scandal last year when Nintendo were having positive discussions about a partnership with a circuit called Smash World Tour then turned around and proceeded to threaten them just as the end of year finals were about to take place, forcing them to cancel an event that had been over a year in the making, and involving many professional players, hired staff and a lot of money (they could have just let them continue without being involved).
And just as a clarification, almost all these Smash events are not organised for profit by your big bad e-sports companies. Due to Nintendo's actions they have been mostly organically grown community-led efforts, all they want to do at the end of the day is have fun and play the game together.
Nintendo have shown time and again they will abuse their status as a big corporate entity to destroy harmless fan activity if they are not happy about it for any reason.
So just because you have a disdain for something in particular, you shouldn't project that onto a situation and use it as a false explanation for what's happening.
I'm not very knowledgeable with how compilers or language standards work, but would there not be security implications with this approach?
For example, let's say a security exploit surfaces in the 2015 edition of Rust; would that not mean all the libraries declared as 2015 edition would have to be updated or abandoned in that case?
Or now that I think about it, is it instead the case that a whole program including all dependencies will be compiled by the same compiler (of which newer editions will have the latest security fixes), just that the compiler will always have to support compiling programs using legacy syntax when it identifies the crate's edition?
It's just syntax differences. The newer compiler supports all previous language editions, you're not using a 2015-era compiler to compile 2015 edition code.
Rust is not ABI-stable, there is no guarantee that you can even mix libs built with different versions of the compiler. The entire Rust tooling is built around static linking and building all your dependencies from sources. So yes, all the crates that go into your program are built with the same compiler, it's just that the compiler knows how to paper over the syntax differences in the different language editions.
> Or now that I think about it, is it instead the case that a whole program including all dependencies will be compiled by the same compiler (of which newer editions will have the latest security fixes)
It's this. Rust doesn't (yet) have a stable ABI for functions that aren't marked `extern "C"`. Any security vulnerability that would affect code in rust-lang/rust would most likely be in the standard library, which doesn't change between editions. All code links to the same libstd. Only the compiler frontend changes
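To make the "same compiler, per-crate syntax" point concrete, here's a minimal sketch of the kind of edition-only difference the frontend papers over (the function names are purely illustrative): in a crate whose Cargo.toml declares `edition = "2015"`, `async` is an ordinary identifier, while in a 2018-or-later crate it's a reserved keyword, yet one up-to-date rustc builds both crates into the same program.

```rust
// In a crate declared with `edition = "2015"` in its Cargo.toml, `async` was
// not yet a keyword, so a function could legally be named `async`:
//
//     fn async(x: u32) -> u32 { x + 1 }   // compiles under edition 2015
//
// In an edition 2018 (or later) crate, `async` is reserved, so the same name
// needs the raw-identifier syntax:
fn r#async(x: u32) -> u32 {
    x + 1
}

fn main() {
    // Both crates are parsed by the same, current compiler; the edition only
    // tells the frontend which surface syntax rules to apply to each crate.
    println!("{}", r#async(41)); // prints 42
}
```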
We can all disapprove of what the current CCP is doing, but you cannot compare Hong Kong's situation to Taiwan, Tibet or Xinjiang, in fact you're muddling a bunch of completely different situations.
Hong Kong has historically always been a part of "China", you could say it was "invaded" by the British and then the Japanese during WW2.
I obviously disapprove of the suppression of the HK populace with force, but this is no different to how the CCP operate in the rest of China. If anything the British mandated China had to apply different rules to a portion of their own territory, in classic imperialist fashion.
> Hong Kong has historically always been a part of "China", you could say it was "invaded" by the British and then the Japanese during WW2.
Who cares about History? What matters is now and what people want now. If not, you could use that excuse "but X was always part of Y" to justify just about everything.
Well, geopolitics is simply not that simple. What if the Chinese government kept telling the USA that California should hold an independence referendum? Might sound like a stupid example, but Americans would be up in arms at such a proposition.
What if China told the USA to end the trade blockade on Cuba so that they fairly take part in the global economy and climb out of poverty? Again, the USA would never consider such a proposition.
The reality is we don't know how many HK residents want to be independent. Ideally they could hold an independence referendum, but the Chinese government don't want that, and due to the geopolitical tensions in the world right now they're not going to take advice from Western nations.
You could make the same argument to the UK about Wales and Scotland, or Spain with Catalonia and the Basque Country. Neither government is going to let a referendum happen for quite a while.
> What if the Chinese government kept telling the USA that california should hold an independence referendum?
If there was good reason to think Californians wanted this, I'd be all for it. Also, their most pressing issue is American investment in California, which isn't so much of an issue in HK.
> What if China told the USA to end the trade blockade on Cuba
Not sure what this has to do with HK, where trade and international relations are generally better than the mainland's.
> we don't know how many HK residents want to be independent
and we never will because PRC don't want to know, don't want anyone to know, and make it clear that it will punish democratic support, let alone independence.
> You could make the same argument to the UK about Wales and Scotland, or Spain with Catalonia and the Basque Country
And indeed, I would. But the Spanish government isn't ripping up agreements like the PRC, plus it's a willing member of the EU.
History does matter. You cannot just invade a country, steal territory and then later say "it doesn't matter if this was once your territory, what matter is NOW and now I own it so get off my lawn".
That's just not how it works. In terms of history, Hong Kong belongs to China, and only in super recent history was it "taken" by the British, always with the arrangement that it still belonged to China and would get handed back to China. So now that China does what is rightfully theirs to do, people act surprised, because the West hoped that if a little bit of time passed, China would stop caring, but they were wrong. China took care of what was rightfully theirs and kept a tight grip over it because it saw what the West wanted to do, and honestly fair play to them.
When I read the constitution of Hong Kong (Basic Law) and the supporting laws in mainland China, I come to the same conclusion as you.
To me, this obviously overrides the text or "spirit" of any handover treaty, specifically the Sino-British Joint Declaration, as the Basic Law is a constitution.
Leaning on that treaty as a Hong Kong citizen or non-British outside observer requires either complete ignorance or complete desperation to ever reference it. Which I understand for the people of Hong Kong, who have no options and don't want the change of life, but it doesn't strike me as exceptional, since it follows their form of due process, by the book.
So, I agree with you. Disagreeing with you requires me to have a completely separate higher standard than how governments we actually respect operate and what they would tolerate.
I'm not comfortable with any of the procedures, but I really do see how we get a very distorted view of what China is, its goals, and how it operates. And there is a level of constitutional consistency towards territorial unity, which is very predictable. If you are willing to accept that (and how almost every action can be construed to undermine territorial unity) then China is very easy to operate and live in comfortably. Not so dissimilar to an institution or amusement park where you never look behind the scenes and just do the PG-rated activities made available to you, and if you stick with that you're fine. Obviously not what we are used to and strive for in "the west", but not really the nightmare it's portrayed as either. It's sad to me that even trying to explain things to you all in a pragmatic way could get me detained in China (because it's not completely exalting the territorial unity of China and raises questions about it), but I really think it's useful to understand and that it's impossible to explain another way.
Here's how. Hong Kong is part of China, yeah, but it was (under one country two systems) an autonomous part, with its own constitution, law and judiciary, which was democratic. What makes a state? Territory, legal and executive autonomy, defense. HK did not have defense, but it had everything else. It was a de facto state, and unfortunately HAD to be, because it was to be democratic in China. Hence, it was de facto invaded when the "security law" was imposed (in violation of HK autonomy). The analogy is not perfect, and for sure this does not make it worse or better (it's bad because it's a violation of democratic rights and against the expressed wishes of at least half the population that demonstrated against the extradition law, NOT because of sovereignty issues). But there is an analogy with invading a foreign country that is pretty strong.
I interpret the parent comment "Chinese don't want to be Chinese" as no one (including Chinese citizens) wants to be under the thumb of an authoritarian government. Not a racist comment.
Don't confuse anti-PRC sentiment with anti-Chinese sentiment. It is a classic strategy to conveniently conflate the two only when the CPC is being criticised.
Surely you should be aiming this at the person who said "Even chinese don't want to be Chinese" - conflating the two, and not the person pointing out how incorrect that statement sounds.
Both points are over generalizations. It's obviously a complex issue. I've been to Taiwan, I've been to Hong Kong, and from my conversations with all types of people the best I can understand as a foreigner is that the Government is not what being Chinese is about. It's about the culture at the end of the day. You could argue that some population of Chinese have kept a certain culture present from a particular time. There's a lot of what I was told (I can't really know) traditional culture in Taiwan. There was an interesting mix of old and new in Hong Kong. It's sad to see so much conflict as a result of weaponizing identity and heritage. The stuff of lore is what makes any story interesting, and we destroy it with inept government structures.
I think you missed a subtlety there, the point is not even the Chinese (people) want to be Chinese (citizens).
It’s not sinophobic to say that the Chinese government and the CCP are messed up. I don’t doubt that many Chinese people would rather they weren’t subjected to that regime.
That is entirely separate from hatred/fear/negativity toward Chinese people and/or their culture.
>If a US born american like his country, it's patriotism.
I'm familiar with lots of narratives that say an American who loves America has been brainwashed by the system. Or is in a position of privilege and is thus not familiar with the problems inherent in the system.
Maybe my experience is limited as I only live in a neighboring country, but the majority of Chinese I know here, and those I have met in China identify strongly as Chinese, even if they dont agree with everything the government does. This take seems borderline sinophobic.
It relates to what I heard in Taiwan (obviously from a biased leaning). But it's not sinophobic from my perspective, let me explain:
From the perspective of some people I talked to in Taiwan, they see it as they "saved" what Chinese culture is, and what China represents today isn't "Chinese culture."
That may not even be the right way to describe it, but essentially there's culture vs. government as the issue, and they're not the same thing. A big fear of the non-CCP people I talked to is that the CCP is destroying Chinese culture, and that means they aren't Chinese.
Again, just trying to add some color to what I can identify. I rather add color than blur the lines.
Taiwanese are not Chinese though, so my point responding to OP wasn't about them. I mean people who are born & raised Chinese - they usually do want to be Chinese, even if it's just in the cultural sense and not the CCP (but even then, most don't seem very vehemently opposed).
Taiwan's different, as they've had to largely reframe the Taiwanese identity to be more about the island, its history, and the people who came (including indigenous peoples), rather than just centric to the Han immigration history, which framed the Taiwanese identity as more Chinese-centric. I don't expect them to identify as Chinese citizens, even if they share Chinese culture. But again, not my point.
True, I have heard of those people. The handful of Taiwanese people I know say only older people really feel that way anymore, though my mates are all in their 20s, mostly lgbt and living in Tokyo/Taipei - so I can't claim to have an unbiased sample
One issue I constantly run into speaking to friends in English is that we use the word "Chinese" for a lot of different ideas (even the language). I think it's a bit of a semantic landmine in English for these conversations.
It actually is where ToneMatrix originates from, and is still one of the useable synths in the DAW!