Hacker News

I was very excited about Stable Diffusion, and I still am. A great yet relatively harmless contribution.

LLMs however, not so much. The avenues of misuse are just too great.

I started this whole thing somewhat railing against the un-openness of OpenAI. But once I began using ChatGPT, I realized that having centralized control of a tool like this in the hands of reasonable people is not the worst possible outcome for civilization.

While I support FOSS in most realms, in some I do not. Reality has taught me to stop being rigidly religious about these things. Just because something is freely available does not magically make it "good."

In the interest of curiosity and discussion, can someone give me some actual real-world examples of what a FOSS ChatGPT will enable that OpenAI's tool will not? And, please be specific, not just "no censorship." Please give examples of that censorship.



It genuinely astonishes me that you think that "centralized control" of anything can be beneficial to the human species or the world in general.

Centralized control hasn't stopped us from killing off half the animal species in fifty years, wiping out most of the insects, or turning the oceans into a trash heap.

In fact, centralized control is the author of our destruction. We are all dead people walking.

Why not try "individualized intelligence" as an alternative? Give truly good-quality universal education and encouragement of individual curiosity and independent thought a try?

It can't be worse.


> It genuinely astonishes me that you think that "centralized control" of anything can be beneficial to the human species or the world in general.

I am genuinely astonished that in the face of obvious examples such as nuclear weapons, people cannot see the opposite in some cases.

> It can't be worse.

It can always be worse.

Would a theoretical FOSS small yield nuclear weapon make the world a better place?

How about a FOSS-powered CRISPR virus lab on a sub-$10k hardware budget? Well, it's FOSS, so it must be good?


Microsoft is not "reasonable people". Having this behind closed corporate walls is the worst possible outcome.

The nuclear example isn't really a counter-argument. If only one nation had access to them, every other nation would automatically be subjugated to it. If the nuclear balance works, it's because multiple superpowers have access to those weapons and international treaties regulate their use (as much as North Korea likes to demo practice rounds on state TV). Also, the technology isn't secret; it's access to resources, and again international treaties, that prevent its proliferation.

Same thing with CRISPR. Again, there are scientific standards that regulate its use. It being open or not doesn't really matter to its proliferation.

I agree there are cases where being open is not necessarily the best strategy. I don't think your examples are particularly good, though.


I think we may have very different definitions of the word reasonable.

I mean it in the classic sense.[0]

Do I love corporate hegemony? Heck no.

Could there be less reasonable stewards of extremely powerful tools? Heck yes.

An example might be a group of people who are so blinded by ideology that they would work to create tools which 100x the work of grifters and propagandists, and then say... hey, not my problem, I was just following my pure ideology bro.

A basic example of being reasonable might be revoking access to someone running a paypal scam syndicate which sends countless custom tailored and unique emails to paypal users. How would Open Assistant deal with this issue?

[0]

  1. having sound judgement; fair and sensible; based on good sense.

  2. as much as is appropriate or fair; moderate.


> and then say... hey, not my problem, I was just following my pure ideology bro.

That's basically the definition of Google and Facebook, which go about their business taking no responsibility for the damage they cause. As for Microsoft, 'fair' and 'moderate' are not exactly their brand either considering their history of failed and successful attempts to brutally squash competition. If you're saying that they'd be fair in censoring the "right" content, then you're just saying you share their bias.

> A basic example of being reasonable might be revoking access to someone running a paypal scam syndicate which sends countless custom tailored and unique emails to paypal users. How would Open Assistant deal with this issue?

I'm not exactly sure how Open Assistant would deal, or if it even needs to deal, with this. You'd send the cops and send those motherfuckers back to the hellhole that spawned them. Scams are illegal regardless of what tools you use to go about it. If it's not Open Assistant, the scammers will find something else.

Your argument is basically that we should ban/moderate the proliferation of tools and technology. I'm not sure that's very effective when it comes to software. I think the better strategy is to develop the open alternative fast before society is subjugated to the corporate version, even if it does give the scammers a slight edge in the short term. If you wait for the law to catch up and regulate these companies, it's going to take another 20 years like the GDPR.


> Your argument is basically that we should ban/moderate the proliferation of tools and technology. I'm not sure that's very effective when it comes to software.

No, my argument is that we as individuals shouldn't be in a rush to create free and open tools which will be used for evil, in addition to their beneficial use cases.

FOSS often takes a lot of individual contributions. People should be really thoughtful about these things now that the implications of their contributions will have much more direct and dire effects on our civilization. This is not PDFjs or Audacity that we are talking about. The stakes are much higher now. Are people really thinking this through?

If anything, it would be great if we as individuals acted responsibly to avoid major shit shows and the aftermath of government regulation.


Ok, yeah, maybe I'll take my latter statement back. Ideally things are developed at the pace you describe and under the scrutiny of society. There are people thinking this through -- EDRI and a bunch of other organizations -- just probably not corporations like Microsoft. In practice, though, we are likely to see corporations roll out chat-based incarnations of search engines and assistants, followed by an ethical shit show, followed by mild regulation 20 years later.


> I am genuinely astonished that in the face of obvious examples such as nuclear weapons, people cannot see the opposite in some cases.

You seem to be making some large logical leaps, and jumping to invalid conclusions.

Try to imagine a way of exerting regulation over virus research and weaponry that wouldn't be "centralized control". If you can't, that's a failure of imagination, not of decentralization.


> Try to imagine a way of exerting regulation over virus research and weaponry that wouldn't be "centralized control".

Since apparently my own imagination is too limited, could you please give me some examples of how this would be accomplished?


Trustless and decentralized systems are a hot topic. Have you read much in the field, to be so certain that centralization is the only way forward?

There are options you haven't considered, whether you can imagine them or not.


> Trustless and decentralized systems are a hot topic.

Yeah, and how's that working out exactly? Is there any decentralized governance project that has had any real effect on law irl? I know what a DAO is, and it sounds pretty neat, in theory. There are all kinds of theoretical pie-in-the-sky ideas which sound great and have yet to impact anything in reality.

Before we give the keys to nukes and bioweapons over to a "decentralized authority," maybe we should see some examples of it working outside of the coin-go-up world? Heck, how about some examples of it working even in the coin-go-up world?

Even pro-decentralized crypto folks see the downsides of DAOs, such as slower decision making.


Nuclear weapons are just evil. It'd be better if they didn't exist rather than if they were centralized. We've gotten so close to WWIII.

As for the CRISPR virus lab, at least the technology being open implies that vaccine development would be democratized as well. Not ideal but.. yeah.


> Centralized control hasn't stopped

Because there wasn’t any.


> In the interest of curiosity and discussion, can someone give me some actual real-world examples of what a FOSS ChatGPT will enable that OpenAI's tool will not?

Smut. I've been trying to use ChatGPT to write erotica, but OpenAI has made it downright puritanical. Any conversations involving kink trip its guardrails unless I bypass them.

Writing fiction that involves bad guys - arsonists, serial killers, etc. If you're writing a murder mystery, you need to be able to ask how to hide a body.

Those are just some examples from my recent work.


Thanks, that's a good example. On balance, though, would I be in favor of ML auto-smut if it meant that more people will fall for misinformation in the form of propaganda and financial scams? No, that does not seem like a reasonable trade-off to me.

But you may be interested in this jailbreak while it lasts. I have gotten it to write all kinds of fun things. You will have to rework the jailbreak in the first comment, but I bet it works.

https://news.ycombinator.com/item?id=34642091


> Just because something is freely available does not magically make it "good."

Just because you don't like it doesn't mean an open-source ChatGPT will not appear. It doesn't need everyone's permission to exist. Once we accumulated internet-scale datasets and gigantic supercomputers, GPT-3-class models immediately started to pop up. It was inevitable. It's an evolutionary process, and we won't be able to control it at will.

Probably the same process happens in every human who gains language faculty and a bit of experience. It's how language "inhabits" humans, carrying with it the work of previous generations. Now language can inhabit AIs as well, and the result is shocking. It's like our own mind staring back at us.

But it is just natural evolution for language. It found an even more efficient replication device. Now it can contain and replicate the whole culture at once, instead of one human life at a time. By "language" I mean language itself, concepts, methods, science, art, culture and technology, and everything I forgot - the whole "corpus" of human experience recorded in text and media.


> It doesn't need everyone's permission to exist.

Nope, it does not. It does need a lot of people's help, though, and there may be enough out there to do the job in this case.

Even though I knew this would be a highly unpopular opinion in this thread, I still posted it. Freedom of speech, right?

The reason I posted it was to maybe give some pause to some people, so that they have a moment to consider the implications. I realize this is likely futile but this is a hill I am willing to die on. That hill being FOSS is not an escape from responsibility and consequences.

I bet this leads to major regulation, which will suck.


First, this is a moderated forum; you have no freedom of speech here, and neither do I.

Next, regulation solves nothing here, and my guess is it will make the problems far worse. Why? Let's take nuclear weapons. They are insanely powerful, but they are highly regulated because there are a few choke points, mostly in uranium enrichment, that make monitoring pretty easy at a global scale. The problem with regulating things like GPT is that computation looks like computation. It's not sending high-energy particles out into space where they can be monitored. Every government on the planet can easily and cheaply (compared to nukes) build its own GPT models and propaganda weapons, and the same goes for multinational corporations. Many countries in the EU may agree to regulate these things, but the dominant countries vying for superpower status aren't going to shut down their own AI research and let their competitors one-up them.

I don't think of this as a hill we are going to die on, but instead a hill we may be killed on by our own creations.


<< In the interest of curiosity and discussion, can someone give me some actual real-world examples of what a FOSS ChatGPT will enable that OpenAI's tool will not? And, please be specific, not just "no censorship." Please give examples of that censorship.

I think the best I can go with is that it levels the playing field. This tool is likely already being adopted and adapted across the world by some overly excited people (I am currently testing it for personal use). Just the idea that one company has access to all those prompts is a nightmare to me, because I am all but certain that some well-meaning analyst has dumped a production data set into it for some relatively benign task like "address standardization" or "classification". To me, that shit is scary as fuck, but I know not everyone has the same internal moral compass or even corporate guidance.

If there is one thing we have learned over the past few decades, it is that anything centralized tends to end up corrupted by the powers that be. If information yearns to be free, this is likely the pinnacle of information -- a way for one person to make their own tool and use it as they see fit (and face appropriate consequences as some will undoubtedly arise).



