
No need to worry about it happening, it already has. In terms of diffusers alone:

Stable Diffusion 3 came out recently and immediately fell flat on its face. Terrible anatomy, and basically unusable for human forms unless you use so many "unsafe" negative prompts that its "safety" training doesn't destroy the result.

In the meantime… Lumina, PixArt, and Hunyuan are all Chinese projects, or ones with heavy contributions from Chinese researchers and companies. These are rapidly gaining steam. They're far less "safe," with far less lobotomization.

We are already losing the AI race in this realm precisely because of an over-reaction to "safety."



There are three kinds of AI safety:

1. "AI is going to (help someone) take over the world." We don't need this yet, but it's better to think about it before you need it rather than afterwards.

2. AI is going to do genuinely bad things: generate CP, or teach people to make dangerous chemicals.

3. AI is going to do things that people in California think are dangerous: generate a picture of a boob, generate a picture of a group of people who are not a mixture of different ethnicities or sexes, or give an answer to a question that isn't aligned with how people in California think.

China is not concerned with any of these, although it is concerned with its own version of point 3:

3b. Generate criticism of the CCP or anything that the CCP doesn’t like people to know about.


It seems like the cat is out of the bag, and it may take a catastrophic disaster before any risks are taken seriously.



