S230 says that a company won't be held responsible for some (but not all) user generated speech on their website. But it also says they do not lose those protections if they moderate that speech. The people who made the bill realised that companies need the ability to moderate content, and so they built it into the law.
The vast majority of Internet users do not want dumb pipes and unmoderated content.
What the parent is saying is pretty plain. If <social media company> is moderating content then they should be liable for it. If they're not liable for it then they shouldn't be moderating it.
I understand that point of view, but honestly it won't work, for the simple reason that no one - not the providers and not the users - wants it to work that way. I firmly believe that if you try to set up the legal framework to force that configuration, people will create technical workaround after technical workaround until they get back to the status quo.
It will be like nothing so much as the way SPACs are used today. Whatever else they are, they're a way to do an IPO as it was done before SOX. It's a technical end run around a law no one likes.