You can ask the same of users consuming toxic content on Facebook. Meta knows the content is harmful, and Meta likes it because it drives engagement. They have also had policies protecting active scam ads when those ads are large enough revenue drivers - it doesn't get much more knowingly harmful than that, but it brings in the money. We shouldn't expect these businesses to have the best interests of users in mind, especially when those interests conflict with revenue opportunities.
It is much harder to blame Meta, though, because the content is dispersed and they can always say "the user decided to consume this/join this group/like this page/watch these videos," whereas ChatGPT is directly telling a person that their mother is trying to kill them.
Not that the actual harm is any different, but to a jury the second case looks much stronger.