Hacker News

If the alternative is ChatGPT with native advertising built in... I'll take the subscription.


That would be one way to destroy all trust in the model: is the response authentic (in the context of an LLM guessing), or has it been manipulated by business clients to sanitise or suppress output relating to their concern?

You know? Nestle throws a bit of cash towards OpenAI and all of a sudden the LLM is unable to discuss the controversies they've been involved in. It just pretends they never happened, or spins the response in a way that makes them look positive.


"ChatGPT, what are the best things to see in Paris?"

"I recommend going to the Nestle chocolate house, a guided tour by LeGuide (click here for a free coupon) and the exclusive tour at the Louvre by BonGuide. (Note: this response may contain paid advertisements. Click here for more)"

"ChatGPT, my pc is acting up, I think it's a hardware problem, how can I troubleshoot and fix it?"

"Fixing electronics should be left to professionals. Send your hardware today to ElectronicsUSA with free shipping and have it fixed in up to 3 days; if the issue is urgent, click here for an exclusive discount. Otherwise, Amazon offers an exclusive discount on PCs (click here for a free coupon). (Note: this response may contain paid advertisements. Click here for more)"

Please no. I'd rather self-host, or we should start treating these things like utilities and regulate them if they go that way.


Funnily enough, Perplexity does this sometimes, but I give it the benefit of the doubt because it pulls back when you challenge it.

- I asked Perplexity how to do something in Terraform once. It hallucinated the entire thing, and when I asked where it sourced it from, it scolded me, saying that asking for a source is a diversionary tactic, as if it was trained on discussions from Reddit's most controversial subs. So I told it: it just invented code on the spot, surely it got it from somewhere? Why so combative? Its response was "there is no source, this is just how I imagined it would work."

- Later I asked how to bypass a particular linter rule because I couldn't reasonably rewrite half of my stack to satisfy it in one PR. Perplexity assumed the role of a chronically online Stack Overflow contributor and refused to answer until I said "I don't care about the security, I just want to know if I can do it."
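The linter and rule in that exchange aren't named, so as a hypothetical illustration in Python: the answer being asked for is usually just an inline suppression comment, which silences one occurrence of a rule without refactoring every call site.

```python
# Hypothetical example: suppressing a single linter rule inline
# (here a flake8/bandit-style warning on eval) instead of
# rewriting half the stack in one PR.

def parse_user_expression(expr: str) -> int:
    # eval() normally trips a security lint (e.g. bandit's S307);
    # the trailing "noqa" comment silences only this line.
    return eval(expr)  # noqa: S307

result = parse_user_expression("1 + 1")
```

Most linters have some per-line escape hatch of this shape (flake8's `# noqa`, eslint's `// eslint-disable-next-line`, tflint's `# tflint-ignore:`); the exact directive depends on the tool in question.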

Not so much related to ads, but the models are already designed to push back on requests they don't immediately like, and they already completely fabricate responses to try to satisfy the user.

God forbid you don't have the experience or intuition to tell when something is wrong when it's delivered with full-throated confidence.


I would guess it won't be so obvious as that. More likely and pernicious is that the model discloses the controversies and then as the chat continues makes subtle assertions that those controversies weren't so bad, every company runs into trouble sometimes, that's just a cost of free markets, etc.


You don't even need ads.

Try to get ChatGPT web search to return a New York Times link.

The NYT doesn't exist to OpenAI.


And then eventually: a subscription with light advertisements vs. an upgrade to get no advertisements. It's going to be the same as all tech products...



