
It's a pointless debate.

Fact is, those are LLMs, not "AI" in the way the general populace understands the term.

Fact is, that means they just give whatever answer they can derive from their training corpus, while people assume they are stating facts.

Fact is, you do not want to be liable for your half-research / half-PR product being manipulated into giving wrong or distorted facts about one of the most controversial and pivotal elections, or worse, needing no manipulation at all and just straight up hallucinating. "Prompt engineering" is a thing, after all.

Fact is, Google is terrified of being seen as being unfair or taking a side or ..., more so than even its competitors. Their entire "our AI cannot draw white people" episode smelled more of an over-reaction to a PR threat than of an attempt to push a belief.

Fact is, if you're seen as taking too hard a side in this election, your company might be at risk after the results come down, whichever side wins. Just look at how Fox has been behaving since the Dominion suit.

And last but not least, politics and religion are two of those subjects where beliefs are stronger than facts and can get people very riled up very quickly, so if you're an information company you want to treat them as encyclopedia-factual after the fact, not as a matter of opinion.

I'm European, and I abhor many of the restrictions placed on the current generation of LLMs and image generators that come from American societal values being imposed, but on the matter of politics, no matter what country you're in, it's never a good idea to play that game.



Yeah, LLMs, both in popular understanding and in their potential "rocket-stock" applications, seem wildly overhyped.

Best case, they generate something somewhat accurate, with the probability of accuracy progressively decreasing over time.


One thing I disagree with is that Google is terrified of being unfair. They are terrified because their bias would be unveiled in such a comically obvious way that they would be shamed and ridiculed for it.


I did not say "terrified [of] being unfair"

I said "Google is terrified of BEING SEEN AS being unfair or taking a side"


> Fact is, Google is terrified of being seen as being unfair or taking a side or ..., more so than even its competitors. Their entire "our AI cannot draw white people" episode smelled more of an over-reaction to a PR threat than of an attempt to push a belief.

What PR threat? The bad PR of Google being able to generate an image of a family of white European descent? Who thinks it's bad PR to do that?

Nah, Google was doing some dumb prompt stuffing, or just refusing to answer in some cases, purely for ideological reasons (intersectionality). They were absolutely pushing a belief.


The PR threat was the risk that a prompt like "happy and successful family" or "famous person", run against a model trained on the data set they had, would surface results with a massive Western bias.

You might laugh and think naaah, but as someone who is neither from the USA nor from an English-speaking country, I can tell you US/English bias on the "western web" is very common.

> Nah, Google was doing some dumb prompt stuffing, or just refusing to answer in some cases, purely for ideological reasons (intersectionality). They were absolutely pushing a belief.

While what I suggested is only a maybe, I am 100% sure that what you suggested isn't it.

If that's what they had tried, it would have been more subtle, not in-your-face obvious.


> If that's what they had tried, it would have been more subtle, not in-your-face obvious.

The "Oh they are so smart they wouldn't do that" argument.

https://arstechnica.com/information-technology/2024/02/googl...

> Google's Gemini system seems to do something similar, taking a user's image-generation prompt (the instruction, such as "make a painting of the founding fathers") and inserting terms for racial and gender diversity, such as "South Asian" or "non-binary" into the prompt before it is sent to the image-generator model. Someone on X claims to have convinced Gemini to describe how this system works, and it's consistent with our knowledge of how system prompts work with AI models. System prompts are written instructions that tell AI models how to behave, using natural language phrases.

It's such a basic propaganda tactic that it makes Google's engineers look like idiots.
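
For anyone unfamiliar with the mechanism the article describes, here's a minimal sketch of what that kind of prompt stuffing looks like in practice. Everything in it (the function name, the term list, the injection wording) is an illustrative guess on my part, not Google's actual implementation:

    import random

    # Hypothetical list of qualifiers a stuffing pipeline might inject.
    # Google's actual terms and selection logic are not public.
    DIVERSITY_TERMS = ["South Asian", "non-binary", "Black", "East Asian"]

    def stuff_prompt(user_prompt: str) -> str:
        """Rewrite the user's prompt before it reaches the image model.

        Mimics the behavior the article describes: the injected wording
        is invisible to the user, who only sees the generated images.
        """
        qualifier = random.choice(DIVERSITY_TERMS)
        return f"{user_prompt}, depicting {qualifier} people"

    if __name__ == "__main__":
        # e.g. "make a painting of the founding fathers, depicting Black people"
        print(stuff_prompt("make a painting of the founding fathers"))

The point is that the rewrite happens server-side, before the prompt ever reaches the image model, so users only see the output images and never the modified prompt. That's exactly why the behavior looked so baffling from the outside.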


> The "Oh they are so smart they wouldn't do that" argument.

That's not at all my argument, and you clearly didn't bother trying to understand what I meant.


> That's not at all my argument, and you clearly didn't bother trying to understand what I meant.

... an empty sentence for lack of a better rebuttal, because you had no sound argument to begin with.


The argument is that you're accusing someone of turning left when they actually turned right, and I'm saying you're wrong because they might have missed their turn, over-turned, or gone over the curb, but they wouldn't have gone entirely the wrong way.

You're clearly taking this personally. This is internet commenting; take a deep breath and stop fixating on winning internet points.


> You're clearly taking this personally. This is internet commenting; take a deep breath and stop fixating on winning internet points.

And then you have the audacity to claim I'm "taking this personally" while engaging in personal attacks against me yourself? Look at yourself in the mirror; the hypocrisy.

I linked to an article explaining how stupid these Google engineers were in their attempt to manipulate Gemini's results for partisan reasons, while you provided no source of your own.



