Google's Rush to Win in AI Led to Ethical Lapses, Employees Say (bnnbloomberg.ca)
42 points by hassanahmad on April 19, 2023 | 47 comments


I don't understand why Google investors put up with Sundar, who clearly doesn't care about search results being correct.

Ilya was working there on the Google Brain team, and DeepMind still has great people who could fix Google search (even some of the old search staff, who probably moved to other projects out of frustration).

It doesn't matter if 100,000 Google engineers test Bard vs. Google Search vs. ChatGPT4 if the CEO doesn't care about the product Google is monetizing.


Investors don't care about search results being correct either.


At this point they should. Google had a monopoly for 20 years, but it's over.


A bit too early to say, don't you think?


I don't think so, no. Search engines as we have known them are in their final hours. I make this prediction with high confidence.


As high confidence as ChatGPT exhibits in its answers?


Do you mean 3.5 or 4?


So OpenAi has already won?


If winning means shrinking Google's market share, yes.

Quite often I don't know if Google or ChatGPT4 will give me the better answer for the same query, so they are already competitors in my head.


> One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash

What is the employee expecting? That Bard should not be released unless it is the epitome of perfection? When you have employees like these, you don't really need enemies.


Previously, Google displayed the relevant 3rd party links (or paid links), and Google didn't take responsibility for the content.

This clear boundary was blurred once Google started auto-summarizing results with some sort of knowledge panel, which was frequently wrong. Google 1st party data carries more weight and liability than a link to 3rd party sites. https://www.hollywoodreporter.com/business/digital/10-months... https://www.vox.com/recode/22550555/google-search-knowledge-... https://www.executiveprivacy.com/resources/when-google-gets-...

Already these AI searches provide fatal advice (suggesting 40+ mile hiking days in the desert, lying about the distance between water sources & campsites), this example about the plane, inaccurate dosage of substances, etc. Essentially, any time there's a number that can be wrong with a serious outcome, these services will bullshit the number with disastrous consequences. And there's no 3rd party to hide behind now. There's no 'just a link' - a trillion dollar corporation is directly responsible for these results.

Sort of like manually-driven cars vs. self-driving cars: removing the millions of small 3rd parties could change who is responsible for the outcome. Rushing these out could be setting the stage for the next tobacco / opioid / talc lawsuits.


They passed the algorithm test though.


For example, it's treated as unethical, or screened out, for an AI to answer a question about crime rates and demographics (race / gender).

The answers you get are things like "It's essential to examine the broader context and address the underlying factors contributing to criminal activity." or that crime "is influenced by various factors such as socioeconomic status, education, and access to resources and opportunities." or "It is more useful to focus on addressing social and economic equality for all communities."

You can't actually get things like per-capita rates of reported murders by gender/race out of the models, or is there some setting/prompt you have to use for these questions?

I'm wondering if Bard maybe didn't have this filtered as thoroughly?


Why wasn't the same thing said about ChatGPT?


It was said, but not by employees, since OpenAI has a much lower percentage of very well-paid political activists on their staff.


Yes, that is exactly the point I want to drive home. GG has been built on a foundation of total transparency, with engineers/ICs driving everything. That has pros and cons. It made them move fast when they were small and not the incumbent, but it slows them down now that they have the most to lose and need to make risky decisions quickly. Amazing how company culture can have so many nuances.


GG?


It is said to a certain extent, but OpenAI doesn't have my emails and chat logs.


Notably, none of these anonymous employees accused Google of doing so, as that would be provably defamatory.


They didn't have anything to lose, so they didn't worry about ethics.


I think that it is.


Is anyone else getting tired of the "employees say", "experts say", "researchers say" tactic in journalism?

How many? And importantly, what percentage is that? What do the other employees or experts say?


I'm getting tired of the obvious deflection attempts when something bad comes out about big tech corps.


Media coverage, taken uncritically, would seem to me to suggest that big tech is worse than oil, gambling, tobacco, etc., at least in terms of how much ink gets spilled on their misdeeds.

I don't understand why we have this fantasy that this is unrelated to the very real impact that big tech has had specifically on the media in terms of redirecting advertising revenue and commoditizing their business.


Not worse, but I do think that "big tech" are the modern-day oil barons/railroad tycoons/etc., with all that comes with that.


The problem is all the old tycoons still exist!


> How many?

The article provides that information:

> The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg.


> according to 18 current and former workers at the company

So 18 out of >200k people? I bet I could find 18 ex-Googlers willing to say a lot of things.


The article does give more information, e.g. "...according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg," and identifies "the AI governance lead, Jen Gennai" directly for some of the claims.



21 experts, according to 90% of researchers, an employee said.


In a headline? No, it's making a statement about whose opinion is featured in the article. I find that weakly informative.


Sometimes all it takes is 1.


Yes. Alternatively accurate title:

Google's Rush to Win in AI Did Not Lead to Ethical Lapses, Employees Say


I think you mean:

Google's Rush to Win in AI Led to Ethical Lapses, Employees Do Not Say


I mean, Elon Musk just said in his interview yesterday that he was friends with Larry Page until he realized Larry Page had zero concerns about ethics and AI safety and called Musk a “speciesist”.

I'd say if that's the attitude of the founder, it's very likely this story is true.


I believe “self-driving” Teslas have a higher body count than Google Search, but go off.


Musk is not a very reliable source at this point. Especially when discussing ethics.


Musk says lots of nonsense. I wouldn't believe him just because he says something.

I mean, it wouldn't surprise me if that's true, but it's pretty risky just taking Musk's word for it.


Usually it means two people were willing to talk to a reporter.


Does the AI even have a frame of reference to apply "Do no evil" in a manner that makes sense to anyone?


For comparison:

Do human beings have a frame of reference to apply "Do no evil" in a manner that makes sense to everyone?

Do corporations have a frame of reference to apply "Do no evil" in a manner that makes sense to someone?

I think it is safe to say that the two preceding questions are hotly contested, so it's hard to give AI a frame of reference.


The lack of a universal agreement about ethics doesn't mean that having an ethical frame of reference isn't important.

"Nihilists! Fuck me. I mean, say what you want about the tenets of National Socialism, Dude, at least it's an ethos." -- Walter Sobchak, The Big Lebowski


> doesn't mean that having an ethical frame of reference isn't important.

I can't emphasize enough how strongly I agree with you. My intent was only to point out how difficult the problem is, maybe the hardest problem in AGI or its precursors, yet not nearly the best attended to or funded.


I mean, it was trained on the dregs of the internet. It has a frame of reference to argue that anything is evil, which will make sense to someone; and that the same thing is not evil, which will make sense to someone else. It's a bit like Poe's law, but now with Skynet.


"Do no exploding gradient"


If it is indeed learning from humans, I fear that it has a terrible source to model itself on.



