I don't understand why Google investors put up with Sundar, who clearly doesn't care about search results being correct.
Ilya was working there on the Google Brain team, and DeepMind still has great people who could fix Google search (even some of the old search staff, who probably moved to other projects out of frustration).
It doesn't matter if 100,000 Google engineers test Bard vs. Google search vs. ChatGPT-4 if the CEO doesn't care about the product Google is monetizing.
> One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash
What is the employee expecting? That Bard should not be released unless it is the epitome of perfection? With employees like these, you don't really need enemies.
These AI searches already provide fatal advice: suggesting 40+ mile hiking days in the desert, lying about the distances between water sources and campsites, this example about the plane, inaccurate dosages of substances, etc. Essentially, any time there's a number that can be wrong with a serious outcome, these services will bullshit the number, with disastrous consequences. And there's no third party to hide behind now. There's no "just a link": a trillion-dollar corporation is directly responsible for these results.
Sort of like manually driven cars vs. self-driving cars: removing the millions of small third parties could change who is responsible for the outcome. Rushing these out could be setting the stage for the next tobacco/opioid/talc lawsuits.
For example, it's treated as unethical, or screened out, for an AI to answer a question about crime rates and demographics (race/gender).
The answers you get are things like "It's essential to examine the broader context and address the underlying factors contributing to criminal activity." or that crime "is influenced by various factors such as socioeconomic status, education, and access to resources and opportunities." or "It is more useful to focus on addressing social and economic equality for all communities."
You can't actually get things like per-capita rates of reported murders by gender/race out of the models. Or is there some setting or prompt you have to use for these questions?
I'm wondering if maybe Bard didn't have this filtered properly?
Yes, that is exactly the point I want to drive home. Google has been built on a foundation of total transparency, with engineers/ICs driving everything. That has pros and cons. It made them move fast when they were small and not an incumbent, but it slows them down now that they have the most to lose and need to make risky decisions quickly. Amazing how company culture can have so many nuances.
Media coverage, taken uncritically, would seem to suggest that big tech is worse than oil, gambling, tobacco, etc., at least in terms of how much ink gets spilled on their misdeeds.
I don't understand why we have this fantasy that this is unrelated to the very real impact that big tech has had specifically on the media in terms of redirecting advertising revenue and commoditizing their business.
> The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg.
The article does give more information, e.g. "...according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg," and identifies "the AI governance lead, Jen Gennai" directly for some of the claims.
I mean, Elon Musk just said in his interview yesterday that he was friends with Larry Page until he realized Larry Page had zero concerns about ethics and AI safety and called Musk a “speciesist”.
I’d say if that’s the attitude of the founder it’s very likely this story is true.
The lack of a universal agreement about ethics doesn't mean that having an ethical frame of reference isn't important.
"Nihilists! Fuck me. I mean, say what you want about the tenets of National Socialism, Dude, at least it's an ethos." -- Walter Sobchak, The Big Lebowski
> doesn't mean that having an ethical frame of reference isn't important.
I can't emphasize enough how strongly I agree with you. My intent was only to point out how difficult the problem is: maybe the hardest problem in AGI or its precursors, yet not nearly the best attended to or funded.
I mean, it was trained on the dregs of the internet. It has a frame of reference to argue that anything is evil, which will make sense to someone, and that the same thing is not evil, which will make sense to someone else. It's a bit like Poe's law, but now with Skynet.