> Why should AI need permission to perform the equivalent of "human looking" at content?
Why should a car need permission to perform the equivalent of "human walking" down the sidewalk? A car converts fuel into kinetic energy and uses friction to propel itself forward, just like a human does.
The thing is, we don't refer to what cars do as "walking" because the difference is plainly visible to us. The difference is far less visible with ML algorithms, so people keep comparing them to humans when they aren't human at all.
We need a new vocabulary to describe what these ML algorithms are doing with data.
So... you're likening AI to cars that need registration to drive on public roads; in other words, AI needs permission to train on the world's content. I like this analogy.
You had me stumped for a minute. But whenever a car drives from A to B, the "damage" is already done: fuel spent, occupants transported, pedestrians given way, road surfaces slightly worn. When AI does its training, no damage has yet occurred. There is only potential damage later, if someone decides to misuse what the AI has learnt. I wonder if this makes me an optimist. I want AI to know more, because it will be better for us humans when we use it responsibly.