Except that we're basing it on human-derived data, which means the AGI could pick up traits from humans simply because human behavior is in the data set. If someone feeds the CEO's behavior in and then asks the AGI "what would the CEO do in this case?", it seems like we'd get back the behavior of an AGI modeled on a CEO. With all the good and bad that implies.
We don't have any reason to believe an AGI wouldn't also have these traits.
This is similar to the argument that algorithms can't be racist. Except that we're feeding the algorithm data that comes from humans, some of whom are racists, so, surprise surprise, the algorithm turns out to behave in a racist manner, which gets shortened to "the algorithm is racist" (or classist, or whatever).
Decision making for an AGI isn't going to be based on 10 billion Reddit and 4chan comments. It's going to have its own decision-making capabilities independent of the knowledge it has, and it will be capable of drawing its own conclusions from data instead of relying on other people's opinions.
A language model today can be racist because it's predicting text, not making decisions. It hasn't decided that one race is inferior to another.