
An insurance model that uses data on financial stability, in a society where a specific race of people has been forced by bigotry alone into a less stable position by default, is bound to perpetuate the original bigoted patterns of behavior. It's not inaccurate to describe the model as racist, because it is built on a risk assessment in which certain groups have historically been forced to the fringe and therefore appear inherently risky.

I'm sure that if Black Wall Street hadn't been razed to the ground by a racist white mob (one of several attempts by Black people to build stability and wealth that were destroyed by racist white anger), then maybe the model wouldn't need to "reflect the underlying stats that correlate with race...". But those underlying stats, and those correlations, didn't just happen in a vacuum, hewn out of the aether like some magical, consequence-free thing.
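
A minimal sketch of the mechanism being described (all parameters and group labels are invented for illustration): a "race-blind" model trained only on a financial stability score still quotes one group higher risk, because the score itself is downstream of historical discrimination.

    # Hypothetical simulation: the model never sees race, only a
    # "financial stability" score, but historical discrimination has
    # depressed that score for group B. All numbers are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

    # Group B starts from a lower stability baseline (the historical harm).
    stability = rng.normal(loc=np.where(group == 1, -1.0, 0.0), scale=1.0)

    # Claims depend only on stability -- but stability encodes the history.
    claim = rng.random(n) < 1 / (1 + np.exp(2.0 * stability))

    # "Race-blind" model: the group column is never used in training.
    model = LogisticRegression().fit(stability.reshape(-1, 1), claim)
    risk = model.predict_proba(stability.reshape(-1, 1))[:, 1]

    print(f"mean predicted risk, group A: {risk[group == 0].mean():.3f}")
    print(f"mean predicted risk, group B: {risk[group == 1].mean():.3f}")
    # Group B is priced as higher risk despite race never entering the
    # model, because its one feature is downstream of past discrimination.

Dropping the protected attribute from the feature set does nothing here; the disparity rides in on the correlated feature.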



The points you are making are entirely political and apply generally, not just to insurance. Insurance is a business and it's not the job of the insurer to correct long-standing social ills.


The fact that my points apply generally does not mean they do not apply to insurance. If anything, it means insurance is a subset of "generally", and therefore insurance also has to deal with racism. You cannot extricate models of society from society's ills by pretending the ills aren't relevant.


"But our business processes avoid all bias; all decisions are based on a ML model, not on human decisions."


It's not political to point out that bad data will make bad stats.


Is it bad data if it reflects the underlying observed frequencies regardless of their social origins? I don't think so.
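
For what it's worth, the perpetuation claim upthread can be sketched the same way: in a hypothetical toy loop (all parameters invented), premiums that accurately track each year's observed claim frequencies still widen the initial stability gap between groups.

    # Hypothetical feedback loop: pricing is "accurate" every year, yet
    # the group gap grows. All parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)
    stability = np.where(group == 1, -1.0, 0.0) + rng.normal(0.0, 1.0, n)

    for year in range(10):
        p_claim = 1 / (1 + np.exp(2.0 * stability))  # true claim probability
        premium = 10.0 * p_claim                     # premium tracks observed risk
        stability -= 0.1 * premium                   # premiums drain stability

    gap = stability[group == 0].mean() - stability[group == 1].mean()
    print(f"stability gap after 10 years: {gap:.2f}")  # starts at 1.0, ends wider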



