> Typical practice is to find images whose inclusion causes high variation in final accuracy (under k-fold validation, aka removing/adding the image causes a big difference)

How do you identify these images? It sounds like I'd need to train many small models to see the variance, but I'm hoping there's a more principled way?
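The brute-force version of what the quote describes can be sketched directly: drop each example in turn, retrain a cheap proxy model, and measure how much the k-fold accuracy moves. This is an illustrative sketch only (not from the thread), using scikit-learn with a synthetic stand-in dataset; `kfold_accuracy` and the leave-one-out loop are hypothetical names:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an image dataset's feature vectors.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def kfold_accuracy(X, y):
    """Mean k-fold cross-validated accuracy of a small proxy model."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5).mean()

baseline = kfold_accuracy(X, y)

# Leave-one-out influence: remove each example, retrain, and record
# how much the cross-validated accuracy shifts.
influence = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    influence[i] = baseline - kfold_accuracy(X[mask], y[mask])

# Examples whose removal moves accuracy most (in either direction)
# are the "high-variation" candidates the quote refers to.
suspects = np.argsort(-np.abs(influence))[:10]
print(suspects)
```

This costs one retraining per example, which is why it only works with a small proxy model; more scalable alternatives in the literature approximate the same quantity without full retraining (e.g. influence functions or Data Shapley estimators).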


