
An interesting research direction would be to measure how much GPT-3's output deviates from the correct answer as computational tasks require more precision. That could give some measure of which concepts the model has actually learned.


Do we have any test suites or benchmarks for models along those lines today?
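A minimal harness for the idea could look like the sketch below: generate arithmetic problems with increasing operand length and track exact-match accuracy per difficulty level. The `toy_model` function is a placeholder I made up; a real harness would call an actual language-model API there and would typically show accuracy falling as digit count grows.

```python
import random

def make_problems(n_digits, n=20, seed=0):
    """Generate n addition problems with n_digits-digit operands."""
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    return [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(n)]

def accuracy(answer_fn, n_digits):
    """Fraction of problems answer_fn gets exactly right."""
    problems = make_problems(n_digits)
    correct = sum(1 for a, b in problems
                  if answer_fn(f"{a} + {b} =") == a + b)
    return correct / len(problems)

# Placeholder "model" (hypothetical): parses the prompt and computes
# the true sum, so it scores 1.0 at every difficulty. Swap in a real
# LM call to see where accuracy starts to deviate.
def toy_model(prompt):
    a, b = (int(t) for t in prompt.rstrip(" =").split(" + "))
    return a + b

if __name__ == "__main__":
    for d in range(1, 6):
        print(f"{d}-digit addition accuracy: {accuracy(toy_model, d):.2f}")
```

Plotting accuracy against digit count for a real model gives exactly the deviation curve the parent comment describes.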



