
> I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options or where the answers AI produce require researched effort to validate

This can be handled the same way it is in the non-AI-assisted process: mistakes are made, then trained professionals correct the course of action. The real problem is people framing and selling the work as if it were a simple logical procedure. The layman will then assume that since a machine crafted the answer, and since machines are logical, the answer must be right.



And of course, there's no way to verify the output of an LLM without professional experience or training, so what's the point?


