> I can only imagine the risks in professions that aren't dealing with logic that has a very finite list of options or where the answers AI produce require researched effort to validate
This can be handled the same way it is in the non-AI-assisted process: mistakes are made, and trained professionals correct the course of action. The real problem is with people framing and selling the process as if it were a simple logical procedure. The layman will then assume that since a machine crafted the answer, and since machines are logical, the answer must be right.