Hacker News

I asked ChatGPT to replace "current AI" and synonyms with "HUMANS" and I'm satisfied. My favorite revised sentences:

"Does HUMANS represent a dead end?"

"HUMANS should not be used for serious applications."

"HUMANS are unmanageable, and as a consequence their use in serious contexts is irresponsible."

"HUMANS have no internal structure that relates meaningfully to their functionality."

"HUMANS have input and state spaces too large for exhaustive testing."

"HUMANS do not allow verification by parts (unit testing, integration testing, etc)."

"HUMANS have faults, but even their error behaviour is likely emergent, and certainly hard to predict or eradicate."

"HUMANS have no model of knowledge and no representation of any ‘reasoning.’"

"HUMANS represent a dead end, where exponential increases of training data and effort will give us modest increases in impressive plausibility but no foundational increase in reliability."

"HUMANS cannot be developed, or reused, as components."

"There is no possibility for stepwise development — using either informal or formal methods — for HUMANS."

and my favorite:

"In my mind, all this puts even state-of-the-art HUMANS in a position where professional responsibility dictates the avoidance of them in any serious application."



