
I don't dismiss them. I think there's a huge potential in LLMs, but they _also_ happen to be really good at generating plausible, difficult to detect bullshit.

Now, people seem to miss or ignore that fact, and I think that's a very risky path.

I'd say that roughly 8 out of 10 founders who pitched me an idea leveraging LLMs completely missed that limitation. An example would be something like using LLMs as a replacement for therapy.

> How this is dismissed because it’s not 100% perfect (might I add, “yet”) is beyond me.

Again, I'm not dismissing them, but the current tech behind GPT or LLaMA has no concept of "correctness". These models don't understand what they're saying and this is not a trivial issue to fix.

> I struggle to see how even the current GPT-4 is any worse than your average human.

Worse where, at what, and in what sense? I'm pretty sure there are cases where I'd agree with you, but as stated this is a very broad claim.
