Agreed. A lot of these responses read like they haven't actually tried it yet.
Which is also interesting; I myself actively put off trying it until I eventually gave in. It seems a lot of us are doing the same, maybe it's a case of "how good could it actually be?"
Not trying it yet is fine. Making declarative statements about a product you haven't even used is just absurd.
Dude clearly hasn't used GPT for translation before, and his next reply is telling me the ways GPT should fail based on his preconceived notions of its abilities. Except I have actually extensively tested (publicly, too) LLMs for translation (even before GPT-4), and basically everything he says is just plain wrong.
I'll never understand why people behave like this.
Apparently GPT-4 can't handle "all this talk about only getting explicit meaning across would be easily dispelled in an afternoon if you only bothered to try." That isn't as simple as "Ace Attorney", but I'd think it's still a stretch to say "everything he says is just plain wrong".
1) is literally the opposite of the intent, shrugging off the idea that the talk clears up; 2) can be interpreted as someone discussing keeping a topic in scope; 3) is not so literal and also turns the sentence inside the 「」 into a sort of imperative; 4) ... I'm not sure what it's even trying to say ...
本当に、/ 使ってみたら / 簡単に / 明確な / 意味 / だけ / を伝える / という話 / は消えるでしょう。
"Really, / if used / simply / clear / meaning / only / is conveyed / that story / will disappear."
... Machine translations used to be like that when I was installing game demos from CD.