How is it they can’t go to Wikipedia, or even one of the LLMs (which, despite hallucinations, tend to get simple things right), for some corroborating evidence before making such basic mistakes in an article?
Man, I can’t even trust LLMs with simple things these days. Hardly scientific, but I ran my own little test once while talking with friends on Discord about The Game Awards back in December. ChatGPT simply omitted winners and/or categories and got it wrong three times (twice the same way, once in a unique way). We tried Gemini; it gave one wrong answer and omitted two categories. It was impressive how much worse they were than a basic search at a simple “what were the results of the 2025 Game Awards?”
Easy install, with Discord/WhatsApp/Telegram support out of the box. There’s also some agent orchestration built in, where the main LLM can farm out tasks to different models/agents. Yes, Claude Code has some of this too, but I think this has more.
I accidentally put the APP3 through a washing machine recently, and they had high-pitched feedback whenever NC was used until they were fully dry a couple of days later. I’ve also put the APP2 through the washing machine before, but never had this problem.
Give them enough time and they will. EUV will hit its limits within a decade anyway.
For China it’s DUV plus advanced packaging for now, NIL/DSA mid-term, and MoS₂/2D chips long term. But wafer-scale, defect-free 2D logic is 20–30 years out, so there’s no EUV shortcut anytime soon.
It’s mostly useful as a hands-free shortcut for getting replies from ChatGPT. Beyond that, it’s useless.
All the promises of integrating deeply into the OS and having APIs directly into apps, so you can use natural language to get any app to do things for you, are vaporware.
So much potential, but none of it delivered yet. I hope that changes soon.