At this point it's quite likely that they could pivot and just be the ChatGPT company. I've found ChatGPT with GPT-4o, web search, and plugins to be more useful than o1 for most tasks.
It’s possible we’re nearing the end of the LLM race, but I doubt that’s the end of the AI story this decade, or OpenAI.
I'd be hard-pressed to come up with a valuation under $30B based on the publicly known finances. OpenAI is certainly crushing the metrics of other highly valued startups like Snowflake and Databricks.
The cash burn and the claim of imminent AGI are where the valuation trouble could be.
We've also barely seen the first wave of companies being built on their APIs. The billions being put into thousands of startups will take around five years to hit full scale.
Yes but it also hasn't been attacked by ads yet. Google doesn't suck for lack of search results, it sucks because of ads.
Imagine asking ChatGPT about ski slopes in Colorado, and the first five answers are about how awesome The North Face is and how you can order from them. You probably wouldn't use it as much.
Not worth $1B? Come on, man. I can see them improving the tool enough that most people would be willing to pay $50 a month for a subscription, and most companies would be willing to pay $300 per employee. It's perhaps not there yet, but I'm sure they'll reach that level of value with their offering.
It remains to be seen what competition will do to the prices though.
The market of people willing to pay $50/month for OpenAI vs. $0/month for one of the open-source Llama variants is not large enough to justify their current valuation, IMO.
It doesn't really matter how much people are willing to pay; it matters how much margin the market will allow you to charge. OpenAI may be a bit better than most competitors most of the time (IMO they keep getting leapfrogged by Anthropic et al.), but if your customers can get 90% of the value for 50% less, they will bail. There is no moat. Margins will be razor-thin. That's not a $1B+ company.
I think the difference between 90% and 95% is huge.
As a coder, if the LLM is wrong 10% of the time, that's pretty bad; I can't really trust it. If it's wrong 5% of the time, that's still not great but much better. I'd pay much more for that kind of reliability improvement.
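One way to see why halving the error rate matters so much: errors compound across a multi-step task. A minimal sketch, assuming (hypothetically) that each response is independently correct with probability p and a task chains several responses together; the numbers are purely illustrative:

```python
def chain_success(p: float, n: int) -> float:
    """Probability that all n chained calls are correct,
    assuming independent per-call accuracy p."""
    return p ** n

# 90% vs 95% per-response accuracy over a 10-step task:
for p in (0.90, 0.95):
    print(f"p={p}: 10-step success = {chain_success(p, 10):.2f}")
```

Under this toy model, 90% per-call accuracy gives roughly a 35% chance a 10-step task completes cleanly, while 95% gives roughly 60%; the "small" 5-point gap nearly doubles end-to-end reliability.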