Hacker News

Given the high risk, investors likely want a shot of earning at least a 10x return. $157 billion x 10 = $1.57 trillion, greater than META's current market capitalization. Greater returns would require even more aggressive assumptions. For example, a 30x return would require OpenAI to become the world's most valuable company by a large margin.
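The return math above can be sanity-checked in a couple of lines (the market caps are approximate late-2024 figures, not exact):

```python
# Rough check of the comment's return arithmetic; market caps are approximate.
round_valuation = 157e9   # OpenAI's reported post-money valuation

ten_x = round_valuation * 10
thirty_x = round_valuation * 30

print(ten_x / 1e12)     # 1.57 — a 10x return implies a $1.57T valuation
print(thirty_x / 1e12)  # 4.71 — a 30x implies ~$4.7T, far past today's largest companies
```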

All I can say to the investors, with the best of hopes, is:

Good luck! You'll need it!



It's fine, Sam's bulletproof plan is to build AGI (how hard could it be) and then ask the AGI how they can make a return on their investments.

https://www.threads.net/@nixcraft/post/C5vj0naNlEq

If they haven't built AGI yet that just means you should give them more billions so they can build the AGI. You wouldn't want your earlier investments to go to waste, right?


How old is that, though? They seem to be generating revenue pretty well now, so I suspect it might be quite dated.


Revenue isn't profit. They're burning money at an impressive rate: https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...

I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone, and that isn't even factoring in the development of new models.

There was an interesting article here (can't find the link unfortunately) that was arguing that model training costs should be accounted for as operating costs, not investments, since last year's model is essentially a total write-off, and to stay competitive, an AI company needs to continue training newer frontier models essentially continuously.


Training costs amortize across an effectively unlimited number of users, making them a perfect moat even if the models need continual retraining. Success would be 10-100x current users, at which point training costs at the current scale just don’t matter.

Really their biggest risk is total compute costs falling too quickly or poor management.
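The amortization argument can be sketched numerically (the training spend and user counts below are made-up round numbers, purely for illustration):

```python
# Hypothetical figures: a fixed yearly training spend spread over the user base.
training_spend = 3e9                   # assumed annual frontier-training cost

for users in (250e6, 2.5e9, 25e9):     # roughly current, 10x, and 100x user counts
    per_user_month = training_spend / users / 12
    print(f"{users:.1e} users -> ${per_user_month:.2f}/user/month")
```

At 10x users the per-user training cost drops to pennies per month, which is the sense in which the fixed cost "just doesn't matter" at scale.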


The potential user base seems quite finite to me, even under most optimistic assumptions.


Not everyone is going to pay $20/month, but an optimistic trajectory is that they largely replace search engines while acting as a back end for a huge number of companies.

I don’t think it’s very likely, but think of it like an auction. In a room with 100 people 99 of them should think the winner overpaid. In general most people should feel a given startup was overvalued and only looking back will some of these deals look like a good investment.


WSJ reported today that ChatGPT has 250M weekly users. 10x that would be nearly the majority of internet users. 100x that would be significantly more than the population of Earth.
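For reference, the multiples work out roughly like this (the internet-user and population figures are approximations):

```python
weekly_users = 250e6                  # WSJ-reported ChatGPT weekly users

print(weekly_users * 10 / 5.3e9)      # ~0.47 — 10x is roughly half of ~5.3B internet users
print(weekly_users * 100 / 8.1e9)     # ~3.1 — 100x is ~3x Earth's ~8.1B population
```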


Someone can be a direct user, be on some company's corporate account, and be an indirect user via third parties using OpenAI on their backend.

As long as we're talking about independent revenue streams, it's worth counting them separately from an investment standpoint.


> I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone

I'd be surprised if that were the case. How many tokens is the average user going through? I'd be surprised if the average user even hit 1M tokens, much less 20M.


With o1? A lot.

Even for regular old 4o: You’re comparing to their API rates here, which might or might not cover their compute cost.


o1 is about $2-$4 per message over the API. I'm probably costing OpenAI more than my subscription price less than 24hrs after each monthly renewal.

Voice mode is around $0.25 per minute via API. I don't use it much, but 3 minutes per day would already exceed the cost of a ChatGPT Plus subscription by quite a bit.
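The voice-mode arithmetic checks out; a quick sketch (the $0.25/minute rate is as stated in the thread, not independently verified):

```python
voice_per_minute = 0.25       # assumed API rate, as quoted in the comment
plus_price = 20.0             # ChatGPT Plus monthly price

monthly_voice_cost = voice_per_minute * 3 * 30   # 3 minutes/day for 30 days
print(monthly_voice_cost)                        # 22.5 — already above the $20 subscription
```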


> o1 is about $2-$4 per message over the API

I’m not sure I understand this, sorry. I see GPT-4o at $3.75 per million input tokens and $10 per million output tokens, on OpenAI’s pricing page.

That’s expensive, and I can’t see how they can run Copilot on the standard API pricing. But it makes a message (one interaction?) cost well under $4, as far as I can tell.

How many tokens are in a typical message for you?
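One way to see this commenter's point: at the quoted $10 per million output tokens, a $4 message would need an implausible token count (a sketch using the prices as stated above):

```python
price_per_million_output = 10.0   # GPT-4o output price, as quoted in the thread
message_cost = 4.0                # the $4-per-message figure under discussion

tokens_needed = message_cost / price_per_million_output * 1e6
print(tokens_needed)              # 400000.0 — 400k output tokens per message
```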


I think this might be the article you mentioned

https://benn.substack.com/p/do-ai-companies-work


That was it, thank you!


Thought of this way, AI companies look remarkably similar to Bitcoin mining companies, which always just barely stay ahead of the difficulty increases and often fail.


AGI's answer will be easy: hyperinflation. Kills many birds with one stone.


The answer would need to be "LET THERE BE LIGHT!" for these valuations to make sense.


Even before reading your username I could've kissed you <3 Asimov


The Last Question is a wonderful short story. One of my favorites.


Reminds me of Stanisław Lem's The Phools.


The funny thing about that statement is that if it actually does become true, all of those VCs (and Altman himself), whose job is ostensibly to find the optimal uses for capital, would immediately become obsolete. Heck, the whole idea that capitalism could just continue along in its current form if true AGI existed is pretty laughable.


There are so many things wrong in this statement (starting with "immediately"). Let's assume they built a system they claim is AGI. Let's assume its energy consumption is smaller than that of a small country. Let's assume that we can verify it's AGI. Let's assume its intelligence is higher than average human. That's many "ifs" and I omitted quite a few.

Now, the question is: would you trust it? As a human, a manager, a president? With the current generation, I treat it as a dumb but quick typist: it can create code much faster than I can, but the responsibility to verify it all is entirely on me. We would need decades of proof that such an AGI is reliable in order to actually start trusting it, and even then, I'm not sure how safe it would be.


Would you settle for a few days of testing in perfect conditions? Just kidding, companies don't care!


/All Gifts, Bestowed/ by Gayou is a great read that explores this topic.


If anyone in the thread has used o1 or the real-time voice tools, it's pretty clear AGI is here already, so we are really talking about ASI.

You have no option but to trust an ASI as it is all-powerful by definition. If you don't trust ASI, your only option is to prevent it from existing to begin with.

Edit: please note that AGI ≠ "human intelligence," just a general intelligence (that may exceed humans in some areas and fall behind in others.)


> please note that AGI ≠ "human intelligence," just a general intelligence (that may exceed humans in some areas and fall behind in others.)

By this definition a calculator would be an AGI. (Behold -- a man!)


Meh, what I've seen is that we continually move the goalposts for AGI, and even GPT-3.5 would have been considered AGI by our standards from just 5 years ago.

But if I can't convince you, maybe Norvig can: https://www.noemamag.com/artificial-general-intelligence-is-...


Let’s say you gave o1 an API to control a good robot. Could it throw a football? Could it accomplish even the most basic tasks? If not, it’s not generally intelligent.


> Let’s say you gave o1 an API to control a good robot. Could it throw a football?

Maybe.

> Could it accomplish even the most basic tasks?

Definitely: https://youtu.be/Sq1QZB5baNw


I stand corrected


> If you don't trust ASI, your only option is to prevent it from existing to begin with.

I don't understand this sentence. I don't trust Generative AI because it often spits out false, inaccurate or made up answers, but I don't believe my "only option is to prevent it from existing".


ASI by definition will be all-powerful or close to it. What are you gonna do if it comes into existence?

Because if you don't trust it, you're fucked:

- Your attempts to limit its influence on your life will be effective only if the ASI decides to willfully ignore or not notice you.

- If it lies or not, there's nothing you can really do. The outcome it wants is practically guaranteed regardless. It will be running circles around your mental capacity like a human drawing circles around an ant on a piece of paper.

So what option do you really have but to trust it? I mean, sure, you can not trust the all-powerful god, but your lack of trust will never really have any effect on the real world, or even your life, as the ASI will always get what it wants.

Really, your only option is to prevent it from happening to begin with.

All that said - I think ASI will be great and people's concerns are overblown.


> ASI by definition will be all-powerful or close to it.

That's a huge assumption.

AI in SF: all-powerful all-controlling abstract entity.

AI in reality: LLMs on thousands of servers with GPUs, operated and controlled by engineers, gated by a startup masquerading as a non-profit and operating for years at a loss, with a web interface and an API, fragile enough that it becomes unavailable at times.


You are conflating AGI (what we have today) with ASI (what you are referring to in science-fiction.) These are completely different things and my comment refers to ASI, not AGI.


The problems with central planning that capitalism ostensibly solves don't exist because of a lack of intelligence, but due to the impedance mismatch between the planner and the people.

Making the central planner an AGI would just make it worse, because there's no guarantee that just because it's (super)intelligent it can empathize with its wards and optimize for their needs and desires.


I don't think the concern is that an AGI would become a central planner, but that an AGI would be so much better than human investors that the entire VC class would be outclassed, and that the free market would shift towards using AGI to make investment/capital allocation decisions. Which, of course, runs the risk of turning the whole system into a paperclip optimizing machine that consumes the planet in pursuit of profit; but the VC class seems to desire that anyway, so I don't think we can assume that a free market would consider that a bad outcome.


A fair amount of evidence has existed for at least 50 years that a chimpanzee throwing darts at a wall can outperform most active fund managers, yet this has done nothing to reduce their compensation or power.


That VC class appears to really enjoy the frisson of bullshit, elaborate games of guess-what’s-behind-the-curtain, and status posturing. Remove that hedonistic factor and the optimization is likely to be much more effective.


The problem isn't AGI becomes the oracle central planner. The problem is AGI becomes the central planner, the government and everybody else who currently has a job.


I think there won't be just one AGI, no central planner. LLM abilities leak, other models can catch up in a few months.


The problem with all such criticisms is that there is an implicit assumption that humans can be trusted.


I know human limitations. I don’t know AGI limitations.

Most humans cannot lie all the time. Their true intentions do come out from time to time.

AGI might not have that problem: AGI might hide its true intentions for hundreds of years.


Have you… met humans?


It has been known since the 1920s that capitalism isn't perfectly efficient. The competition has always been between an imperfect market directed by distributed human compute vs. a planner directed by politicians, themselves directed by human compute.

It is an argument about signal bandwidth, compression, and noise.


Honestly, it isn't a bad plan at all.

Assuming money even makes sense in a world with AGI, that is.


In the Star Trek/Culture/Commonwealth equally distributed, benevolent AI, sure. In the I’ve-got-mine reality, I assume only the select few can speak with the AI and use it to control the serfs.


There's no future where OpenAI makes everyone else a "serf" though. In 1948 certain Americans imagined that the US could rule the Earth because it got the atomic bomb first, and they naively imagined that other countries would take a generation to catch up. In reality the USSR had its own atomic bomb by 1949.

That's what the competition with OpenAI looks like to me. There are at least three other American companies with near-peer models plus strong open-weights models coming from multiple countries. No single institution or country is going to end up with a ruling-the-Earth lead in AI.


I am not thinking of some better LLM, but a genuine AI capable of original thought, with vastly superior capabilities to a human. A superintelligence which could silently sabotage competitor systems, preventing the key breakthrough to make their own AI. One which could manipulate markets, hack every system, design Terminator robots, etc.

Fanciful, yes, but that is the AI fantasy.


In many ways the USA does rule the earth now. The Grand Area is big.

With AI, I think there are extremely strong power laws that benefit the top-performing models. The best model can attract the most users, which then attracts the most capital, most data, and best researchers to make an even better model.

So while there is no hard moat, one only needs to hold the pole position until the competition runs out of money.

Also, even if no single AI company will rule the earth, if AI turns out to be useful, the AI companies might get a chunk of the profits from the additional usefulness. If the usefulness is sufficiently large, the chunk doesn't have to be a large percentage to be large in absolute terms.


America not using its nuclear advantage to secure its nuclear advantage doesn’t mean it couldn’t have.


I mean, it isn't a bad plan for VCs. Never said it's a good plan for us peasants. My opinion of sama is 'selling utopia, implementing dystopia' and that's assuming he's playing clean, which he obviously isn't.

As for a post-money world, if AGI can do every economically viable thing better than any human, the rational economic agent will at the very least let go all humans from all jobs.


This is a company at a $4 bill annual run rate.

In times gone by this would be a public company already. It's just an investment in a company with almost 2000 employees, revenue, products, brand etc. It's not an early stage VC investment, they aren't looking for a 10x.

The legal and compliance regime + depth of private capital + fear of reported vol in the US has made private investing the new public investing.


Another POV: how many new household names familiar to adults can you think of since 2015? Basically TikTok and ChatGPT. If you include kids, you get Fortnite, Snapchat, and Roblox. Do you see why this is such a big deal?


Moviepass was a new tech driven household name in my social circle back in 2017. It... didn't end well.


"tech driven" is very generous.


Expected return is set by risk and upside, not offering size. What do you think the risk of ruin is here? I think there is a substantial chance that OpenAI won't exist in 5 years.


There was an implication in there about risk. If you don't believe a company doing $4 bill of revenue is significantly less risky than the average VC investment, you might be in dreamworld.


My understanding is that it is 4B of losses, not revenue.


Both are approximately true. But, anything software has some accounting for things (like training a model) that really should be categorized as capex instead of opex and it distorts the numbers.


My understanding was that total revenue was an order of magnitude lower, in the multi-millions.


No, it's reported to be at a run rate of $4 bill pa.


>You'll need it!

If they can IPO, they will easily hit a $1.5T valuation. All Altman would have to do is follow what Elon did with Tesla. Lots of massive promises marinated in trending hype that tickles the hearts of dumb money. No need to deliver, just keep promising. He is already doing it.


The difference is Tesla had a moat with the electric car market, there were no affordable and practical EVs 10 years ago. OpenAI is surrounded by competition and Meta is constantly releasing Llama weights to break up any closed source monopolies.


Tesla is still overvalued today with a moat that is more a puddle than anything. Elon realized that cars weren't gonna carry the hype anymore, so now it's all robotaxi, which will almost certainly be more vaporware.


I think he’s even past robo taxi and onto AI and robots that build robots that build robotaxis. I wish I were joking.


The Nissan Leaf was far more affordable than a Tesla 10 years ago and very practical for anyone living in a city.


While it was an affordable vehicle, saying that it was practical is an overstatement. Charging networks were abysmal and actually still are for non-Tesla compatible vehicles. If you had experience using EVgo and similar small networks you probably wouldn't sound as confident.


People back then didn't use charging networks; they charged at home (or work).


Since I am technically "people", I can assure you that there indeed existed non-Tesla charging stations in 2014. I was living in a medium-sized city in an apartment. Since your original comment is specifically about cities, I would like to point out that cities are often associated with apartment buildings, a lack of individual garages, etc. Even today, saying that EV owners in cities mostly rely on charging at home or at work does not seem valid.


That wasn't my comment but I will say that lots of people have houses with garages in cities. Those that don't often will choose to not purchase an electric vehicle.


Eh it was pretty limited. The Leaf (then) couldn't go from my house, to the airport in my city (Melbourne) and back on one charge. That always made it a dealbreaker for me.

And that's going by Nissan's claimed range, not even real world. So that's on a 100% charge, when the car is brand new with no battery degradation, and under the ideal efficiency conditions that you never really get.


Surely your city is not 50 miles wide?


What's Dogecoin's valuation? Cardano's? Bitcoin's? There is a nigh-infinite amount of capital ready to get entranced by a sexy story.


If OpenAI hits $100B in revenue, $15B in profit with a 50% CAGR they will likely be worth even more than Tesla was at those numbers.

Tesla has really dropped off its 50% CAGR number, so now it is worth half that.


It took around 20 years for Amazon to get to $15B profit, and over 10 years for Meta/FB. Both had very clear paths to profit: sales and ads. OpenAI did not yet demonstrate how they will be able to consistently monetize their models. And if you consider how quickly similar quality free models are released today, it's definitely raising questions.


Yah, I need a "big if" interjection in my comment or something. I highly doubt they'll get there. But like Tesla & Meta, if they did get there they'd be a trillion dollar company.


Yeah, there's no upper limit to hype and exuberance.

As Isaac Newton once said, "I can calculate the motion of heavenly bodies, but not the madness of people."[a]

---

[a] https://www.goodreads.com/quotes/74548-i-can-calculate-the-m...


Don't some of these investors, such as Microsoft, get access to run the models on their own servers, as well as other benefits?

I thought Satya said Microsoft had access to everything during the Altman debacle.


My understanding is that Microsoft has already earned a large return, from incremental Azure revenues.


I think later rounds generally have lower return expectations - if you assume the stock market will return ~10%/year, you probably only need it to 2X by IPO time (depending on how long that takes) for your overall fund's IRR to beat the stock market.
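That reasoning can be checked with compound-interest arithmetic (the 7-year holding period is my assumption, not from the comment):

```python
market_return = 0.10   # assumed annual stock-market return, per the comment
years_to_ipo = 7       # hypothetical holding period until IPO

breakeven_multiple = (1 + market_return) ** years_to_ipo
print(round(breakeven_multiple, 2))   # 1.95 — so a ~2x merely matches the index
```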


You would if it were the fund’s only investment. But it won’t be. And this is still not a mature company, as their expenses currently vastly exceed revenue, so there’s always a chance of failure.

Your general sense that the later stage higher dollar figure raises look for a lower multiple than the earlier ones is correct, but they’d consider 2x a dud.


If they accomplish AGI first, they will be the world’s most valuable company, by far.

If they fall short of AGI there are still many ways a more limited but still quite useful AI might make them worth far more than Meta.

I don’t know how to handicap the odds of them doing either of these at all, but they would seem to have the best chance at it of anyone right now.


If AGI is accomplished, there’s unlikely to be a “secret sauce” to it (or a patentable sauce), and accomplishing AGI won’t by itself constitute a moat.


Maybe. Moats are often surprising. Google’s moat is just that people think of Google when they think of search. Bing could be significantly better than Google, and in fact, a lot of people think it is, and still not get anywhere.

A lot of people said Microsoft’s Windows moat in desktop operating systems was gone when you could do most of the things that a program did inside a browser instead, but it’s been decades now and they still have a 70% market share.

If you establish a lead in a product, it’s usually not that hard to find a moat.


Google’s moat is their search index and infrastructure (which is significantly larger-scale than an LLM), and the fact that non-Google/Microsoft web crawlers are being blocked by most websites.

Windows’ moat is enterprise integration, and the sheer amount of software targeting it (despite appearances, the whole world doesn’t run on the web), including hardware drivers (which, among other things, makes it the gaming platform that it is).

OpenAI could build a moat on integrations, as I mentioned.


Eh, Bing’s index and infrastructure are perfectly adequate and they’ve still got a single-digit market share. One might argue other people don’t have them (others once did) because Google’s brand moat drowned the competition and makes nobody else bother.

OpenAI could build a moat in a lot of different ways including ones that haven’t been thought of yet.

They’ll find several I am sure.


My bet would be on Anthropic or Meta winning the AI race.

Anthropic because of their investment in tools for understanding/debugging AI.

Meta because of free/open source.


People can't even consistently define what AGI is. Ask 10 different people, you'll get 11 different answers.


Aren't they proving the opposite of your proposed alternative already? A limited AI is not making them money and since every new model becomes obsolete within a year, they can't just stop and enjoy the benefits of the current model.


The fact that it isn’t making money now isn’t indicative it never will. I can think of a lot of very large tech companies who people once said the same about.


It’s an arms race, then, no? Whichever company can survive the burn can sit on their LLaurels and recoup?


That's the thing, nothing points to a world with a single winner in AI models. I get what you are saying, but not sure OpenAI can survive the burn unless they build an unmatchable AGI. And that's pure speculation at this point.


I mean, someone needs to rise to the top, unless society as a whole just says "There's no value here." and frankly there's too much real value right now for that. So someone's surviving, at least at the service level. Maybe they just end up building off of open source models, but I can't see how the best brains in the business don't find a way to get paid to make these models. Am I missing something?


There’s definitely a future for LLMs from an enterprise point of view. Even current-capability models will be widely used by companies. But it seems that will be a highly commoditized space, and OpenAI lacks the deep pockets and infrastructure capabilities of Meta and Google to distribute that commodity at the lowest cost.

OpenAI's valuation is reliant, IMO, on 1) AGI being possible through NNs, 2) them developing AGI first, and 3) it being somewhat hard to replicate. Personally I’d probably stick 10%, 40%, and 10% on those, but I’m sure others would have very different opinions or even disagree with my whole premise.


I am not saying that LLMs don't provide value, just that this value might not be captured exclusively by OpenAI in the future. If the idea is that OpenAI will have an unmatched competitive advantage over everyone else in this area, then that has already been proven wrong. The rest is speculation about AGI, the genius of Altman, etc.


They accomplished AGI (artificial general intelligence) years ago. What do you think ChatGPT is?

Alternatively, what are you imagining this “AGI” you speak of to be?


OpenAI defines AGI as "autonomous systems that outperform humans at most economically valuable work."

ChatGPT is not autonomous or capable of doubling global GDP.


That’s not the definition of AGI that has been in wide use within the research community for two decades prior to the founding of OpenAI.

The founders of OpenAI were drawn from an intellectual movement that made very specific, falsifiable predictions about the pipeline from AGI (original definition) to superintelligence, predictions which have since been entirely falsified. OpenAI talks about AGI as if it were ASI, because in their minds AGI inevitably leads to ASI in very short order (weeks or months was the standard assumption). That has proven not to be the case.


They haven't. Why are you stating lies as facts?


Artificial: man-made.

General: able to solve problem instances drawn from arbitrary domains.

Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.

Artificial. General. Intelligence. AGI.

As in contrast to narrow intelligence, like AlphaGo or DeepBlue or air traffic control expert systems, ChatGPT is a general intelligence. It is an AGI.

What you are talking about is, I assume, a superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky et al make some implicit assumptions that led them to believe that any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky had a very public meltdown two years ago, declaring the sky is falling:

https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-annou...

(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear this really represents his view.)

The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model used by Bostrom et al to model AGI behavior does not match reality.

Your assumptions about AGI and superintelligence are almost certainly downstream from Bostrom and Yudkowsky. The model upon which those predictions were made has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.


I appreciate these definitions and distinctions. Thanks for sharing. You've helped me understand that I need a better, more precise vocabulary about this topic. I think on an abstract level I would think of AGI as "the brain that's capable of understanding", but I really then have no way to truly define "understanding" in the context of something artificial. Maybe ChatGPT "understands" well enough, if the output is the same.


It does understand to a certain degree for sure. Sometimes it understands impressively well. Sometimes it seems like a special needs case. Ultimately its understanding is different than that of a human’s.

The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based off models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality as we’ve seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is actually quite difficult, field-specific, and time intensive.


Billion dollars isn't cool, you know what is? A trillion dollars.


If everyone is building datacenters, sell nuclear reactors.


You need to consider time and baseline growth. Google tells me Nasdaq CAGR for the past 17 years is around 17% so that will be just under 5x over 10 years. 10x over 10 years will be about 25%. High, but not as crazy as you suggest.
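Checking that arithmetic (the 17% Nasdaq figure is the comment's own, not verified here):

```python
nasdaq_cagr = 0.17   # assumed baseline market growth rate from the comment

print(round((1 + nasdaq_cagr) ** 10, 2))   # 4.81 — just under 5x over 10 years
print(round(10 ** (1 / 10) - 1, 3))        # 0.259 — ~26% CAGR needed for 10x in 10 years
```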


At their stage and size, it's probably 3x-5x. Still sky high!


Maybe they view it as at least a sure thing for a 2x return...

Another issue here is that at this value level they are now required to become a public company and a direct competitor to their largest partners. It will be interesting to see how the competitive landscape changes.


My understanding is that the company is burning $0.5+ to $1+ billion each month.

I'd say that's very high risk.


That is also much lower than Uber at its peak.


Uber's spending was directly attributed to growth. They were launching in new countries, new cities, new markets every day, and that required burning through an immense amount of money. Of course that growth didn't need to last forever, and once the service was fairly established everywhere the spending stopped.

OpenAI on the other hand has to spend billions to train every new iteration of their model and still loses money on every query you make. They can't scale their way out of the problem – scaling will only make it worse. They are counting on (1) the price of GPUs coming down in the near term or (2) the development of AGI, and neither of these may realistically happen.


Uber was a literally life-changing product with an obvious value for anyone. LLMs have neither benefit.


Every cafe, airport, and school I've been to has people using ChatGPT or its competitors. It's obviously valuable for almost anyone. Just like people can't imagine life before smartphones, people won't be able to imagine life before LLMs became ubiquitous. It's everywhere.


True, but there isn’t really a moat for text LLMs. Llama is open source, Gemini is basically free.


Define "moat". Open-source search engines and competitive scraping strategies were built in the early 2000s. Google's search moat has always been that they were better for 5-6 years, and then they were the default.

Once they were the default option, they merely had to be the "same" as the other options. If someone were to make a 1:1 clone of Google with ~5% fewer ads, I do not believe they would get a substantial market share of web search traffic (see Bing).


Google has decades of people making it their habit, which is a pretty strong moat. They have inertia that chatbots simply do not have yet. People have been using ChatGPT for a year or two now; it's much easier for them to switch to another one.


I’m sorry, but what? How can you tell people at the cafe/airport/school are using LLMs?


You just walk around and see chatgpt interface on their screens?


That's just not true. Source: a two-digit division.

Previous to this, they had about ~10B (via MS), and they've been operating for about 2 years at this scale. Unless they got this $$$ like a week away from being bankrupt, which I highly doubt.

Note: I'm not arguing they're profitable.


It is speculated that the majority of those $10B is Azure cloud credits. Basically company scrip. You can't pay Nvidia in scrip, or the city electricity department, or even salaries.


I remember when openai first raised and had the 100x cap and everyone said that was ridiculous and insane and of course they're not going to 100x from 1b... That would require them to become a 100b company!


The 10x return is on the investment amount, not the total valuation. And is a rule of thumb for early stage companies, not late rounds like this.


The investors will probably have no say, or be told to stfu and leave if they try to do something like forming an activist group.


For “an” AI company that can achieve market dominance, a $1.57T market cap is not unrealistic.

I think the question is: is OpenAI that company, and is market dominance possible given all the other players? I believe some investors are betting that it is OpenAI, while you and others are sceptical.

Personally I agree with you, or rather hope that it is not, primarily as I don’t trust Sam Altman and wouldn’t want him to have that power. But so far things are looking good for OpenAI.


OpenAI feels like the most politically active, with its storylines, flashy backstabs, and other intrigue.

But as far as the technology goes, we're drowning in a flood of good AI models, which are all neck and neck in benchmarks. Claude might be slightly stronger for my use, but only by a hair. Gemini might be slightly behind, but it has a natural mass-market platform with Android.

I don't see how a single player sticks their neck out without being copied within a few months. There is — still — no moat.


Investors probably aren't expecting a 10x return on a late stage investment like this.



