Exactly. That's the point. It is beneficial only when you have a huge number of requests relative to the number of platform threads.
The idea is that you can map a million of these virtual threads onto a small number of platform threads, and the JVM will schedule the work for you, to achieve maximum throughput without needing to write any complicated code. Virtual threads are for high throughput on concurrent blocking requests.
EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.
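To make that concrete, here is a minimal sketch (assuming Java 21+; `callRemote` and the request counts are hypothetical stand-ins for real blocking network calls): within each simulated request the two calls run in sequence and block, but the requests themselves overlap because each one gets its own virtual thread.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {

    // Hypothetical stand-in for a blocking remote call.
    static String callRemote(String name) throws InterruptedException {
        Thread.sleep(Duration.ofMillis(50)); // parks the virtual thread, frees its carrier
        return name + "-result";
    }

    // Simulates n concurrent "HTTP requests", each making two sequential blocking calls.
    static int runRequests(int n) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        // One virtual thread per request; the JVM multiplexes them
        // onto a small pool of platform (carrier) threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    // Sequential, blocking calls WITHIN one request:
                    String a = callRemote("a"); // blocks this virtual thread
                    String b = callRemote("b"); // then blocks again
                    completed.incrementAndGet();
                    return a + b;
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        int done = runRequests(1_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // 1,000 requests x 2 x 50 ms of blocking complete in roughly 100 ms
        // of wall time, not 100 seconds, because the requests overlap.
        System.out.println(done + " requests in " + elapsedMs + " ms");
    }
}
```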
> EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.
Yeah I got that. But then the use case seems much narrower, because not every scenario involves a large number of concurrent requests. If you write non-blocking code you'll get high throughput no matter the number of requests. OK, maybe non-blocking code is harder to write (and I don't think it's actually that hard if you use, say, Kotlin coroutines), but honestly this seems to me like something developers should learn eventually anyway.
Or maybe it's me being weird. But I remember 10 years ago when node.js was being hyped for being "fast" because it was "async", and now suddenly using threads and blocking operations is all the rage again.
The difference now is that it's implemented in the JVM instead of a library / framework. It is easier, simpler, and probably more efficient. You can get higher throughput from existing code with minor refactoring.
Also there are some traceability issues involved in using asynchronous APIs: "In the asynchronous style, each stage of a request might execute on a different thread, and every thread runs stages belonging to different requests in an interleaved fashion. This has deep implications for understanding program behavior: Stack traces provide no usable context, debuggers cannot step through request-handling logic, and profilers cannot associate an operation's cost with its caller. "
Collecting those potentially very expensive stack traces, especially in highly concurrent environments with high throughput requirements, would be great for debugging but would probably kill the performance of the system. Still, it would be nice to have as an option (even if only as a very heavy hammer).
I'm not saying that virtual threads aren't a good thing (runtimes other than the JVM have had green threads for a very long time now), but it doesn't seem like such a paradigm shift. It's still threads (with all the benefits and drawbacks); it's just that they now have less overhead.
Presumably I could just keep an existing server written in Spring MVC or a similar technology and just wait for the underlying container to support virtual threads to get the same benefits. I believe Jetty already does support them. So why would I need a new framework?
It removes the need to write async code in many cases. It is a more straightforward and efficient way to accomplish what we have already been doing for years on the JVM.
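A rough before/after sketch of what that refactoring looks like (the service methods here are hypothetical placeholders, and the real async version would involve actual I/O): the same request-handling logic expressed first as a `CompletableFuture` chain, then as plain sequential code that would simply block on a virtual thread.

```java
import java.util.concurrent.CompletableFuture;

public class RefactorSketch {
    // Hypothetical service calls; in real code these would hit the network.
    static String fetchUser(String id) { return "user-" + id; }
    static String fetchOrders(String user) { return user + ":orders"; }

    // Async style: each stage may run on a different thread, so stack
    // traces and debugger stepping lose the per-request context.
    static CompletableFuture<String> handleAsync(String id) {
        return CompletableFuture.supplyAsync(() -> fetchUser(id))
                .thenApply(RefactorSketch::fetchOrders);
    }

    // Virtual-thread style: the same logic as plain sequential code.
    // Run it on a virtual thread and the blocking is cheap, and the
    // whole request stays on one thread with a readable stack trace.
    static String handleBlocking(String id) {
        String user = fetchUser(id);   // would block here in real code
        return fetchOrders(user);      // and here
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleAsync("42").get());
        System.out.println(handleBlocking("42"));
    }
}
```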