That was really informative, but it raised a question for me:
It described the browser as single threaded, but then talked about multiple concurrent tasks. Aren't those threads?
One more question: are there any browsers that use multiple threads to lay out the various object models and render the page? If it's been found to be too difficult, what were the issues?
To paint in broad strokes, the layout phase (~= take the HTML, take the CSS, determine the position and size of boxes) is largely sequential in production browser engines today. Selector matching (~= which CSS applies to which element) is parallel in Firefox today, via the Stylo Rust crate originally developed in the research browser engine Servo. Servo can do parallel layout in some capacity (but doesn't implement everything); https://github.com/servo/servo/wiki/Servo-Layout-Engines-Rep... is an interesting and recent document on the matter.
Parallel layout is generally considered to be a complex engineering problem by domain experts.
Rendering the page, as in deciding the colour of each pixel and putting them on the screen, based on the layout, style, and various other things, can be done with lots of parallelism, on the CPU or on the GPU (that is preferred on most platforms in production browser engines, these days).
MDN is just simplifying and describing the browser from the web developer's perspective.
Browsers are not literally implemented in a single thread.
JavaScript execution is driven by an event loop. Events are queued up by whatever means the browser implementer wants, as long as there is only a single thread actually consuming the events. This is where the notion of being "single-threaded" originates: the web developer assigns handler functions to event listeners, and those functions are called by this consuming thread later, when the event occurs.
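To make that concrete, here's a toy sketch of the consumer side (hypothetical names, nothing like a real browser's internals): producers can enqueue handlers however they like, but only one loop ever runs them, one at a time, to completion.

```javascript
// Toy event loop: handlers are queued by any producer, but only this
// single loop ever runs them, one at a time, to completion.
const queue = [];
const handled = [];

function enqueue(handler) {
  queue.push(handler); // e.g. the network stack, a timer, an input event
}

// Producers enqueue events; consumption is strictly FIFO.
enqueue(() => handled.push("click"));
enqueue(() => handled.push("timer"));
enqueue(() => handled.push("response"));

// The single consuming "thread": no handler is ever preempted mid-run.
while (queue.length > 0) {
  queue.shift()();
}
// handled is now ["click", "timer", "response"]
```

Nothing here can interrupt a handler once it starts; the next one simply waits its turn, which is exactly the guarantee web developers rely on.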
This kind of concurrency is cooperative multitasking. The code is executed asynchronously, but not in parallel.
The renderer is the entry point, since the HTML contains the CSS and JS tags. Generally speaking, the HTML is parsed and rendered top to bottom, in the order the tags are written, but in practice some things deviate from this, such as the "async" and "defer" attributes on script tags, as well as any HTML or CSS requiring network requests that cannot block rendering the rest of the page (img tags, url() CSS values, etc.).
Naturally, this ability to make network requests is implemented with a thread pool (at least in modern browsers), but any JavaScript waiting on a request will not execute until the event is consumed and its handler is called, which preserves the illusion of being "single-threaded". As for loading images, fonts, etc. from CSS/HTML, the developer cannot control when they are loaded and rendered. Anything that really does need threads is handled by the browser already.
My understanding is that single-thread concurrency is essentially what Javascript does. It basically flickers between tasks very rapidly to simulate concurrency. Does that match your understanding or am I incorrect?
I don't think that "flickers between tasks very rapidly to simulate concurrency" is a good mental model for event loops. It's more like "runs one task at a time until it hits a suspension point," where a suspension point is something like an I/O operation. If you had an event loop that switched tasks between suspension points, then you'd still need locks for shared data.
It doesn’t simulate, and the “flicker” is named “event loop”, but otherwise you’ve got it right. The concurrency model is essentially cooperative, ie pending tasks wait for the current task on the event loop to unblock, and then they are each executed in turn (with several different scheduling priorities based on how they became pending, eg synchronous event callbacks, Promises, various timers).
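Those scheduling priorities are observable from plain JavaScript: microtasks (Promise callbacks) queued during a task run before the next timer callback, even a 0ms one. A small sketch:

```javascript
// Observing scheduling priorities: the current task runs to completion,
// then queued microtasks (Promise callbacks), then macrotasks (timers).
const order = [];

setTimeout(() => order.push("timeout"), 0);          // macrotask
Promise.resolve().then(() => order.push("promise")); // microtask
order.push("sync");                                  // current task

// Once everything has run, order is ["sync", "promise", "timeout"]:
// the synchronous code first, then the microtask, then the timer.
```

Note that nothing "flickers" here: the synchronous code runs to the end, and only then does the scheduler pick the next pending thing by priority.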
> It basically flickers between tasks very rapidly to simulate concurrency.
That’s usually used as a description of preemptive multitasking, like what you get running a modern OS on a single core (on multiple cores it’s also this, but on each core). Every once in a small while, the current task is made to inhale chloroform, and another task is chosen to wake up and take its place. Pro: a single misbehaving task can’t hang the others; con: shared memory is very difficult to impossible, as you never know when you’ll be interrupted.
Browser JavaScript and UI interactions instead use cooperative multitasking, which you’ll also find in classic Mac OS, 16-bit Windows, basically every embedded system ever, and languages like Python and Lua. A task has to explicitly call out to the scheduler to hand off execution resources to the next one. Pro: as task switching only happens at clearly visible points, many of the horrors of shared memory are diminished; con: you can and will hang the UI (or other outside-world interactions) if you accidentally write an infinite loop or just do too much CPU-bound work without yielding (as any developer using one of the aforementioned systems knows).
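That hand-off model can be sketched with JavaScript generators, where `yield` plays the role of the explicit call out to the scheduler (a toy round-robin scheduler for illustration, not any real browser machinery):

```javascript
// Cooperative multitasking sketch: tasks yield control explicitly,
// and a round-robin scheduler interleaves them at those yield points.
const log = [];

function* taskA() { log.push("A1"); yield; log.push("A2"); }
function* taskB() { log.push("B1"); yield; log.push("B2"); }

function runCooperatively(tasks) {
  let pending = tasks.map((t) => t());
  while (pending.length > 0) {
    // Advance each task to its next yield; drop the ones that finished.
    pending = pending.filter((gen) => !gen.next().done);
  }
}

runCooperatively([taskA, taskB]);
// log is now ["A1", "B1", "A2", "B2"]: switching happens only at yields
```

A task that never yields (say, an infinite loop between two `yield`s) would starve every other task, which is precisely the "hung UI" failure mode described above.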
For how this works in the browser environment specifically, see Jake Archibald’s superb talk[1].