UTF-8 and UTF-16 are variable length; UTF-32 is not. The JS spec says you can use UCS-2 or UTF-16. I believe the author meant: with UTF-16, your operations are on average faster but use more memory, while with UTF-8 you use less memory but operations are slower in the web environment.
Note that you don't necessarily use less memory with UTF-8. It only saves memory for text that can be represented in Latin-1; non-Western languages usually end up using more memory in UTF-8 than in UTF-16.
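A quick sketch of that tradeoff (Node.js; `Buffer.byteLength` counts the bytes a string takes in a given encoding — the sample strings are just illustrative):

```javascript
// ASCII text: 1 byte per char in UTF-8, 2 bytes per code unit in UTF-16.
const latin = "hello world"; // 11 characters

// Japanese text: 3 bytes per char in UTF-8, 2 bytes per code unit in UTF-16.
const japanese = "こんにちは世界"; // 7 characters

console.log(Buffer.byteLength(latin, "utf8"));     // 11 bytes
console.log(Buffer.byteLength(latin, "utf16le"));  // 22 bytes

console.log(Buffer.byteLength(japanese, "utf8"));    // 21 bytes
console.log(Buffer.byteLength(japanese, "utf16le")); // 14 bytes
```

So for the Latin string UTF-8 halves the size, but for the Japanese string UTF-8 is 50% larger than UTF-16.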
UTF-8 apparently usually ends up faster, since you're shifting a smaller amount of data between RAM and the CPU. That makes it more likely your strings fit in the CPU's cache rather than having to hit main RAM every time, leading to overall speed improvements.
(this from an interest in the subject, rather than any actual implementation work I’ve done)