For the company: infrastructure management. You don't have a local PC (other than, perhaps, a low-cost thing acting as a fairly dumb terminal) that may have parts fail and will otherwise need upgrades every now and then, with local work that may need to be encrypted and backed up, ... You are working in "the cloud": your environment runs on a common set of VMs/containers, a fault in a node just means a new one spins up (or you get shunted onto the ones still running), hardware redundancy is handled at that level, reducing single points of failure, local machines don't need to be monitored for data/apps/other things they shouldn't have, resource management improves (does anyone run their dev PC at full tilt 24/7? No? Then CPU/IO/other resources can be shared), ...
For the individual: similar concerns about hardware failures losing work mostly go away (there are still ways to lose everything, but fewer of them), moving between environments (desktop, laptop, phone) is easy, ...
Though it depends on how much is pushed to the "cloud". You may still need some meat on the local resource bones if you aren't pushing the CPU crunching into the sky too.
Essentially we are reinventing the thin client from the 90s/00s, which in turn reinvented many mainframe concepts (not that either ever completely went away), with an eye on much the same benefits.
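To make the "disposable environment" idea above concrete, here's a minimal sketch, assuming the Docker SDK for Python and a hypothetical dev image; the point is that the environment is something you recreate, not a machine you nurse back to health:

    # Sketch only: spin up a throwaway dev container; if the node hosting it
    # dies, you just run this again somewhere else.
    import docker

    client = docker.from_env()

    def fresh_dev_environment(image="ghcr.io/example/dev-env:latest"):  # image name is a placeholder
        return client.containers.run(
            image,
            command="sleep infinity",  # keep it alive for exec/SSH sessions
            detach=True,
            auto_remove=True,          # nothing precious lives on the container itself
        )

    if __name__ == "__main__":
        c = fresh_dev_environment()
        print("dev environment up:", c.short_id)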
It's the "cattle, not pets" idea, but applied to local machines instead of servers. That's sort of how IT has been run for a while now, but only half-heartedly and in the worst way possible.
Almost every organization I've worked with has a policy of "if you get malware, we wipe your whole machine and reinstall the gold image", which is quite disruptive because you then have to reconfigure your settings, reinstall all your software packages, regenerate SSH keys, etc. It can be a whole day of downtime and then a week of slowly getting back up to full speed.
But if your hardware and local software are irrelevant, you can just swap your dumb terminal for another dumb terminal without skipping a beat. And with things like Chromebooks or iPads (actual real dumb terminals) the likelihood of getting to a "wipe it and start over" goes down a lot compared to machines running a full-fledged OS with a privileged user account.
If you drop your Chromebook in a lake, you could run to Best Buy and get a new one for $300 and you've only lost an hour or so, and if all your data is stored in OneDrive and your IDE is Codespaces you haven't lost anything of real value.
From a security perspective, you remove the possibility of exfiltrating client data, especially PII or other sensitive data. Many orgs that have to work with PII already have strict controls around it, but that usually means the company installs crapware on dev machines.
Exactly! Either developer laptops are part of the network that has access to lots of very sensitive data (and get treated accordingly) or they aren't. There's no sane middle ground where developers have infinite free rein and root on their laptops while also pulling dumps of PII from production databases.
There are a lot of situations where people tolerate less sane practices because they are convenient, but this isn't a good strategy.
It could cut IT costs dramatically. Depending on what you're working on you might need a pretty beefy machine, but nobody really wants to deal with the hassle of managing a fleet of high-powered machines. If you can use commodity hardware plus nice monitors and run the actual machines remotely, then the machine itself can be scaled up and down arbitrarily.
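As a sketch of what "scale it up and down" can look like in practice, assuming the remote dev box is an EC2 instance and boto3 credentials are already configured (the instance ID and instance types below are placeholders):

    # Sketch only: stop the dev instance, change its type, start it again.
    import boto3

    ec2 = boto3.client("ec2")

    def resize_dev_box(instance_id: str, new_type: str) -> None:
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            InstanceType={"Value": new_type},
        )
        ec2.start_instances(InstanceIds=[instance_id])

    # e.g. bump up for a heavy build, drop back down afterwards:
    # resize_dev_box("i-0123456789abcdef0", "c6i.4xlarge")
    # resize_dev_box("i-0123456789abcdef0", "t3.medium")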
This feels a lot like Stadia. There have been countless other attempts at the same idea, but the problem is always the same: latency and user interaction between the local and remote hosts end up being overwhelming constraints.
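A rough sketch of how to see this for yourself: measure the round trip to whatever remote host you'd be working on (TCP connect time is a crude proxy for RTT; the hostname below is a placeholder). Every echoed keystroke or screen update pays at least that round trip, and somewhere around 100 ms it starts to feel noticeably laggy.

    # Sketch only: crude RTT estimate via TCP connect time.
    import socket
    import time

    def rough_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass
            total += (time.perf_counter() - start) * 1000
        return total / samples

    if __name__ == "__main__":
        print(f"~{rough_rtt_ms('remote-dev.example.com'):.0f} ms per round trip")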
I have a friend who works for a big 3D animation house. When COVID hit and everyone went remote, people used Teradici to remote into their powerhouse workstations. It is apparently very performant. And touching on the IT thing, no files need to leave their secure home.
The most important one for us is credential management: say you do most of your work through a CI/CD pipeline but you need the AWS CLI to run reports, troubleshoot, etc. If that means you have credentials floating around in ~/.aws/credentials, there's a risk that an attacker could exfiltrate them. If you use a short-term credential system to load them via SSO, you have more infrastructure to maintain. If you set up a bastion host, you need to keep it secured because it's a really high-value target, and any mistake in the setup might let an attacker get higher-level credentials than the person they compromised (a common problem in what I've seen; internal infrastructure is often neglected compared to production servers).
None of that is unsolvable, of course, but this is a nice way to avoid having to deal with the O&M yourself, which is the point of a lot of cloud services.
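For what it's worth, a minimal sketch of the short-lived-credential route mentioned above, assuming AWS IAM Identity Center (SSO) is configured and an SSO profile exists in ~/.aws/config (the profile name here is a placeholder): after running "aws sso login --profile sso-dev", boto3 and the CLI pick up temporary credentials, so nothing long-lived sits in ~/.aws/credentials.

    # Sketch only: use the SSO profile's temporary credentials instead of
    # static keys in ~/.aws/credentials.
    import boto3

    session = boto3.Session(profile_name="sso-dev")  # placeholder profile name
    s3 = session.client("s3")

    # Example "run a report" style call with the short-lived credentials:
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])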