You have to use the system WebKit though, right? So even though Orion is based on WebKit, they are still constrained compared to other platforms, where they can patch the engine if necessary.
> The business case is that someone else is doing the infrastructure management for you.
Many business people believe that moving to the cloud can reduce the headcount needed for managing the infra, but that is usually not what happens. You will still need more or less the same number of people to patch the OS and configure the networking, but with a slightly different skill set -- instead of Cisco IOS commands, they now need to deal with AWS Transit Gateway.
Every data center I worked with offers on-site remote hands that will rack new servers or replace hard drives or PSUs for you. There are also third-party companies that offer this service. Redundant power and air conditioning are the responsibility of the colocation provider and those are covered by the SLA. Those data centers do have generators on-site.
That said, configuring your top-of-rack switch is usually not covered by the colocation contract. This needs to be done by a network engineer, usually the same person who would manage VPCs.
If you're treating the cloud as a sort of 1:1 replacement for hardware infrastructure, then what you're saying makes sense.
But we don't ever "patch the OS." Our cloud provider does that for us. Upgrades and patching are automatic within the maintenance window. We deploy containers; the OS is just a platform layer that the cloud provider manages.
As for needing "more or less the same number of people", that simply isn't true IME. I've seen startups with whole teams devoted to running systems in a datacenter, whereas cloud-based startups will often have one or two guys, possibly even part-time, dealing with their cloud requirements.
At larger companies, it can be harder to make this comparison, since you'll typically have a mixture of cloud and on-prem. But if you look at teams with products deployed in the cloud, they'll often have much more frequent deployments, better automation, and much less dependence on the always-overcommitted infrastructure team(s).
> Every data center I worked with offers on-site remote hands that will rack new servers or replace hard drives or PSUs for you.
That's not very compatible with infrastructure as code. You're essentially saying "keep doing things the old inefficient way, it's not really that bad."
There are all sorts of failure scenarios that a managed cloud system can recover from completely automatically within minutes, but that would require frantic calls to remote hands in the old world. And even when recovery isn't automatic, it may require nothing more than some pointing and clicking.
> instead of Cisco IOS commands, they now need to deal with AWS Transit Gateway.
> This needs to be done by a network engineer, usually the same person who would manage VPCs.
These aren't comparable at all. Unless a company has very specialized networking requirements, they can set up VPCs with some pointing and clicking in the cloud provider's web UI. This is a far cry from the level of knowledge required to manage networks at the hardware level.
> But we don't ever "patch the OS." Our cloud provider does that for us.
This is absolutely not true. Unless your cloud services are limited to function-as-a-service and serverless/containerized applications, you are indeed responsible for "patching the OS".
This kind of thing is explicitly covered even in AWS's own intro-to-AWS courses, namely in the shared responsibility model.
> As for needing "more or less the same number of people", that simply isn't true IME. I've seen startups with whole teams devoted to running systems in a datacenter, whereas cloud-based startups will often have one or two guys, possibly even part-time, dealing with their cloud requirements.
Sorry, your claim is outright unbelievable. I've never seen a company running any web application that had only "one or two guys, possibly even part-time, dealing with their cloud requirements." Unless the whole company is only "one or two guys, possibly even part-time", your claim is simply far-fetched and unbelievable.
> Unless your cloud services are limited to function-as-a-service and serverless/containerized applications,
Yes, I'm talking about containerized applications, deployed to fully managed environments like EKS, Fargate, GKE, or Cloud Run. If you're not using containers, then yes, it will be more difficult to achieve what I'm describing.
I've been the one part-time guy for three different SaaS startups with funding in the $20-30m range, and globally distributed dev teams. I'm a software architect and dev primarily, so setting up the cloud platform is just a side activity.
For one of those companies, I was literally just a part-time contractor. Once it's set up properly, it should be easy for regular admins to operate, and devs can just follow the templates for configuring a service. It's not rocket science.
All the stuff you're imagining to be so complex is exactly the stuff that the cloud lets you delegate to the provider. But you have to be willing to do it; you can't stick to the old way you've always done things and expect the cloud to make anything easier for you.
> For one of those companies, I was literally just a part-time contractor. Once it's set up properly, it should be easy for regular admins to operate, and devs can just follow the templates for configuring a service. It's not rocket science.
It's also a gross misrepresentation of the initial statement. Having a part-time employee set up a few containers to run in a container orchestration system is absolutely not what "setting up the company's cloud requirements" means. It's a blatant attempt to oversell a couple of clicks' worth of work as covering a company's whole cloud requirements.
Our build pipelines rebuild container images for every deployment. All you need to do to upgrade a dependency is bump the version number in the Dockerfile.
There's no "too" because that's the only place we deal with that kind of version dependency. (Not counting dependencies internal to a service, like libraries.)
You're right, I should have clarified that IaaS is exactly the stuff you want a cloud provider to take care of for you, and that's where many of the benefits come from.
But IaaS has been around for nearly 20 years now, and containers for at least 10 years, so I tend to assume people have got that message by now, especially on HN.
Re scale: you can run at pretty high scale on managed Kubernetes clusters. E.g., GKE supports up to 15,000 nodes per cluster, hosting over 3.8 million concurrently running pods. Multi-cluster ingress is also supported if you need to run multiple clusters.
> Section 6.1.3.2 of [RFC1123] is updated: All general-purpose DNS implementations MUST support both UDP and TCP transport.
For stub resolvers like the ones provided by glibc and musl:
> Stub resolver implementations (e.g., an operating system's DNS resolution library) MUST support TCP since to do otherwise would limit the interoperability between their own clients and upstream servers.
`getaddrinfo` is a POSIX function, and the POSIX specification for it[1] allows implementations that only support UDP (it even allows implementations that don't use DNS at all). Such implementations should (but are not required to) set the `ai_protocol` member of the `struct addrinfo` hints to `IPPROTO_UDP`. Note that POSIX only says "In many cases it is implemented by the Domain Name System, as documented in RFC 1034, RFC 1035, and RFC 1886." and does not mention any later RFCs, so the RFC 7766 changes aren't required for POSIX-compliant implementations.
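For illustration, here's a minimal C sketch (not tied to any particular libc) of how a caller restricts `getaddrinfo` to UDP through the hints structure; the hostname "example.com" and the "domain" service are just placeholders:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    struct addrinfo hints, *res, *p;

    /* Restrict results to UDP datagram sockets; an implementation that
     * only supports UDP transport can still satisfy this request. */
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;     /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_DGRAM;
    hints.ai_protocol = IPPROTO_UDP;

    /* "example.com" and the "domain" service (port 53) are placeholders. */
    int err = getaddrinfo("example.com", "domain", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        char host[NI_MAXHOST];
        if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof(host),
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("resolved: %s (ai_protocol=%d)\n", host, p->ai_protocol);
    }

    freeaddrinfo(res);
    return 0;
}
```

Whether the lookup behind that call goes over UDP, TCP, or something else entirely is left to the implementation; POSIX doesn't pin it down.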
A market has two sides: supply and demand. These assumptions only involve the supply side, and we do not know much about the demand side, so it would be hard to give a full picture of what the market would look like.
That said, if we have an ample supply of any given commodity or occupation, then the market will be shaped largely by what demand looks like.
We should have a linter for issue/PR comments that flags sentences starting with "You are...". The scope of a comment should be limited to the issue or the code and must never extend to the person who brought it up.
You might have willingly uploaded your name and face to Facebook in order to set up your profile, but without proper safeguards and legislation, that data might be used to train an AI model that uses your face to identify your relationships with other users from photos they also willingly uploaded, powering features such as "People You May Know" and, of course, advertising. The data might also be sold or transferred to third parties like Cambridge Analytica for political advertising, or to government agencies for "national security" -- all without your explicit consent.
It is true that it does not matter whether a piece of data is stored on one side of the Atlantic or the other, but this is not an engineering problem about data locality and latency. As someone who spent months working on a globally distributed, GDPR-compliant identity store, my life would be much easier if the problem could simply be solved by paying a slightly higher inter-region data transfer fee.
Unfortunately, US and EU here do not refer to cloud regions but to jurisdictions, because different data protection laws apply. None of us likes this kind of complexity, but "power move" would be an oversimplified abstraction of the problem.
You're making a lot of assumptions about people's stacks. Not everyone is running behind a third-party proxy. There are many reasons not to, and many aren't up for debate (like regulatory compliance). And that's ignoring the fact that most protocols aren't like HTTP and don't know a thing about hostnames, so the only way for a "gateway" to exist is to use different ports -- something that isn't always possible and may require a lot of work to adapt to.
I only said this "proxy" needs to be dual-stacked. This "proxy" or "gateway" or whatever refers to the public-facing part of their stack -- if they don't have a public-facing part, then they don't have this problem.
The "proxy" can outsourced or manged on-perm and does not have to be shared with anyone. This "proxy" may or may not be a L7 proxy that only understands HTTP.
I run my own proxy for HTTP(S), SSH, SMTP and DNS. It took me about an hour to set it up. Only 4 IPv4 addresses are used for my whole stack; the rest is all IPv6-only.
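For anyone wondering what "dual-stacked" means concretely, here's a minimal C sketch (not my actual setup, and port 8080 is just a placeholder): a single AF_INET6 listener with IPV6_V6ONLY disabled accepts both IPv4 and IPv6 clients, which is essentially what the public-facing side of such a proxy does.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    /* One AF_INET6 listener; with IPV6_V6ONLY off it also accepts IPv4
     * clients, which show up as IPv4-mapped addresses (::ffff:a.b.c.d). */
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int off = 0;
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0) {
        perror("setsockopt(IPV6_V6ONLY)");
        return 1;
    }

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;  /* all local v4 and v6 addresses */
    addr.sin6_port   = htons(8080);  /* placeholder port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    if (listen(fd, SOMAXCONN) < 0) { perror("listen"); return 1; }

    printf("dual-stack listener on [::]:8080\n");
    /* accept() loop would go here */
    close(fd);
    return 0;
}
```

A real deployment would of course do this in the proxy's own configuration rather than in hand-rolled C, but the mechanism on the front side is the same.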