Almost double that, as we didn't hit the tier for the lower $0.05 pricing. We also used some of their more expensive regions, like Singapore, for part of the storage; the 23TB was spread across different regions.
Yes. AWS is that awesome with the number of different and sophisticated services they offer. Azure is probably the only other cloud that comes close. DigitalOcean is cheap, unreliable, and the number of services on offer can be counted on the fingers of one hand.
I could talk about the issues with DO on a daily basis. Once you are past MVP, your business needs a better home.
I would love to hear more about the issues you’ve experienced with DO. We have been using them more and more lately and have had a much better experience there than on AWS, GCP, or Linode.
That's a bad analogy but yes, AWS is a vast portfolio of services to solve your problems. If all you need are some bare-metal servers and nothing else then you really don't need AWS in the first place and are paying a premium for stuff you're not using.
They have a vast portfolio of services to solve resume-driven-architecture problems. The scale at which you outgrow two modern high-end load-balanced boxes is something 99% of projects will never reach. There are legitimate use cases for cloud services, but what the majority of projects use them for is not it. Reminds me of a service at FB that ingests and processes data from roughly several hundred million requests per day: it runs on a single box and is a regular Python app. 99% of startups would implement the same thing as a data processing pipeline with Kafka, Spark, Hadoop, or whatever else is "absolutely critical" to have, to process 1/1000th of that data volume.
And comparing AWS to GCP is like comparing Hubble telescope to James Webb telescope :)
Recently I moved my infra to a more hybrid architecture, where all complex services run on major cloud providers while stateless compute services run on cheap Vultr VMs. The result is quite nice!
1. Spin up 50 AWS Lightsail instances[0] for parallelism
2. In each instance download from S3 and upload to B2.
S3 to any AWS service in the same region is free[1]. $5 Lightsail instances come with 2TB of data transfer each, so 50 of them can easily handle 23TB. The whole transfer can be done within a few hours, so the total compute cost is less than $10 ($5 / 30 days * 50 instances ≈ $8.33). The total data retrieval cost for S3 is ($0.0007 per GB) * 23,000GB = $16.10.
[1] "Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free." according to https://aws.amazon.com/s3/pricing/
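The back-of-the-envelope math above can be sanity-checked in a few lines. Note the per-unit prices are the ones quoted in this comment, not authoritative current AWS pricing:

```python
# Sanity-check the cost estimate for the 50-instance Lightsail fan-out.
# All prices are the figures quoted above, not current AWS list prices.

instances = 50
instance_monthly_usd = 5.00        # $5/month Lightsail plan
days_used = 1                      # the transfer finishes well within a day

# Pro-rated compute cost: one day of 50 instances
compute_cost = instance_monthly_usd / 30 * days_used * instances

retrieval_per_gb_usd = 0.0007      # quoted S3 retrieval price
data_gb = 23_000                   # 23 TB
retrieval_cost = retrieval_per_gb_usd * data_gb

# Each $5 instance includes 2 TB of outbound transfer
transfer_budget_tb = instances * 2

print(f"compute:   ${compute_cost:.2f}")          # ≈ $8.33
print(f"retrieval: ${retrieval_cost:.2f}")        # ≈ $16.10
print(f"transfer budget: {transfer_budget_tb} TB for a 23 TB job")
```

So even with generous rounding, the whole move stays well under $30, versus the roughly $2,000 that 23TB of direct S3 egress would cost at ~$0.09/GB.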
Lightsail has (or at least had) a hard limit of 20 instances. There is also a soft limit of 2 instances, after which you must request an increase. I had to submit a support request explaining my intended usage, and it took a week to get approved.
The stated reason for these limits is to avoid unexpectedly large bills. But I suspect that it's also to prevent crazy-ass strategies for getting around bandwidth costs.
Lightsail instances have terrible bandwidth throughput. It has high limits because it's all overbooked low-priority traffic shunted off their network as soon as possible.
I'd be pretty nervous about hitting the delete button after the data transfer.