
At USD $0.05 per gigabyte out from S3, that's about $1,150 to transfer 23TB. Is that right?

https://aws.amazon.com/s3/pricing/
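A quick sanity check in Python, assuming a flat (hypothetical) $0.05/GB; the real tariff is tiered, so treat this as the optimistic floor:

    # back-of-envelope S3 egress estimate at a flat $0.05/GB
    gb = 23 * 1000               # 23TB in decimal GB
    print(f"${gb * 0.05:,.2f}")  # -> $1,150.00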

I'd be pretty nervous about hitting the delete button after the data transfer.



Almost double that, as we didn't hit the tier for the lower $0.05 pricing. We also used some of the more expensive regions for part of the storage, like Singapore; the 23TB was spread across different regions.


Ain't AWS great. Would've been like, what, $250 on DO?


And then you also get to deal with DO losing entire clusters of Spaces for a week+ at a time, with no timely status updates. That was great...


You're not buying the same thing. Comparing AWS to DO is like comparing your home printer-scanner with the Hubble telescope.

Yes, one of them is more expensive.


Really?

I'm not saying AWS doesn't offer anything over bare servers, but do people think it's a Hubble to their home printers?


Yes. AWS is that awesome in the number of different and sophisticated services they offer. Azure is probably the only other cloud that comes close. DigitalOcean is cheap, unreliable, and the number of services on offer can be counted on the fingers of one hand. I could talk about the issues with DO on a daily basis. Once you are past MVP, your business needs a better home.


I would love to hear more about the issues you’ve experienced with DO. We have been using them more and more lately and have had a much better experience there than on AWS, GCP, or Linode.


That's a bad analogy, but yes, AWS is a vast portfolio of services to solve your problems. If all you need is some bare-metal servers and nothing else, then you really don't need AWS in the first place and are paying a premium for stuff you're not using.


They have a vast portfolio of services to solve the resume-driven-architecture problem. The scale at which you outgrow two modern high-end load-balanced boxes is something 99% of projects will never reach. There are legitimate use cases for cloud services, but what the majority of projects use them for is not it. Reminds me of a service at FB that ingests and processes data from roughly several hundred million requests per day; it runs on a single box and is a regular Python app. 99% of startups would implement the same thing as some data processing pipeline with Kafka, Spark, Hadoop, or whatever else is "absolutely critical", to process 1/1000th the data.


It's a bit of a stretch, but yeah there's a pretty vast difference between AWS or Azure and DigitalOcean.


And comparing AWS to GCP is like comparing the Hubble telescope to the James Webb telescope :)

Recently I moved my infra to a more hybrid architecture, where all the complex services run on major cloud providers while stateless compute services run on cheap Vultr VMs. The result is quite nice!


You could buy the storage disks for $2000 from Amazon.com.


AWS is overpriced in a way, but people justify it. IMHO they justify it more than they should, but...


Would this crazy-ass strategy work?

1. Spin up 50 AWS Lightsail instances[0] for parallelism

2. In each instance, download from S3 and upload to B2.

S3 to any AWS service in the same region is free[1]. $5 Lightsail instances come with 2TB of data transfer each, so 50 of them can easily handle 23TB. The whole transfer can be done within a few hours, so the total compute cost is under $10 ($5 / 30 days * 50 instances ≈ $8.30). Total S3 data retrieval cost is ($0.0007 per GB) * 23,000GB = $16.10. A sketch of what each worker might run follows the footnotes.

[0] https://aws.amazon.com/lightsail

[1] "Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free." according to https://aws.amazon.com/s3/pricing/


Lightsail has (or at least had) a hard limit of 20 instances. There's also a soft limit of 2 instances, after which you must request a higher limit. I had to submit a support request explaining my intended usage, and it took a week to get approved.

The stated reason for these limits is to avoid unexpectedly large bills. But I suspect that it's also to prevent crazy-ass strategies for getting around bandwidth costs.


Lightsail instances have terrible bandwidth throughput. The transfer allowances are high because it's all overbooked, low-priority traffic that gets shunted off their network as soon as possible.


This is absolutely sneaky and would work. I believe S3 also has a per-request access cost, but it's minimal.

If you had a partitioned list of what you needed to move per node, so that coordination between nodes was minimal, this would be pretty viable.
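A coordination-free partition can be as simple as hashing each key: every node lists the full bucket but only copies its own shard. The env var names are just illustrative:

    # Deterministic shard assignment, no coordination between nodes.
    import os
    import zlib

    NUM_WORKERS = int(os.environ["NUM_WORKERS"])  # e.g. 50
    WORKER_ID = int(os.environ["WORKER_ID"])      # 0 .. NUM_WORKERS-1

    def is_mine(key: str) -> bool:
        # crc32 is stable across processes, unlike Python's salted hash()
        return zlib.crc32(key.encode()) % NUM_WORKERS == WORKER_ID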


It would be about $2,053.03 if it's only the 23TB for that month, from the N. Virginia region.
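That matches the tiered egress rates at the time (first 1GB free, $0.09/GB up to 10TB, $0.085/GB for the next 40TB) if you count the 23TB in binary units:

    # reconstructing the $2,053.03 figure from us-east-1 egress tiers
    gb = 23 * 1024                    # 23TB = 23,552 GB
    tier1 = (10 * 1024 - 1) * 0.09    # first 10TB at $0.09/GB, minus 1 free GB
    tier2 = (gb - 10 * 1024) * 0.085  # remainder at $0.085/GB
    print(f"${tier1 + tier2:,.2f}")   # -> $2,053.03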


It would've been cheaper to buy a NAS.


A 12-bay Synology is $1,600; load it with 12 4TB drives for another $2,000.



