Hacker News

> I would love nothing more than to be able to enable on-demand GPUs, but unfortunately this is a limitation from AMD right now. We can't do PCIe passthrough to a virtual machine; it just doesn't work. This is why our minimum is 8 right now. If you look at all of our competitors, they have the same issue. Even Azure's "VM" offering is 8 at a time, but they are all sold out due to high demand.

Thank you for the response!

Renting 8 GPUs at once is fine and desired; it's not the issue.

The issue is that right now one has to commit to at least 1 week of use; my usage patterns are bursty, and they do not map well to the current proposition.



This is very good feedback. We are just getting off the ground and on/off-boarding is still a bit of work for us.

Right now, we are trying to attract a mix of people: those wanting to kick the tires on a new product, as well as those wanting to take compute for the longer term.

I did mention in the pricing section that we can store your data locally, as part of the advertised pricing. This is our effort to recognize your use case.

I also understand that you want to optimize and don't want to pay for something that you're not using. We will eventually get to that point, but honestly we're just not there yet.

Think of it from our end, too: we have these GPUs, and if you're not using them, then who is? We've put out the capex/opex to make them available to you at any time, so the only way to be efficient on our side is to do a week-long block right now.

Regardless, if you want to reach out to me directly, please do so. Maybe there is a middle ground we can both work from. Happy to consider all options, and getting in early with us will always have first-mover advantages.


Thank you for being open; if my spend on vast.ai grows another 10X, I will consider reaching out. Right now, I am still a fairly small fish for your supercomputer.



