
I'm curious, where are you getting the GPUs?


In most cases the teams already had their own source or platform of choice!


Maybe build a FAQ for others on how to accomplish this, or resources you can point people at?


A lot of people are using RunPod for experimental/small-scale workloads. They have good network and disk speeds and you can generally find availability for a latest-gen GPU like an L40 or 4090 if your workload can fit on a single GPU. One GPU is plenty for fine-tuning Llama 2 7B or 13B with LoRA or QLoRA. They also sometimes have availability for multi-GPU servers like 8xA100s, but that's more hit-or-miss.
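To see why a single 24 GB card is enough for LoRA/QLoRA on a 7B or 13B model, here's a rough back-of-envelope VRAM estimate. This is my own sketch, not a measurement: the constants (4-bit overhead, adapter fraction, activation allowance) are assumptions, and real usage varies with batch size and sequence length.

```python
# Rough VRAM estimate for QLoRA fine-tuning. All constants are
# back-of-envelope assumptions; real usage depends on batch size,
# sequence length, and framework overhead.

def qlora_vram_gb(n_params_billions, lora_frac=0.01):
    base = n_params_billions * 0.55        # 4-bit weights (~0.5 B/param + quantization constants)
    trainable = n_params_billions * lora_frac  # assumed LoRA adapter fraction
    adapters = trainable * 2               # fp16 adapter weights, 2 B/param
    optimizer = trainable * 8              # Adam states, ~8 B/trainable param
    activations = 4.0                      # rough allowance in GB
    return base + adapters + optimizer + activations

for size in (7, 13):
    print(f"Llama 2 {size}B QLoRA: ~{qlora_vram_gb(size):.1f} GB")
```

Both estimates land comfortably under the 24 GB of a 4090, which matches the claim that one GPU is plenty.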

If you want to go even cheaper vast.ai is a popular option. It's a P2P marketplace for individuals to rent out their GPUs. You can generally get a ~20-30% discount vs RunPod prices by using Vast, but network speeds and perf are much more variable and there's always the possibility that the host will just shut you off without warning. I also wouldn't recommend using it if you're training with proprietary data since they can't guarantee the host isn't logging it, but most of the OSS fine-tuning community publishes their datasets anyway.


I've done a version of this: https://news.ycombinator.com/item?id=36632397

Let me know what you'd want to see added!


That was great! Thank you.

One thing I can't glean: which GPU/kit is preferred for which type of output?

Like chat vs. imaging...

Do locally run models/agents have access to the internet?

What's the best internet-connected crawler version one can use?


1. I've updated the section now: https://gpus.llm-utils.org/cloud-gpu-guide/#so-which-gpus-sh... - that should answer it. Basically 1x 3090 or 1x 4090 is an ideal setup for Stable Diffusion; 1x A100 80GB is an ideal setup for Llama 2 70B GPTQ (and you can use much smaller GPUs, or even CPUs if needed, for the smaller Llama 2 models).
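As a sanity check on why the 70B GPTQ model wants the 80 GB card, here's the weight-memory arithmetic (my own estimate, not from the linked guide; KV cache and framework overhead come on top of it):

```python
# Back-of-envelope: 4-bit (GPTQ-style) weight memory by model size.
# This is a lower bound only; KV cache and runtime overhead add more.

def gptq_weight_gb(n_params_billions, bits=4):
    bytes_per_param = bits / 8  # 4-bit -> 0.5 bytes per parameter
    return n_params_billions * bytes_per_param

for size in (7, 13, 70):
    print(f"Llama 2 {size}B at 4-bit: ~{gptq_weight_gb(size):.0f} GB of weights")
```

70B at 4-bit is ~35 GB of weights alone, so once you add KV cache for long contexts, an 80 GB A100 is the comfortable single-GPU choice.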

2. No, they don't have access to the internet unless you build something that gives them access.
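"Build something that gives them access" usually means a tool loop: you watch the model's output for a tool request, run the tool yourself, and feed the result back in. A minimal sketch below; `run_model` and `fetch_url` are hypothetical stand-ins (your local generation function and whatever HTTP client you trust), and the JSON tool-call convention is an assumption, not any specific library's API.

```python
# Minimal tool loop: route a local model's JSON "tool calls" to real
# functions and feed results back. run_model and the tools dict are
# supplied by you; nothing here hits the network by itself.

import json

def run_tool_loop(run_model, tools, prompt, max_turns=3):
    """Call the model, executing tool requests, until it answers in plain text."""
    history = prompt
    reply = ""
    for _ in range(max_turns):
        reply = run_model(history)
        # Assumed convention: the model emits a JSON object like
        # {"tool": "fetch_url", "args": {"url": "..."}} when it needs data.
        try:
            call = json.loads(reply)
        except ValueError:
            return reply  # plain-text answer, we're done
        result = tools[call["tool"]](**call["args"])
        history += f"\n[tool {call['tool']} returned]: {result}\n"
    return reply
```

The same shape works for any tool, not just web access: a shell command, a vector-store lookup, a calculator. The important part is that the model only ever sees text you choose to append.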

3. I'm not sure what you're asking


Interesting, hadn't thought of that, thank you! If you want to host endpoints, Replicate is a great option; they also have a newer fine-tuning API and solution. For raw VMs with GPUs it's a bit situational right now and you have to try multiple vendors, tbh. It also really depends on the capacity you need and which machines!



