A lot of people are using RunPod for experimental/small-scale workloads. They have good network and disk speeds, and you can generally find availability for a latest-gen GPU like an L40 or a 4090 if your workload fits on a single GPU. One GPU is plenty for fine-tuning Llama 2 7B or 13B with LoRA or QLoRA. They also sometimes have availability for multi-GPU servers like 8xA100s, but that's more hit-or-miss.
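For context, here's a minimal sketch of what the single-GPU QLoRA setup looks like with the Hugging Face transformers/peft/bitsandbytes stack - the model id and LoRA hyperparameters are illustrative assumptions, not a tuned recipe:

    # Sketch of QLoRA: load the base model in 4-bit, then train small LoRA adapters on top.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    model_id = "meta-llama/Llama-2-7b-hf"  # assumes you have access to the gated repo

    # 4-bit quantization config so the 7B base weights fit comfortably on a 24GB card
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Attach LoRA adapters; only these small matrices get trained, the base model stays frozen
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of total params

From there you'd hand the model to your usual Trainer/training loop. The 4-bit base plus adapters for 7B should fit well within 24GB, which is why a single 3090/4090 is enough.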
If you want to go even cheaper, vast.ai is a popular option. It's a P2P marketplace where individuals rent out their GPUs. You can generally get a ~20-30% discount vs. RunPod prices by using Vast, but network speeds and performance are much more variable, and there's always the possibility that the host will just shut you off without warning. I also wouldn't recommend it if you're training on proprietary data, since they can't guarantee the host isn't logging it, but most of the OSS fine-tuning community publishes their datasets anyway.
1. I've updated the section now: https://gpus.llm-utils.org/cloud-gpu-guide/#so-which-gpus-sh... - that should answer it. Basically, 1x 3090 or 1x 4090 is an ideal setup for Stable Diffusion, and 1x A100 80GB is an ideal setup for Llama 2 70B GPTQ (rough loading sketch below) - you can use much smaller GPUs, or even CPUs, for the smaller Llama 2 models if needed.
2. No, they don't have access to the internet unless you build something that gives them access.
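Re: point 1, roughly what loading a 4-bit GPTQ Llama 2 70B on a single A100 80GB looks like with AutoGPTQ - the checkpoint id below is an assumption, and any 4-bit GPTQ export of the 70B model should work the same way:

    # Sketch: run a GPTQ-quantized Llama 2 70B on one GPU
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    repo_id = "TheBloke/Llama-2-70B-GPTQ"  # assumed 4-bit GPTQ export

    tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
    model = AutoGPTQForCausalLM.from_quantized(
        repo_id, device="cuda:0", use_safetensors=True
    )

    prompt = "Explain LoRA in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

The 4-bit 70B weights come to roughly 35-40GB, so they fit on one 80GB card with headroom left for activations and KV cache.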
Interesting, hadn't thought of that, thank you! If you want to host endpoints, Replicate is a great option - they also have a newer fine-tuning API (quick sketch below). For raw VMs with GPUs it's a bit situational right now and you have to try multiple vendors, tbh - it also really depends on the capacity you need and which machines.
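In case it helps, a quick sketch of calling a Replicate-hosted endpoint from their Python client - the model slug and input parameters here are assumptions, check their catalog for the exact name/version you want:

    # Sketch: call a hosted model on Replicate (needs REPLICATE_API_TOKEN in the environment)
    import replicate

    output = replicate.run(
        "meta/llama-2-7b-chat",  # assumed model slug; append ":<version>" to pin a version
        input={"prompt": "Say hello in one sentence.", "max_new_tokens": 64},
    )

    # Text models typically stream their output as a sequence of chunks
    print("".join(output))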