
Unscientifically, that puts the M1 Pro GPU at about 25% of the performance of an RTX 3080.

Not too shabby...

EDIT - this comment implies it's much faster: https://news.ycombinator.com/item?id=32679518

If that's correct then it's close to matching my 3080 (mobile).



It's likely that a significant fraction of the perf difference between Apple's GPUs and NVIDIA's GPUs is due to NVIDIA's CUDA being highly optimized, and PyTorch being tuned to work with CUDA.

If PyTorch's Metal support improves and Apple's Metal drivers improve (big ifs), it's likely that Apple's GPUs will perform better relative to NVIDIA's than they currently do.
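For anyone wanting to try this today: PyTorch (1.12+) already exposes Apple GPUs through the MPS backend, so the same script can target CUDA or Metal. A minimal device-selection sketch (the tensor math is just a placeholder workload):

```python
import torch

# Prefer CUDA (NVIDIA), then MPS (Apple Metal), then fall back to CPU.
# torch.backends.mps is available in PyTorch >= 1.12.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(4, 4, device=device)
y = x @ x  # runs on whichever backend was selected
print(device.type, y.shape)
```

Whether MPS is anywhere near CUDA speed for Stable Diffusion is exactly the open question above; this only shows that the code path exists.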


img2img runs in 6 seconds on my GeForce 3080 12 GB, at 6+ it/s depending on how much GPU memory is available. If I have any Electron apps running it slows down dramatically.


Curious about:

1. Image size

2. Steps

3. What your numbers are for text2img

4. (most importantly) are you including the 30 seconds or so it takes to load the model initially? i.e. if you were to run 10 prompts and then divide the total time by 10, what are your numbers?
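Point 4 is easy to get wrong. A minimal sketch of the amortized measurement being asked for, with hypothetical `load_model` / `run_prompt` stand-ins for the actual pipeline calls (the sleeps just simulate work):

```python
import time

def load_model():
    time.sleep(0.05)   # stand-in for the ~30 s one-time model load

def run_prompt(prompt):
    time.sleep(0.01)   # stand-in for one image generation

prompts = ["a sample prompt"] * 10

start = time.perf_counter()
load_model()                       # include the one-time load in the total
for p in prompts:
    run_prompt(p)
amortized = (time.perf_counter() - start) / len(prompts)
print(f"{amortized:.3f} s per prompt, load amortized over {len(prompts)} runs")
```

With a real 30 s load, the amortized number converges to the steady-state per-image time only as the prompt count grows, which is why load-inclusive and load-exclusive figures can differ so much for short runs.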


Re 4: the lstein repo gives you an interactive REPL, so you don't have to reload the model on every prompt.

I also have a 3080 and as far as I remember (not at my PC right now) it was 3-10 secs for img2img: 512px, cfg 13, 50 steps, batch size 1, DDIM sampler.


What args are you passing to img2img?




