The blog post doesn't explicitly mention the performance problems that you are now saying prompted your move.
Then you go on to say it was simple and talk about how much cheaper it is.
I feel like I was replying to what you said in the first instance, not the more interesting underlying cause.
Did you talk to Heroku about your performance problems? I'd be interested to see how much leeway they would grant.
Edit: for people wondering why the comment was deleted, I think because zeeg accidentally replied to me instead of a different poster; nothing silly going on.
I talked to them (the people I knew) some, but the problems were mostly characteristics of the app.
So various events were like this:
* Perf problem w/ the code (e.g. didn't handle this kind of spike)
* Perf problem with the service (e.g. had a $200 db instead of a $400 one)
* Couldn't max CPU due to lack of memory
* Couldn't max CPU due to IO issues (db)
* Couldn't maintain a reasonable queue (had to use RedisToGo, which is far from cheap)
The biggest one I couldn't get around was that my queue workers required too much memory to operate (likely because they were dealing with larger JSON loads). Too much was something like 600 MB total on the dyno (not just from the process). I routinely saw "using 200% of memory" and the like in the Heroku logs, and that's when things would start going downhill.
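(For anyone hitting the same wall: one common workaround, not something the post describes doing, is to recycle a worker process once its memory crosses a threshold, similar in spirit to Celery's worker_max_memory_per_child option. A minimal Python sketch, where the 512 MB limit and the function name are made up:)

```python
import resource

# Assumed limit for illustration: recycle once peak RSS passes 512 MB.
MAX_RSS_KB = 512 * 1024

def worker_should_recycle():
    # ru_maxrss is the process's peak resident set size:
    # kilobytes on Linux, bytes on macOS.
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_maxrss > MAX_RSS_KB
```

The worker loop would check this between jobs and exit cleanly, letting the supervisor spawn a fresh process before the dyno starts swapping.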
Things could have been a lot better if I'd had more insight into the capacity/usage on dynos (without something like New Relic, which doesn't surface it well enough).