I can't give any benchmarks, but I was doing a compile (openssh) the other day, and I noticed something peculiar.
I started the ./configure, and it blasted through 40 or 50 checks almost instantly, then stopped. Suddenly, it started taking about 3-5 seconds for each check (e.g. 'check for blarg in -lfoo'). For the rest of the compile, it was like I'd gone back in time ten years; incredibly slow. Checking top, it showed the %steal at 99.5-100% - the host system was scheduling almost no CPU at all to my machine.
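If anyone wants to watch this happen without top, steal time is the ninth field of the `cpu` line in /proc/stat (it's what top rolls up as %st). A quick sketch for sampling it yourself - assumes a reasonably modern Linux with that field layout:

```shell
#!/bin/sh
# /proc/stat cpu line: cpu user nice system idle iowait irq softirq steal ...
# Sample the cumulative counters twice and report steal as a share of
# the total ticks that elapsed in between.
read_steal() { awk '/^cpu /{print $9}' /proc/stat; }
read_total() { awk '/^cpu /{t=0; for (i=2; i<=NF; i++) t+=$i; print t}' /proc/stat; }

s1=$(read_steal); t1=$(read_total)
sleep 2
s2=$(read_steal); t2=$(read_total)

# Percentage of elapsed CPU time the hypervisor withheld from this guest.
awk -v ds=$((s2 - s1)) -v dt=$((t2 - t1)) \
    'BEGIN { if (dt > 0) printf "steal: %.1f%%\n", 100 * ds / dt }'
```

On a throttled micro instance under load this should climb toward the 99%+ figure above; on unvirtualized hardware it stays at 0.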
Playing around, I found that you basically get a specific allotment of CPU cycles per unit of time, and once you run out you don't get any more for a while (other than the bare minimum to keep the server working). This makes it great for 'bursty' load cases, but once you've used up your burst, they cut you off, so it's terrible if a spike turns into sustained load.
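The behavior matches a token bucket: you bank full-speed CPU credits up to a cap, spend them while busy, and drop to a trickle when the bucket is empty. Here's a toy simulation of that model - every number in it is invented for illustration, since Amazon doesn't publish the micro instance's actual parameters:

```shell
# Toy token-bucket model of burstable CPU (all parameters are made up).
awk 'BEGIN {
    cap = 30; credits = cap        # seconds of full-speed CPU banked
    refill = 0.1                   # credits regained per wall-clock second
    baseline = 0.03                # CPU share you get once the bucket is empty
    for (t = 1; t <= 120; t++) {   # simulate 120 s of a CPU-bound job
        if (credits >= 1) { credits -= 1 - refill; share = 1.0 }
        else              { credits += refill;     share = baseline }
        if (t % 20 == 0)
            printf "t=%3ds  credits=%5.1f  cpu_share=%.2f\n", t, credits, share
    }
}'
```

The job runs at full speed for the first half minute or so, then spends nearly all of its time pinned at the baseline - which is exactly the ./configure experience above: fast checks, then a wall.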
It's so bad that I spun up a clone of the machine, compiled SSH there, and then spun it down afterwards, which turned out to take easily half the time of compiling it on the micro instance itself.
So: great for personal blogs; terrible for suddenly getting traffic to your personal blogs.
I can't release the full results of the benchmarks I've run, and they didn't cover Micro instances (this was about a week before they came out).
I can say that when executing kcbench (set to do 4 kernel compiles, same configuration on all machines, using a '-j' equal to the number of CPUs in /proc/cpuinfo) I got the following times:
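For reference, the "-j equal to the number of CPUs in /proc/cpuinfo" part is just this; the kcbench invocation itself is left commented out and hypothetical, since its flags vary by version (check kcbench --help):

```shell
#!/bin/sh
# One make job per CPU listed in /proc/cpuinfo, as described above.
jobs=$(grep -c '^processor' /proc/cpuinfo)
echo "will build with -j${jobs}"

# Hypothetical invocation - verify the flag names against your
# installed kcbench before running:
# kcbench --iterations 4 --jobs "${jobs}"
```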
EC2 Small Node: ~849 seconds
Rackspace 1GB Node: ~81 seconds
From what I've seen, a micro instance can supposedly (that is, according to Amazon) get more "burst" cycles than a small instance, but in the long term will be significantly slower. A sibling to this comment refers to some work that compared small nodes to micro nodes, and found the small node to be over 2 times faster than the micro node for large processing jobs.
Based on that, for CPU heavy jobs I would put a 1GB Rackspace node at something like 25x faster than the EC2 micro node.
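The arithmetic behind that estimate, with the "over 2 times" figure pinned to an assumed 2.4x multiplier (any value a bit above 2 gives roughly the same answer):

```shell
# Rackspace-vs-micro estimate from the two kcbench times above plus the
# sibling comment's small-vs-micro comparison. The 2.4 is an assumption.
awk 'BEGIN {
    small = 849; rackspace = 81          # kcbench wall-clock seconds
    small_vs_rs = small / rackspace      # EC2 small is ~10.5x slower
    micro_penalty = 2.4                  # assumed: micro ~2.4x slower than small
    printf "Rackspace 1GB vs EC2 micro: ~%.0fx\n", small_vs_rs * micro_penalty
}'
```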
I found that for the most part, in order to compete with Rackspace on CPU you needed to go with at least an Extra Large (which was about on par with Rackspace) or a High-CPU Extra Large (which managed kcbench in ~44 seconds).
This is true; generally, all Rackspace Cloud servers perform about the same in terms of CPU and disk IO, so the small Rackspace instances will outperform small EC2 instances. However, they don't scale well, and on the high end they under-perform relative to comparable EC2 instances. Here is some sample data validating this (these web service links will provide XML-formatted benchmark results):
The results reported by danudey should not come as a surprise. The EC2 micro instances are designed for situations where short bursts of CPU are the norm. They were not intended to be used for continuous, compute-intensive chores.
I think the reason I was 'surprised' is that I expected the 'good for sudden bursts of CPU' to be what it was good for, rather than an actual hard limitation on how it works. Perhaps this is because I'm not terribly familiar with how EC2 is managed behind the scenes, being a new convert from Rackspace.
My post was mostly meant to illustrate that Amazon puts hard limits on how your VM operates (which makes it inconsistent over time under load), vs. Rackspace, which gives you a constant amount of CPU capacity all the time.