I'm guessing it's the "pretraining" described in this 2006 Science article: http://www.cs.toronto.edu/~hinton/science.pdf. (Possibly the same line of research as the article you mention.) Sure, if you look at things from a wide enough perspective, there haven't been any "revolutionary" breakthroughs. But this paper did seem to reignite interest in neural nets after they had languished for a while. (Science described this work, somewhat hyperbolically, as "Neural nets 2.0".)
Culturally, I think Hinton made a big splash and got people to pay attention to learned feature hierarchies and SGD-like training algorithms. Algorithmically, though, SGD is both ancient and still the dominant training technique in deep learning (though useful tricks, extensions, and rules of thumb keep accumulating).
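
For anyone unfamiliar: the core SGD update is just a per-example gradient step, w <- w - lr * grad. A minimal sketch on a toy linear-regression problem (the data, learning rate, and epoch count are made up for illustration, not from the paper):

    import numpy as np

    # Toy data: y = 2x + 1 plus noise (hypothetical example)
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(100, 1))
    y = 2 * X[:, 0] + 1 + 0.1 * rng.standard_normal(100)

    w, b = 0.0, 0.0
    lr = 0.1  # learning rate: one of those "rules of thumb" knobs

    for epoch in range(50):
        # Shuffling each epoch is one of the standard SGD tricks
        for i in rng.permutation(len(X)):
            pred = w * X[i, 0] + b
            err = pred - y[i]
            # Gradient of the squared error for this single example
            w -= lr * err * X[i, 0]
            b -= lr * err

    print(w, b)  # should land near 2 and 1

Modern optimizers like momentum or Adam are basically this loop with extra state bolted onto the update; the skeleton hasn't changed much in decades.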