The AI hardware race is still going strong, but with so many rapid changes to the fundamental architectures, it doesn't make sense to bet everything on specialized hardware just yet. It's happening, but it's expensive and slow.
There's just not enough capacity to build memory fast enough right now. Everyone needs the biggest and fastest modules they can get, since it directly impacts the performance of the models.
There's still a lot happening to improve memory, like the recent Titans paper: https://arxiv.org/abs/2501.00663
So I think until a breakthrough happens or the fabs catch up, it'll be this painful race to build more datacenters.