Not really speculation; it's mentioned in the article.
There's not really any free lunch here: vanilla Postgres is optimized to perform well within the constraints of a single scale-up host, minimizing storage usage as much as possible.
These Aurora-style systems instead operate on the assumption that storage is relatively cheap in a cloud environment, as is any compute task that can be scaled out instead of up. So they move as much as possible out of the scale-up instance into scale-out storage and compute.
Additionally, Google claims its system is better than Aurora because Google has a distributed filesystem, whereas AWS only has block storage (tied to specific compute instances) and object storage (much worse performance).
> and likewise with any compute task that can be scaled out instead of up. So move as much as possible out of the scale-up instance to scale-out storage and compute.
This was very much the main lesson I took away from reading up on DataDog's third-gen event storage system, Husky [1]. Great read.