
We use it with SmartOS on hypervisors running customer bhyve machines. ZFS (and SmartOS) works really, really well.

Being able to replicate live data of arbitrary size (many-TB filesystems) in small chunks to other hosts every minute has greatly improved my deep sleep quality. Of course, databases with multi-host writes etc. are nice, but in our use case all customers are rather small, with just lots and lots of files (medical and otherwise); the database itself is rather small and doesn't need replication.
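The minute-interval replication described above can be sketched with `zfs send`/`zfs recv`; pool, dataset, and host names here are hypothetical:

```shell
#!/bin/sh
# Minimal incremental-replication sketch (hypothetical names:
# tank/customer-data, backuphost, backup/customer-data).
NOW=$(date +%Y%m%d%H%M)
PREV=$(zfs list -t snapshot -o name -s creation -H tank/customer-data | tail -1)

# Take a new snapshot of the live dataset.
zfs snapshot "tank/customer-data@rep-$NOW"

# Send only the delta since the previous snapshot; the stream stays
# small even when the filesystem itself is many TB.
zfs send -i "$PREV" "tank/customer-data@rep-$NOW" \
  | ssh backuphost zfs recv backup/customer-data
```

Run from cron every minute, each transfer carries only the blocks changed since the last snapshot.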

Best of all, on the receiving side of the backup, ZFS's architecture ensures the diff is applied directly on top of the existing filesystem, whereas with normal differential backups one might find out months or years later that a diff snapshot was damaged in transfer or is no longer accessible.

ZFS scrubbing combined with S.M.A.R.T. monitoring also helps a lot to ensure drive quality over time.
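A minimal sketch of that combination, assuming a pool named `tank` and smartmontools installed (device names hypothetical):

```shell
#!/bin/sh
# Kick off a scrub; ZFS reads every block and verifies its checksum.
zpool scrub tank

# Progress and any checksum/read/write error counts show up here.
zpool status tank

# smartmontools: ask the drive itself for its SMART health verdict.
smartctl -H /dev/sda
```

Scrub catches silent corruption at the filesystem layer; S.M.A.R.T. flags drives that are degrading at the hardware layer before they fail outright.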

# Gotchas

ZFS:

- There is no undo etc.; this is Unix, so beware of wrong commands.
- ZFS scrubbing can be stopped (it does sometimes affect I/O speed), but ZFS resilvering cannot. This can lead to performance issues.
- There must be enough RAM for the caching to work well, and synchronous workloads do well with good write-cache drives (ZIL).
- Data usage patterns should fit well with the append-log (copy-on-write) schema of ZFS. E.g. databases such as LevelDB worked really well. Others are not slow, but need a good ZIL more than when the pattern fits.
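For the ZIL point above, the usual approach is a dedicated SLOG device so synchronous writes land on fast media; pool and device names here are hypothetical:

```shell
#!/bin/sh
# Add a mirrored SLOG (separate ZIL device) to pool "tank";
# mirroring it avoids losing in-flight sync writes if one device dies.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# The devices should now appear under the "logs" section.
zpool status tank
```

Without a SLOG, synchronous writes are committed to the ZIL inside the main pool, which is where workloads that don't match the copy-on-write pattern pay the biggest penalty.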

SmartOS: some minor gotchas with how vmadm deletes ZFS filesystems, or with SmartOS in general (e.g. when having too many snapshots), but everything is quite predictable.
