True, and I've done it! I accidentally used a consumer-grade drive (a Crucial M500) in an application where its endurance rating would be reached in less than a month.
In fairness to Micron/Crucial, the drives did not lose any data... but the write bandwidth degraded down to 10 MB/s, which counts as a failure for most apps. That's the catch in the report you cite: the bandwidth of a used-up drive is so poor that it's hard to actually write enough extra data to fully fail it.
Edit: Burning up an SSD was how I learned that TRIM is unnecessary: even though many parts of the filesystem were stable and never re-written, the wear was even across the entire drive. The firmware noticed that some cells were underused and moved the stable data off them so it could use up the endurance evenly (this is usually called static wear leveling).
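The behavior described above can be sketched as a toy simulation. This is not Micron's actual firmware algorithm, just a minimal illustration of static wear leveling: every incoming write is steered to the least-worn block, and if that block holds cold data, the controller is assumed to relocate it first, so wear stays even regardless of which files are ever rewritten.

```python
# Toy sketch of static wear leveling (illustrative only, not real firmware).
# Blocks holding "static" data still accumulate erase cycles, because the
# controller migrates cold data off fresh blocks and reuses them for writes.

def pick_victim(erase_counts):
    """Pick the least-worn block as the next write target."""
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

def simulate(num_blocks=8, writes=10_000):
    erase_counts = [0] * num_blocks
    for _ in range(writes):
        # Every write lands on the least-worn block, even if it currently
        # holds cold data (assume the firmware relocates that data first).
        b = pick_victim(erase_counts)
        erase_counts[b] += 1
    return erase_counts

counts = simulate()
# Wear ends up even across all blocks, static data or not.
print(counts, "spread:", max(counts) - min(counts))
```

Without the relocation step (i.e., if cold blocks were simply skipped), all the wear would concentrate on the hot blocks, which is exactly what the firmware is avoiding.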
"In fairness to Micron/Crucial, the drives did not lose any data... but the write bandwidth degraded down to 10 MB/s, which counts as a failure for most apps. That's the catch in the report you cite: the bandwidth of a used-up drive is so poor that it's hard to actually write enough extra data to fully fail it."
You call that a catch? I'd sell it as a feature. It's far better than "looks OK until t minus 1, then can't read or write from time t onwards," or the lesser variant of "try reading everything a few times and you may get 90% of your data back."
Can you please elaborate on what application would basically kill an SSD in a month? It is my understanding that the Crucial M500 drives are rated for something like 72 TB of writes.
Please don't take any offense, I'm just interested in how you were using the drive.