Collected Links to my SSD overview guides
General Info for SSD Users
(This was in the Samsung guide, but applies to all SSD vendors)
Raid / Caching excursion
Since this comes up pretty often, here are some basic hints. None of this is Samsung-specific!
Don't use low-end RAID HBAs. :)
LSI's general recommendation for SSDs is No Read Ahead / Direct IO / Write Through. By bypassing the HBA cache at all times you're supposed to see lower latency than with everything cached. For high-bandwidth use cases things may look different, but most people want to optimize for IOPS.
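As a sketch, assuming an LSI/Avago controller managed with storcli or the older MegaCli (the controller and virtual-drive numbers /c0, /v0, -L0, -a0 are placeholders; check your actual layout first):

```shell
# Set virtual drive 0 on controller 0 to Write Through,
# No Read Ahead, Direct IO -- the LSI recommendation for SSDs.
# List your layout first with: storcli64 /c0 /vall show
storcli64 /c0/v0 set wrcache=WT rdcache=NoRA iopolicy=Direct

# The same three settings with the older MegaCli tool:
MegaCli64 -LDSetProp WT      -L0 -a0
MegaCli64 -LDSetProp NORA    -L0 -a0
MegaCli64 -LDSetProp -Direct -L0 -a0
```

These are one-time controller configuration changes; they apply immediately and survive reboots.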
Above 200k IOPS on older HBAs, or 400k on newer ones, you need the LSI FastPath option. Current-gen controllers can push 700k+ IOPS with this feature unlocked. Considering the data above, 200k IOPS is just two current-gen SSDs.
IBM users are out of luck: they instead need to buy the "Advanced Performance Key" at an extreme premium. LSI also disables the write cache on some SSD models, especially those that ran into data loss during their tests. In those cases you're better off using a plain PCIe 3.0 HBA without any RAID, doing the RAID in software, or using other, more modern storage technologies.
If using CacheCade
- triple-check the HCL
- do not use prosumer SSDs
- do not use an SSD that's not on the HCL
ATA TRIM is almost _never_ passed through RAID devices. If you have a lot of writes, you need to cyclically drop the SSDs out of the RAID and do a secure erase (etc.) on them.
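With software RAID (mdadm), that drop-erase-readd cycle could look roughly like this. Device names are placeholders, and this is only a sketch: the erase wipes the drive completely, so the array must be redundant enough to rebuild afterwards.

```shell
# Placeholders: /dev/md0 is the array, /dev/sdb the SSD to refresh.
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb

# Either discard every block (fast, needs TRIM support on the drive)...
blkdiscard /dev/sdb

# ...or do a full ATA secure erase instead (drive must not be
# security-frozen; "p" is a throwaway password):
hdparm --user-master u --security-set-pass p /dev/sdb
hdparm --user-master u --security-erase   p /dev/sdb

# Re-add the drive and let the array rebuild onto the now-clean SSD.
mdadm /dev/md0 --add /dev/sdb
```

Rotate through the array one drive at a time, waiting for each rebuild to finish before touching the next SSD.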
In my own tests I did not get better results with Direct/WT. It depends on how many writer threads you have. As long as the HBA's CPU keeps up, it'll be fine. (An old LSI 9265 can handle 2.2 GB/s+.)
RAID5/RAID6 cause heavy write amplification and should be avoided; they can be used with heavily overprovisioned high-end SSDs (old example: an Intel 710 with the SATA reserve area manually increased).
Linux MD software RAID should support TRIM in most RAID levels now, but AFAIK not in RAID5/6.
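You can check whether discards actually make it through each layer of a stack with lsblk; non-zero DISC-GRAN/DISC-MAX values mean that layer supports discards:

```shell
# Shows discard granularity and max discard size per block device,
# including MD and device-mapper layers stacked on top of the SSDs.
lsblk --discard
```

If an MD or dm device in the output shows 0B in both columns, TRIM stops at that layer.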
Most (if not all) SW Cache implementations (Flashcache, EnhanceIO, ...) don't support ATA TRIM either.
In Linux LVM it has to be manually enabled in lvm.conf (the issue_discards option). The default is for it to be disabled.
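The relevant lvm.conf fragment looks like this; note that issue_discards only controls whether LVM sends discards to the underlying device when an LV is removed or reduced:

```
# /etc/lvm/lvm.conf
devices {
    # Send discards down to the physical device when an LV
    # is removed or shrunk. Default: 0 (disabled).
    issue_discards = 1
}
```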
In a stacked setup you need to enable TRIM in the filesystem (the "discard" mount option), in LVM, and in MD. If any layer is missed, it won't work.
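For the filesystem layer, the discard mount option goes into fstab; the device path below is a placeholder for whatever LV sits on top of your MD array:

```
# /etc/fstab -- online TRIM on an ext4 filesystem on LVM-on-MD
/dev/vg0/data  /data  ext4  defaults,discard  0  2
```

Alternatively, skip the discard mount option and run `fstrim /data` periodically (e.g. from cron); batched TRIM is often cheaper than issuing a discard on every delete.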
I used numbers from the official datasheets and, where available, from storagereview.com.
storagereview.com has turned out to be the only useful place for looking into SSD performance. Their benchmarks are pretty complete and let you differentiate models very well, for example if you want to find out about the differences between the Intel S3500, S3700 and Hitachi SSD400M, which all use the same Hitachi/Intel controller but differ vastly in performance across use cases.
Hardocp.com seemed to be the only source of steady-state numbers for the vanilla 840 model, but their numbers are consistently off due to a more "friendly" test procedure. They do mention this and argue that harder tests don't apply to desktop use. That's fine, but it makes their steady-state numbers completely useless for a comparison.
You might also find my article on consumer SSD failure statistics interesting. It currently covers over 30 SSDs, and I'm doing yearly updates via the comment section.