Storage drive type
Fibre Channel drives, while once popular in enterprise data centers, are increasingly uncommon in new deployments, and in most cases aren't a good fit for Ceph's design of operating on commodity hardware. Most clusters today will be built from drives with SAS or SATA interfaces. Opinions vary regarding the merits of SAS versus SATA; with modern HBA, drive, and OS driver technologies the gap between the two may be smaller than it used to be. Successful clusters have been built on SATA drives, but many admins feel that SAS drives tend to perform better, especially under heavy load. Neither choice is likely to be disastrous, especially as Ceph is built to ensure redundancy and data availability.
The more important choice is between traditional rotating magnetic drives, also known as spinning rust or simply spinners, and SSDs. Modern SSD speed and capacity have been rising steadily year over year while pricing has been dropping. Magnetic drive progress has been less dramatic: capacities creep progressively higher, but speed and throughput are largely limited by physics. In the past, the cost per GB of an SSD was dramatically higher than that of a magnetic drive, limiting SSDs to certain performance-critical applications, but as costs continue to fall, they are increasingly price-competitive. Considering TCO and not just up-front CapEx, the fact that rotational drives consume more power and thus require more cooling further narrows the financial gap. Today's enterprise SSDs are also more reliable than magnetic drives, which must be considered as well.
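To make the TCO point concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (drive prices, wattages, cooling overhead, electricity rate) is an illustrative assumption, not a vendor quote; substitute your own numbers.

# Back-of-the-envelope TCO comparison for a single drive over 5 years.
# All numbers below are illustrative assumptions only.

YEARS = 5
HOURS_PER_YEAR = 24 * 365
POWER_COST_PER_KWH = 0.12        # USD, assumed utility rate
COOLING_OVERHEAD = 1.5           # assume ~0.5 W of cooling per 1 W of load

def tco(capex_usd, watts):
    """CapEx plus 5-year power and cooling OpEx for one drive."""
    kwh = watts * COOLING_OVERHEAD * HOURS_PER_YEAR * YEARS / 1000
    return capex_usd + kwh * POWER_COST_PER_KWH

# Hypothetical 4 TB drives: a 7200 rpm SATA spinner vs. an enterprise SSD
hdd = tco(capex_usd=120, watts=9)    # spinners draw more power
ssd = tco(capex_usd=400, watts=3)

print(f"5-year HDD TCO: ${hdd:.0f}")   # ~$191
print(f"5-year SSD TCO: ${ssd:.0f}")   # ~$424

At these assumed rates, a $280 up-front price gap shrinks to roughly $233 over five years; denser racks and lower failure rates can narrow it further.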
Your constraints may vary, but for many the choice between SSD and magnetic drives is now driven by use case. If your cluster's role is to supply block storage to thousands of virtual machines running a variety of applications, your users may not be satisfied with the performance limitations of magnetic drives, and you stand a good chance of running out of IOPS long before the cluster fills up. If, however, your use case is purely REST-style object service using the RADOS Gateway, or long-term archival, density and capacity requirements may favor larger but slower rotating drives for the OSD service.
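A rough sketch of this IOPS-versus-capacity trade-off, again in Python. The per-drive figures, per-VM demand, and replication factor are all hypothetical, and the model ignores caching and read/write mix for simplicity:

# Does a block-storage cluster run out of IOPS or capacity first?
# All per-drive and per-VM figures are rough assumptions.

OSD_COUNT = 200
REPLICATION = 3                  # 3x replication: writes land on 3 OSDs

# Assumed per-drive characteristics
HDD_IOPS, HDD_TB = 150, 8        # 7200 rpm spinner
SSD_IOPS, SSD_TB = 20_000, 2     # modest SATA SSD

# Assumed per-VM demand for a general-purpose virtualization workload
VM_IOPS, VM_TB = 100, 0.04       # 100 IOPS, 40 GB per VM

for name, iops, tb in (("HDD", HDD_IOPS, HDD_TB), ("SSD", SSD_IOPS, SSD_TB)):
    vms_by_iops = OSD_COUNT * iops // (VM_IOPS * REPLICATION)
    vms_by_capacity = int(OSD_COUNT * tb / (VM_TB * REPLICATION))
    limit = "IOPS" if vms_by_iops < vms_by_capacity else "capacity"
    print(f"{name}: {vms_by_iops} VMs by IOPS, "
          f"{vms_by_capacity} VMs by capacity -> {limit}-bound")

With these assumptions the spinner cluster supports only about 100 VMs before exhausting IOPS, despite having capacity for thousands; the SSD cluster is capacity-bound instead, which is exactly the inversion that should steer your drive choice.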
NVMe drives are a fairly new technology that is rapidly gaining acceptance as an alternative to SAS/SATA-based SSDs. Most NVMe drives available today take the form of conventional PCIe cards, though there are interesting emerging designs for hot-swappable drives in the style of traditional front-panel-bay rotational drives. Pricing of NVMe drives has fallen significantly in recent years, which, together with their impressive speed, makes them an increasingly popular choice for Ceph journal service. The traditional SAS and SATA interfaces were designed in an era of much slower media and cannot match the bandwidth of a multi-lane PCIe bus.
NVMe was designed for modern, fast media without the physical seek limitations of rotating drives, and offers a more efficient protocol. The blazing speed thus achieved means that you can often pack ten or more OSD journals onto a single NVMe device. While NVMe drives do today cost more per GB than traditional SSDs, dramatic increases in capacity are coming, and we should expect NVMe to become commonplace for journals. Most servers offer at most a handful of traditional PCIe slots, which today usually limits NVMe to journal rather than bulk OSD storage, but as server and chassis designs evolve, we are beginning to see products offering hot-swappable NVMe drives utilizing novel PCIe connection types; in coming years, entire scale-out clusters may enjoy NVMe speeds.
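To see where the ten-or-more figure comes from, consider FileStore journal sizing. The Ceph documentation's rule of thumb sizes a journal at twice the product of the expected throughput and filestore max sync interval; the sketch below applies it with assumed figures (the 200 MB/s OSD rate, 400 GB capacity, and 2000 MB/s NVMe bandwidth are illustrative assumptions):

# FileStore journal sizing per the Ceph docs' rule of thumb:
#   osd journal size = 2 * expected throughput * filestore max sync interval
# Throughput and device figures here are assumptions for illustration.

osd_throughput_mb_s = 200              # assumed sustained write rate per OSD
filestore_max_sync_interval_s = 5      # Ceph's default sync interval

journal_mb = 2 * osd_throughput_mb_s * filestore_max_sync_interval_s
print(f"Journal size per OSD: {journal_mb} MB")          # 2000 MB

# By capacity alone a device could hold far more journals than it can
# actually feed; the device's write bandwidth is the real ceiling.
nvme_capacity_gb = 400                 # hypothetical NVMe device
nvme_write_mb_s = 2000                 # assumed sequential write bandwidth

by_capacity = (nvme_capacity_gb * 1000) // journal_mb     # 200 journals
by_bandwidth = nvme_write_mb_s // osd_throughput_mb_s     # 10 journals
print(f"Journals per device: {min(by_capacity, by_bandwidth)}")

At these numbers bandwidth, not capacity, is the limiting factor, which is consistent with packing roughly ten busy OSD journals onto each device.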
Regardless of drive type, consider provisioning drives from at least two manufacturers to minimize the degree to which supply problems, disparate failure rates, and the potential for design or firmware flaws can impact your clusters. Some go so far as to source drives from separate VARs or at different times, the idea being to have drives from multiple manufacturing runs and thus avoid the bad batch phenomenon.