The problems with combining FC and SATA drives on the same NetApp

The idea of tiered storage is something many businesses are seriously exploring these days, and leveraging it for “cloud” operations is a major focus. The idea behind tiered storage is that you have different classes of disk with different performance characteristics, the main differentiator being speed. We recently looked into the possibility of adding some tiering to one of the NetApp environments I manage. The idea was to use 300GB FC (Fibre Channel) drives as our Tier 1 NAS disk and some 1TB SATA drives as our Tier 2. On the surface this seems like a good idea for a couple of reasons:

  1. The Tier 1 disk is fast enough to run Oracle databases over NFS, provided those databases are configured properly.
  2. The Tier 2 disk is much cheaper and would be perfect for housing non-critical, non-performance-intensive shares, such as home directories, at a fraction of the cost.
  3. The mix of available disk would allow us to tailor allocations to each project's actual needs based on its performance requirements.

So as I began to look into this course of action, I discovered a few things that completely negate the idea, at least for housing both tiers in one filer (or even a clustered pair):

  • When using FC connectivity to disk shelves, FC drives and SATA drives must go on different loops. This means that if you add a shelf of SATA to an open FC port on a filer, you will not be able to add any FC drives to that loop (a quick planning check for this is sketched after this list).
  • All write operations are committed to disk as a group (a consistency point), regardless of which aggregate those writes are destined for. So, on a system with SATA and FC disk, writes headed for FC drives may be slowed down by the writes going to the SATA drives if the SATA drives are busy.
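
To make the first constraint concrete, here is a minimal planning sketch in Python. It is not a NetApp tool, just a sanity check you could run against a hand-maintained inventory before cabling shelves; the loop names, shelf names, and the layout itself are made-up examples, and the six-shelf limit is the per-loop figure discussed further below.

```python
# Hypothetical planning aid: flag any FC loop that would mix FC and SATA
# shelves, or that would exceed the assumed 6-shelf-per-loop limit.
# All loop/shelf names and the layout are invented for illustration.

MAX_SHELVES_PER_LOOP = 6

# Proposed layout: adapter/loop -> list of (shelf_name, disk_type)
planned_layout = {
    "0a": [("shelf1", "FC"), ("shelf2", "FC"), ("shelf3", "FC")],
    "0b": [("shelf4", "FC"), ("shelf5", "SATA")],   # invalid: mixed disk types
    "0c": [("shelf6", "SATA"), ("shelf7", "SATA")],
}

def check_layout(layout):
    """Return a list of human-readable problems with the proposed layout."""
    problems = []
    for loop, shelves in layout.items():
        disk_types = {disk_type for _, disk_type in shelves}
        if len(disk_types) > 1:
            problems.append(f"loop {loop}: mixes disk types {sorted(disk_types)}")
        if len(shelves) > MAX_SHELVES_PER_LOOP:
            problems.append(
                f"loop {loop}: {len(shelves)} shelves exceeds the "
                f"{MAX_SHELVES_PER_LOOP}-shelf limit"
            )
    return problems

if __name__ == "__main__":
    for problem in check_layout(planned_layout):
        print("PROBLEM:", problem)
```

Running this against the example layout flags loop 0b for mixing FC and SATA; a clean result simply prints nothing.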

The first point, the dedicated loops, isn't such a big deal if you are planning on adding a full loop's worth of SATA shelves (6 shelves per loop) and you have an open FC port with nothing else attached (or can consolidate other shelves onto other loops to free up an FC port). So while the dedicated loop requirement can be worked around and may or may not be an issue depending on your setup, it's the second point that poses the most trouble in our environment (and, I would assume, most others as well).

Running the risk of impacting performance on your Tier 1 disk is not acceptable. The applications running on that tier are there for a reason: they need the performance of those faster disks. But how do you know whether you will actually see that impact? Maybe it won't apply to you. Good question; perhaps it won't be an issue in your environment. So ask yourself this: do you know the exact details of your workloads? Of course you don't. You may know that there are databases on some of the exports, or that certain exports are used as regular CIFS or NFS shares for home directories, but you most likely do not know the intimate details of each application's workload. Without that precise knowledge it is nearly impossible to quantify the potential impact ahead of time, and so this possible latency becomes a real concern.

Because of these factors we chose (and I recommend) not to mix FC and SATA shelves on a single (or clustered) system. If you need multiple tiers, you still have options:

  • Implement SAN as your Tier 1 storage and utilize NAS as Tier 2
  • Implement a Tier 1 NAS environment and a Tier 2 NAS environment on separate hardware (read: separate physical systems, either single-headed or clustered)
  • Look into an appliance that can handle different types of disk in the same housing without impact and configure tiering therein.

Tiering your storage is a great idea and allows for a lot of flexibility and possible cost savings for the customer in terms of chargebacks for disk utilization. Even so, you still need to keep performance in mind, and for me the possible performance impact is not worth the risk of mixing these shelf types on a single head.