Why SAS->SATA is not such a great idea
So, we've had some "issue" reports relating to the mpt driver. In almost all cases, the problems come from situations where people are using SATA drives and hooking them into SAS configurations.
Although the technology is supposed to work, and sometimes it works well, it's a bad idea.
Let me elaborate:
- SAS drives are generally subjected to more rigorous quality controls. This is the main reason why they cost more. (That and the market will pay more.)
- SAS to SATA conversion technologies involve a significant level of protocol conversion. While the electricals may be the same, the protocols are quite different.
- Such conversion technology is generally done in hardware, where only the hardware manufacturer has a chance of debugging problems when they occur.
- Some of these hardware implementations remove debugging information that would be present in the SATA packet, and just supply "generic" undebuggable data in the SCSI (SAS) error return.
- The conversion technology creates another potential point of failure.
- Some of these hardware implementations won't be upgradeable, or at least not easily upgradeable, with software.
- SATA drives won't have a SCSI GUID (the ATA specs don't require one), so the fabricated GUID (created by the SAS converter) may change when you move the drive to a different chassis, potentially breaking anything that relies on a stable GUID for the drive. (A rough way to track this is sketched after this list.)
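If you want to guard against that last point, one option is to record the persistent IDs the OS reports for each disk before a move and diff them afterwards. Here is a minimal sketch of that idea for a Linux host, assuming the usual /dev/disk/by-id layout; the paths, the JSON snapshot file, and the WWN check are illustrative assumptions, not anything from the bullet above:

```python
#!/usr/bin/env python3
"""Rough sketch: snapshot the persistent IDs the OS reports for each disk so
the mapping can be compared after drives are moved to another chassis.
Linux example using /dev/disk/by-id; the idea of diffing two snapshots and
the warning heuristic are assumptions, not anything from the original post."""

import json
import os
import sys
from collections import defaultdict

BY_ID = "/dev/disk/by-id"

def snapshot():
    """Map each underlying block device to the persistent IDs udev gave it."""
    ids = defaultdict(list)
    for name in sorted(os.listdir(BY_ID)):
        target = os.path.realpath(os.path.join(BY_ID, name))
        ids[target].append(name)
    return dict(ids)

def main():
    current = snapshot()
    # Flag devices that expose no wwn-* alias: their "GUID" is likely being
    # fabricated by a bridge/expander and may change in another enclosure.
    for dev, names in current.items():
        if not any(n.startswith("wwn-") for n in names):
            print(f"WARNING: {dev} has no WWN-based ID: {names}", file=sys.stderr)

    if len(sys.argv) > 1:
        # Compare against a snapshot taken before the move.
        with open(sys.argv[1]) as fh:
            previous = json.load(fh)
        for dev, names in current.items():
            if previous.get(dev) not in (None, names):
                print(f"{dev}: IDs changed {previous[dev]} -> {names}")
    else:
        json.dump(current, sys.stdout, indent=2)

if __name__ == "__main__":
    main()
```

The same before/after comparison applies on other platforms, using whatever persistent device identifiers the OS exposes.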
Don't get me wrong. For many uses, SATA drives are great. They're great when you need low-cost storage, and when you're connecting them to a system that is purely SATA (such as an AHCI controller), there is no reason to be concerned.
But building a system that relies upon complex protocol conversion in hardware just adds another layer of complexity. And complexity is evil. (KISS.)
So if you want enterprise SAS storage, go ahead and spring for the extra cost of drives that are natively SAS. Goofing around with the hybrid SAS/SATA options is just penny wise and pound foolish.
But hey, it's your data. I just know that I won't be putting my trusted data in a configuration that is effectively undebuggable.
(Note: the above is my own personal opinion, and should not be construed as an official statement from Nexenta.)
Aug 30, 2010: Update: At a significant account, I can say that we (meaning Nexenta) have verified conclusively that SAS/SATA expanders combined with high loads of ZFS activity are highly toxic. So, if you're designing an enterprise storage solution, please consider using SAS all the way to the disk drives, and just skip those cheaper SATA options. You may think SATA looks like a bargain, but when your array goes offline during ZFS scrub or resilver operations because the expander is choking on cache sync commands, you'll really wish you had spent the extra cash up front. Really.
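If you do end up running scrubs or resilvers over that kind of hardware, it is worth watching the scan rate rather than finding out the hard way. Below is a minimal sketch of such a watchdog; the pool name and threshold are hypothetical, and the `zpool status` parsing is approximate, since the scan-line format varies between ZFS releases:

```python
#!/usr/bin/env python3
"""Minimal sketch: poll `zpool status` during a scrub/resilver and complain
if throughput drops below a floor. The regex is approximate and the pool
name and threshold are hypothetical, not anything from the post."""

import re
import subprocess
import sys
import time

POOL = "tank"            # hypothetical pool name
MIN_RATE_MBS = 50.0      # alert below this throughput
UNITS = {"K": 1.0 / 1024, "M": 1.0, "G": 1024.0}

def scan_rate_mbs(status_text):
    """Pull something like '123M/s' out of the scan line, converted to MB/s."""
    match = re.search(r"([\d.]+)([KMG])/s", status_text)
    if not match:
        return None
    value, unit = match.groups()
    return float(value) * UNITS[unit]

def main():
    while True:
        out = subprocess.run(
            ["zpool", "status", POOL],
            capture_output=True, text=True, check=True,
        ).stdout
        if "in progress" not in out:
            print("no scrub/resilver running")
            break
        rate = scan_rate_mbs(out)
        if rate is not None and rate < MIN_RATE_MBS:
            print(f"resilver/scrub crawling at {rate:.1f} MB/s", file=sys.stderr)
        time.sleep(60)

if __name__ == "__main__":
    main()
```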
Comments
And I make this comment for a home NAS user too... really, SAS drives are not that much more expensive, and the additional reliability and simplicity are worth it.
The main reason people use SAS controllers with SATA disks is there's a serious lack of good SATA controllers with more than 4 ports.
On my SPARC NAS Server with 64 bit PCI slots, I had the most difficult time trying to get eSATA and FireWire drives working.
I went through a bunch of different SATA cards with no compatibility success. I think SPARC Solaris does not have reasonable SATA drivers.
I am still struggling to get internal SATA Flash Drives recognized. I had to get a SATA-to-USB adapter to have the devices found, but they still are not working quite right.
I reverted to using dual 1.5 TB external USB drives and I'll start messing with USB to SATA bridges for L2ARC with Flash when I get bored again.
If I was able to find a reasonably priced and compatible SAS card, I would have done it, but the SAS cards I found with 64 bit PCI were not clearly supported. I bet for some people, SAS-SATA is the only game in town.
Alternative: SATA-only expander arrangements with dedicated controller interface(s). Mixing SAS and SATA in the same chassis - especially without AAMUX controlling the SATA disks - is a mess in production.
Most people know about keeping duplicate copies on separate media, offsite, and manage their risks correctly. If drive failures can't be dealt with via hot swaps, then an occasional drive failure can be more of a problem.
But if you can hot swap drives, and you manage the age of your drives appropriately by not buying everything from the same place and starting them all at the same time on the same system (to reduce "bad batch" failures), cheap drives are very much the right choice for most people.
Would you run them on "anything"? No. But certainly, the hardware cost issues really are, in my mind, an important part of deciding what to do.
There is no way I need to spend that kind of money today. I have two Solaris machines, built as duplicates, that rsync over the network (with ZFS snapshots between syncs). I get perfect backups, and I can swap out drives and manage everything appropriately without the added costs. It's been working great for more than a year.
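For what it's worth, a minimal sketch of that rsync-plus-snapshot cycle could look like the following; the host, path, and dataset names are hypothetical, and rsync/zfs are simply invoked as external commands:

```python
#!/usr/bin/env python3
"""Sketch of an rsync-plus-snapshot cycle: pull the data over with rsync,
then take a ZFS snapshot so every sync leaves a point-in-time copy behind.
Host, pool, and dataset names below are hypothetical."""

import subprocess
import time

REMOTE = "otherhost:/tank/data/"   # hypothetical source machine and path
LOCAL = "/backup/data/"            # mountpoint of the local backup dataset
DATASET = "backup/data"            # hypothetical ZFS dataset backing LOCAL

def sync_and_snapshot():
    # Mirror the remote filesystem into the local dataset.
    subprocess.run(["rsync", "-a", "--delete", REMOTE, LOCAL], check=True)
    # Freeze the result so the next sync can't overwrite a known-good copy.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@rsync-{stamp}"], check=True)

if __name__ == "__main__":
    sync_and_snapshot()
```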
I know that it has been a while since you posted this article, but I was wondering if this means that you do not recommend using SATA SSDs for the L2ARC and ZIL? These would also be plugged into the backplane along with the storage pool consisting of 100% SAS drives.
I wonder where you are actually buying your hardware from? ;-) .. Example from a German retailer:
Seagate Constellation ES.3 4TB, SATA 6Gb/s = 278 EUR
and
Seagate Constellation ES.3 4TB, SAS 6Gb/s (ST4000NM0023) = 288 EUR
SATA vs. SAS .. the difference is just 10 EUR (the same, btw, also applies to the WD RE4) .. so either we're comparing the wrong brands/types of HDDs here or he has a point.
BTW: of course I noticed that you were comparing SATA 10k vs. SAS 10k, but I guess that wasn't his intention, hence the 7.2k comparison.
regards,
Patrick
Now some of my drives are showing as UNAVAIL with read/write errors, my pool is suspended, and resilvering is running at 8.00 MB/s with several hundred hours left...
I wish vendors or engineering standards prevented SATA from being used in this config; it works without errors... until you're using a lot of disks and resilvering...
Likely why these disks are failing so often, too.
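For perspective, the arithmetic on that rate is brutal. A rough back-of-the-envelope, assuming (as an illustration, not from the comment) a single 4 TB drive:

```python
# Back-of-the-envelope only: how long a resilver takes at a given rate.
# The 8 MB/s figure is from the comment above; the drive size is assumed.
drive_tb = 4                       # assumed drive capacity in TB
rate_mb_s = 8.0                    # observed resilver rate

seconds = drive_tb * 1_000_000 / rate_mb_s   # TB -> MB, then divide by MB/s
hours = seconds / 3600
print(f"{drive_tb} TB at {rate_mb_s} MB/s ~= {hours:.0f} hours")   # ~139 hours
```

Scale that across a pool's worth of allocated data and "several hundred hours" is exactly what you'd expect.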