Why SAS->SATA is not such a great idea

So, we've had some "issue" reports relating to the mpt driver. In almost all cases, the results are related to situations where people are using SATA drives, and hooking them into SAS configurations.

Although the technology is supposed to work, and sometimes it works well, it's a bad idea.

Let me elaborate:

  • SAS drives are generally subjected to more rigorous quality controls. This is the main reason why they cost more. (That and the market will pay more.)
  • SAS to SATA conversion technologies involve a significant level of protocol conversion. While the electricals may be the same, the protocols are quite different.
  • Such conversion technology is generally done in hardware, where only the hardware manufacturer has a chance of debugging problems when they occur.
  • Some of these hardware implementations remove debugging information that would be present in the SATA packet, and just supply "generic" undebuggable data in the SCSI (SAS) error return.
  • The conversion technology creates another potential point of failure.
  • Some of these hardware implementations won't be upgradeable, or at least not easily upgradeable, with software.
  • SATA drives won't have a SCSI GUID (ATA specs don't require it), and so the fabricated GUID (created by the SAS converter) may be different when you move the drive to a different chassis, potentially breaking things that rely on having a stable GUID for the drive.
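The GUID-stability point in the last bullet can be checked directly: record each drive's reported WWN/GUID, and compare it against what the system reports after the drive has been moved. A minimal sketch of that comparison; the device names and WWN values below are made up for illustration (on illumos/Solaris the real values come from tools like `format`, on Linux from `/dev/disk/by-id`):

```python
# Compare a saved drive-GUID inventory against what the system reports now.
# All device names and WWNs below are illustrative, not from a real system.

def changed_guids(saved, current):
    """Return drives whose reported GUID differs from the saved inventory."""
    return {dev: (saved[dev], current[dev])
            for dev in saved
            if dev in current and saved[dev] != current[dev]}

saved = {
    "c0t0d0": "5000c500a1b2c3d4",   # native SAS drive: WWN burned into firmware
    "c0t1d0": "50024e9001234567",   # SATA drive behind a converter: GUID fabricated
}

# After a chassis move, the converter fabricates a different GUID for the
# SATA drive, while the native SAS drive's WWN is unchanged.
current = {
    "c0t0d0": "5000c500a1b2c3d4",
    "c0t1d0": "50024e9089abcdef",
}

print(changed_guids(saved, current))
```

Anything keyed on the saved GUID (device aliases, pool configuration, monitoring) would mis-identify the SATA drive after the move, which is exactly the breakage the bullet warns about.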

Don't get me wrong. For many uses, SATA drives are great. They're great when you need low cost storage, and when you are connecting to a system that is purely SATA (such as to an AHCI controller), there is no reason to be concerned.

But building a system that relies upon complex protocol conversion in hardware just adds another level of complexity. And complexity is evil. (KISS.)

So if you want enterprise SAS storage, then go ahead and spring for the extra cost of drives that are natively SAS. Goofing around with the hybrid SAS/SATA options is just penny wise and pound foolish.

But hey, it's your data. I just know that I won't be putting my trusted data in a configuration that is effectively undebuggable.

(Note: the above is my own personal opinion, and should not be construed as an official statement from Nexenta.)

Update (Aug 30, 2010): At a significant account, I can say that we (meaning Nexenta) have conclusively verified that SATA drives behind SAS expanders, combined with high loads of ZFS activity, are highly toxic. So, if you're designing an enterprise storage solution, please consider using SAS all the way to the disk drives, and just skip those cheaper SATA options. You may think SATA looks like a bargain, but when your array goes offline during ZFS scrub or resilver operations because the expander is choking on cache sync commands, you'll really wish you had spent the extra cash up front. Really.
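The failure mode described above typically shows up as a resilver that has slowed to a crawl. One way to watch for it is to parse the scan line of `zpool status` output; a minimal sketch, where the sample output text and the rate threshold are illustrative assumptions, not from a real system:

```python
import re

# Flag a resilver that has slowed to a crawl by parsing the scan line of
# `zpool status` output. The sample below is illustrative only.

SAMPLE = """\
  pool: tank
 state: DEGRADED
  scan: resilver in progress since Mon Aug 30 10:00:00 2010
    1.2T scanned out of 15T at 8.00M/s, 512h3m to go
"""

def resilver_rate_mbs(status_text):
    """Return the resilver rate in MB/s, or None if no resilver is running."""
    m = re.search(r"at ([\d.]+)M/s", status_text)
    return float(m.group(1)) if m else None

rate = resilver_rate_mbs(SAMPLE)
if rate is not None and rate < 20.0:  # threshold is a guess; tune for your hardware
    print(f"resilver crawling at {rate} MB/s -- check for misbehaving SATA drives")
```

In a real deployment you would feed this the live output of `zpool status` on a schedule and alert when the rate stays below your baseline.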

Comments

James said…
I know your post is more targeted at enterprises, but I use SATA drives on a SAS controller for my home NAS because SATA port multipliers aren't supported in Solaris. Even if they were, I've heard horror stories about them. I haven't had any issues with my current setup, so I'll continue to take my chances.
Unknown said…
If you need a port multiplier, then I'd argue it's time to reconsider whether SATA is the best choice for you. My own opinion of course, but I think once you get beyond one disk per cable, it's time to work with a technology that was designed to support a regular multi-device bus architecture. That isn't SATA, due to its heritage from ATA. (Well, there's the two devices per bus thing, but that's it.)

And I make this comment for a home NAS user too... really SAS drives are not that much more expensive, and the additional reliability and simplicity is worth it.
gtirloni said…
Have you heard about any issues from people using SAS backplanes (SuperMicro) and mixing SAS disks with SATA SSDs ? SAS disks are becoming very cost effective and we would like to use those.. but SAS SSDs aren't so mainstream yet. Mixing them works fine in the lab, but I assume there are some corner cases which we probably haven't run into yet.
Anonymous said…
Not that much more expensive? I can get a 1TB 7200rpm 32MB cache SATA Seagate for $AU80, vs $AU270 for a 1TB 7200rpm SAS Seagate with half the cache. And a 2TB WD RE4 SATA 64MB cache is $AU370 vs $AU430 for the 2TB SAS Seagate Constellation, again with only 16MB cache.

The main reason people use SAS controllers with SATA disks is there's a serious lack of good SATA controllers with more than 4 ports.
Anonymous said…
Or conversely, consumer-grade SAS drives would be nice, to fill the gap between "4 SATA drives per controller" and "nearly infinite thousand-dollar drives".
David said…
Decent SATA drivers under SPARC Solaris seem to be a real problem. I wonder if anyone else under SPARC went through the work I did.

On my SPARC NAS Server with 64 bit PCI slots, I had the most difficult time trying to get eSATA and FireWire drives working.

I went through a bunch of different SATA cards with no compatibility success. I think SPARC Solaris does not have reasonable SATA drivers.

I am still struggling to get internal SATA flash drives recognized. I had to get a SATA to USB adapter to have the devices get found, but they still are not working quite right.

I reverted to using dual 1.5 TB external USB drives and I'll start messing with USB to SATA bridges for L2ARC with Flash when I get bored again.

If I was able to find a reasonably priced and compatible SAS card, I would have done it, but the SAS cards I found with 64 bit PCI were not clearly supported. I bet for some people, SAS-SATA is the only game in town.
Unknown said…
SAS backplanes are best used exclusively with SAS drives, be they SSD or SATA. If you need SATA SSD storage, perhaps you can use that in the main system chassis with a pure SATA HBA. That would be my design of choice, at least.
Couldn't agree more. Mixing SAS and SATA (on the same bus/expander leg) for high performance is an oxymoron. The risks associated with poorly behaving SATA disks devalue the SAS investment.

Alternative: SATA-only expander arrangements with dedicated controller interface(s). Mixing SAS and SATA in the same chassis - especially without AAMUX controlling the SATA disks - is a mess in production.
gwon said…
The primary issue is whether you have storage management technologies that match the risk of the storage media technologies you have selected. Storage management systems such as RAID-5, mirroring, and other "copies of the data" technologies can greatly reduce your exposure to drive failure. What you still have to live with is the chance of "massive" failure, such as a power system issue physically destroying the connected hardware, or some other worst-case scenario.

Most people know to keep duplicate copies on separate media, offsite, and manage their risks accordingly. If drive failures can't be dealt with via hot swap, then an occasional drive failure becomes more of a problem.

But, if you can hot swap drives, and you manage the age of your drives appropriately by not buying everything from the same place and starting them all at the same time on the same system to reduce "bad batch failures", cheap drives are very much the right choice for most people.

Would you run them on "anything", no. But certainly, the hardware cost issues really are, in my mind, an important part of deciding what to do.

There is no way I need to spend that kind of money today. I have two Solaris machines, built in duplicate, that rsync over the network (with zfs snapshots between syncs). I get perfect backups, and I can swap out drives and manage everything appropriately without the added costs. It's been working great for more than a year.
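The snapshot-then-rsync scheme gwon describes can be sketched as a small helper that builds the two commands for one backup cycle. The pool, dataset, and peer names here are assumptions for illustration, not gwon's actual setup:

```python
from datetime import datetime

# Sketch of one snapshot-then-rsync backup cycle, as described above.
# Dataset, mountpoint, and remote names are made up for illustration.

def backup_commands(dataset, mountpoint, remote, when=None):
    """Build the zfs-snapshot and rsync commands for one backup cycle."""
    when = when or datetime.now()
    snap = f"{dataset}@backup-{when:%Y%m%d-%H%M%S}"
    return [
        ["zfs", "snapshot", snap],                              # keep a local rollback point
        ["rsync", "-a", "--delete", f"{mountpoint}/", remote],  # sync to the duplicate machine
    ]

cmds = backup_commands("tank/data", "/tank/data", "peer:/tank/data",
                       when=datetime(2010, 8, 30, 12, 0, 0))
for c in cmds:
    print(" ".join(c))
```

The snapshot gives you point-in-time rollback on the source, while rsync keeps the second machine current; a real script would run these via subprocess on a cron schedule and check exit codes.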
Chris said…
Hi
I know that it has been a while since you posted this article, but I was wondering if this means that you do not recommend using SATA SSDs for the L2ARC and ZIL? These would also be plugged into the backplane along with the storage pool consisting of 100% SAS drives.
WalkaboutTigger said…
You say that SAS drives are not much more expensive than SATA drives. Either you are paying far too much for your SATA drives, or you need to tell me where you're purchasing your SAS drives from. I can purchase a WD 2 TB SATA 10k RPM drive for $200. The same SAS drive is $1700. 8 1/2 times the price is NOT just a bit more expensive.
Unknown said…
@WalkaboutTigger

I wonder where you are actually buying your hardware from? ;-) .. Example from a german retailer:

Seagate Constellation ES.3 4TB, SATA 6Gb/s = 278 EUR

and

Seagate Constellation ES.3 4TB, SAS 6Gb/s (ST4000NM0023) = 288 EUR

SATA vs SAS .. the difference is just 10 EUR (the same above btw also applies to the WD RE4) .. so either we're comparing the wrong brands/types of HDDs here or he has a point.

BTW: of course I noticed that you were comparing SATA 10k vs. SAS 10k, but I guess this wasn't his intention, therefore the 7.2k comparison.


regards,
Patrick
wish i read this 9 months ago. using wd sata drives with sas config on solaris 11. everything was 'fine' until a disk degraded and i started resilvering (with 15 TB over 9 2TB drives and 2 ssds for zfs log)

now some of my drives are showing as unavail with read/write errors, my pool is suspended, and resilvering is at 8.00 MB/s with several hundred hours left...

wish vendors or engineering standards prevented sata from being used in this config... everything looks error-free until you're using a lot of disk and resilvering...

likely why these disks are failing so often too.
