Tuesday, July 17, 2007
For those of you who care, I think we're in the home stretch for integration of afe and mxfe into Nevada.
I spent the weekend going through the code and reworking large portions of it to use zero-copy DMA wherever it was rational to do so (buffer loan-up for receive, direct binding for transmit).
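Roughly, the two tricks look like the sketch below. This is only an illustration built on the standard desballoc(9F) and ddi_dma_addr_bind_handle(9F) interfaces; the rxbuf structure and helper names are invented for this post and are not the code that's actually going back.

#include <sys/types.h>
#include <sys/stream.h>
#include <sys/strsun.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Illustrative receive-buffer wrapper; the real drivers' structures differ. */
typedef struct rxbuf {
        frtn_t          rx_frtn;        /* free routine/arg handed to desballoc */
        caddr_t         rx_kaddr;       /* kernel address of the DMA buffer */
        size_t          rx_size;        /* total buffer size */
} rxbuf_t;

/*
 * Receive "loan up": wrap the already-mapped (and ddi_dma_sync'ed) DMA
 * buffer in an mblk so the stack reads the packet in place.  The rx_frtn
 * free routine (not shown) recycles the buffer onto the receive ring once
 * the upper layers free the message.
 */
static mblk_t *
rx_loanup(rxbuf_t *rxb, size_t pktlen)
{
        mblk_t *mp;

        mp = desballoc((uchar_t *)rxb->rx_kaddr, pktlen, BPRI_MED,
            &rxb->rx_frtn);
        if (mp != NULL)
                mp->b_wptr = mp->b_rptr + pktlen;
        return (mp);            /* NULL means fall back to a copying path */
}

/*
 * Transmit direct binding: map the mblk's own data for DMA rather than
 * bcopy'ing it into a preallocated bounce buffer.
 */
static int
tx_bind(ddi_dma_handle_t dmah, mblk_t *mp, ddi_dma_cookie_t *cookie,
    uint_t *ncookies)
{
        return (ddi_dma_addr_bind_handle(dmah, NULL, (caddr_t)mp->b_rptr,
            MBLKL(mp), DDI_DMA_WRITE | DDI_DMA_STREAMING,
            DDI_DMA_DONTWAIT, NULL, cookie, ncookies));
}

The catch, of course, is that binding and unbinding aren't free either, which is why I only did this where it was rational: for very small frames a plain bcopy into a preallocated buffer is usually cheaper.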
The NICDRV test suite has also identified a number of issues with edge cases that didn't come up often, but which I'm glad to know about and have fixed in the version of the code getting putback.
They're only 100 Mbps NICs, but the version of the code going into Nevada will make them run at pretty much the same speed as any other 100 Mbps NIC that lacks IP checksum offload.
And, they are still 100% DDI compliant. :-) Thankfully the DDI has been extended for OpenSolaris since the last time I worried about such things (back in Solaris 8 days).
Anyway, looking forward to putback in b70 or b71. (Depending on whether I can get reviewers in time for b70 putback. If you can help me review, please let me know!)
Thursday, July 12, 2007
HME putback done
In case anyone ever wondered what a putback message looked like:
Btw, the case to make this work for qfe (PSARC 2007/404) was approved yesterday as well. There are some internal resourcing questions yet to be answered, but at least architecturally, the approach has been approved.
I would really, really love it if some qfe owners would file a bug asking for qfe to be GLDv3. It would make it much easier for me, I think, if this case were seen as a response to customer demand. (So many people have requested qfe GLDv3 support... please file a bug! Even better, file an *escalation*!)
Note: none of this is eligible for backport to S10. You have to use OpenSolaris if you want the good stuff. Gotta keep a few carrots in reserve, right? (Seriously, ndd and Sun Trunking incompatibilities make it unsuitable for backport to S10 anyway.)
********* This mail is automatically generated *******
Your putback for the following fix(es) is complete:
PSARC 2007/319 HME GLDv3 conversion
4891284 RFE to add debug kstat counter for promiscuous mode to hme driver
6345963 panic in hme
6554790 Race betweeen hmedetach and hmestat_kstat_update
6568532 hme should support GLDv3
6578294 hme does not support hardware checksum
These fixes will be in release:
snv_70
The gate's automated scripts will mark these bugs "8-Fix Available"
momentarily, and the gatekeeper will mark them "10-Fix Delivered"
as soon as the gate has been delivered to the WOS. You should not
need to update the bug status.
Your Friendly Gatekeepers
Sunday, July 8, 2007
hme GLDv3 and *hardware checksum*
So I've been trying to run my GLDv3 port of hme through a very rigorous battery of tests called "nicdrv" (the test suite used for recent NIC drivers by Sun QE... hopefully soon to be open sourced, but that's another topic.)
Anyway, the test system I've been using is a poor little 360 MHz US-II Tadpole system. (A Darwin workalike, in a shoe-box form factor.)
Unfortunately, the test times out while trying to do the UDP RX tests. That really shouldn't be surprising... the test was designed for gigabit network gear with gigahertz system processors (or better).
Well, it turns out that the hme driver can be faster. Considerably faster. Because the hardware supports IP checksum offload. But it was never enabled. (Note that this is true for the quad-port qfe boards as well, which are basically the same controller behind a bridge chip.)
So, I've decided to have another go at getting a fully successful test result with this hardware, by modifying the driver to support IP checksum offload. I'm hoping it may make the difference between a pass and a fail. With tiny frames, every little bit helps.
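To give a flavor of the GLDv3 side of it, the sketch below shows roughly how a driver advertises transmit checksum offload and picks up the per-packet checksum metadata. It's just a sketch: the function names are made up, and whether hme ends up advertising HCKSUM_INET_PARTIAL exactly like this is something the real putback will answer.

#include <sys/stream.h>
#include <sys/strsubr.h>
#include <sys/mac.h>
#include <sys/pattr.h>

/*
 * GLDv3 capability callback: tell the MAC layer that the hardware can
 * compute a partial (ones-complement) checksum on transmit.
 */
static boolean_t
hme_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
{
        switch (cap) {
        case MAC_CAPAB_HCKSUM:
                *(uint32_t *)cap_data = HCKSUM_INET_PARTIAL;
                return (B_TRUE);
        default:
                return (B_FALSE);
        }
}

/*
 * Transmit side: the stack attaches checksum metadata to each mblk.  The
 * driver retrieves it, and when HCK_PARTIALCKSUM is set it programs the
 * checksum start/stuff offsets into the transmit descriptor (not shown).
 */
static void
hme_tx_cksum(mblk_t *mp)
{
        uint32_t start, stuff, end, value, flags;

        hcksum_retrieve(mp, NULL, NULL, &start, &stuff, &end, &value, &flags);
        if (flags & HCK_PARTIALCKSUM) {
                /* hand the start/stuff offsets to the hardware here */
        }
}

On receive, the same idea runs in reverse: the driver reads the hardware's checksum result out of the descriptor and attaches it to the mblk, so the stack can skip its own software checksum.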
Stay tuned here. Note that I also have logged performance data from earlier test runs, so I'll be able to compare against that as well. One additional wrinkle in all this is that I now feel compelled to test this with SBus hme hardware. The oldest system I can find is a Sun Ultra 2. (Older Ultra 1 systems with 200 MHz and slower procs won't work. If anyone has an old Ultra 1 with 250 MHz or better procs running Nevada, let me know!)
Thursday, June 28, 2007
GLDv3 iprb putback
I just putback the GLDv3 conversion of iprb. It will be in the next SXDE/SXCE (b69 and later). It is still closed source, but I think that may change soon, too. (All the technical information in the code is reproduced in a public open-source developer's guide downloadable from Intel, with the exception of the binary microcode, which is in the FreeBSD tree under an Intel-owned BSD license.)
Anyway, I'm told Sun is having a meeting with Intel, and one of the agenda items is opening the source to iprb.
Meanwhile, enjoy the GLDv3 goodness.
Monday, June 25, 2007
afe GLDv3-ified
I've converted "afe" to GLDv3 in anticipation of it getting putback. I've also greatly simplified the buffering logic in it, because I was trying to be "too clever" and I think we were seeing failures during the extreme testing that Sun QA likes to perform.
Anyway, this means that when afe gets putback (it's on the schedule for snv_68, but that may or may not happen), it will be GLDv3. Yay. Here's something to whet your appetite:
garrett@doc{44}> pfexec dladm show-link
eri0 type: non-vlan mtu: 1500 device: eri0
afe0 type: non-vlan mtu: 1500 device: afe0
This was done on a Sun Blade 100. No more legacy NICs!
This is also helpful for laptop owners, because afe is one of the more common CardBus devices. So, your CardBus 10/100 NIC will work with NWAM.
If folks running snv_66 or newer want test binaries, let me know. I can offer them up in exchange for beer.
Wednesday, June 20, 2007
mxfe code reviewers sought
I'm also looking for folks to review my mxfe driver. It is posted at http://cr.opensolaris.org/~gdamore/mxfe
Thanks!
hme code reviewers sought
I need to get code review coverage over the hme GLDv3 conversion. This is also a good chance to learn what it takes to convert a legacy DLPI driver to GLDv3.
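If you've never seen one of these before: the heart of the conversion is surprisingly small. All of the hand-rolled DLPI stream handling goes away, and the driver simply registers its entry points with the MAC layer at attach time. A rough sketch follows; the hme_t structure, its fields, and hme_m_callbacks are placeholders here, not necessarily what the real code looks like.

#include <sys/ddi.h>
#include <sys/sunddi.h>
#include <sys/ethernet.h>
#include <sys/mac.h>
#include <sys/mac_ether.h>

/* Illustrative soft-state structure; the real one is far larger. */
typedef struct hme_soft {
        uint8_t         hme_factaddr[ETHERADDRL];  /* factory MAC address */
        mac_handle_t    hme_mh;                    /* handle from mac_register() */
} hme_t;

/* Callback vector (m_start, m_stop, m_tx, m_multicst, ...) defined elsewhere. */
extern mac_callbacks_t hme_m_callbacks;

/*
 * Register with the GLDv3 (Nemo) framework, replacing the driver's old
 * DLPI message handling.
 */
static int
hme_mac_register(dev_info_t *dip, hme_t *hmep)
{
        mac_register_t  *macp;
        int             err;

        if ((macp = mac_alloc(MAC_VERSION)) == NULL)
                return (DDI_FAILURE);

        macp->m_type_ident = MAC_PLUGIN_IDENT_ETHER;
        macp->m_driver = hmep;                  /* passed back to each callback */
        macp->m_dip = dip;
        macp->m_src_addr = hmep->hme_factaddr;  /* default unicast address */
        macp->m_callbacks = &hme_m_callbacks;
        macp->m_min_sdu = 0;
        macp->m_max_sdu = ETHERMTU;

        err = mac_register(macp, &hmep->hme_mh);
        mac_free(macp);

        return (err == 0 ? DDI_SUCCESS : DDI_FAILURE);
}

After that, received packets go up via mac_rx() instead of putnext(), link state changes are reported with mac_link_update(), and most of the rest of the work is deleting DLPI code.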
If you can help out, please look at the code at http://cr.opensolaris.org/~gdamore/nemo-hme/
The sooner I can get quality code review and test coverage, the sooner we can put this back! :-)