I recently upgraded to the "latest" OpenSolaris on my desktop. This is also my primary NIS and NFS server and boot server.
I was rather dismayed at first by some of the bits that were missing. NIS, DHCP, and more. I finally got it all working, and I thought I'd document it here for posterity.
First, install NIS: "pkg install SUNWyp"
(Configuration of NIS left as an "exercise" for the reader.)
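(For anyone who doesn't want to do the exercise cold: the usual NIS master setup is only a few commands. This is a sketch from memory -- the domain name is a placeholder, and the SMF service names are as I recall them, so check your own system before trusting them.)

```shell
# Set the NIS domain name (example.com is a placeholder)
domainname example.com
echo example.com > /etc/defaultdomain

# Build the initial NIS maps for a master server
/usr/sbin/ypinit -m

# Enable the services via SMF
svcadm enable network/nis/server
svcadm enable network/nis/client
```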
Second, DHCP was missing, so we install it as well:
"pkg install SUNWdhcs SUNWdhcsb SUNWdhcm"
(Configuration of DHCP left as an exercise...)
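(Likewise, a bare-bones DHCP server setup via dhcpconfig looks roughly like this -- again a sketch, with the data store path, subnet, netmask, and router address all placeholders you'd substitute for your own network.)

```shell
# Configure the DHCP service with the "files" data store
dhcpconfig -D -r SUNWfiles -p /var/dhcp

# Add a network to serve (subnet, netmask, and router are placeholders)
dhcpconfig -N 192.168.1.0 -m 255.255.255.0 -t 192.168.1.1
```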
Then, it turned out that there was no manifest for TFTPD, and not even a commented-out entry in /etc/services. I copied the line from /etc/inetd.conf on another system:
tftp dgram udp6 wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
Then run "inetconv".
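(Putting the whole tftp dance together -- the inetd.conf line plus the conversion. The resulting SMF service name below is how I remember inetconv naming it, so verify with "svcs tftp" on your own box.)

```shell
# Append the legacy inetd.conf entry for tftp
cat >> /etc/inetd.conf <<'EOF'
tftp dgram udp6 wait root /usr/sbin/in.tftpd in.tftpd -s /tftpboot
EOF

# Convert legacy inetd.conf entries into SMF manifests
/usr/sbin/inetconv

# Enable the newly created service (FMRI from memory -- check "svcs tftp")
svcadm enable svc:/network/tftp/udp6:default
```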
My question to the OpenSolaris/Indiana folks -- why wasn't this listed in the stock inetd.conf, even as a commented-out entry? What I'd have preferred is to see a default service installed, but disabled by default. It took some man page reading and investigation to figure out where tftp was in the "new" world.
Anyway, it's all working now... I'm booting a b95 image over the network as I type this...
Sunday, August 3, 2008
Friday, August 1, 2008
Another SDcard Status Update
Some of you may have been wondering about the work I've done for SDcard. Well, it looks like we have basic agreement at this point, and I am only waiting for the lawyers to draft the final text granting the approval, and then I'll be able to supply the bits publicly, as well as integrate them into OpenSolaris. I'm expecting that this could happen as soon as build 97. Stay tuned, and sorry for the delay.
(Internal folks can access bfu archives for Intel based system at /net/zcube.west/export/ws/sdcard/hg-x86/archives or for Tadpole SPARCLE /net/zcube.west/export/ws/sdcard/hg-sparc/archives. There's an internal webrev up as well at http://jurassic.sfbay/~gd78059/sdcard/ -- enjoy!)
Lint cleanups in kernel
A few weeks ago, I committed changes to start cleaning up a bunch of lint overrides in the kernel, and provided text explaining how to continue the work (along with admonishments to do so).
I am very happy to report that it seems that some folks took what I said to heart, and over the past couple of weeks there have been a continual stream of lint related fixes, such as this one.
Thanks to everyone who's helping out!
Thursday, July 31, 2008
The TeXTing Swindle
I recently started communicating with some family via "texting", because, for reasons I don't fully understand, they will happily respond to a text message (and send them out on their own), but won't answer a regular phone.
So, ignoring the annoyance of trying to draft a message on the toy 10-key keypad on my phone -- which takes approximately 20 times as long as saying the same thing in a voice call -- I started thinking about the economics behind the texting fad.
My provider charges me $0.20 per text message. My overlimit minutes cost about $0.25. (Picture messages cost more -- $1.95 each, I think.)
This is the biggest swindle made by phone companies in recent history.
Look at the data involved.
A phone call (voice message) takes the following bit of bandwidth:
- Assume 8-bit ULAW or ALAW sampling (12 bits compressed to 8 bits)
- Assume 8 kHz sampling, which is typical for "phone quality" audio
That works out to 8 * 8,000 * 60 = 3,840,000 bits per minute of talk.
Now, consider a typical text message. Say 12 lines at 40 characters each (that's more than my phone will display), with each character at 8 bits (we'll assume ASCII for now; I can't imagine trying to conjure up CJK characters on my phone!). That gives 12 * 40 * 8 = 3,840 bits. (I didn't try to make the numbers work out as evenly as they do for this example, it just happened!) So, 3,840 bits per message.
To a rough approximation, a text message costs the consumer about the same as a minute of talk time. HOWEVER, the bandwidth involved is approximately 1/1,000 that of a voice minute. If everyone texted instead of using the voice service, the service provider would have about 1,000 times more capacity!
Put another way, text bits are about 1,000 times more expensive to the end user!
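Those ratios are easy to sanity-check with shell arithmetic (all the numbers are the ones from this post):

```shell
# Voice: 8 bits/sample * 8000 samples/second * 60 seconds per minute
voice_bits=$((8 * 8000 * 60))
echo $voice_bits                  # 3840000 bits per voice minute

# Text: 12 lines * 40 characters * 8 bits per character
text_bits=$((12 * 40 * 8))
echo $text_bits                   # 3840 bits per message

# A voice minute carries about 1,000 text messages' worth of bits
echo $((voice_bits / text_bits))  # 1000
```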
Now, consider pictures. $1.95 is nearly 10x the cost of a text message or a minute of talk time.
A fully decompressed 16-bit VGA (640x480) resolution picture takes about 4.9Mbits.
To a rough approximation, a picture takes about as much bandwidth as a minute of talk time. The picture is therefore 100x less expensive per bit to send than a text message, and about 10x more expensive than a voice call.
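The picture numbers check out the same way. Since plain shell arithmetic is integer-only, the per-bit price ratios below use awk; the prices are the ones quoted above, and the exact ratios come out a bit off the rounded "100x" and "10x" figures:

```shell
# Picture: 640 x 480 pixels at 16 bits/pixel, uncompressed
echo $((640 * 480 * 16))   # 4915200 bits, i.e. about 4.9 Mbits

# Per-bit price ratios: $1.95/picture, $0.20/text, $0.25/voice minute
awk 'BEGIN {
  pic   = 1.95 / (640 * 480 * 16)   # dollars per picture bit
  text  = 0.20 / (12 * 40 * 8)      # dollars per text bit
  voice = 0.25 / (8 * 8000 * 60)    # dollars per voice bit
  printf "text vs picture:  %.0fx\n", text / pic    # ~131x (the rough 100x)
  printf "picture vs voice: %.0fx\n", pic / voice   # ~6x (the rough 10x)
}'
```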
Of course, the secret here is that texting is an incredibly efficient use of bandwidth for sending a message (compared to a voice call), especially with those wacky abbreviations (such as "c u l8r") -- but the end user doesn't realize that it costs the provider far less to deliver the same message as a text. (Put another way, the end user has no idea how much network bandwidth is simply wasted on traditional voice calls.)
The mobile providers have pulled off a real swindle here. The fact that they aren't trying to undercut each other on text messaging hints rather strongly at "collusion" in market pricing -- there should be a lot more competitive pressure to drive the cost of texting down, or even to offer it for free (especially with plans that have unlimited voice, such as mine).
In fact, the phone company should be begging me to use texting instead of voice calls, instead of charging me extra for texting.
Anyone from the mobile companies want to explain how you can get away with this swindle?
Ask your mobile provider why they aren't giving you texting for free with your unlimited voice plan!
Wednesday, July 16, 2008
New Experimental AudioHD Driver
I've just posted the latest experimental version of the OpenSolaris audiohd driver. This driver includes the latest work from the engineers in Beijing, including a widget parser that enables the driver to work on a much larger variety of audio configurations.
As most motherboards these days ship with an audiohd-compliant device, this should greatly expand the support for audio on Solaris systems.
The above driver binary also includes suspend/resume support.
Note that this is not the anticipated Open Sound System driver, but rather an extension of the pre-existing Sun driver. It won't work with OSS. However, there is some chance (I've not tested it myself) that this will even work on Solaris 10.
A webrev of the changes should be posted soon.
In the meantime, if you want to give this a whirl, let me know the results. (To test it, just copy the binary objects into /kernel/drv/amd64/ or /kernel/drv and reboot. -- Yes there are ways to do this without a reboot, but the caveats are too many to list here.)
We're interested in any problem reports, as well as notification of hardware configurations where this either did or did not work. Thanks!
[ update 10/13/2008: I've changed the link to the files list so that readers can find the latest version posted up on the site ]
Tuesday, July 8, 2008
NIC driver family trees
This may be interesting to some people -- and it's recorded here for posterity. A number of NIC drivers in Solaris have been developed using "copy-paste" approaches. As a result, there are a couple of family "lines" of NIC drivers.
The le driver (ancient, 10 Mbit only) is the common root of one heritage. From le was developed hme. From hme were developed qfe, eri, and gem. From these was developed ce. And ultimately, from these was developed nxge (although nxge shares little with its ancestors anymore).
Another heritage goes back to the ancient dnet (tulip) drivers on x86. dnet begat dmfe, which begat bge. There are probably other drivers that bge spawned, but it's hard to know for sure how much copying may have occurred.
afe and mxfe were developed pretty much independently from the others, although afe was developed first, and begat mxfe (which is for very similar hardware -- a good counter example for a case where copying the code made sense -- of course I'm also the original author of both drivers). (They started life as pure DLPI from-scratch drivers, but migrated to GLDv2, and later to GLDv3.)
Murayama's drivers (e.g. sfe) owe some ancestry to Linux, although we're told that all the code was changed so that no derivative code from Linux remains.
I believe e1000g owes its ancestry to a combination of Linux code at Intel, and unique code written just for Solaris. (There was another driver, ipge, for the same hardware, which was part of the le/hme heritage, but it died in late toddlerhood.)
The wifi drivers owe most of their existence to the work done in FreeBSD and OpenBSD.
The other older drivers (pcelx, elxl, pcn, etc.) appear to share little if any heritage with any of the other NIC drivers.
Cut & Paste Device Driver Design
A common approach to device driver design is to start with a device driver for a similar piece of hardware that someone else wrote, and copy it, then modify it. However, this approach is fraught with peril, and I thought it might be a good idea to record my thoughts on this for posterity. If this post saves just one future developer from falling into this pit, it will have been worthwhile.
The first problem is that most device drivers have a number of non-trivial "features" in them that relate to the specific issues of a particular piece of hardware. Often this means that there are internal design features of the driver that are not ideal for a new driver. (A classic example would be nxge's use of particular hacks relating to its location relative to the IOMMU on certain Niagara processors. No other driver is ever going to need that.)
The second problem is that most device drivers are not 100% bug free. The copy-modify approach to coding tends to lead to bugs being copied from one driver to another.
The third, and possibly most significant problem, is that when the copy-modify approach is used, the author doing the copying rarely has a complete understanding of all of the original source driver's features, and the end result is a new driver that the original author isn't aware of (and isn't maintaining), and with code that the new author might not fully understand.
A fourth problem is that real drivers are complicated. I frequently hear advice given to new NIC driver authors to "just copy bge". That is terrible advice to give to someone who may be trying to get their head around a new DDK. As a real driver, bge carries a lot of legacy baggage with it, that new drivers definitely shouldn't repeat. All that baggage obfuscates the original code, and the would-be-copier may have little ability to contain the entire device driver in his head.
This leads to the fifth problem, which is that copying repeats all the optimization steps, but elides the measurement and experience that led to them. This violates one of my core design principles, KISS (keep it simple, stupid) -- design for simplicity first, and optimize only when proven necessary. (Example: does a 100 Mbit NIC really need the logic associated with "buffer loan up"? Almost certainly not! In fact, despite the fact that many NIC drivers carry such logic, dating back to older architectures, the loan up actually has significant problems, and results in lower performance than the far simpler unconditional bcopy.)
A sixth problem is that this approach can often lead to reproduction of design mistakes or limitations. For example, if one were to build a new driver for SPARC hardware by copying hme, the end driver would likely not be 100% portable to x86. (And even if it were made portable, the first pass would probably be quite suboptimal from a performance standpoint owing to hme's use of SPARC-only dvma interfaces.)
The upshot of all this is that I would strongly encourage prospective device driver authors not to start a new driver by copying source code from another driver.
A far better approach, IMO, is to start with a "skeleton driver" (Writing Device Drivers has some such skeleton drivers for different kinds of drivers) and flesh it out as you go. Ask questions on the driver-discuss@opensolaris.org mailing list as well, if you're not familiar with a particular type of driver.
For folks who might argue that starting fresh from a skeleton violates code reuse and increases development costs, I would say that:
- The end driver will almost certainly be more maintainable, as the author will understand every line of code.
- The cost of developing a new driver from scratch is probably not that high compared to the copy-modify approach (unless the source and target drivers are for devices that are very, very similar.)
- The educational benefit of taking this approach is very great as well. I can think of no better way to truly understand the DDI/DDK than to write a device driver from scratch.
- For commonly reused code, a library with well defined interfaces is a far far better approach. The driver frameworks in Solaris such as SCSA and GLD are good examples of this. (Third party approaches such as Murayama's GEM also qualify in this regard.) This way there is only one real copy of the code, and clear interface boundaries make it possible for multiple parties to reuse the code safely, without depending on "implementation details".
Thursday, June 12, 2008
ISA sbpro driver removed
Starting with build 93, OpenSolaris will no longer have support for the ISA-based SoundBlaster Pro, SoundBlaster 16, and SoundBlaster AWE32 devices. This shouldn't impact anyone, although it's possible that bochs or qemu users may be affected. (I've not heard of any such users successfully using the sbpro driver in a Solaris guest environment, though.)
In the future, we may look into adding support for emulated ESS 1370 devices.
In the meantime, VirtualBox emulates an Intel ICH-based AC'97 device, which works with the audio810 driver.
Blogger Bug
It's possible that this is just showing up for me, but recently the "toolbar" at the top of my blog has become unusable. Instead of a normal set of links, I just get a row of repeated Blogger icons. I'm using Firefox 2.0.0.7 on Solaris Nevada.
Anyone else seen this?
Wednesday, June 4, 2008
Audio Driver Poll
I'm interested in collecting data about audio devices that folks are using. So, I've a few questions, which maybe folks could send answers to me. Here are the questions:
- Is there already support in Solaris for your audio devices/needs? (If yes, stop here, and no need to send me any information -- you're already covered as a "mainstream" user.)
- If you're using 4Front's OSS, and you're using a hardware driver other than apci97, hdaudio, ich, via8233, atiaudio, then please let me know what driver you are using, and what the actual audio hardware is (cat /dev/sndstat might be useful here.) (If the audio is built into a motherboard, I'd like to know the make/model of computer, and the rough date -- year -- that you purchased it.)
- Do you use/would you use hardware MIDI support?
- Are you using digital audio (Dolby Digital/AC3, and/or SPDIF) on your Solaris system? If so, please provide detail.
- Do you have any use for more than a single active input source? (I.e. do you need more than a microphone, or line input, to be supported simultaneously for recording?) If so, please provide detail.
- Do you have any other audio needs for Solaris beyond normal business/consumer audio? (I.e. I assume that most folks want audio good enough for DVD playback, gaming, and video conferencing.) Particularly, if you want to use Solaris audio to do production work, internet radio broadcasting, etc. then I'd like to know about that.
- If the answer to #1 is "no", then if you're willing, I'd like to have your e-mail address so that I might contact you to discuss your particular needs/application further.
Tuesday, June 3, 2008
Boycott Laptops with Broadcom WiFi
It seems to be a recurring theme: we keep hearing about different WLAN parts that don't work with the NDIS wrapper, or about other problems building the NDIS wrapper, etc.
I've never used the NDIS wrapper, and I refuse to do so. I also refuse to purchase or recommend any laptop with a Broadcom WLAN part on it, at least until Broadcom changes their position and makes it possible for 3rd parties (even under NDA) to develop device drivers for their WLAN products.
The reason is simple: until laptop manufacturers start losing sales due to people who take the same position, they won't stop including Broadcom WLAN on their products. The loss of a few individual WLAN card sales won't impact Broadcom at all, but if Gateway or Dell stops purchasing their WLAN parts, then it's a whole new ballgame. And the more laptop vendors that we can get to understand that use of Broadcom leads to lost sales, the more impact it will have.
Either Broadcom will take notice, and correct their behavior (e.g. by offering 3rd parties access to device driver info under NDA, writing drivers themselves for platforms like OpenSolaris, or even better, offering up open technical specs), or their WLAN products will gradually fade from popularity so that they are no longer relevant.
To be honest, I don't care which result comes about. But please, don't use or purchase Broadcom WLAN. And to those of you writing neat things like NDIS wrapper and reverse engineering efforts like the bwi driver in *BSD, I'd recommend you rethink whether enabling further sales of laptops bundled with Broadcom WLAN is really something you want to encourage.
(For specific alternatives to Broadcom WLAN, look for either Intel or Atheros WIFI.)
Suspend/Resume Goodness!
It's been a busy week.
In the past week, there have been three separate putbacks:
1) Kerry Shut putback a fix for audiohd to suspend/resume
2) Brian Xu putback a fix for iwk to suspend/resume
3) Judy Chen putback a fix for ath to suspend/resume
The upshot of this is that if you have a laptop with an Nvidia graphics card, there's a fair chance your laptop will support suspend (and resume!).
A big thanks to everyone who's been working on this.
Wednesday, May 21, 2008
Brussels NDD compatibility code cleanup
I've just putback the changes to afe and mxfe to rip out the driver-private ndd support code and replace it with much cleaner and simpler mc_setprop(), mc_getprop() property access functions supplied by Brussels. For common link parameters Brussels does the NDD compatibility support for us. Yay! Drivers can be smaller.
There are a couple of opportunities here for folks to contribute driver improvements:
1) convert existing NIC drivers to the newer framework. E.g. rge, dmfe, maybe others.... (hme and eri for sure, but they may be hard due to the plethora of driver private properties they support via ndd).
2) try hard to remove private driver ioctl() support in favor of Brussels property functions
3) ADMtek centaur parts can support flow control, on certain hardware (pretty much anything shipped in the past 5-7 years.) Adding support for this in afe might be a relatively simple project, especially for someone familiar with ethernet flow control.
Contact me if you want to work on any of the above.
-- Garrett
Friday, April 25, 2008
New Project Direction -- Audio
So, since some folks may be wondering what I'm up to, I thought I'd briefly mention it here.
I've been asked to serve as tech lead on the project to integrate the software from 4Front's Open Sound System into OpenSolaris.
Surprisingly, this task isn't quite as straightforward as it might seem. There are a number of outstanding issues that have to be resolved before the project can integrate, and we're working frantically to resolve them all. We've also staffed up the project to increase the manpower significantly beyond what was associated with it in the past. So we are looking to drive this project to a successful conclusion soon.
I'll have news about this in the future -- watch this space. But I will say this much -- it looks like in the not-distant-future there will be the ability to use OSS APIs from userland applications on *all* Solaris systems. This includes systems with older chips not supported by 4Front today, and Sun Ray thin client systems. Stay tuned.
(The other upshot with this project is that it is taking a great deal of my time, so my participation in other forums may appear to have dropped off -- but that is only so that I can devote as much of my time to making the OSS project a success. This is the same reason that I will not be attending the OpenSolaris Developer Summit this go around.... anything that detracts from getting work done is being set aside for now.)
SDcard Status Update
For those of you who've been wondering what happened to the SDcard work...
The technical work finished quite a while ago. However, due to some vague language in the disclaimers associated with the SDcard simplified specifications, Sun has decided it is best if we are a member and have a full license to the SDcard specifications (although only the simplified specs were used in its development.)
Of course, this got the lawyers involved in reviewing license agreements and membership agreements, and purchasing machinery engaged, since there is now a transfer of funds involved. (The funds transfers have already been approved.)
The very last step to having this stuff in OpenSolaris is clearing the hurdles with the legal group (and the latest is some concern about trademark rules associated with the SDcard org.) Once we get those final hurdles cleared, hopefully I'll be able to putback the code.
And yes, it's all Open Source -- CDDL. Watch for it in build 90 (hopefully).
(The rule that any time you involve lawyers in a project, take your original time estimate, double it, and move to the next largest unit of measurement has just about held true in this particular case.)
Friday, March 28, 2008
Five-seven
This week (Tuesday and Wednesday) my father took my 8-year old daughter to Joshua Tree National Park to do some rock climbing. She'd done some simpler climbing before, briefly, a few years ago, and had enjoyed it. (Of course, she did awesome, and everyone around seemed quite impressed by her awesome instincts. Watching her route-find, and use hand holds and moves with flexibility that I can only dream about being able to do was very, very cool.)
The added bonus here was that I was invited to go along as well -- I had never been rock climbing before, and I was anxious to try it myself. (Dad's been climbing for about two years now, and talking about it pretty much continuously since -- now I think I know why.) It was awesome! First off, Joshua Tree National Park is absolutely amazing... and it's only about 90 minutes away by car from where I live -- I can't believe I've been missing out on this. (Even if you don't rock climb, there are some beautiful hikes, world-class rock scrambling -- which is basically half-way between climbing and hiking, no rope required (usually) -- and the natural beauty of the place is astonishing.)
But what was really cool was the climbing. Over two days, we did a number of different climbs (all top-rope climbs), varying from about 5.4 to 5.7. (This is a scale of difficulty, which is too much to explain here.) Prior to the 5.6 and 5.7 climbs, I recall looking up with butterflies in my stomach thinking, "I'm going to climb what? Surely you jest!" (Looking for a foothold on a vertical face, that might be less than a quarter inch wide... and then actually being able to use it to hold your entire weight... well you've got to try it to believe it. Climbing shoes stick like glue.)
During the climb, the butterflies completely vanished, and I was able to focus on getting the job done. (Probably because I never looked further down than my next foothold...)
The best part, after having done it, was the endorphin high at the top, having actually done the climb without giving up, and without actually falling (though a fall is only a couple of inches with a belayed top-rope). It's a huge sense of achievement. To anyone who hasn't tried this before, I highly recommend it.
Yeah, I'll be going back. It was cool out-climbing Dad (gee, wonder where I got that competitive gene) on the final 5.7, but I'm disappointed that I didn't try one of the 5.9 routes he did on the first day, and I definitely want to go back and do the multi-pitch climb that we turned back on after Brandy got an understandable case of the jitters and chills. (Hanging out on a windy ledge nearly 100 vertical feet up, knowing that there were three more pitches to go, I certainly sympathized with her sudden onset of acrophobia.)
Congrats to the new OGB
The results of the OpenSolaris 2008 ballot are in -- congratulations to the members-elect. It looks like a solid group of folks, and I am encouraged for the new year! (On a side note, I'd like to have seen a bit more representation from non-Sun employees, but the elected members are all folks I believe have a high level of integrity, and will serve the community's interests well.)
Monday, March 10, 2008
I voted!
I just recorded my vote in OpenSolaris. If you're a Core Contributor, please go to poll.opensolaris.org for instructions to register your vote!
For the curious, I voted FOR the two amendments, and my priority list is for a public bug system, public RTI system, SPARC build farm, x64 build farm, and to clean up inactive CGs.
I am not reporting my OGB selections, other than to say that it included a mix of candidates from Sun and non-Sun candidates, and included some former OGB/CAB members, and some fresh faces.
Friday, March 7, 2008
return of iwk
Owners of laptops with Intel 4965 802.11n hardware will be glad to know that iwk has returned. Hopefully, all the legal confusion has been sorted out properly this time, so it should be here to stay. For such small technical changes, there was a lot of work involved to make this happen -- a big thank-you to everyone who got it done, and to the community, who've been patient with us while we made sure we were Doing The Right Thing.
Now I just need to get one of my own.
Monday, February 4, 2008
What me, impulsive? Nah....
Well at least I'm not the only one in my household. But it can be really fun. The past week has been a great example of this.
For our 4th anniversary last month, my wife and I bought a 56 gallon freshwater setup. At the time we had a 10 gallon setup and 2.5 gallon that we were using to house babies from our livebearers. (Platy/swordtail hybrids, I think.)
That was about 3 weeks ago.
Today, we have a 46 gallon bow front, a 20 gallon, and another 10 gallon, plus the 56 gallon and original 10 gallon tank. (Now both of my daughters have their own 10g freshwater setup, and the wife has the 46g for her community... mostly she wants a home for more mollies and a peacock eel, but also probably some angels and a few red-eye tetras.)
The pet store had a sale. We, uhm, went a little crazy. (A complete 20G freshwater setup was only $40.)
The 56 gallon display tank (24" tall) got converted to saltwater. This is my first marine tank. I only got it filled up last night. (And let me tell you, at about $5/lb, live rock is expensive. Between live rock, live sand, and water -- 89 cents/gallon, I've already spent about $300, and I've not even put any fish in it yet! That doesn't include the tank and equipment of course.) The setup is going to be a FOWLR (fish-only with live rock) tank.
To really get the full picture though, you need to picture me standing out in the cold wind and rain, in wet jeans, barefoot, hosing the 56 gallon out to make sure I've flushed out any freshwater bacteria properly. I must be half nuts. But Debbie came out and helped me, so I'm not the only one.
What is really cool is that my wife has had as much fun with this as I have. I am fortunate indeed to be married to a woman who enjoys so many of the same things I do.
Now we just need to wait.... and wait... and wait.... (New salt water tanks need to "cycle" for about 3-4 weeks before adding fish.)
Friday, January 25, 2008
Brussels putback
This post discusses the 2nd flag-day putback yesterday, which is Brussels (phase I). Brussels also changes the way NIC drivers are administered, but it is focused on simplifying and centralizing the administration of network driver tunables -- these are the values used to tune the device itself, or in some cases, the link layer properties. The most common of these tunables are the values associated with duplex and link speed settings.
Historically these values have been configured with ndd(1M) or driver.conf(4). Many people know how I feel about those methods, but let me just reiterate: "ndd must die!" (And driver.conf, as well.)
The Brussels putback represents another opportunity for community members interested in kernel programming though. A lot of these NIC drivers need to be converted to use the property access methods that Brussels offers, and have the ndd support ioctls removed. (And yes, I strongly desire to see the ndd(1M) ioctl support removed from drivers. A follow-on phase for Brussels will offer ndd compatibility at the Brussels layer.)
Brussels provided a conversion of bge(7d), but many other NIC drivers remain. I plan on converting my two drivers, afe(7d) and mxfe(7d), as well as a few drivers that are still closed source (iprb(7d) and rtls(7d)). But there remain many others. And conversion of a driver to support Brussels is just the sort of bite-sized task that is great for learning how to develop in the kernel. Some possible drivers to convert are sfe(7d), rge(7d), and nge(7d). If you're interested in working on one of those, let me know (you need the hardware, though!)
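To make the contrast concrete: here's a rough before-and-after sketch. (The property names below are illustrative -- ndd parameter names are driver-private, and the dladm property names can vary by driver and build, so treat these as hypothetical.)

```shell
# Old world: ndd with driver-private parameter names, one device at a time
ndd -set /dev/bge0 adv_1000fdx_cap 1

# Brussels world: uniform link properties administered through dladm
dladm show-linkprop bge0
dladm set-linkprop -p en_1000fdx_cap=1 bge0
```

The point is that the second form works the same way for every converted driver, instead of requiring you to memorize each driver's private parameter names.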
Clearview/UV putback
Folks watching Nevada putbacks will have noticed at least 3 flag days in the past 24 hours. I want to take a second to talk about the first of them. (I'll talk about the second in a follow up post.)
The first, Clearview/UV, is about providing GLDv3-like features to legacy NIC drivers, and about providing friendlier names to device drivers. I will confess that I've not had a chance to play with any of these features yet, but I think that they are likely to be one of the more important putbacks to OpenSolaris this year. This putback fundamentally changes network administration by offering the ability to use "logical naming" for network device drivers.
The other important thing here is that some folks may believe that the Nemo Unification offered by Clearview/UV means that those legacy drivers don't need to be converted. This is not true. Conversion to GLDv3 still offers significant and tangible benefits to network device drivers:
- Performance. The translation layer that Clearview provides adds a performance hit for legacy drivers. It's also the case that legacy NIC drivers are unable to benefit from several of the performance benefits that GLDv3 offered (direct function calls, mblk chaining, etc.)
- Full VLAN support. Legacy drivers that don't support the undocumented VLAN features aren't able to offer full size VLAN frames. VLANs still work, but you have to shrink your MTU by 4 bytes.
- Certain upcoming GLDv3/Crossbow features. Legacy drivers won't be able to take advantage of upcoming features in GLDv3 from Crossbow. These include various interrupt mitigation techniques and multiple hardware ring support.
Folks that want help with such a conversion should contact me.
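As a taste of the logical naming feature: the idea is that the link name is decoupled from the driver instance, so a hypothetical session (assuming a bge0 link exists) might look like:

```shell
# Give the driver-based link a stable logical name
dladm rename-link bge0 net0

# The link now appears as net0, regardless of the underlying driver
dladm show-link
```

This means configuration can refer to "net0" even if the NIC (and hence the driver name) changes later.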
Wednesday, January 9, 2008
making good on a promise (Velocity Networks == bad)
My former Internet service provider, OChosting, was recently purchased by a much larger company, Velocity Networks.
As part of that acquisition, they moved my e-mail to some more central server. However, they screwed it up really, really badly. The DNS MX records for my domain pointed to the new server, but the CNAME I was using for IMAP was still pointing to the old one. The helpdesk was completely useless/powerless to fix the DNS records. (As part of this they were transitioning the old systems to a new management system, a la CPanel, as well. The helpdesk people were only able to deal with accounts that had been transitioned.)
Finally I told them they'd not only lose my business, but I'd post my negative experiences here if they didn't get someone to help me quickly. That much they did. But ultimately, the promise that DNS records would clear up when caches flushed never materialized. Two weeks later their servers are still giving out incorrect DNS information. I've been able to access my e-mail by manually editing my /etc/hosts file, supplying a workaround IP address. (I have noticed that their IMAP server has gotten significantly slower since the transition as well. It can take up to 2-3 seconds to delete a message sometimes. Typically it takes about 1 second for the IMAP delete to occur. On my other servers IMAP deletes appear to be "instantaneous".) This doesn't work well for my wife though, and I'm fed up with trying to force feed these guys a clue.
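For the curious, the workaround is just an /etc/hosts override -- the hostname and address below are placeholders, not the real ones:

```shell
# /etc/hosts -- pin the IMAP hostname to the server actually holding the mail,
# bypassing the broken DNS records (placeholder values shown)
192.0.2.10   imap.example.com
```

Lookups in /etc/hosts take precedence over DNS with the usual nsswitch.conf ordering, which is why this works around the stale records.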
I will point out that I was paying these bozos over $20/month for very modest hosting needs, which is 2-3x the typical market rate. And I'd been doing this for about the last 5 years, with nary a service call in the interim.
Anyway, now I'm making good on my original promise to them.
In the end, I've been able to recover all my old e-mails (at least the ones that didn't bounce, and who knows how many of those occurred!), and I've given my business to Bluehost as of about two days ago. So far, it seems to be going quite well. My e-mail performance seems to be good, and the software they have deployed for CPanel is a bit more sensible as well. (For one, they seem to understand the problem of matching TLS/SSL certificates to hostnames when used for IMAP or SMTP.) I'm also paying $7.95 a month (full year paid in advance) instead of $19.95, and my disk and throughput quotas are much much higher (not that I need them).
I would strongly urge folks considering ISPs to avoid Velocity Networks or any of their affiliates if at all possible. My experience is that they are clueless, and their helpdesk staff are completely hobbled by a combination of restrictions on what they can perform and their own lack of ability.
A big kudos to Bluehost, as well.
Sunday, December 2, 2007
live upgrade rocks
Trying to build the latest tree, I ran into the problem that my build machine is downrev (it's at b74). So I had to update to b77 to get the latest tree to build.
For any other OS, or in past days for Solaris, this would be a major crisis, incurring numerous hours of downtime. (Or I could use bfu.) But I decided to finally try out live upgrade.
I had an ISO image of b77 stored on a local zfs filesystem (along with all my critical data). When I had installed this system, I had set it up with a spare empty slice matching the / slice in size, in anticipation of one day trying out live upgrade. Boy am I glad I did.
All I had to do was a few commands:
# lucreate -n b77 -m /:/dev/dsk/c1t0d0s3:ufs
(Wait about 20 minutes.)
# lofiadm -a /data/isos/sol-nv-b77-sx86.iso
# mount -F hsfs -o ro /dev/lofi/1 /mnt
# luupgrade -u -n b77 -s /mnt
(Wait another 20-30 minutes.)
# luactivate b77
# init 6
(That last step confused me. I tried "reboot" a few times, before I actually read the output from luactivate to realize that you CANNOT USE REBOOT.)
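One extra command worth knowing here (not required, but a useful sanity check between steps) is lustatus, which lists each boot environment and shows whether it's complete, active now, and active on the next reboot:

```shell
# Check boot environment state before and after luactivate
lustatus
```

Running it after luactivate but before the reboot confirms that the new BE is marked active-on-reboot before you commit.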
All in all, the total downtime was the cost of a single reboot. (Well, several in my case, but that's because I didn't follow the instructions and used the wrong reboot command. Doh!)
The upgrade took less time than it took to download the ISO in the first place. Using lofi and zfs made this even more painless. Yay. And now I'll never be afraid to upgrade Solaris again. Had this been Windows or Linux I was trying to upgrade, I'd probably have had to kiss off the entire weekend dealing with fallout, etc.
A big kudos to the LU team. (And shame on me for not discovering this cool technology in Solaris earlier... it's been around since Solaris 10u1, at least.)
nge is open source!
With the putback of this:
6404761 nge driver need to be opensourced
The nvidia gigE driver is now open source! Yay!
Tuesday, November 6, 2007
iprb open source....
I just got an e-mail from our contact at Intel, and it looks like the main part of the review required to get iprb approved for release as open source has been done. There are still a few i's to dot and t's to cross, but basically, it looks like we should be good to have this open sourced within the next couple of weeks. Watch this space.
Friday, October 26, 2007
dnet suspend/resume webrev has been posted
Friday, October 19, 2007
Sun Opening up more...
So I recently was informed that Sr. management has directed my team to move some of the engineering work that we have been doing "internally" to the OpenSolaris web site. This is more than just moving internal web pages outside the firewall, though. This is about starting to do the actual engineering work in the open.
The first two projects that my group is going to be doing this with are the laptop suspend/resume effort and the SD card stack. (The SD card stack needs to go through some licensing approval first, as the SDA organization doesn't allow for redistribution without a license. The "open" specs are "Evaluation Only" apparently.)
Anyway, this is a sign that the practices already being done elsewhere in the company (cf. the networking group) are starting to take hold elsewhere, even in demesnes that have historically been strongholds of the NDA.
Watch the laptop project page at os.o over the next week or so to see what we put up there... and there will be mailing lists for the project engineering as well!
Tuesday, October 9, 2007
Backyard gallery
I recently checked our pool contractor's website, and found my backyard in his gallery. It looks pretty cool on his website. Check it out here.
Thursday, October 4, 2007
dmfe putback done
I just putback a bunch of dmfe changes... most notably, the driver will now support add-in PCI cards. Look for it in snv76.
Wednesday, September 26, 2007
SecureDigital, and other memory formats
So I've been tasked with writing a driver for the SecureDigital controller found on certain laptops. As part of this effort, I'd like to take a straw poll. If you could e-mail or post a comment on this blog, indicating your response, I'd be grateful. Note that this is only for folks with card readers that are *not* USB connected. (USB readers are already covered by USB mass storage.)
a) how many SDcard slots do you have?
b) do you use older MMC media?
c) do you have any SDIO peripherals? (I.e. devices other than memory cards.)
d) do you have slots other than SDcard (such as xD or memstick) that are important to you? Which ones? (Again, not for USB connected media readers!)
e) I'm interested in prtconf -vp output for different readers. If you're game, send it to me, along with the make and model of your laptop.
Thanks!
dmfe for x86 and unbundled PCI nics on SPARC
I've got a dmfe driver working on both SPARC and x86, and it supports some unbundled PCI NICs. (The original driver only worked with onboard dm9102s on certain SPARC hardware.) The Davicom 9102 part is not terribly common, but some low end NICs sold by companies like C/Net and Buffalo use it.
Hopefully this will be putback into snv75.
oops, qfe is b74
I lied... (not intentionally, I got confused.) The qfe GLDv3 port is in b74, not b73. Sorry!
Wednesday, September 5, 2007
snv_73 goodness
Solaris Nevada b73, when it comes out, is going to have a lot of good NIC stuff in it.
* afe driver (putback yesterday)
* mxfe driver
* rtls on SPARC (including suspend/resume!)
* qfe GLDv3 support
Plus, there are a lot of good networking improvements; a lot of stale code was removed (defunct mobile IPv4 support, detangling NAT, etc.)
There's a bunch of carryover from snv_70-72 too...
I for one, can hardly wait.
Thursday, August 30, 2007
mxfe RTI....
FYI,
Earlier today I submitted the RTI for mxfe. I expect afe (which will be more popular) will follow later this week or early next. (We've fallen behind on some of the testing.)
I've also started looking at porting rtls to SPARC, and making it support SUSPEND/RESUME. More on that shortly.
Friday, August 24, 2007
rtls GLDv3
And now rtls is GLDv3. Not open source (yet), and no SPARC support, but hopefully those will both get fixed soon. Have fun!
qfe GLDv3
As my first gift to the community since becoming a Sun employee, I've putback the conversion of QFE to the new hme common GLDv3 code. Now you can use your old QFE boards with IP instances, VLANs, whatever. Go wild. Hopefully the rtls conversion will get putback tonight as well... still waiting for my RTI advocate to approve it.
Tuesday, August 14, 2007
Stuck with an rtls? (Realtek 8139)
I've recently hacked up the Realtek driver (rtls) to support GLDv3. It's part of usr/closed right now (though I hope we can open source it!), so I can only share binaries.
Anyway, if you're stuck with this driver on your x86 system (because it's on your motherboard, usually), and you want to try running a GLDv3 version of the driver, let me know.
The GLDv3 brings link aggregation support, VLAN support, and virtualization (IP instances) with it.
Of course the hardware is still somewhat crummy, so I wouldn't expect to get much performance out of it. But again, if you're stuck with it (as many people probably are) this may be helpful.
Monday, August 6, 2007
Dropping the "C"
For those not in the know, it's now official. I'll be (re-)joining Sun as a regular full time employee starting August 20th. That means that I get to drop the "C" in front of my employee ID.
I'll be reporting to Neal Pollack, initially working on various Intel related Solaris projects.
Wednesday, August 1, 2007
hme checksum limitations
(This blog is as much for the benefit for other FOSS developers as it is for OpenSolaris.)
Please have a look at 6587116, which points out a hardware limitation in the hme chipset. I've found that at least NetBSD, and probably also Linux, suffer in that they expect the chip to support hardware checksum offload. However, if the packet is less than 64-bytes (not including FCS), the hardware IP checksum engine will fail. This means all packets that get padded, and even some that are otherwise legal (not needing padding) will not be checksummed properly.
For these packets, software checksum must be used.
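In driver terms, the workaround is just a length test before trusting the checksum engine. Here's a minimal sketch of that decision in Python, with made-up names (this is not the actual hme code):

```python
# Sketch of the offload decision described above; the constant and
# function names are illustrative, not the actual hme driver's.
MIN_HW_CSUM_LEN = 64  # minimum frame length, excluding FCS

def can_offload_checksum(frame_len: int) -> bool:
    """True if the hme checksum engine can be trusted for this frame.

    Anything shorter than 64 bytes -- every frame that will be padded,
    plus some legal unpadded ones -- must fall back to a software
    checksum instead.
    """
    return frame_len >= MIN_HW_CSUM_LEN

print(can_offload_checksum(60))    # padded runt -> False
print(can_offload_checksum(1514))  # full-size frame -> True
```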
partial checksum bug
As a result of investigation of a fix for 6587116 (a bug in HME, more later), we have found a gaping bug in the implementation of UDP checksums on Solaris.
Most particularly, it appears that UDP hardware checksum offload is broken for the cases where the checksum calculation will result in a 16-bit value of 0. Most protocols (TCP, ICMP, etc.) specify that the value 0 be used for the checksum in this case.
UDP, however, specifies that the value 0xffff be substituted for 0. Why? Because 0 is given special meaning. In IPv4 networks, it means that the transmitter did not bother to include a checksum. In IPv6, the checksum is mandatory, and RFC 2460 says that when the receiver sees a packet with a zero checksum it should be discarded.
The problem is, the hardware commonly in use on Sun SPARC systems (hme, eri, ge, and probably also ce and nxge) does not have support for this particular semantic. Furthermore, we have no way to know, in the current spec, if this semantic should be applied (short of directly parsing the packet, which presents its own challenges and hits to performance).
We'll have to figure out how to deal with this particular problem, sometime soonish. My guess is that all Sun NICs will lose IP checksum acceleration (transmit side only) for UDP datagrams, and that those 3rd party products which can do something different will need another flag bit indicating UDP semantics.
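To make the failure mode concrete, here's a small Python sketch of the ones'-complement checksum with UDP's substitution rule. (Illustrative only; the function names are mine, not anything from the Solaris stack.)

```python
# Illustrative RFC 1071-style checksum with UDP's zero substitution;
# not actual Solaris stack or driver code.
def ones_complement_sum(data: bytes) -> int:
    """Ones'-complement sum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(pseudo_header: bytes, udp_segment: bytes) -> int:
    csum = ~ones_complement_sum(pseudo_header + udp_segment) & 0xFFFF
    # The UDP quirk: a computed 0 goes on the wire as 0xFFFF, because 0
    # means "no checksum" in IPv4 and "discard me" in IPv6 (RFC 2460).
    return 0xFFFF if csum == 0 else csum
```

The hardware engines in question stop right after the complement step, which is exactly how they can emit the 0 that UDP forbids.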
Friday, July 27, 2007
nxge and IP forwarding
You may or may not be aware of project Sitara. One of the goals of project Sitara is to fix the handling of small packets.
I have achieved a milestone... using a hacked version of the nxge driver (diffs available on request), I've been able to get UDP forwarding rates as high as 1.3M packets per sec (unidirectional) across a single pair of nxge ports, using Sun's next sun4v processor. (That's number of packets forwarded...) This is very close to line rate for a 1G line. I'm hoping that future enhancements will get us to significantly more than that... maybe as much as 2-3 Mpps per port. Taken as an aggregate, I expect this class of hardware to be able to forward up to 8Mpps. (Some Sun internal numbers using a microkernel are much higher than that... but then you'd lose all the nice features that the Solaris TCP/IP stack has.)
By the way, it's likely that these results are directly applicable to applications like Asterisk (VoIP), where small UDP packets are heavily used. Hopefully we'll have a putback of the necessary tweaks before too long.
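For reference, the theoretical line rate for minimum-size (64-byte) Ethernet frames at 1 Gbit/s works out like this (a back-of-the-envelope sketch, assuming minimum-size frames):

```python
# Each minimum-size frame costs 84 bytes on the wire: 64-byte frame
# + 8-byte preamble/SFD + 12-byte inter-frame gap.
WIRE_BYTES = 64 + 8 + 12
LINK_BPS = 1_000_000_000  # 1 Gbit/s

line_rate_pps = LINK_BPS / (WIRE_BYTES * 8)
print(f"{line_rate_pps:,.0f} pps")  # 1,488,095 pps
```

So 1.3 Mpps forwarded really is within striking distance of the ~1.49 Mpps wire limit.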
mpt SAS support on NetBSD
FYI, NetBSD has just got support for the LSI SAS controllers, such as that found on the Sun X4200. My patch to fix this was committed last night. (The work was a side project funded by TELES AG.)
Of course we'd much rather everyone ran Solaris on these machines, but if you need NetBSD for some reason, it works now.
Pullups to NetBSD 3 and 3.1 should be forthcoming.
Wednesday, July 25, 2007
hme GLDv3 versus qfe DLPI
So, the NICs group recently told me I should have started with qfe instead of hme, because qfe has some performance fixes. (Such as hardware checksum, which I added to hme!) To find out if this holds water, I ran some tests on my 360 MHz UltraSPARC-IIi system, using a PCI qfe card. You can make hme bind to the qfe ports by doing:
# rem_drv qfe
# update-drv -a -i '"SUNW,qfe"' hme
(This by the way is a nice hack to use GLDv3 features with your qfe cards today if you cannot wait for an official GLDv3 qfe port.)
Anyway, here's what I found out, using my hacked ttcp utility. Note that the times reported are "sys" times.
QFE/DLPI
MTU = 100, -n 2048
Tx: 18.3 Mbps, 7.0s (98%)
Rx: 5.7 Mbps, 2.4s (10%)
MTU = 1500, -n = 20480
Tx (v4): 92.1 Mbps, 1.1s (8%)
Rx (v4): 92.2 Mbps, 1.6s (12%)
Tx (v6): 91.2 Mbps, 1.1s (8%)
Rx (v6): 90.9 Mbps, 2.6s (22%)
UDPv4 tx, 1500 (-n 20480) 90.5 Mbps, 1.6s (64%)
UDPv4 tx, 128 (-n 204800) 34.2 Mbps, 5.2s (99%)
UDPv4 tx, 64 (-n 204800) 17.4 Mbps, 5.1s (99%)
And here are the numbers for hme with GLDv3
HME GLDv3
MTU = 100, -n 2048
Tx: 16.0 Mbps, 7.6s (93%)
Rx: 11.6 Mbps, 1.8s (16%)
MTU = 1500, -n = 20480
Tx (v4): 92.1 Mbps, 1.2s (8%)
Rx (v4): 92.2 Mbps, 3.2s (24%)
Tx (v6): 90.8 Mbps, 0.8s (6%)
Rx (v6): 91.2 Mbps, 4.0s (29%)
UDPv4 tx, 1500 (-n 20480) 89.7 Mbps, 1.5s (60%)
UDPv4 tx, 128 (-n 204800) 29.4 Mbps, 6.0s (99%)
UDPv4 tx, 64 (-n 204800) 14.8 Mbps, 6.0s (99%)
So, given these numbers, it appears that either QFE is more efficient (which is possible, but I'm slightly skeptical), or the cost of the extra overhead of some of the GLDv3 support is hurting us. I'm more inclined to believe the latter. (For example, we have to check whether the packet is a VLAN tagged packet... those features don't come for free... :-)
What is really interesting is that the hme GLDv3 work was about 3% better than the old DLPI hme. So clearly there has been more effort invested into qfe.
Interestingly enough, the performance for Rx tiny packets with GLDv3 is better. I am starting to wonder if there is a difference in the bcopy/dvma thresholds.
So one of the questions that C-Team has to answer is: how important are these relatively minor differences in performance? On a faster machine, you'd be unlikely to notice at all. If this performance becomes a gating factor, I might find it difficult to putback the qfe GLDv3 conversion.
To be completely honest, tracking down the 1-2% difference in performance may not be worthwhile. I'd far rather work on fixing 1-2% gains in the stack than worry about how a certain legacy driver performs.
What are your thoughts? Let me know!
Tuesday, July 17, 2007
afe and mxfe status notes
For those of you that care, I think we're in the home stretch for integration of afe and mxfe into Nevada.
I spent the weekend going through the code, and reworking large portions of it to make use of zero-copy DMA wherever it was rational to do so (loan up for receive, direct binding for transmit).
The NICDRV test suite has also identified a number of issues with edge cases that didn't come up often, but which I'm glad to know about and have fixed in the version of the code getting putback.
They're only 100 Mbps NICs, but the version of the code going into Nevada will make them run at pretty much the same speed as any other 100 Mbps NIC without IP checksum offload.
And, they are still 100% DDI compliant. :-) Thankfully the DDI has been extended for OpenSolaris since the last time I worried about such things (back in Solaris 8 days).
Anyway, looking forward to putback in b70 or b71. (Depending on whether I can get reviewers in time for b70 putback. If you can help me review, please let me know!)
Thursday, July 12, 2007
HME putback done
In case anyone ever wondered what a putback message looked like:
********* This mail is automatically generated *******
Your putback for the following fix(es) is complete:
PSARC 2007/319 HME GLDv3 conversion
4891284 RFE to add debug kstat counter for promiscuous mode to hme driver
6345963 panic in hme
6554790 Race betweeen hmedetach and hmestat_kstat_update
6568532 hme should support GLDv3
6578294 hme does not support hardware checksum
These fixes will be in release:
snv_70
The gate's automated scripts will mark these bugs "8-Fix Available"
momentarily, and the gatekeeper will mark them "10-Fix Delivered"
as soon as the gate has been delivered to the WOS. You should not
need to update the bug status.
Your Friendly Gatekeepers
Btw, the case to make this work for qfe (PSARC 2007/404) was approved yesterday as well. There are some internal resourcing questions yet to be answered, but at least architecturally, the approach has been approved.
I would really, really love it if some qfe owners would file a bug asking for qfe to be GLDv3. It would make it much easier for me, I think, if this case were seen as a response to customer demand. (So many people have requested qfe GLDv3 support... please file a bug! Even better, file an *escalation*!)
Note: none of this is eligible for backport to S10. You have to use OpenSolaris if you want the good stuff. Gotta keep a few carrots in reserve, right? (Seriously, ndd and Sun Trunking incompatibilities make it unsuitable for backport to S10 anyway.)
Sunday, July 8, 2007
hme GLDv3 and *hardware checksum*
So I've been trying to run my GLDv3 port of hme through a very rigorous battery of tests called "nicdrv" (the test suite used for recent NIC drivers by Sun QE... hopefully soon to be open sourced, but that's another topic.)
Anyway, the test system I've been using is a poor little 360 MHz US-II Tadpole system. (A Darwin-workalike, in shoe-box form factor.)
Unfortunately, the test times out while trying to do the UDP RX tests. Which really shouldn't be surprising... the test was designed for gigabit network gear, with gigahertz system processors (or better.)
Well, it turns out that the hme driver can be faster. Considerably faster. Because the hardware supports IP checksum offload. But it was never enabled. (Note that this is true for the quad-port qfe boards as well, which are basically the same controller behind a bridge chip.)
So, I've decided to have another go at getting a fully successful test result with this hardware. By modifying the driver to support IP checksum offload. I'm hoping it may make the difference between a pass and fail. With tiny frames, every little bit helps.
Stay tuned here. Note that I also have logged performance data from earlier test runs, so I'll be able to compare that as well. One additional wrinkle in all this, is that I now feel compelled to test this with Sbus hme hardware. The oldest system I can find is a Sun Ultra 2. (Older Ultra 1 systems with 200 MHz and slower procs won't work. If anyone has an old Ultra 1 with 250 or better procs running Nevada, let me know!)
Thursday, June 28, 2007
GLDv3 iprb putback
I just putback the GLDv3 conversion of iprb. It will be in the next SXDE/SXCE. (b69 and later). It is still closed source, but I think that may change soon, too. (All the technical information in the code is reproduced on a public open-source developers guide downloadable at Intel, with the exception of the binary microcode, which is in the FreeBSD tree under an Intel-owned BSD license.)
Anyway, I'm told Sun is having a meeting with Intel, and one of the agenda items is opening the source to iprb.
Meanwhile, enjoy the GLDv3 goodness.
Monday, June 25, 2007
afe GLDv3-ified
I've converted "afe" to GLDv3 in anticipation of it getting putback. I've also greatly simplified the buffering logic in it, because I was trying to be "too clever" and I think we were seeing failures during the extreme testing that Sun QA likes to perform.
Anyway, this means that when afe gets putback (it's on a schedule for snv68, but that may or may not happen), it will be GLDv3. Yay. Here's something to whet your appetite:
garrett@doc{44}> pfexec dladm show-link
eri0 type: non-vlan mtu: 1500 device: eri0
afe0 type: non-vlan mtu: 1500 device: afe0
This was done on a Sun Blade 100. No more legacy NICs!
This is also helpful for laptop owners, because afe is one of the more common cardbus devices. So, your cardbus 10/100 NIC will work with NWAM.
If folks running snv_66 or newer want test binaries, let me know. I can offer them up in exchange for beer.