Today I integrated "interrupt free audio". This set of changes, along with some related changes, represents a substantial simplification of the DDI for audio drivers.
The typical audio driver no longer needs to worry about interrupt handlers. On average, about 300 lines of code (roughly 10-20% of the complexity of a typical driver) were removed from each audio driver.
Furthermore, many audio drivers (for example audio810) are able to run completely lock-free, since the audio framework provides synchronization for certain operations. (Operations against each audio engine are synchronized, operations against audio controls are synchronized as a whole, and everything is synchronized against suspend/resume.)
Even better, these changes enable some new advanced features that will be used for Sun Ray, virtualization, and hotplug support in the future.
Oh yeah, and since the asynchronous processing now happens as part of the regular timer interrupt, system CPUs can remain in deeper C-states for longer, even while playing audio. So we should see an improvement in system power consumption (admittedly, I've not measured this).
There will be more stuff related to audio in the future; stay tuned.
Tuesday, March 16, 2010
"Legislative Sleight of Hand"
I have normally avoided using my blog as a soapbox for my political beliefs. However, I simply cannot remain silent on recent events in the House of Representatives (of the United States, for foreign readers).
No matter what your position is on the health care reforms under consideration, everyone should agree that the reforms are sweeping; perhaps some of the most significant legislation affecting nearly every American that we've seen in quite some time.
House Democratic leadership, knowing that the measure is unpopular with many voters (and hence that House Democrats may be unwilling to "vote the party line," fearing a backlash in their constituencies), is planning a move that is even more offensive than "reconciliation".
While I'm a Republican, and generally opposed to nationalization of 1/6th of our economy, I find it far more offensive that the House leadership (particularly Ms. Pelosi) would consider a move that so boldly disenfranchises the people of this nation.
This is a crime, if not against the law, then certainly against the spirit of democracy upon which our country is founded. If health care reform is to be passed, then it should be done with a regular vote where the politicians who vote for it are required to be accountable for those votes (and vice versa, by the way).
If it passes without such a vote, then it will go down as one of the greatest failures of "representative democracy" in history.
Monday, March 15, 2010
Why We Need Human Spaceflight
Space aficionados may be aware that President Obama has canceled the previous administration's "Vision for Space Exploration", which consisted of the Constellation program including Ares I, Ares V, and Orion. This has been fairly well covered in the mainstream media.
Critics of the Constellation program raise some significant and relevant objections to it.
However, I strongly believe that as a nation, we need a national space program that includes human spaceflight beyond low Earth orbit. The cancellation of Constellation, while perhaps with good cause, has left our national space program with a vacuum -- the lack of a heavy lift vehicle, and lack of any vision, would effectively constrain human exploration to LEO for a generation. Furthermore, it significantly constrains the kinds of activities that we can perform in LEO.
It's my belief that this is short-sighted in the extreme.
We need a space program that includes vehicles with the ability to loft large payloads into orbit. Projects like the International Space Station, and further commercialization of space, are only possible with the ability to loft a significant payload into orbit.
We also need to plan for human exploration beyond our front porch. While many people argue that sending robotic explorers is less risky and far cheaper, the idea that we can or should abdicate all future space endeavors to robotic missions is, frankly, offensive.
Robots won't inspire a generation of students to continue to excel at math and science. Robots can't stand in as national heroes. And robots alone won't help develop the enthusiasm required for the general public to continue to want to invest in space and space technologies. Robotic exploration is mostly a solved problem -- many new technologies that are necessary for human space travel will simply not be invented or invested in, without the "problems" to solve that are involved in human space exploration.
I'm from a generation of kids who viewed astronauts as near personal heroes; I dreamed, and still dream, of being able to see our planet from space itself one day. I dream of the days when humankind steps beyond just Earth, and has outposts on the Moon, Mars, the asteroids, and perhaps other interesting places in the solar system. And someday beyond.
My own son dreams of someday being an astronaut and visiting Mars. Unfunded as it was, at least VSE allowed a glimmer of such a hope. Obama has killed that hope, and maybe the dreams and hopes of thousands or millions of other like-minded kids.
Fortunately, there is a proposal that would revive these dreams, allow us to retain a national heavy lift capability, retain much of the knowledge and expertise we acquired with the successful STS (space shuttle) program (even reusing a significant amount of the materials and technology), and provide a "way forward" beyond LEO to interesting places elsewhere in the solar system. The DIRECT v3.0 proposal is IMO the best way forward; it allows us to have our cake and eat it too -- giving us all the heavy lift capabilities that we need, while minimizing the significant impact on our economy that both the Constellation program, and the cancellation of the STS and Constellation programs, create.
I firmly believe that we are on the cusp of a major economic shift, where commercialization of space may play as important a role in the coming decade or two as the Internet has played in the previous two. The question is, will we as a nation continue to develop that potential, or will we let it slip away, to be picked up by India, China, or Russia?
Yes, I'm an American. And I believe that it is important for America to be a leader in the exploration and utilization of space. Ultimately, I believe that "planting flags" is much more important than the proponents of solely robotic exploration would have us believe. Someday people will visit Mars. Will America be there, or will we just be an observer while one of the Asian nations celebrates a major achievement?
Wednesday, March 3, 2010
ON on IPS: surprisingly easy
So I have an EOF RTI that was in queue when the ON IPS integration happened last night.
Of course, this totally whacked my packaging changes, and I had to modify them. Making the changes was quite easy. Here's the old and the new version of the changes. It's actually fewer files to update under IPS.
I was dreading retesting. Dealing with distro construction sounded "painful".
I needn't have worried. In the tools directory there is this neat tool called "onu" (on-update I guess?)
I had to load a machine with b133 to set up a baseline, but we have a nice way to do that internally via our internal infrastructure and AI. It boils down to running one command on an install server, then doing "boot net:dhcp - install" at the OBP prompt. (Yes, this is a SPARC system.)
It took a little bit for it to install, but less than an hour.
Then, after rebooting and getting the initial settings on the system, it was just a simple matter of "onu -d ${ws}/packages/sparc/nightly-nd" to update it. This took a while (20-30 minutes, I wasn't counting). Eventually the system was up and ready for business. Easier than bfu. Amazing.
Thanks to the IPS team! I can't wait for bfu to finally go away.
Sunday, February 21, 2010
Funny Ancient Software
I just found out that Ubuntu has been shipping (since version 6.06 -- "Dapper Drake," I think it was called -- and apparently all the way into the forthcoming 10.x LTS version) a program I wrote nearly two decades ago as a student -- vtprint -- and yes, that link points to manual text I wrote way back when.
(This program, "vtprint", was for use with printing from a UNIX shell prompt, when you don't have a better way to move files around. Back then we used commands like "kermit" to connect to a UNIX server from our PC over a 2400 or 9600 baud modem -- and well before PPP or even SLIP.)
I haven't used vtprint since about 1995, but it's funny to still see it kicking around. Too bad the docs still have an old e-mail address for me at SDSU... I guess nobody has needed a bug fix for it in some time.
Monday, February 15, 2010
Congratulations to BMW Oracle Racing
If you're involved in the sailing community, you'll already know that Larry Ellison, who's now ultimately my boss, put together a team to challenge for the America's Cup. They won this weekend, bringing the America's Cup back home to America, and I'm enormously proud of Ellison and his team -- as an American, as a sailor, and now as an Oracle employee.
Friday, February 12, 2010
Open Development
Note: I'm posting this on my personal blog, which, as always, is a reflection of my own thoughts and in no way represents any official policy of my employer (whoever that may be).
Now that we former Sun employees are (for the most part) part of a larger company, there have been some questions about how much of Sun's trend toward open development will continue (particularly where Solaris/OpenSolaris is concerned).
(I want to separate the concern of Open Source -- where source code is made available for products after they are released -- from Open Development -- where the product is developed in the open.)
Many of us who were part of that acquisition are wondering the same things. Officially, the word is "no changes in what we're doing", but unofficially there's an atmosphere that our new employer places a greater emphasis on commercial profitability and a lesser emphasis on things like "including the community."
Speaking abstractly, there are risks to any open development effort, particularly when the effort is intended to be supportive of a commercial endeavor. The risks range from enabling competitors with early information, to forestalling customer purchases of bits today as customers wait for the new feature that's being developed in the open, to simply diluting the impact that "surprise" delivery of a new product or feature can make.
Certainly there seems to be some evidence that Oracle may have a greater concern about the costs and risks associated with "early disclosure" than Sun did. No matter how passionately one may believe in Open Development, nobody can deny that there are real costs and risks associated with open development.
So the challenge for Oracle is to figure out what the balance is that makes commercial sense.
Ultimately profit is the primary responsibility of any publicly traded company.
The challenge for the community is to figure out how to provide commercial justification to Oracle for the open development that the community likes to see.
If you want to retain truly Open Development (which includes public posting of webrevs for stuff developed internally, open ARC reviews, and public design discussions on mailing lists and similar fora) of Solaris and OpenSolaris, this is a call out to you.
Have you or your company made a significant Sun purchase? Has open development (as opposed to open source) influenced your decision? How and why? Will open development influence future purchasing decisions? If you can, put a number to the value of open development, and provide that information to your sales reps or post it publicly.
The decision makers need to see value in the practice of open development if they're going to continue to support it.
Again, I'm only talking about open development, not about open source.
As an aside, I don't think statements coming from community contributors without the support of purchasing dollars are likely to carry much weight with Oracle decision makers. I believe that if you look at the contributions from the community-at-large in OpenSolaris, you'll find that the meaningful contributions have been fairly small and generally of little commercial interest, and have always required additional engineering expense from Sun. So "leveraging" the community for development has not, IMO, been a gamble that has yielded dividends -- at least not from the perspective of either Sun or Oracle.
Thursday, February 4, 2010
Scalability FUD
Yesterday I saw yet another round of the Linux vs. Solaris scalability debate. The Linux fans were loudly proclaiming that the claim of Solaris' superior scalability is FUD, pointing to evidence like the Cray XT class of systems, which run Linux on thousands of processors.
The problem with comparing (or even considering!) the systems in the Top500 supercomputers when talking about "scalability" is simply that those systems are irrelevant for the typical "scalability" debate -- at least as it pertains to operating system kernels.
Irrelevant?! Yes. Irrelevant. Let me explain.
First, one must consider the typical environment and problems that are dealt with in the HPC arena. In HPC (High Performance Computing), the scientific problems considered are usually fully compute-bound. That is to say, they spend the vast majority of their time in "user" and only a minuscule amount of time in "sys". I'd expect to find very, very few calls to inter-thread synchronization (like mutex locking) in such applications.
Second, these systems are used by people who are willing, expect, and often need, to write custom software to deal with highly parallel architectures. The software deployed into these environments is tuned for situations where the synchronization cost between processors is expected to be "relatively" high. Granted, the architectures still attempt to minimize such costs, using highly optimized message passing buses and the like.
Third, many of these systems (most? all?) don't actually run a single system image. There is not a single universally addressable memory space visible to all processors -- at least not without high NUMA costs requiring special programming to deliver good performance, and frequently not at all. In many ways, these systems can be considered "clusters" of compute nodes around a highly optimized network. Certainly, programming systems like the XT5 is likely to be similar in many respects to programming software for clusters using more traditional network interconnects. An extreme example of this kind of software is SETI@home, where the interconnect (the global Internet) can be extremely slow compared to the compute power.
So why does any of this matter?
It matters because most traditional software is designed without NUMA-specific optimizations, or even cluster-specific optimizations. More traditional software used in commercial applications -- databases, web servers, business logic systems, or even servers for MMORPGs -- spends a much larger percentage of its time in the kernel, either performing some fashion of I/O or inter-thread communication (including synchronization such as mutex locks).
Consider a massive non-clustered database. (Note that these days many databases are designed for clustered operation.) In this situation, there will be some kind of central coordinator for locking and table access, and such, plus a vast number of I/O operations to storage, and a vast number of hits against common memory. These kinds of systems spend a lot more time doing work in the operating system kernel. This situation is going to exercise the kernel a lot more fully, and give a much truer picture of "kernel scalability" -- at least as the arguments are made by the folks arguing for or against Solaris or Linux superiority.
Solaris aficionados claim it is more scalable in handling workloads of this nature -- that a single SMP system image supporting traditional programming approaches (e.g. a single monolithic process made up of many threads) will experience better scalability on a Solaris system than on a Linux system.
I've not measured it, so I can't say for sure. But having been in both kernels (and many others), I can say that the visual evidence from reading the code is that Solaris seems like it ought to scale better in this respect than any of the other commonly available free operating systems. If you don't believe me, measure it -- and post your results online. It would be wonderful to have some quantitative data here.
Linux supporters, please, please stop pointing at the Top500 as evidence for Linux claims of superior scalability, though. If there are some more traditional, commercial kinds of single-system deployments that can support your claim, then let's hear about them!
Missing audio packages
I have learned that at least two packages, SUNWaudioemu10k and SUNWaudiosolo, are not part of the "standard" ("entire"?) install of OpenSolaris b131. If you're looking for either of these, you should do "pfexec pkg install SUNWaudioemu10k" or "pfexec pkg install SUNWaudiosolo".
Hopefully we'll get this sorted out before the next official release.
Update: Apparently (according to the expert I talked to) this problem only affects systems updating with pkg image-update. If you install a fresh system, the audio packages should be installed.
Tuesday, February 2, 2010
System board for ZFS NAS
I'm thinking about building a home storage server, like many others, and I want it to be performant enough to host workspaces for compilation over NFS, and efficient enough to reduce my current power consumption somewhat.
I'm thinking of a new Intel D510 system board, and looking at several, I found a board from Supermicro that looks ideally suited to the task. Does anyone else have experience with this board? It looks like it's all stock Intel parts, so it should Just Work.
I'm thinking that with 4 or more SATA drives combined with RAIDZ, and dual Intel 82574 gigabit Ethernet (which I could use in an Ethernet link aggregation), I should be able to get excellent performance. (I might even set up jumbo frames, to further bump NFS performance -- if they really are 82574's then they support up to 9K MTU).
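For concreteness, here's a rough sketch of what that configuration might look like in Solaris commands. The disk and link names (c0t0d0 and so on, e1000g0/e1000g1) are assumptions for whatever the board actually enumerates, so treat this as an outline rather than a recipe:

```shell
# Sketch only: RAIDZ pool, NFS share, link aggregation, jumbo frames.
# Device and link names below are assumed, not verified on this board.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
zfs create -o compression=on -o sharenfs=on tank/workspaces
# aggregate the two gigabit ports
dladm create-aggr -l e1000g0 -l e1000g1 aggr0
# bump the MTU for jumbo frames (the switch has to support this too)
dladm set-linkprop -p mtu=9000 aggr0
```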
Kindle Converts a Skeptic
Recently I bought my wife an Amazon Kindle (the new international unit), at her request. Personally I was rather skeptical -- trying to read book material on computers, even laptops or netbooks, has always felt very awkward to me. I always believed that there was something about holding a paperback (or even a hardback) which would never be replaceable by technology -- maybe for others, but at least not for me.
I have to recant. Debbie has read something like a dozen novels already on her unit. I decided to try it out... and I have to say, I was surprised. I was reading H.G. Wells' War of the Worlds (not for the first time of course), which was a free download, and wow, was I surprised. After 10 or 15 minutes of reading, I almost forgot I was holding something in my hand that isn't printed paper. (The form-factor, which is quite similar to a book, works quite well here. I don't think I'd like the larger DX, as it would destroy the "illusion" of reading a paperback book.)
Not only did the technology not "get in the way", the reading experience was actually more pleasurable, largely because I was able to bump the font size up to a more comfortable reading level. Last night I read about 1/3 of the book before I got too tired, but I'm sold on the concept -- and I was a die-hard skeptic before.
I don't think I'd like to use it for other things ... but for the primary purpose of reading novels, it settles in quite nicely bringing some technological advantages without letting the technology get in the way of reading.
Will Apple's newer iPad compete here? I'm skeptical. The Apple product is a fancier device, with its backlit screen, and probably will feel more like a hybrid between a laptop and an iPhone (of course I still have an ancient model of phone that is used pretty exclusively for making phone calls -- call me a Luddite.) I suspect that the combination of screen glare, snazz, and lower battery life (iPad users will need to be a lot more cognizant of their current battery status) means that it's going to be a poor replacement for a Kindle, and an even poorer replacement for the printed materials that the Kindle is meant to replace.
When I go on my round-the-world sailing trip (not any time soon!), would I want a Kindle with me? Absolutely (or something similar) -- along with a solar or wind based charging system. (Product idea... a case for the Kindle that integrates a photovoltaic solar charging system, so your Kindle is always charging when it's closed.)
An iPad? Not likely -- if I'm going to be working or sending e-mails, sure, but then one of the netbooks is probably a better option, with a "real" keyboard.
Monday, February 1, 2010
Reprehensible behavior from a monopoly
Misbehavior stemming from lack of competition is apparently not unique to the IT industry.
I saw this post today, and couldn't believe it. And then a bit of additional research shows this is not unique -- a number of people have complained about actions on the part of Greyhound that would never be tolerated in a market where there is true competition.
Forcing a grandmother to wait out in the cold, while there's still snow on the ground, may not be in violation of the letter of the law, but it is certainly in violation of the basic tenets of human decency, and the management at the Memphis location showed they have none.
It's been over ten years since I've ridden a Greyhound (or any other long-haul bus for that matter), and after reading this, I am unlikely to ride another Greyhound again. Instead I'll stick to air transport, where lively competition means that even the worst airlines understand that they have to at least pretend to care about their customers.
If you're reading this and thinking about taking a Greyhound somewhere, don't.
While there may not be much competition for Greyhound for long-haul ground travel, there is at least some. And there is always air transport for those able to use it.
I'll be interested to hear if Greyhound corporate does anything to fix the problems they obviously have. A good start would be firing most or all of the staff at their Memphis location (especially the management and security guard in question) and refunding the tickets of each of the passengers who were stuck there.
Wednesday, January 27, 2010
It's Official
I'm no longer a Sun Microsystems employee, since Sun no longer exists. Hopefully I'll get to keep my job at Oracle, but I've not seen any e-mail yet. I expect I will before day's end.
Monday, January 25, 2010
Auto Install Finally Working For Me!
Some of you may know, I've been struggling (and failing) to make auto install work for me. I've had challenges, because my network is not routable, and due to other issues (bugs!) in OpenSolaris Auto Install.
However, it seems that I've finally hit on a successful recipe. I want to record this here for others.
First, in order to use AI, you will need your installation server to be running a recent build of OpenSolaris. The release notes indicate b128 is sufficient. I just ran "pkg image-update" to update to build 131. If you fail to do this, there won't be a warning at all; it's just that your clients will simply not boot.
The next thing you'll need to do is download a full-repository and the AI image.
Unfortunately, there are not public versions of the full repo ISO file available that are "current". (No, I can't get you a copy, and no I don't know why they haven't posted a more recent update.) Hopefully this problem will be corrected soon.
Setting up the local repository can be done following these directions. (Note that you will have to change the paths to reflect your system. I stash ISO images in /data/isos, and install images under /data/install. These are separate ZFS filesystems.)
# where do ISOs live, without leading /
ISODIR=data/isos
# where does the repo live, without leading /
REPO=data/install/os131_repo_full
# AI service name to use
NAME=os131_x86
# parent directory for installations, without leading /
INSTDIR=data/install
# port to use for install server
PORT=8181
# mount the repository ISO read-only
mount -F hsfs -r /${ISODIR}/osol-repo-131-full.iso /mnt
# create the ZFS filesystem for the repo, and copy the ISO contents
# into it (this copy step was implied by the directions above)
zfs create -o compression=on ${REPO}
rsync -a /mnt/ /${REPO}/
# point the pkg depot at the repo, read-only, on the chosen port
svccfg -s application/pkg/server setprop pkg/inst_root=/${REPO}
svccfg -s application/pkg/server setprop pkg/readonly=true
svccfg -s application/pkg/server setprop pkg/port=${PORT}
# pick up the new properties and start the depot
svcadm refresh application/pkg/server
svcadm enable application/pkg/server
You'll need to edit the ${REPO}/cfg_cache file, changing the origins entry to match your system. I used a value like this:
origins = http://192.168.128.11:8181
Then you'll want to use installadm to setup an initial boot service. Here's the recipe I used:
zfs create -o compression=on ${INSTDIR}/${NAME}
installadm create-service -n ${NAME} -s /${ISODIR}/osol-dev-131-ai-x86.iso /${INSTDIR}/${NAME}
Now you need to change the default manifest file. This is the tricky part, that IMO was not terribly well explained anywhere else.
cp /${INSTDIR}/${NAME}/auto_install/default.xml /tmp
Then edit default.xml file in /tmp, changing the value of "main_url" to point to your server. I used a value like this:
<main url="http://192.168.128.11:8181" publisher="opensolaris.org"/>
Then apply this manifest as the default manifest:
installadm add -m /tmp/default.xml -n ${NAME}
Finally, I did some tweaking in my DHCP configuration. I have a macro for each service name, that provides the defaults. For example, my "os131_x86" macro looks like this:
Include pepper
BootFile os131_x86
GrubMenu menu.lst.os131_x86
My "pepper" macro (pepper is the name of my server) sets some shared defaults, but most especially it sets BootSrvA to the IP address of the server (192.168.128.11 in my case.)
Then I just configure individual addresses for whichever version of OpenSolaris (or SXCE) I want to install, using the correct configuration macro. (For SXCE there are very different DHCP options to use. Also, the SPARC version of OpenSolaris uses different options as well.)
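For reference, macros like the two above can also be created from the command line with dhtadm. The following is a sketch of how that might look for my setup; the quoting of symbol values is my best recollection of what the DHCP manager expects, so double-check against dhtadm(1M) before relying on it:

```shell
# Sketch: define the shared "pepper" macro and the per-service macro.
# Values mirror the macros described above; verify against dhtadm(1M).
dhtadm -A -m pepper -d ':BootSrvA=192.168.128.11:'
dhtadm -A -m os131_x86 \
    -d ':Include=pepper:BootFile="os131_x86":GrubMenu="menu.lst.os131_x86":'
```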
Tuesday, January 19, 2010
Six Years & Counting
It's hard for me to believe that six years ago today at this hour in the morning I was getting myself ready to meet my bride. We had a wonderful wedding on the beach in front of the Del Mar powerhouse in San Diego, with our friends and family in attendance.
Looking back, it's been the best six years of my life. I've truly been blessed. I'm looking forward to spending the next sixty together with my beautiful bride Deborah.
Thursday, January 14, 2010
Interesting device driver work
So there are a handful of "closed" drivers that are not part of OpenSolaris, and might never be because of redistribution restrictions. However, this represents an opportunity for an enterprising software engineer to contribute. The drivers are:
glm - Symbios 53c810 and similar devices
qus - QLogic ISP 10160 and similar devices
adp - Adaptec AIC 7870P and similar devices
cadp - Adaptec AIC 7896 and similar devices
There are open source drivers for these from FreeBSD and NetBSD, which could be used as a starting point for a port. I'd probably be interested in trying one of these out myself, if time allowed -- but alas it does not, my plate is already quite full.
The best part of these drivers is that there are few, if any, "political" or "business" restrictions on integrating replacement drivers. Indeed, at one point recently each of these was considered for an EOF simply because they weren't considered strategic anymore. (The EOFs were rejected, but these will only be delivered via an extras repository or somesuch.)
So, what are you waiting for? This is a good opportunity to learn about SCSA, and provide us with superior replacement drivers. (The glm replacement looks like it could be done in as few as 2 or 3 KLOC; that is all the NetBSD version of the driver uses.)
Wednesday, January 13, 2010
A "modern" elxl driver
This past weekend I undertook an effort to "port" the NetBSD "ex" driver to GLDv3, as an open source replacement for "elxl".
It took a bit over 2 days.
The new sources are online in webrev format. I'd really appreciate feedback. I'm hoping to integrate these changes soon (and I could use help with testing!)
This new version, apart from being Open Source, also brings with it:
1. Full support for VLANs (including full-MTU frames)
2. Full support for link notification on twisted pair (and hopefully also fiber)
3. Full integration with Brussels for MII and media selection.
4. Full support for Suspend/Resume (S3)
5. Full support for Quiesce (fast reboot)
6. Support for additional devices
It can also be extended fairly easily to support Cardbus and MiniPCI variants, and hardware checksum offload. (The checksum offload part is basically written and #ifdef'd out until I find a newer card to test with personally.)
What is missing is "automatic" media selection based on active probing. The old Solaris driver would "autoselect" which port (BNC, AUI, twisted pair) to use based on some active probes to look for link. These were rather complex, and not something I could take from the closed driver. These days everyone just uses twisted pair anyway, right? (The fiber and TX4 cards don't offer any choices, so there is no probing needed for them.)
If you have such a COMBO card, you can force the media using a new "driver private" property called "_media", which you can set to various values. See the driver sources in the webrev for more information.
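For illustration, setting a driver private property like this is normally done in the driver's .conf file. The property name comes from the post, but the value shown here is a guess at a plausible token; check the driver sources in the webrev for the actual accepted strings:

```
# /kernel/drv/elxl.conf -- force a specific media type on a COMBO card.
# "_media" is the property described above; the value is a placeholder.
_media="tp";
```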
I've done enough work on this driver that there is probably at least as much of my own code in it as was in the original NetBSD code. Nonetheless, I'd like to thank the NetBSD Foundation for making these sources available.
I'll be posting binaries soon, stay tuned. (Sun internal users can grab the binaries from /net/temecula.sfbay/data/work/gdamore/yge/yge/usr/src/uts/intel/elxl/ -- at least for now. That path is likely to work until I get the code integrated.)
Friday, January 1, 2010
Out with the Old, In with the New
Happy New Year (2010) everyone!
I thought I'd take a second to reflect on the accomplishments of the past year, and look forward to what I think is in store for my contributions to OpenSolaris this year.
It's hard to believe that I've been a member of the OpenSolaris community for over 5 years now. (I was a pilot member.)
Undoubtedly this past year my biggest contribution to OpenSolaris was the new audio framework (Boomer) and many new audio drivers.
I did a lot of other work besides, including a bunch of work on NIC drivers (including a new common MII framework and the yge driver which supports Marvell Yukon 2 parts), and various changes to the SDcard framework.
I've also developed a device driver for a very interesting (and very high performance) hybrid storage device (which won't be integrating for non-technical reasons), and a new storage framework for block oriented storage devices. (This framework, blkdev, will be integrating once we're out past the build restrictions.)
I also did a lot of cleanup of legacy and stale code, which hopefully reduces the install footprint and shrinks compile times somewhat.
2009 was also the year I became a full voting member of PSARC, and I was privileged to serve 3 months as PSARC chair. (The chair rotates amongst all active PSARC members on a roughly quarterly schedule.)
This past year is also the year that I became the top contributor to ON in separate integrations, since the OpenSolaris project started, at least as reported by ohloh.net. (Note that the statistics only cover the open portion of the ON consolidation.)
So what's coming up in the next year? Here's what I expect to be working on:
- 10GbE Ethernet for Mellanox ConnectX devices (hermon). This is probably my top priority at the moment. The work is largely being done by Mellanox, but I'm the Sun engineer ultimately responsible.
- Sun Ray audio. This is one of my biggest priorities. We want a Boomer driver for Sun Ray audio, bringing in-kernel mixing to Sun Ray appliances.
- Interrupt-less audio. This is a major rethink of the way we process audio in the kernel, and reduces a lot of complexity in device drivers and is an enabler for several other new features. The code for this is done, so expect an integration in b135 or 136.
- Better audio hotplug support. (Especially for USB audio.)
- Better audio virtualization support (especially with Trusted Extensions.)
- Additional audio device support. (Asus Xonar, via audiocmihd, for example.)
- Integration of blkdev, and hopefully faster, simpler, better kernel support for simple block oriented flash-storage devices. (I.e. those devices that don't natively understand the SCSI command set.)
- Fixes for a variety of network device driver bugs. I've already got a few of these changes queued up.
- Broader support for the MII framework in other NIC drivers. (I have changes queued up for rtls, already, for example.)
- Further cleanup of not-needed legacy code.
- Continued contributions and participation at PSARC.
- Support for SDXC card media (and possibly also development of exFAT filesystem code, dependent on licensing concerns.)
- Possibly work on various track pad bits, depending on time and resource. (Synaptics support finally?)
One thing is that I feel very privileged to have been able to work on the OpenSolaris code base, and to continue to be able to do so. I often reflect that it's amazing that I get paid to do this work -- I think I'd be far less productive if I didn't enjoy my job so much. When my manager asks me to take a break and do something fun for a change, I think he has a hard time understanding that for me, hacking on OpenSolaris code is fun. I genuinely hope that I continue to be so privileged for the foreseeable future.