Space aficionados may be aware that President Obama has canceled the previous administration's "Vision for Space Exploration", which consisted of the Constellation program including Ares I, Ares V, and Orion. This has been fairly well covered in the mainstream media.
Critics of the Constellation program raise some significant and relevant objections to it.
However, I strongly believe that as a nation, we need a national space program that includes human spaceflight beyond low Earth orbit. The cancellation of Constellation, while perhaps with good cause, has left our national space program with a vacuum -- the lack of a heavy lift vehicle, and lack of any vision, would effectively constrain human exploration to LEO for a generation. Furthermore, it significantly constrains the kinds of activities that we can perform in LEO.
It's my belief that this is short-sighted in the extreme.
We need a space program that includes vehicles with the ability to loft large payloads into orbit. Projects like the International Space Station, and further commercialization of space, are only possible with the ability to loft a significant payload into orbit.
We also need to plan for human exploration beyond our front porch. While many people argue that sending robotic explorers is less risky, and far cheaper, the idea that we can or should abdicate all future space endeavors to robotic missions is actually offensive.
Robots won't inspire a generation of students to continue to excel at math and science. Robots can't stand in as national heroes. And robots alone won't help develop the enthusiasm required for the general public to continue to want to invest in space and space technologies. Robotic exploration is mostly a solved problem -- many new technologies that are necessary for human space travel will simply not be invented or invested in, without the "problems" to solve that are involved in human space exploration.
I'm from a generation of kids who viewed astronauts as near personal heroes; I dreamed, and still dream, of being able to see our planet from space itself one day. I dream of the days when human kind steps beyond just Earth, and has outposts on the Moon, Mars, the asteroids, and perhaps other interesting places in the solar system. And someday beyond.
My own son dreams someday of being an astronaut and visiting Mars. Unfunded as it was, at least VSE allowed a glimmer of such a hope. Obama has killed that hope, and maybe the dreams and hopes of thousands or millions of other like-minded kids.
Fortunately, there is a proposal that would revive these dreams: one that retains a national heavy lift capability, preserves much of the knowledge and expertise we acquired with the successful STS (space shuttle) program (even reusing a significant amount of its materials and technology), and offers a "way forward" that would get us beyond LEO and on to interesting places elsewhere in the solar system. The DIRECT v3.0 proposal is IMO the best way forward; it lets us have our cake and eat it too, giving us all the heavy lift capability we need while minimizing the significant economic impact created both by the Constellation program and by the cancellation of the STS and Constellation programs.
I firmly believe that we are on the cusp of a major economic shift, where commercialization of space may play as important a role in the coming decade or two as the Internet has played in the previous two. The question is, will we as a nation continue to develop that potential, or will we let it slip away, to be picked up by India, China, or Russia?
Yes, I'm an American. And I believe that it is important for America to be a leader in the exploration and utilization of space. Ultimately, I believe that "planting flags" is much more important than the proponents of solely robotic exploration would have us believe. Someday people will visit Mars. Will America be there, or will we just be an observer while one of the Asian nations celebrates a major achievement?
Monday, March 15, 2010
Wednesday, March 3, 2010
ON IPS surprisingly easy
So I had an EOF RTI in the queue when the ON IPS integration happened last night.
Of course, this totally whacked my packaging changes, and I had to modify them. Making the changes was quite easy. Here's the old, and the new version of the changes. It's actually fewer files to update under IPS.
I was dreading retesting. Dealing with distro construction sounded "painful".
I needn't have worried. In the tools directory there is this neat tool called "onu" (on-update I guess?)
I had to load a machine with b133 to set up a baseline, but we have a nice way to do that internally via our internal infrastructure and AI. It boils down to running one command on an install server, then doing "boot net:dhcp - install" at the OBP prompt. (Yes, this is a SPARC system.)
It took a little bit for it to install, but less than an hour.
Then, after rebooting and getting the initial settings on the system, it was just a simple matter of "onu -d ${ws}/packages/sparc/nightly-nd" to update it. This took a while (20-30 minutes, I wasn't counting). Eventually the system was up and ready for business. Easier than bfu. Amazing.
Thanks to the IPS team! I can't wait for bfu to finally go away.
Sunday, February 21, 2010
Funny Ancient Software
I just found out that Ubuntu has been shipping (since version 6.06 -- Dapper Drake, I think it was called -- and apparently all the way into the forthcoming 10.x LTS version) a program I wrote nearly two decades ago as a student -- vtprint -- and yes, that link points to manual text I wrote way back when.
(This program, "vtprint", was for use with printing from a UNIX shell prompt, when you don't have a better way to move files around. Back then we used commands like "kermit" to connect to a UNIX server from our PC over a 2400 or 9600 baud modem -- and well before PPP or even SLIP.)
I haven't used vtprint since about 1995, but it's funny to still see it kicking around. Too bad the docs still have an old e-mail address for me at SDSU.... I guess nobody has needed a bug fix for it for some time.
Monday, February 15, 2010
Congratulations to BMW Oracle Racing
If you're involved in the sailing community, you'll already know that Larry Ellison, who's now ultimately my boss, had put together a team to challenge for the America's Cup. They won this weekend, bringing the America's Cup back home to America, and I'm enormously proud of Ellison and his team -- as an American, as a sailor, and now as an Oracle employee.
Friday, February 12, 2010
Open Development
Note: I'm posting this on my personal blog, which as always is a reflection of my own thoughts and in no way represents any official policy from my employer (whoever that may be).
Now that we former Sun employees (for the most part) are part of a larger company, there have been some questions about how much of Sun's trend towards open development will continue (particularly where Solaris/OpenSolaris is concerned).
(I want to separate the concern of Open Source -- where source code is made available for products after they are released -- from Open Development -- where the product is developed in the open.)
Many of us who were part of that acquisition are wondering the same things. Officially, the word is "no changes in what we're doing", but unofficially there's an atmosphere that our new employer places a greater emphasis on commercial profitability and a lesser emphasis on things like "including the community."
Speaking abstractly, there are risks to any open development effort, particularly when the effort is intended to be supportive of a commercial endeavor. The risks range from enabling competitors with early information, to forestalling customer purchases of bits today as customers wait for the new feature that's being developed in the open, to simply diluting the impact that "surprise" delivery of a new product or feature can make.
Certainly there seems to be some evidence that Oracle may have a greater concern about the costs and risks associated with "early disclosure" than Sun did. No matter how passionately one may believe in Open Development, nobody can deny that there are real costs and risks associated with open development.
So the challenge for Oracle is to figure out what the balance is that makes commercial sense.
Ultimately profit is the primary responsibility of any publicly traded company.
The challenge for the community is to figure out how to provide commercial justification to Oracle for the open development that the community likes to see.
If you want to retain truly Open Development (which includes public posting of webrevs for stuff developed internally, open ARC reviews, and public design discussions on mailing lists and similar fora) of Solaris and OpenSolaris, this is a call out to you.
Have you or your company made a significant Sun purchase? Has open development (as opposed to open source) influenced your decision? How and why? Will open development influence future purchasing decisions? If you can, put a number to the value of open development, and provide that information to your sales reps or post it publicly.
The decision makers need to see value in the practice of open development if they're going to continue to support it.
Again, I'm only talking about open development, not about open source.
As an aside, I don't think statements coming from community contributors without the support of purchasing dollars are likely to carry much weight with Oracle decision makers. I believe that if you look at the contributions from the community-at-large in OpenSolaris, you'll find that the meaningful contributions have been fairly small and generally of little commercial interest, and have always required additional engineering expense from Sun. So "leveraging" the community for development has not, IMO, been a gamble that has yielded dividends -- at least not from the perspective of either Sun or Oracle.
Thursday, February 4, 2010
Scalability FUD
Yesterday I saw yet another round of the Linux vs. Solaris scalability debate. The Linux fans were loudly proclaiming that the claim of Solaris' superior scalability is FUD, given evidence like the Cray XT class of systems, which run Linux across thousands of processors.
The problem with comparing (or even considering!) the systems in the Top500 supercomputers when talking about "scalability" is simply that those systems are irrelevant for the typical "scalability" debate -- at least as it pertains to operating system kernels.
Irrelevant?! Yes. Irrelevant. Let me explain.
First, one must consider the typical environment and problems dealt with in the HPC (High Performance Computing) arena. HPC workloads are scientific problems that are usually fully compute bound. That is to say, they spend the huge majority of their time in "user" and only a minuscule amount of time in "sys". I'd expect to find very, very few calls to inter-thread synchronization (like mutex locking) in such applications.
Second, these systems are used by users who are willing to, expect to, and often need to write custom software to deal with highly parallel architectures. The software deployed into these environments is tuned for situations where the synchronization cost between processors is expected to be "relatively" high. Granted, the architectures still attempt to minimize such costs, using very highly optimized message passing busses and the like.
Third, many of these systems (most? all?) don't actually run a single system image. There is not a single universally addressable memory space visible to all processors -- at least not without high NUMA costs requiring special programming to deliver good performance, and frequently not at all. In many ways, these systems can be considered "clusters" of compute nodes around a highly optimized network. Certainly, programming systems like the XT5 is likely to be similar in many respects to programming software for clusters using more traditional network interconnects. An extreme example of this kind of software is SETI@home, where the interconnect (the global Internet) is extremely slow compared to the compute power.
So why does any of this matter?
It matters because most traditional software is designed without NUMA-specific optimizations, or even cluster-specific optimizations. More traditional software used in commercial applications like databases, web servers, business logic systems, or even servers for MMORPGs spends a much larger percentage of its time in the kernel, either performing some fashion of I/O or inter-thread communication (including synchronization like mutex locks and such).
Consider a massive non-clustered database. (Note that these days many databases are designed for clustered operation.) In this situation, there will be some kind of central coordinator for locking and table access, and such, plus a vast number of I/O operations to storage, and a vast number of hits against common memory. These kinds of systems spend a lot more time doing work in the operating system kernel. This situation is going to exercise the kernel a lot more fully, and give a much truer picture of "kernel scalability" -- at least as the arguments are made by the folks arguing for or against Solaris or Linux superiority.
Solaris aficionados claim it is more scalable in handling workloads of this nature -- that a single SMP system image supporting traditional programming approaches (e.g. a single monolithic process made up of many threads) will experience better scalability on a Solaris system than on a Linux system.
I've not measured it, so I can't say for sure. But having been in both kernels (and many others), I can say that the visual evidence from reading the code is that Solaris seems like it ought to scale better in this respect than any of the other commonly available free operating systems. If you don't believe me, measure it -- and post your results online. It would be wonderful to have some quantitative data here.
Linux supporters, please, please stop pointing at the Top500 as evidence for Linux claims of superior scalability though. If there are some more traditional commercial kinds of single-system deployments that can support your claim, then let's hear about them!
Missing audio packages
I have learned that at least two packages, SUNWaudioemu10k and SUNWaudiosolo, are not part of the "standard" ("entire?") install of OpenSolaris b131. If you're looking for either of these, you should do "pfexec pkg install SUNWaudioemu10k" or "pfexec pkg install SUNWaudiosolo" respectively.
Hopefully we'll get this sorted out before the next official release.
Update: Apparently (according to the expert I talked to) this problem only affects systems updating with pkg image-update. If you install a fresh system, the audio packages should be installed.