Tuesday, January 23, 2018

Why I'm Boycotting Crypto Currencies

Unless you've been living under a rock somewhere, you probably have heard about the crypto currency called "Bitcoin".  Lately it's skyrocketed in "value", and a number of other currencies based on similar mathematics have also arisen.  Collectively, these are termed cryptocurrencies.

The idea behind them is fairly ingenious: by solving "hard" problems (in the mathematical sense), the currency can limit how many "coins" are introduced into the economy.  Both the math and the social experiment behind them look really interesting on paper.
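To make the idea concrete, here's a toy proof-of-work sketch in Go.  It is purely illustrative (not any real coin's algorithm): a candidate "coin" is accepted only if its SHA-256 hash has a required number of leading zero bits, and each extra bit of required difficulty roughly doubles the expected work needed to find one.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts the leading zero bits of a 256-bit hash.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

func main() {
	// Require 20 leading zero bits (~1 million hashes on average).  Real
	// currencies raise the difficulty over time, which is why the work
	// (and energy) per coin keeps growing.
	const difficulty = 20

	data := []byte("toy block data")
	buf := make([]byte, len(data)+8)
	copy(buf, data)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(data):], nonce)
		h := sha256.Sum256(buf)
		if leadingZeroBits(h) >= difficulty {
			fmt.Printf("found nonce %d, hash %x\n", nonce, h)
			return
		}
	}
}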

The problem is that the explosion of value has created a number of problems, and as a result I won't be accepting any of these forms of currencies for the foreseeable future.

First, the market for each of these currencies is controlled by a relatively small number of individuals who own a majority of the outstanding "coins".  By colluding, these individuals can generate "fake" transactions, which appear to drive up demand for the coins and thus lead to a higher "value" (in terms of what people might be willing to pay).  This is a "bubble", and the bottom will fall right out if enough people try to sell their coins for hard currency.  As a result, I believe that the value of the coins is completely artificial, and while a few people might convert some of these coins into hard cash for a nice profit, the majority of coin holders are going to be left out in the cold.

Second, the "cost" of performing transactions for some of these currencies is becoming prohibitively expensive.  With most transactions of real currency, its just a matter of giving someone paper currency, or running an electronic transaction that normally completes in milliseconds.  Because of the math associated with cryptocurrencies, the work to sign block chains becomes prohibitive, such that for some currencies transactions can take a lot of time -- and processors are now nearly obliged to charge what would be extortionary rates just to cover their own costs (in terms of electricity and processing power used).

The environmental impact, and monumental waste, caused by cryptocurrencies cannot be overstated.  We now have huge farms of machines running, consuming vast amounts of power, performing no useful work except to "mine" coins.  As time goes on, the amount of work needed to mine each coin grows significantly (an intentional aspect of the coin design), but what this means is that we are burning large amounts of power (much of which is fossil-fuel generated!) to perform work that has no useful practical purpose.  Some might say something similar about mining precious metals or gems, but there are many, many real practical applications for metals like gold, silver, and platinum, and for gems like diamonds and rubies as well.

Finally, as anyone who wants to build a new PC probably realizes, computing hardware, and specifically "GPUs" (graphical processing units, which can also be used to solve many numerical problems in parallel), has increased in cost dramatically -- consumer grade GPUs are generally only available today for about 2x-3x their MSRPs.  This is because the "miners" of cryptocurrencies have snapped up every available GPU.  The upshot is that this hardware has become prohibitively expensive and scarce for hobbyists and professionals alike.  Indeed, much of this hardware would be far, far better used in HPC arenas, where it could be applied to real-world problems, like genomic research towards finding a cure for cancer, or protein folding, or any number of other interesting and useful problems whose solutions would benefit mankind as a whole.  It would not surprise me if a number of new HPC projects have been canceled or put on hold simply because the supply of suitable GPU hardware has been exhausted, putting some of those projects out of budget reach.

Eventually, when the bottom does fall out of those cryptocurrencies, all that GPU hardware will probably wind up filling land-fills, as many people won't want to buy used GPUs, which may (or may not) have had their lifespans shortened.  (One hopes that at least the eWaste these cause will be recycled, but we know that much eWaste winds up in landfills in third world countries.)

Crypto-currency mining is probably one of the most self-serving and irresponsible (to humanity and our environment) activities one can engage in today, while still staying within the confines of the law (except in a few jurisdictions which have sensibly outlawed cryptocurrencies).

It's my firm belief that the world would be far better off if crypto-currencies had never been invented.

Wednesday, November 22, 2017

Small Business Accounting Software Woes

I'm so disappointed with the online accounting software options available to me, and I've spent far, far too much time in the past couple of days looking for an accounting solution for my new business. The current state of affairs makes me wonder if just using a spreadsheet might be as easy.

I am posting my experiences here for two reasons.
  1. To inform others who might have similar needs, and
  2. To inform the hopefully smart people at these companies, so maybe they will improve their products.
Let me start with a brief summary of my needs:

  • Track time (esp. billable hours)
  • Tracked time should include date, and project/client, and some description of work performed.
  • Multiple currency support. I have international clients that I need to bill in their preferred currency.
  • Invoicing and payment tracking for above.
  • Payroll -- preferably integrated with someone like Gusto.
  • Support for two employees with plans to grow. 
  • Double-entry accounting (including bank reconciliation) for my accountant.
  • Affordable -- I'm a small business owner.
That's it. Nothing super difficult, right?  You'd think there would be dozens of contenders who could help me.

You'd be wrong.

Here's what I looked at, and their deficiencies:

Freshbooks 



I really like most of what Freshbooks has to offer, and this was my starting point. Super easy to use, an integration with Gusto, and their invoicing solution is super elegant. Unfortunately, their lack of reconciliation and double-entry accounting (or any of the other "real" accounting stuff) disqualifies them. Adding to the problem, I already use them for my personal consulting business (where I've been a happy user), and they don't have support for multiple businesses on their "Classic Edition".

Then there is the whole confusion between "New Freshbooks" and "Classic Freshbooks".

This is a company that states they intend to keep two separate software stacks, with about 90% functionality overlap, running ~forever. Why? Because the classic edition has some features (and some integrations) that the new one lacks. (I've been informed that my use patterns indicate that I should stay on the "Classic" edition forever because of my heavy use of Time Tracking.) Some of us with real world software engineering experience know how costly and hateful it is to have multiple simultaneous versions of a product in production. Freshbooks' approach here, with no plans to merge the functionality, is about the most boneheaded decision I've seen engineering management take.

Being stuck on the "Classic Edition" makes me feel like a loser, but really it's a sign that their own product is the loser.  I have to believe at some point one product or the other is going to be a dead end.

Quickbooks Online


This is a product that is well recommended, and probably one of the most widely used. It has so much capability. It also lacks the "hacked together by a bunch of different engineering teams that didn't talk to each other" feeling that their desktop product has. (Yes, I have experience with Quickbooks Pro, too. Sad to say.)  It's probably a good thing I can't look at their code behind the curtain.

The biggest, maybe even only, failing they have for my use case is their inability to bill clients in a different currency. Wait, they are multicurrency capable, right?  Uh, no they aren't. If I can't record my billable hours against a client in another country in their preferred currency, then whatever you think your "multicurrency" support is doesn't count. I have international clients that demand billing in their local currency.  So this is a non-starter for me. This feature has been requested before, and they have ignored it. Major, and honestly unexpected, fail.

Cost wise they aren't the cheapest, but this one feature absence is a show stopper for me, otherwise I'd probably have settled here.

Xero


Xero is another of the main companies, and in Gartner's magic quadrant as their leader in the sector. I didn't actually try them out -- though I did research. Their shortcomings for me were: price (multi-currency support requires me to pay $70 / month, which is about 2x all the others), and lack of time tracking. Sure, I can add an integration from some other company like Tsheets, for another $20 / month. But now this solution is like 3x the cost of everyone else.

One feature that Xero includes for that $70 is payroll processing -- but only for a handful of states (California is one), and I can't seem to find any reviews for folks who have used them.   If I want to use an outside company with a longer track record and broader coverage across states, like SurePayroll or Gusto or ADP, I will wind up paying double.

If Xero would change their menu somewhat (make it ala carte), we'd be able to work together. Let me integrate with Gusto, and not have to pay exorbitant fees for multi-currency support. Add time tracking and it would be even better.

Arguably I could stop being such a penny pincher, and just go with Xero + Tsheets or somesuch. Outside of the crazy expensive options for companies that can afford a full time accountant (Sage, NetSuite, looking at you!), this was the most expensive option.  I'd also have to use Xero's payroll service, and I'm not sure I want to trust payroll to a service with so little track record.

ZipBooks


At first blush, ZipBooks looked like a great option. On paper they have everything I need -- they even partnered with Gusto, and claim to have multicurrency support.  Amazingly, they are even free.  Of course, if you elect to use some of their add-ons, you pay a modest fee, but from a pure price perspective, this looks like the cheapest.

Unfortunately, as I played with their system, I found a few major issues. Their multi-currency support is a bit of an inconvenient joke. They don't let you set a per-client currency. Instead you change the currency for the entire account, then generate invoices in that currency (or accept payments), then have to switch back to the home currency. This is account wide, so you'd better not have more than one person accessing the account at a time. The whole setup feels really hinky, and to be honest I just don't trust it.

Second, their bank integration is (as of today) broken -- meaning the website gives me conflict errors before I can even select a bank (I wanted to see if my business bank -- a smaller regional bank -- is on their list). So, not very reliable.

Finally, their support is nearly non-existent. I sent several questions to them through their on-line support channel, and got back a message "ZipBooks usually responds in a day". A day. Other companies I looked at took maybe 10-20 minutes to respond -- I still have not received a response from ZipBooks.

I need a service that supports real multicurrency invoicing, is reliable, and with reachable support. Three strikes for ZipBooks.  Damn, I really wanted to like these guys.

Kashoo


Kashoo was well reviewed, but I had some problems with them. First, their only payroll integration is with SurePayroll. I hate being locked in, although I could probably overlook this. Second, they don't have any time tracking support. Instead they partner with Freshbooks, but only the "Classic Edition" (and apparently have no plans to support the "New Freshbooks").  A red flag.

And, that brings in the Freshbooks liability (only one company, so I can't have both my old consulting business and this new one on the same iOS device for example), and I'd have to pay for Freshbooks service too.

On the plus side, the Kashoo tech support (or pre-sales support?) was quite responsive.  I don't think they are far off the mark.

Wave Accounting 


Wave is another free option, but they offer payroll (although full service only in five states) as an add-on.  (They also make money on payment processing, if you use that.)  Unfortunately, they lack integrations, time tracking, and multi-currency support.  I'd like to say close but no cigar, but really in this case, it's just "no cigar".  (I guess you get what you pay for...)

Zoho Books


Zoho Books is another strong option, well regarded.  So far, it seems to have everything I need except any kind of payroll support.  I'd really love it if they would integrate with Gusto.  I was afraid that I would need to set up with Zoho Projects and pay another service fee, but it looks -- at least so far from my trial -- like this won't be necessary.

So my feature request is for integration with Gusto.  In the meantime, I'll probably just handle payroll expenses by manually copying the data from Gusto.

Conclusion


So many, so close, and yet nothing actually hits the mark.  (These aren't all the options I looked at, but they are the main contenders.  Some weren't offered in the US, or were too expensive, or were self-hosted.)  For now I'm going to try Zoho.  I will try to update this in a few months when I have more experience.

Updates: (As of Nov. 30, 2017) 


  1. Zoho has since introduced Zoho Payroll, and they contacted me about it.  It's only available for California at this time, and has some restrictions.  I personally don't want to be an early adopter for my payroll processing service, so I'm going to stick with Gusto for now.   Zoho's representative did tell me that they welcome other payroll processing companies to develop integrations for Zoho Books.   I hope Gusto will take notice.
  2. ZipBooks also contacted me.  They apologized for the delays in getting back to me -- apparently their staff left early for Thanksgiving weekend.  They indicated that they have fixed whatever bug caused me to be unable to link my bank account.  Their COO also contacted me, and we had a long phone call, mostly to discuss my thoughts and needs around multi-currency support.  I'm not quite ready to switch to them, but I'll keep a close eye on them.  They do need to work to improve their initial customer service experience, in my opinion.
  3. It looks like my own multi-currency needs may be vanishing, as my primary external customer has agreed to be billed and to pay in USD.  That said, I want to keep the option open, as I may have other international customers in the future.
  4. None of the other vendors reached out to me, even though I linked to them on Twitter.  The lack of response itself is "significant" in terms of customer service, IMO. 

Tuesday, November 14, 2017

TLS close-notify .... what were they thinking?

Close-Notify Idiocy?


TLS (and presumably SSL) require that implementations send a special disconnect message, "close-notify", when closing a connection.  The precise language (from TLS v1.2) reads:

The client and the server must share knowledge that the connection is
ending in order to avoid a truncation attack. Either party may
initiate the exchange of closing messages. 
close_notify 
This message notifies the recipient that the sender will not send
any more messages on this connection. Note that as of TLS 1.1,
failure to properly close a connection no longer requires that a
session not be resumed. This is a change from TLS 1.0 to conform
with widespread implementation practice. 
Either party may initiate a close by sending a close_notify alert.
Any data received after a closure alert is ignored. 
Unless some other fatal alert has been transmitted, each party is
required to send a close_notify alert before closing the write side
of the connection. The other party MUST respond with a close_notify
alert of its own and close down the connection immediately,
discarding any pending writes. It is not required for the initiator
of the close to wait for the responding close_notify alert before
closing the read side of the connection.

This has to be one of the stupider designs I've seen.

The stated reason for this is to prevent a "truncation attack", where an attacker terminates the session by sending a clear-text disconnect (TCP FIN) message, presumably just before you log out of some sensitive service, say GMail.

The stupid thing here is that this exists for web apps that want to send a logout, and don't want to wait for confirmation that the logout actually occurred before confirming it to the user.  So this logout is unlike every other RPC.  What...?!?

Practical Exploit?


It's not even clear how one would use this attack to compromise a system... an attacker won't be able to hijack the actual TLS session unless they already pwned your encryption.  (In which case, game over, no need for truncation attacks.)  The idea in the truncation attack is that one side (the server?) still thinks the connection is alive, while the other (the browser?) thinks it is closed.  I guess this could be used to cause extra resource leaks on the server... but that's what keep-alives are for, right?

Bugs Everywhere


Of course, close-notify is the source of many bugs (pretty much none of them security critical) in TLS implementations.  Go ahead, Google... I'll wait...  Java, Microsoft, and many others have struggled in implementing this part of the RFC.

Even the TLS v1.1 authors recognized that "widespread implementation practice" is simply to ignore this part of the specification and close the TCP channel.

So you may be asking yourself, why don't implementations send the close-notify ... after all sending a single message seems pretty straight-forward and simple, right?

Semantic Overreach


Well, the thing is that on many occasions, the application is closing down.  Historically, operating systems would just close() their file descriptors on exit().  Even for long running applications, the quick way to abort a connection is ... close().  With no notification.  Application developers expect that close() is a non-blocking operation on network connections (and most everywhere else)1.

Guess what: you now cannot exit your application without sending this message, or you're breaking the RFC.   That's right, this RFC changes the semantics of exit(2).  Whoa.

That's a little presumptive, dontcha think?

Requiring implementations to send this message means that close() now grows some kind of new semantic, where the application has to stop and wait for the message to be delivered.  Which means TCP has to be flowing and healthy.  The only other RFC-compliant behavior is to block and wait for it to flow.

What happens if the other side is stuck, and doesn't read, leading to a TCP flow control condition?  You can't send the message, because the kernel TCP code won't accept it -- write() would block, and if you're in a non-blocking or event driven model, the event will simply never occur.  Your close() now blocks forever.

Defensively, you must insert a timeout somehow -- in violation of the RFC -- otherwise your TCP session could block forever.  And now you have to contemplate how long to hold the channel open.  You've already decided (for whatever other reason) to abort the session, but now you have to wait a while ... how long is too long?  And meanwhile this open TCP connection sits around consuming buffer space, an open file descriptor, and perhaps other resources....
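For what it's worth, the defensive workaround is short; here's a minimal Go sketch (assuming conn is a *tls.Conn -- Go's Close does attempt to deliver the close_notify alert): bound the time you're willing to spend flushing the alert, then tear the connection down regardless.

package main

import (
	"crypto/tls"
	"time"
)

// closeWithTimeout attempts to deliver close_notify, but refuses to block
// forever if the peer's TCP window is full.  The deadline is an arbitrary
// policy choice -- exactly the "how long is too long?" question above.
func closeWithTimeout(conn *tls.Conn, d time.Duration) error {
	_ = conn.SetWriteDeadline(time.Now().Add(d))
	return conn.Close() // sends close_notify if it can, then closes the TCP connection
}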

A Bit of Sanity


The sensible course of action, treating a connection abort for any reason as an implicit close notification, was simply "not considered" from what I can tell.

In my own application protocols, when using TLS, I may violate this RFC with prejudice. But then I also am not doing stupid things in the protocol like TCP connection reuse.  If you close the connection, all application state with that connection goes away.  Period.  Kind of ... logical, right?

Standards bodies be damned.

1. The exception here is historical tape devices, which might actually perform operations like rewinding the tape automatically upon close(). I think this semantic is probably lost in the mists of time for most of us.

Wednesday, November 8, 2017

CMake ExternalProject_add In Libraries

First off, I'm a developer of open source application libraries, some of which are fairly popular.

TLDR: Library developers should not use ExternalProject_Add, but should instead rely on find_package(), demanding that their downstream developers pre-install their dependencies.

I recently decided to try to add TLS v1.2 support to one of my messaging libraries, which is written in C and configured via CMake.



The best way for me to do this -- so I thought -- would be to add a dependency in my project using a sub project, bringing in a 3rd party (also open source) library -- Mbed TLS.

Now the Mbed TLS project is also configured by CMake, so you'd think it would be relatively straightforward to include their work in my own.  You'd be mistaken.

CMake includes a capability for configuring external projects, even downloading their source code (or checking it out via git), called ExternalProject.

This looks super handy -- and it almost is.  (And for folks using CMake to build applications I'm sure this works out well indeed.)

Unfortunately, this facility needs a lot of work still -- it only runs at build time, not configuration time.

It also isn't immediately obvious that ExternalProject_Add() just creates the custom target, without making any dependencies upon that target.  I spent a number of hours trying to understand why my ExternalProject was not getting configured.  Hip hip hurray for CMake's amazing debugging facilities... not.  It's sort of like trying to debug some bastard mix of m4, shell, and Python.  Hint: Add_Dependencies() is the clue you need -- may this knowledge save you the hours its lack cost me.  Otherwise, enjoy the spaghetti.
Bon Appétit, CMake lovers!

So once you're configuring the dependent library, how are you going to link your own library against the dependent?

Well, if you're building an application, you just link (hopefully statically), have the link resolved at compile time, and forget about it forever more.

But if you're building a library the problem is harder.  You can't include the dependent library directly in your own.  There's no portable way to "merge" archive libraries or even dynamic libraries.

Basically, your consumers are going to be stuck having to link against the dependent libraries as well as your own (and in the right order too!)  You want to make this easier for folks, but you just can't. 
(My kingdom for a C equivalent to the Golang solution to this problem.  No wonder Pike et al. got fed up with C and invented Go!)

And Gophers everywhere rejoiced!

Making matters worse, the actual library (or more, as in the aforementioned TLS software there are actually 3 separate libraries -- libmbedcrypto, libmbedx509, and libmbedtls) is located somewhere deeply nested in the build directory.   Your poor consumers are never gonna be able to figure it out.

There are two solutions:

a) Install the dependency as well as your own library (and tell users where it lives, perhaps via pkgconfig or somesuch).

b) Just forget about this and make users pre-install the dependency explicitly themselves, and pass the location to your configuration tool (CMake, autotools, etc.) explicitly.

Of these two, "a" is easier for end users -- as long as the application software doesn't also want to use functions in that library (perhaps linking against a *different* copy of the library).  If this happens, the problem can become kind of intractable to solve.

So, we basically punt, and make the user deal with this.  Which, these days, is handled for many systems by packaging systems like Debian's apt, pkg-add, and brew.

After having worked in Go for so long (and admittedly in kernel software, which has none of these silly userland problems), the current state of affairs here in C is rather disappointing.

Does anyone out there have any other better ideas to handle this (I mean besides "develop in Y", where Y is some language besides C)?

Licensing... again....

Let me start by saying this... I hate the GPL.  Oh yeah, and a heads up, I am just a software engineer, and not a lawyer.  Having said that....

I've released software under the GPL, but I never will again.  Don't get me wrong, I love open source, but the GPL's license terms are unaccountably toxic, creating an island that I am pretty sure the original GPL authors never intended.


My Problem....


So I started by wanting to contemplate a licensing change for a new library I'm working on, to move from the very loose and liberal MIT license to something with a few characteristics I like -- namely patent protection and a "built-in" contributor agreement.  I'm speaking, of course, of the well-respected Apache License 2.0.

The problem is, I ran into a complete and utter roadblock.

I want my software to be maximally usable by as many folks as possible.

There is a large installed base of software released under the GPLv2.  (Often without the automatic upgrade clause.)

Now I'm not a big fan of "viral licenses" in general, but I get that folks want a copyleft that prevents others from including their work in closed source projects.  It's not an entirely unreasonable position to hold, even if I think it limits adoption of such licensed software.

My problem is that the GPLv2's terms are incredibly strict, prohibiting any other license terms being applied by any other source in the project.  This means that you can't mix GPLv2 with pretty much anything else, except the very most permissive licenses.  The Apache License's patent grant & protection clauses break compatibility with the GPLv2.  (In another, older circumstance, the CDDL had similar issues, which block ZFS from being distributed with the Linux kernel proper.  The CDDL also had a fairly benign choice-of-venue clause for legal action, which was also deemed incompatible with the GPLv2.)

So at the end of the day, GPLv2 freezes innovation and has limited my own actions because I would like to enable people who have GPLv2 libraries to use my libraries.  We even have an ideological agreement -- the FSF actually recommends the Apache License 2.0!  And yet I can't use it; I'm stuck with a very much inferior MIT license in order to let GPLv2 folks play in the pool.

Wait, you say, what about the GPLv3?  It fixed these incompatibilities, right?   Well, yeah, but then it went and added other constraints on use which are even more chilling than the GPLv2's.  (The anti-Tivoization clause, which is one of the more bizarre things I've seen in any software license, applies only to equipment intended primarily for "consumer premises".  What??)

The GPL is the FOSS movement's worst enemy, in my opinion.  Sure, Linux is everywhere, but I believe that this is in spite of the GPLv2 license, rather than a natural byproduct of it.  The same result could have been achieved under a liberal license, or a file-based copyleft.

GPL in Support of Proprietary Ecosystems


In another turn of events, the GPL is now being used by commercial entities in a bait-and-switch.  In this scheme, they hook the developer on their work under the GPL.  But when the developer wants to add some kind of commercial capability and keep the source confidential, the developer cannot do that -- unless the developer pays the original author a fee for a special commercial license.  For a typical example, have a look at the WolfSSL license page.

Now all of that is fine and dandy, and legal as you please.  But, in this case, the GPL isn't being used to promote open source at all.  Instead, it has become an enabler for the monetization of closed source, and frankly leads to a richer proprietary software ecosystem.  I don't think this is what the original GPL authors intended.

Furthermore, because the author of this proprietary software needs to be able to relicense the code under commercial terms, they are very very unlikely to accept contributions from third parties (e.g. external developers) -- unless those contributors are willing to perform a copyright assignment or sign a contributor agreement giving the commercial entity very broad relicensing rights.

So instead of becoming an enabler for open collaboration, the GPL just becomes another tool in the pockets of commercial interests.

The GPL Needs to Die

If you love open source, and you want to enhance innovation, please, please don't license your stuff under the GPL unless you have no other choice.  If you can relicense your work under other terms, please do so!  Look for a non-viral license with the patent protections needed for both you and your downstreams.  I recommend either the Mozilla Public License (if you need a copyleft on your own code), or the Apache License (which is liberal but offers better protections than BSD or MIT or similar alternatives).

Monday, October 24, 2016

MacOS X Mystery (Challenge)

(Maybe my MacOS X expert friends will know the answer.)

This is a mystery that I cannot seem to figure out.  I think it's a bug in the operating system, but I cannot seem to find the solution, or even explain the behavior to my satisfaction.

Occasionally, a shell window (iTerm2) will appear to "forget" my identity.

For example:

% whoami
501

That's half right... The same command in another window is more correct:

% whoami
garrett

Further, id -a reports differently:

The broken window:

% id -a
uid=501 gid=20(staff) groups=20(staff),501,12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),33(_appstore),100(_lpoperator),204(_developer),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh)

The working one:

% id -a
uid=501(garrett) gid=20(staff) groups=20(staff),501(access_bpf),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),33(_appstore),100(_lpoperator),204(_developer),395(com.apple.access_ftp),398(com.apple.access_screensharing),399(com.apple.access_ssh)

It appears that the shell (and this broken behavior seems to be inherited by child shells, by the way) somehow loses the ability to map numeric Unix ids to login names.

So I tried another command:

% dscl . -read /Users/garrett
Operation failed with error: eServerError

The same works properly in my other window (I'm not posting the entire output, since it's really long).

I am wondering what could possibly be different.  The behavior doesn't seem to depend on environment variables (I've tried stripping those out).

I'm thinking that there is something in the process table (in the MacOS X equivalent of the uarea?) that gives me access to directory services -- and that this is somehow clobbered.  As indicated, whatever the thing is, it appears to be inherited across fork(2).

I thought maybe I could figure this out with DTrace or dtruss... but Apple has crippled DTrace on the platform, and this is one of those binaries that I am unable to introspect.  Arrgh!

sudo dtruss dscl . -read /Users/garrett
Password:
dtrace: system integrity protection is on, some features will not be available

dtrace: failed to execute dscl: dtrace cannot control executables signed with restricted entitlements

Btw, I'm running the latest MacOS X:

% uname -a
Darwin Triton.local 16.0.0 Darwin Kernel Version 16.0.0: Mon Aug 29 17:56:20 PDT 2016; root:xnu-3789.1.32~3/RELEASE_X86_64 x86_64

So, for my MacOS X expert friends -- anyone know how directory services really works?  (As in how it works under the hood?)  I don't think we're in UNIX land anymore, Toto!

Security Advice to IoT Firmware Engineers

Last Friday (October 21, 2016), a major DDoS attack brought down a number of sites across the Internet.  My own employer was amongst those affected by the widespread DNS outage.

It turns out that the sheer scale (millions of unique botnet members) was made possible by the IoT, and rather shoddy engineering practices.

It's time for device manufacturers and firmware engineers to "grow up", and learn how to properly engineer these things for the hostile Internet, so that they don't have to subsequently issue recalls when their customers' devices are weaponized by attackers without their owners' knowledge.

This blog is meant to offer some advice to firmware engineers and manufacturers in the hope that it may help them prevent their devices from being used in these kinds of attacks in the future.


Passwords


Passwords are the root of most of the problems, and so much of the advice here is about improving the way these are handled.


No Default Passwords


The idea of using a simple default user name and password, like "admin/admin", is a practice from the 90's, intended to make life easy for service personnel and to avoid having to manage many different passwords.  Unfortunately, this is probably the single biggest problem -- bad usernames and passwords.  It's far worse in an IoT world, where there are many thousands, or even millions, of devices that have the same user name and password.

The proper solution is to allocate a unique password to each and every device.  Much like we already manage unique MAC addresses, we need every device to have a unique password.  (Critically, the password must not be derived from the MAC address, though.)

My advice is to simply have a small amount of ROM that is factory burned with either a unique password, or a numeric key that can be used to create one.  (If you have enough memory to store a dictionary in generic firmware -- say 32k words -- you can get very nice human-manageable default passwords by storing just four 16-bit numbers, each representing an index into the dictionary.  That's only 15 bits of unique data per word, but 60 bits of total entropy, which is plenty to ensure that every device has its own password -- and it only requires storing a 64-bit random number in ROM.)

Then you have nice human-parseable passwords like "bigger-stampede-plasma-pandering".  These can be printed on the same sticker that MAC addresses are typically given.  (You could also accept a hexadecimal representation of the underlying 64-bit value, or just use that instead of human-readable passwords if you are unable to accommodate an English dictionary.  Devices localized for use in other countries could use locale-appropriate dictionaries as well.)
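A minimal sketch of that derivation in Go follows.  The word list here is a hypothetical placeholder (real firmware would bake in a curated 32,768-word dictionary), and the seed would come from the factory-burned ROM value rather than a constant.

package main

import "fmt"

// words stands in for a 32,768-entry (2^15) dictionary shipped in firmware.
var words = func() []string {
	w := make([]string, 1<<15)
	for i := range w {
		w[i] = fmt.Sprintf("word%05d", i) // placeholder entries
	}
	return w
}()

// defaultPassword derives a four-word default password from a factory-burned
// 64-bit random value.  Each word consumes 15 bits, for 60 bits of entropy.
func defaultPassword(seed uint64) string {
	p := ""
	for i := uint(0); i < 4; i++ {
		idx := (seed >> (i * 15)) & 0x7FFF // 15-bit index into the dictionary
		if i > 0 {
			p += "-"
		}
		p += words[idx]
	}
	return p
}

func main() {
	fmt.Println(defaultPassword(0x123456789ABCDEF0)) // seed would come from ROM
}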


Mandatory Authorization Delay


Second, IoT devices should inject a minimum delay after password authentication attempts (whether successful or not).  Just a few seconds is enough to substantially slow down dictionary attacks against poorly chosen end-user passwords.  (2 seconds means that only 1800 unique attempts can be performed per hour under automation; 5 seconds reduces that to 720.  It will be difficult to iterate a million passwords against a device that does this.)
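As a sketch (a hypothetical handler, not from any particular firmware), the key detail is that the delay applies to both the success and failure paths, so an attacker can't use timing to skip the penalty:

package main

import (
	"crypto/subtle"
	"time"
)

const authDelay = 2 * time.Second // caps automation at ~1800 attempts per hour

// checkLogin verifies a password attempt, always taking at least authDelay.
// A real device would compare against a stored hash rather than a plaintext
// password, but the pacing logic is the point here.
func checkLogin(supplied, expected string) bool {
	ok := subtle.ConstantTimeCompare([]byte(supplied), []byte(expected)) == 1
	time.Sleep(authDelay) // applied whether or not the attempt succeeded
	return ok
}

func main() {
	_ = checkLogin("password123", "bigger-stampede-plasma-pandering")
}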


Strong Password Enforcement


User-chosen passwords should not be a single dictionary word; indeed, the default should be to use a randomly generated password using the same dictionary approach as above (generate a 64-bit random number, break it into chunks, and index into a stock dictionary).  It may be necessary to provide an end-user override, but it should be somewhat difficult to get at, and when activated it should display large warnings about the compromise to security that user-chosen passwords typically represent.


Networks


Dealing with the network, and securing the use of the network, is the other part of the problem that IoT vendors need to get right.


Local Network Authentication Only


IoT devices generally know the network they are on; if the device has a separate management port or LAN-only port (like a WiFi router), it should, by default, only allow administrator access from that port.

Devices with only a single port, or that exist on a WiFi network, should by default prevent administrator access from "routed" networks.   That is, devices should not allow login attempts from a remote IP address that is not on a local subnet, by default.  While this won't stop every attack (especially on public WiFi networks), it makes attacking these devices from a global botnet, or managing them as part of a global botnet, that much harder.   (Again, there has to be a provision to disable this limitation, but it should present a warning.)
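One way to approximate "local subnet only" is sketched below in Go (illustrative only; a real device would hook this into its management listener and still offer an explicit override): reject any peer address that doesn't fall within a subnet directly attached to one of the device's own interfaces.

package main

import (
	"fmt"
	"net"
)

// isLocalPeer reports whether ip lies inside one of the subnets directly
// attached to this device's interfaces, i.e. it is not a routed address.
func isLocalPeer(ip net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false // fail closed
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	// Only true if this host actually has an interface on that subnet.
	fmt.Println(isLocalPeer(net.ParseIP("192.168.1.50")))
}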


Encrypted Access Only


Use of unsecured channels (HTTP or telnet) is unacceptable in this day and age.  TLS and/or SSH are the preferred ways to do this, and will let your customers deploy these devices somewhat more securely.

Secure All Other Ports


Devices should disable any network services that are not specifically part of the service they offer, or intrinsic to their management.   System administrators have known to do this on their systems for decades now, but it seems some firmware still ships with stock services enabled that can be used as attack vectors.


Don't Advertise Yourself


This one is probably the hardest.  mDNS and device discovery over "standard" networks is one of the ways that attackers find devices to target.  It's far, far better to have this disabled by default -- if discovery is needed during device configuration, then it can be enabled briefly, while the device is being configured.  Having a "pairing" button to give end-users the ability to enable this briefly is useful -- but mDNS should be used only with caution.


Secure Your Channel Home


Devices often want to call home for reporting, or for web-centric command & control (e.g. remote management of your thermostat).  This is one of the major attack vectors.   (If you can avoid calling home altogether, that is even better!)

Users must be able to disable this function (it should be disabled by default in fact).  Furthermore, the channels must be properly secured entirely through your network, with provision for dealing with a compromise (e.g. leaked private keys at the server side).  Get a security expert to review your protocols, and your internal security practices.


Mesh Securely


Building local mesh networks of devices, e.g. to create a local cloud, means having strong pairing technology.  The strongest forms of this require administrator action to approve -- just like pairing a bluetooth keyboard or other peripheral.

If you want to automate secure mesh provisioning, you have to have secure networking in place -- technologies like VPN or ZeroTier can help build networking layers that are secure by default.


Don't Invent Your Own Protocols


The roadside is littered with the corpses of protocols and products that attempted to invent their own protocols or use cryptography in non-standard ways.  The best example of this is WEP, which took a relatively secure crypto layer (RC4 was not broken at the time), but deployed it naively and brokenly.  RC4 got a very bad rap for this, but it was actually WEP that was broken.  (Since then, RC4 itself has been shown to have some weaknesses, but this is relatively new compared to the brokenness that was WEP.)


General Wisdoms


Next we have some advice that most people should already be aware of, but yet bears repeating.


Don't Rely on Obscurity


It's an old adage that "security by obscurity is no security at all".  Yet we often see naive engineers trying to harden systems by making them more obscure.  This really doesn't help anything long term, and can actually hinder security efforts by giving a false sense of security or creating barriers to security analysis.


Audit


Get an independent security expert to audit your work.  Special focus should be paid to the items pointed out above.  This should include a review of the product, as well as your internal practices around engineering, including secure coding, use of mitigation technologies, and business practices for dealing with keying material, code signing, and other sensitive data.

Saturday, May 14, 2016

Microsoft Hates My Name (Not Me, Just My Name)

In order to debug nanomsg problems on Windows, I recently installed a copy of Windows 8.1 in a VMWare guest VM, along with Visual Studio 14 and CMake 3.5.2.  (Yes, I've entered a special plane of Hell, reserved just for people who try to maintain cross-platform open source software.  I think this one might be the tenth plane, the one Dante skipped because it was just too damned horrible.)

Every time I tried to build, I got bizarre errors from the CMake / build process ... like this:

Cannot evaluate the item metadata "%(FullPath)

Turns out that when I created my account, using the "easy" installation in VMWare, it created my Windows account using my full name, "Garrett D'Amore".  Turns out that the software is buggy, and can't cope with the apostrophe in my full name when it appears in a filesystem path.

Moving the project directory to C:\Projects\nanomsg solved the problem.

Really, Microsoft?  This is 2016.  Back in the 1990s I expected programs to struggle, and expected to find bugs (often root exploits -- all hackers should try using punctuation in their login and personal names) caused by the apostrophe in my name.  Not in this decade.

Not only that, but the error message was so incredibly cryptic that it took a Google search to figure out that it was a problem with the path.  (Other people encountered this problem with paths > 260 characters.  I knew that wasn't my problem, but I hypothesized, and proved, that it was my name.)  I have no idea how to file a bug on Visual Studio to Microsoft.  I'm not a paying user of it, so maybe I shouldn't complain, and I really have no recourse.  Still, they need to fix this.

Normally, I'd never intentionally create a path with an apostrophe in it, but in this case I was being lazy and just accepted some defaults.  I staunchly refuse to change my name because some software is too stupid to cope with it -- this is a pet peeve for me. 

We're in the new millennium, and have been for a decade and a half.  Large numbers of folks with heritage from countries like Italy, France, and Ireland have this character in their surname.  (And more recently -- since like the 1960s! -- the African-American community has been using this character in their first names too!)  If your software can't accommodate this common character in names, then it's broken, and you need to fix it.  There are literally millions of us that are angered by this sort of brokenness every day; do us all a favor and make your software just a little less rage inducing by letting us use the names we were born with, please.



Tuesday, February 23, 2016

Leaving github

(Brief reminder that this represents my own personal opinion, not necessarily that of any employer or larger open source project.)

I am planning to move my personal git repositories (including mangos, tcell, govisor, less-fork, etc.)  from GitHub to GitLab.com

The reasons for this are fairly simple.  They have nothing to do whatsoever with technology.  I love the GitHub platform, and have been a happy user of it for years now. I would dearly love it if I could proceed with GitHub.  Fortunately GitLab seems to have feature parity with GitHub (and a growing user and project base), so I'm not trapped.

The reason for leaving GitHub is that the hostility of its leadership towards certain classes of people makes me feel that I cannot in good conscience continue to support them. In particular, their HR department is engaging in what is nothing less than race warfare against white people.  (Especially men, but even white women are being discriminated against.) By the way, I'd take the same position if the hostility were instead directed towards any racial or gender group other than my own.

I'm not alone in asking GitHub to fix this; yet they've remained silent on the matter, leading me to believe that the problematic policies have support within the highest levels of the company.  (Github itself is in trouble, and I have doubts about its future, as both developers and employees are leaving in droves.)

Post Tom Preston-Werner, GitHub's leadership apparently sees the company as a platform for prosecuting the Social Justice War, and it even has a Social Impact Team just to that effect. In GitHub's own words:
"The Social Impact team will be focused on these three areas: - Diversity & Inclusion - both internally and within the Open Source Community - Community Engagement - we have a net positive impact in local and online communities via partnerships - Leveraging GitHub for Positive Impact - supporting people from varied communities to use GitHub.com in innovative ways"
It's no accident that they list "Diversity & Inclusion" as the first item here either.  Apparently this has been more of a priority for GitHub than improving their platform or addressing long standing customer issues.

Those of you who have followed me know that I’m strongly in favor of inclusion, and making an environment friendly for all people, regardless of race or gender or religion (provided your religion respects my basic rights -- religious fundamentalist nut-jobs need not apply).

Lack of diversity cannot be fixed through exclusion.  Attempts to do so are inherently misguided.  Furthermore, as a company engages in any exclusive hiring practices they are inherently limiting their own access to talent.  Racist or sexist (or ageist) approaches are self-destructive, and companies that engage in such behavior deserve to fail.

The way to fix an un-level playing field is to level the playing field -- not to swing it back in the other direction.  You can't fix social injustice with more injustice; we should guarantee equal opportunity not equal results.

There are plenty of people of diverse ethnic backgrounds who have overcome significant social and economic barriers to achieve success.  And many who have not.  News flash -- you will find white men and women in both lists, as well as blacks, latinos, women, gays, and people of "other gender identification".  Any hiring approach or policy (written or otherwise) that only looks at the color of a person's skin or gender is unfair, and probably illegal outside of a very limited few and specific instances (e.g. casting for movie roles).

 Note that this does not mean that I do not support efforts to reach out to encourage people from other groups to engage more in technology (or any other field).  As I said, I encourage efforts to include everyone -- the larger talent pool that we can engage with, the more successful we are likely to be.  And we should do everything we can as a society and as an industry to make sure that the talent pool is as big as we can make it.

We should neither exclude any future Marie Curie or Daniel Hale Williams from achieving the highest levels of success, nor should we exclude a future Isaac Newton just because of his race or gender.  The best way to avoid that, is to be inclusive of everyone, and make sure that everyone has the best opportunities to achieve success possible.

Sadly I will probably be labeled racist or sexist, or some other -ist, because I'm not supportive of the divisive agendas supported by people like Nicole Sanchez and Danilo Libre, and because I am a heterosexual white middle class male (hence automatically an entitled enemy in their eyes).  It seems that they would rather have me as an enemy than as a friendly supporter -- at least that is what their actions demonstrate.  It's certainly easier to apply an -ist label than to engage in rational dialogue.

I am however deeply supportive of efforts to reach out to underrepresented groups in early stages.  Show more girls, blacks, and latinos filling the role of technophiles in popular culture (movies and shows) that market towards children.  Spend money (wisely!) to improve education in poorer school districts.  Teach kids that they truly can be successful regardless of color or gender, and make sure that they have the tools (including access to technology) to achieve success based on merit, not because of their grouping.  These efforts have to be made at the primary and secondary school levels, where inspiration can have the biggest effects.  (By the way, these lessons apply equally well to white boys; teaching children to respect one another as individuals rather than as labels is a good thing, in all directions.)

By the time someone is choosing a college or sitting in front of a recruiter, it's far too late (and far too expensive).  The only tools that can be applied at later stages are punitive in nature, and therefore the only reasonable thing to do at this late stage is to punish unjust behaviors (i.e. zero tolerance towards bigotry, harassment, and so forth).

I'll have more detail as to the moves of the specific repos over the coming days.

PS:  GitLab does support diversity as well, which is a good thing, but they do it without engaging in the social justice war, or exclusive policies.


Wednesday, January 6, 2016

Stepping Down

Updated Nov 9, 2017: When I originally posted this, nearly two years ago, things were different.  As folks may know, I returned back to leadership of nanomsg, and have since released several significant updates including version 1.0.0 and follow ups.  I'm also in the process of a complete architectural redesign and rewrite (nng), which is fast nearing completion.  This post is left here for posterity, but if you've wandered here via search-engine, be certain that the nanomsg community is alive and well.

(Quick reminder that this blog represents my own opinion, and not necessarily that of any open source project or employer.)

For nearly a year, I've been primary maintainer of nanomsg, a library of common lightweight messaging patterns written in C.

I was given this mantle when I asked for the nanomsg community to take some action to get forward progress on some changes I had to fix some core bugs, one of which was a protocol bug.  (I am also the creator of mangos, a wire-compatible library supporting the same patterns written in Go, which is why I came to care about fixing nanomsg.)

Today, I am stepping down as maintainer.

There are several reasons for this, but the most relevant right now is my frustration with this community, and its response to what I believed to be a benign proposal: to adopt a Code of Conduct, in an attempt to make the project more inviting to a broader audience.

I was unprepared for the backlash.

And frankly, I haven't got enough love of the project to want to continue to lead it, when it's clearly unwilling to codify what are frankly some sound and reasonable communication practices.

As maintainer, I could have just enforced my will upon the project, but since the project existed before I came to it, that doesn't feel right.  So instead, I'm just stepping down.

I'm not sure who will succeed me.  I can nominate a party, but at this point there are several other parties with git commit privileges to the project; I think they should nominate one.  Martin (the founder) still has administrative privileges as well.

To be clear, I think both sides of the Code of Conduct debate are wrong -- a bunch of whiny kids, really.

On the one side, we have people who seem to feel that the existence of a document means something.

I think that's a stupid view; it may have meaning when you have larger democratic projects and therefore need written rules to justify actions -- and in that case a Code of Conduct is really a way to justify punishing someone, rather than prevention or education.  To those of you who think you need such a document in order to participate in a project -- I think you're acting like a bunch of spineless wimps.

This isn't to say you should have to put up with abuse or toxic conduct.  But if you think a document creates a "safe space", you're smoking something funny.  Instead, look at the actual conduct of the project, and the actions of leadership.  A paper Code of Conduct isn't going to fix brokenness, and I have my doubts that it can prevent brokenness from occurring in the first place.

If the leadership needs a CoC to correct toxic behavior, then the leadership of the project is busted.  Strong leadership leads by example, and takes the appropriate action to ensure that the communities that they lead are pleasant places to be.   (That's not necessarily the same as being conflict-free; much technical goodness comes about as a consequence of heartfelt debate, and developers can be just as passionate about the things they care about as anyone else.  Keeping the tone of such debate on topic and non-personal and professional is one of the signs of good leadership.)

On the other side, are those who rail against such a document.  Are you so afraid of your own speech that you don't think you can agree to a document that basically says we are to treat each other respectfully?  The word I use for such people is "chickenshit".   If you can't or won't agree to be respectful towards others in the open source projects I lead, then I don't want your involvement.

There's no doubt that there exists real abuse and intolerance in open source communities, and those who would cast aspersions on someone because of race, religion, physical attribute, or gender (or preference), are themselves slime, who really only underscore for everyone else their own ignorance and stupidity.  I have no tolerance for such bigotry, and I don't think anyone else should either.

Don't misunderstand me; I'm not advocating for CoCs.  I think they are nearly worthless, and I resent the movement that demands that every project adopt one.  But I equally resent the strenuous opposition to their existence.  If a CoC does no good, it seems to me that it does no harm either.  So even if it is just a placebo, if it can avoid conflict and make a project more widely acceptable, then it's worth having one, precisely because the cost of doing so is so low.

Yes, this is "slacktivism".

I've been taught that actions speak louder than words though.

So today I'm stepping down.

I'm retaining my BDFL of mangos, of course, so I'll still be around the nanomsg community, but I will be giving it far less of my energy.

Friday, December 11, 2015

What Microsoft Can Do to Make Me Hate Windows a Little Less

Those who know me know that I have little love for Microsoft Windows.  The platform is a special snowflake, and coming from a Unix background (real UNIX, not Linux, btw), every time I'm faced with Windows I feel like I'm in some alternate dimension where everything is a little strange and painful.

I have to deal with Windows because of applications.  My wife runs Quickbooks (which is one of the more chaotic and poorly designed bits of software I've run across), the kids have video games they like.  I've had to run it myself historically because some expense report site back at former employer AMD was only compatible with IE.  I also have a flight simulator for RC aircraft that only works in Windows (better to practice on the sim, no glue needed when you crash, just hit the reset button.)

All of those are merely annoyances, and I keep Windows around on one of my computers for this reason.  It's not one I use primarily, nor one I carry with me when I travel.

But I also have created and support software that runs on Windows, or that people want to use on Windows.  Software like nanomsg, mangos, tcell, etc.  This is stuff that supports other developers.  It's free and open software, and I make no money from any of it.

Supporting that software is a pain on Windows, largely due to the fact that I don't have a Windows license to run Windows in a VM.  The only reason I'd buy such a license for my development laptop would be to support my free software development efforts.  Which would actually help and benefit the Windows ecosystem.

I rely on AppVeyor (which is an excellent service btw) to help me overcome my lack of a Windows instance on my development system.  This has allowed me to support some things pretty well, but the lack of an interactive command line means that some experiments are nigh impossible for me to try; for others I have to wait for the CI to build and test, which takes a while, leading to lost time during the development cycle -- all of which makes me loathe working on the platform even more.

Microsoft can fix this.  In their latest "incarnation", they are claiming to be open source friendly, and they've even made big strides here in supporting open source developers.  Visual Studio is free (as in beer).  Their latest code editor is even open source.  The .Net framework itself is open source.

But the biggest barrier is the license for the platform itself.  I'm simply not going to run Windows on the bare metal -- I'm a Mac/UNIX guy and that is not going to change.  But I can and would be happier to occasionally run Windows to better support that platform in a VM, just like I do for illumos or Linux or FreeBSD.

So, Microsoft, here's your chance to make me hate your platform a little less.  Give open source developers access to free Windows licenses; to avoid cannibalizing your business you could have license terms that only allow these free licenses to be used when Windows is run in a virtual machine for non-commercial purposes.  This is a small thing you could do, to extend your reach to a set of developers who've mostly abandoned you.

(And Apple, there's a similar lesson there for you.  I'm a devoted MacOS X fan, but imagine how much wider your developer audience could be if you let people run MacOS X in a VM for non-commercial use?)

In the meantime, if you use software I develop, please don't be surprised if you find that I treat Windows as a distinctly second class citizen.  After all, it's no worse than how Microsoft has treated me as an open source developer.

Tuesday, December 8, 2015

On Misunderstandings

Yesterday there was a flurry of activity on Twitter, and in retrospect, it seems that some have come away with interpretations of what I said that are other than what I intended.  Some of that misunderstanding is pretty unfortunate, so I'd like to set the record straight on a couple of items now.

First off, let me begin by saying that this blog, and my Twitter account, are mine alone, and are used by me to express my opinions.  They represent neither illumos nor Lucera, nor anyone or anything else.

Second, I have to apologize for it seems that I've come across as somehow advocating either against diversity (whether in the community or in the workplace) or in favor of toxicity.

Nothing could be further from the truth.  I believe strongly in diversity and an inclusive environment, both for illumos, and in the work place.  I talked about this at illumos day last year (see about 13:30 into the video, slides here), and I've also put my money where my mouth is.  Clearly, it hasn't been enough, and I think we all can and should do better.  I'm interested in finding ways to increase the diversity in illumos in particular, and the industry in general.  Feel free to post your suggestions in the comments following this blog.

Additionally, no, I don't believe that anyone should have to put up with "high performing toxic people".  The illumos community has appropriately censured people for toxic behavior in the past, and I was supportive of that action back then, and still am now.  Maintaining a comfortable work place and a comfortable community leads to increased personal satisfaction, and that leads to increased productivity.  Toxicity drives people away, and that flies in the face of the aforementioned desire for diversity (as well as the unstated ones for a growing and a healthy community.)

Finally, I didn't mean to offend anyone.  If I've done so in my recent tweets, please be assured that this was not intentional, and I hope you'll accept my heartfelt apology.

Thanks.

Monday, October 19, 2015

A Space Shooter in Curses

Some of you who follow me may know that I have recently built a pretty nifty framework for working with terminals.  ANSI, ASCII, VT100, Windows Console, etc.  It's called Tcell, and it's located on GitHub.  (It's a Go framework though.)  It offers many of the same features as curses, though it is most definitely not a clone of curses.
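To give a flavor of the API, here's a minimal Tcell sketch (using the v1 import path; error handling abbreviated) that draws a line of text and waits for the ESC key:

package main

import "github.com/gdamore/tcell"

func main() {
	s, err := tcell.NewScreen()
	if err != nil {
		panic(err)
	}
	if err = s.Init(); err != nil {
		panic(err)
	}
	defer s.Fini()

	// Draw a message one cell at a time; styles, colors, and wide runes
	// are handled by the Screen implementation.
	msg := "Press ESC to quit"
	for i, r := range msg {
		s.SetContent(i+1, 1, r, nil, tcell.StyleDefault)
	}
	s.Show()

	// Block on the event loop until ESC arrives.
	for {
		if key, ok := s.PollEvent().(*tcell.EventKey); ok && key.Key() == tcell.KeyEscape {
			return
		}
	}
}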

Anyway, I decided it should be possible to write a game in this framework, so I wrote one.

I give you Escape From Proxima 5, a 2D multi-axis scrolling space shooter written entirely in Go, designed to operate in your text terminal.

The game is fairly primordial, but there is a playable level complete with enemies and hazards.   It's actually reasonably difficult to get past just this first level.

Mostly the idea here is that you can get a sense of what the game engine is capable of, and see Tcell in action.

As part of this, I wrote a pretty complete 2D game engine.  It's got rich sprite management with collision detection, palettes, an events subsystem, scrolling maps, and support for keyboards and mice.  It's also got pretty nice extensibility, as assets are defined in YAML files that are converted and compiled into the program.  (I guess an asset editor needs to be written. :-)

The code is Apache 2 licensed, so feel free to borrow bits for your own projects.  I'd love to hear about it.

Anyway, I thought I'd post this here.  I made two videos.  The longer one, at about 3:30, shows most of the features of the game, animated sprites, some nice explosions, gravity effects, beam field effects, etc.



The second video shows what this looks like on less rich terminals -- say a VT100 with only 7-bit ASCII characters available.  The richer your locale, the nicer it will look.  But it falls down as gracefully as one can expect.


Btw, this framework is now basically design complete, so it should be super easy to product a lot of simples kinds of games -- for example a clone of Missile Command or Space Invaders should be doable in an afternoon.   What makes this game a little bigger is the number if different kinds of objects and object interactions we can have.