by Anonymous Coward
What are your feelings about protocols and file formats and keeping them open? Where do the efforts to keep protocols and file formats open and accessible to others fall on your list of priorities?
ESR: I don't think my answer will surprise you. When the function of software is defined by a requirement to be compatible with a protocol or file format, openness of the protocol or format is even more important than the licensing status of any of the implementations around it.
The reason should be obvious. If the protocol is well documented and open, you can build open-source code to process it. On the other hand, if crucial parts are undocumented or (worse) require techniques that are under a non-royalty-free patent, *any* code touching it can have a serious problem.
There's a productive analogy with DNA and ribosomes here which I leave for the reader to fill in.
As a long-time "Unix philosophy" advocate, and in light of the announced switch to it by Debian, Ubuntu, and basically every other major Linux distribution, what do you think of systemd, and the tight vertical integration it intends to bring as standard plumbing for (nearly) all Linux distributions?
ESR: I apologize; I haven't studied systemd in the detail that would be required for me to give a firm answer to this - it's been on my to-do list for a while, but I'm buried in other projects.
I want to study it carefully because I'm a bit troubled by what I hear about the feature set and the goals. From that, I fear it may be one of those projects that is teetering right at the edge of manageable complexity - OK as long as an architect with a strong sense of design discipline is running things, but very prone to mission creep and bloat and likely to turn into a nasty hairball over the longer term.
But this may be me being too pessimistic. I don't actually think I know yet.
Here's an obvious one...
It's been almost 20 years since you wrote CatB... I gave it a quick read and thought, "well, it *is* dated now, isn't it?", although I am old enough to remember when its ideas were pretty cutting edge. Given the current state of software development (i.e., the ease of use of PHP and the fact that, without a doubt, the cathedral model has won), what would you either like to change in or add to your original thesis?
ESR: Um. What color is the sky on your planet? The one where the cathedral model has won, I mean.
What's happening on Earth is just the opposite - even where bazaar-mode development hasn't taken over, many organizations that would previously have run their projects in a cathedral style are trying really hard to flatten out hierarchies, lighten up, and co-opt the many-eyeballs effect in any way they can. This is pretty clear just from what shows up in my mailbox - and see my later response to a question about Apple, too.
I think there have been some significant shifts in methodology that would affect the book if I were writing it today. A big one is that systematic use of version control is now pervasive in a way it was not then (when CatB was written, Subversion wasn't out of its early alpha stage yet; git and hg weren't even imagined). Development workflow is now correspondingly much more centered around shared public repositories.
The effects of always being able to revert to known codebase states rapidly are subtle but very large. One obvious one is that the risk factor of exploration drops significantly. That includes the risk in taking patches from strangers.
Less obvious but just as important is how sharp version-control tools raise the effectiveness and reduce the friction cost of testing techniques. In 2001 we couldn't routinely run bisections to pinpoint bad code changes; our tools were too slow and clumsy. Now we can, and the effect is to make building good unit and regression-test suites both easier and more rewarding in defects squashed per hours invested.
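To make the mechanics concrete, here is a minimal sketch of the search that tools like git bisect automate - the commit list, the is_bad predicate, and the culprit position are all invented for illustration:

```python
def bisect_first_bad(commits, is_bad):
    """Return the index of the first bad commit, assuming commits are
    ordered oldest-to-newest and that the regression, once introduced,
    persists in every later commit."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid        # culprit is at mid or earlier
        else:
            lo = mid + 1    # culprit is after mid
    return lo

# A 1024-commit history needs only ~10 test runs to find the culprit.
history = list(range(1024))
culprit = bisect_first_bad(history, lambda c: c >= 700)
```

The economics follow directly: the number of test runs needed grows only with the log of the history length, so the faster and cheaper a single test run is, the more routinely you can afford to bisect.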
The reason I'm going on about this is that, like any technique that increases our visibility into the code's behavioral space, better test suites tremendously amplify the positive effects of code review. Of course that feeds through into a differential competitive advantage for open source, because our process naturally recruits more code reviewers than closed-source shops can usually afford to hire.
Here's an example of the effect. There's a project I've led since about 2005 called GPSD, a service daemon that handles GPSes and other geodetic sensors. It's *everywhere* in mobile embedded systems, including your Android phone - we must have well over a billion deployments by now. Yet our defect rate is so low that months at a time go by between individual bug reports.
Why? Because I wrote a test suite with good coverage - and use a test strategy that relies on fast rollback capabilities I plain didn't have before modern version control. Changes in tools change the rules. It's much easier to get to the this-never-breaks level of reliability than it was when I wrote CatB, if you know what you're doing.
(For much more on this case study see my paper on the architecture of GPSD; there's a major section on engineering for high reliability.)
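As an illustration of the kind of test strategy described above (this is not GPSD's actual suite; the harness and the toy "parser" are invented), a golden-file regression test stores known-good output alongside each input and flags any divergence:

```python
import pathlib
import tempfile

def run_regression(cases_dir, process):
    """Run `process` over each *.in file and compare the result against
    the matching *.golden file; return the names of failing cases."""
    failures = []
    for case in sorted(pathlib.Path(cases_dir).glob("*.in")):
        expected = case.with_suffix(".golden").read_text()
        if process(case.read_text()) != expected:
            failures.append(case.name)
    return failures

# Demo with an invented "parser" and two stored cases.
def parse(text):
    return text.strip().upper()

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "a.in").write_text("hello\n")
    (root / "a.golden").write_text("HELLO")
    (root / "b.in").write_text("world\n")
    (root / "b.golden").write_text("WORLD")
    assert run_regression(root, parse) == []
```

Each time a bug is fixed, its triggering input becomes a new stored case, so the suite's coverage ratchets upward and the same regression can never slip back in silently.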
Open-source development has quite a few advantages over closed in exploiting this possibility - better tools, healthier culture, and just plain more developers. I think a major theme of the next decade is going to be learning to systematically capture these gains.
How to ask questions
When you wrote "How to ask questions" did you have any idea how big it would be? Or how long it would be relevant? And how do you feel that your most referenced piece of work is a howto for the clueless? :)
ESR: I'm not sure it is my most referenced piece of work. Either "How To Become A Hacker" or the Jargon File could easily be getting more hits; I haven't bothered to track this.
But supposing it is, that's OK. I expect it to be relevant for a very long time, because the newbies and the clueless are always with us.
I recall reading (and re-reading on occasion) the Halloween Documents. Have you written anything regarding any other opponents to OSS, or perhaps a look back on them and see what the end effect of Microsoft's attempts did long term?
ESR: I haven't written a retrospective, or anything else really similar.
I think those documents had a pretty significant effect in legitimizing not buying the Microsoft lock-in. The trade press certainly thought so at the time, and the intervening decade and a half hasn't given me any reason to suppose they were wrong.
How essential is software redistribution rights?
One of the issues with open source has been the freedom to redistribute software downstream - be it just binaries, just source, or any combination of the two. Do you think there are any good ways for software companies who make their software open source to prevent their customers from effectively becoming their competitors - by giving away, or selling more cheaply, what they were sold? Or is the only alternative going for a shared-source approach, as opposed to open source, where redistribution can be explicitly prohibited?
ESR: If your customers are selling your open-source software for a lower price than you are, then you're doing it wrong! You need to face the question of why you've attached a sales price to the software itself at all. I think that's a doomed approach.
You need to be thinking about monetizing that investment in a different way. The most obvious is service and consulting contracts around the code. You have the advantage there; as the originators, you are in a better position to add value to the bundle than your competitors are.
There are a couple other potential business models here, but none I can recommend without knowing more details about your situation. My advice in The Magic Cauldron is still quite relevant.
What about the new wave of proprietary programs
So it seems these days the most effective method of DRM is a network interface, like that used by Facebook, Google, Pinterest, etc... You cannot run your own instance of Gmail or Facebook, and you certainly cannot see or modify the code. At the same time, all these companies are pressuring us to push our data onto their servers by not supporting, or developing, solutions that let us continue to control and manage our data on our own machines and private networks. What can open source do to stem that tide? What about open source licensing? Could WebKit or Mozilla have slowed down the encroachment of Chrom/ium and its pro-Google agenda if it had more defensive licensing terms, something similar to the GPL? How do we convince hackers to hack on open-source 'website programs', like an open Gmail or an open Facebook (e.g., Diaspora)?
ESR: You're pointing at a real problem. I don't know of any near-term solutions beyond being very careful what services you allow to draw you into their web. I run my own mailserver, rather than using GMail, for exactly this reason. I don't use Facebook or Pinterest. I use G+ for nonessential things only.
I don't think defensive or reciprocal licensing can solve the problem, because it is not one created by code secrecy. The service providers are trading on real advantages of scale that they would still collect if every line of source code in their app stack were public; the value they're offering actually comes from ubiquity and synergy.
In fact I'm a little surprised they even bother maintaining code secrecy; it has nothing whatsoever to do with their value proposition. I think we're seeing a result of instinctive territoriality rather than rational thought.
I'd love to believe that projects like Diaspora are a long-term solution to the problem, but I don't - basically because no matter how attractive and ingenious your software is, it takes gobs of capital expenditure on server farms to scale up to where you're any kind of functional competition to Facebook/Google/Pinterest etc.
In the long term I think the way we'll win is if the giants have to compete with each other for business by giving their customers exit and recovery options. Google's Data Liberation Front is a positive early sign.
Linus's Law (Many Eyes) Problems
Hi, there is currently some debate over on Hacker News about the many-eyes theory and why it's a fallacious argument, but in my view they have it all wrong: a core component of Linus's Law is that the amount of code is inversely related to the number of eyes that can cover all of that code (or a significant percentage of it). Therefore, in my eyes it is the problem of code bloat that is undermining the open source movement more than anything. For example, the Linux kernel is now at, what, 10mil+ lines of code? That's insane. Minix 3, on the other hand, is at ~15k?
What are your thoughts on this problem?
ESR: I think you raise a valid point about code bloat being a problem. On the other hand, the code-coverage effectiveness of individual developers is also rising for reasons I wrote about in response to a previous question - better tools and better testing strategies feeding back on each other in virtuous ways.
A lot of criticisms of Linus's Law (including the Hacker News thread, as far down as I read it) miss the point that "many eyeballs" isn't just about sheer volume of people reviewing code, it's about diversity of assumptions. You want people reviewing the code who don't all work for the same company and report to the same boss - people who speak different languages, use different toolkits, and have different areas of expertise.
A handful of people who think very differently may be more effective auditors than an army with identical blind spots. By recruiting more people you're maximizing the odds of good diversity in the subgroup that actually reviews any given section of code.
I actually chuckled when I read the Hacker News thread, because I've seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frédéric Bastiat's "things seen versus things unseen". Critics of Linus's Law overweight the bug they can *see* and underweight the high probability that equivalently positioned closed-source security flaws they *can't* see are actually far worse, just so far undiscovered.
That's how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple of months I've learned some things about the security-defect density in proprietary firmware on residential and small-business Internet routers that would absolutely curl your hair. It's far, far worse out there than most people understand.
Friends don't let friends run factory firmware. You really do *not* want to be relying on anything less audited than OpenWRT or one of its kindred (DD-WRT, or CeroWRT for the bleeding edge). And yet the next time any security flaw turns up in one of those open-source projects, we'll see a replay of the movie with yet another round of squawking about open source not working.
Ironically enough this will happen precisely because the open-source process *is* working ... while, elsewhere, bugs that are *far* worse lurk in closed-source router firmware. Things seen vs. things unseen...
Your comments in The Art of Unix Programming about Apple/Mac developers being diametrically opposed to Unix developers in development style and emphases (designing simple, user-friendly interfaces from the outside in) were quite interesting. I am wondering about your perspective on Apple now. My interest is specifically in Apple's contributions to open source (WebKit and LLVM, chiefly) and your take on those. It seems to me that Apple has done quite a bit to foster an alternative ecosystem to the GNU environment, for instance in FreeBSD's adoption of clang as its default compiler; and also it seems to me that WebKit has supplanted Gecko as the most widely used browser framework. Curious about your viewpoint here.
ESR: In answering an earlier question I spoke of organizations that would previously have developed in a secretive cathedral mode adopting the bazaar model and open-source practices. Projects like LLVM and Webkit exemplify this trend.
The interesting thing about these projects is that they're not just facades. They really seem to welcome, not just as outside contributors but sometimes as full-time employees, people who are from the Unix-descended open-source culture (with its inside-to-out priorities) rather than interface-centric Mac guys.
That - and of course, OS X - tells us Apple's technical culture isn't what it used to be. It's more Unix-influenced now, more open, has more hacker in it. Obviously that doesn't fix every problem with Apple - I'm with RMS in judging the locked-down, walled-garden design of their phones and tablets to be a very bad thing for users in the longer term - but it's movement in a good direction.
AK or AR
Which is the better battle rifle, an AK-47/74 type or an AR-15/M-16/M-4 type? Please give your criteria as well as your answer. Bonus: favorite handgun platform/caliber that isn't a .45 1911.
ESR: "Better battle rifle" depends on who you're equipping, and for what. I lean towards the AR-15 because I'm from a culture that readily produces people with good marksmanship, fire discipline, and steadiness under combat pressure. The AR-15 is the better weapon to match those traits - it rewards skill in the shooter, and you can actually use it at distance.
On the other hand, if your troops are savages or bandits who can barely clean a weapon and for whom the natural mode is short-range spray'n'pray, the AK-47 is probably a better choice. It hardly rewards shooter skill at all, but handles egregious abuse under field conditions better.
As for what I like when I don't have .45ACP handy, my answer is easy and boring: .40S&W. Medium-caliber semis suit me very well. I don't mind shooting my wife's Glock .40 at all, and it's likely what I'd carry if not for John Moses Browning (peace be unto him).