Windows vs. Linux Study Author Replies

Last week you submitted questions for Dr. Herb Thompson, author of the latest Microsoft-sponsored Windows vs. Linux study. Here are his answers. Please feel free to ask follow-up questions. Dr. Thompson says he'll respond to as many as he can. He's registered a new Slashdot username, FFE4, specifically to participate in this discussion. All others claiming to be him are imposters. So read, post, ask, and enjoy.
1 - A better way of putting it
by einhverfr

It seems that your study attempted to simulate the growth of an internet startup firm on Windows or Linux. One thing I did not see in the study was a good description of assumptions you made. What assumptions were made in both the design of the requirements and the analysis of the data? What limitations can we place on the conclusions as a result of these assumptions?

Dr. Thompson

This is a really important question. I think there are two sections of the study: the assessment methodology and then the experiment we undertook to illustrate how to apply that methodology. I'll answer the assumption question for both parts:

Methodology - For the methodology, we wanted to provide a tool that organizations could use and apply with their own assumptions. Maintaining a system is all about context; some environments favor Linux, others Windows. The question is: how do you know what's likely to be the most reliable (which includes manageable, secure and supportable) solution for your environment? We proposed a methodology, a recipe of sorts, that looks at a solution in its entirety instead of just individual components. Policies like configuration control vary from organization to organization, and to get something that's truly meaningful in your environment, the methodology needs to be carried out in your context. Enterprise customers can and should do this when they are about to trust their critical business processes to a platform. That said, the basic assumptions of the methodology are that patches are applied at one-month intervals and that business needs evolve over time. How those business needs evolve depends on the scenario you're looking at (in our experiment we looked at ecommerce, for example). The methodology doesn't cover steady-state reliability, meaning the uptime of a system that is completely static. While this is important, our conversations with CIOs, CTOs, CSOs and IT folks led us to believe that this was a smaller contributor to pain in a dynamic environment. In an appliance, though, steady-state reliability is king, and I think an important limitation of this methodology is that we don't capture that well; it's an amazingly difficult quality to measure in a time-lapse way.

The purpose of the experiment was to illustrate how to apply the methodology and to begin to get some insights into some of the key model differences between the two platforms. For the experiment we picked the ecommerce scenario, for no other reason than that there has been a clear shift in recent years in how ecommerce sites service their customers, moving from static sites to personalized content. Some specific assumptions were:

* The transition from a basic purchasing site to a personalized portal based on order/browsing history takes place over a one year period.

* The period we looked at was July 1st, 2004 to June 30th, 2005 (the most recent full year at the time of the study).

* A configuration control policy exists that mandates OS version but not much else, meaning administrators had fairly free rein to meet business requirements.

* All patches marked as critical or important supplied by the vendor are applied.

* We assume the system to be functioning if the original ecommerce application is running and meets some basic acceptance tests (same for both platforms; see Appendix 1 of the report) and the newly installed components are also running.

* To add new capabilities, we use leading 3rd party components as opposed to building custom code in-house.

* The business migrates operating system versions at the end of the one year period to the latest versions of the platform.

* The administrators that participated in the experiment reflect the average Linux (specifically SuSE) and Windows administrators in skill, capability and knowledge. While we strove for this, it's important to recognize the small sample size when drawing any conclusions from the data.
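The "basic acceptance tests" assumption above could look something like the following in practice. This is a minimal sketch, not the actual test suite from Appendix 1 of the report; the URL and marker text are hypothetical placeholders.

```python
# Minimal sketch of a basic acceptance test: the system counts as
# "functioning" only if the storefront responds with HTTP 200 and
# serves a page containing an expected marker string.
# The URL and marker used by a caller are hypothetical placeholders.
import urllib.request


def site_is_up(url, expected_text, timeout=10):
    """Return True if the page loads (HTTP 200) and contains expected_text."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and expected_text in body
    except OSError:
        # Connection refused, DNS failure, timeout: the site is down.
        return False
```

A monitoring script would call something like `site_is_up("http://shop.example.com/", "Checkout")` after each patch or component install to decide whether the system still meets the acceptance criteria.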

As far as limitations, the experiment looks at one specific case with a total of six administrators. I'd love to have done it with a hundred admins on each side, on a wide range of business-requirement scenarios, and my hope is that others will do that and publish their results. Our experiment, however, shows that for this particular, clearly documented scenario, experienced Linux admins had conflicts between meeting business needs and recommended best practices like not introducing out-of-distribution components. If one is aware of potential conflicts and challenges up front, I think you can put controls in place to make reasonable tradeoffs. In the Linux case, a precise and specific configuration control policy might have prohibited the problematic upgrade of one of the components that the 3rd-party solutions required. This would likely have reduced the number of failures but would have put some hefty constraints on 3rd-party solutions. To understand the implications for your environment, you really need to run through the methodology with the assumptions and restrictions of your organization, and I hope that this study prompts or provokes people to do that.

************************

2 - Meta-credibility?
by Tackhead

Where I come from (non-management, grunt-level techie), appearing in any of these analysts' journals *costs* an author more credibility than it gains him or her. For example, if $RAG says that $CORP has the best customer support, I immediately assume that $CORP has such horrid customer support that they had to pay someone to make up some research that proves otherwise.

To be sarcastic, I'd ask "who the heck actually takes these studies seriously?", but obviously *somebody* does. Who are these people, and why do these people take these industry analyst firms/journals/reports seriously? Are they right or wrong to do so? This isn't an attack (or endorsement :) of your research -- I'm talking about the credibility gap in industry research, and my observation that it's an industry-wide problem.

The meta-credibility question is this: given the amount of shoddy pay-for-play research out there, does being published in an analyst journal tend to cost (a researcher, his consulting company, his financial backers) more credibility than it gains him/her/them? If not, why not -- and more importantly, if so, is there any way to reverse the trend?

Dr. Thompson

This is a really interesting question because it cuts to the heart of what a real research study should provide to the reader. It should provide a baseline, and I think research should always be questioned, scrutinized and debated, because one can always find reasons for bias. In particular, if a subject of the study (a vendor, for example) is behind its funding, whether directly (as in this study) or indirectly (meaning that they are big clients), I think it's critical that the study provide not just a baked cake for readers but the recipe as well. The recipe has to be inherently fair and simple, meaning that it has to map directly to the quality or pain one is trying to measure, without taking into account how the subjects try to provide that service or mitigate that pain. I think slanted opinion pieces, with no backup for those opinions, seriously hurt credibility, at least in my book. If you're presenting facts, though, and encouraging others to question them, then I think that actually helps credibility, even if the search for those facts was paid for.

I agree, though, that one is tempted to dismiss research a priori because of funding or some vendor tie. I think a good way to reverse the trend is to open the process up to public scrutiny; that's probably the main reason I came on Slashdot. To use this specific study as an example, some folks disagreed with several points in the experiment, from counting patches, to reasons for upgrading key components, to the ecommerce scenario we used. For me, the study's key value is the methodology. Could different applications/scenarios have been chosen? Absolutely!

The value I think this study gives to the practitioner is arming them with a tool to help measure in their own environment. By applying the methodology, the results should take into account things like administrators' skill sets, support policies, configuration control policies and the tradeoffs between customizability, maintainability, visibility, security and usability. It's only by looking at this stuff in context that one can make a sound judgment, and a true research paper, especially one where funding is in question, needs to fully disclose the method and the funding source. In our case, the methodology has been vetted by industry analysts, IT organizations and several academics. That doesn't mean much, though, if you don't find the methodology meaningful for the questions you want answered. One reason I've come on Slashdot is to get the thoughts, opinions and assessments of the methodology itself from administrators in the trenches. I'm really pleased with the great questions and comments amidst the inevitable flames, and I'm looking forward to this being posted so that others can weigh in with their feedback and I can jump into the threads to get some discussion going.

If the research helps give real insight, and the methodology makes sense, I think there's real value no matter who paid the bill. At the end of the day, you need to decide whether or not you can extract any value from the information presented to you. In the case of this study, my hope is that it will leave you thinking, "Hmmm... maybe we should actually run through a process like this and check out how it works for ourselves." My more ambitious hope is that you'll implement it and tell me what challenges you faced on Windows, Linux, OS X, BSD, whatever platform you choose to compare. It may not even venture into the perennial Windows versus Linux battle; maybe you're a Linux shop trying to decide between multiple distributions, for example. Either way, if it's got people thinking about the topic and asking questions, well, that's all any researcher can really hope for.

************************

3 - Weak setup
by 0xABADC0DA

If I understand the study correctly, the Windows side had to do nothing but set up a server to do a few different tasks over time and run Windows Update. The Linux side had to have multiple incompatible versions of their database server running simultaneously on a single system, and had to run unsupported versions of software to do it.

Why wasn't the Windows side required to run multiple versions of IIS or SQL Server simultaneously? In real life, if you need to run multiple database versions, you use virtualization or multiple systems, especially if one requires untested software. You don't run some hokey unstable branch on the same system as everything else. Why was a Linux solution picked that required this level of work? My other related question is: did any of the Unix administrators question why they were being asked to do such a thing? For example, did they come back and say they needed a license for VMware? If they did not, they do not seem like very competent administrators, in my opinion.

Dr. Thompson

The Windows admins and Linux admins were given the exact same set of business requirements, which doesn't necessarily translate into the same tasks as they went about fulfilling them. The 3rd-party components installed were chosen solely based on their market leadership position, and any OS upgrades were unknown at the time of selection. That said, on the Windows side it turned out that no upgrades of IIS were needed (except for patches), and SQL Server was upgraded to SP4 as part of patch application. On the Linux side, at a high level there were two main classes of upgrades, MySQL and GLIBC, and both were prompted by the installed components. After the experiment, the administrators on both sides were asked if this kind of evolution of systems matched their real-world experience. They said yes, with the caveat that if they were asked to install a component that required an upgrade of GLIBC, they would likely upgrade the operating system, as long as their configuration control policy allowed it.

You make a great point about installing components on some sort of staging system (which is almost always done) as opposed to live running systems. That still means that the problems the administrators had represent real IT pain. If something weird has to be done to get the system running, but it does run and is then put into production, it's like a fuse that gets set on a bomb. A careful configuration control policy would almost certainly help, and that's why I think it's so important to conduct this kind of experiment in your own environment with your own policies.

As far as the selection of the Linux administrators goes, they all had at least 5 years of enterprise administration experience, and two years of experience on SuSE specifically. With three people there's certainly likely to be a lot of variability, and to get some conclusive results I'd love to get a huge group of administrators across the spectrum in terms of experience. I'd also love to do it across multiple scenarios, beyond the ecommerce study. For this experiment, the bottom line is basically that we illustrated one clearly documented scenario with six highly qualified admins that we selected based on experience. We can't ensure equal competency levels, but there was nothing in our screening that would lead us to believe there were gaps in knowledge on either side. When it comes down to it, though, the really meaningful results are the ones you get when you perform the evaluation in your environment. Hopefully this study provides a starting point for asking the right questions when you do that.

************************

4 - Who determined the metrics
by Infonaut

Did Microsoft come to you with a specific set of metrics, or did you work with them to develop the metrics, or did you determine them completely on your own?

Kudos to you for braving the inevitable flames to answer people's questions here on Slashdot.

Dr. Thompson

Great question! The metrics and the methodology were developed completely on our own, independent of Microsoft. They were created with the help and feedback of enterprise CIOs as well as industry analysts. I think this relates to a couple of other questions on Slashdot, the gist of which is: if Microsoft is funding the study, aren't you incentivized for them to come out ahead? Besides the standard answers, "we would never do that" and "that would put our credibility, our primary commodity, at risk," both of which are very true, let me explain a little more about how our research engagements work.

Company X (in this case Microsoft) comes to us and says, "Can you help us measure quality Y (in this case reliability) to get some insight into how product Z stacks up?" We say, sure, BUT we have complete creation and control of the methodology, it will be reviewed and vetted by the community (end users and independent analysts), and it must strictly follow scientific principles. The response will either be "great, we want to know what's really going on" or "um, here are some things to focus on, and I think you should set it up this way." In the first case we proceed; in the second case we inform the company that we don't do that kind of research. We are also not in the opinion business, so we present a methodology to follow and illustrate how that methodology is applied, with the hope that people will take the methodology and apply it in their own environment.

All of our studies are written as if they will be released publicly, BUT it is up to the sponsor whether the study is publicly released. The vendor knows that they're taking a risk. They pay for the research either way but have control only over whether it is published, not over content. So if their intent is to use it as an outward-facing piece, they may end up with something they don't like. Either way, I think it's of high value to them. If there are aspects of the results that favor the sponsor's product, in my experience, it goes to the marketing department and gets released publicly; if it favors the competitor's product, it goes off to the engineering folks as a tool to understand their product, their competitor's product, and the problem more clearly. Either way, we maintain complete editorial control over the study, and there is no financial incentive for us whether it becomes a public study or is used as an internal market-analysis piece. The methodology has to be as objective as possible to be of any real value in either case.

************************

5 - ATMs vs. Voting Machines
by digitaldc

How is it that Diebold can make ATM machines that will account for every last penny in a banking system, but they can't make secure electronic voting machines?

Also, does the flame-resistant suit come with its own matching tinfoil hat? (don't answer that one)

Dr. Thompson

This is a question that has passed through my mind more than once. The voting world is very interesting. I don't have experience with the inner workings of Diebold's ATMs, but I can say that the versions of their tabulation software that I've seen have some major security challenges (see this Washington Post documentary for some of the gory details). I'd say I'm concerned about the e-voting systems I've seen, but that would be a serious understatement.

I question whether the economic incentive is there for them to make their voting systems more secure. Take an ATM, for example. Imagine the ATM has a flaw, and if you do something to it, you can make it give you more money than is actually deducted from your account. Anything involving money gets audited, sometimes multiple times, and chances are good that the bank is going to figure out that it's losing money. On the flip side, if there were a flaw in the ATM in the bank's favor, someone balancing their checkbook would notice a discrepancy. The point is that there's always traceability and there's always someone keeping score. If you think about voting tabulators, though, we've got this mysterious box that vote data gets fed into, and then, in many states, only a fraction of these votes are audited. That means we don't really know what the bank balance is other than what the machine tells us it is. If the system is highly vulnerable and its vulnerability is known by the manufacturer *but* it's going to be expensive to fix it and shore up defenses, there seems to be no huge incentive to fix the problems. I think the only way to get some decent software that counts votes, and that people can have confidence in, is to allow security experts to actually test the systems, highlight potential vulnerabilities, and put some proper checks and balances in place. That would give the general public some visibility into a critical infrastructure system that we usually aren't in the habit of questioning, and it would hold voting machine manufacturers directly accountable to voters.

As for the tinfoil hat to go with the flame-resistant suit: it hasn't been shipped to me yet - apparently the manufacturing company is still filling backorders from SCO :).

************************

6 - Why are the requirements different?
by altoz

Looking at your research report's appendices, it seems that the requirements for Windows Administrators were somewhat different than the Linux Administrators. For instance, you ask for 4-5 years sys admin experience minimum for Windows, whereas it's 3-4 years sys admin experience minimum for Linux.

Why wasn't it equal for both? And doesn't this sort of slight Windows favoring undermine your credibility?

Dr. Thompson

Short answer: typo. Long answer: we originally were looking for 4 years of general administration experience for both Linux and Windows, which is what is reflected in the desired responses to the General Background questionnaire for Linux. We then raised it to 5 years for both, which is reflected in the General Background section of the Windows questionnaire. The difference between the two was just a failure to update the response criteria on that shared section of one of the questionnaires. On page 5, though, we've got the actual administrator experience laid out:

Each SuSE Linux administrator had at least 5 years of experience administering Linux in an enterprise setting. We also required a minimum of 2 years administering SuSE Linux distributions, at least 1 year administering SuSE Linux Enterprise Server 8, and half a year administering SLES 9 (released in late 2004). Windows administrators all had at least 5 years of experience administering Windows servers in an enterprise environment. These administrators also had at least 2 years of experience administering Windows 2000 Server and at least 1 year of administration experience with Windows Server 2003.

************************

7 - Scalability of Results?
by hahiss

You tested six people on two different systems; how is that supposed to yield any substantial insight into the underlying OSes themselves?

[At best, your study seems to show that the GNU/Linux distribution you selected was not particularly good at this task. But why does that show that the "monolithic" style of Windows is better per se than the "modular" style of GNU/Linux distributions?]

Dr. Thompson

First, let's look at what we did. We followed a methodology for evaluating reliability with three Windows admins and three Linux admins. This is a small sample set, and it looked at one scenario: ecommerce. Is this enough to make sweeping claims about the reliability of Linux or Windows? No way. I do, however, think the results raise some interesting questions about the modularity-vs.-integration tradeoffs that come with operating systems. I don't think that either the Windows or the Linux model is better in a general sense, but they *are* different; the question is which is likely to cause less pain and provide more value for your particular business need in your specific environment. Hopefully these are the questions that people will ask after reading this study, and with any luck it will prompt others to carry out their own analysis within their own IT environment, building on what we started here. I think the methodology in this paper provides a good starting point to help people answer those questions in context.

************************

8 - Convenience vs. security
by Sheetrock

Lately, I've felt that Microsoft is emphasizing greater trust in their control over your system as a means of increasing your security. This is suggested by the difficulty of obtaining individual or bulk security patches from their website as opposed to simply loading Internet Explorer and using their Windows Update service, the encouragement in Service Pack 2 of allowing Automatic Update to run in the background, and the introduction of Genuine Advantage requiring the user to authenticate his system before obtaining critical updates such as DirectX.

In addition, Digital Rights Management and other copy protection schemes are becoming increasingly demanding and insidious, whether by uniquely identifying and reporting on user activity, intentionally restricting functionality, or even introducing new security issues (the most recent flap involves copy protection software on Sony CDs that not only hides content from the user but permits viruses to take advantage of this feature).

I would like to know how you feel about the shift of control over the personal computer from the person to the software manufacturers -- is it right, and do we gain more than we're losing in privacy and security?

Dr. Thompson

This is an interesting problem because manufacturers have to deal with a wide range of users. If there were real visibility and education for users on the security implications of doing A, B or C, then we'd be OK. It's scary, though, when that line gets crossed. Sony's DRM rootkit is a good example. But if you think about it, we are essentially passively accepting things like this all the time. Every time we install a new piece of software, especially something that reads untrusted data like a browser plugin, we tacitly accept that this software is likely to contain security flaws and can be an entryway into our system; NOW are you sure you want to install it? The visceral, immediate reaction is no, but then you balance the tradeoffs of the features you get versus the potential risks. Increasingly, we're not even given that choice, and components that are intended to help us (or help the vendor) are installed without our knowledge. This also brings up the question of visibility: how do we know what security state we're really in with a system? Again, there are tradeoffs; some of this installed software may actually increase usability or maintainability, but it's abstracting away what's happening on the metal. So far, it seems as though the market has tended toward the usability, maintainability and integration that favor bundling, on both the Linux and Windows sides. It's kind of a disturbing trend, though.

As another example, think about how much trust average programmers put into their compiler these days. Whenever I teach classes on computer security and then go off into x86 opcodes or even assembly, it seems to be a totally foreign concept and skill set. We've created a culture of building applications rapidly in super-high-level languages, which does get the job done, but at the same time we seem to have sacrificed knowledge of (or even the desire to know) what's happening on the metal. This places a heavy burden on platform developers, compiler writers and even IDE manufacturers, because we are shifting the cloud of security responsibility over to them in a big way. Under the right conditions that can be good, because the average programmer knows little about security, but we need to make sure that the components we depend on and trust are written with security in mind, analyzed by folks who have a clue, and tested and verified with security in mind. This means asking vendors the tough questions about their development processes and making sure they've got pretty good answers. Here's what I think is a good start. If that fails, there's always BSD. :)
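The point about trusting the toolchain can be made concrete without leaving a high-level language: even a one-line function sits on top of an opcode layer most application programmers never inspect. A small illustration (not from the study) using Python's own disassembler:

```python
# Even one line of high-level code rests on an opcode layer that most
# application programmers never look at. Python's dis module exposes
# that hidden layer for the interpreter's own bytecode.
import dis


def add(a, b):
    return a + b


# Names of the interpreter opcodes executed for this one-line function.
# The exact list varies by Python version, which is itself the point:
# the layer below your code changes without your involvement.
opcodes = [ins.opname for ins in dis.get_instructions(add)]
print(opcodes)
```

The analogous exercise for a C compiler (inspecting the emitted x86) tells the same story: the lower layers do real work on your behalf, and you are trusting whoever wrote them.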

************************

9 - Apache versus IIS
by 00_NOP


Simple one: of course I accept that Windows and Linux are a priori equally vulnerable - C programmers make mistakes. The question is which model is most likely to deliver a fix fastest. Given that the one area where Linux is probably in the lead over Microsoft's software is the webserver: why are my server logs filled with artifacts of hacked IIS boxes while Apache seems to remain pretty safe?

Dr. Thompson

You bring up a couple of interesting points. The first is patch delivery. It's true that on Linux, if there's a high-profile vulnerability, you're likely to be able to find a patch out on the net from somebody in a few hours. Sometimes the fix is simple, a one-liner, and other times it may be more complex. Either way, there could be unintended side effects of the patch, which is why there's usually a significant lag between these first-responder patches and a blessed patch released from the distribution vendor. Most enterprises I know wait for the distribution patch as a matter of policy, and even then they go through a fairly rigorous testing and compatibility verification process before the patch gets deployed widely. In the Windows world, one doesn't get the alpha or beta patches, just the blessed finished product. So the question is which solution is likely to deliver, fastest, a patch that fixes the problem without creating new ones. That's a tough one to answer. I think there's something to be learned by looking historically, and that in general there's a big discrepancy between perception and reality. Here's a (pdf) link to a study we did earlier this year, based on 2004 data, that I think provides a good starting point for answering that question.

As far as why your logs are filled with IIS attack artifacts, I think there are two distinct issues: vulnerability and threat profile. In the past, I would argue that the path of least resistance was through Windows, because desktop systems were often left unprotected by the home computer user. Bang-for-the-packet favored creating tools that exploited these problems, and some of the attacks actually worked on poorly configured servers as well. Then there are targeted vs. broad attacks. There's no question that the high-profile worms and viruses of the last several years have favored Windows as a target. The issue gets even more complicated when you look at targeted attacks. These are much harder to measure, even anecdotally, because either an organization gets compromised and doesn't disclose it (unless compelled to by law), or the attack goes undetected because it doesn't leave any of the standard footprints, in which case no pain is felt immediately. That may help to explain it, but the truth is that there's a lot of conflicting data out there. I remember reading this on Slashdot last year, which claims Apache was more attacked than IIS, but I've also read reports to the contrary. The reality is that any target of value is going to get attacked frequently. If there is an indiscriminate mass attack like a worm or virus, that's pretty bad and can be really painful. What's scarier, though, is the attack that targets just you.

************************

10 - Do you agree with Windows Local Workflow
by MosesJones

Microsoft and Linux distros have had a policy for some time of including more and more functionality in the base operating system, the latest example is the inclusion of "Local Workflow" in Windows Vista.

As a security expert do you think that bundling more and more increases or decreases the risks, and should both Windows and Linux distros be doing more to create reduced platforms that just act as good operating systems?

Dr. Thompson

Three years ago I bought my mother a combination TV, VCR and DVD player. It was great; she didn't have to worry about cables or the notorious multi-remote-control problem. She didn't even really need the VCR, because she hardly ever watches videotapes, but I thought, why not. It worked great for two years: mom watched her DVDs, and once in a blue moon a videotape from a family vacation would find its way into the VCR. All was well at the Thompson household. This past year, tragedy struck. The VCR devoured a videotape, completely entangling it in the machine. This knocked out not only the VCR but the television too (it thought it was constantly at the end of a tape and needed to rewind it). So here's the issue: mom probably only needed a TV and a separate DVD player. I probably could have gotten better-quality components individually too, and with some eBay-savvy shopping the group might have been cheaper. For my mom, though, the integration and ease of operation of the three were key assets. The flip side is that the whole is only as strong as the weakest of its constituent parts, and because the manufacturer threw some questionable VCR components into the mix, the whole thing failed. The meta-question: did I make the right choice going for the kitchen-sink approach versus individual components? I think for mom I made the right call. For me, my willingness to program a universal remote and my love of tweaking the system would have led me down a different route.

In operating systems, it depends what you're looking for and what the risk vs. reward equation is for you, and I would argue that the answer varies from user to user. The ideal would be something that gave you integration, ease of use, visibility, manageability, and the ability to truly customize and minimize functionality and maintenance requirements. No operating system I've ever seen strikes that balance optimally for every user. As far as bundling functionality with the distribution, I think it's a question of market demand. There's no question, though, that from a simple mathematical perspective, the less code processing untrusted data, the better. That means if I need a system to perform one specific function, and that function is constant over time, then from a security perspective I only want the stuff on that box that does what I need to serve that goal. For example, I don't ever want X Windows on my Linux file server. I just want the minimal code base there, because as long as the code itself is reliable, I'll only have to mess with the box to apply patches (and far fewer patches if I strip the system down). That's true of my home file server. If I have an army of systems to manage, though, my decision is going to come down to which platform is reliable and extends me the most tools to manage it efficiently and effectively. That's a question that can only be answered in context. I can tell you what I run at home, though. File server: Red Hat EL 4 (no X Windows). Laptop: Windows XP SP2. Desktop: Windows Server 2003 with virtual machines of everything under the sun, from Win 9x to SuSE, Red Hat and Debian.
  • Don't forget (Score:4, Interesting)

    by sucker_muts ( 776572 ) <.moc.liamtoh. .ta. .nvp_rekcus.> on Monday November 28, 2005 @12:31PM (#14129705) Homepage Journal
    People on Slashdot can get pretty upset about the studies Microsoft shows the world, which mostly say Microsoft is the king of the hill. But don't ever forget they don't show ALL of their studies. It could well be that 60% of them don't favor Microsoft strongly enough, or at all.

    Of course I realise they try to use situations that are more likely to favor them over [insert competitor].

    Now if just once a bunch of those other studies leaked, we could get a real view of what MS is doing with all that research...
  • MySQL (Score:5, Interesting)

    by Shawn is an Asshole ( 845769 ) on Monday November 28, 2005 @12:34PM (#14129733)
    Okay, so they needed a certain version of MySQL which required a newer version of glibc. Still, any Unix admin should know that upgrading glibc is risky at best (I've broken many systems by upgrading glibc).

    Here's my question: why didn't they just rebuild the source RPM and install the resulting binaries? That way the binary would be built against the same glibc as everything else on the system. I've done that on many systems with no adverse effects. They didn't have to rebuild it on the server; any machine running the same distro would do fine.
  • by plover ( 150551 ) * on Monday November 28, 2005 @12:36PM (#14129756) Homepage Journal
    You said above "I agree though that one is tempted to dismiss research a priori though because of funding or some vendor tie. I think a good way to reverse the trend is to open the process up to public scrutiny; that's probably the main reason I came on Slashdot."

    You obviously see the value of public scrutiny in what you do. So do we; we're obviously paying attention to your studies, and are pleased to see the "inner workings." It certainly helps lend credibility to your points. But it also raises the question: why doesn't Microsoft extend that same logic to operating systems or applications?

  • by ananke ( 8417 ) on Monday November 28, 2005 @12:40PM (#14129785)
    From a purely technical point of view, I was mostly interested in seeing the following question [and thread] addressed:

    http://interviews.slashdot.org/comments.pl?sid=168 949&cid=14084692 [slashdot.org]

  • Re:Riiiiiight (Score:2, Interesting)

    by MSFanBoi2 ( 930319 ) on Monday November 28, 2005 @12:40PM (#14129786)
    Mostly because, unlike ESR, he doesn't seem to have an agenda... Unlike ESR, the Dr. doesn't work for Microsoft or any OSS org...
  • Re:~FFE4 (Score:3, Interesting)

    by GogglesPisano ( 199483 ) on Monday November 28, 2005 @12:42PM (#14129798)
    I'm not sure if this is what he's referring to, but back in the day $FFE4 was the address for the "get whatever key is being pressed" routine in the 8-bit Commodore kernal (e.g., the C64).

    As in:

    WAITKEY: JSR $FFE4 ; Check for a keypress
    BEQ WAITKEY ; If no key pressed, a zero is in the accumulator, so loop back
  • Re:Well (Score:2, Interesting)

    by MSFanBoi2 ( 930319 ) on Monday November 28, 2005 @12:47PM (#14129859)
    If said experiment was repeated, funded by, say, Red Hat, and they found the same results, do you think they would have the acumen to publish them?
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday November 28, 2005 @12:49PM (#14129869)
    The OS upgrade was already part of the "evaluation".

    Why not allow the sysadmins to upgrade from SLES 8 to SLES 9 instead of REQUIRING them to backport the glibc patches from 9 to 8?
  • by Anonymous Coward on Monday November 28, 2005 @12:49PM (#14129880)
    In all of my years as an administrator I have "upgraded" operating systems exactly twice on systems that are not FreeBSD. The reason? Upgrades break stuff. Random binaries don't work, or some configuration file is in the wrong place, or two copies exist. Something is always wrong. It is usually faster to make a final backup, install the new version, and then start the system fresh from the latest backups, applying any tweaks required. Legacy components left around for years come back to bite you in the ass, 'tis a proven fact.
  • by Foofoobar ( 318279 ) on Monday November 28, 2005 @12:54PM (#14129933)
    King of the desktop perhaps, but not king of servers. Sure, they show more REVENUE, but as for deployment, Linux still dominates and has been squeezing Microsoft more and more out of the server space. While Linux eats into UNIX market share, it is eating into Windows market share as well.

    Don't believe it? Look at what the most widely used web server is. Look at what the most widely used DB is. Look at the most popular scripting languages. And now keep in mind that they all come installed by default on almost all Linux distros.

    They can keep putting money into trying to convince people that Microsoft Clusterfuck Edition can replace Linux clusters. That's cool. Just another money pit for them and a great way to divert resources into a nowhere scheme. And sure they have loads of funds, but they still have to answer to shareholders, who are not pleased that the stock has stagnated for so long, and they won't be pleased when dividends stop getting paid and products stop being sold or delivered on time due to them focusing on a product that will go nowhere.

    The entire open source world and all companies supporting open source (IBM, Google, Sun, Amazon, etc.) are running a bait and switch where Microsoft throws money into duplicating anything that it thinks may be a threat. This in turn causes them to waste funds and resources on red herrings when the actual threat is something else entirely.

    These past 5 years have seen Linux and open source go from obscurity to mainstream in the business market. The next five years will see it go from obscurity to mainstream in the consumer market.
  • by Korexz ( 915405 ) on Monday November 28, 2005 @01:00PM (#14129997) Homepage
    How long will this argument go on? Apples and Oranges I say. More marketing propaganda to buffer the bottom line. Technology will only move forward when we stop arguing over what is better and start working towards a common goal.
  • by greenegg77 ( 718749 ) on Monday November 28, 2005 @01:07PM (#14130076) Homepage Journal

    How is it that Diebold can make ATM machines that will account for every last penny in a banking system, but they can't make secure electronic voting machines?

    The reason is that Diebold is not required by any law or regulation to do so. The banking industry and financial networks demand and regulate the security and journalling of transactions. If you don't follow the rules, they don't let you run transactions.
    The "voting industry," on the other hand, has yet to regulate or stringently demand minimum standards from e-voting machines. Until the constituency informs their lawmakers that they want the security of a) knowing that their vote went through the way they wanted it to, and b) knowing that no one can rig the election so that Snoopy wins, Diebold has no economic incentive to add these features.

    BTW - for what it's worth, Diebold can't build an ATM machine worth a crap. They were one of the original ATM manufacturers, and thus have great brand-name recognition in the industry. What they build is over-engineered, over-priced, and over-proprietary. Think of the old IBM PCs that cost much more than their clone counterparts, used nothing that was off-the-shelf, and did no more than a cheaper computer. That's Diebold.

  • Re:MySQL (Score:3, Interesting)

    by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Monday November 28, 2005 @01:13PM (#14130122) Homepage Journal
    They did not just rebuild source RPMs because that would have violated the business constraints, which were the basis for comparison.

    He did comment that the admins provided feedback saying they would have considered a distribution upgrade over the glibc upgrade if they had been allowed to. That would seem to me the more likely path for a business to have taken. Still, for the constraints posed, this was a fairly valid test (and remember that the constraints were imposed on both sides).
  • Re:~FFE4 (Score:2, Interesting)

    by LnxAddct ( 679316 ) <sgk25@drexel.edu> on Monday November 28, 2005 @01:15PM (#14130134)
    I must say, you are a true geek through and through. Thanks for an unbiased study and being brave enough to respond to slashdot. Geeks around the world thank you. (As you can see from my username, I am slightly biased towards the competition :) but still found your study to be excellent)
    Regards,
    Steve
  • by spejsklark ( 913641 ) on Monday November 28, 2005 @01:17PM (#14130168)
    FFE4: What kind of credibility do you think you have, being a Microsoft MVP? [securityinnovation.com]
  • Re:Meta-credibility? (Score:3, Interesting)

    by geomon ( 78680 ) on Monday November 28, 2005 @01:27PM (#14130256) Homepage Journal
    Sorry, just watched six guys on laptops code and tweak for two hours failing to get the newest, hippest OS du jour to even recognize basic hardware.

    No need for apologies. Apple users were watching Windows users perform the same frustration-filled dance for nearly two decades.

    It took the XP release for Microsoft to get right what Apple did in the 1980's.

    I think that Linux has made some marvelous achievements with a fraction of the financial resources of Apple and Microsoft. To compare Linux to Microsoft and declare Microsoft the winner is like declaring Dilophosaurus the best and final winner of evolution 190 million years ago.

    Linux's primary achievement has been to keep the operating system market competitive and alive. By constantly nipping at the heels of Microsoft, open source products like Linux have kept Microsoft working hard to develop new products. By showing that open source software (e.g., BSD) is a viable platform for building high-end user interfaces (OS X), Apple has benefited as well.

    Anyone who dismisses the real lessons of what Linux has achieved in the last 16 years is fated to be stuck in the Jurassic with the other dinosaurs.
  • by Svartalf ( 2997 ) on Monday November 28, 2005 @01:34PM (#14130327) Homepage
    ...upgrading something like kernel32.dll under NT4, 2000, XP, etc. It's not something lightly undertaken on a running machine, especially a production machine. Typically, when something of that magnitude needs an update, it's a full system upgrade; it doesn't matter if it's Windows or anything else. What makes the author of the report think this was even remotely a fair comparison?

    And I'll be honest, I find it fishy, to say the least, that he seemed to need that specific version of glibc; pretty much all vendors in the FOSS world try to track deprecated interfaces, avoid making calls to "broken" APIs on the machines in question, etc. Even with a security flaw present, unless glibc actually is the root cause, they will go out of their way to code around problems in most cases instead of mandating a glibc update for customers - it's that big a deal. Better yet, it seems that the official version updates from SuSE DID address all of this, including a fix to glibc that changed the revision number. If it's on SuSE's update sets, it's been pretty much vetted - unless you change something fundamental, like glibc, at which point all bets are off. It'd be the same with Windows if you figured out how to swap out kernel32.dll or similar.

    Every distribution in mainstream use today except Slackware has a system for handling all dependency relationships and obtaining all the official updates online. This is a KNOWN feature of those distributions, whether you're talking YaST, urpmi, apt-get, yum, or up2date. Given that, not a single admin who actually knows what he's doing would ever have done what you describe on page 31 of the draft 13 version of the paper, where you list things like admins doing by-hand updates of glibc. That's "where angels fear to tread" territory and would only be attempted by people who roll custom distributions for embedded use or similar (myself, for example...) - which would not be your typical sysadmin, and they'd not be doing something like that with a production or pre-production server, because they know better. And this is just one of numerous flaws with the whole study. I'll try to get to more later.

    While I won't label you a shill for Microsoft (partly because you're brave enough to face the gauntlet on this site...), I will question your ability to frame adequate tests that actually test something - because you failed to do anything useful here except give Microsoft precisely what they were looking for. The work you presented to the whole world is hopelessly flawed, in a manner not unlike what Mindcraft did for Microsoft a while back. I'd not consider your firm a reliable source of input or information at this point; while I was going to use one of your other papers provided online as a reference in one of the white papers I am working on for my company, I must now discard it and find other sources, since everything you've produced is suspect because of the egregious flaws in the paper we're discussing.
  • Re:MySQL (Score:1, Interesting)

    by Anonymous Coward on Monday November 28, 2005 @01:41PM (#14130396)
    What were these 3rd party components that you chose? And did your administrators have any power to veto or propose alternatives to these choices?
  • by ookaze ( 227977 ) on Monday November 28, 2005 @01:44PM (#14130413) Homepage
    The f*cked up part is still alive and well.

    To sum up :
    - Despite what is said, the Linux admins just do not look like experienced Linux or Suse admins
    - I still don't know what this search package is (the one which required the new MySQL and glibc)
    - I have to question why the chosen search package was not supported by the distro, as sure enough, no sane Linux admin would have chosen it

    The big question is still there: how did they end up updating glibc?
    Glibc, for god's sake!!
    Something is still very fishy here. We're talking about Linux admins with 5 years of experience, yes? With 2 years of experience with Suse, right?
    So something does not compute. Sure enough, I have less than 2 years of experience with Suse (but had 6 years of experience with Linux at the time of the story that follows). In fact, I was confronted with Suse only once, on a project where we used the same old Suse 8 version.
    I had to install lots of more complicated things: IDE RAID drivers unsupported by the provided Linux version (for ProLiant servers), teaming for the Gigabit ethernet cards, LVS, ...
    I had to recreate RPMs for most of these things. I managed to create RPMs for all the unsupported packages, taking the source RPMs as a guide. That is the only path a decent Linux admin with experience would take, IMHO, if the chosen route is to use unsupported packages on a production platform (which is the case in this study). I grasped Suse in less than a day, knowing other binary distros.
    A Suse admin with 2 years of experience should know that installing a package built for a newer distro will invite lots of updates. He should know how dependencies work; these admins obviously did not.
    What is fishy for me ? An experienced Linux and Suse admin :
    - would never have gone the "source distro" route and "make install" things like that in the system
    - would have created RPM for his distro
    - would never have recompiled glibc, but would have recompiled MySQL instead
    - even if foolish enough to recompile glibc, would not have wiped out the old version, but would have made his package install next to the old one

    These supposed Linux admins behaved as if they don't know how a Linux OS works, or even how Suse works.
  • by Zathrus ( 232140 ) on Monday November 28, 2005 @01:45PM (#14130427) Homepage
    Why not allow the sysadmins to upgrade from SLES 8 to SLES 9

    He answered this -- the configuration control system that was in place did not allow for the upgrading of the OS.

    This is not unusual -- if you know everything works with OS Y version X, then you simply do not upgrade just because X+1 comes out without doing massive testing.

    He also said that after the test was done the Linux admins said that the test followed their real world experience pretty well, except that they would've upgraded the OS instead of backporting glibc. The configuration control didn't allow for that -- which is almost certainly a problem with the configuration control. If your admins say "well, we can upgrade to X+1 and certify that everything works in Z days, or we can try to backport the changes which will take W days with the understanding that it may all blow up anyway" then most businesses will go with the first route -- even if Z is bigger than W because that "blow up anyway" bit should scare the crap out of any CTO that's worth employing.

    Yes, they should've allowed the upgrade. The configuration control was overly stringent and caused undue breakage. There are certainly parallels in the Windows world where installing a patch breaks other systems. And there you're down one option -- you can either deal with the broken software or go back to a vulnerable/unpatched state, but you cannot port the patch backwards in most cases. Not that I recommend the latter option in almost any case -- fixing the broken apps is likely to cause far less pain.
  • by chaoskitty ( 11449 ) <.gro.slrigxis. .ta. .nhoj.> on Monday November 28, 2005 @01:50PM (#14130473) Homepage
    The study illustrates some of the weaknesses of the GNU/Linux methodology which were previously GNU/Linux strengths. For instance, much software in the Unix world is distributed as source code, yet problems constantly arise because people have moved from source distribution to binary distribution. As a BSD user who hardly ever uses x86 systems, I find it strange that the trend is heading in this direction, but it seems that this isn't the only way that GNU/Linux distros are becoming more similar to Windows. Binary patches seem to be commonplace, and so are "wizards" which are hardly stateful and therefore not particularly suited to a multiuser server, for instance.

    Would it be unreasonable to suggest that a good lesson that GNU/Linux people could learn from a study like this is that moving towards the lowest common denominator is NOT a good thing?
  • by Julian Morrison ( 5575 ) on Monday November 28, 2005 @01:56PM (#14130516)
    A major possible fault of subject-is-buyer studies is the possibility of bias by selective publication. Do ten thousand completely fair studies, publish the favourable results and bury the rest. Or, a similar procedure but preemptive, focus the study's remit upon a known strength which is in fact surrounded and dwarfed by (un-studied) weaknesses.

    In this the researcher may not actually be methodologically at fault at all. How did you protect your study from this kind of externally induced bias?
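The selective-publication mechanism described above is easy to demonstrate numerically. A minimal sketch (all numbers invented; the true platform difference is assumed to be zero):

```python
import random

random.seed(0)

# Assume the true difference between platforms is zero: each study's
# result is pure noise around 0 (positive = favors the sponsor).
studies = [random.gauss(0, 1) for _ in range(10_000)]

# The sponsor releases only the studies that came out in its favor.
published = [s for s in studies if s > 0]

mean_all = sum(studies) / len(studies)
mean_pub = sum(published) / len(published)
print(f"mean of all studies:      {mean_all:+.3f}")   # near zero
print(f"mean of published subset: {mean_pub:+.3f}")   # strongly positive
```

Every individual study in the simulation is perfectly fair, yet the published record alone suggests a strong effect that does not exist.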
  • by crulx ( 3223 ) on Monday November 28, 2005 @02:01PM (#14130552)
    Many of us have several questions about the level of incompetence displayed by these Linux admins. From the choice of distros to the botched installation of glibc, they made egregious errors that would have sunk ANY startup they were intended to help set up. And given your knowledge of Linux from your home use, I think you know this.

    Do you see this as a credible challenge to your study?

    Can we talk with these supposed "admins" to gain insight into why they behaved so incompetently?

    And given that you don't have enough admins for the central limit theorem to apply, how do you feel your study generalizes to anything at all?
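The small-sample concern can be illustrated with a toy simulation (population and numbers entirely invented): the average from a 3-admin panel swings far more from draw to draw than the average from a larger one.

```python
import random
import statistics

random.seed(1)

# Invented population of task-completion times (hours) with wide,
# skewed admin-to-admin variation.
population = [random.lognormvariate(1.0, 0.6) for _ in range(100_000)]

def panel_mean(n):
    """Average completion time for a randomly chosen panel of n admins."""
    return statistics.mean(random.sample(population, n))

small_panels = [panel_mean(3) for _ in range(1_000)]
large_panels = [panel_mean(300) for _ in range(1_000)]

print(f"spread of 3-admin means:   {statistics.stdev(small_panels):.2f}")
print(f"spread of 300-admin means: {statistics.stdev(large_panels):.2f}")
```

The standard deviation of the mean shrinks roughly with the square root of panel size, so a 3-admin result is dominated by who happened to be picked.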
  • by electroniceric ( 468976 ) on Monday November 28, 2005 @02:06PM (#14130608)
    Excellent point. In fact, I'd be awfully surprised if some of these experienced Linux admins didn't point that out. Even if there hadn't been these glibc issues, I'd be awfully tempted to upgrade to a newer OS to avoid the potential for having that same problem with other components. Nor are such compatibility traps between a particular platform (e.g. OS + database) and an application particularly specific to Linux; in fact, SAP and PeopleSoft installations are legendary for this sort of cross-application compatibility trap. I'd be very curious to hear what the admins' reaction to the scenario was.

    This study covers an area where Microsoft has invested substantial effort in making a specific set of migration pathways. Microsoft's design method has always been to streamline certain task pathways, and (by design and/or side effect) make work outside those pathways much more difficult. For example, trying to get data out of Exchange and into any database other than SQL server requires a very complex set of programming with CDO and other objects. The effort to get data out of a mail-storage system on Linux would pale in comparison, regardless of the RDBMS used. Another example in the migration area is legacy OSes. If a Microsoft operating system reaches its end of life, not only are there no further patches or upgrades issued by the vendor, but it cannot be patched by anyone outside of Microsoft. So how about a test of modifying an application on an NT4 server versus RedHat 6?

    The findings of this study do seem legitimate, and its credibility is certainly enhanced by the author's willingness to open its methodology to scrutiny. And unsurprisingly, Microsoft asked for a study in an area where they already thought their product was better. I'd call it one state of a large ensemble.
  • by Svartalf ( 2997 ) on Monday November 28, 2005 @02:18PM (#14130716) Homepage
    I have grave concerns as I'm reading the paper. If the 3rd party component needed an upgrade to a new glibc, you would never have done what these admins allegedly did in the paper. It would have been a red flag on the component in question, and if it was something critical to the application, the assumption would be that the OS version officially supported by the component was SLES 9, not 8, since SLES 8 didn't ship that version of glibc. You don't hack something like this into a production system, ever - even if you've got the skills to pull it off. I've got the skills, and even I wouldn't do what was done. You'd do a migration to the next version, period. There are far, far too many things that can go wrong, and you really need to vet everything once you do it. What your esteemed admins did was analogous to someone haxoring kernel32.dll by patching it manually and then putting it into a production Windows machine. I honestly don't know of anyone in their right mind who would ever do that.

    Another faintly disturbing thing in this paper is that it's assumed that it's Linux at fault, when in reality, it was the ancillary components' requirements and someone trying to bull their way through the "problem". There's several problems with this, but I can number a few key ones for you:

    1) glibc's interface, the ABI, doesn't change all that much over time. Typically, it's linked to at runtime through a soname link to the actual .so file (currently libc.so.6 on modern Linux and *BSD distributions...). This interface can be safely used for many years at a time, in spite of varying version numbers, and the expected behavior will be the same for an older and a newer version - so long as you're not stepping on a bug in the older version or a new feature offered by a later version of the runtime.

    2) Yes, you CAN get away with minor revision updates of glibc without problems, but typically you need to vet all your compiled code for regression-testing purposes. It really, really is like replacing kernel32.dll on Windows. If it isn't provided as an official update, you've got a lot of regression work ahead of you to ensure that fixes in the library don't break other code (typically not a problem, but you never can tell when someone mis-used something...). This is NOT something that your rank-and-file sysadmin has any real business doing. It's NOT their job.

    3) Either the component stepped on a bug, or it's using some new feature of the glibc layer. In either case, you can't bull your way into using it on something that doesn't have the needed support level. What your admins did was analogous to trying to make this work on NT4, only to find out that you need the .NET framework for everything, and then proceeding to install piece-parts of the OS to get it there.

    The study's flawed - it's that plain, that simple. You can defend it all you'd like, but it's got bad problems that everyone, myself included, has been pointing out, and you've avoided answering several of the key points we've been making.
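Point 1 above, the stable soname, can be checked from a script. A sketch assuming a glibc-based Linux system (it will fail on musl or non-Linux platforms):

```python
import ctypes

# Programs link against the soname "libc.so.6", not against a specific
# point release; the dynamic linker resolves the name to whatever glibc
# is actually installed.
libc = ctypes.CDLL("libc.so.6")

# glibc exports its own version string; this number changes across
# minor updates while the soname applications link against stays put.
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print("libc.so.6 resolves to glibc", libc.gnu_get_libc_version().decode())
```

The printed version changes with minor glibc updates while the soname stays libc.so.6, which is exactly why vetted vendor updates are usually safe and hand-rolled library swaps are not.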
  • by BeBoxer ( 14448 ) on Monday November 28, 2005 @02:28PM (#14130822)
    As others have pointed out, the root problem was a glibc incompatibility with a closed-source, binary-only application which was one of the requirements. For unknown reasons, upgrading to SLES 9 was ruled out. As was running the closed-source application on a separate server. As was choosing a compatible product instead of the incompatible one. Moreover, the selection of the "requirement" applications was made solely on "market share" with no consideration of actual compatibility with the existing IT infrastructure. Basically, a series of poor technical decisions which no competent IT organization would make. The only valid conclusion you can draw from this study is that choosing applications based on market share alone, with no thought to technical considerations, can lead to unfavorable outcomes. Is that enough of a refutation for you?
  • by Svartalf ( 2997 ) on Monday November 28, 2005 @02:31PM (#14130854) Homepage
    "And several 'experienced' Linux admins had trouble making MySQL work on SUSE?"


    To play devil's advocate for a moment, how do we know you're past just "experienced" and deep into the Wizard or Guru realm of administration or programming? (I know, I know, but he's going to flip that one out all the same... I'd be legitimately tarred with that brush in his response... >:-))

    Realistically, though, you're right - I have issues with all of this. They picked distros that would most likely have issues with things. They picked rules that required a lot of patching on the Linux side but only the normal set of updates on the Windows side - a lot of patching that simply wasn't needed and had no analog in the Windows world. They picked a stilted set of conditions that honestly would have mandated a distribution version update in any shop, for any OS you could name, in the real world.

    I have trouble buying into this- and it's to the point that I'm being forced to re-work my own stuff for my startup because I was referring to other papers by them; I can't trust the data here as far as I could pick the Doctor up and throw him, so everything from this consultancy firm is now suspect.
  • Data Points (Score:3, Interesting)

    by quantaman ( 517394 ) on Monday November 28, 2005 @03:00PM (#14131121)
    A lot of people are trying to poke holes in the study itself though it seems to have been fairly well implemented.

    I did, however, notice two interesting bits that cause me to put a lot less weight on the results:

    With three people there's certainly likely to be a lot of variability and to get some conclusive results, I'd love to get a huge group of administrators across the spectrum in terms of experience. I'd also love to do it across multiple scenarios, beyond the ecommerce study.

    And a little later

    it is up to the sponsor if the study is publicly released

    Simply fund a lot of small legitimate studies with high variance, then publish only the results that fit your case. In a way it's like one big badly done study where someone throws out all the data points that don't fit their hypothesis. For all we know he, or another researcher, might have done a dozen other studies which came out in favour of Linux and were subsequently ignored. The research itself is all completely legitimate, but Microsoft creates a false overall conclusion through selective publication. Perhaps the companies who fund the studies should be held to the same ethical standard as those who do the research?
  • by ananke ( 8417 ) on Monday November 28, 2005 @03:54PM (#14131687)
    I think we need to clarify something, because it seems that the majority of geek Slashdot users have the same baffled look on their faces as I do:

    1) 3 individual linux administrators were put to a test. Each one had 5 years of experience.
    2) Each one of them decided to upgrade glibc:

    2a) one decided to do it from scratch, "from the GNU site" [I assume that meant compiling it]
    2b) the second upgraded using packages from a newer version of Suse, and only that
    2c) the third did something similar to the second.

    Now, call me crazy, but somehow points #1 and #2a/b/c do not match up. Nobody with that much experience should ever consider the solutions taken by those three people. Especially #2a - nobody in their right mind would ever consider that. It's just way too risky. That's why I'm wondering: were they asked to go that route? Were they given instructions to go beyond what the vendor supports?

    Considering that it is mentioned that a newer version of Suse was available, why did nobody decide to upgrade the entire distribution?

    You may be right that the ability to perform #2a is something that wouldn't be possible in the Windows world, thus eliminating the possible problems it may cause. However, something still doesn't add up. Those admins should never have attempted those routes.

    Other than that, interesting paper.

  • by benjamindees ( 441808 ) on Monday November 28, 2005 @04:40PM (#14132139) Homepage
    Well, I didn't really define it. I just repeated it. But I assume it has the general meaning you would expect. A "monolithic" operating system is highly integrated, with irreplaceable components. A "modular" OS would be more flexible, have multiple, interchangeable options for major components. In a "modular" OS, components can be removed without causing adverse effects, yet the lack of standards can make setup and use more difficult. A "monolithic" OS has many standard components higher up the application stack, which have numerous cross-requirements, such that, for instance, removing a spellchecker might cause your e-mail client to fail.

    "Monolithic" operating systems are usually easy to set up, impossible to upgrade, and can be supported by a small group of programmers apart from the environment in which they are used, along with relatively incapable administrators willing to perform mindless, repetitive tasks - perfect for a commercial OS. "Modular" systems are more difficult to set up initially, easier to upgrade (especially incrementally), and require (and enable) a more cohesive interface between those who create the OS and those who use it - perfect for capable sysadmins, and Open Source Software.

    A good example of each would be something like Debian versus something like OSX. Debian, as a "modular" OS, packages almost every OSS program out there, yet sets very few defaults. OSX, on the other hand, comes out-of-the-box with a full set of default programs and relatively little support for integration of 3rd party applications. Or you can think of something like Windows 3.1 with 3rd party browsers, versus Windows 95 with Internet Explorer, or, in a more general sense, KDE versus a lightweight DE like blackbox.

    What specific features contribute to a "modular" OS? I'd like to say things like robust, version and upgrade-aware package management. Obviously, a compiler and development tools and the ability of admins to modify the OS, which are lacking in proprietary commercial software, limited in some commercial Linux distributions (such as Linspire), and difficult or discouraged in others, such as Fedora. Or, lacking source availability, a robust community of interoperable, 3rd party software, and a generally application-neutral OS design. All of these requirements, to a certain extent, also necessitate a long development and support lifecycle.
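    That "version and upgrade-aware package management" point is concrete: on a modular system you can ask the package database what depends on a component before replacing it. A toy sketch of that reverse-dependency query - the package records here are invented stand-ins for a real rpm/dpkg database:

```shell
# Toy reverse-dependency query: each record is "package:dep dep ...".
# A real system would ask rpm or dpkg; this stand-in database just shows
# the shape of the question a modular OS lets you ask before an upgrade.
db="bash:glibc
openssl:glibc
httpd:openssl glibc
vim:glibc ncurses"

whatrequires() {
    # print every package whose dependency list contains $1
    printf '%s\n' "$db" | awk -F: -v lib="$1" '
        { n = split($2, deps, " ")
          for (i = 1; i <= n; i++) if (deps[i] == lib) { print $1; break } }'
}

whatrequires glibc    # everything in the toy database links against glibc
```

    On an actual RPM-based distribution like SuSE, `rpm -q --whatrequires glibc` asks the real database the same question, and would have shown immediately that swapping glibc out from under the package manager touches nearly every installed package.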

    But, in reality, those things are just symptoms of a much deeper cause. The actual, driving force behind modular operating systems is the concept of the "programmer-admin". A "programmer-admin", while perhaps not a full time programmer, is at least capable of diagnosing complex problems and submitting patches and valuable bug reports to upstream sources. Consequently, the "programmer-admin" doesn't spend much time further up the application stack, such as tasks like helping users write reports and general end-user training. The main task of the "programmer-admin" is to maintain and incrementally improve the functionality of the OS. As such, she must be capable of playing an integral role in the development process. Depending on the size of the userbase and IT staff, the "programmer-admin" may even specialize on a specific part of the OS, or ignore userland applications entirely.

    However, this study, and many "enterprises", expressly forbid admins from programming. Using commercial, "monolithic" operating systems, most sysadmins are too busy trying to integrate 3rd party components and performing upgrades to be able to make real contributions to an OS, which will most likely render any improvements worthless at the next upgrade. The result is that admins perform a variety of incidental tasks, from minor upgrades to purchasing to user training, mostly nothing special or requiring extensive skills or ability, instead of truly beneficial, long-lasting work. Unless the client is large enough to garner special attention from the OS vendor, the OS is written by programmers who have little contact with end-users, and important functiona
  • by FFE4 ( 932849 ) on Monday November 28, 2005 @05:47PM (#14132734)
    This is *really* interesting. It gets to the "philosophy" of research as opposed to this study itself - we talk about this internally all the time and about how we can build an industry infrastructure to support this Feynman-esque research. Here's what I'd love to do: get a group of industry folks together on all sides of the fence (so there's no question of funding); agree to some ground rules, a methodology, and then also agree that the work will be published no matter what. To some degree that's what some of the consumer review groups do but I don't think we have a *real* equivalent in the IT world for the really big stuff. This gets down to the question of how could we set up something truly unbiased (perceived or real) in the Feynman sense of the word that would also work as an economic model. It seems like a consortium of consumers (organizations that use technology as opposed to selling it commercially) who do not have a vested interest in the outcome would be ideal. It would be great to get some responses to this thread with some suggestions. Again, the premise is simple, and funding from a fairly neutral third party like the government is one thing, but how would the IT community do something where multiple participants in the user world would be willing to fund it or multiple vendors, as a group, would be willing to take that risk?
  • Re:Meta-credibility? (Score:3, Interesting)

    by geomon ( 78680 ) on Monday November 28, 2005 @05:50PM (#14132754) Homepage Journal
    Well, to be fair, if you had to pay the linux contributors the same hourly rate that you would pay the average programmer at Microsoft or Apple, I'd be interested in seeing which OS actually costs more. And add on top of that, Linus dictating which features needed to go in and when. That's something I'd really be interested in seeing.

    Good point, if the economics were comparable. It would be interesting, for instance, to calculate how much money would have been spent by local farmers if they had hired a contractor to build their barn rather than paying it forward by helping raise their neighbor's barn.

    Or how much money would have been spent if soup kitchens had to pay for their food rather than relying on donations. Or how much each Habitat for Humanity house would cost if it had paid for the volunteer labor.

    That is the problem with comparing a commercial venture with a volunteer effort. The economics aren't the same. My point was considering the vast amount of capital that Microsoft and Apple have at their disposal, why is Linux so close in quality that these arguments over which is better are even possible?
  • by jmorris42 ( 1458 ) * <jmorris&beau,org> on Monday November 28, 2005 @06:42PM (#14133168)
    > But these are legitimate problems we HAVE to deal with. These aren't issues really in the
    > Microsoft world; but they are in the Linux world. This study brings it to light.

    Oh really. Most of the problems came from an artificial and highly contrived requirement that an unspecified 3rd party binary only package be run on Suse 8 instead of Suse 9, which it was designed for. So are you saying that any Windows software will run on any version of Windows? Well then I guess that pretty much wraps it up for Shorthorn since nobody needs to upgrade to it!

    Get a grip here people. If you buy a package and the box says "Requires Windows Server 2003" you don't expect the IT peeps to pull a rabbit out of a hat and make it run on the XP servers you standardized on a couple of years ago. Same thing here. When the unspecified third party binary said it needed services only available on Suse 9 a decision needed to be made. A) get with the vendor and get a version built and supported on Suse 8, B) Upgrade the server it is to run on to Suse 9 or C) pick a different vendor.

    It is pretty obvious Microsoft designed the test as a no-win scenario.
  • Re:MySQL (Score:3, Interesting)

    by rholtzjr ( 928771 ) on Monday November 28, 2005 @07:31PM (#14133560) Journal

    Of course he did. That was the whole point of this study. When would a Windows system do better than a Linux system with respect to upgrading components while putting constraints on what they can do? In my opinion, this study has no merit except that it exhibits what NOT to do when requirements for an application are not met.

    Here is what convinced me that this study is totally bogus.

    From his assumption:

    * The business migrates operating system versions at the end of the one year period to the latest versions of the platform.

    Since SLES9 was released around Aug 2004 (approx.), this would probably mean that since they upgrade their OS at the end of the year, more than likely they would be setting this environment up in their test/development environment within the next couple of months (say Oct at the latest). Now, MySQL 4.1 went GA around Oct 2004 (approx.), so technically 4.1 was not available until around that time frame.

    When was the decision to go to 4.1 made? Was the upgrade so important that it must bypass the development/test phase and preempt the OS upgrade that was happening in two months?

    I see this scenario as nothing more than what conditions can be created to ensure that one system fails and the other does not.

    I do not look at this scenario as a failure/benefit for any OS. I look at this scenario as a failure in the Software Engineering process that was used. The sequence of events that formulated the conclusion(s) is fictitious and does not reflect a real-world scenario with respect to the real-world application life cycle.

    This is also an example of failed requirements gathering in the analysis phase: instead of redoing the requirements based upon their glibc version incompatibility findings, they proceeded down the wrong upgrade path, causing a catastrophic (at the extreme) system failure, or a system no longer supported by the vendor. In this scenario the requirements would no longer be met once a single application change left the commercially supported OS/application outside its vendor's contract support.

    Would I hire this person to design my IT infrastructure? Sure! If he comes up with a plan that I agree with :)
