
Windows vs. Linux Study Author Replies

Last week you submitted questions for Dr. Herb Thompson, author of the latest Microsoft-sponsored Windows vs. Linux study. Here are his answers. Please feel free to ask follow-up questions. Dr. Thompson says he'll respond to as many as he can. He's registered a new Slashdot username, FFE4, specifically to participate in this discussion. All others claiming to be him are imposters. So read, post, ask, and enjoy.
1 - A better way of putting it
by einhverfr

It seems that your study attempted to simulate the growth of an internet startup firm on Windows or Linux. One thing I did not see in the study was a good description of assumptions you made. What assumptions were made in both the design of the requirements and the analysis of the data? What limitations can we place on the conclusions as a result of these assumptions?

Dr. Thompson

This is a really important question. I think there are two sections of the study: the assessment methodology and then the experiment we undertook to illustrate how to apply that methodology. I'll answer the assumption question for both parts:

Methodology - For the methodology, we wanted to provide a tool that organizations could use and apply their own assumptions to. Maintaining a system is all about context; some environments favor Linux, others Windows. The question is, how do you know what's likely to be the most reliable (which includes manageable, secure and supportable) solution for your environment? We proposed a methodology - a recipe - that looks at a solution in its entirety instead of just individual components. Policies like configuration control vary from organization to organization, and to get something that's truly meaningful in your environment, the methodology needs to be carried out in your context. Enterprise customers can and should do this when they are about to trust their critical business processes to a platform. That said, the basic assumptions of the methodology are that patches are applied at 1-month intervals and that business needs evolve over time. How those business needs evolve depends on the scenario you're looking at (in our experiment we looked at ecommerce, for example). The methodology doesn't cover steady-state reliability, meaning the uptime of a system that is completely static. While this is important, our conversations with CIOs, CTOs, CSOs and IT folks led us to believe that this was a smaller contributor to pain in a dynamic environment. In an appliance, for example, steady-state reliability is king, and I think an important limitation of this methodology is that we don't capture that well; it's an amazingly difficult quality to measure in a time-lapse way.

The purpose of the experiment was to illustrate how to apply the methodology and to begin to get some insights into some of the key model differences between the two platforms. For the experiment we picked the ecommerce scenario, for no other reason than that there has been a clear shift in how ecommerce sites have serviced their customers in recent years, moving from static sites to personalized content. Some specific assumptions were:

* The transition from a basic purchasing site to a personalized portal based on order/browsing history takes place over a one year period.

* The period we looked at was July 1st, 2004 to June 30th, 2005 (the most recent full year at the time of the study).

* A configuration control policy exists that mandates OS version but not much else, meaning administrators had fairly free rein to meet business requirements.

* All patches marked as critical or important supplied by the vendor are applied.

* We assume the system to be functioning if the original ecommerce application is running and meets some basic acceptance tests (same for both platforms; see Appendix 1 of the report) and the newly installed components are also running.

* To add new capabilities, we use leading 3rd party components as opposed to building custom code in-house.

* The business migrates operating system versions at the end of the one year period to the latest versions of the platform.

* The administrators that participated in the experiment reflect the average Linux (specifically SuSE) and Windows administrators in skill, capability and knowledge. While we strove for this, it's important to recognize the small sample size when drawing any conclusions from the data.

As far as limitations, the experiment looks at one specific case with a total of six administrators. I'd love to have done it with a hundred admins on each side on a wide range of business-requirement scenarios, and my hope is that others will do that and publish their results. Our experiment, however, shows that for this particular, clearly documented scenario, experienced Linux admins had conflicts between meeting business needs and a recommended best practice like not introducing out-of-distribution components. If one is aware of potential conflicts and challenges upfront, I think you can put controls in place to make reasonable tradeoffs. In the Linux case, a precise and specific configuration control policy may have prohibited the problematic upgrade of one of the components that the 3rd party solutions required. This would likely have reduced the number of failures but would have put some hefty constraints on 3rd party solutions. To understand the implications for your environment you really need to run through the methodology with the assumptions and restrictions of your organization, and I hope that this study either prompts or provokes people to do that.

************************

2 - Meta-credibility?
by Tackhead

Where I come from (non-management, grunt-level techie), appearing in any of these analysts' journals *costs* an author more credibility than it gains him or her. For example, if $RAG says that $CORP has the best customer support, I immediately assume that $CORP has such horrid customer support that they had to pay someone to make up some research that proves otherwise.

To be sarcastic, I'd ask "who the heck actually takes these studies seriously?", but obviously *somebody* does. Who are these people, and why do these people take these industry analyst firms/journals/reports seriously? Are they right or wrong to do so? This isn't an attack (or endorsement :) of your research -- I'm talking about the credibility gap in industry research, and my observation that it's an industry-wide problem.

The meta-credibility question is this: Given the amount of shoddy pay-for-play research out there, does being published in an analyst journal tend to cost (a researcher, his consulting company, his financial backers) more credibility than it gains him/her/them? If not, why not -- and more importantly, if so, is there any way to reverse the trend?

Dr. Thompson

This is a really interesting question because it cuts to the heart of what a real research study should provide to the reader. It should provide a baseline, and I think research should always be questioned, scrutinized and debated because one can always find reasons for bias. Particularly if a subject of the study (a vendor, for example) is behind its funding, whether directly (as in this study) or indirectly (meaning that they are big clients), I think it's critical that the study provide not just a baked cake for readers but the recipe as well. The recipe has to be inherently fair and simple, meaning that it has to map directly to the quality or pain one is trying to measure without taking into account how the subjects try to provide that service or mitigate that pain. I think slanted opinion pieces, with no backup for those opinions, seriously hurt credibility, at least in my book. If you're presenting facts, though, and encouraging others to question them, then I think that actually helps credibility, even if the search for those facts was paid for.

I agree, though, that one is tempted to dismiss research a priori because of funding or some vendor tie. I think a good way to reverse the trend is to open the process up to public scrutiny; that's probably the main reason I came on Slashdot. To use this specific study as an example, some folks disagreed with several points in the experiment, from counting patches, to reasons for upgrading key components, to the ecommerce scenario we used. For me, the study's key value is the methodology. Could different applications/scenarios have been chosen? Absolutely!

The value I think that this study gives to the practitioner is arming them with a tool to help measure in their own environment. By applying the methodology, the results should take into account things like administrators' skill sets, support policies, configuration control policies and the tradeoffs between customizability, maintainability, visibility, security and usability. It's only by looking at this stuff in context that one can make a sound judgment; and a true research paper, especially one where funding is in question, needs to fully disclose the method and the funding source. In our case, the methodology has been vetted by industry analysts, IT organizations and several academics. That doesn't mean much, though, if you don't find the methodology meaningful for the questions you want answered. One reason I've come on Slashdot is to get the thoughts, opinions and assessments of the methodology itself from administrators in the trenches. I'm really pleased with the great questions and comments amidst the inevitable flames, and I'm looking forward to this being posted so that others can weigh in with their feedback and I can jump into the threads to get some discussion going.

If the research helps give real insight, and the methodology makes sense, I think there's real value no matter who paid the bill. At the end of the day, you need to decide whether or not you can extract any value from the information presented to you. In the case of this study, my hope is that it will leave you thinking, "hmmm... maybe we should actually run through a process like this and check out how this works for ourselves." My more ambitious hope is that you'll implement it and tell me what challenges you faced on Windows, Linux, OS X, BSD, whatever platform you choose to compare. It may not even venture into the perennial Windows versus Linux battle; maybe you're a Linux shop trying to decide between multiple distributions, for example. Either way, if it's got people thinking about the topic and asking questions, well, that's all any researcher can really hope for.

************************

3 - Weak setup
by 0xABADC0DA

If I understand the study correctly, the Windows side had to do nothing but set up a server to do a few different tasks over time and run Windows Update. The Linux side had to have multiple incompatible versions of their database server running simultaneously on a single system and had to run unsupported versions of software to do it.

Why wasn't the Windows side required to run multiple versions of IIS or SQL Server simultaneously? In real life, if you need to run multiple database versions you use virtualization or multiple systems, especially if one requires untested software. You don't run some hokey unstable branch on the same system as everything else. Why was a Linux solution picked that required this level of work? My other related question is, did any of the Unix administrators question why they were being asked to do such a thing? For example, did they come back and say they needed a license for VMware? If they did not, they do not seem like very competent administrators in my opinion.

Dr. Thompson

The Windows admins and Linux admins were given the exact same set of business requirements, which doesn't necessarily translate into the same tasks as they went about fulfilling them. The 3rd party components installed were chosen solely based on their market leadership position, and any OS upgrades were unknown at the time of selection. That said, on the Windows side, it turned out that no upgrades of IIS were needed (except for patches) and SQL Server was upgraded to SP4 as part of patch application. On the Linux side, at a high level there were two main classes of upgrades, MySQL and GLIBC, and they were both prompted by the installed components. After the experiment, the administrators on both sides were asked if this kind of evolution of systems matched their real-world experience. They said yes, with the caveat that if they were asked to install a component that required an upgrade of GLIBC, they would likely upgrade the operating system as long as their configuration control policy allowed it.

You make a great point about installing components on some sort of staging system (which is almost always done) as opposed to live running systems. That still means that the problems the administrators had equate to real IT pain. If something weird had to be done to get the system running, but it does run and it's then put into production, it's like a fuse that gets set on a bomb. A careful configuration control policy would almost certainly help, and that's why I think it's so important to conduct this kind of experiment in your own environment with your own policies.

As far as selection of the Linux administrators goes, they all had at least 5 years of enterprise administration experience, and two years of experience on SuSE specifically. With three people there's certainly likely to be a lot of variability, and to get some conclusive results I'd love to get a huge group of administrators across the spectrum in terms of experience. I'd also love to do it across multiple scenarios, beyond the ecommerce study. For this experiment, the bottom line is that we illustrated one clearly documented scenario with six highly qualified admins that we selected based on experience. We can't ensure equal competency levels, but there was nothing in our screening that would lead us to believe there were gaps in knowledge on either side. When it comes down to it, though, the really meaningful results are the ones you get when you perform the evaluation in your environment. Hopefully this study provides a starting point for asking the right questions when you do that.

************************

4 - Who determined the metrics?
by Infonaut

Did Microsoft come to you with a specific set of metrics, or did you work with them to develop the metrics, or did you determine them completely on your own?

Kudos to you for braving the inevitable flames to answer people's questions here on Slashdot.

Dr. Thompson

Great question! The metrics and the methodology were developed completely on our own, independent of Microsoft. They were created with the help and feedback of enterprise CIOs as well as industry analysts. I think this relates to a couple of other questions on Slashdot with the gist of "if Microsoft is funding the study, aren't you incentivized for them to come out ahead?" Besides the standard answers - we would never do that, and that would put our credibility at risk, which is our primary commodity - which are both very true, let me explain a little more about how our research engagements work.

Company X (in this case Microsoft) comes to us and says, "Can you help us measure quality Y (in this case reliability) to get some insight into how product Z stacks up?" We say, sure, BUT we have complete creation and control of the methodology, it will be reviewed and vetted by the community (end users and independent analysts) and it must strictly follow scientific principles. The response will either be "great, we want to know what's really going on" or "um, here are some things to focus on, and I think you should set it up this way." In the first case we proceed; in the second case we inform the company that we don't do that kind of research. We are also not in the opinion business, so we present a methodology to follow and illustrate how that methodology is applied, with the hope that people will take the methodology and apply it in their own environment.

All of our studies are written as if they will be released publicly, BUT it is up to the sponsor whether the study is publicly released. The vendor knows that they're taking a risk. They pay for the research either way but only have control over whether it is published, not over content. So if their intent is to use it as an outward-facing piece, they may end up with something they don't like. Either way, I think it's of high value to them. If there are aspects of the results that favor the sponsor's product, in my experience it goes to the marketing department and gets released publicly; if it favors the competitor's product, it goes off to the engineering folks as a tool to understand their product, their competitor's product, and the problem more clearly. Either way, we maintain complete editorial control over the study, and there is no financial incentive for us whether it becomes a public study or is used as an internal market analysis piece. The methodology has to be as objective as possible to be of any real value in either case.

************************

5 - ATMs vs. Voting Machines
by digitaldc

How is it that Diebold can make ATM machines that will account for every last penny in a banking system, but they can't make secure electronic voting machines?

Also, does the flame-resistant suit come with its own matching tinfoil hat? (don't answer that one)

Dr. Thompson

This is a question that has passed through my mind more than once. The voting world is very interesting. I don't have experience with the inner workings of Diebold's ATMs, but I can say that the versions of their tabulation software that I've seen have some major security challenges (see this Washington Post documentary for some of the gory details). I'd say I'm concerned about the e-voting systems I've seen, but that would be a serious understatement.

I question whether the economic incentive is there for them to make their voting systems more secure. Take an ATM, for example. Imagine the ATM has a flaw, and if you do something to it, you can make it give you more money than is actually deducted from your account. Anything involving money gets audited, sometimes multiple times, and chances are good that the bank is going to figure out that they're losing money. On the flip side, if there was a flaw in the ATM in the bank's favor, someone balancing their checkbook is going to notice a discrepancy. The point is that there's always traceability and there's always someone keeping score. If you think about voting tabulators, though, we've got this mysterious box that vote data gets fed into and then, in many states, only a fraction of these votes are audited. That means we don't really know what the bank balance is other than what the machine tells us it is. If the system is highly vulnerable and its vulnerability is known by the manufacturer *but* it's going to be expensive to fix it and shore up defenses, there seems to be no huge incentive to fix the problems. I think the only way to get some decent software that counts votes that people can have confidence in is to allow security experts to actually test the systems, highlight potential vulnerabilities, and put some proper checks and balances in place. That would give the general public some visibility into a critical infrastructure system that we usually aren't in the habit of questioning, and it would hold voting machine manufacturers directly accountable to voters.

As for the tinfoil hat to go with the flame-resistant suit: it hasn't been shipped to me yet - apparently the manufacturing company is still filling backorders from SCO. :)

************************

6 - Why are the requirements different?
by altoz

Looking at your research report's appendices, it seems that the requirements for Windows Administrators were somewhat different than the Linux Administrators. For instance, you ask for 4-5 years sys admin experience minimum for Windows, whereas it's 3-4 years sys admin experience minimum for Linux.

Why wasn't it equal for both? And doesn't this sort of slight Windows favoring undermine your credibility?

Dr. Thompson

Short answer: Typo. Long answer: We originally were looking for 4 years of general administration experience for both Linux and Windows, which is what is reflected in the desired responses to the General Background questionnaire for Linux. We then raised it to 5 years for both Linux and Windows, which is reflected in the General Background section of the Windows questionnaire. The difference between the two was just a failure to update the response criteria on that shared section of one of the questionnaires. On page 5, though, we've got the actual administrator experience laid out:

Each SuSE Linux administrator had at least 5 years of experience administering Linux in an enterprise setting. We also required a minimum of 2 years of experience administering SuSE Linux distributions, at least 1 year administering SuSE Linux Enterprise Server 8, and half a year administering SLES 9 (released in late 2004). Windows administrators all had at least 5 years of experience administering Windows servers in an enterprise environment. These administrators also had at least 2 years of experience administering Windows 2000 Server and at least 1 year of administration experience with Windows Server 2003.

************************

7 - Scalability of Results?
by hahiss

You tested six people on two different systems; how is that supposed to yield any substantial insight into the underlying OSes themselves?

[At best, your study seems to show that the GNU/Linux distribution you selected was not particularly good at this task. But why does that show that the "monolithic" style of Windows is better per se than the "modular" style of GNU/Linux distributions?]

Dr. Thompson

First, let's look at what we did. We followed a methodology for evaluating reliability with three Windows admins and three Linux admins. This is a small sample set, and it looked at one scenario: ecommerce. Is this enough to make sweeping claims about the reliability of Linux/Windows? No way. I do, however, think the results raise some interesting questions about the modularity vs. integration tradeoffs that come with operating systems. I don't think that either the Windows or Linux model is better in a general sense, but they *are* different; the question is which is likely to cause less pain and provide more value for your particular business need in your specific environment. Hopefully these are the questions that people will ask after reading this study, and with any luck it will prompt others to carry out their own analysis within their own IT environment, building on what we started here. I think the methodology in this paper provides a good starting point to help people answer those questions in context.

************************

8 - Convenience vs. security
by Sheetrock

Lately, I've felt that Microsoft is emphasizing greater trust in their control over your system as a means of increasing your security. This is suggested by the difficulty of obtaining individual or bulk security patches from their website as opposed to simply loading Internet Explorer and using their Windows Update service, the encouragement in Service Pack 2 of allowing Automatic Update to run in the background, and the introduction of Genuine Advantage requiring the user to authenticate his system before obtaining critical updates such as DirectX.

In addition, Digital Rights Management and other copy protection schemes are becoming increasingly demanding and insidious, whether by uniquely identifying and reporting on user activity, intentionally restricting functionality, or even introducing new security issues (the most recent flap involves copy protection software on Sony CDs that not only hides content from the user but permits viruses to take advantage of this feature).

I would like to know how you feel about the shift of control over the personal computer from the person to the software manufacturers -- is it right, and do we gain more than we're losing in privacy and security?

Dr. Thompson

This is an interesting problem because manufacturers have to deal with a wide range of users. If there was real visibility and education for users on the security implications of doing A, B or C, then we'd be OK. It's scary, though, when that line gets crossed. Sony's DRM rootkit is a good example. But if you think about it, we are essentially passively accepting things like this all the time. Every time we install a new piece of software, especially something that reads untrusted data like a browser plugin, we tacitly accept that this software is likely to contain security flaws and can be an entryway into the system; now, are you sure you want to install it? The visceral immediate reaction is no, but then you balance the tradeoffs of the features you get versus the potential risks. Increasingly, we're not even given that choice, and components that are intended to help us (or help the vendor) are installed without our knowledge. This also brings up the question of visibility: how do we know what security state we're really in with a system? Again, there are tradeoffs; some of this installed software may actually increase usability or maintainability, but it's abstracting away what's happening on the metal. So far, it seems as though the market has tended towards the usability, maintainability and integration that favor bundling, on both the Linux and Windows sides. It's kind of a disturbing trend, though.

As another example, think about how much trust average programmers put in their compiler these days. Whenever I teach classes on computer security and then go off into x86 op codes or even assembly, it seems to be a totally foreign concept and skillset. We've created a culture of building applications rapidly in super high-level languages, which does get the job done, but at the same time seems to have sacrificed knowledge of (or even the desire to know) what's happening on the metal. This places a heavy burden on platform developers, compiler writers and even IDE manufacturers because we are shifting the cloud of security responsibility over to them in a big way. Under the right conditions it can be good, because the average programmer knows little about security, but we need to make sure that the components we depend on and trust are written with security in mind, analyzed by folks that have a clue, and tested and verified with security in mind. This means asking vendors the tough questions about their development processes and making sure they've got pretty good answers. Here's what I think is a good start. If that fails, there's always BSD. :)

************************

9 - Apache versus IIS
by 00_NOP


Simple one: of course I accept that Windows and Linux are a priori equally vulnerable - C programmers make mistakes. The question is which model is most likely to deliver a fix fastest. Given that the one area where Linux is probably in the lead over Microsoft's software is in the realm of the web server, why are my server logs filled with artifacts of hacked IIS boxes while Apache seems to remain pretty safe?

Dr. Thompson

You bring up a couple of interesting points. The first is patch delivery. It's true that on Linux, if there's a high-profile vulnerability, you're likely to be able to find a patch out on the net from somebody in a few hours. Sometimes the fix is simple, a one-liner, and other times it may be more complex. Either way, there could be unintended side effects of the patch, which is why there's usually a significant lag between these first-responder patches and a blessed patch released from the distribution vendor. Most enterprises I know wait for the distribution patch as a matter of policy, and even then they go through a fairly rigorous testing and compatibility verification process before the patch gets deployed widely. In the Windows world, one doesn't get the alpha or beta patches, just the blessed finished product. So the question is which solution is likely to provide, fastest, a patch that fixes the problem and doesn't create any more problems. That's a tough one to answer. I think there's something to be learned by looking historically, and that in general there's a big discrepancy between perception and reality. Here's a (pdf) link to a study we did earlier this year based on 2004 data that I think provides a good starting point for answering that question.

As far as why you've got so many attempts on your Windows/IIS box, I think there are two distinct issues: vulnerability and threat profile. In the past, I would argue that the path of least resistance was through Windows because desktop systems were often left unprotected by the home computer user. Bang-for-the-packet favored creating tools that exploited these problems, and some of the attacks actually worked on poorly configured servers as well. Then there's the question of targeted vs. broad attacks. There's no question that the high-profile worms and viruses in the last several years have favored Windows as a target. The issue gets even more complicated when you look at targeted attacks. These targeted attacks are much harder to measure, even anecdotally, because either an organization gets compromised and doesn't disclose it (unless they're compelled to by law) or the attack goes undetected because it doesn't leave any of the standard footprints, in which case no pain is felt immediately. That may help to explain it, but the truth is that there's a lot of conflicting data out there. I remember reading this on Slashdot last year, which claims Apache was attacked more than IIS, but I've also read reports to the contrary. The reality is that any target of value is going to get attacked frequently. If there is an indiscriminate mass attack like a worm or virus, that's pretty bad and can be really painful. What's scarier, though, is the attack that just targets you.

************************

10 - Do you agree with Windows Local Workflow
by MosesJones

Microsoft and Linux distros have had a policy for some time of including more and more functionality in the base operating system; the latest example is the inclusion of "Local Workflow" in Windows Vista.

As a security expert do you think that bundling more and more increases or decreases the risks, and should both Windows and Linux distros be doing more to create reduced platforms that just act as good operating systems?

Dr. Thompson

Three years ago I bought my mother a combination TV, VCR and DVD player. It was great; she didn't have to worry about cables or the notorious multi-remote-control problem. She didn't even really need the VCR because she hardly ever watches video tapes, but I thought, why not. It worked great for two years; mom watched her DVDs, and once in a blue moon a video tape from a family vacation would find its way into the VCR. All was well at the Thompson household. This past year, tragedy struck. The VCR devoured a videotape, completely entangling it in the machine. This not only knocked out the VCR but the television too (it thought it was constantly at the end of a tape and needed to rewind it). So here's the issue: mom probably only needed a TV and a separate DVD player. I probably could have gotten better quality components individually too, and with some eBay-savvy shopping, the group may have been cheaper. For my mom, though, the integration and ease of operation of the three were key assets. The flipside is that the whole is only as strong as the weakest of its constituent parts, and because the manufacturer threw some questionable VCR components into the mix, the whole thing failed. The meta-question: did I make the right choice, going for the kitchen-sink approach versus individual components? I think for mom I made the right call. For me, my willingness to program a universal remote and my love of tweaking the system would have led me down a different route.

In operating systems, it depends what you're looking for and what the risk vs. reward equation is for you, and I would argue that the answer varies from user to user. The ideal would be something that gave you integration, ease of use, visibility, manageability and the ability to truly customize and minimize functionality and maintenance requirements. No operating system I've ever seen strikes that balance optimally for every user. As far as bundling functionality with the distribution, I think it's a question of market demand. There's no question, though, that from a simple mathematical perspective, the less code processing untrusted data, the better. That means if I need a system to perform one specific function, and that function is constant over time, then from a security perspective I only want the stuff on that box that does what I need to serve that goal. For example, I don't ever want X Windows on my Linux file server. I just want the minimal code base there, because as long as the code itself is reliable, I'll only have to mess with the box to apply patches (and far fewer patches if I strip the system down). That's true of my home file server. If I have an army of systems to manage, though, my decision is going to come down to which platform is reliable and extends me the most tools to manage it efficiently and effectively. That's a question that can only be answered in context. I can tell you what I run at home, though. File server: Red Hat EL 4 (no X Windows). Laptop: Windows XP SP2. Desktop: Windows Server 2003 with virtual machines of everything under the sun, from Win 9x to SuSE, Red Hat and Debian.

************************

Comments:
  • ~FFE4 (Score:3, Funny)

    by GillBates0 ( 664202 ) on Monday November 28, 2005 @12:25PM (#14129652) Homepage Journal
    UID: FFE4 (932849). What a n00b. He must be new here.

    Kidding!

    • Re:~FFE4 (Score:3, Interesting)

      I'm not sure if this is what he's referring to, but back in the day $FFE4 was the address for the "get whatever key is being pressed" routine in the 8-bit Commodore kernal (e.g., the C64).

      As in:

      WAITKEY: JSR $FFE4 ; Check for a keypress
      BEQ WAITKEY ; If no key pressed, a zero is in the accumulator, so loop back
    • Re:~FFE4 (Score:5, Informative)

      by FFE4 ( 932849 ) on Monday November 28, 2005 @01:01PM (#14130003)
      FFE4 = JMP ESP on x86 (one of my favorite instructions for certain contexts - buffer overflows in particular :)). It's one I created just for this interview and thus got a UID heading towards infinity!
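
      A quick illustrative sketch (not part of the original comment): the two bytes FF E4 really do decode to JMP ESP on 32-bit x86, which is why the handle reads as the classic buffer-overflow trampoline. The Python snippet below confirms it with the Capstone disassembler, an assumed extra dependency (pip install capstone):

      from capstone import Cs, CS_ARCH_X86, CS_MODE_32

      md = Cs(CS_ARCH_X86, CS_MODE_32)          # 32-bit x86 decoder
      for insn in md.disasm(b"\xff\xe4", 0x0):  # feed in the bytes FF E4
          print(insn.bytes.hex(), insn.mnemonic, insn.op_str)
      # prints: ffe4 jmp esp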
      • Re:~FFE4 (Score:2, Interesting)

        by LnxAddct ( 679316 )
        I must say, you are a true geek through and through. Thanks for an unbiased study and being brave enough to respond to slashdot. Geeks around the world thank you. (As you can see from my username, I am slightly biased towards the competition :) but still found your study to be excellent)
        Regards,
        Steve
  • Don't forget (Score:4, Interesting)

    by sucker_muts ( 776572 ) <sucker_pvn@hotmCHICAGOail.com minus city> on Monday November 28, 2005 @12:31PM (#14129705) Homepage Journal
    People on Slashdot can get pretty upset about the studies Microsoft shows the world, and these mostly say Microsoft is king of the hill. But don't ever forget they don't show ALL of their studies. It could well be that 60% of them don't favor Microsoft well enough, or not at all.

    Of course I realise they try to use situations that are more likely to favor them than [insert competitor].

    Now if just once a bunch of other studies leaked, we could get a real view of what MS is doing with their research all the time...
    • by Foofoobar ( 318279 ) on Monday November 28, 2005 @12:54PM (#14129933)
      King of the desktop perhaps, but not king of servers. Sure, they show more REVENUE, but as for deployment, Linux still dominates and has been squeezing Microsoft more and more out of the server space. While Linux eats into UNIX market share, it is eating into Windows market share as well.

      Don't believe it? Look at what the most widely used web server is. Look at what the most widely used DB is. Look at the most popular scripting languages. And now keep in mind that they all come installed by default on almost all Linux distros.

      They can keep putting money into trying to convince people that Microsoft Clusterfuck Edition can replace Linux clusters. That's cool. Just another money pit for them and a great way to divert resources into a nowhere scheme. And sure, they have loads of funds, but they still have to answer to shareholders, and they are not pleased that the stock has stagnated for so long; they won't be pleased when dividends stop getting paid and products aren't sold or delivered on time due to them focusing on a product that will go nowhere.

      The entire open source world and all companies supporting open source (IBM, Google, Sun, Amazon, etc.) are all running a bait and switch where Microsoft throws money into duplicating anything that it thinks may be a threat. This in turn causes them to waste funds and resources on red herrings when the actual threat is something else entirely.

      These past 5 years have seen Linux and open source go from obscurity to mainstream in the business market. The next five years will see it go from obscurity to mainstream in the consumer market.
  • by sconeu ( 64226 ) on Monday November 28, 2005 @12:31PM (#14129707) Homepage Journal
    At least the guy has a sense of humor.

    See his comment on the Flameproof suit/Tinfoil hat question.
  • MySQL (Score:5, Interesting)

    by Shawn is an Asshole ( 845769 ) on Monday November 28, 2005 @12:34PM (#14129733)
    Okay, so they needed a certain version of MySQL which required a newer version of Glibc. Still, though, any Unix admin should know that upgrading glibc is risky at best (I've broken many systems due to upgrading glibc).

    Here's my question: Why didn't they just rebuild the source RPM and install the resulting binaries? That way the binary would be built against the same glibc as everything else on the system. I've done that on many systems with no adverse effects. They didn't even have to rebuild it on the server; any machine running the same distro would do fine.
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday November 28, 2005 @12:49PM (#14129869)
      The OS upgrade was already part of the "evaluation".

      Why not allow the sysadmins to upgrade from SLES 8 to SLES 9 instead of REQUIRING them to backport the glibc patches from 9 to 8?
      • by Zathrus ( 232140 ) on Monday November 28, 2005 @01:45PM (#14130427) Homepage
        Why not allow the sysadmins to upgrade from SLES 8 to SLES 9

        He answered this -- the configuration control system that was in place did not allow for the upgrading of the OS.

        This is not unusual -- if you know everything works with OS Y version X, then you simply do not upgrade just because X+1 comes out without doing massive testing.

        He also said that after the test was done the Linux admins said that the test followed their real world experience pretty well, except that they would've upgraded the OS instead of backporting glibc. The configuration control didn't allow for that -- which is almost certainly a problem with the configuration control. If your admins say "well, we can upgrade to X+1 and certify that everything works in Z days, or we can try to backport the changes which will take W days with the understanding that it may all blow up anyway" then most businesses will go with the first route -- even if Z is bigger than W because that "blow up anyway" bit should scare the crap out of any CTO that's worth employing.

        Yes, they should've allowed for the upgrading. The configuration control was overly stringent and caused undue breakage. There are certainly parallels in the Windows world where installing a patch breaks other systems. And there you're down one option -- you can either deal with the broken software or go back to a vulnerable/unpatched state, but you cannot port the patch backwards in most cases. Not that I recommend the latter option in almost any case -- fixing the broken apps is likely to cause far less pain.
        • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday November 28, 2005 @02:14PM (#14130666)
          Yes, they should've allowed for the upgrading. The configuration control was overly stringent and caused undue breakage.
          But they DID allow for upgrading.

          In fact, it was part of the requirements.

          But they did NOT let them upgrade when any normal person would have. They REQUIRED them to stay on SLES 8 and backport patches from SLES 9 ... and then later they required them to upgrade to SLES 9.

          Any intelligent person would have skipped the backport process, done the upgrade when it became necessary and bypassed all the "problems" that were "found" in this "study".

      • by electroniceric ( 468976 ) on Monday November 28, 2005 @02:06PM (#14130608)
        Excellent point. In fact, I'd be awfully surprised if some of these experienced Linux admins didn't point that out. Even if there hadn't been these glibc issues, I'd be awfully tempted to upgrade to a newer OS to avoid the potential for having that same problem with other components. Nor are such compatibility traps between a particular platform (e.g. OS + database) and an application particularly specific to Linux; in fact, SAP and PeopleSoft installations are legendary for this sort of cross-application compatibility trap. I'd be very curious to hear what the admins' reaction to the scenario was.

        This study covers an area where Microsoft has invested substantial effort in making a specific set of migration pathways. Microsoft's design method has always been to streamline certain task pathways, and (by design and/or side effect) make work outside those pathways much more difficult. For example, trying to get data out of Exchange and into any database other than SQL server requires a very complex set of programming with CDO and other objects. The effort to get data out of a mail-storage system on Linux would pale in comparison, regardless of the RDBMS used. Another example in the migration area is legacy OSes. If a Microsoft operating system reaches its end of life, not only are there no further patches or upgrades issued by the vendor, but it cannot be patched by anyone outside of Microsoft. So how about a test of modifying an application on an NT4 server versus RedHat 6?

        The findings of this study do seem legitimate, and its credibility is certainly enhanced by the author's willingness to open its methodology to scrutiny. And unsurprisingly, Microsoft asked for a study in an area where they already thought their product was better. I'd call it one state of a large ensemble.
    • Re:MySQL (Score:2, Insightful)

      by IdleTime ( 561841 )
      Most likely because the new MySQL version used a glibc function not existing in the previous version, hence rebuilding with the old glibc would error out.

      I know that the database I work with on a daily basis has a minimum glibc version requirement, and when we release a new version, that minimum required glibc version has normally been bumped, hence a glibc upgrade may be necessary.
      • Re:MySQL (Score:5, Informative)

        by molarmass192 ( 608071 ) on Monday November 28, 2005 @01:28PM (#14130265) Homepage Journal
        Most likely because the new MySQL version used a glibc function not existing in the previous version

        I find that EXCEEDINGLY hard to believe considering that the req was:

        "In the Linux case, the component required an upgrade of the MySQL database component from version 3.23 to version 4.1"

        and MySQL 4.1 works fine when compiled against GLIBC 2.2, which is what SLES 8 ships with. Truth be told, the study admins chose to hunt down precompiled RPMs for MySQL 4.1 rather than download the sources and do a simple configure/make install. If they REALLY wanted RPMs, they could even have grabbed the SRPM from SuSE, run it through alien, subbed in the new tgz, and rebuilt a fresh RPM. Thus, my long-standing position that there is no such thing as a "good" admin who hasn't also done some development work.
      • Re:MySQL (Score:4, Informative)

        by ookaze ( 227977 ) on Monday November 28, 2005 @01:52PM (#14130487) Homepage
        Most likely because the new MySQL version used a glibc function not existing in the previous version, hence rebuilding with the old glibc would error out.

        Stop the BS please.
        They upgraded from MySQL 3 to MySQL 4, and no MySQL requires any specific version of GLIBC.
        Look at the report; they just reacted like no Linux admin would: they recompiled (and replaced, instead of adding a new version of!!!) glibc instead of recompiling MySQL.

        I know that the database I work with on a daily basis has a minimum glibc version requirement, and when we release a new version, that minimum required glibc version has normally been bumped, hence a glibc upgrade may be necessary.

        Stop saying such stupid things please.
        Saying this, you just show that you are not an experienced Linux admin.
        The minimum glibc version you would require would be 2.x, which has been available in every distro for years.
        Even 2.3.x has been available for years.
        No database requires a new glibc version, as I doubt they need the latest TLS things.
        The only problem is with closed source databases, and if you have problems, that means you use a version unsupported by your platform.
    • Re:MySQL (Score:3, Interesting)

      by ajs ( 35943 )
      They did not just rebuild source RPMs because that would have violated business constraints, which were the basis for comparison.

      He did comment that the admins provided feedback saying that they would have considered a distribution upgrade over the glibc upgrade if they had been allowed to. That would seem to me to be a more likely path for a business to have taken. Still, for the constraints posed, this was a fairly valid test (and remember that the constraints were posed on both sides).
    • Re:MySQL (Score:5, Informative)

      by FFE4 ( 932849 ) on Monday November 28, 2005 @01:18PM (#14130171)
      It was actually one of the 3rd party components that required the GLIBC upgrade and not MySQL. If it had been MySQL and they had the SRPMs, I'd agree with you (although that may lead to some weird patching problems down the road). Many 3rd party commercial vendors only provide the binary RPMs, and that was the case here too. Again, let me say that we chose components based on market share without knowing that these issues would crop up. That's why I think it's critical to apply this methodology in your own environment, because you get the added benefit of any configuration control policies you may have in place, and going through the exercise may, in addition to helping you select a platform, help you select the 3rd party components that minimize pain too. Most of this kind of stuff just ain't documented in the install/release notes.
      • Re:MySQL (Score:5, Insightful)

        by ookaze ( 227977 ) on Monday November 28, 2005 @01:59PM (#14130538) Homepage
        It was actually one of the 3rd party components that required the GLIBC upgrade and not MySQL

        Which is not what is written in the report. In either case, something is very wrong, because that only means your 3rd party component WAS NOT supported by the platform, and yet it has the most market share on Linux?
        So SuSE, which was chosen, was not the platform with the most market share, at least not enough to be supported by this 3rd party. And yes, that would apply to SuSE 8, as the 3rd party had the most market share before your study, during which SLES 9 became available.

        Again, let me say that we chose components based on market share without knowing that these issues would crop up

        How come? Every 3rd party tells you which platforms they support!!!
        A Linux admin that does not know that is not even an admin.

        Most of this kind of stuff just ain't documented in the install/release notes

        Of course it is. It either says SLES 8 is supported or it doesn't, and then you ask.
        This is nonsense otherwise, and nonsense happened in this study.
      • Re:MySQL (Score:5, Insightful)

        by BeBoxer ( 14448 ) on Monday November 28, 2005 @02:10PM (#14130630)
        Many 3rd party commercial vendors only provide the binary RPMs and that was the case here too. Again, let me say that we chose components based on market share without knowing that these issues would crop up.

        Let's be honest here. You should have known that those issues might crop up. Binary incompatibility is a well-known problem with closed-source software, and not just on Linux. It's one of the major advantages of open-source software over closed-source. Having the source means I can rebuild the software for my system to avoid exactly this issue. Or more commonly, my distro can rebuild the software and provide me with an easy-to-use and fully compatible binary package.

        Any project which goes out and chooses what software to use exclusively based on "market share" deserves any problems they run into. That should be the conclusion of your study. When I go looking for applications to use, compatibility is a primary consideration. Having a maintained version included in my distro of choice (Debian for me) is a huge plus. If I do have to use closed-source software, putting it into its own isolated OS will probably end up a requirement as well, since that's the easiest and most direct way of avoiding binary compatibility issues.

        To compare Windows and Linux by forcing one of the biggest weaknesses of closed-source software onto the open-source solution is quite disingenuous, I think. It may be that the closed-source software is well and truly required and has no open-source competitor. But you never actually name the software, so no one can come along and say "hey, why not use GNU Mailman to handle the mailing," for example. Both mailing lists and search have many, many open source options. Data mining has perhaps not so many, but in all likelihood that application can run on an independent server and connect to MySQL over the network. That would eliminate all the GLIBC problems.

        Really, not to sound snide, but the strongest conclusion I can make from this study is that I should not hire you to design my IT infrastructure. I can't say if it was ignorance or malice, but it sounds like you pretty much set the Linux side up for failure.
        • Re:MySQL (Score:3, Interesting)

          by rholtzjr ( 928771 )

          Of course he did. That was the whole point of this study: when would a Windows system do better than a Linux system with respect to upgrading components while putting constraints on what the admins can do? In my opinion, this study has no merit except that it exhibits what NOT to do when the requirements for an application are not met.

          Here is what convinced me that this study is totally bogus.

          From his assumption:

          * The business migrates operating system versions at the end of the one year period to the lates

      • by Svartalf ( 2997 ) on Monday November 28, 2005 @02:18PM (#14130716) Homepage
        I have grave concerns as I'm reading the paper. If the 3rd party component needed an upgrade to a new glibc, you would never have done what these admins allegedly did in the paper. It would have been a red flag on the component in question, and if it was something critical to the application, it would be assumed that the official version of the OS supported by the component was SLES 9, not 8, because 8 didn't have support for that version of glibc. You don't hack something like this in a production system, ever - even if you've got the skills to pull something like it off. I've got the skills, but even I wouldn't do what was done. You'd do a migration to the next version - period. There are far, far too many things that can go wrong, and you really need to vet everything once you do it. What your esteemed admins did was analogous to someone haxoring kernel.dll by patching it manually and then putting it into a production Windows machine. I honestly don't know of anyone in their right mind who would do that one - ever.

        Another faintly disturbing thing in this paper is that it's assumed that it's Linux at fault, when in reality it was the ancillary components' requirements and someone trying to bull their way through the "problem". There are several problems with this, but I can number a few key ones for you:

        1) glibc's interface, the ABI, doesn't change all that much over time. Typically, it's linked to at runtime through a soname link to the actual .so file (currently libc.so.6 on modern Linux and *BSD distributions...). This interface can be safely used for many years at a time, in spite of varying version numbers, and the expected behavior will be the same for an older and a newer version - so long as you're not stepping on a bug within the older version or a new feature offered by a later version of the runtime. (See the sketch after this list.)

        2) Yes, you CAN get away with minor revision updates of glibc without problems, but typically, you need to vet all your compiled code for regression testing purposes. It really, really is like replacing kernel.dll on Windows. If it isn't provided as an update, you've got a lot of regression work ahead of you to ensure that fixes done to the library don't break other code (Typically not a problem, but you never can tell when someone mis-used something...)- this is NOT something that your rank-and-file sysadmin has any real business doing. It's NOT their job.

        3) Either the component stepped on a bug, or they're using some new feature of the glibc layer. In either case, you can't bull your way into using it on something that doesn't have the needed support level. What your admins did was analogous to trying to make this work on NT4, only to find out that you need the .Net framework for everything, and then proceeding to install piece parts of the OS to get it there.
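
        A minimal sketch of point 1 (added for illustration, assuming any glibc-based Linux system; not from the study or the original comment): both the runtime linker and the snippet below resolve the C library through its soname, libc.so.6, rather than through a versioned file name such as libc-2.3.2.so, which is why modest glibc updates normally leave existing binaries working.

        import ctypes

        # Load the C library by its stable soname, not by a versioned filename.
        libc = ctypes.CDLL("libc.so.6")
        # glibc's gnu_get_libc_version() returns a C string; tell ctypes so.
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        print(libc.gnu_get_libc_version())   # e.g. b'2.31'; varies by system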

        The study's flawed - it's that plain, that simple. You can defend it all you'd like, but it's got bad problems that everyone, myself included, has been pointing out, and you've avoided answering several of the key points we've been making.
      • Re:MySQL (Score:3, Insightful)

        So, the admins were free to use any tools they wanted, and this was supposed to be a test of Linux, yet you dictated components (proprietary, binary-only components that you refuse to disclose, and that apparently weren't even supported on the Linux OS used) based on market share. And Linux failed because, in order to comply with these requirements, your genius admins performed a glibc upgrade that broke the system???

        Why am I supposed to take this seriously again?
      • Re:MySQL (Score:3, Insightful)

        by burnin1965 ( 535071 )
        Hello Dr. Thompson,

        I appreciate your answering questions on the report; it takes some courage to face a hostile community.

        Anyhow, to the question: perhaps I should go back and read more, but what I would like to see are more specific details on the third party applications you were using, the issues they created, and how they were resolved.

        I'm curious because it appears that some initial rules and choices that were made for the study were a recipe for disaster. It's like telling two teams they will be in a ra
      • Re:MySQL (Score:3, Insightful)

        by grahammm ( 9083 ) *
        I suppose it also needs to be asked why they started off with such an old version of glibc. In July 2004, glibc 2.3.2 was the latest version, and it had been released 15 months earlier. Would it not have been reasonable to start the trial with at least semi-up-to-date software?
      • Re:MySQL (Score:4, Insightful)

        by Krach42 ( 227798 ) on Monday November 28, 2005 @04:09PM (#14131845) Homepage Journal
        It was actually one of the 3rd party components that required the GLIBC upgrade and not MySQL.

        Why were the SLES admins not allowed to say, basically, that this 3rd party component is simply incapable of working with their systems as is, and then either go back to the company that makes the 3rd party component or say "we'll take our business elsewhere"?

        Was this something you would have allowed them to do? Because if you were to run into this same sort of problem with Windows, one would only have the choice to upgrade the OS or pick another product.

        Namely, if this same situation were to occur on Windows (they're using, say, Windows 2003, and SP1 comes out, and the 3rd party component won't work unless one has SP1), there would be no choice but to either upgrade to the newer version of Windows, pick another component supplier, or badger the component supplier for a compatible version.

        I don't think it's fair to say that the Linux people had a hassle because they were able to take the option of getting it working on the older version. If anything, this shows a greater flexibility of Linux, at the cost of some hassle, compared to Windows. And forcing Linux to use this flexibility at the cost of ease of administration could be said to be entirely contrary to the purpose of the study.
    • by Svartalf ( 2997 ) on Monday November 28, 2005 @01:34PM (#14130327) Homepage
      ...upgrading something like kernel32.dll under NT4, 2000, XP, etc. It's not something lightly undertaken on a running machine - especially a production machine. Typically, when something of that magnitude needs an update, it's a full system upgrade - it doesn't matter whether it's Windows or anything else. What makes the author of the report think that this was even remotely a fair comparison is the question.

      And I'll be honest, I find it fishy, to say the least, that he seemed to need that specific version of glibc; pretty much all vendors in the FOSS world try to track deprecated interfaces, avoid making calls to "broken" APIs on the machines in question, etc. Even with a security flaw present, unless glibc actually is the root cause, they will go out of their way to code around problems in most cases instead of mandating a glibc update for customers - it's that big a deal. Better yet, it seems that the official version updates from SuSE DID address all of this, including a fix to glibc that changed the revision number. If it's in SuSE's update sets, it's been pretty much vetted - unless you change something fundamental, like glibc, at which point all bets are off. It'd be the same way with Windows if you figured out how to swap out kernel32.dll or similar.

      Currently, every distribution in mainstream use except Slackware has a system for handling all dependency relationships and obtaining all the official updates online. This is a KNOWN feature of all those distributions, whether you're talking YaST, urpmi, apt-get, yum, up2date, etc. Given that this is the case, not a single admin who actually knows what he's doing would ever have done what you describe in the draft 13 version of the paper on page 31, where you list things like admins doing by-hand updates of glibc. That's "where angels fear to tread" territory and would only be attempted by people who roll custom distributions for embedded use or similar (myself, for example...) - which is not your typical sysadmin, and they wouldn't be doing something like that on a production or pre-production server because they know better. And this is just one of numerous flaws with the whole study. I'll try to get to more later.

      While I won't label you a shill for Microsoft (partly because you're brave enough to face the gauntlet on this site...), I will question your ability to frame adequate tests that actually test something - because you failed to do anything useful here except give Microsoft precisely what they were looking for. The work you did, as presented to the whole world, is hopelessly flawed in a manner not unlike what Mindcraft did for Microsoft a while back. I'd not consider your firm a reliable source of input or information at this point. I was going to use one of your other papers, provided online, as a reference item in one of the white papers I am working on for my company, but I must now largely discard it and find other sources for that information, as everything you've produced is suspect because of the egregious flaws in the paper we're discussing.
      • Go read up on the versioning scheme glibc uses - it's unique and defies both logic and common sense.

        Basically, and this is coming from somebody who has a lot of experience dealing with binary software on Linux:

        • Yes, it's entirely believable that a glibc upgrade was required, because when you compile a program, that binary is usually locked to the version of glibc it was compiled with. Newer versions are OK, older versions aren't (see the sketch after this list).
        • This locking process is automatic and independent of what the source code
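        A rough sketch of that compile-time lock, assuming a glibc toolchain (the __GLIBC__ / __GLIBC_MINOR__ macros and gnu_get_libc_version() are standard glibc facilities; the program below is only an illustration): the macros record the library version the binary was built against, while the function reports what the running system actually provides. A binary built against a newer glibc than the runtime typically refuses to start with a "version GLIBC_x.y not found" error.

        #include <gnu/libc-version.h>
        #include <stdio.h>

        int main(void)
        {
            /* Baked in at compile time by the glibc headers. */
            printf("Built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);

            /* Reported by the C library the program actually loaded. */
            printf("Running on glibc    %s\n", gnu_get_libc_version());
            return 0;
        }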
  • Well (Score:5, Insightful)

    by flyinwhitey ( 928430 ) on Monday November 28, 2005 @12:35PM (#14129739)
    When this study was originally posted, many of you slashbots rushed to dismiss it solely on the basis of funding.

    When I brought it to your attention that doing so is fallacious, I was modded down into oblivion.

    Inevitably the same people will post again, with the same fallacious arguments, claiming that this guy is a shill for MS.

    I'll be interested to hear the excuses that are made this time, and I can guarantee that several people will attack this man personally for no reason other than the results of his study.

    So how about, instead of relying on old prejudices, we instead attempt to actually examine the research and gauge it on its own merits.

    • Re:Well (Score:5, Insightful)

      by nharmon ( 97591 ) on Monday November 28, 2005 @12:44PM (#14129829)
      Just because he says he's not a shill does not mean he is not.

      I wonder if we would get the same results if we repeated the experiment, and not have it funded by Microsoft.
      • Re:Well (Score:2, Interesting)

        by MSFanBoi2 ( 930319 )
        If said experiment was repeated, funded by say RedHat, and they found the same results, do you think they would have the acumen to publish them?
      • by everphilski ( 877346 ) on Monday November 28, 2005 @12:51PM (#14129901) Journal
        He told you his process. He told you how Microsoft approached his company. He gave you his methodology. Show us where he f*ed up.

        I'm waiting... come on... all talk now? yeah...

        -everphilski-
        • The f*cked up part is still there and well.

          To sum up :
          - Despite what is said, the Linux admins just do not look like experienced Linux or SuSE admins
          - I still don't know what this search package is (the one which required the new MySQL and glibc)
          - I have to question why the search package chosen was not supported by the distro, as, sure enough, no sane Linux admin would have chosen it

          The big question is still there: how come they ended up updating glibc?
          Glibc, for god's sake!!
          Something is still very fishy here. We'r
          • by Master of Transhuman ( 597628 ) on Monday November 28, 2005 @02:24PM (#14130776) Homepage

            Hell, no sysadmin - Windows or Linux - should have upgraded anything as significant as the compiler or libraries without backing up the system first so he could back out the changes if something broke!

            The statement that "the RPM was broken so they couldn't undo their changes" right there tells you something was wrong with these guys!

            At the very least, they were probably pissed that they had to use a 3rd party proprietary system that used binary RPMs only!
        • by arevos ( 659374 ) on Monday November 28, 2005 @02:12PM (#14130646) Homepage
          The dubious points of the study have been pointed out several times. The problem stems from third-party software that was incompatible with the Linux system they used. All the study shows is that an unnamed third-party piece of software doesn't work with a specific version of Linux. From this sample size of 1, the study infers that server administrators can implement business targets more easily on Windows than on Linux. The study simply isn't nearly comprehensive enough to come to any valid conclusion.
          • by Master of Transhuman ( 597628 ) on Monday November 28, 2005 @02:31PM (#14130849) Homepage

            Excellent summary in one paragraph.

            Now, some people will say, "Well, this is what happens in a real corporate environment - you have to do what management wants you to do. And the issue is how well can you do it in one OS or the other?"

            But this is just begging the question. Worse, it's justifying piss-poor IT management decisions in the name of "reality", which biases the comparison in favor of Windows and against OSS on its face. You could easily find just as many bad decisions that result in Windows being screwed up as Linux. The point is that overall IT management policies and procedures have more to do with this study than either OS does. Which makes the study worthless as a comparison.

            The study also does nothing to examine how Linux and OSS in general have great flexibility in meeting business application needs compared to proprietary solutions. In fact, the study, by requiring closed-source binary RPMs for an application, demonstrates the opposite.
          • by everphilski ( 877346 ) on Monday November 28, 2005 @03:16PM (#14131288) Journal
            The problem stems from third-party software that was incompatible with the Linux system they used. All the study shows is that an unnamed third-party piece of software doesn't work with a specific version of Linux.

            But these are legitimate problems we HAVE to deal with. These aren't really issues in the Microsoft world, but they are in the Linux world. This study brings that to light.

            The study simply isn't nearly comprehensive enough to come to any valid conclusion.

            And the author admits that, too. But without more cash he can't do much more.

            -everphilski-
            • > But these are legitimate problems we HAVE to deal with. These aren't really issues in the
              > Microsoft world, but they are in the Linux world. This study brings that to light.

              Oh really. Most of the problems came from an artificial and highly contrived requirement that an unspecified 3rd-party, binary-only package be run on SuSE 8 instead of SuSE 9, which it was designed for. So are you saying that any Windows software will run on any version of Windows? Well then I guess that pretty much wraps it up f
      • Re:Well (Score:3, Insightful)

        by mpcooke3 ( 306161 )
        I wonder if we would get the same results if we repeated the experiment, and not have it funded by Microsoft.

        It's traditional to fund 10 independent studies and publish the ones that came down on your side.
    • So how about, instead of relying on old prejudices, we instead attempt to actually examine the research and gauge it on its own merits.

      Oh hush. Why go against everything Slashdot stands for?

      Admit it! You're working for Microsoft!

      Now that I've accused you, I await a +5 Insightful mod, and the inevitable pats on the back.
    • Re:Well (Score:3, Insightful)

      by slavemowgli ( 585321 )
      I think you're wrong. Dismissing a study based solely on who commissioned it (which is different from just funding it) is not fallacious, it's common sense. Think about it for a moment.

      If you can't see why it is, consider this analogy from sports: if an athlete gets doped prior to an important event, they'll get disqualified. That is common sense, too, and arguments like "he would've won even if he hadn't taken anything" or "the substance he took didn't actually do anything" would be laughed at. It's obviou
  • by MSFanBoi2 ( 930319 ) on Monday November 28, 2005 @12:35PM (#14129743)
    Looks like a bunch of honest and detailed answers with no dodging...
    • Ahh... No. (Score:3, Insightful)

      by Concern ( 819622 ) *
      Not really. Just more sophisticated than usual.

      There's a lot of fancy ducking and dodging, none of which changes the facts that:
      1. Whether you're crooked or not, you'll give the exact answers he gave about your ethics. We judge only by the work itself. If you asked me that question, that's what I'd say, not a lot of stuff I wouldn't expect anyone to believe.
      2. The sample size is far too small to be meaningful in any way to anyone, yet he did the study anyway, knowing full well how Microsoft would "misrepresent"
  • by plover ( 150551 ) * on Monday November 28, 2005 @12:36PM (#14129756) Homepage Journal
    You said above "I agree though that one is tempted to dismiss research a priori though because of funding or some vendor tie. I think a good way to reverse the trend is to open the process up to public scrutiny; thats probably the main reason I came on Slashdot."

    You obviously see the value of public scrutiny in what you do. So do we, we're obviously paying attention to your studies, and are pleased to see the "inner workings." It certainly helps lend credibility to your points. But it also begs the question: why doesn't Microsoft extend that same logic to operating systems or applications?

  • Meta-credibility? (Score:4, Insightful)

    by spazmonkey ( 920425 ) on Monday November 28, 2005 @12:38PM (#14129767)
    Not to sound like a troll, but meta-credibility does also work the opposite way;

            An anti-$ rag says that some grassroots anti-$ OS/app/whatever is "the best" and you will have an immediate knee-jerk reaction from the community defending it to the death and proudly installing it on their boxes just to say they did, even if it takes several dozen man-hours to get it to do anything even marginally useful.

            Dogma is probably even more dangerous and counterproductive than putting blind trust in some $corp's marketing stooges, as hard as that is to comprehend.

            Sorry, just watched six guys on laptops code and tweak for two hours failing to get the newest, hippest OS du jour to even recognize basic hardware.

    • Re:Meta-credibility? (Score:3, Interesting)

      by geomon ( 78680 )
      Sorry, just watched six guys on laptops code and tweak for two hours failing to get the newest, hippest OS du jour to even recognize basic hardware.

      No need for apologies. Apple users were watching Windows users perform the same frustration-filled dance for nearly two decades.

      It took the XP release for Microsoft to get right what Apple did in the 1980's.

      I think that Linux has made some marvelous achievements with a fraction of the financial resources of Apple and Microsoft. To compare Linux to Microsoft and d
  • by ananke ( 8417 ) on Monday November 28, 2005 @12:40PM (#14129785)
    From a purely technical point of view, I was mostly interested in seeing the following question [and thread] addressed:

    http://interviews.slashdot.org/comments.pl?sid=168949&cid=14084692 [slashdot.org]

    • It's all about the criteria. Why were the criteria such that the Linux sysadmins were backporting patches?
    • by FFE4 ( 932849 ) on Monday November 28, 2005 @03:29PM (#14131438)
      In response to the question you referred to about fairness, here's why I think the study was fair, and here's what I think the limitations are. If I'm a business that needs to deploy some solution, I know what I need in terms of business requirements. There are a lot of ways I could technically implement a solution to those business problems. We tried to come up with a methodology to give people insight into the challenges they might run into before they do an enterprise deployment. In the experiment, you've got the assumptions we started with, and the administrators were given fairly free rein. As far as patches, the Linux guys ended up going to different places at Novell for the majority of components and then to MySQL for updates for newer versions they installed. Similarly, the Windows admins had to go to the Windows Update site for patches but also had to check for patches to SQL Server.

      At a high level, giving some folks business requirements and seeing how they implement them with a particular technology base is fair. The limitations, though, are the small sample size, the lack of a detailed configuration control policy, and the high potential variability of the small group. I think that it's great to question the paths that the admins followed. There are a million ways that they could have approached things, and I guess the key takeaway for me is that given three experienced Linux admins we got three really different results. I do think that if that's recognized as a challenge then we can put procedures in place to minimize the risk of some of the problems encountered here. You may be prepared to assume that responsibility, and in some situations it might even be in your best interest to do so (possibly highly customized environments, embedded, ...). I hope that this study will put Company X in a better position to do their own evaluation.
      • by ananke ( 8417 ) on Monday November 28, 2005 @03:54PM (#14131687)
        I think we need to clarify something, because it seems that majority of the geek slashdot users have the same baffled look on their faces as I do:

        1) 3 individual linux administrators were put to a test. Each one had 5 years of experience.
        2) Each one of them decided to upgrade glibc:

        2a) the first decided to do it from scratch, "from the GNU site" [I assume that meant compiling it]
        2b) the second upgraded using packages for a newer version of SuSE, and only that
        2c) the third did something similar to the second.

        Now, call me crazy, but somehow points #1 and 2a/b/c do not match up. Nobody with that much experience should ever consider the solutions taken by those three people. Especially 2a - nobody in their right mind would ever consider that. It's just way too risky. That's why I'm wondering - were they asked to go that route? Were they given instructions to go beyond what the vendor supports?

        Considering that it is mentioned that a new version of SuSE was available, why did nobody decide to upgrade the entire distribution?

        You may be right, the ability to perform #2a is something that wouldn't be possible in the Windows world, thus eliminating the possible problems it may cause. However, something still doesn't add up. Those admins should never have attempted those routes.

        Other than that, interesting paper.

  • I like it. I find it very difficult to deal with the multi-remote problem at someone's house.

    Surround sound, satellite, DVD, VHS, cable, PS2 all plugged in. At many people's houses I just give up trying to watch TV or even change the channel/volume.
    • I've taken to recommending the Harmony remotes (now from Logitech) for anyone who has a home theater setup they have a hard time controlling. Even non-techies can set them up fairly easily. Their only drawback is that the remotes literally cost more than the TV/DVD/VCR combo box he mentioned above. (The Harmony 880 is $250 at Best Buy.)
  • SuSE is a great distribution, but I'd rather put it on desktops instead of servers.
    I'd like to dare the author to replicate this experiment using Debian stable as the Linux-side server OS.

  • by TubeSteak ( 669689 ) on Monday November 28, 2005 @12:53PM (#14129926) Journal
    5. I'd just like to mention that Diebold ATMs are not amazingly secure machines.
    DECEMBER 03, 2003 [computerworld.com]
    Last week's revelation by Diebold Inc. that its automated teller machines operated by two financial services customers were struck by the W32/Nachi worm raises the specter of even wider disruptions from virus and worm outbreaks and highlights a growing security concern that cash machines running Windows XP and interacting with other Windows systems are vulnerable to attack. ...
    The security problems on ATM networks come as many banks worldwide are migrating off of an older generation of machines using IBM's OS/2 operating system to new systems running Windows.
    And that was just the first news story Google turned up for atm+diebold+flaws.

    There is a lot of crap that goes on in the banking industry which is not reported. Mostly because there are no laws requiring it to be reported.
  • by lightyear4 ( 852813 ) on Monday November 28, 2005 @12:54PM (#14129935)


    Maintaining a system is all about context; some environments favor Linux, others Windows.

    I've built many, many systems for many people: servers, desktops, multimedia backends, you name it. I personally use Linux/Unix, but the OS installed on each of the machines I build is by no means limited by my personal preference. Dr. Thompson makes a wonderful point here. In computing, as in life, different situations merit different approaches.



    I really wish all of the Microsoft, BSD, and Linux zealots would realize this. To each his own.

  • by 0xABADC0DA ( 867955 ) on Monday November 28, 2005 @12:54PM (#14129936)
    From the responses it sounds like he made an honest attempt at this study. I think the conclusion, however, should be that stupid admins cost a lot, so taking away things they could mess up is the key to lowering costs. If it turned out that the Windows admins actually had to do anything, I bet the results would have been just as bad or worse for Windows.
    • by phasm42 ( 588479 ) on Monday November 28, 2005 @01:02PM (#14130022)
      Maybe that was one of the conclusions of the study -- the Windows admins didn't have to do as much. This is a real-world concern.
      • Actually... (Score:5, Insightful)

        by Svartalf ( 2997 ) on Monday November 28, 2005 @01:47PM (#14130447) Homepage
        The Linux admins were artificially given much more to do and screw up than the Windows admins, if the verbiage in the paper is to be believed. They were mandated to patch much more than is realistic in a production shop. If you had to patch all the local exploits in everything Windows-related, you'd be very busy, more so than the Linux admins - but they only had to apply the Windows critical updates as MS provided them. The Linux admins were off patching everything, even if it wasn't very relevant to the servers (i.e., on a properly set up server, nobody should be ABLE to exploit local exploit possibilities in the first place...). Worse, they had the guys doing manual updates to a lot of stuff, even though it WASN'T needed.

        The study's heavily stilted to favor Microsoft and Windows- either through ignorance or malice. It'd be your call on how it got there, but it DID get there all the same.
      • Maybe the Windows admins CAN'T do as much.

        THIS is a real world concern that has been expressed many times.
    • If it turned out that the windows admins had to actually do anything

      And that's a completely valid response. If your choice of software allows your admins to do less work and perform fewer upgrades/migrations/etc. over a given timeframe... that's a good thing.

      -everphilski-
  • microsoft patches (Score:5, Insightful)

    by jonastullus ( 530101 ) on Monday November 28, 2005 @12:56PM (#14129960) Homepage
    In the Windows world, one doesn't get the alpha or beta patches, just the blessed finished product

    yeah, right!
    I won't even mention IE's security holes over the last 8 or so years (ActiveX, ...) or Outlook's bad record of keeping spam from executing malicious code (mostly through the IE engine).

    But boldly stating how much due diligence is exacted upon Microsoft patches before final release is ridiculous in the face of them frequently backfiring and leaving old or new vulnerabilities in their wake:

    http://www.hideaway.net/home/public_html/article.php?story=20020924094345962 [hideaway.net]
    http://www.infoworld.com/article/03/09/08/HNhackersjump_1.html [infoworld.com]
    http://www.eweek.com/article2/0,1895,1753511,00.asp [eweek.com]
    http://www.vnunet.com/vnunet/news/2120864/doubts-raised-microsoft-patches [vnunet.com]

    jethr0
  • by Shaman ( 1148 ) <shaman AT kos DOT net> on Monday November 28, 2005 @01:00PM (#14129996) Homepage
    ...these were highly experienced Linux admins.

    - which chose an ancient Linux distribution
    - which tried to use bleeding-edge software on an old OS software platform
    - which didn't know that glibc updates can break things
    - which apparently didn't upgrade the system first if that's what they had in mind
    - which took more than an afternoon to set up a Linux system
    - which were stymied by basic systems administration
    - which appeared to be unaware of the tools available such as webmin

    Wow. That's why I hire kids fresh out of high school. They're so much more advanced than the "experienced professionals" available to this guy.
    • by FFE4 ( 932849 ) on Monday November 28, 2005 @01:50PM (#14130470)
      Responses inline:

      ...these were highly experienced Linux admins.

      - which chose an ancient linux distribution


      Answer: SLES 8 was the most recent at the beginning of the study time period - July 1, 2004

      - which tried to use bleeding-edge software on an old OS software platform

      Answer: All the components used were available in the time-correct period of the study. For example, if they installed a component in the simulated September 2004 time period then that version was available in September 2004.

      - which didn't know that glibc updates can break things

      Answer: They did know that glibc could break things and tried to minimize the breakage (see study)

      - which apparently didn't upgrade the system first if that's what they had in mind

      Answer: Good point! The only configuration control issue was that the enterprise wouldn't upgrade the OS version until July 1, 2005. This is mainly based on our experience with companies that don't move to the latest OS version until it has had time to "bake" in the community. At that time, SLES 9 was hot off the compiler.

      - which took more than an afternoon to set up a linux system
      - which were stymied by basic systems administration


      Answer: Not sure there's anything to respond to here...

      - which appeared to be unaware of the tools available such as webmin

      Answer: Hmmm...not really sure how using webmin would have helped in this situation. They were free to use any tools they wanted though.
      • by Shaman ( 1148 ) <shaman AT kos DOT net> on Monday November 28, 2005 @02:12PM (#14130644) Homepage
        > Answer: SLES 8 was the most recent at the beginning of the study time period -
        > July 1, 2004

        True. But a second point would be to mention that SUSE is not a server distribution, meaning that its packages, etc. are not set up for gentle updates - which you found out. Red Hat, Debian, or Libranet would have been better choices.

        I have over 20 Linux servers and I didn't run into these issues. Coincidentally, I've just had my first-ever issue with updating glibc (because I went from 32 to 64 bits when I did).

        I usually do a kernel upgrade when glibc is upgraded, and reboot the system. That gives me a fresh environment.

        >Answer: All the components used were available in the time-correct period of the
        >study. For example, if they installed a component in the simulated September 2004
        >time period then that version was available in September 2004.

        Interesting. Was this possible with Windows?

        > Answer: They did know that GLIBC could break things and tries to minimize the
        > breakages (see study)

        I read the study. To me, they looked like bumbling newbies. :)

        > At that time, SLES 9 was hot off the compiler.

        *nix systems almost always upgrade incrementally. It's highly doubtful that SLES 9 would be more buggy than SLES 8. The case could be made for the opposite, and you can be sure that most of SLES 9 was venerable packages going through minor point revisions. This is just the *nix way.

        > Answer: Not sure there's anything to respond to here...

        Ah but there is. I recently resurrected an Ultra 10 SPARC box (see above GlibC issue), which is just about as non-standard as it gets for a Linux install. I was able to install it in one afternoon, which included building a custom kernel with only the components I wanted, and updating over 600 packages to their most current versions from our Debian APT-proxy (which wasn't populated with SPARC packages, sadly). I also installed a Jabber server, Apache2 with PHP/PEAR, MySQL 5.x, DJBDNS, Courier-IMAP and compiled a few packages which aren't usually in Debian, and had it operating. I also mirrored the boot drives. All in one afternoon.

        And several "experienced" Linux admins had trouble making MySQL work on SUSE?
        • by Svartalf ( 2997 ) on Monday November 28, 2005 @02:31PM (#14130854) Homepage
          "And several 'experienced' Linux admins had trouble making MySQL work on SUSE?"


          To play devil's advocate for a moment, how do we know you're past just "experienced" and deep into the Wizard or Guru realm of administration or programming? (I know, I know, but he's going to flip that one out all the same... I'd be legitimately tarred with that brush in his response... >:-))

          Realistically, though, you're right- I have issues with all of this. They picked distros that would most likely have issues with things. They picked rules that required a lot of patching on the Linux side, but only had the normal set of updates on the Windows side- a lot of patching that simply wasn't needed and didn't have an analog in the Windows world. They picked a stilted set of conditions that honestly would have mandated a distribution version update- in any shop for any OS you could name in the real world.

          I have trouble buying into this - and it's to the point that I'm being forced to re-work my own stuff for my startup because I was referring to other papers by them. I can't trust the data here as far as I could pick the Doctor up and throw him, so everything from this consultancy firm is now suspect.
  • by Korexz ( 915405 ) on Monday November 28, 2005 @01:00PM (#14129997) Homepage
    How long will this argument go on? Apples and oranges, I say. More marketing propaganda to buffer the bottom line. Technology will only move forward when we stop arguing over what is better and start working towards a common goal.
  • Every time we install a new piece of software, ... ,we tacitly accept that this software is likely to contain security flaws and can be an entryway into your system; NOW are you sure you want to install it?

    Except I'd expect higher quality programming out of a company designing security software.

    Like your average anti-virus vendor, for example. I find it a little ridiculous that virus writers eventually just started targeting buffer overflows, etc. in anti-virus software.

    I think what we're seeing is the overa

  • "All of our studies are written as if they will be released publicly BUT it is up to the sponsor if the study is publicly released."

    My understanding is that the sponsor will publish only favorable studies. Do they have to choose before or after? Let's order a few studies and publish only the "good" ones.
  • How is it that Diebold can make ATM machines that will account for every last penny in a banking system, but they can't make secure electronic voting machines?

    The reason is that Diebold is not required by any law or regulation to do so. The banking industry and financial networks demand and regulate the security and journalling of transactions. If you don't follow the rules, they don't let you run transactions.
    The "voting industry," on the other hand, has yet to regulate or stringently demand minumum st

  • by TheConfusedOne ( 442158 ) <the@confused@one.gmail@com> on Monday November 28, 2005 @01:13PM (#14130120) Journal
    From the study:
    Beginning at Milestone 1 however, some upgraded components were out of support from SLES 8 and updates for those components had to be obtained from the package distribution sites. As of Milestone 1, MySQL patches were obtained from the MySQL distribution site and as of milestone 2, glibc and directly related packages were maintained through manually applying SLES 9 patches.


    If we look at the history of SuSE, we see that Novell's big involvement was in the 9.0 world. Right from the get-go we can see that forcing the administrators to remain on SLES 8 created problems that would be considered a showstopper in a regular environment, especially if you're talking about buying components along with their required environments. The fact that you even have the option of applying SLES 9 patches to an SLES 8 environment is something that you can't do in the Windows world.

    What were the "third-party components" installed on the systems? The following dodge - "The specific 3rd party vendors are not disclosed because the focus of the study is the methodology and not a specific component." - is complete bull if you're crowing about the repeatability of your experiment. How can the experiment be repeated if we don't know the items? (It would be interesting to know whether those components even supported SLES 8 at the time of their installation.)

    Also, why this requirement for the components: "Support on both Windows and Linux" when your environments are obviously not equivalent (IIS/ASP versus LAMP instead of J2EE)?

  • by benjamindees ( 441808 ) on Monday November 28, 2005 @01:15PM (#14130135) Homepage
    [At best, your study seems to show that the GNU/Linux distribution you selected was not particularly good at this task. But why does that show that the ``monolithic" style of Windows is better per se than the ``modular" style of GNU/Linux distributions?]

    That pretty much sums up the entire study. This isn't really a test of Windows versus Linux, but a test of "modular" operating systems versus "monolithic" operating systems. And, unfortunately, the study didn't even do a good job of testing that.

    Linux happens to include several distributions, some more "monolithic" than "modular". Unsurprisingly, the "monolithic" versions are usually those used by "enterprises", such as RedHat and SuSE. The "modular" operating systems, such as Debian, are almost universally ignored by businesses, though you will find IT personnel who swear by them. There are Linux distributions that adhere to the Unix philosophy, and there are those that try to emulate Windows and Apple in the name of "ease of use". Hell, even some of SCO's products are more "modular" than commercial Linux distributions.

    By requiring "enterprise" sysadmins and a Linux distro that is geared towards "enterprises", the study preselected a Linux competitor with which Windows can easily compete: admins (probably used to using Windows) using Linux distros that attempt to emulate Microsoft's "monolithic" operating system. By virtue of the fact that Microsoft has been building "monolithic" operating systems for at least a decade longer than any of these Linux companies even existed, that the vast majority of Linux components are designed to be used instead in a "modular" fashion, and that most "enterprises" wouldn't know proper system administration from their own asses, anyone can see that this test is designed to fail.

    I've spent the last one and a half years doing this exact same study. Guess what I found? You can't treat "monolithic" operating systems, RedHat, Fedora, SuSE, Windows, as though they were "modular". Though doing so is easier with Linux, it's not recommended, and distro makers such as RedHat explicitly warn against doing so. Any IT guy learns this lesson about six months into his career. You either find a truly "modular" OS, such as Debian, or a good Unix, or you very carefully buy products made only by Microsoft or by companies joined at the hip with Microsoft. That is, if you choose modularity, you choose Unix. If you choose out-of-the-box integration, you choose Apple or try to navigate the Microsoft "ecosystem", and you pay monopoly rents for doing so. The people who choose RedHat and SuSE, and expect it to be Windows at this stage, are kidding themselves.

    The real headline should be: "Linux admins tasked with using Linux in the same retarded-ass way as Windows, fail." Which should be no surprise.

    But the important thing to take out of this is that it is neither technical necessity nor user requirements that make operating systems less "modular", and thus less flexible, less powerful, and ultimately less valuable. It is the commercial requirements of the operating system manufacturers themselves. It is the fact that the OS is commercial that makes it difficult to upgrade, impossible to integrate, and expensive to maintain. The evolution of commercial Linux distributions towards the "monolithic" model of Microsoft, and the concomitant decline in their quality, has proved this beyond a shadow of a doubt. At most, this study only serves to highlight what any competent Linux admin already knew.
  • FFE4: What kind of credibility do you think you have, being a Microsoft MVP? [securityinnovation.com]
  • by arevos ( 659374 ) on Monday November 28, 2005 @01:21PM (#14130199) Homepage
    The problems the study reported with Linux appear to all be due to an incompatible, unnamed 3rd-party software package. Surely, then, all this study can conclude is that the 3rd-party software used was incompatible with SLES? And if not, why not?
  • Followup question (Score:3, Insightful)

    by cavemanf16 ( 303184 ) on Monday November 28, 2005 @01:28PM (#14130263) Homepage Journal
    From one of the answers to a question:

    "All of our studies are written as if they will be released publicly BUT it is up to the sponsor if the study is publicly released. The vendor knows that they're taking a risk. They pay for the research either way but only have control over whether it is published, not over content. So if their intent is to use it as an outward facing piece, they may end up with something they don't like. Either way, I think it's of high value to them. If there are aspects of the results that favor the sponsor's product, in my experience, it goes to the marketing department and gets released publicly; if it favors the competitors product it goes off to the engineering folks as a tool to understand their product, their competitor's product, and the problem more clearly. Either way, we maintain complete editorial control over the study and there is no financial incentive for us if it becomes a public study or is used as an internal market analysis piece. The methodology has to be as objective as possible to be of any real value in either case."

    But isn't this part of the problem with vendor-funded studies? (Maybe it's THE problem)

    This WOULD be fine if it were just science for the advance of knowledge, but in the case of studies of *products* somebody somewhere is looking to use the information to make a product purchasing decision, or to promote a new product. In other words, someone is looking to either save money or make money using the results of the study. But those two goals conflict. For the purchaser, they would like to know both the pros and the cons of all studies involving that product. For the seller, they want to know both the pros and cons of their product, but only want their consumers to know the pros, and minimize the cons as much as possible. Both of these positions make complete sense... except for the group conducting the study. You have two different types of customers that you are trying to satisfy with these studies, but only one group is paying you to do the study - the seller. Hence, the results ARE skewed in favor of the organization purchasing the study, because they maintain control over whether the study gets released to the purchasers of that seller's products or not.

    In this case, Microsoft has a win-win proposition, whereas for the rest of us, the purchasers, it's a win-lose proposition. Only if the study is positive for Microsoft will we be given more information necessary to help us save money. But if it's a study that puts Microsoft in a bad light, we lose because we don't get to see such information to make a purchasing decision, and may therefore make an incorrect decision.

    I'm still skeptical that these "industry supported" studies are fully worthwhile to us, the purchasers.

  • by cooldev ( 204270 ) on Monday November 28, 2005 @01:45PM (#14130433)

    We say, sure, BUT we have complete creation and control of the methodology, it will be reviewed and vetted by the community (end users and independent analysts) and must strictly follow scientific principles... All of our studies are written as if they will be released publicly BUT it is up to the sponsor if the study is publicly released.

    While I understand the reasoning, I don't think this should be represented as following scientific principles. In one of his most famous speeches, Cargo Cult Science [brocku.ca], Richard Feynman specifically called out this type of research as being problematic:

    "One example of the principle is this: If you've made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results."

    "I say that's also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don't publish such a result, it seems to me you're not giving scientific advice. You're being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don't publish at all. That's not giving scientific advice."

    IMHO the open source community is just as bad on average, if not worse. You'd better believe they have an agenda, and they often aren't held to the same level of scrutiny as corporations, who have to face up to investors, competitors, governments, and "lottery ticket" lawsuits (especially Microsoft these days). The solution? We need less one-sided publishing of studies. We also need more studies overall; they naturally conflict and are situationally dependent, but together they would paint a better picture of the state of the world.

    Of course, funding for unbiased studies that will be published regardless of outcome is probably hard to come by.

    • by FFE4 ( 932849 ) on Monday November 28, 2005 @05:47PM (#14132734)
      This is *really* interesting. It gets to the "philosophy" of research as opposed to this study itself - we talk about this internally all the time, and about how we can build an industry infrastructure to support this Feynman-esque research. Here's what I'd love to do: get a group of industry folks together on all sides of the fence (so there's no question of funding), agree to some ground rules and a methodology, and then also agree that the work will be published no matter what. To some degree that's what some of the consumer review groups do, but I don't think we have a *real* equivalent in the IT world for the really big stuff.

      This gets down to the question of how we could set up something truly unbiased (perceived or real) in the Feynman sense of the word that would also work as an economic model. It seems like a consortium of consumers (organizations that use technology as opposed to selling it commercially) who do not have a vested interest in the outcome would be ideal. It would be great to get some responses to this thread with some suggestions. Again, the premise is simple, and funding from a fairly neutral third party like the government is one thing, but how would the IT community do something where multiple participants in the user world would be willing to fund it, or multiple vendors, as a group, would be willing to take that risk?
  • by Julian Morrison ( 5575 ) on Monday November 28, 2005 @01:56PM (#14130516)
    A major possible fault of subject-is-buyer studies is the possibility of bias by selective publication. Do ten thousand completely fair studies, publish the favourable results, and bury the rest. Or, in a similar but preemptive procedure, focus the study's remit on a known strength which is in fact surrounded and dwarfed by (un-studied) weaknesses.

    In this case the researcher may not actually be methodologically at fault at all. How did you protect your study from this kind of externally induced bias?
  • by crulx ( 3223 ) on Monday November 28, 2005 @02:01PM (#14130552)
    Many of us have several questions about the level of incompetence displayed by these Linux admins. From the choice of distros to the botched installation of glibc, they made egregious errors that would have sunk ANY startup that they were intended to help set up. And given your knowledge of Linux from your home use, I think you know this.

    Do you see this as a credible challenge to your study?

    Can we talk with these supposed "admins" to gain insight into why they behaved so incompetently?

    And given that you don't have enough admins for the central limit theorem to apply, how do you feel your study applies in a general way to anything at all?
  • Data Points (Score:3, Interesting)

    by quantaman ( 517394 ) on Monday November 28, 2005 @03:00PM (#14131121)
    A lot of people are trying to poke holes in the study itself, though it seems to have been fairly well implemented.

    I did, however, notice two interesting bits that cause me to put a lot less importance on the results.

    With three people there's certainly likely to be a lot of variability and to get some conclusive results, I'd love to get a huge group of administrators across the spectrum in terms of experience. I'd also love to do it across multiple scenarios, beyond the ecommerce study.

    And a little later

    it is up to the sponsor if the study is publicly released

    Simply fund a lot of small legitimate studies with high variance and publish only the results that fit your case. In a way it's like one big, badly done study where someone throws out all the data points that don't fit their hypothesis; for all we know, he, or another researcher, might have done a dozen other studies which came out in favour of Linux and were subsequently ignored. The research itself is all completely legitimate, but Microsoft creates a false overall conclusion through selective publication. Perhaps companies who fund the studies should be held to the same ethical standard as those who do the research?
