Ask Ingo Molnar About TUX

Ingo Molnar is the guy behind the TUX Web server, which produced those astounding SpecWeb results reported here last week. He's agreed to a Slashdot community interview, so ask away at the man who created what appears, by some measures, to be the world's most powerful Web server at present. Please make Ingo's job easier by first reading the LinuxToday articles commenting on the SpecWeb numbers and the background of how they were achieved (here's the first story, and here's the second), as well as Ingo's informative post in the initial Slashdot story, and the SpecWeb results themselves. The moderators may have no mercy otherwise.
  • by cxreg ( 44671 ) on Tuesday July 11, 2000 @09:01AM (#942326) Homepage Journal
    You killed my penguin.

    Prepare to die.
  • by FascDot Killed My Pr ( 24021 ) on Tuesday July 11, 2000 @09:03AM (#942327)
Given that part of the explanation for Tux's impressive performance is the use of a kernel-based httpd server, how stable and secure do you expect it to be?

    BTW, this isn't a flame. I'm sure it's better than IIS/NT on both fronts--but is it better than Apache/Linux, even after factoring in the speed?
    --
  • by Anonymous Coward on Tuesday July 11, 2000 @09:03AM (#942328)
Will there be versions of Tux for other operating systems, such as FreeBSD, OpenBSD, BeOS, MacOS, Windows, and others?

  • by mpost4 ( 115369 ) on Tuesday July 11, 2000 @09:05AM (#942329) Homepage Journal
What is the reason for moving the HTTP server into the kernel? I do not see any benefits to it; could you enlighten me about them?
  • by Matts ( 1628 ) on Tuesday July 11, 2000 @09:05AM (#942330) Homepage
    Would you ever advocate using TUX as a real life web server?

    Think of a high availability environment, where you are building a highly dynamic application such as an e-commerce system. Would you even think of using TUX in such a situation, or would you go with the far more sensible Apache + mod_backhand + (pick one of mod_perl, php, or servlets)?

The problem is, it's all too easy to generate web server software that can withstand a high "hit" rate. But the pressures on web servers, and particularly web developers, lie in completely different areas: time to market, ease of development, and configuration capability.
  • What I would like to see is some serious testing of all current Unixes doing real-world tasks, focusing not only on their performance, but on their stability AND load bearing... (who cares if something is fast if, as soon as a great load comes, it crawls?)


    Something like THIS [innominate.org]

    FreeBSD... ;)
  • by BoLean ( 41374 ) on Tuesday July 11, 2000 @09:08AM (#942332) Homepage
    Are there any plans for or existing features in Tux that allow for adding custom modules such as WebDav? How about custom protocols?
  • by Anonymous Coward on Tuesday July 11, 2000 @09:09AM (#942333)

This is version 1 of the web server, and it has proven itself to be pretty nifty when it comes to serving both static webpages (through a kernel-level httpd) and dynamic webpages. Do you see TUX getting leaner and faster as time wears on, past versions 2, 3, ..., or do you see it getting bogged down in mostly unnecessary cruft and bloat?

Will there be a way to port an existing Apache configuration across to the TUX configuration? How about IIS, Netscape, Zeus, etc.? Will TUX have the option of a GUI setup screen for those who don't like the command line? Will TUX have a simple installer?

  • by Anonymous Coward
    There's a bad link in the story. "Ingo's informative post" in the initial Slashdot story is linked to slashdot.org [slashdot.org].
  • by Signal 11 ( 7608 )
    Given that a benchmark as popular as this will tend to have vendors adding, uhh, "features" to make their webservers run faster for the benchmark, how did you manage to beat them anyway? Did you modify the TCP/IP stack? DoS the other servers during the test? Connect a compulsator to a large coil? Slashdot is dying to know.
  • by 91degrees ( 207121 ) on Tuesday July 11, 2000 @09:11AM (#942336) Journal
This is based very much on the Linux kernel, so I presume that it couldn't be ported easily to different kernels. However, are there any major optimisations dependent on the use of an x86 architecture?
  • by nd ( 20186 ) <nacase AT gmail DOT com> on Tuesday July 11, 2000 @09:12AM (#942337) Homepage
What is the target purpose of TUX? From the benchmarks, it appears to be for very high-traffic sites. I ask because I'm curious whether that is its specific purpose, or whether it serves well in other areas too (i.e., Apache-like flexibility)?

    This is important because it will also help indicate what Red Hat's stance will likely be in either replacing Apache with TUX or just including it in their Professional distributions.
  • It works with Apache but is TUX generic enough to be interfaced with another server?
  • Does/Will TUX provide any sort of load balancing for a cluster of heterogeneous TUX servers?
  • by Anonymous Coward
    Tim, after Windows/IIS's outstanding performance in the Mindcraft tests, why weren't we given the opportunity to pick the brains of the Microsoft engineers? Come on. They're smart people too.
  • by 11223 ( 201561 ) on Tuesday July 11, 2000 @09:18AM (#942341)
You mentioned in the second Linux Today article that you intend to integrate TUX with Apache. However, Apache has always been a cross-platform server and is heavily used on *BSD and Solaris. Do you feel that this integration will undermine the portability work of the Apache team, or will it simply provide an incentive for web servers to be running Linux? If you intend to encourage people to move to Linux, can an idea similar to TUX be applied to an SQL server to make up for the speed deficit between Linux SQL servers and Microsoft SQL?
  • Context switches are expensive.
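    That one-liner is easy to put a number on. Here is a minimal sketch in C, assuming Linux/POSIX, of the classic pipe ping-pong microbenchmark (the same trick lmbench uses): two processes bounce a byte, forcing a context switch on every read, and a rough per-switch figure (pipe overhead included, so treat it as an upper bound) falls out of the total time:

        /* Rough context-switch cost: two processes bounce one byte over a
         * pair of pipes, so every read() blocks until the other side runs,
         * forcing a switch per hop -- 2*N hops in total. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/time.h>
        #include <sys/wait.h>

        int main(void)
        {
            int p2c[2], c2p[2];                /* parent->child, child->parent */
            char b = 'x';
            const long N = 100000;
            struct timeval t0, t1;

            if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); exit(1); }

            if (fork() == 0) {                 /* child: echo every byte back */
                for (long i = 0; i < N; i++) {
                    read(p2c[0], &b, 1);
                    write(c2p[1], &b, 1);
                }
                _exit(0);
            }

            gettimeofday(&t0, NULL);
            for (long i = 0; i < N; i++) {     /* parent: send, wait for echo */
                write(p2c[1], &b, 1);
                read(c2p[0], &b, 1);
            }
            gettimeofday(&t1, NULL);
            wait(NULL);

            double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
            printf("~%.2f us per switch (pipe overhead included)\n", us / (2.0 * N));
            return 0;
        }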
  • by konstant ( 63560 ) on Tuesday July 11, 2000 @09:20AM (#942343)
    You've said before on slashdot that TUX supports a number of usability features, although not the full complement. Could TUX have been made faster at the job of serving static pages if you had ripped out every single extraneous bit? What if TUX had been small to the point of being basically unusable in the real world, but served pages faster even than it does now?

    The question is, would that have been a fair benchmark?

    If your answer is No, then the followup question is, how is that materially different from what you *did* do?

    -konstant
    Yes! We are all individuals! I'm not!
  • With the advent of TUX, would Linux win if the Mindcraft test were run again? Or is the Mindcraft test still slanted towards Microsoft's operating systems?
  • Moving web serving into the kernel is very beneficial, but only for static content. If you think through what happens on a typical Linux/Apache server when a static page is requested, there's lots of room for performance improvement. This is basically because both communication paths for the Apache server (the socket communication to the user and the I/O to the disk file) go through the kernel separately. This means the data is (redundantly) read from disk by the kernel, copied to userspace for Apache to see, just so that Apache can copy it back to kernel space through socket calls to send it out a TCP port basically unaltered. Doing this in the kernel means you can achieve the same result with a single copy from the disk driver to the TCP stack within kernel space.

    Note that I haven't looked at TUX, and what I've said above is a general explanation of the type of concept that TUX is about, therefore I might be missing a few technicalities.
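    To make the double copy concrete, here is a minimal sketch of the user-space path just described, assuming POSIX and with error handling trimmed; the file data crosses the user/kernel boundary twice per buffer-full (the sendfile() comment further down the thread shows the single-copy alternative):

        /* The user-space static-file path: read() copies page-cache data
         * into buf (kernel -> user), write() copies it back into socket
         * buffers (user -> kernel). Two copies per buffer-full. */
        #include <unistd.h>
        #include <fcntl.h>

        static void serve_file(int client_fd, const char *path)
        {
            char buf[8192];
            ssize_t n;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                return;
            while ((n = read(fd, buf, sizeof buf)) > 0)   /* copy #1 */
                write(client_fd, buf, n);                 /* copy #2 */
            close(fd);
        }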

  • by chuckfee ( 93392 ) on Tuesday July 11, 2000 @09:33AM (#942347)
    In the second LW article, Ingo writes:

    "So in our opinion TUX is a new and unique class of webserver, there is no prior art implementing such kind of 'HTTP stack' and 'abstract object cache' approach. It's, I believe, a completely new approach to webserving. Please read this comment too, which (I hope) further explains the in-kernel issue."

    Maybe I'm paranoid, but "new and unique" and "prior art" in the same sentence mean patent filing to me.

    Are there plans to seek patent protection for TUX? As I recall, the RTLinux folks got a patent for RTLinux's prioritization stuff.

    Is a patent in the works?

    Regardless, TUX is an interesting idea and I hope to try it out soon.

    --chuck
  • by 11223 ( 201561 ) on Tuesday July 11, 2000 @09:37AM (#942348)
I think the correct link is here [slashdot.org], though he answered a bunch of questions in that story. If you browse through his user info [slashdot.org], you'll be able to see all of the informative posts he's made recently...
  • by Chainsaw ( 2302 ) <jens...backman@@@gmail...com> on Tuesday July 11, 2000 @09:39AM (#942349) Homepage
Unix programmers seem to dislike using threads in their applications. After all, they can just fork() and run along instead of using the thread functions. But that's not important right now.

    What is your opinion on the current thread implementation in the Linux kernel compared to systems designed from the ground up to support threads (like BeOS, OS/2 and Windows NT)? In which way could the kernel developers make the threads work better?

  • by Mr. Sketch ( 111112 ) <<moc.liamg> <ta> <hcteks.retsim>> on Tuesday July 11, 2000 @09:40AM (#942350)
What do you see as the primary market for TUX? With the ability to handle such high traffic, it would be suitable for busy e-commerce sites, but I'm sure there will be more than a few people who are wary about an HTTP server in their kernel. However, the embedded market might see this as a good thing, allowing web-based configuration, monitoring, etc. of embedded devices. I would think there would be more people willing to just throw the webserver into the kernel to save space if nothing else, but of course this still raises the security/stability concern.

    So the main question is really just where and in what applications do you see TUX in the future?
  • How would TUX perform using CGI/Servlets/PHP/etc. compared to Apache or IIS? The ability to serve static pages fast is not that useful in the real world, as all the sites that get really big hits-per-second are those with dynamic content (Yahoo, Slashdot, Amazon.com, etc.)


    "Evil beware: I'm armed to the teeth and packing a hampster!"
  • by ErMaC ( 131019 ) <ermac@@@ermacstudios...org> on Tuesday July 11, 2000 @09:44AM (#942352) Homepage
    How will the TUX Webserver integrate with RedHat's Linux distributions? Will RedHat create a special distribution with an identical setup to yours? Will RedHat start releasing more specialized distributions, preferably ones more suited to a secure server environment but focused on performance like your setup was?

As it is, RedHat seems too insecure and bloated for a streamlined server environment. Ideal would be installation options where I could say "This server will do these 3 things (i.e. DNS, Mail, HTTP), so make it suited for that and nothing else." This kind of flexibility would be a HUGE boon to the server market, giving customers a high-performance machine running TUX + Apache that was secure and did the functions they needed it to.


Yeah, that was a long question; you can chop off the last paragraph if you like. Hehe, insecure and bloated, can we say WinNT/2K?

    "I want to get more into theory, because everything works in theory." -John Cash
  • 1. They're probably under NDAs of some kind.
    2. Free software thrives on having forceful, interesting personalities leading the projects. Those types of people tend to be more interesting to interview, I'd imagine.
    --
  • Ah! The Windows approach to software integration. If you can't beat them, join them!

    Microsoft started a war, and they can't expect that others won't give Microsoft back what they dish out.

    Anyway, I wonder what security holes will appear in this kernel level software!

    Could be a problem if it isn't done right. Luckily with Linux you've got alternatives if you want to run a different server, so you can take out anything you don't like.

    People moaned about incorporating the web browser into the OS, but this is incorporating the web server - surely Microsoft have a cause to moan about this??!?!?

    Microsoft has no cause to complain. Nobody forces you to buy Linux when you buy a new computer like most people are forced to buy Windows when they buy a new computer. Also, since you get source code for Linux, it is easy to remove the web server if you don't like it. Microsoft claims (and they did make it really difficult) that it is impossible to remove Internet Exploder from Windows.

  • by JohnZed ( 20191 ) on Tuesday July 11, 2000 @09:50AM (#942355)
    I have a few questions about TUX's caching system. Before I go any further, I want to say that I'm incredibly impressed by the results. I've been following specWeb99 for a while and have been wondering when someone would manage to build a great dynamic cache like this one. I hope it'll get the wide acceptance it seems to deserve.
    First, it seems that basically the entire test file set was loaded into memory ahead of time for use by TUX. How adaptable is TUX to more dynamic, limited-memory environments in terms of setting cache size limitations, selectivity (e.g. "cache all .GIFs, but not .html files"), and expiration/reloading algorithms?
Second, can a TUX module programmer modify the basic TUX commands, or do they always do the same thing? For instance, if I were adapting TUX to work with a web proxy cache, I'd want TUX_ACTION_GET_OBJECT to actually go out over the network and do a GET request if it couldn't find a requested object in the cache. You can imagine lots of other circumstances where this would come up as well. (A hypothetical sketch of such a module loop follows this comment.)
    Third, is it possible to execute more than one user-space TUX module at one time?
    Fourth, when can we play with the code?
    Thanks a lot!
    --JRZ
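    For readers trying to picture the second question, a purely hypothetical sketch of a user-space module loop follows; every tux_* name and the request structure are invented for illustration (with tiny stubs standing in for the kernel side so the sketch runs) and are not the actual TUX module API. Only the shape reflects the design under discussion: ask the in-kernel object cache for the object, and fall back to the module's own logic, such as a proxy fetch, on a miss.

        /* HYPOTHETICAL sketch of a user-space TUX module loop. Every
         * identifier here (tux_request_t, tux_get_request, ...) is
         * invented for illustration; the stubs below simulate the
         * kernel side so the sketch compiles and runs. */
        #include <stdio.h>
        #include <string.h>

        typedef struct {
            char url[256];                       /* requested object name */
        } tux_request_t;

        /* Stub "kernel": hands the module exactly one pending request. */
        static int pending = 1;
        static int tux_get_request(tux_request_t *req)    /* block for work */
        {
            if (!pending--) return -1;
            strcpy(req->url, "/index.html");
            return 0;
        }
        static int tux_get_object(tux_request_t *req)     /* cache lookup */
        {
            (void)req;
            return -1;                           /* pretend it's a cache miss */
        }
        static void tux_send_object(tux_request_t *req)   { printf("hit: %s\n", req->url); }
        static void tux_finish_request(tux_request_t *req) { (void)req; }

        /* On a miss, a proxy-style module would GET the object from the
         * origin server itself -- the behaviour the question asks about. */
        static void proxy_fetch_and_reply(tux_request_t *req)
        {
            printf("miss: fetching %s from origin\n", req->url);
        }

        int main(void)
        {
            tux_request_t req;
            while (tux_get_request(&req) == 0) {
                if (tux_get_object(&req) == 0)
                    tux_send_object(&req);       /* serve from object cache */
                else
                    proxy_fetch_and_reply(&req); /* module's own miss logic */
                tux_finish_request(&req);
            }
            return 0;
        }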
  • The hardware differences somewhat devalued the original SPECWeb benchmarks.

    If the pointy-haired stupids see this, they just say, "Oh but look, they slugged the results by running Linux on faster kit"

    If the smarter people see it, they'll be kept too busy explain-o-LARTing to the stupids how irrelevant this is to be able to get full use of it in the Great Jihad against Redmond.

    ... and if M$oft's PR Weevils see it, they'll probably whine to the DoJ about unfair competition.

  • I saw those words and had the same thought. Of course, this would be a very important patent (if issued) and, depending on Red Hat's actions, could leverage cross licensing. I would guess that you can't speak for what Red Hat's actions would be should this be patented, but the question nonetheless is how Red Hat would use such a patent, and how it would balance its needs (against its competitors) vs. the needs of the entire Linux community.

  • You might want to re-read the second LinuxToday article before you reply. Here's what Ingo Molnar has to say on the subject:

    TUX is a very different thing. It's:

    - an event-based HTTP protocol stack providing encapsulated HTTP services to user-space code;

    - an object cache, where objects can be combined with dynamic content freely. You can witness this in the SPECweb99 submission: the SPECweb99 TUX dynamic module (every vendor has to write a SPECweb99 module to serve dynamic content) 'embeds' TUX objects in dynamic replies. The TUX object cache is not a dumb in-memory HTTP-reply cache; it's a complex object cache providing async disk I/O capabilities to user-space as well. User-space application code can request objects from the cache and can use them in dynamic (or static) replies;

    - a full-fledged webserver providing HTTP/1.1 keepalive, CGI, logging, transparent redirection, and many other features. So in our opinion TUX is a new and unique class of webserver; there is no prior art implementing this kind of 'HTTP stack' and 'abstract object cache' approach. It's, I believe, a completely new approach to webserving.

    As you can see, Ingo Molnar seems to think that TUX is beneficial for both static and dynamic content. If you had read the articles in question (or if you knew anything about SPECweb99, which tests both static and dynamic content), you would realize that your statement is completely unfounded.

    There is a kernel-space static-content webserver called khttpd, but TUX is something else entirely.

    An interesting side note is that TUX would appear to be quite innovative, in many ways it is a whole new concept in web serving. It will be interesting to see commercial vendors chasing the tail-lights of the Free Software world in this particular regard.

  • by WillAffleck ( 42386 ) on Tuesday July 11, 2000 @09:53AM (#942359)
I know it's a sleazy thought, but the reason why Red Hat is the most popular distro has more to do with marketing and deals than with the code itself.

    So my question is this - how will TUX market itself and what kind of deals are you looking at making so that it becomes more widely adopted?

    I don't think we need specifics, just some of the general methods you plan to use for marketing and some probable categories of companies you are looking at making deals with.

    [yeah, I know, free bheer - but it's a good question]

  • I hate to be paranoid, but how much do you want to bet that MS is trying to find a configuration right now to show that Win2000 is faster than Linux with TUX?

    When they do, and I'm sure that eventually they will, we will hear about it from Mindcraft.
  • Firstly, I'm no Linux expert or OS expert.

Is having the web server in the kernel a security hazard? Does it increase the potential for damage if there is something like a buffer-overflow exploit in there too? There are good reasons why the muggles and their IP-tomfoolery are normally kept out of the kernel.

Can you imagine the uproar if M$oft announced a new IIS embedded in kernel32.dll?

  • Just in case this is not a troll (which I believe it is), you took that quote out of context. At the time, when everyone was giving their source away, it was seen as very counter to everything that programmers then believed in; thus, in the idiom of that period, it was called audacious. It is common now, and an industry is in place.
    Molog

    So Linus, what are we doing tonight?

  • by zorgon ( 66258 ) on Tuesday July 11, 2000 @09:56AM (#942363) Homepage Journal
I've heard that one of the reasons for Windows' historical instability is that user applications are permitted (nay, even encouraged) to corrupt the kernel, whereas in a typical vmunix implementation this is not allowed. I.e., my bad calls to free() result in core dumps, not BSoDs... But here is a Linux application program right there with hooks in the kernel, and not only that, it is hooked up to the network! Is Ingo a) Godlike or b) Nuts?

    WWJD -- What Would Jimi Do?

  • Probably because the last thing that we needed was an interview where the questions are coming from a group of really angry, biased, emotional people. The questions that would have made it probably would not have been constructive.

    I know that I would have fired off a couple of nasty questions, and the moderators are human just like the rest of us.

    That and the fact that I don't think that they would have agreed.
  • With the ability to handle such high traffic, it would be suitable for busy e-commerce sites

eCommerce sites don't grind to a halt because they can't serve enough static pages, but because they can't grind through the back-end processing fast enough (whether that be CGI Perl or Servlets).

    OK, so some of them fall over because they've got their heads so far up their open sources that using a non-transactional database has become such a point of religious dogma that they won't go to any more sensible platform...

  • Oh, in case you are wondering, Mindcraft did in fact help out with the SPECweb99 tests. The information on how their machine fared [spec.org] in the test is with the rest of the results [spec.org].
    Of course, it did only have one NIC instead of 4.
  • Did you try to become a world-famous programmer? Would you be this famous if you had not released your code under the GPL?
  • by Caballero ( 11938 ) <daryll@@@daryll...net> on Tuesday July 11, 2000 @10:08AM (#942368) Homepage
    TUX includes a variety of kernel and apache changes. Can you give a rough measure of how each of the changes improved the http performance? I'm interested in the amount of improvement as well as why it improved performance. Do those particular changes have negative impact on the performance of other applications?
  • You don't have to be quite so scathing. I think I fully disclaimed my comments appropriately. I know that TUX isn't khttpd. However, you will note that TUX does concede that khttpd was an important learning experience on the way to TUX. The other "dynamic" features are nice, and I'm sure they are great, but I'm betting the primary performance benefit of TUX comes from the same basic concepts as khttpd. I'm not dissing TUX by any means, I think it sounds great.
  • Most powerful = fastest?
  • Sorry, this is off topic (I got to it via an article mentioned in the story), but:

    Windows User Rant [linuxtoday.com] (and quote here:)

    "Aahh, see you cannot compare Win2k and Linux because they do not have the same drivers, one OS might perform better on some hardware, and the results would never be consistent. So you linux freaks stop trying to prove linux is better. Even if Windows has license cost its for a reason, its made by a commercial company (oh you've never heard of it? I'm sorry then) and they need to make money to make more software & profit. Besides Windows is done by professional coders, some of Linux is done by hobby-programmers. If you want to argue about the internals of Windows give me a ring at my ICQ # at 31546029. I've had enough of this Linux crap, stupid people trying to force their choices on Windows users."

This diatribe should show that immaturity is one thing the Linux community definitely does NOT have a monopoly on. Why does the press of all sorts mention the rudeness of Linux advocates, yet fail to do the same for Windows advocates? This is hardly unusual. Any discussion on ZDNet or elsewhere features similar mindless rants, yet they are ignored.

Perhaps the next time some 'journalist' mentions how Linux advocates act, they could merely be sent the URL and quote of something like the above to prove that it works both ways. At the same time, send some of the essays written by ESR, Linus, Alan Cox, etc, etc, etc, etc, etc to show that there are an equally large (if not greater) number of intelligent, rational, well-meaning Linux advocates.

    And if that doesn't work, we could just use this guy's ICQ number, find out what his 'puter's IP is, and crack away>;)

  • by / ( 33804 ) on Tuesday July 11, 2000 @10:20AM (#942372)
    Are you afraid that if a particularly buggy version of TUX slips out the door and trashes people's systems and loses valuable data, that a certain angry penguin [tamu.edu] will open a whole can of whoopass on you for defamation of character? Are you investing in reserves of herring and icecubes in anticipation of this event? Perhaps an adapted Ursus anti-bear suit [slashdot.org]?
  • You deserve a beer. I'll send you some. What kind do you want?

    (Assuming you're not like Raster and would rather have spirits....)
  • by dweezil ( 116568 ) on Tuesday July 11, 2000 @10:21AM (#942374)

You appear to have taken an "architectural" approach to designing TUX, so I have some architectural questions.

    1. The choice of a kernel space implementation is probably going to be a controversial one. You suggest that HTTP is commonly used enough to go in the kernel just as TCP/IP did years ago. What performance or architectural advantages do you see to moving application protocols into the kernel that cannot be achieved in user space?
2. What is your approach to concurrency? In particular, you refer to "event driven": what do you mean by that, and what did you choose as the core of your event engine? Also, how do you handle threading to scale on SMP machines? (See the sketch after this list.)
    3. Are there any plans to generalize the infrastructure elements of TUX so that other protocol servers can take advantage of the TUX architecture and also go fast as hell?
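    To make question 2's "event driven" concrete, here is a minimal sketch, assuming POSIX sockets: a single thread blocks in poll(2) and multiplexes the listening socket plus every client connection, instead of dedicating a process or thread to each client. It illustrates the general pattern only, not TUX's actual engine (an echo loop stands in for HTTP handling):

        /* Minimal event-driven server: one thread multiplexes the
         * listening socket and every client with poll(2). */
        #include <poll.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        #define MAXFDS 1024

        int main(void)
        {
            struct pollfd fds[MAXFDS];
            int nfds = 1;
            int lfd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in a = { .sin_family = AF_INET,
                                     .sin_port = htons(8080),
                                     .sin_addr.s_addr = htonl(INADDR_ANY) };

            bind(lfd, (struct sockaddr *)&a, sizeof a);
            listen(lfd, 128);
            fds[0] = (struct pollfd){ .fd = lfd, .events = POLLIN };

            for (;;) {
                poll(fds, nfds, -1);                   /* sleep until any fd is ready */

                if ((fds[0].revents & POLLIN) && nfds < MAXFDS) {
                    int c = accept(lfd, NULL, NULL);   /* new client connection */
                    fds[nfds++] = (struct pollfd){ .fd = c, .events = POLLIN };
                }
                for (int i = 1; i < nfds; i++) {
                    char buf[4096];
                    ssize_t n;

                    if (!(fds[i].revents & POLLIN))
                        continue;
                    n = read(fds[i].fd, buf, sizeof buf);
                    if (n <= 0) {                      /* closed: recycle the slot */
                        close(fds[i].fd);
                        fds[i--] = fds[--nfds];
                    } else {
                        write(fds[i].fd, buf, n);      /* echo the data back */
                    }
                }
            }
        }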
  • No one put a gun to my head when I bought my computer. In fact, it seems that the crux of your argument is based in jealousy. Linux isn't available on most Personal Computers because of people like Ingo. You spend your time working on the kernel making it more powerful, while every professional OS company works on what the common person really wants: a nice and easy GUI.

While a CLI has its place, the average user enjoys the simplicity of a GUI. Microsoft, while receiving great praise from the Slashdot community for its innovations, has done something that Linux hasn't: it made an OK product and made people think it was Great. Microsoft cannot be faulted for good advertising. To assume that a business must play fair, just because it deals with software, is asinine.

    I applaud this development, and I enjoy the approach that Ingo took. But to say that bundling a server with the kernel is different than what MS has done, is to invite a swift beating. Not everyone is a genius programmer, and thus the same limitations are in place as in Windows. I wouldn't worry about the FTC going after a commercial producer of this, though, because the kernel has been specifically improved for content serving. The server is the OS, and the OS is the server. Pity to the moron who would buy it for gaming.

    I hate Microsoft, don't get me wrong, but they haven't committed the great sins we accuse them of. I just wish they would release software that was less buggy and more efficient. Otherwise, as soon as a Linux distributor gets some marketing intelligence Apple and Microsoft will be screwed. Just save a spot for Be, ok?
  • From one of the given URLs, you can read that TUX and Apache could be bolted together (not yet done, but the API is there?), so you could switch from a standalone Apache to an Apache + TUX setup, then slowly transfer some load from Apache to TUX for everything TUX does better...

BTW, I tried doing something (barely) similar with Apache and thttpd (on different ports). The problem I've found is that it doesn't work well, because so many people are now behind firewalls that only allow outgoing HTTP connections to port 80 (not tried IP aliases yet). "Bolting" looks promising (not to mention TUX seems to be really, really fast...)

  • or will it simply provide an incentive for web servers to be running Linux?
Since he is a Redhat employee, I would assume at least one of the reasons is to add value to Redhat's lucrative server distributions.
  • This is the correct link to Ingo's informative post [slashdot.org].
  • by Jason Earl ( 1894 ) on Tuesday July 11, 2000 @10:38AM (#942379) Homepage Journal

Actually, there is a specific reason that would probably make TUX incompatible with the BSDs: TUX is licensed under the GPL, and the BSD maintainers would probably be very reluctant to port it to their OSes, especially since it is possible that this would require them to release the derivative work under the GPL.

Which leads to the obvious question for Ingo: you mention a specific disclaimer that would allow Apache to be linked with TUX; do the BSDs get the same privilege?

Not that I particularly care, as I am not a BSD user, but putting such a nifty program as TUX under the GPL is bound to cause weeping and gnashing of teeth in the BSD camp. Which brings up another question: how much pressure do you get from your BSD compatriots to release software like this under a more liberal, BSD-friendly license?

  • Uhm - that isn't true. Part of Tux's feature set is its close relation to the khttpd portion of Linux. khttpd isn't going to be ported to another OS... it's a Linux internal.

As I recall from my reading of the discussion on the linux-kernel list when khttpd was first introduced, it was actually an attempt to copy something MS does! I believe that the MS product also uses kernel-space processes to get better performance results!

  • I think that the rule in the commercial world is "all's fair in war and benchmarks; love is irrelevant".

    That having been said, my guess is that the Open Software community is a little less likely to invest time in unusable 'improvements' that do little more than give better benchmark results.

  • Granted, I will tone it down. When I re-read my post I was actually embarrassed. I hope you will accept my apology, there was no reason for me to lash out at you in that manner.

The ironic part is that you probably know more about the subject than I do. I am no kernel-space hacker, by any stretch of the imagination, and I am simply regurgitating what I read in the various articles.

  • It could be made so that the patent could be used in all kinds of open software, but required royalties for closed software. Software licensed through any kind of "open source"-approved license would do for me :-)
  • (I know this isn't the meat of your comment but...)

    > Unix programmers seem to dislike using threads
    > in their applications. After all, they can just
    > fork() and run along instead of using the
    > thread functions.

    This is a somewhat naive view of threading on Unix. The biggest factor is probably that many Unix programs don't use threads because they're not portable - only relatively "modern" (as Unix systems go) systems support them. Yes, you can go with userspace threads but then you give up a lot of the advantages of programming with threads, and a userspace threads package needs to be ported to any new platform it runs on, which gets back to the whole portability problem. Remember, a lot of the programs you run on Unix were originally written back in the 80's or even earlier.

Aside from that, there are some good reasons for not being wedded to the "everything should thread" model. Foremost, multiple fork()ed processes insulate you quite a bit: if a thread in a process starts overwriting random memory, that affects ALL the threads running, whereas a well-written server running as multiple processes won't be affected nearly as much by misbehaving siblings. Signal handling and a variety of other things become more complicated as well in a threaded application.

    Of course, threads are really neat anyhow and make a lot of sense for lots of applications (and in fact a large portion of my job involves working with threads). But too often people make the assumption that threads == "cool, fast, well-written code" and fork == "old, crusty code" when both programming models have their place.
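    A minimal sketch of the two models side by side, assuming POSIX (compile with -lpthread): the fork() version gets memory isolation for free, while the pthreads version shares one address space:

        /* The two concurrency models side by side. With fork(), a crash or
         * stray write in the child cannot corrupt the parent; with pthreads,
         * all threads live (and die) in one address space. */
        #include <stdio.h>
        #include <unistd.h>
        #include <pthread.h>
        #include <sys/wait.h>

        static void handle_request(int id)
        {
            printf("worker %d handling request\n", id);
        }

        static void *thread_worker(void *arg)
        {
            handle_request(*(int *)arg);
            return NULL;
        }

        int main(void)
        {
            /* Model 1: one process per task, isolated memory. */
            pid_t pid = fork();
            if (pid == 0) {
                handle_request(1);
                _exit(0);
            }
            waitpid(pid, NULL, 0);

            /* Model 2: one thread per task, shared memory. */
            pthread_t t;
            int id = 2;
            pthread_create(&t, NULL, thread_worker, &id);
            pthread_join(t, NULL);
            return 0;
        }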
  • That is a very interesting link. Microsoft/Mindcraft is using a dual-processor PIII 800 MHz system here. Overall, hardware-wise it has a lower configuration than the Dell/TUX system. But if you look at it another way, it takes 4 of these beasts (8 processors!) to match TUX, and we know that parallelisation does not scale linearly, so in essence Mindcraft will not be able to use the SPEC benchmark to beat TUX.

I still wonder, though, whether we would fare much better under Redhat/TUX if the original Mindcraft test and the recent ZDNet test were redone, and I'd really like to know Ingo's opinion. But unfortunately my original post isn't being moderated upwards, so I'll be happy to carry on the discussion here.....
  • There are many advantages to having the HTTP server execute in the domain of the kernel. First, there is no overhead for multiple process instances; most people will use a statically compiled binary to achieve slightly higher performance, at the expense of system memory. Also, the HTTP server can use the kernel disk and network buffers as its own. This means that when you want a file from the server, it can use the buffer cache, which probably already holds the file (if it was recently used), and just send those buffers directly to the network; this is how the sendfile function works.

    The HTTP server can also take better advantage of scheduling intimacy. If you're in the kernel, you can lock-step the server with the scheduler, so no time is wasted. The latency for connection handling is greatly reduced because there is no need to wake up a user-space program, wait for a scheduling slot, and accept the connection; this can all be done in one fell swoop by the server, because it's inside the kernel. Taking advantage of multiple processors is also easier to do inside the kernel, where you want very tight control over which processor does what. This enables certain processors to be tasked with running the webserver, and helps eliminate cache misses, because the thread exists as one chunk of code and doesn't get swapped out. This brings up another issue: anything inside the kernel is non-swappable, so there is no swap overhead, reducing latency and context switches.

    Most of the benefits of a kernel-based server come from the tight integration (in an efficient way, not the IE type of way) into the 'core' of the operating system. This also makes way for appliance-based servers, something like a proxy cache or appliance web server. Network Appliance has an HTTP server available for their filers, but it is far less featureful than what TUX sounds like; plus, TUX comes with source code!
  • by lal ( 29527 ) on Tuesday July 11, 2000 @11:02AM (#942387)
How will our favorite Apache modules, like mod_perl and PHP, be helped by TUX? Will my mod_perl code or PHP code run faster with TUX?
  • That would be interesting. Tux is GPL'ed, and that isn't gonna change. Normally, patent and copyright on software reinforce each other. The person who owns the patent also owns the copyright. In this case, the copyright is owned by a number of people, who have all agreed that anyone may distribute the software. The patent is owned by an individual, and patents were never part of the GPL agreement.

The situation gets even worse, though. SuSE is a German company, and in Europe there are no software patents (in practice there are, but this wouldn't get one, because the function isn't new, just the method).

This does assume that Redhat would want to do this, something that I don't think is likely. They must know how much bad press this would create.
  • I understand that TUX was primarily designed and coded by Red Hat. Did other distributions/vendors have their hands in parts of this project? Who contributed, and what did they contribute?
  • Lots of developers create websites under the assumption that higher performance can be achieved by adding more machines behind a load-balancing mechanism. After all, hardware is cheap compared to the cost of creating a website. Under this assumption, the scalability of a webserver does not matter nearly as much. Given that, do you still feel that there are benefits to having kernel-space HTTP functionality, even compared to the risks of extraneous kernel code?
  • I disagree with point 1. They might or might not be. On the other hand the people responsible for Tux might not be bothered to talk about it. The only way to find out is to ask.
  • Hmm. Kernel-mode file service has been around forever, and caching (even of objects of differing types) has been popular as well.

    Even Microsoft's SWC [microsoft.com] (Scalable Web Cache) uses this technique. I don't see how this is really anything new.

    My question is, would this really be an appropriate service for anything but static pages, considering that an errant CGI script might possibly take down not only the service, but also the server?

BTW, it's lame to say that there's no difference - if the server does anything except serve static pages, there's a real chance for data loss if the kernel panics in the middle of a write to disk, etc.
  • Are there any plans for embedded kernels? Would it even be different? (I don't know; that's why I'm asking.)

It would seem to me this would make a very fast cache/proxy server. Embed it in hardware and you've got a very cool little product.

  • Have you tuned TUX for any particular benchmarks, or do you just write it as best you can and throw it in the ring? If it's tuned to some benchmarks, does that hurt its performance on other benchmarks?

    Have any benchmark tests ever been particularly useful for revealing bugs/inefficiencies in your code? That is, are the benchmarks tools to you, or are they just the end product?
  • by wowbagger ( 69688 ) on Tuesday July 11, 2000 @11:18AM (#942395) Homepage Journal
    But rather for the /. crew: when would you see deploying TUX as a server for /.?

This is the real question: when will people for whom serving web pages is their bread and butter adopt this? Apache already has this level of trustworthiness; how long until TUX has it?
  • The thing that I find the most disturbing about the SPEC99 results [spec.org] is that Linux/TUX cleaned everyone's clock so easily. It wasn't just the other IIS machines, but the full-blown UNIX boxes. The machine that came the closest was the IBM RS/6000 with 8 processors running Zeus, and it only had a performance score of 3216.

It is hard to say whether RedHat/TUX would best NT in the original Mindcraft test. I would tend to believe that it probably would.

What I would like to see is how a Linux box of the same configuration, except without TUX, would fare. That might be a better indication of exactly how much faster TUX is. If you noticed, the only machines that ran Linux were the Dell machines, and all of them had TUX on them.

The OSes I would like to see tested are the following:
    NT 4.0 with IIS
    NT 4.0 with Apache
    Win2000 with IIS
    Win2000 with Apache
    Linux 2.2 kernel, Apache w/ TUX
    Linux 2.2 kernel, Apache w/o TUX
    Linux 2.4.pre kernel, Apache w/ & w/o TUX
    Solaris 8 with iPlanet
    Solaris 8 with Apache
    OpenBSD with Apache

If you read the full report from Mindcraft, they also tested the Solaris 7 x86 version and FreeBSD, where NT scored >2000, FreeBSD scored ~1200, and Solaris scored >6000.
  • Good call. Anyone out there have plans for using this soon? Are there any sites running it now? Could they withstand being /.ed?

  • Benchmarks don't really mean much unless you're comparing like with like. Which leads me to my question:

Do you think that ultra-high-end web servers, such as TUX, will ever be able to compete with dedicated systems such as Transputer nets, or dedicated WWW OSes, such as Exopc + Cheetah?

    (The overhead of a full-scale OS, plus full-feature system library, plus massively-extensible WWW server -must- impose penalties on multi-purpose systems that simply don't exist for the more basic systems. And transputer networking is a LOT less heavy than that SMP quagmire.)

  • by nadador ( 3747 ) on Tuesday July 11, 2000 @11:47AM (#942405)
TUX appears to me to be rather specialized; e.g., if you run a dedicated web server, eventually you'll use TUX, while if you merely also serve web pages, you probably won't, just because you don't need to.

Do you see TUX as indicative of a growing realization that general-purpose computing might not be perfect for everything? More specifically, do you see it as part of a movement towards more specialized hardware and software? For instance, why should a web server run the same kernel as a workstation, and why should they be built of the same parts?
  • Would you recommend the use of loadable kernel modules as a mechanism for developing/improving other services?

    Beyond the use of TUX, what other features/fixes/tunes may be necessary to move Linux into more of the enterprise class of servers?

    How long did it take to develop TUX? What about Linux helped/hindered this development?

    Is SPECweb a good benchmark for TUX? What would you like to see in a better web benchmark?
  • TUX was performing with dynamic content. SPECweb99 is designed to mimic a real-world situation with mixed static and dynamic content.

    For Ingo: Do you foresee this becoming the enterprise platform for web serving, based on the combination of 1) a threaded TCP/IP stack, 2) httpd modules, and 3) TUX?

    I.e., what possibilities exist for further optimizing? It is really incredible how much more optimized TUX + 4 CPUs/NICs is than something like an 8-CPU IBM AIX machine, or Solaris...
  • What makes you think BSD folk won't accept GPLd code?

    From the FreeBSD contributions HOWTO:


    When working with large amounts of code, the touchy subject of copyrights also invariably comes up. Acceptable copyrights for code included in FreeBSD are:

    1. The BSD copyright. This copyright is most preferred due to its ``no strings attached'' nature and general attractiveness to commercial enterprises. Far from discouraging such commercial use, the FreeBSD Project actively encourages such participation by commercial interests who might eventually be inclined to invest something of their own into FreeBSD.

    2. The GNU Public License, or ``GPL''. This license is not quite as popular with us due to the amount of extra effort demanded of anyone using the code for commercial purposes, but given the sheer quantity of GPL'd code we currently require (compiler, assembler, text formatter, etc) it would be silly to refuse additional contributions under this license. Code under the GPL also goes into a different part of the tree, that being /sys/gnu or /usr/src/gnu, and is therefore easily identifiable to anyone for whom the GPL presents a problem.



    No problems here.

    Still, it would be nice if folks really let their code be free.
  • How does TUX handle the configuration of the server? Since it seems to be pretty much kernel-level code, does it need a reboot for new configs? Or is the configuration just read into memory using user-space code?

    Can TUX handle Apache modules as of now?

    Is this technology possible to implement for FTP, SQL, SMTP, etc. servers?

    uhmm... the patent question was taken, so was the threading and the portability... hmm... I guess I have no further questions, your honor.

    .sig
  • by carlos_benj ( 140796 ) on Tuesday July 11, 2000 @12:15PM (#942415) Journal
    I hope you will accept my apology, there was no reason for me to lash out at you in that manner.

    Pardon me. I thought I was logged into Slashdot.

    carlos

  • Spec'ing software is merely meant to coagulate your thoughts into something codeable. If you are smart enough to write software, let alone an entire operating system, without doing so, then obviously your work will be good enough to make designing it formally a moot point.

    Don't think that just because you learned it in school, it's correct.

    --
  • This is a convenient ivory-tower way of looking at coding; the fact is that any large, complex program (i.e. the ones you'd like to use threading or multiple processes for) has bugs. Period. Being able to recover better from bugs is simply a good thing, especially when you're going for reliability (databases, anyone?).

And aside from that, you don't even know if the bug is yours. For instance, Apache processes die off after handling a certain number of requests to try to limit the effect of memory leaks *in vendor libraries*. It's nice working on (Linux|*BSD), where you can just fix the library in question, but if you have to work on Irix, Siemens Unix, or something even more obscure, do you want to be waiting on a vendor to get you a bug fix? And of course the libc on various systems may have even more nefarious bugs than memory leaks...

    In short, in an ideal world, threading is probably almost always the more attractive solution. In the real world, however, compromises sometimes need to be made, and frequently fork() is a good compromise.
  • by Pinball Wizard ( 161942 ) on Tuesday July 11, 2000 @12:56PM (#942429) Homepage Journal
    Since your goal is to incorporate HTTP into the Linux kernel, I'll assume you are intimately familiar with how the OS deals with things like caching, memory, etc.

First of all, great job. For those of us for whom speed is a primary concern, integrating HTTP into the kernel is a godsend. Obviously this will be a great improvement.

That said, don't you think the hardware differences in this last test are big enough to discredit the results? The W2K machine had an Ultra2 SCSI channel (80 MB per second data transfer) vs. the Ultra160 (160 MB per second data transfer rate) of the Linux machine. The test operator claimed that since the machines had more memory than the total size of the files they were serving, the SCSI bus speed did not matter. Is this true? Secondly, the Linux box had a dedicated gigabit Ethernet adapter while the W2K machine was using a 10/100/1000 NIC. The tester claimed that since they were plugged into the same network, the NICs were functionally equal.

In your opinion, do the hardware differences mean anything? I'm asking because if this were the other way around (and the Windows machine won), I think the Linux community would have been up in arms about it.

  • No one put a gun to my head when I bought my computer.

    Try to buy a computer without Windows. Most places, that just isn't available.
    In fact, it seems that the crux of your argument is based in jealousy.

    What about my argument looks like jealousy? I am not jealous of Microsoft at all. I don't want to see Linux as the only choice when people go to buy a computer. They should be able to pick between different OSes pre-loaded, or none at all. Don't mistake the fact that I think that other companies besides Microsoft should have a fair opportunity to do business with my being jealous of Microsoft.

    Linux isn't available on most Personal Computers because of people like Ingo.

    Eh? That isn't a fair statement at all.

    You spend your time working on the kernel making it more powerful, while every professional OS company works on what the common person really wants: a nice and easy GUI.

Bzzt. Wrong again. Look at such 'professional OS companies' as Compaq (OpenVMS), Sun (Solaris), HP (HP/UX) and SCO (Open Desktop, UnixWare). They all use X, and have GUIs which are arguably less nice and easy than KDE.

While a CLI has its place, the average user enjoys the simplicity of a GUI.

    And personally, I think that KDE, at least is better in many ways than the Windows GUI. I think Gnome is only lagging slightly behind KDE, and has some things about it that are nice as well.

Microsoft, while receiving great praise from the Slashdot community for its innovations,

    Er, either you mistyped that or you are being sarcastic. Microsoft has never had any real innovations, let alone received any great praise from the Slashdot community.

has done something that Linux hasn't: it made an OK product and made people think it was Great. Microsoft cannot be faulted for good advertising.
    Actually, Microsoft's advertising isn't always that good, it is just smotheringly pervasive. Linux doesn't have the kind of money to spend on advertising that Microsoft does. Microsoft's commercial success can't be completely explained by advertising, it is also all of the other things they do.

    To assume that a business must play fair, just because it deals with software, is asinine.

To assume that a business shouldn't have to play fair, just because it deals with software, is what is truly asinine.

    I applaud this development, and I enjoy the approach that Ingo took. But to say that bundling a server with the kernel is different than what MS has done, is to invite a swift beating.

    It is different, if only from the fact that you have to choose to run Linux to begin with, and you can easily de-bundle the kernel based server if you like. How is that the same?

    Not everyone is a genius programmer,

    Oh, please, you don't have to be a genius programmer to choose to or not to load a module, for example. I've known people who aren't programmers who've figured out how to rebuild a kernel for goodness sake.

    and thus the same limitations are in place as in Windows. I wouldn't worry about the FTC going after a commercial producer of this, though, because the kernel has been specifically improved for content serving. The server is the OS, and the OS is the server. Pity to the moron who would buy it for gaming.

    Actually, Unreal Tournament runs pretty well on my main Linux box at home... But I didn't buy that box for gaming.

    I hate Microsoft,

I don't hate Microsoft, I hate what they do. If they would clean up their act, as, for example, IBM has for the most part these past few years, I would be willing to tone down my criticism of them.

    don't get me wrong, but they haven't committed the great sins we accuse them of.

What do you think they haven't done? One of the things that irks me most about Microsoft's behaviour is that they seem to think they have to play dirty and nasty all the time, even when they have already essentially won in a market. Other companies play dirty too, on occasion, but nobody is as ruthlessly cutthroat as Microsoft. It's often like a pro sports team playing a bunch of kids on a sandlot, and thinking they have to cheat because they are afraid they might lose, or even let the kids score once.

    I just wish they would release software that was less buggy and more efficient.

I am not so concerned about the quality of Microsoft's products. In a competitive market, a company that produces bad products with no redeeming values would eventually get pushed down. Only when monopolistic powers are in play can a company consistently get away with forcing people to buy shoddy products.

    Otherwise, as soon as a Linux distributor gets some marketing intelligence Apple and Microsoft will be screwed.

It's not that simple. Once a company has a monopolistic position in a market, it can often crush smaller competitors no matter how much marketing savvy they might have.

    Just save a spot for Be, ok?

    I've got nothing against Be.

  • I can't imagine the BSD folks wanting to put their kernel in /sys/gnu; it would undoubtedly break a heck of a lot of BSD software :).

In other words, the BSD folks are willing to include GPLed utilities in their distribution, mainly because they wouldn't even have a compiler without gcc, but I would bet money that they aren't willing to include GPL code in their kernel. IANAL, but the FSF and the Debian Project feel that the KDE project is distributing software illegally simply because it links the QT libs from GPLed software; they say that QT then becomes part of the "derivative work." The monolithic BSD kernels would almost certainly constitute a "derivative work" as regarded by the GPL, and so the entire kernel source would have to fall under the GPL.

    Needless to say, this is very unlikely to happen.

  • This is a good example of a coherent, interesting /. post that's also complete bullshit. Moderators seem to like these.

This isn't a troll. Matts obviously hasn't bothered to read any of the articles referenced above. If he had, he would have noticed that TUX is designed *specifically* to be integrated into Apache.

  • That is pure BS. When I purchase new computers, I have never had to pay for Windows at all.

Are you sure? You think Microsoft is giving it to you for free? It is included in the price of the computer whether you like it or not. Until recently, the only way you could buy an assembled computer at retail without Windows was to buy one from a computer store small enough to get away with building one without Windows without getting noticed by Microsoft.

As a matter of fact, I've never seen Maxtor or any other HD manufacturer sell drives with Windows pre-installed. If you are talking about newbies who are buying whole PCs from Dell or Compaq, they know that Windows comes with those systems; that's why they buy them.

    Oh please, many of those newbies don't really even understand what Windows is, or why they should have a choice as to something else. Microsoft has made it virtually impossible for people to have any choice.

    Think before you post moron.

    I think about my posts and stand behind what I said. As to who is a moron, I will let the court of Slashdot opinion decide.

  • This means the data is (redundantly) read from disk by the kernel, copied to userspace for Apache to see, just so that Apache can copy it back to kernel space

When sending a file to the network, you can use sendfile(2) [rt.com] to avoid this scenario. It doesn't look like Apache currently does this, but hopefully future versions will.
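    For the curious, a minimal sketch of that zero-copy path, assuming Linux 2.2+ where sendfile(2) is available; the kernel feeds page-cache data straight into the socket with no user-space round trip:

        /* Zero-copy static-file send: the kernel moves data from the page
         * cache straight to the socket buffers. */
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <fcntl.h>
        #include <unistd.h>

        static int serve_file_zerocopy(int client_fd, const char *path)
        {
            struct stat st;
            off_t off = 0;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                return -1;
            if (fstat(fd, &st) < 0) {
                close(fd);
                return -1;
            }
            while (off < st.st_size)
                if (sendfile(client_fd, fd, &off, st.st_size - off) < 0)
                    break;                   /* sendfile advances off for us */
            close(fd);
            return off == st.st_size ? 0 : -1;
        }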
  • I've downloaded the code and browsed it; I'm not entirely certain precisely what Tux is.

    Is it:

    • A "hack" designed to make IIS look as stupidly slow on carefully "hacked-at" benchmarks as the IIS benchmarks make Apache look?

      No particular flaming intended here; in either direction, this represents "benchmarketing" as opposed to anything realistic.

      It may be as unrealistic to "real world" situations to use a highly tuned combo of TUX and Apache and make IIS "look sick" as it was for Mindcraft to use a heavily tuned IIS to make a poorly-tuned Apache look bad.

    • Something that would be embedded into a sophisticated web application framework to make certain cases of page accesses run ravingly fast ?

      In which case someone building the next Slashdot might care, as they need to write finely-tuned code, whereas I, when running a lightly loaded web server at home, will have a hard time detecting differences between Roxen, [fsf.org] Apache, [apache.org] Boa, [boa.org] and WN. [nwu.edu]

    • Something that I could run in lieu of Boa [boa.org] as a tiny, fast web server?

    This isn't quite a flame; it truly is important for a piece of software that you want people to use to be described in an economical manner that makes it easy for people to determine its relevance.

  • Say, didja ever think that maybe forks also take advantage of multiple processors?

    And didja ever think that user threads, where the kernel knows nothing of the threads, can't take advantage of multiple processors?

    --
  • by JordanH ( 75307 ) on Tuesday July 11, 2000 @04:29PM (#942449) Homepage Journal
    • I think that the rule in the commercial world is "all's fair in war and benchmarks; love is irrelevant".

    You don't have to defend the TUX benchmarks as being exploitive of some weakness with the SpecWeb test. Nobody did anything "unfair" here.

    I'm finding the pro-Microsoft moderation bias around here lately a little hard to stomach. If I had wanted to read FUD surrounding Linux Benchmarks, I'd just tune into ZDnet.

    I guess that moderators think someone is brave for expressing pro-Microsoft opinions that will likely catch derision from all the close-minded Microsoft bashers here. The fact is, if you write anything even vaguely pro-Microsoft here these days, and keep a cool, even tone, you're likely to be moderated way up.

An MS employee posting pro-MS comments, brave? Hardly... That employee surely has nothing to fear from an eWatch [businessweek.com] investigation.

    konstant intimates that making design tradeoffs (features for speed) somehow makes a benchmark invalid.

    Those who develop benchmarks are supposed to take into account the "real world". If you feel that the benchmark allows someone to compare impractical, unusable software to more fully featured software, then you should criticize the benchmark and be specific about how the benchmark is not addressing these "real world" concerns so that we can be educated and the benchmark can be improved. Don't ask leading questions that suggest that features were thrown out to the point of making a product that's not usable in the "real world". Perhaps he didn't really suggest that TUX was unusable in the "real world". No, he did something more subtle. He suggested that if features were thrown out to benefit performance, then this test was no different than if features had been thrown out to the point that it was unusable (asking "how is this different...").

    Both Spec and Ingo Molnar have been quite open about the conditions of the test and the capabilities of TUX. As Ingo Molnar says here [slashdot.org]:

    • "...while it's not as feature-full as Apache, TUX is a 'full fledged' HTTP/1.1 webserver supporting HTTP/1.1 persistent (keepalive) connections, pipelining, CGI execution, logging, virtual hosting, various forms of modules, and many other webserver features."

    The list of capabilities given above for TUX covers what is needed by the overwhelming majority of Web sites. Sure, there may have been some usability tradeoffs, but look at the HUGE performance benefits.

    So, exactly what is konstant suggesting? That it's not a "fair" benchmark because it doesn't support all of the usability features that Apache has? Or is it only a fair benchmark if TUX can do everything that IIS does?


    -Jordan Henderson

  • Do you really think that if newbies out there don't even understand what Windows is, that they're gonna want to use something "open" and "free" like Linux?

    I don't think that being "open" will be something those people will understand one way or the other. Free as in "free beer" is something that many of them will understand. Free as in free speech takes a little more explanation.

    I wish some of the posters on Slashdot would get their heads out of their asses long enough to realize that LINUX ISN'T A DESKTOP OS!!

    Linux is a desktop OS. It may not be a desktop OS for everyone yet, but then again, I don't think Windows fits that either. There are some things in Windows that are still much worse than MacOS for the truly computer illiterate. With KDE, I don't really see how Linux is very far behind the Windows 9x interface. I never get many details on just what people think is so great about Windows 9x compared to KDE; maybe you can say?

    Another thing is that Linux has emerged as one of the few alternatives to Windows on the x86 architecture, because Microsoft's tactics have squashed virtually every other competitor, regardless of technical merits. OS/2 could have been a contender, for example, had Microsoft not stabbed IBM in the back. BeOS could be a contender, but it faces an uphill battle trying to build a market against Microsoft.

    Hell, I've been seriously interested in computers for more than 2 years,

    Oh my, two whole years. I've been seriously interested in computers for something like 20 years. Linux is a good desktop for serious users, and is making strides towards being friendly for those not quite at that level yet.

    and I still have fits trying to configure some stupid things in Linux.

    Actually, if you weren't used to Windows, it would probably be easier in some ways to learn Linux. I learned UNIX before I had used MS-DOS much. There was no such thing as Windows back then -- the only widespread GUI was MacOS, and PC users swore up and down that the GUI was a fad, and that the command line was superior to any GUI.

  • The Linux kernel has had threads for a very long time now. In fact, it has no other concept available to user-space for an executable task.

    Threads are either alone in their VM, in which case we call the whole thing a ``process'', or several threads of execution share one VM area (a so-called multithreaded process). The difference is _only_ whether there is one thread or more in that VM area.

    The difference between Linux and NT is, to the programmer, that on Linux you usually use the pthreads library to create a thread (the library will call clone() which tells the kernel to create a new thread), whereas on NT you use the Win32 library call CreateThread() to tell the NT kernel to create a thread.

    pthreads is fairly inefficient, which is why some people believe that threads aren't native to Linux. That is, creating a thread with pthreads usually isn't much faster than a fork(), whereas on NT CreateThread() is a lot faster than CreateProcess(). What people tend to forget is that creating a full-blown process with fork() on Linux is still a lot faster than creating a single thread with CreateThread() on NT on identical hardware (measured in clock cycles from the start of the call until the first line of the new process/thread is reached -- source: an LJ article some time ago).

    Threads could hardly work much better in the Linux kernel itself. The pthreads _library_ could probably be improved to make thread creation faster, or you could just call clone() yourself. But this is not a kernel issue.
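
    For the curious, calling clone() yourself looks roughly like this (a sketch: the stack size is arbitrary, and the flag set is trimmed compared to what the pthreads library actually passes):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static int thread_fn(void *arg)
        {
            printf("in the new thread, sharing the parent's VM\n");
            return 0;
        }

        int main(void)
        {
            /* clone() needs a stack for the child; x86 stacks grow down,
               so pass a pointer to the TOP of the block. */
            char *stack = malloc(65536);
            pid_t pid;

            if (stack == NULL)
                return 1;
            /* CLONE_VM = share the address space, i.e. create a thread.
               (pthreads also passes CLONE_FS | CLONE_FILES | CLONE_SIGHAND.)
               Leave CLONE_VM out and the child gets a copied VM, which is
               essentially a fork(). */
            pid = clone(thread_fn, stack + 65536, CLONE_VM | SIGCHLD, NULL);
            if (pid == -1)
                return 1;
            waitpid(pid, NULL, 0);
            free(stack);
            return 0;
        }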

  • I have to agree with you on this one, but with exceptions. MS didn't "develop an OS that supports a wide range of hardware". What they did was maintain and improve a codebase (MS-DOS) that was the de facto standard for personal computing. This was accomplished with fairly little effort:
    a few file system changes
    a few performance optimizations
    and a few extensions to provide new functionality (networking, CD-ROM's)

    Making NT is Microsoft's single biggest accomplishment. NT is to MS-DOS what today's UNIX is to CP/M: several orders of magnitude more functionality, far better performance under load, a modern feature set, and it runs on everything from desktop to server. (Now I know that UNIX is *NOT* a derivative of CP/M, and I know somebody would have tried to point that out if I didn't say it.)
    So, they get half credit. They rebuilt their core OS from scratch, yet maintained compatibility (mostly) for old devices/APIs/interfaces. But they haven't eliminated the usability and stability problems that plagued DOS. Maybe when they become a real "OS" company (as opposed to a "monopoly maintaining" company) they'll start to fix some of those bugs that cause all those damn blue screens, refine that interface a bit, and actually add features in a sensible and consistent way. They might even make some products that can compete head to head with UNIX on servers. It could happen... really!! ;-)

  • I think as soon as someone does something like this, it naturally comes to mind, "hmm, why do I have all that other crap on my {web, sql, ftp, email} server anyway?"

    Considering that the trend of the 90s was to dedicate each server to a single function (the webserver, the fileserver, the PDC, the mailserver...), do you think that people trying to minimize maintenance will be moving toward "appliance" computers, with only a single server app and a stripped-down operating system?

    We've seen it before -- routers used to be real computers. Now -- when's the last time you heard of a router crashing?

    Do you see server rooms in five years looking like a component stereo system?
  • I've actually spoken to Ingo in depth about TUX already. In fact I know the answer to my own question, but figured others would be interested.

    Note though that once TUX gets integrated into Apache, it's stuck with Apache's process model, its API and all the other overhead in there. Needless to say, it's going to be a lot slower, and I'd be willing to bet that it wouldn't offer any significant performance improvement over Apache 2.0's MPM.

    Besides, I doubt very much that the ASF will incorporate TUX. Read their list archive some time (new-httpd), and you'll see how strict they are about patches for the sake of performance (e.g. the SGI 10x speedup patches were rejected).
  • I wouldn't use TUX for a highly dynamic app or an e-commerce system, unless someone makes a mod_perl (or similar) for it. OTOH, I would certainly consider TUX as a static webserver, to serve images. Real-world servers serve several images per page, and it typically pays to have a separate box, or at least a separate httpd, for these. thttpd has so far been known as a good choice for this; it seems that TUX may be even better.
  • You can't go into most retail outlets and buy a computer without a hard drive. Only the smaller stores will sell you one that way. CompUSA, Best Buy and the department stores will not. Things might be different in other parts of the country or world, but around here, it isn't that easy.
  • Although I rather appreciate the fairly even-handed tone of your post, I think this point is getting overblown. It's a point that I feel needs to be addressed with a bit more reality, oddly enough for the sake of Linux and open source in general.

    I think the important thing is to think about what we are going to do about the problem more than worrying about the problem itself.

    Microsoft has done one thing that no other computer company or organization has even approached accomplishing. They developed an OS that can support an extremely wide range of hardware, and brought computing to what you would refer to as the average user.

    What is innovative about that? They didn't invent MS-DOS, and MS-DOS was a somewhat cheesy CP/M clone anyway. If anyone deserves the credit for developing an OS that could support an extremely wide range of hardware, it is the late Gary Kildall of Digital Research. As for nurturing a 3rd-party hardware market, the IBM PC was nothing new; it borrowed its slot expandability ideas from S100 machines (originally Altair and MITS) and the Apple II. As for bringing computing to the average user, Microsoft wasn't very innovative there either; what they did was copy Apple's copy of Xerox's work. If anyone deserves credit for bringing the GUI to the computing world, it is Steve Jobs of Apple.

    Sun, IBM and Apple all rolled together haven't accomplished half of what MS has in this regard.

    I'm afraid we'll have to agree to disagree on this one. I think Microsoft likes to take credit for a lot of other people's work.

    This keeps getting referred to as something trivial, when it's anything but.

    Being nontrivial and being innovative are not the same thing.

    Linux is still going through the pains of trying to get all these various hardware drivers to play nice together, and to line up OEM support for them. Things are finally starting to get easier, with more vendors willing to release specs and reference code, and sometimes even let their own people work on drivers. Vendors are willing to do this partly because Linux has achieved a certain stature in the world, and partly because they are less worried about retaliation from Microsoft now that Microsoft is under scrutiny by both the computing community and the feds.

    Linux has it a lot harder creating device drivers, because most of them are still being written by users; Microsoft has all of the OEMs doing the work for them. It is pretty amazing to me how wide Linux's hardware support is given the obstacles it has faced in getting there, especially with hardware vendors who won't release specs.

    There's obviously still quite a bit of work yet to be done to bring Linux to the desktop as well.

    Depends on what you mean by quite a bit. I personally don't think that KDE is that far off from Windows 9x, and in some ways I think it is better. I keep hearing how far Linux is behind in this area, but I seldom get very many details as to what people think is lacking in KDE, for example, or any positive suggestions as to what to do about it.

    Point is, MS has a lot of good and bad about them.

    Hmmm... a little good and a lot bad would be the most I could see. In my opinion the bad painfully outweighs the good.

    They're a big company that does a lot of stuff; of course there are going to be a lot of aspects to them. Constantly focusing in on just the bad limits you to learning only half the lessons that a company like MS can provide.

    Just because I complain loudly about the bad 90% of Microsoft doesn't mean I haven't paid attention to what they are doing. It's just that of the good 10%, probably 90% is borrowed from someone else, and I'd prefer to look to the original.

    As I've said before, if Microsoft can clean up their act as much as IBM has over the past few years, I will be willing to tone down my criticism of them. I don't hate them for the sake of hating them, I hate what they do.

    Disclaimer:
    This is not meant to be a pro-MS post. There is a bigger picture here than "MS Sucks" and I just find it unfortunate that so many folks around here can't see that.


    While the picture may be bigger than that, it is mostly obscured by that. I think the thing is that you have to be ready to justify conclusions based on concrete criteria rather than just saying "_____ sucks".

    Sorry for picking on you SoftwareJanitor, there are certainly folks far worse than yourself in what I'm talking about here.

    I'm a big boy, I can take criticism. I pretty much completely disagree with you, but you are entitled to your opinion, and entitled to express it.

  • I would have to agree with you pretty much. If Microsoft didn't exist, or at least hadn't crushed everyone in their path with their ruthless tactics, the computer industry would be a lot further along than it is now.

  • I know it's a sleazy thought, but the reason why Red Hat is the most popular distro has more to do with marketing and deals than the code itself.

    And what exactly is wrong with Red Hat being the "Most Popular Distro"? Is there a popularity prize that a distro can win which makes it somehow better? So Linux isn't Linux if there's some sinister plot? And could you please tell me which bits of code are "inferior"?

    Facts are always better than hyperbole, but often hard to find when faced with an OS bigot. Are you bigoted? If my NT box has an uptime of 187 days and meets my needs for a particular task, is that bad? If I use Red Hat Linux instead of SuSE or Slack or Debian, is that wrong? You'd begrudge me my Red Hat implementation for what reasons, exactly? Show me the code, pal.

    At the risk of starting a flame war, I just have to say that this sort of sentiment is exactly what is wrong with the Linux "community" right now. We need solidarity. We don't need ill-informed newbies dividing the Linux camp. "My distro is better than yours..." Well nyah nyah nyah. I got the same damn kernel you got. I have a system that does exactly what I want and is highly extensible should future needs arise. But it's based on Red Hat. That's bad?!? What have you been smoking, my friend?

    Drop the 'tude and realize that we're all pals here. At least one Linux distro has to be popular, and if that was Debian, you'd probably come up with unfounded reasons why it's bad. You'd be using Red Hat or Mandrake then, wouldn't you? Just because they were the "alternate Linux". Admit it.

    Maybe Red Hat isn't the cool distro as far as you're concerned, but I have news for you: A choice between your opinion and my needs isn't a choice. I choose what works, regardless of what the cool distro du jour happens to be. Your opinion doesn't help anything.

    Get a clue and stop being such an elitist.

    -B

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...