
ESR and the MindCraft Fiasco

The one and only Eric S. Raymond has submitted his response to the Mindcraft report that we've talked about a bit here lately. This is a good wrap-up type piece which nicely summarizes the flaws with the testing (which range from "yeah, maybe" to "you gotta be kidding!"). Anyone who thought the tests had any validity should read this.
The following was written by Slashdot reader, Jargon File Maintainer, Fetchmail Author, and Open Source Evangelist Eric S. Raymond.

The Mindcraft fiasco

Microsoft's latest FUD (Fear, Uncertainty and Doubt) tactic may be backfiring.

A 21 April ITWeb story reported results by a benchmarking shop called Mindcraft that supposedly showed NT to be faster than Linux at SMB and Web service. The story also claimed that technical support for tuning the Linux system had been impossible to find.

Previous independent benchmarks (such as "Microsoft Windows NT Server 4.0 versus UNIX") have found Linux and other Unixes to be dramatically faster and more efficient than NT, and independent observers (beginning with a celebrated InfoWorld article in 1998) have lauded the Linux community's responsiveness to support problems. Linux fans smelled a rat somewhere (uttering responses typified by "Mindcraft Reality Check"), and amidst the ensuing storm of protest some interesting facts came to light.

  1. The benchmark had been paid for by Microsoft. The Mindcraft press release failed to mention this fact.
  2. Mindcraft did in fact get a useful answer to its request for help tuning the Linux system. But they did not answer the request for more information, nor did they follow the tuning suggestions given. Also, they forged the reply email address to conceal themselves -- the connection was made after the fact by a Usenetter who noticed that the unusual machine configuration described in the request exactly matched that of the test system in the Mindcraft results.
  3. Red Hat, the Linux distributor that Mindcraft says it asked for help, reports that it got one phone call from them on the installation-help line, which isn't supposed to answer post-installation questions about things like advanced server tuning. Evidently Mindcraft's efforts to get help tuning the system were feeble -- at best incompetent, at worst cynical gestures.
  4. An entertainingly-written article by the head of the development team for Samba (one of the key pieces of Linux software involved in the benchmark) described how Mindcraft could have done a better job of tuning. The article revealed that one of Mindcraft's Samba tweaks had the effect of slowing their Linux down quite drastically.
  5. Another Usenet article independently pointed out that Mindcraft had deliberately chosen a logging format that imposed a lot of overhead on Apache (the web server used for the Linux tests).

So far, so sordid -- a fairly standard tale of Microsoft paying to get exactly the FUD it wants from a nominally independent third party. But the story took a strange turn today (22 Apr) when Microsoft spokesperson Ian Hatton effectively admitted that the test had been rigged! "A very highly-tuned NT server," Mr. Hatton said, "was pitted against a very poorly tuned Linux server".

He then attempted to spin the whole episode around by complaining that Microsoft and its PR company had received "malicious and obscene" email from Linux fans and slamming this supposed "unprofessionalism". One wonders if Hatton believes it would be "unprofessional" to address strong language to a burglar caught in the act of nicking the family silver.

In any case, Microsoft's underhanded tactics seem (as with its clumsy "astroturf" campaign against the DOJ lawsuit) likely to come back to haunt it. The trade press had largely greeted the Mindcraft results with yawns and skepticism even before Hatton's admission. And it's hard to see how Microsoft will be able to credibly quote anti-Linux benchmarks in the future after this fiasco.


Comments:
  • by Anonymous Coward
    Netcraft is a site-surveying site that has always been objective and fair. MINDCRAFT are the idiots in question.
  • by Anonymous Coward
    Demonstrate performance on SPECweb96 on a 4-way Xeon. Compare against existing NT4 and Solaris benchmarks on similar 4-way Xeons.

    --G
  • by Anonymous Coward
    Does it really matter whether Linux outperforms NT on a high-end SMP server? We know that Linux outperforms NT at the low end, and other Unixes outperform NT at the high end. Linux makes NT look like a poor choice for the small-scale departmental server that has always been NT's bread and butter, and Unix has always made NT look like a poor choice for high-end enterprise servers, and that doesn't seem to be changing.

    So what is left for NT?
  • by Anonymous Coward
    You have to be real careful performing benchmarks against commercial software. If you read through some of the Micro$oft EULAs, there are specific prohibitions on publishing benchmarks. In short, without an agreement from Micro$oft, it's a good way to get sued!

    The commercial product I work on would love to publish direct comparison benchmarks against the competing MS product. Our legal department won't let us.

  • by Anonymous Coward
    Doesn't anyone else find it interesting that after funding this Mindcraft study, MS didn't put out a press release touting the results?

    Personally, I think MS realized that Mindcraft didn't do a good job of running the tests, so they declined to advertise the study. Mindcraft just put out a release on their own.

    You have to understand how these things work. All good software companies run tests of their product under various conditions on various hardware configurations. When they find one that beats some portion of their competition, they try to advertise that fact. To get credibility, they hire an independent lab to reproduce the tests.

    In this case, it looks like even MS was suspicious of how poorly Linux fared. My guess is that MS knows that even a well-tuned Linux will lose this particular test, otherwise they wouldn't have hired Mindcraft to run it in the first place.
  • With the recent report on untuned NT vs. Linux running an Oracle server, many of the marks were 20 times faster on Linux than on NT. So I got to wondering: wouldn't it be funny if someone was able to achieve similar or better scores on a Linux system with a less powerful machine (how many of us have quad Xeons?). If it were shown that a dual-P2 or something running Linux was just as good as their beefed-up Xeon NT box, that would make quite the statement, both on the study and on NT.
  • As the guy above said, a quad-CPU machine isn't terribly high end, but it's certainly more than the average desktop.

    But with regards to the study, wouldn't it have been more truthful to tune the Linux box appropriately? Then if the results showed that NT was better, a point could be made that Linux needs more work in these areas. I really believe people would accept that as honest criticism (again, IF the results showed that NT was faster, I have my doubts that would happen :)).

    Mindcraft obviously didn't make much of an effort in tuning the box (i.e. hiring a decent Linux admin): they posted a message to a couple of newsgroups (lacking sufficient details, and not responding when someone requested more details) and made one call to Red Hat (which was directed to the wrong group, as ESR stated). And yet they built an entire study around these points, making claims that Linux support is bad and all. For these reasons they were blasted, not for saying NT is better than Linux (as some people I've talked to think is the motive for Linux people's outrage).
  • by whoop ( 194 ) on Friday April 23, 1999 @08:31AM (#1920162) Homepage
    In the second paragraph of the Performance Testing section on their web page [mindcraft.com], they say flat out:

    "...we work with you to define test goals. Then we put together the necessary tools and do the testing. We report the results back to you in a form that satisfies the test goals."

    Since they say Microsoft sponsored the test, we can replace "you" with "Microsoft." So they worked with MS to define the test goals (NT is 2 or more times better than Linux). Then they put together the tools to do that, hacking the registry and all to beef NT up and slowing down the Linux Apache/Samba servers. And finally, they report the results back in a form that satisfies the test goals, and lo and behold NT is 2-3 times faster than Linux. Such a surprise, right?
  • Very interesting. What was the total cost of your system?

    Things I could see improving your system:

    Drop the Apache RPMs and compile it yourself, specifically with the PGCC compiler; I have heard of 30+% speed improvements with it. You could even go for Stampede, the PGCC-based distro, for your base install. (A rough build sketch follows this comment.)

    Look at Mylex RAID boards; they are supposed to work a little better than the AMI cards that Dell uses.

    It'd be fun to send that box to Mindcraft and have them test it at the same performance (or 90% of the performance) of the NT box, but costing less than $5k, not $18,000.
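
    A minimal sketch of the compile-it-yourself suggestion above (the Apache version, paths, and flags are illustrative, not taken from any of the tests discussed here):

    # build Apache from source with pgcc instead of the stock gcc
    tar xzf apache_1.3.6.tar.gz
    cd apache_1.3.6
    CC=pgcc OPTIM="-O6 -mpentiumpro" ./configure --prefix=/usr/local/apache
    make && make install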
  • Xenophobia. Managers are scared of Linux because it's not NT, which they're used to.

    Managers are used to NT? Where? Managers that propose NT for high-end stuff have definitely never seen NT for long enough to be "used to" it -- but they are "used to" being bombarded by M$ advertising/propaganda for NT, and it is this, not the mythical "they are already used to NT", that should be counteracted.

  • an impartial third party that can optimize both Linux *and* NT fairly.

  • Please note that this report was (deliberately?) released a week before Comdex, when most Linux vendors were running around like chickens with their heads cut off trying to get everything straight for their Comdex displays. And most of the top people for Linux vendors have been at Comdex all of this week, while the home offices are understaffed and slowly going crazy because there's not enough people to do everything that's needed to keep the business running...

    Given all of that, it's a little early for sponsored benchmarking by members of the Linux community. This stuff takes time, and if it's a choice between doing Comdex right and repeating a discredited benchmark, doing Comdex right comes up at the top of the list every time.

  • Isn't that what I said in the "Mindcraft Reality Check"?

    There are valid limits to Linux scalability, problems that need fixing, and honest benchmarking can help us find those limits. Unfortunately, Mindcraft's benchmarking was so flawed by misconduct and poor judgment that it is not useful for that purpose.

    BTW, I do agree with the Microsoft spokesman who said that he was certain that NT would have come out on top even with an honest test. I suspect the SAMBA results would have been quite competitive, within 3% (more or less) of the NT numbers, but the Apache server has never been known for its static file serving speed (though mod_mmap_static may change that!). On the other hand, there is a big difference between the 5%-10% advantage that I bet Mindcraft would have found, and the ridiculous numbers that they actually reported. They actually shot themselves in the foot here, because if they'd reported the real numbers, the Slashdot Crowd would have howled, but Jeremy Allison and other technical heavyweights would have stayed on the sidelines working on fixing the problems found, and the media would have ignored the Slashdot Crowd.

    Just count it as another example of Microsoft Arrogance (tm) outweighing their good sense. It's amazing how such bright people can do such stupid things.

    -- Eric
  • Tuning Linux properly does not have to involve cryptic commands...a Tcl/Tk (or Perl/Tk...or Python...ad infinitum) applet could do the trick just as well, thanks to the sysctl stuff in /proc. I've written quite a few of these for my personal use...perhaps I should spiff them up and release them.
  • Posted by kkkotta:

    Everyone is complaining how Microsoft rigged this test, but no one is surprised or doing anything about it. Why doesn't someone just sponsor an HONEST test, and let the best system win.
  • Posted by smich:

    Wasn't the test supposed to be about how NT can scale up but Linux can't? We all know the test was rigged, and I'd like to see it run again, but over a range of machines.

    What does NT vs Linux look like on a 486/66 with 8 megs of RAM? Hmmm?
  • Posted by kewlmann:

    I work for a fairly large company that's coming out with a kick ass machine pretty soon. It's enterprise class for sure. Lots of processors. I would love to see Linux tuned for it.
  • Posted by The Masked Miscreant >:):

    Bear in mind that Intel's integer core is more primitive than either AMD's or Cyrix's current design. That means that for the vast majority of non-3D/CAD software you'll see a marked improvement over a PII 300 when you use even Cyrix's relatively weak MII chip (whatever P rating actually runs at a 300MHz clock speed).

    I sat through an Intel marketing presentation back when I worked retail, and was told flat out that floating point power was more important for word processing, and that every processor that runs at a given clock speed generates the same heat, because it's the little clock crystal that generates all the heat (not the resistance as electricity flows through the chip).

    Just goes to show you can't trust marketing to give you straight facts.
  • The difference between you and MS, obviously, is that you have integrity and pride.
  • Yes the study was flawed. But I remember a comment by Matt Welsh in which he said that Linux is not properly represented in the high-end machines.

    That is a natural consequence of the open source
    development. Many of the features of Linux are there because some users needed them (scratch an itch, as ESR says...)

    So, bearing in mind that there aren't many Linux
    users with quad Xeons and 4 GB of RAM, it's only natural that issues relating to that kind of machine have a lower priority for the Linux user community.
  • The PC won over the mainframe? Perhaps in the general-utility-computing arena, but for truly obscene loads and outrageous availability requirements, mainframes still rule. Ask E-Schwab, for example, or REI.

    Mainframes will never die. The legacy system of tomorrow will be mainframe transaction-processing systems fronted by SP/2 analysis clusters, with something like Linux or NT gating the whole mess to the web. I can almost guarantee it.

    Of course, Linux is being ported to the S/390...

  • I'm curious about this too. I wonder if people who pay Netcraft money get that kind of breakdown.

    One thing to remember is that Netcraft counts domains, not IP addresses. ISPs that host sites for clients probably dominate the survey and most of them run Apache. Companies that run their own web site off a T1 or ISDN line probably favour NT, but they would be in the minority.

    I like to take each month's numbers and pretend that nobody is switching (clearly not true, but it might be a good approximation). This month, for example, Apache gained 423063 sites and MS gained 132943. So new Apache sites outnumbered new MS sites by MORE THAN 3 TO 1! That's impressive.
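    (For the record: 423,063 / 132,943 ≈ 3.2, hence the "more than 3 to 1".)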

  • by sterwill ( 972 )
    I've seen very little comment that Linux might actually really be slower on a 4-way. I would be disappointed, but not amazed if Linux were slightly slower on a 4-way given the maturity of NT SMP compared to Linux.
    Maturity of NT's SMP? Wait, I thought we were talking performance. Obviously you've never used SMP NT. Add a processor and it gains 20%. Add two more and it loses 10%. Scalable like mud.
  • I would still like to see some more conclusive stuff on Linux's high-end SMP abilities (4 or more processors), both on i386 and on Alpha and UltraSPARC. Alan Cox claimed there were some speedups to SMP late in 2.1.x (I think) that should have significantly improved high-end performance. Perhaps a VAR would sponsor a test?

    If Linux doesn't beat all on up to 16 or so processors, we should fix it.

    It's not that simple. Currently the biggest problem with the Linux SMP implementation is the IO subsystem. (SCSI and IDE, etc.) The problem is that this subsystem isn't SMP safe. So whenever the kernel enters this portion, it grabs what is known as the kernel lock. Thus disk activity can only happen on 1 processor at a time (bad). In Linux 2.3 they're going to strip this away, so that Linux 2.4 (or 3.0, or whatever) will probably scale far better than Linux 2.2 (of course they'll be making other nice optimizations along the way). But this isn't something you can just fix on a whim.

  • One could interpret it that way, but considering that they willfully avoided being helped to tune Linux (ask for samba help anywhere but on the samba lists/newsgroups, call the wrong helpline, don't ask for the right one, when they accidentally do find a helpful person, ignore him, etc...) it seems that the goal must have been to find that NT outperforms Linux.

    It reminds me of an old Mad Magazine comic: a father orders his stereotypical hippy son living at home to go get a job. The son, dressed in his most casual attire (torn cutoff shorts, open shirt, many beads, etc.), goes to a clothing store and says "You ain't got no jobs, do you?". Later, to his father: "Well, I TRIED!".

    The only difference is, Mindcraft actually did get an offer, so they had to ignore it.

  • Hmm... I think that most of us are well-versed with the arguments against those benchmark results.

    I feel the need to ask why this piece was worthy of "airtime", and can reach only one conclusion: That a refutation can somehow become more valid because it comes from the pen of Eric Raymond.

    I'm now happy that Katz periodically contributes his dozen screenfuls of drivel, and I don't have a problem with Eric either, but would I have got a whole item on /., had I summarised the obvious flaws of that ridiculous benchmark? I doubt it.

    Matthew
    - what happened to our great meritocracy?

  • the PC community *won*?

    what have you been smoking?

    with the rise of n-tier architecture and thin client, mainframe-style computing is getting stronger than ever. centralised data and logic, just the display on the desktop.
  • I've seen very little comment that Linux might actually really be slower on a 4-way. I would be disappointed, but not amazed if Linux were slightly slower on a 4-way given the maturity of NT SMP compared to Linux.

    I would like to know if Linux does scale as well or better than NT with 4 and 8 processors -- both systems properly tuned and using the same webserver. When that question is answered, I'd like to know what to expect in the future. Is Linux going to leave NT in the dust, or will this be the key niche ground for NT servers that Microsoft will defend to the end, and Linux will never conclusively defeat?
  • As an extremely unhappy user of an SP/2 system, I hope the SP/2 isn't the system of the future. I can do some things faster on my Pentium 90 with Linux than on the SP/2 system. We have 32 nodes with 4G each and 2 terabytes of disk spread out over them. With this setup, you'd think things would be fast. The only thing that happens fast is the corruption of my data by their filesystem. In short, the SP/2 system isn't made for databases or web serving. It's meant as a compute box (and it doesn't even do that well!)
  • It isn't the sp at OSU, I don't even know if they have one. We also use a proprietary dbms that searches roughly 2 terabytes of data. The IO performance is bad, and if CPU usage is high, IO drops to about one read a minute.
  • I think it'd be best to have one set of benchmarks for default configuration, one set of amateur tuning, and one set of tuning by experts for each of the platforms tested. Get a linux expert... someone from the Samba team, someone from the Apache team... get an NT expert, someone who works at Microsoft, I guess (though not just anyone, obviously). And let them go to town.

    Sometimes people set their systems up by just leaving them in the default because of laziness/time constraints, whatever. And sometimes a minimal amount of configuration is done, but not a whole lot. And then there are performance freaks. Got to show the spread...
  • While our box is not SMP, I very much doubt that SMP on 4 processors (which even 2.0.36 was known to cope quite well with, especially for simple tasks like web serving) would cripple a box to the extent provided by the mindcraft survey.

    I can see an SMP system to only improve performance up to a point, eventually hitting a limit (e.g. 8 processors won't be twice as good as 4 because of time spent scheduling), but to see the effects that mindcraft saw you would have to do something pretty crafty...

    Matt.

  • I think the total cost was ~£3500 UK.

    If you read what I wrote you'll realise I did drop Apache and recompile - I just used RPM's. I don't really think the problem is the compiler used for Apache - I think most of the work is in the kernel, managing processes and file caching. I guess a pgcc compiled kernel would be a better option - but that's not something I'm desperate to get into - this system took me 1/2 a day to build - I don't have the time to really increase that.

    A better option would be to just use Apache for mod_perl processes, and use thttpd for static content, and use squid to proxy requests to the right port. I think we'd probably blow away even our own estimations with thttpd.

    Matt.

  • You're only right on the kernel front - that hasn't really been used enough on high end machines AFAIK.

    But both Samba and Apache have been heavily tested on very high end servers. The Samba crew have even been heavily involved in making Samba fast on high end servers.
  • by Matts ( 1628 ) on Friday April 23, 1999 @09:51AM (#1920190) Homepage
    Please note that the dejanews reference that ESR links to is quite wrong. The presence of %h does _not_ cause host name lookups under Apache - only the directive "HostnameLookups On" causes that to occur, and I don't believe that was the case here. (See the httpd.conf fragment at the end of this comment.)

    I strongly believe however that their httpd was running under inetd, and that would cause the effect they saw.
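
    For reference, a minimal httpd.conf fragment showing the distinction (Apache 1.3 syntax; the log format shown is just the standard common log format):

    HostnameLookups Off                           # with this off, %h logs the client IP address; no DNS lookup
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    CustomLog logs/access_log common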
  • by Matts ( 1628 ) on Friday April 23, 1999 @09:08AM (#1920191) Homepage
    At a large company I'm working with we're trying to prove to the phb's that Linux is a good thing. The mindcraft study set us back a ways. So what did we do? We did our own tests.

    Server:
    - Hand built by our best hardware guy
    - PIII 500 (single CPU)
    - Adaptec 2940U2W SCSI Adapter
    - 10,000 rpm LRW drive. 1 drive only.
    - 100Mb/s network card
    - 256Mb PC100 RAM.
    - Linux 2.2.6, upgraded from stock Linux-Mandrake box
    - Apache 1.3.6, configured for best performance.

    No changes to the /proc fs to speed things up. Stock kernel options selected from "make xconfig". Apache was the apache+mod_perl srpm found on redhat/contrib, compiled with no configuration changes. We didn't test NT on this box - we were trying to compare against Mindcraft's results.

    Want to know the results so far?

    Well, we can get about 2200 requests per second out of that box. The Quad Xeon NT box that mindcraft tested got 3700 requests per second at its maximum rate. We are at very early stages so far, and I think I can squeeze more out of the box by dumping Apache and using thttpd or something else that uses a threaded model. But since this is to be a pure mod_perl box I don't think that's important.

    Things to remember:

    The mindcraft server had 1Gb of RAM.
    The mindcraft server had RAID (RAID/0 I believe).
    The mindcraft server had 4 10/100 network cards.

    We're so far pretty pleased with our little Linux box... It was a fair bit cheaper than Mindcraft's server....
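
    (They don't say which load generator was used to get these requests-per-second figures; for anyone wanting to try the same thing against their own box, ApacheBench -- the ab tool that ships with Apache 1.3 -- is one readily available option. A rough sketch, with a made-up hostname and document:)

    # 10,000 requests for a small static page, 50 at a time
    ab -n 10000 -c 50 http://testbox.example.com/index.html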

  • From the linux-kernel list:
    Mindcraft also used the v0.92 MegaRAID driver. An SMP race condition was fixed in v0.93 which was almost certainly available from the AMI web site long before the Mar 10-13 test. So SMP NT "beat" a non-SMP Linux on a quad-Xeon server. Big hairy deal.
    Original poster is "Doc" Savage. Original post [linuxhq.com] 14 Apr 99.
  • Hmm, Linus has one, so I would expect that the kernel at least would operate very well on it.

    If the kernel operates well then it is highly likely that other software will get a boost.
  • by cjr ( 2590 ) on Friday April 23, 1999 @08:30AM (#1920194)
    Here is what the ITWeb editorial says:

    "Linux supporters have reacted violently to the Microsoft SA release (Independent research shows NT 4.0 outperforms Linux) published on ITWeb yesterday, saying "the study was paid for by Microsoft" and that "a very highly-tuned NT server was pitted against a very poorly tuned Linux server".

    That is, the claim attributed by Eric to Ian Hatton was really made by reacting Linux supporters.

    What Hatton did admit, was:

    "Microsoft did sponsor the benchmark testing and the NT server was better tuned than the Linux one."

    This isn't much, but it is sufficient. Hatton admits that "the NT server was better tuned than the Linux one", and even without adjectives that invalidates the report.

  • My old group at JPL actually bought the GNUPro tools a few years ago. The deal is that you get access to prerelease versions of GCC, as well as technical support. They needed the prerelease version because they were using a lot of hairy C++ features and GCC 2.7.2 just didn't cut it. Then again, neither did the GNUPro tools: they had the features but lots of bugs too (since they were beta versions).

    Now that EGCS is available under a more rapid release schedule, you probably won't need GNUPro just to get your code to compile, but it might be a good deal if you want the latest PII/PIII optimizations that haven't been rolled into the public codebase yet. You also get a visual debugger and some other goodies.

    Here's a press release [cygnus.com] on the PII/PIII optimized GNUPro tools that will be available next quarter.

  • "Remember the mainframe days? Shortly after the PC came out, a torrent of similar "debate" emerged from the mainframe community. First they laughed, then they fought, then the PC community won. Suprise. History repeats itself."

    Well, you have to define "won" pretty carefully. If you mean that the group that controlled the centralized (mainframe) resources was forced to give up complete control of information management services, then yes, the PC "won".

    But keep in mind that big dollar, mega-user, high-bit-rate applications are almost always run on IBM (or compatible) mainframes. Or on mainframe-class minis (Sun, etc) that are designed, installed, and operated using mainframe class operations discipline. And centralization seems to be on the rise at the moment, not on the decline.

    sPh
  • I am not surprised by how "blatant" this whole episode is. Relatively speaking, this is nothing compared to the FUD heaped on OS/2 by MS after the split with IBM.

    Those who witnessed "OS Wars" of the early, mid-90's are well aware of the ability of MS to bludgeon superior technology into submission through marketing.

    The main (and important) difference I can see is that today MS has less credibility, and their target, Linux in this case, has no corporate "owner" like IBM. IBM sat idle while MS (and the IBM PC Company, "MS' biggest customer") and the trade press trashed OS/2 into oblivion.

    I do not see the Linux community standing idle and taking it; ESR's post is a fine example of this. Note that benchmarks like Mindcraft's were done with NT vs OS/2 over and over with no real response from IBM. The OS/2 users who protested were categorized as "zealots" and written off. On Compuserve, false user accounts (see "Barkto") were alleged to have been created to depict "real users" who then went on and on about serious OS/2 errors that "trashed my hard disk" and "my backups", ad nauseam. (Such reports were then published in PC Week, Infoworld, Computerworld to drive home the FUD.)

    MS has its hands full trying to FUD Linux into obscurity. But be assured, they are experienced at this type of "warfare" and will attack furiously. With such deep pockets, I expect they feel a war of attrition is winnable.

    This remains to be seen. The Linux community is not an impotent IBM. And today, we have a maturing internet to get some real facts distributed that the traditional "legacy" trade rags tend to not report.
  • You guys bring tears to my eyes, matching a fully tuned $25k NT box against an untuned $3000 Linux box.

    If you want some more performance, I suggest moving to a dual system. My experience in SMP linux has only been with dual systems, and it's been very good. In addition, there are some very nice, cost effective dual motherboards out there. Tyan makes one with onboard aha2940 SCSI, Intel 10/100mbit ethernet, and sound too :). Hey, if it's good for /., it must be good enough for everyone else.

    I wouldn't be surprised if with some tweaks to your box configuration you could make it as fast as the NT box without any hardware mods, perhaps by following some of the advice Eric links to...

    ----

    I would still like to see some more conclusive stuff on Linux's high-end SMP abilities (4 or more processors), both on i386 and on Alpha and UltraSPARC. Alan Cox claimed there were some speedups to SMP late in 2.1.x (I think) that should have significantly improved high-end performance. Perhaps a VAR would sponsor a test?

    If Linux doesn't beat all on up to 16 or so processors, we should fix it...
  • I wonder how good PGCC is on P6-core processors. In my experience, the key on the P6 is not pairing, like it was on the Pentium, but avoiding partial stalls, which empty the pipeline and really fsck things up.

    In our own tests, we found that Visual C++ 5.0 (otherwise an excellent compiler) has an ftol() that stalled like crazy on PIIs, eating 10% of processor power in Fire and Darkness. How good is PGCC at avoiding similar problems?

    Are there tools under linux (analogous to Intel's VTune) for analyzing this?
  • There have already been plenty of demonstrations that Linux works well on small servers. What's really needed now is an impartial test run on a nice big SMP box with oodles of memory and a decent RAID array -- the system Mindcraft were (ab)using would do fine -- to demonstrate that, especially with 2.2 kernels, Linux scales quite well.

    Remember that a certain number of sites really need big-iron servers (hey, slashdot isn't exactly gentle on its hardware, although in that case I suspect database performance may be more of an issue), and even when they don't it's the results from high-end server tests which impress the management the most.

    Having seen Linux/SMP in action and made some subjective judgements I'm quite confident that, properly configured, it ought to scale fairly well onto hardware of the class Mindcraft were `testing'. But it would still be nice to have some numbers...

  • I think you're a little bit out of touch with the vast majority of the Linux community.

    One of Linux's prime virtues is what it can do with older hardware. I have a friend who wouldn't believe that the Linux machine she was using at my house was a P90 because Netscape/WordPerfect/etc. "felt" as fast as W95 on her PC (a PII/333).

    I have three Linux boxes, the (dual) P90 mentioned above, a K6/166 that acts as my web/mail/ftp/telnet/IRC/MOO/everything server and my 386/25 laptop that I use to do my homework.

    I would wager that the vast majority of Linux boxes are not high-end monsters but old machines that are "worthless" in the eyes of many people. That is the true power of Linux in my eyes, an attribute that I think is all-too-often ignored here lately.

    IMHO, YMMV.

    --
  • Actually he said that he was comparing results with the NT box, which was tuned to the max. Nice results!

    ElpDragon.

  • I don't know where you get your definition from, but where I come from there's nothing good about fear, uncertainty, or doubt.

    You definitely do NOT have to say anything nice to sling FUD. It's not in the definition.
  • I didn't see any reference to duplexing or ethernet card configuration. In Red Hat's default configuration I have noticed that it selects 10Mbit most of the time, and HALF duplex all the time.

    The only way out of this is to pass an options= parameter when loading the ethernet module. (A hypothetical example follows this comment.)
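
    A hypothetical /etc/conf.modules entry for, say, a 3Com card driven by 3c59x (the module name and value are purely illustrative; the meaning of options= is driver-specific, so check the documentation for the driver your card actually uses):

    alias eth0 3c59x
    options 3c59x options=0x204   # illustrative value; on this driver 0x200 forces full duplex and 4 selects 100baseTx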
  • No, you don't fight fire with fire. You fight it with water. You fight FUD with the truth. You made a valid point, though, when you said that unbiased testing cannot be done or paid for by the Linux community. Nor can it be done by Microsoft. So what is needed is for the Linux community to throw down the gauntlet. Challenge Microsoft to a duel. Let both sides agree upon an independent referee to determine how each system performs on a series of benchmarks. Half of the testing criteria could be determined by MS, the other half by the Linux community. The types of machines involved would be agreed upon and each side would be responsible for optimizing their own setup.
  • Then it shouldn't be tough to distribute freely: just one person buys it and then gives it to the world.
  • First - sponsoring a rebuttal in the
    form of another benchmark will have to
    be left to the Distro guys making money...

    Cause it costs money to put together such
    a test system, or purchase NT for that matter!

    On the other hand - putting out a decent
    rebuttal in the form of accurate criticism
    such as ESR has done(I REALLY like his
    article) is perhaps the best way to point
    out that Emperor Bill isn't wearing any clothes.
    The only remaining trick is to get that
    rebuttal circulated amongst the press
    widely. ESR has the credibility to get
    quoted in such places. Looks like a
    good combination, and the right path
    to me.

    Steve
  • by dvdeug ( 5033 )
    Actually GNUPro is not GPL'ed, only the compiler part is. Also, the $7000 cost includes a support contract.
  • I'm rather ambivalent about some of Eric's latest moves, but he has written software and tried to do a good job. These rather petty comments beg the question: what have you done for the open source movement?
  • by law ( 5166 ) on Friday April 23, 1999 @08:51AM (#1920210) Homepage
    Good summary.
    Seems to me that what we really need is a benchmarking rebuttal; is there another
    benchmark going on? I saw that in Jeremy Allison's article he was working with PC Week;
    does anyone else know of any other active benchmarking going on?

    I think that the only way to counter FUD is education, and benchmarking can go a long
    way.

    I have about 7 Linux servers with no downtime and great performance on lesser hardware than
    my commercial servers in my company; that should be proof enough, but my pointy-haired
    boss still asks "Why not NT?". I do not need any more fuel for that fire.

    We need Benchmarks on larger servers, with more memory, RAID, and a high-end server
    guide.
  • Any decent UNIX admin can performance tune a box. There are books (and books...and books...) which describe the process. So I don't think we need a howto to teach performance tuning.

    What we DO need is a HOWTO describing the idiosyncrasies of doing this under Linux. What parts of the tuning are in the kernel and need recompilation? Where are the tweakable parts? What needs to be frobbed under /proc? (A couple of illustrative examples appear at the end of this comment.)

    From there, it's a short jump to the developers of Apache and Samba to say "increase the PROC table", or "increase the file buffer area". This advice would apply to all architectures. (But even if they don't tell us how, a good admin can probably figure most of this out.)
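
    For instance, a couple of the /proc knobs such a HOWTO would need to cover (the values here are only examples, not recommendations):

    cat /proc/sys/fs/file-max             # system-wide limit on open file handles in a 2.2 kernel
    echo 16384 > /proc/sys/fs/file-max    # raise it for a busy server
    echo 49152 > /proc/sys/fs/inode-max   # inode-max is conventionally kept at a few times file-max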


  • Sure, maybe a good admin CAN figure this out. But I am an Oracle DBA primarily, not a Unix administrator. Sure, I can keep a box up and do the necessary maintenance and generally perform that job, but not with the nuances and expertise with which I can administer an Oracle database. I would like to have a good source to learn this stuff without having to change careers to do it.

    And then there are those who aren't GREAT administrators. We all know 'em and have met them. They can do their job (well, some can't) moderately well, but not very well.


    Stick Boy
  • Good article, Eric, and I'm proud of the Linux community for the way we've reacted to the Mindcraft "benchmarks". I think the Linux community fought back, but in a mature, balanced way. I think it is important that we continue to do so, and try not to appear too much like reactionary fanatics, which sometimes happens too.

    Also, I must say I really learned a lot by following the debates. Next time I need to install Apache and Samba you can bet I'll be referencing the responses to Mindcraft to see the proper way to optimize this stuff.

    Kudos and thanks to the Linux Community!
  • Without being too fanatic, I think that we all should inform any magazine publishing the Netcraft results (and thus concluding Linux is sh** compared to NT) of the facts and unreliability of this survey.

    Admit, "M$ guilty of consumer fraud" is a better headline than "NT beats Linux on all fronts".
  • It would, IMO, not be a good idea to get Mindcraft to do another test.

    It would be a far better idea to get a truly _impartial_ party to re-do these tests, with proper help from the Linux community. Then we'll see the results!

    I simply don't _trust_ MindCru^Haft.
  • The VMS guys have been used to this kind of FUD for a while. They call it 'BenchMarketing', which I think is a clever term.

    :^)
  • One of the things to keep in mind is that you're never going to get more out of this (1 CPU, 1 NET card) configuration than the mindcraft NT performance figures: The peak NT performance was above 100mbps.

    Some people are suggesting using squid to direct requests to apache for the complicated stuff and to thttpd for the simple stuff. I personally would try to make a tool that would change all local URLs to include the port number for all references that are a simple file. This might not be allowed for the benchmark, but it sure would help in the real world.

    Roger.
  • The language makes it very unclear whether the quote came from the Linux supporters or from the Microsoft SA release.

    Of course, it's unlikely that the quote came from the Microsoft SA press release (it is referring to the original brag release: "Independent research shows NT 4.0 outperforms Linux"). I think that cjr was quoting Linux users and therefore Eric was wrong to attribute it to Microsoft, much less to Ian Hatton directly.
  • In the wake of mounting "corroboration", it may now, more than ever, behoove the Linux Industry to respond to the Mindcraft challenge. "False", "rigged", "FUD" - these are the cries of the outraged knowing, but where corporate IS is concerned they fall on deaf (or "endeafened") ears.

    We must remember that up until barely a couple of years ago, Linux had practically no coverage in mainstream media and was principally the purview of a relatively small group of dedicated Internet hackers and computer enthusiasts. Linux did not make its way into the hallowed halls of big business by attrition the way Microcrap did. Rather, Linux found its way into corporate IS by way of covert installs and closed-door set-ups thanks to the efforts of IS geeks and tech-heads. Unfortunately, however, these are not the people that dictate corporate computer-systems policies... IT management does. If it weren't for the graces and virtues of Linux (stability, availability, cost-of-ownership, etc.), IT management would likely have mandated its removal long ago, relegating Linux's hope of survival back to the domain of devoted hackers and geeks.

    Now, with the claims produced by Mindcraft and their corroborators, and the direct challenge by Microcrap, IT management can be expected to watch closely the Linux Industry's response to these claims and challenges. An outraged and snorting response is not likely to engender sympathy, but rather apathy! This cannot and will not be good for the future of Linux in the eyes of corporate IT management. Big vendor applications will not be enough, big vendor investments will not be enough, user support will not be enough, media support will not be enough, the voices of evangelism will not be enough! These things and more are already on the side of the corporate IS OSes Linux is climbing up against.

    In the end, it will be the non-geeky, non-technical IT management force that will decide whether Linux lives or dies in their venue. They do not read Slashdot; they read the Wall Street Journal, they read the New York Times, they read PC WEEK and PC Magazine, but most of all, they listen to the BS of Microcrap! Win or lose, meeting the Mindcraft challenge can only be in the best interests of the Linux Industry.
  • Ummm That should be Mindcraft right?

    This is an alarmingly common mistake - poor Netcraft.
  • by Signal 11 ( 7608 ) on Friday April 23, 1999 @08:26AM (#1920221)
    Well, I think this incident has damaged microsoft's credibility, but that's beside the point. Microsoft isn't talking to us, the technical community. They aren't trying to convince us that NT is better. For those of us in server closets, in the operations center, and in system administration - we already know the truth. We don't need benchmarks and statistics to tell us NT is unreliable.

    The plain fact is, Microsoft did this to appeal to middle/upper management, not us. They need to keep feeding them reasons to keep their NT investment without looking stupid. Remember the mainframe days? Shortly after the PC came out, a torrent of similar "debate" emerged from the mainframe community. First they laughed, then they fought, then the PC community won. Surprise. History repeats itself.



    --
  • I remember when I first implemented the RC5DES cow on my home Linux box (AMDK5PR90) and on a mostly idle Win95 workstation at work (Intel P166), and my home Linux box was running circles around the blocks of the Win95 workstation (in fact, a few of my friends were embarrassed by their WinXX Intel PII's for not being all that much better than my Linux AMDK5PR90).

    So, it makes me wonder about this test as well... Is it possible for someone to tweak a common-joe-affordable Linux box to outperform a supercharged, out-of-affordability-range NT box? Can someone duplicate the server load from the Mindcraft test on a highly tuned Linux PC and show that Linux can beat NT even when Linux is on a smaller machine?
  • by kzinti ( 9651 ) on Friday April 23, 1999 @08:27AM (#1920223) Homepage Journal
    What strikes me about the entire "Mindcrap Affair" is the resulting coverage. I can recall seeing only one press article covering the original story (the "benchmarking"), but I have seen many press articles covering the resulting controversy. Of course, my impression may be biased because I take pointers to news stories from Slashdot and Linux Today. On the other hand, I have done some looking outside of the "linux community", at sites such as CNet News, and they definitely seem more interested in covering the fiasco than in the original benchmark. Maybe these sites too can smell a rat.

    --JT

  • An anonymous user wrote:
    In this case, it looks like even MS was suspicious of how poorly Linux fared. My guess is that MS knows that even a well-tuned Linux will lose this particular test, otherwise they wouldn't have hired Mindcraft to run it in the first place.
    That's very easy to say, particularly from your anonymous vantage point. I don't see any reason to suppose, when you add up all the positive tweaks that were done to NT and the negative tweaks that were done to Linux, that the result has any connection with reality.

    If MS really thought Linux would lose a fair test, we'd now be seeing the test results with a detailed, reproducible description of what was done and it would clearly indicate NT the victor, comparing apples to apples.

    Instead, we have an extremely flawed study which Microsoft paid for, which looks to exhibit systematic bias. Coincidence? Could be. ;)

  • Please call the imposter MindCraft, not netcraft...

    I'm beginning to think that this is all a conspiracy to undermine Netcraft :)
  • I think your version would test the abilities of the respective administrators much more than the software, and is nowhere near the real world. A sysadmin has available plenty of resources to find information and ask questions, not just a limited 24 hours. I think two *groups* should be given identical machines, have each set the box up as best they can in a week, and then run the test. Each group can get outside help, of course, but it must be limited to public information. To prevent "we applied patch xyzzy which makes the kernel faster when doing z but six times slower at y, but the test only tests z", the groups could instead generate a list of things to do to the box to tune it and have someone who can just get around do the monkey work...
  • I have tested a dual PII/400 with pretty similar specs in Linux:
    3COM 905 10/100
    onboard SCSI on Gigabyte Motherboard
    256 megs of RAM
    9 gig Quantum SCSI hard drive

    The results were about 20% above the quoted figures for the quad Xeon. Perhaps I'm wrong, but my $1649 system shouldn't be faster than a $25,000 system. Linux SMP is not nearly as stunning as BeOS SMP. Are there issues in Linux with 4 gigs of RAM slowing down the system? Last I checked, Samba only supports 2 gigs.

    By the way, the Mindcraft config favored NT quite obviously by using the NTFS file system and RAID 0, which alone should roughly double HD access speeds.
  • Sorry, but a test (particularly one that is supposed to be objective and scientific/engineering in nature) has but ONE GOAL: to determine the facts as objectively as possible.

    That isn't 'your' goal or my goal, or Saddam Hussein's goal. It's THE goal, PERIOD. Anybody who makes statements like that admits they run a bovine excrement factory instead of a testing facility.


  • Remember the mainframe days? Shortly after the PC came out, a torrent of similar "debate" emerged from the mainframe community. First they laughed, then they fought, then the PC community won. Surprise. History repeats itself.


    Of course, IBM didn't help themselves in the early 90s when they alienated their customers by trying to withdraw all their mainframe source code. That was about the time we started looking into Unix systems. BTW, we're going to dump our entire machine room full of IBM mainframe equipment at the end of the year ... and the source issue had a lot to do with it.
  • Hasn't anyone considered for a second that perhaps their definition of test goals is "determine which OS is a faster web server in a RAID/SMP system" or some other criteria?

    Not to defend the report (it's not Scottish, so it's craaaaaaaap :) but I read that sentence in a completely different way.

  • A Quad Xeon is the highest end box that NT runs on (barring Alpha).

    Note that WinNT is driving the "high-end" x86 hardware market. Vendors like Dell and Compaq make boxes with only 4 CPUs because that's all vanilla NT will support. When Win2000 comes out, it will support 8? processors, which means the hardware companies will immediately follow with 8-CPU iron. (Implicitly making this hardware available to some Linux folks.)

    Of course a better benchmark would be the $50,000 NT/Dell box versus the $50,000 Sun/HP/DEC box, etc.
    --
  • A few years ago, nobody in their right mind would have proposed any x86 (Novell, OS/2, NT, Linux) solution for anything other than workgroup filesharing or a ccMail postoffice. The fact that you can now seriously consider Linux or NT as a contender at the low end of the midrange market is primarily due to the advances in Intel hardware.

    In a few years, Solaris, Tru64, HP/UX, Linux, and NT will all be running on essentially the same Intel IA64 hardware. At this point, the appeal of NT's one-size-fits-all design is going to start breaking down. But on the other hand, hardware equality is going to get Microsoft's salesmen in the door for midrange solutions that were previously above their heads. And Microsoft is more price competitive than commercial Unix, so NT deployment is probably going to increase in this market, not decrease. (Same argument for Linux.)

    --

  • If I understand correctly, those numbers are public webservers only. MS IIS's market strength has been internal intranet solutions (where there's probably an existing NT file+print setup). IIS's intranet market is probably going to be going up, not down, as things like the Office 2000 server get deployed.


    --
  • Following myself up, Allison does have a message for those who take the Mindcraft bench at FUD value alone:

    The study has also shown that the knowledge of how to tune filesystem performance in Linux is equally obscure. We need to do a better job of educating Linux administrators about how to get the most out of their systems.

    I'm sure there'll be more benchmark disappointments in store for us. After all, how else do we learn what we need to fix? But the strength of open source is that we can face the errors without trying to deny them. We just fix 'em and move on.



    --

  • The 'Rush Limbaugh' principle is a very valid point, especially in this context. Don't forget the target market for this study is Microsoft partners and WinNT-based shops.

    Aside from all the meaningless numbers (who cares if your web server can saturate a 100BT line with static pages!), the study drives home an important point to NT administrators - if you've invested in a high-end IIS system, and you've got it tuned, there's probably no good reason to switch that box over to Linux. If the Linux box was tuned correctly, I doubt the difference would be that great performance-wise.

    Of course, the study didn't address stability, which is the number one problem with IIS.
    --
  • by IntlHarvester ( 11985 ) on Friday April 23, 1999 @03:21PM (#1920236) Journal
    I just took a look at the linked article written by Jeremy Allison of Samba.

    A few interesting points -

    * In the often referred-to ZD Samba versus NT benchmarks (where Linux+Samba wins), the Samba/Linux configuration was tuned by a Samba team member. Objectively, this makes the ZD benchmark actually less valid than the Mindcraft study, because as far as we know, a Microsoft-employed SMB developer wasn't actually there tuning the server.

    * Tuning Linux properly involves cryptic commands such as:

    echo "80 500 64 64 80 6000 6000 1884 2" >/proc/sys/vm/bdflush
    echo "60 80 80" >/proc/sys/vm/buffermem


    While I'm sure these commands are documented somewhere, this sort of tuning makes the NT Registry Editor look like a model user interface. Low level tuning like this really needs a nicer front end, or preferably, a daemon which monitors system activity and dynamically tunes these settings.

    It sounds like the Mindcraft study has been a kick in the pants for the Linux community to get some high performance documentation together. I'd like to see a nice How-To which lays out some of the more obscurantist tricks such as echoing strings to the /proc filesystem.
    --

  • Okay, so Netcraft says that Apache's market share is 1.3% greater than the previous month, and IIS's market share is 0.41% smaller.

    But what does that mean?

    Netcraft also says that the total number of web servers just exceeded five million. Is all of this Apache vs. IIS activity happening on existing web servers, or on the new ones? Is Apache growing slowly-but-steadily across the board, or is it growing like a weed on new web servers, while market share on the existing ones remains frozen? That's good news, too, but it's different news. Among other things, it would suggest that people aren't so unhappy with IIS that they're willing to put up with the annoyance of moving to a different server.

    I dunno, I'm just wondering.


    "Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
  • by Venomous Louse ( 12488 ) on Friday April 23, 1999 @09:45AM (#1920238)

    The truth or falsehood of the Mindcraft study is irrelevant to its intended audience. The point is to give NT "believers" something to quote in arguments, that's all. It's the Rush Limbaugh Principle. In a disagreement, it's helpful to have official-sounding statistics to back up your point. It doesn't matter where they came from, and it doesn't matter whether they're even remotely accurate. What counts is that somebody "important" (read "well-known") said it in public, which "validates" it. This "validation" isn't about truth. What it means is that the proper forms have been followed, and so it's acceptable to introduce the "evidence" in an argument. What's being offered is not evidence in the conventional sense, but the appearance of evidence, or the outward form of evidence. In poker, what does the four of diamonds mean? It means the four of diamonds. It's pure, disembodied symbol.

    Disagreement and debate in our culture (especially on the net) isn't a whole lot less stylized (nor a whole lot less predictable) than Noh drama. You have to play by the rules and observe the forms. The content of the Mindcraft study is arbitrary. The study is a signifier, or token. A yacc parser says, "hey, this token is a function, hey, that one's an operator." The actual content of the token is not significant; what matters is what kind of token it is.

    Everybody should learn at least a bonehead popularized minimum of semiotics (which is all I know, obviously :)

    While we're at it, let's be honest with ourselves: How many of us are going to check Eric Raymond's facts for ourselves -- even to the minimal extent of clicking on the links he provides? And how many of us who don't check the facts are going to run around repeating them? Quite a few, probably. Dammit, I think Raymond's right on the money with this, and I'm confident that he's done his homework -- but I don't have the time to go about proving it. As far as many of us are concerned, Eric has given us a counter-signifier. Some "good spin" to match against the "bad spin". (That makes it sound dishonest, but IMHO if the "good spin" is factual and accurate, then "good" is a perfectly reasonable thing to call it.)

    Think about it.



    (Experienced sysadmins are a bit of a special case here. They can judge for themselves. The Limbaugh Principle applies mainly to people who are arguing in an area outside of their field of expertise -- I don't recall who it was who said that "every man is gullible outside his specialty", but it's true even of the best of us.)


    "Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
  • Why don't we just send a couple of Linux gurus down to netcraft and tune the Linux side of that machine? I'm sure they still have the NT side rigged up. Then run the tests again and watch Linux beat NT on every test!
  • You might want to check out Cygnus's GNUPro utilities, which have lots of PII/PIII optimizations, I think. They're fully GPL'd, too (not free, though, they cost $).
  • Ya know, Oscar Robertson was a damn good basketball player in his day, but I don't think there are too many teams that would take him as a player today. He's still a great person, but his best athletic days are long behind him.

    Similarly, Raymond's better days of advocacy are long behind him, back around the Cathedral and Halloween I and II days. Ever since then, it seems like he's turned into a self-serving egomaniac who can't take one lick of criticism, committing one blunder after another.

    Sure, he didn't do any damage with his post here, but I remember seeing at least a dozen other people discussing this much more effectively. Not to mention days ago. If someone's going to weigh in with their opinion so long after the fact, it better be good. This wasn't by a long shot. It was basically, "I am ESR, I have now come to allow you to listen to my infinite wisdom on the subject. Feel grateful, I command it!" What a boob. The guy only remains a player because people like Rob (or whoever posted it) take anything he says, no matter how inconsequential, no matter how many other people said it earlier and better, and elevate it to Topic status. It creates a self-fulfilling fame, like Zsa Zsa Gabor or Charles Nelson Reilly, where a person ends up being famous simply for being famous, and you eventually can't even remember what made them famous in the first place. Lame.

    Cheers,
    ZicoKnows@hotmail.com

  • What is missing in all this -- and I'm afraid in ESR's rather sloppy summary as well -- is any sense of whether benchmarking is reliable in the least for analyzing application processing. Things are bad enough with processor benchmarking -- remember the tuning code in Quake?

    In databases, things are much worse. The TPC [tpc.org] has been wrestling with these issues for a decade now and still doesn't really have a good handle on it. It is too easy to put your thumb on the benchmarking scale without anyone noticing, and make the results go the way you want. This is true even if the vendors themselves do the tuning.

    Even if you can equalize the platforms, it still gets down to issues like the mix of instructions in the test suite. The TPC has wrestled with that one for years, trying to avoid tipping the balance toward any given vendor.

    Schematically, of course, Web servers are database servers in their own right (quite apart from the fact that they may also be requesting data from an actual database).

    Frankly, I'm even more appalled at Mindcraft (and by extension, Microsoft) for pushing this "study" the way they did. It was at least borderline unethical, given their admittedly lame-if-you-are-being-kind-about-it effort to equalize tuning between the two systems.

  • Well, it probably depends on what you're doing with the machine. At my job we have a Sun running our Web server, and we're secretly running a PII 400 under the table with Linux. Basically, for programs (usually Perl scripts) requiring large amounts of drive access, the Sun generally blows the Pentium away; obviously this is barely even a matter of which processor the machine's got, but rather of I/O capability and memory (management). However, for scripts that are just doing a lot of 'text-crunching', the Pentium is faster. Admittedly it's usually running a lighter load, but I still think it shows that the Pentium is a contender in some areas. (A rough sketch of that I/O-bound vs. CPU-bound split follows below.)
    As far as I know, Suns and the like are optimized for moving around large amounts of data, whereas x86s are more optimized to crunch numbers. For a home system with one full-time user, I'm pretty sure a multi-processor Intel box with a bunch of memory will give a lot more bang for the buck.

    chris
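    A rough sketch of the I/O-bound vs. CPU-bound split chris is describing (the filename is made up, so point it at any large local file, and the absolute numbers obviously depend entirely on the hardware):

        import time

        def io_bound(path="bigfile.dat"):
            # Dominated by the disk and the OS's I/O path, not the CPU.
            total = 0
            with open(path, "rb") as f:
                chunk = f.read(1 << 20)
                while chunk:
                    total += len(chunk)
                    chunk = f.read(1 << 20)
            return total

        def cpu_bound(iterations=1000000):
            # Pure "text-crunching": the disk never gets involved.
            text = "the quick brown fox jumps over the lazy dog " * 20
            count = 0
            for _ in range(iterations):
                count += text.count("o")
            return count

        for name, fn in (("I/O-bound", io_bound), ("CPU-bound", cpu_bound)):
            start = time.perf_counter()
            try:
                fn()
            except FileNotFoundError:
                print("%s: skipped (no test file)" % name)
                continue
            print("%s: %.2f seconds" % (name, time.perf_counter() - start))

    On a box where the first number dwarfs the second, faster disks and a better I/O path buy you more than a faster CPU, and vice versa.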
  • Your comments validate the fact that Linux can make far better use of a lower-end box, but the comparison MIGHT indicate that Linux SMP is not too impressive. The Mindcraft box appears to be roughly four times the machine you tested with, yet the number of connections is only about 1.7 times as high. Granted, you tuned your box and they didn't tune theirs (and may very well have crippled it), but it may still indicate some lack of SMP efficiency. (A quick back-of-the-envelope on that follows at the end of this comment.)

    If somebody will buy me a system like the one Mindcraft used, I'll be more than happy to benchmark it myself! It might take me a while, though, so be patient with me. If I get the box back to you in, say, 5 years, is that sufficient? :)

    ---
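    A quick back-of-the-envelope on the scaling worry above, using only the rough figures from that comment (about 4x the hardware for about 1.7x the throughput):

        \[
          \text{apparent SMP scaling} \approx \frac{1.7}{4} \approx 0.43
        \]

    That is, if those numbers held up, the quad box would be delivering well under half the per-CPU throughput of the single-CPU box. But, as the comment itself concedes, a crippled Linux configuration would depress that ratio just as effectively as poor SMP support would, so the figure is suggestive at most.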

    Actually it's not so much the 'crystal' (I don't think they are crystals; more likely ring oscillators in PLLs, I would think), but the clocking, that draws a very substantial part of the power (50%?).

    Keep in mind that the clocking circuit needs to drive a lot of transistors, and this takes quite a lot of current!

    But decreasing voltage levels will have a bigger impact on power than frequency will.
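    For what it's worth, the usual first-order formula for dynamic (switching) power in CMOS, which is why the voltage term wins, is roughly:

        \[
          P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f
        \]

    where \alpha is the activity factor, C the switched capacitance, V_{dd} the supply voltage and f the clock frequency. Halving the voltage cuts dynamic power by roughly 4x, while halving the frequency cuts it by only about 2x (assuming, somewhat unrealistically, that the two could be varied independently).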
  • According to the CraftyMind survey, the peak performance on that quad box was 1,000 rps under Linux, versus 3,770 for IIS. This implies that a $25k Dell Linux box is more than 2x inferior to your $3k solution.

  • The Mindcraft box was 1.7 times faster under NT/IIS than the single-CPU box was under Linux. The single-CPU box was 2.2x faster than the Mindcraft box when both ran Linux. So this says nothing about Linux's SMP capabilities.

  • by dillon_rinker ( 17944 ) on Friday April 23, 1999 @08:47AM (#1920250) Homepage
    You have taken your first quote COMPLETELY out of the context of the article.

    "Linux supporters have reacted violently to the Microsoft SA release (Independent research shows NT 4.0 outperforms Linux) published on ITWeb yesterday, saying 'the study was paid for by Microsoft' and that 'a very highly-tuned NT server was pitted against a very poorly tuned Linux server'. In response, Ian Hatton, Windows platform manager at Microsoft SA, says these comments are valid."
  • by Le douanier ( 24646 ) on Friday April 23, 1999 @11:09AM (#1920267) Homepage
    I think a contest would be better than a benchmark.

    In a benchmark, the odds are high that it will be sponsored by one of the parties (M$ in this case).

    If you run a contest instead, say for the best performance/price ratio, you benchmark the performance of each competing team's setup and then divide by what that team spent on hardware (not on software, because given Linux's openness many people would say the price of Linux biased the contest); a made-up worked example of the scoring follows below.

    That way you could have an M$ team trying to tune NT to its best, a Linux/Samba/Apache team trying to tune Linux to its best, a Novell team, a Sun team...

    You could choose the hardware so that small teams can try to compete. Even companies unrelated to NT/Linux/Novell/any other OS could enter, and placing well in the results would be great publicity for them.

    It would be a good thing, because everyone who supports an operating system, and therefore knows how to tune it, would be able to compete, and there would be a greater range of results than in a single benchmark.

    Of course, we now need to find somebody to finance the contest :)
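    A made-up worked example of the scoring above (the teams, throughputs and prices are all hypothetical, not measured anywhere):

        \[
          \text{score} = \frac{\text{measured throughput}}{\text{hardware cost}}
        \]

    So a team hitting 2,000 requests/sec on a $5,000 box would score 0.40 req/sec per dollar, while a team hitting 3,000 requests/sec on a $15,000 box would score only 0.20, and the cheaper setup would win despite its lower raw throughput.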
  • I am probably in the minority here, but I don't believe that it really matters how well an OS scales to a piece of hardware. It really matters how well an OS scales to a job.

    The interesting question to me isn't "How much power can you get out of hardware X with OS Y", but rather "How much hardware do you need to throw OS Y on to do job Z".

    From what I've been reading, NT does better SMP than Linux does. Frankly, Linux doesn't need SMP nearly as badly as NT. If uniprocessor Linux can do the same job as SMP NT, who cares how good or bad SMP Linux is?

  • You don't want to fight FUD with FUD in that way. It's not just a matter of morality, either. Microsoft has clout because of their success as a big corporation with an established monopoly. They can afford to lose a little credibility by spinning a few lies. The Linux community has only one source of credibility -- that their stuff *works* -- and that's the very thing M$ is attacking. If you bend the truth and are caught, your credibility will suffer a lot more than Microsoft's. You'll be helping their FUD campaign, not hindering it.

    Keep the high ground, folks. It's really in your best interests.
  • Hatton also admits that the Linux system would have performed better if it had been better optimised. "Having said that, I must say that I still trust the Windows NT server would have outperformed the Linux one."

    Trust? Obviously you don't have too much confidence in NT, Ian.
  • I see lots of calls for doing another benchmark that's "fair" to prove the Linux system superior. The problem with that is that it could never be "fair" if carried out by Linux partisans. Even if it were, likely there would be one missed tweak which would throw the whole thing in doubt.

    Instead, why not fight FUD with FUD? Mindcraft claims the study's still valid even though the systems weren't tweaked equally. If that's truly the case, we're home free! Do a study designed to show how *badly* an NT server can be tweaked, and publish the results. As long as you promote the results as "just as valid as the Mindcraft benchmarks", you are being perfectly honest. :-)

    So next time MS throws out the invalid Mindcraft survey (NT 2.5 times better), don't attack the survey. Just throw out the new Linux survey (Linux 153 times better) done using the "Mindcraft method".
  • I remember that shortly after the report was published, Mindcraft semi-officially said something to the effect of: "if we were to run the test again... we would not make those optimizations" (e.g. the ones that slowed Samba down).
    My question is: why can't they do it again? Just run the tests again...
    I realize it would be expensive. But someone paid for it originally and came up with flawed results. Most companies would be looking at how to do it right the second time instead of saying,
    "Well, ya know, if we were to do it again we would not screw it up (but since Microsoft isn't interested in a real test, that isn't going to happen)."
    Most companies do their best to cushion bad publicity. But Microsoft seems to be proving time and again that ANY publicity is good.
    Even if it's bad.

"If it ain't broke, don't fix it." - Bert Lantz

Working...