
ESR and the Mindcraft Fiasco
The Mindcraft fiasco
Microsoft's latest FUD (Fear, Uncertainty and Doubt) tactic may be backfiring.
A 21 April ITWeb story reported results by a benchmarking shop called Mindcraft that supposedly showed NT to be faster than Linux at SMB and Web service. The story also claimed that technical support for tuning the Linux system had been impossible to find.
Previous independent benchmarks (such as "Microsoft Windows NT Server 4.0 versus UNIX") have found Linux and other Unixes to be dramatically faster and more efficient than NT, and independent observers (beginning with a celebrated InfoWorld article in 1998) have lauded the Linux community's responsiveness to support problems. Linux fans smelled a rat somewhere (uttering responses typified by "Mindcraft Reality Check"), and amidst the ensuing storm of protest some interesting facts came to light.
- The benchmark had been paid for by Microsoft. The Mindcraft press release failed to mention this fact.
- Mindcraft did in fact get a useful answer to its request for help tuning the Linux system. But they never replied to the request for more information, nor did they follow the tuning suggestions they were given. Also, they forged the reply email address to conceal themselves -- the connection was made after the fact by a Usenetter who noticed that the unusual machine configuration described in the request exactly matched that of the test system in the Mindcraft results.
- Red Hat, the Linux distributor that Mindcraft says it asked for help, reports that it got one phone call from them on the installation-help line, which isn't supposed to answer post-installation questions about things like advanced server tuning. Evidently Mindcraft's efforts to get help tuning the system were feeble -- at best incompetent, at worst cynical gestures.
- An entertainingly-written article by the head of the development team for Samba (one of the key pieces of Linux software involved in the benchmark) described how Mindcraft could have done a better job of tuning. The article revealed that one of Mindcraft's Samba tweaks had the effect of slowing their Linux down quite drastically.
- Another Usenet article independently pointed out that Mindcraft had deliberately chosen a logging format that imposed a lot of overhead on Apache (the web server used for the Linux tests).
So far, so sordid -- a fairly standard tale of Microsoft paying to get exactly the FUD it wants from a nominally independent third party. But the story took a strange turn today (22 April) when Microsoft spokesperson Ian Hatton effectively admitted that the test had been rigged! "A very highly-tuned NT server," Mr. Hatton said, "was pitted against a very poorly tuned Linux server".
He then attempted to spin the whole episode around by complaining that Microsoft and its PR company had received "malicious and obscene" email from Linux fans and slamming this supposed "unprofessionalism". One wonders if Hatton believes it would be "unprofessional" to address strong language to a burglar caught in the act of nicking the family silver.
In any case, Microsoft's underhanded tactics seem (as with its clumsy "astroturf" campaign against the DOJ lawsuit) likely to come back to haunt it. The trade press had largely greeted the Mindcraft results with yawns and skepticism even before Hatton's admission. And it's hard to see how Microsoft will be able to credibly quote anti-Linux benchmarks in the future after this fiasco.
Yes but be careful. (Score:1)
HOW-TO put an end to the fiasco! (Score:1)
--G
Does it matter? (Score:1)
So what is left for NT?
Careful, you can get sued.... (Score:1)
The commercial product I work on would love to publish direct comparison benchmarks against the competing MS product. Our legal department won't let us.
MS is guilty of nothing (Score:1)
Personally, I think MS realized that Mindcraft didn't do a good job of running the tests, so they declined to advertise the study. Mindcraft just put out a release on their own.
You have to understand how these things work. All good software companies run tests of their product under various conditions on various hardware configurations. When they find one that beats some portion of their competition, they try to advertise that fact. To get credibility, they hire an independent lab to reproduce the tests.
In this case, it looks like even MS was suspicious of how poorly Linux fared. My guess is that MS knows that even a well-tuned Linux will lose this particular test, otherwise they wouldn't have hired Mindcraft to run it in the first place.
Interesting Hatton comment (Score:1)
Yes... But... (Score:1)
But with regards to the study, wouldn't it have been more truthful to tune the Linux box appropriately? Then if the results showed that NT was better, a point could be made that Linux needs more work in these areas. I really believe people would accept that as honest criticism (again, IF the results showed that NT was faster; I have my doubts that would happen).
Mindcraft obviously didn't make much of an effort in tuning the box (i.e., hire a decent Linux admin): they posted a message to a couple of newsgroups (lacking sufficient details, and not responding when someone requested more), and made one call to Red Hat (which was directed to the wrong group, as ESR stated). And yet they built an entire study around these points, making claims that Linux support is bad and all. That is why they were blasted, not for saying NT is better than Linux (as some people I've talked to think is the motive for Linux people's outrage).
Don't forget their motto (Score:4)
"...we work with you to define test goals. Then we put together the necessary tools and do the testing. We report the results back to you in a form that satisfies the test goals."
Since they say Microsoft sponsored the test, we can replace "you" with "Microsoft." So they worked with MS to define the test goals (NT is 2 or more times better than Linux). Then they put together the tools to do that, hacking the registry and all to beef NT up while slowing down the Linux Apache/Samba servers. And finally, they reported the results back in a form that satisfies the test goals: lo and behold, NT is 2-3 times faster than Linux. Such a surprise, right?
Apache Benchmarking (Score:1)
Things I could see improving your system:
drop the Apache RPMs and compile it yourself, specifically with the PGCC compiler; I have heard of 30+% speed improvements with it. You could even go for Stampede, the PGCC-based distro, for your base install. (Rough build sketch at the end of this post.)
look at Mylex RAID boards -- they are supposed to work a little better than the AMI cards that Dell uses.
It'd be fun to send that box to Mindcraft and have them test it at the same performance, or 90% of the performance, of the NT box -- but costing less than $5k, not $18,000.
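For what it's worth, here's roughly how the from-source PGCC build might go -- an untested sketch; the flags and prefix are just illustrations, and Apache 1.3's APACI configure honors the CC and OPTIM environment variables:

# build Apache 1.3 with pgcc instead of the stock gcc (flags illustrative)
CC="pgcc" OPTIM="-O6 -mpentiumpro" ./configure --prefix=/usr/local/apache
make && make install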
Are managers really "used to" NT? (Score:1)
Xenophobia. Managers are scared of Linux because it's not NT, which they're used to.
Managers are used to NT? Where? Managers who propose NT for high-end stuff have definitely never used NT long enough to be "used to" it -- but they are "used to" being bombarded by M$ advertising/propaganda for NT, and it is this, not the mythical "they are already used to NT", that should be counteracted.
Hard to find... (Score:1)
Maybe everybody is at Comdex? (Score:1)
Given all of that, it's a little early for sponsored benchmarking by members of the Linux community. This stuff takes time, and if it's a choice between doing Comdex right and repeating a discredited benchmark, doing Comdex right comes up at the top of the list every time.
Isn't that what I said? (Score:2)
There are valid limits to Linux scalability, problems that need fixing, and honest benchmarking can help us find those limits. Unfortunately, Mindcraft's benchmarking was so flawed by misconduct and poor judgment that it is not useful for that purpose.
BTW, I do agree with the Microsoft spokesman who said that he was certain that NT would have come out on top even with an honest test. I suspect the SAMBA results would have been quite competitive, within 3% (more or less) of the NT numbers, but the Apache server has never been known for its static file serving speed (though mod_mmap_static may change that!). On the other hand, there is a big difference between the 5%-10% advantage that I bet Mindcraft would have found, and the ridiculous numbers that they actually reported. They actually shot themselves in the foot here, because if they'd reported the real numbers, the Slashdot Crowd would have howled, but Jeremy Allison and other technical heavyweights would have stayed on the sidelines working on fixing the problems found, and the media would have ignored the Slashdot Crowd.
Just count it as another example of Microsoft Arrogance (tm) outweighing their good sense. It's amazing how such bright people can do such stupid things.
-- Eric
Linux-Tuner Applets? (Score:1)
Put an end to the fiasco! (Score:1)
Everyone is complaining how Microsoft rigged this test, but no one is surprised or doing anything about it. Why doesn't someone just sponsor an HONEST test, and let the best system win.
How about a range of machines? (Score:1)
Wasn't the test supposed to be about how NT can scale up but Linux can't? We all know the test was rigged, and I'd like to see it run again, but over a range of machines.
What does NT vs. Linux look like on a 486/66 with 8MB of RAM? Hmmm?
Proud of the Linux community and I learned a lot! (Score:1)
I work for a fairly large company that's coming out with a kick-ass machine pretty soon. It's enterprise-class for sure. Lots of processors. I would love to see Linux tuned for it.
I've wondered about this too... (Score:1)
Bear in mind that Intel's integer core is more primitive than either AMD's or Cyrix's current designs. That means that for the vast majority of non-3D/CAD software you'll see a marked improvement over a PII 300 when you use even Cyrix's relatively weak MII chip (whatever P-rating actually runs at a 300MHz clock speed).
I sat through an Intel marketing presentation back when I worked retail, and was told flat out that floating point power was more important for word processing, and that every processor that runs at a given clock speed generates the same heat, because it's the little clock crystal that generates all the heat (not the resistance as electricity flows through the chip).
Just goes to show you can't trust marketing to give you straight facts.
There's a difference (Score:1)
Yes... But... (Score:2)
That is a natural consequence of open source development. Many of the features of Linux are there because some users needed them (scratch an itch, as ESR says...)
So, bearing in mind that there aren't many Linux users with quad Xeons and 4GB of RAM, it's only natural that issues relating to that kind of machine have a lower priority in the Linux user community.
Mainframes (Score:1)
Mainframes will never die. The legacy system of tomorrow will be mainframe transaction-processing systems fronted by SP/2 analysis clusters, with something like Linux or NT gating the whole mess to the web. I can almost guarantee it.
Of course, Linux is being ported to the S/390...
About Netcraft, semi-off-topic question (Score:1)
I'm curious about this too. I wonder if people who pay Netcraft money get that kind of breakdown.
One thing to remember is that Netcraft counts domains, not IP addresses. ISPs that host sites for clients probably dominate the survey and most of them run Apache. Companies that run their own web site off a T1 or ISDN line probably favour NT, but they would be in the minority.
I like to take each month's numbers and pretend that nobody is switching (clearly not true, but it might be a good approximation). This month, for example, Apache gained 423,063 sites and MS gained 132,943. So new Apache sites outnumbered new MS sites by MORE THAN 3 TO 1! That's impressive.
Huh? (Score:1)
Not an easy problem. (Score:1)
I would still like to see some more conclusive stuff on Linux's high-end SMP abilities (4 or more processors), both on i386 and on Alpha and UltraSPARC. Alan Cox claimed there were some speedups to SMP late in 2.1.x (I think) that should have significantly improved high-end performance. Perhaps a VAR would sponsor a test?
If Linux doesn't beat all on up to 16 or so processors, we should fix it.
It's not that simple. Currently the biggest problem with the Linux SMP implementation is the IO subsystem. (SCSI and IDE, etc.) The problem is that this subsystem isn't SMP safe. So whenever the kernel enters this portion, it grabs what is known as the kernel lock. Thus disk activity can only happen on 1 processor at a time (bad). In Linux 2.3 they're going to strip this away, so that Linux 2.4 (or 3.0, or whatever) will probably scale far better than Linux 2.2 (of course they'll be making other nice optimizations along the way). But this isn't something you can just fix on a whim.
Not to side with Mindcraft/MS, but... (Score:1)
One could interpret it that way, but considering that they willfully avoided being helped to tune Linux (ask for Samba help anywhere but the Samba lists/newsgroups; call the wrong helpline and don't ask for the right one; when they accidentally do find a helpful person, ignore him; etc.), it seems that the goal must have been to find that NT outperforms Linux.
It reminds me of an old Mad Magazine comic: a father orders his stereotypical hippie son living at home to go get a job. The son, dressed in his most casual attire (torn cutoff shorts, open shirt, many beads, etc.), goes to a clothing store and says "You ain't got no jobs, do you?". Later, to his father: "Well, I TRIED!".
The only difference is, Mindcraft actually did get an offer, so they had to ignore it.
And...? (Score:1)
I feel the need to ask why this piece was worthy of "airtime", and can reach only one conclusion: That a refutation can somehow become more valid because it comes from the pen of Eric Raymond.
I'm now happy that Katz periodically contributes his dozen screenfuls of drivel, and I don't have a problem with Eric either, but would I have got a whole item on /., had I summarised the obvious flaws of that ridiculous benchmark? I doubt it.
Matthew
- what happened to our great meritocracy?
Microsoft's credibility (Score:1)
what have you been smoking?
With the rise of n-tier architecture and thin clients, mainframe-style computing is getting stronger than ever: centralised data and logic, with just the display on the desktop.
You mean Mindcraft, not Netcraft I presume? (Score:1)
www.mindcraft.com [mindcraft.com]
www.netcraft.co.uk [netcraft.co.uk]
So, is linux faster than NT on a 4-way w/ 2GB mem? (Score:3)
I would like to know if Linux does scale as well or better than NT with 4 and 8 processors -- both systems properly tuned and using the same webserver. When that question is answered, I'd like to know what to expect in the future. Is Linux going to leave NT in the dust, or will this be the key niche ground for NT servers that Microsoft will defend to the end, and Linux will never conclusively defeat?
Mainframes (Score:1)
Mainframes (Score:1)
What about the numbers? (Score:1)
Sometimes people set their systems up by just leaving the defaults because of laziness, time constraints, whatever. And sometimes a minimal amount of configuration is done, but not a whole lot. And then there are the performance freaks. Got to show the spread...
But how is Linux SMP (Score:1)
I can see an SMP system only improving performance up to a point, eventually hitting a limit (e.g. 8 processors won't be twice as good as 4 because of time spent scheduling), but to see the effects that Mindcraft saw you would have to do something pretty crafty...
Matt.
Apache Benchmarking (Score:1)
If you read what I wrote you'll realise I did drop Apache and recompile - I just used RPM's. I don't really think the problem is the compiler used for Apache - I think most of the work is in the kernel, managing processes and file caching. I guess a pgcc compiled kernel would be a better option - but that's not something I'm desperate to get into - this system took me 1/2 a day to build - I don't have the time to really increase that.
A better option would be to just use Apache for mod_perl processes, and use thttpd for static content, and use squid to proxy requests to the right port. I think we'd probably blow away even our own estimations with thttpd.
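Something like this in squid.conf might do it -- a hedged sketch only, using Squid 2's accelerator mode; the ports are arbitrary and the redirector script is hypothetical (squid can only accelerate one backend directly, so splitting by URL needs a redirect_program):

http_port 80
httpd_accel_host 127.0.0.1
httpd_accel_port 8001                      # Apache/mod_perl backend
redirect_program /usr/local/bin/split.pl   # hypothetical: rewrites plain-file URLs to thttpd on :8002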
Matt.
No - you're wrong. (slightly) (Score:2)
But both Samba and Apache have been heavily tested on very high end servers. The Samba crew have even been heavily involved in making Samba fast on high end servers.
HostNameLookups (Score:3)
I strongly believe, however, that their httpd was running under inetd, and that would cause the effect they saw.
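If so, the fix is two lines of httpd.conf (Apache 1.3 directive names; standalone mode avoids inetd's fork-per-connection cost, and turning lookups off skips a reverse DNS query on every hit):

ServerType standalone
HostnameLookups off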
Apache Benchmarking (Score:5)
Server:
- Hand built by our best hardware guy
- PIII 500 (single CPU)
- Adaptec 2940U2W SCSI Adapter
- 10,000 rpm LVD drive. 1 drive only.
- 100Mb/s network card
- 256Mb PC100 RAM.
- Linux 2.2.6, upgraded from stock Linux-Mandrake box
- Apache 1.3.6, configured for best performance.
No changes to the
Want to know the results so far?
Well, we can get about 2200 requests per second out of that box. The Quad Xeon NT box that Mindcraft tested got 3700 requests per second at its maximum rate. We are at very early stages so far, and I think I can squeeze more out of the box by dumping Apache and using thttpd or something else that uses a threaded model. But since this is to be a pure mod_perl box I don't think that's important.
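(For anyone who wants to reproduce a raw requests-per-second figure like these, one simple way is the ab tool that ships with Apache 1.3 -- the URL and loads below are placeholders:

ab -n 10000 -c 100 http://testbox/index.html

That hammers the server with 10000 requests, 100 at a time, and reports requests per second.)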
Things to remember:
The Mindcraft server had 1Gb of RAM.
The Mindcraft server had RAID (RAID 0, I believe).
The Mindcraft server had 4 10/100 network cards.
We're so far pretty pleased with our little Linux box... It was a fair bit cheaper than Mindcraft's server....
Possibly only using one processor? (Score:2)
Mindcraft also used the v0.92 MegaRAID driver. An SMP race condition was fixed in v0.93 which was almost certainly available from the AMI web site long before the Mar 10-13 test. So SMP NT "beat" a non-SMP Linux on a quad-Xeon server. Big hairy deal.
Original poster is "Doc" Savage. Original post [linuxhq.com] 14 Apr 99.
Yes... But... (Score:1)
If the kernel operates well then it is highly likely that other software will get a boost.
Eric made a factual mistake (Score:3)
"Linux supporters have reacted violently to the Microsoft SA release (Independent research shows NT 4.0 outperforms Linux) published on ITWeb yesterday, saying "the study was paid for by Microsoft" and that "a very highly-tuned NT server was pitted against a very poorly tuned Linux server".
That is, the claim attributed by Eric to Ian Hatton was really made by reacting Linux supporters.
What Hatton did admit, was:
"Microsoft did sponsor the benchmark testing and the NT server was better tuned than the Linux one."
This isn't much, but it is sufficient. Hatton admits that "the NT server was better tuned than the Linux one", and even without adjectives that invalidates the report.
Reasons for using GNUPro (Score:1)
Now that EGCS is available under a more rapid release schedule, you probably won't need GNUPro just to get your code to compile, but it might be a good deal if you want the latest PII/PIII optimizations that haven't been rolled into the public codebase yet. You also get a visual debugger and some other goodies.
Here's a press release [cygnus.com] on the PII/PIII optimized GNUPro tools that will be available next quarter.
Follow the bits... (Score:1)
Well, you have to define "won" pretty carefully. If you mean that the group that controlled the centralized (mainframe) resources was forced to give up complete control of information management services, then yes, the PC "won".
But keep in mind that big dollar, mega-user, high-bit-rate applications are almost always run on IBM (or compatible) mainframes. Or on mainframe-class minis (Sun, etc) that are designed, installed, and operated using mainframe class operations discipline. And centralization seems to be on the rise at the moment, not on the decline.
sPh
Suspicious (Score:1)
Those who witnessed the "OS Wars" of the early-to-mid '90s are well aware of MS's ability to bludgeon superior technology into submission through marketing.
The main (and important) difference I can see is that today MS has less credibility, and their target, Linux in this case, has no corporate "owner" like IBM. IBM sat idle while MS (and the IBM PC Company, "MS's biggest customer") and the trade press trashed OS/2 into oblivion.
I do not see the Linux community standing idle and taking it; ESR's post is a fine example of this. Note that benchmarks like Mindcraft's were done with NT vs. OS/2 over and over with no real response from IBM. The OS/2 users who protested were categorized as "zealots" and written off. On CompuServe, false user accounts (see "Barkto") were alleged to have been created to depict "real users" who then went on and on about serious OS/2 errors that "trashed my hard disk" and "my backups", ad nauseam. (Such reports were then published in PC Week, InfoWorld, and Computerworld to drive home the FUD.)
MS has its hands full trying to FUD Linux into obscurity. But be assured, they are experienced at this type of "warfare" and will attack furiously. With such deep pockets, I expect they feel a war of attrition is winnable.
This remains to be seen. The Linux community is not an impotent IBM. And today, we have a maturing internet to get some real facts distributed that the traditional "legacy" trade rags tend to not report.
That's beautiful (Score:1)
If you want some more performance, I suggest moving to a dual system. My experience with SMP Linux has only been with dual systems, and it's been very good. In addition, there are some very nice, cost-effective dual motherboards out there. Tyan makes one with onboard aha2940 SCSI, Intel 10/100Mbit ethernet, and sound too.
I wouldn't be surprised if with some tweaks to your box configuration you could make it as fast as the NT box without any hardware mods, perhaps by following some of the advice Eric links to...
----
I would still like to see some more conclusive stuff on Linux's high-end SMP abilities (4 or more processors), both on i386 and on Alpha and UltraSPARC. Alan Cox claimed there were some speedups to SMP late in 2.1.x (I think) that should have significantly improved high-end performance. Perhaps a VAR would sponsor a test?
If Linux doesn't beat all on up to 16 or so processors, we should fix it...
I'm curious (Score:1)
In our own tests, we found that Visual C++ 5.0's (otherwise an excellent compiler) ftol() stalled like crazy on PIIs, eating 10% of processor power in Fire and Darkness. How good is PGCC at avoiding similar problems?
Are there tools under Linux (analogous to Intel's VTune) for analyzing this?
Scaling is what counts (Score:2)
Remember that a certain number of sites really need big-iron servers (hey, slashdot isn't exactly gentle on its hardware, although in that case I suspect database performance may be more of an issue), and even when they don't it's the results from high-end server tests which impress the management the most.
Having seen Linux/SMP in action and made some subjective judgements, I'm quite confident that, properly configured, it ought to scale fairly well onto hardware of the class Mindcraft were `testing'. But it would still be nice to have some numbers...
Yes... But... (Score:1)
One of Linux's prime virtues is what it can do with older hardware. I have a friend who wouldn't believe that the Linux machine she was using at my house was a P90 because Netscape/WordPerfect/etc. "felt" as fast as W95 on her PC (a PII/333).
I have three Linux boxes, the (dual) P90 mentioned above, a K6/166 that acts as my web/mail/ftp/telnet/IRC/MOO/everything server and my 386/25 laptop that I use to do my homework.
I would wager that the vast majority of Linux boxes are not high-end monsters but old machines that are "worthless" in the eyes of many people. That is the true power of Linux in my eyes, an attribute that I think is all-too-often ignored here lately.
IMHO, YMMV.
--
But how is Linux SMP (Score:1)
Actually he said that he was comparing results with the NT box, which was tuned to the max. Nice results!
ElpDragon.
FUD? imho not (Score:1)
You definitely do NOT have to say anything nice to sling FUD. It's not in the definition.
I wonder. (Score:1)
Ethernet card configuration: in Red Hat's default configuration I have noticed that it selects 10Mbit most of the time, and HALF duplex all the time. The only way out of this is to pass options= when loading the ethernet module.
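For example, in /etc/conf.modules (the numeric media value is driver-specific -- check your particular driver's documentation; this is only an illustration for a hypothetical 3c59x card):

alias eth0 3c59x
options 3c59x options=5    # illustrative value: selected 100baseTx full duplex in some driver versions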
Re: Fight FUD with FUD (Score:1)
GNUPro (Score:1)
Put an end to the fiasco! (Score:1)
Putting an end to it in the form of another benchmark will have to be left to the distro guys making money... 'cause it costs money to put together such a test system, or to purchase NT for that matter!
On the other hand, putting out a decent rebuttal in the form of accurate criticism such as ESR has done (I REALLY like his article) is perhaps the best way to point out that Emperor Bill isn't wearing any clothes. The only remaining trick is to get that rebuttal circulated widely amongst the press. ESR has the credibility to get quoted in such places. Looks like a good combination, and the right path, to me.
Steve
GNUPro (Score:1)
What have you done for open software? (Score:1)
benchmarks. (Score:3)
Seems to me that what we really need is a benchmarking rebuttal; is there another benchmark going on? I saw that in Jeremy Allison's article he was working with PC Week; does anyone else know of any other active benchmarking going on?
I think that the only way to fight FUD is education, and benchmarking can go a long way.
I have about 7 Linux servers with no downtime and great performance on lesser hardware than my commercial servers in my company; that should be proof enough, but my pointy-haired boss still asks "Why not NT?". I do not need any more fuel for that fire.
We need benchmarks on larger servers, with more memory and RAID, and a high-end server guide.
Close, but too much. A HOWTO is needed though. (Score:1)
What we DO need is a HOWTO describing the idiosyncrasies of doing this under Linux. What parts of the tuning are in the kernel and need recompilation? Where are the tweakable parts? What needs to be frobbed under /proc?
From there, it's a short jump to the developers of Apache and Samba to say "increase the PROC table", or "increase the file buffer area". This advice would apply to all architectures. (But even if they don't tell us how, a good admin can probably figure most of this out.)
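For instance, here are two real 2.2-era /proc knobs such a HOWTO could document (the values are illustrations, not recommendations -- the right numbers depend entirely on workload and RAM):

echo 16384 > /proc/sys/fs/file-max     # raise the systemwide open-file limit for big servers
echo 49152 > /proc/sys/fs/inode-max    # conventionally kept at about 3x file-max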
Not too much (Score:1)
Sure, maybe a good admin CAN figure this out. But I am an Oracle DBA primarily, not a Unix administrator. Sure, I can keep a box up, do the necessary maintenance, and generally perform that job -- but not with the nuances and expertise with which I can administer an Oracle database. I would like to have a good source to learn this stuff without having to change careers to do it.
And then there are those who aren't GREAT administrators. We all know 'em and have met them. They can do their job (well, some can't) moderately well, but not very well.
Stick Boy
Proud of the Linux community and I learned a lot! (Score:2)
Also, I must say I really learned a lot by following the debates. Next time I need to install Apache and Samba you can bet I'll be referencing the responses to Mindcraft to see the proper way to optimize this stuff.
Kudos and thanks to the Linux Community!
Spread the word (Score:2)
Admit it, "M$ guilty of consumer fraud" is a better headline than "NT beats Linux on all fronts".
No! Get someone else! (Score:1)
It would be a far better idea to get a truly _impartial_ party to re-do these tests, with proper help from the Linux community. Then we'll see the results!
I simply don't _trust_ MindCru^Haft.
'benchmarking shop' (Score:1)
:^)
Apache Benchmarking (Score:1)
Some people are suggesting using squid to direct requests to Apache for the complicated stuff, and to thttpd for the simple stuff. I personally would try to make a tool that would change all local URLs to include the port number for all references that are a simple file. This might not be allowed for the benchmark, but it sure would help in the real world.
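A crude sketch of that rewriting pass (the hostname and port are placeholders; it just points image references at the thttpd port directly):

perl -pi -e 's|src="/images/|src="http://www.example.com:8001/images/|g' *.html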
Roger.
cjr used unclear language (Score:1)
Of course, it's unlikely that the quote came from the Microsoft SA press release (it is referring to the original brag release: "Independent research shows NT 4.0 outperforms Linux"). I think that cjr was quoting Linux users, and therefore Eric was wrong to attribute it to Microsoft, much less to Ian Hatton directly.
Challenge should be met (Score:1)
Spread the word (Score:2)
This is an alarmingly common mistake -- poor Netcraft.
Microsoft's credibility (Score:3)
The plain fact is, Microsoft did this to appeal to middle/upper management, not us. They need to keep feeding them reasons to keep their NT investment without looking stupid. Remember the mainframe days? Shortly after the PC came out, a torrent of similar "debate" emerged from the mainframe community. First they laughed, then they fought, then the PC community won. Surprise. History repeats itself.
--
I've wondered about this too... (Score:1)
So, it makes me wonder about this test as well... Is it possible for someone to tweak a common-joe-affordable Linux box to outperform a supercharged, out-of-affordability-range NT box? Can someone duplicate the server load from the Mindcraft test on a highly tuned Linux PC and show that Linux can beat NT even when Linux is on a smaller machine?
The Mindcrap Affair: second-order effects (Score:3)
--JT
Hehe, more FUD ;) (Score:1)
An anonymous user wrote: That's very easy to say, particularly from your anonymous vantage point. I don't see any reason to suppose, when you add up all the positive tweaks that were done to NT and the negative tweaks that were done to Linux, that the result has any connection with reality.
If MS really thought Linux would lose a fair test, we'd now be seeing the test results with a detailed, reproducible description of what was done and it would clearly indicate NT the victor, comparing apples to apples.
Instead, we have an extremely flawed study which Microsoft paid for, which looks to exhibit systematic bias. Coincidence? Could be. ;)
Spread the word, err please dont... (Score:1)
I'm beginning to think that this is all a conspiracy to undermine Netcraft :)
A fair and realistic benchmarking test (Score:1)
That's beautiful (Score:1)
3COM 905 10/100
onboard SCSI on Gigabyte Motherboard
256 megs of RAM
9 gig Quantum SCSI hard drive
The results were about 20% above the quoted figures for the quad Xeon. Perhaps I'm wrong, but my $1649 system shouldn't be faster than a $25,000 system. Linux SMP is not nearly as stunning as BeOS SMP. Are there issues in Linux with 4 gigs of RAM slowing down the system? Last I checked, Samba only supports 2 gigs.
By the way, the Mindcraft config favored NT quite obviously by using the NTFS file system and RAID 0, which alone should roughly double HD access speeds.
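Anyone who wants to sanity-check the disk side of that on their own box can time a big sequential write and read; the file name and size are arbitrary, and beware that the buffer cache can flatter the read figure:

time dd if=/dev/zero of=/tmp/ddtest bs=1024k count=256   # write 256MB sequentially
time dd if=/tmp/ddtest of=/dev/null bs=1024k             # read it back
rm /tmp/ddtest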
Sorry, guy... (Score:1)
That isn't 'your' goal or my goal, or Saddam Hussein's goal. It's THE goal, PERIOD. Anybody who makes statements like that admits they run a bovine excrement factory instead of a testing facility.
Microsoft's credibility (Score:1)
Remember the mainframe days? Shortly after the PC came out, a torrent of similar "debate" emerged from the mainframe community. First they laughed, then they fought, then the PC community won. Surprise. History repeats itself.
Of course, IBM didn't help themselves in the early '90s when they alienated their customers by trying to withdraw all their mainframe source code. That was about the time we started looking into Unix systems. BTW, we're going to dump our entire machine room full of IBM mainframe equipment at the end of the year.
Not to side with Mindcraft/MS, but... (Score:1)
Not to defend the report (it's not Scottish, so it's craaaaaaaap)
Yes... But... (Score:1)
A Quad Xeon is the highest end box that NT runs on (barring Alpha).
Note that WinNT is driving the "high-end" x86 hardware market. Vendors like Dell and Compaq make boxes with only 4 CPUs because that's all vanilla NT will support. When Win2000 comes out, it will support 8? processors, which means the hardware companies will immediately follow with 8-CPU iron. (Implicitly making this hardware available to some Linux folks.)
Of course a better benchmark would be the $50,000 NT/Dell box versus the $50,000 Sun/HP/DEC box, etc.
--
Does it matter? (Score:1)
In a few years, Solaris, Tru64, HP/UX, Linux, and NT will all be running on essentially the same Intel IA64 hardware. At this point, the appeal of NT's one-size-fits-all design is going to start breaking down. But on the other hand, hardware equality is going to get Microsoft's salesmen in the door for midrange solutions that were previously above their heads. And Microsoft is more price competitive than commercial Unix, so NT deployment is probably going to increase in this market, not decrease. (Same argument for Linux.)
--
About Netcraft, semi-off-topic question (Score:1)
If I understand correctly, those numbers are public webservers only. MS IIS's market strength has been internal intranet solutions (where there's probably an existing NT file+print setup). IIS's intranet market is probably going to be going up, not down, as things like the Office 2000 server get deployed.
--
Samba article (Score:1)
The study has also shown that the knowledge of how to tune filesystem performance in Linux is equally obscure. We need to do a better job of educating Linux administrators about how to get the most out of their systems.
I'm sure there'll be more benchmark disappointments in store for us. After all, how else do we learn what we need to fix? But the strength of open source is that we can face the errors without trying to deny them. We just fix 'em and move on.
--
Content is beside the point. (Score:2)
The 'Rush Limbaugh' principle is a very valid point, especially in this context. Don't forget the target market for this study is Microsoft partners and WinNT-based shops.
Aside from all the meaningless numbers (who cares if your web server can saturate a 100BT line with static pages!), the study drives home an important point to NT administrators: if you've invested in a high-end IIS system and you've got it tuned, there's probably no good reason to switch that box over to Linux. If the Linux box was tuned correctly, I doubt the difference would be that great performance-wise.
Of course, the study didn't address stability, which is the number one problem with IIS.
--
Samba article (Score:3)
A few interesting points -
* In the often-referred-to ZD Samba versus NT benchmarks (where Linux+Samba wins), the Samba/Linux configuration was tuned by a Samba team member. Objectively, this makes the ZD benchmark actually less valid than the Mindcraft study, because as far as we know, a Microsoft-employed SMB developer wasn't actually there tuning the server.
* Tuning Linux properly involves cryptic commands such as:
echo "80 500 64 64 80 6000 6000 1884 2" >/proc/sys/vm/bdflush
echo "60 80 80" >/proc/sys/vm/buffermem
While I'm sure these commands are documented somewhere, this sort of tuning makes the NT Registry Editor look like a model user interface. Low-level tuning like this really needs a nicer front end, or preferably, a daemon which monitors system activity and dynamically tunes these settings.
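The daemon idea could start as small as this -- a purely hypothetical sketch, where both the free-memory threshold and the bdflush values are made-up illustrations:

#!/bin/sh
# toy auto-tuner: push more aggressive bdflush settings under memory pressure
while true; do
    free_kb=`awk '/^MemFree:/ {print $2}' /proc/meminfo`
    if [ "$free_kb" -lt 4096 ]; then
        echo "80 500 64 64 80 6000 6000 1884 2" > /proc/sys/vm/bdflush
    fi
    sleep 10
done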
It sounds like the Mindcraft study has been a kick in the pants for the Linux community to get some high-performance documentation together. I'd like to see a nice HOWTO which lays out some of the more obscure tricks, such as echoing strings to the /proc filesystem.
--
About Netcraft, semi-off-topic question (Score:1)
Okay, so Netcraft says that Apache's market share is 1.3% greater than the previous month, and IIS's market share is 0.41% smaller.
But what does that mean?
Netcraft also says that the total number of web servers just exceeded five million. Is all of this Apache vs. IIS activity happening on existing web servers, or on the new ones? Is Apache growing slowly-but-steadily across the board, or is it growing like a weed on new web servers while market share on the existing ones remains frozen? That's good news, too, but it's different news. Among other things, it suggests that people aren't so unhappy with IIS that they're willing to put up with the annoyance of moving to a different server.
I dunno, I'm just wondering.
"Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
Content is beside the point. (Score:5)
The truth or falsehood of the Mindcraft study is irrelevant to its intended audience. The point is to give NT "believers" something to quote in arguments, that's all. It's the Rush Limbaugh Principle. In a disagreement, it's helpful to have official-sounding statistics to back up your point. It doesn't matter where they came from, and it doesn't matter whether they're even remotely accurate. What counts is that somebody "important" (read "well-known") said it in public, which "validates" it. This "validation" isn't about truth. What it means is that the proper forms have been followed, and so it's acceptable to introduce the "evidence" in an argument. What's being offered is not evidence in the conventional sense, but the appearance of evidence, or the outward form of evidence. In poker, what does the four of diamonds mean? It means the four of diamonds. It's pure, disembodied symbol.
Disagreement and debate in our culture (especially on the net) isn't a whole lot less stylized (nor a whole lot less predictable) than Noh drama. You have to play by the rules and observe the forms. The content of the Mindcraft study is arbitrary. The study is a signifier, or token. A yacc parser says, "hey, this token is a function, hey, that one's an operator." The actual content of the token is not significant; what matters is what kind of token it is.
Everybody should learn at least a bonehead popularized minimum of semiotics (which is all I know, obviously).
While we're at it, let's be honest with ourselves: How many of us are going to check Eric Raymond's facts for ourselves -- even to the minimal extent of clicking on the links he provides? And how many of us who don't check the facts are going to run around repeating them? Quite a few, probably. Dammit, I think Raymond's right on the money with this, and I'm confident that he's done his homework -- but I don't have the time to go about proving it. As far as many of us are concerned, Eric has given us a counter-signifier. Some "good spin" to match against the "bad spin". (That makes it sound dishonest, but IMHO if the "good spin" is factual and accurate, then "good" is a perfectly reasonable thing to call it.)
Think about it.
(Experienced sysadmins are a bit of a special case here. They can judge for themselves. The Limbaugh Principle applies mainly to people who are arguing in an area outside of their field of expertise -- I don't recall who it was who said that "every man is gullible outside his specialty", but it's true even of the best of us.)
"Once a solution is found, a compatibility problem becomes indescribably boring because it has only... practical importance"
a very fair test (Score:1)
GNUPro (Score:1)
What has Raymond done for open software *lately*? (Score:1)
Ya know, Oscar Robertson was a damn good basketball player in his day, but I don't think there are too many teams that would take him as a player today. He's still a great person, but his best athletic days are long behind him.
Similarly, Raymond's better days of advocacy are long behind him, back around the Cathedral and Halloween I and II days. Ever since then, it seems like he's turned into a self-serving egomaniac who can't take one lick of criticism, committing one blunder after another.
Sure, he didn't do any damage with his post here, but I remember seeing at least a dozen other people discussing this much more effectively. Not to mention days ago. If someone's going to weigh in with their opinion so long after the fact, it better be good. This wasn't by a long shot. It was basically, "I am ESR, I have now come to allow you to listen to my infinite wisdom on the subject. Feel grateful, I command it!" What a boob. The guy only remains a player because people like Rob (or whoever posted it) take anything he says, no matter how inconsequential, no matter how many other people said it earlier and better, and elevate it to Topic status. It creates a self-fulfilling fame, like Zsa Zsa Gabor or Charles Nelson Reilly, where a person ends up being famous simply for being famous, and you eventually can't even remember what made them famous in the first place. Lame.
Cheers,
ZicoKnows@hotmail.com
Benchmarking Considered Harmful (Score:1)
In databases, things are much worse. The TPC [tpc.org] has been wrestling with these issues for a decade now and still doesn't really have a good handle on it. It is too easy to put your thumb on the benchmarking scale without anyone noticing, and make the results go the way you want. This is true even if the vendors themselves do the tuning.
Even if you can equalize the platforms, it still gets down to issues like the mix of instructions in the test suite. The TPC has wrestled with that one for years, trying to avoid tipping the balance toward any given vendor.
Schematically, of course, Web servers are database servers (aside from the issue that they may request data from an actual database).
Frankly, I'm even more appalled at Mindcraft (and by extension, Microsoft) for pushing this "study" the way they did. It was at least borderline unethical, given their admittedly lame-if-you-are-being-kind-about-it effort to equalize tuning between the two systems.
Please help me out here (Score:1)
As far as I know, Suns and the like are optimized for moving around large amounts of data, whereas x86s are more optimized to crunch numbers. For a home system with one full-time user, I'm pretty sure a multi-processor Intel with a bunch of memory will give a lot more bang for the buck.
chris
But how is Linux SMP (Score:1)
If somebody will buy me a system like Mindcraft used, I'll be more than happy to benchmark it myself! It might take me a while though, so be patient with me. If I have the box back to you in say 5 years, is that sufficient?
---
The power of clocking (Score:1)
Keep in mind the clocking circuit needs to drive a lot of transistors, and this takes quite a lot of current!
But decreasing voltage levels will have a bigger impact on power than frequency will.
Apache and IIS Benchmarking (Score:1)
Apples and Orangutans (Score:1)
No, cjr made a referential mistake. (Score:3)
"Linux supporters have reacted violently to the Microsoft SA release (Independent research shows NT 4.0 outperforms Linux) published on ITWeb yesterday, saying 'the study was paid for by Microsoft' and that 'a very highly-tuned NT server was pitted against a very poorly tuned Linux server'. In response, Ian Hatton, Windows platform manager at Microsoft SA, says these comments are valid."
No more benchmark... a contest (Score:3)
In a benchmark there are great odds that the benchmark will be sponsored by one of the parties (M$ in this case).
If you do a contest instead, like best performance/price ratio: you benchmark the performance of all the competing teams, then divide by the price each team invested in the hardware (not the software, because due to Linux's openness many people would say the Linux price biased the contest).
If someone does this, you could have an M$ team trying to tune NT to its best, a Linux/Samba/Apache team trying to tune Linux to its best, a Novell team, a Sun team...
You could choose the hardware so small teams can try to compete. Even companies unrelated to NT/Linux/Novell/other OSes could compete, and that could mean a lot of publicity for those companies if they place well in the results.
It would be a good thing, since everyone supporting an operating system, and thus knowing how to tune it, would be able to compete, and there would be a greater range of results than in a single benchmark.
Of course we now need to find somebody to finance the contest
So, is linux faster than NT on a 4-way w/ 2GB mem? (Score:2)
The interesting question to me isn't "How much power can you get out of hardware X with OS Y", but rather "How much hardware do you need to throw OS Y on to do job Z".
From what I've been reading, NT does better SMP than Linux does. Frankly, Linux doesn't need SMP nearly as badly as NT. If uniprocessor Linux can do the same job as SMP NT, who cares how good or bad SMP Linux is?
Stooping to their level (Score:2)
Keep the high ground, folks. It's really in your best interests.
Interesting Hatton comment (Score:2)
Trust? Obviously you don't have too much confidence in NT, Ian.
Fight FUD with FUD (Score:2)
Instead, why not fight FUD with FUD? Mindcraft claims the study's still valid even though the systems weren't tweaked equally. If that's truly the case, we're home free! Do a study designed to show how *badly* an NT server can be tweaked, and publish the results. As long as you promote the results as "just as valid as the Mindcraft benchmarks", you are being perfectly honest.
So next time MS throws out the invalid Mindcraft survey (NT 2.5 times better), don't attack the survey. Just throw out the new Linux survey (Linux 153 times better) done using the "Mindcraft method".
Why can't they run it again? (Score:2)
My question is: Why can't they do it again? Just do the tests again....
I realize it will be expensive. But someone paid for it originally and came up with flawed results. Most companies would be looking at how to do it right the second time instead of saying,
"Well, ya know, if we were to do it again we would not screw it up (but since Microsoft isn't interested in a real test, that isn't going to happen)."
Most companies do their best to cushion bad publicity. But Microsoft seems to be proving time and again that ANY publicity is good.
Even if it's bad.