Judge (Tech) Advice By Results

Bennett Haselton writes "What advice would you give someone who just bought a new laptop? What would you tell someone about how to secure their webserver against attacks? For that matter, how would you tell someone to prepare for their first year at Burning Man? I submit that the metric by which we usually judge tech advice, and advice in general, is fundamentally flawed, and has bred much of the unhelpful tech advice out there." Read below to see what Bennett has to say.

First, take a step back and imagine trying to come up with good advice in an area where results are easy to measure, like weight loss. (For the sake of argument, assume the advice recipients are genuinely medically obese people who can benefit from safe weight loss, not anorexics.) Suppose you were trying to measure the effects of two pieces of weight-loss advice, say, Program 1 and Program 2. You would think the most straightforward way to measure the effectiveness of the programs would be to divide a group of 100 volunteers randomly into two groups of 50, then have Group 1 follow Program 1, and have Group 2 follow Program 2 (with some type of monitoring for compliance). At the end of some time period, you simply measure which group has lost more weight (up to some healthy maximum threshold), and the program that group was following is the better program. What could be simpler than that? Isn't that the best, most obvious way to compare the two programs?

Actually no. I would say that's a terrible way to measure the two programs' effectiveness, under almost any reasonable set of assumptions about how the programs will be applied in the real world.

First of all, it's trivially easy to devise a program that would score really well under this system -- exercise for an hour and a half total every day, while eating nothing but fruits and vegetables and lean meats (or whatever would be considered a "perfect" diet by people who follow fanatically healthy eating habits -- I have no idea, because I don't). On the other hand, this by itself is not a valid reason to reject this measurement, because the fact that it's easy to score well under a particular measurement system doesn't mean the measurement is invalid.

The real problem with this metric is that it has no bearing on what good it would do to give this advice to people in the real world, because in the case of the work-out-and-eat-kale gospel, most people are not going to follow it. So consider an alternative metric: Take 100 volunteers, divide them randomly into two groups, tell Group 1 about Program 1, and tell Group 2 about Program 2. That's it -- but you have no power to force them to actually follow the advice. All you know is that they were all drawn from a pool of volunteers who were sincerely interested in losing weight, but if you make the advice too complicated, they'll tune out, or if you make the advice too hard to follow, they'll lose motivation. And then at the end of some time period, you check in and see which group has lost more weight. You could call this "whole-audience based results" (I promise I'm not trying to coin a neologism, but let's call it WABR), because you're looking at the results achieved by everyone who heard the advice, not just the people who were deemed to have "followed" the advice correctly. (The previously rejected metric, looking only at the results of people who are judged to have followed the advice correctly, could be called Compliance-Based Results or CBR).
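(For the programming-inclined, here is a minimal sketch of the difference between the two scoring rules, in Python. Everything in it -- the compliance rates, the average effects, the function names cbr_score and wabr_score -- is hypothetical and purely for illustration, not data or code from any real study.)

    # Hypothetical illustration of CBR vs. WABR scoring; all numbers are invented.
    import random

    random.seed(0)

    def simulate(program, group_size=50):
        """Return (followed, pounds_lost) for each person in one advice group.

        'compliance' is the assumed fraction of people willing to follow the
        program; 'effect' is the assumed average loss (lbs) for those who do.
        Non-compliers are assumed to lose nothing.
        """
        results = []
        for _ in range(group_size):
            followed = random.random() < program["compliance"]
            loss = max(random.gauss(program["effect"], 2.0), 0.0) if followed else 0.0
            results.append((followed, loss))
        return results

    def cbr_score(results):
        # Compliance-Based Results: average loss among the people who "did it right".
        followed = [loss for ok, loss in results if ok]
        return sum(followed) / len(followed) if followed else 0.0

    def wabr_score(results):
        # Whole-Audience Based Results: average loss across everyone who heard the advice.
        return sum(loss for _, loss in results) / len(results)

    programs = {
        "strict regimen (big effect, few stick with it)":    {"compliance": 0.10, "effect": 20.0},
        "modest changes (small effect, most stick with it)": {"compliance": 0.80, "effect": 6.0},
    }

    for name, program in programs.items():
        results = simulate(program)
        print("%s: CBR=%.1f lbs, WABR=%.1f lbs" % (name, cbr_score(results), wabr_score(results)))

With these made-up numbers, CBR should favor the strict regimen by a wide margin, while WABR should favor the modest program, because the whole-audience average gets dragged down by everyone who never follows the advice -- which is exactly the point.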

Consider that if a fitness fanatic gives weight-loss advice to one particular person, who either doesn't follow it perfectly or quits after a short period, the advice-giver can always claim that the advice was great, the recipient just didn't "do it right". But if you're giving your advice to 50 people in Group 1, and someone else is giving different advice to 50 people in Group 2, the samples are large enough that the proportion of unmotivated people is going to be about the same in each group -- so if Group 2 loses more weight, you probably can't use the excuse that you got stuck with all the unmotivated losers in Group 1. The advice that Group 2 received must have worked better because it struck some sort of balance between effectiveness and ease of compliance.

Under this metric, it's not as easy to come up with a "program" that would score well. Simply telling people "Just eat less and exercise more," for example, would obviously score terribly under this metric, since (1) "less" and "more" are not defined precisely and (2) most people in the target audience have heard this advice before anyway. You would have to think carefully about what kinds of cooking and diet advice are easy to follow and fairly enjoyable, or what kind of exercise advice would fit into the average person's lifestyle. If someone objects that "No one piece of advice works for everyone" -- fair enough, so you could even design a program that segments your target audience: "If you have lots of time on your hands but not a lot of money for things like fresh produce, do A, B and C. Otherwise, if you have a very busy schedule but you can afford to buy whatever you want, do X, Y, and Z." You could nonetheless combine all that "if-then-else" advice into a single program and call it Program 1 -- as long as the metric for the success of Program 1 is to give it to 50 volunteers who are interested in losing weight, and track how much weight they actually lose, without getting into arguments about whether they "really followed" the program or not.

If Michelle Obama made me her anti-obesity czar, that's more or less what I would do:

  • Recruit a large number of test volunteers who are interested in losing weight.
  • Recruit some (much smaller) number of doctors, nutritionists, and general fitness blowhards who are interested in giving people advice about losing weight.
  • Each advice-giver is allowed to submit a set of instructions on how to lose weight.
  • The volunteer pool is randomly divided into groups, and each group is assigned one of the submitted methods (probably after a panel of doctors pre-screened the methods for medical safety; otherwise, the winning method would probably end up being something involving heroin). That method is distributed to everyone in the volunteer group, but nobody will monitor them for compliance.
  • Check back in with each volunteer pool at the end of some time period. The person who submitted the advice given to whichever group has lost the most weight gets a million dollars, plus the glory that is rained down upon them as their winning advice is promoted to all the world.

No, really, seriously. If you want to reduce obesity rates in the country, shouldn't the ideal solution be something WABR-based, very close to this? It does no good to come up with a piece of advice that works well under CBR -- where you can force people to follow the program (or exclude them from the results if they don't) -- because that doesn't predict how the advice will work when distributed to the population at large, where of course you can't force people to follow the program. On the other hand, if the advice works reasonably well for a group of volunteers whose compliance is entirely up to them, then that should be a better predictor of how well it would work on a larger audience.

(Of course, someone might object that the true metric of healthy weight-loss advice is not how much weight you've lost after several months, but whether you've made a permanent lifestyle change that keeps it off even several years later. In that case you would just make that the new prize-winning criterion -- which group has lost the most weight and kept it off three years down the road -- but still sticking to the WABR principle.)

Another advantage of WABR is that it avoids squabbling over whether a person "really" followed the advice if they failed to achieve the desired result. If an advice-giver tells you to "eat less and exercise more", and you eat a little less and exercise a little more but fail to achieve any noticeable changes, it's highly unlikely that the advice-giver is going to concede their advice didn't work, even if you did follow it literally. In fact, no matter how much less you eat or how much more you exercise, if it doesn't work, the advice-giver can always say that you didn't reduce your calories or exercise enough -- which makes the advice unfalsifiable, because there's no circumstance under which the advice-giver would have to admit they were wrong. This also applies to advice that's extremely difficult to follow, such as "Eliminate all sugar from your diet" -- if the advice fails, it would be easy for the advice-giver to find ways that the advice recipient deviated from the program (if they ate fruits -- which most doctors recommend doing -- does fructose count?). WABR means that you don't have to adjudicate who actually followed the advice, because the results are collected from everyone who heard the advice.

Now, back to tech. I've deliberately avoided dwelling on technical examples, because after reading through the weight loss example, you can probably generalize this pretty easily. If Bob tells you to keep your new laptop virus-free by ditching Windows and all of your programs and switching to Linux, and Alice tells you to keep your new laptop virus-free by installing a free anti-virus program, then in a WABR test, I'll bet Alice's group would be left with fewer virus infections at the end of the year than Bob's group, for the simple reason that most people can't or won't follow Bob's advice. I'd even concede that the small number of people who do switch to Linux might have fewer viruses to deal with, but I'd say it's irrelevant. By any reasonable definition, Alice's advice is more helpful, or, simply put, better.

When I wrote "4 Tips For Your New Laptop" for Slashdot last Christmas, I think I was subconsciously using WABR as a metric for how well the advice would work for people. Because if you sincerely want the advice to be helpful (and I did), shouldn't the definition of success be the average benefit across all the people who read or attempt to follow the advice? Rather than a piece of advice that has a 100% success rate among readers who can follow it, but only 5% of them can?

One user posted this comment in response to the article:

First, syncing to cloud is not backup. Second, being at the mercy of a provider doesn't strike me as a good idea in long-term.

Better invest in a NAS. A 2-bay Synology would suffice. 2 4TB drives in Mirrored Raid work great. WD has the "red" line of drives specifically made and tested for NAS storage. They are not as fast but run cool, silent, no vibrations.

Most NAS units run on linux so you can easily add syncing, versioning, "personal cloud", maybe use to play movies on smart TVs via DLNA and so on.

Finally, from time to time do proper backups. For home use, proper backup means burning data on DVD/BD - on 2 separate discs.

OK. Let's suppose every word in that comment is correct. Now suppose we gave 50 people the advice from my original article, and 50 other people the advice I just quoted, but we have no power to actually force either group to follow the advice in either case. Which group do you think would have fewer computer catastrophes over the course of the year? (Yes, of course a lot of people would drop out of following the quoted advice because they didn't know what the guy was talking about, but imagine a version that had each sentence fleshed out in more detail explaining the acronyms and describing what the hardware costs. I still think my simpler advice would win.) I don't mean to pick on that guy in particular. Most computing advice out there would not score very well under WABR.

Similarly, when I wrote about how to make your first trip to Burning Man easier, it was partly in response to all the veterans who had given me CBR-based advice, like, "Build a hexayurt to sleep in." Of course, if you look only at a sample of people who actually did build a hexayurt at Burning Man, most of them probably had a great experience there. But if your advice is to tell people to build a hexayurt, only a small proportion of them will try it (and if they try and fail, you can claim that they didn't actually "follow your advice"!). The advice I wrote was to buy a tent and stake it down, because I think that if you tell 50 people to do that, and tell another group of 50 people to build a hexayurt, the people that you tell to buy a tent are on average more likely to have a good experience. (Although it wouldn't be a huge difference, because most people that you tell to build a hexayurt will eventually figure out that you were fucking with them and will buy a tent anyway.)

Of course, as I said in a previous article about the sorry state of cooking instructions on the Internet (scroll down to the part about jalapeno poppers), the real reason most directions on the Internet suck is that they were written to grab search engine traffic. That just requires some keywords to appear in the title of the page and in multiple spots in the body content, and has nothing to do with whether the directions work. So nothing I say is going to change the minds of people who are farming "how-to" content for some extra clicks.

I'm more concerned about people who are supposedly trying to be helpful, but fall back on advice that sounds as if it would do well under CBR but badly under WABR. Consider -- if your goal in giving the advice is, very generally, to bring the greatest benefit to the average person hearing it, then WABR should be your metric for success, shouldn't it? Obviously I'm not suggesting that it's usually practical to test one piece of advice against another by recruiting 100 volunteers, dividing them into two groups of 50, etc. I'm saying that in cases where it's intuitively very likely that one piece of advice would do much better under WABR than another, then that's the advice you should give to people -- a fact that is lost on the leet hax0rs who think they're being useful by saying things like "Dump Windows and install Linux."

And it's not merely that advice which scores poorly under WABR is unhelpful. WABR is the measurement by which a person's advice is helpful to other people, so if a person is giving advice that they can't possibly sincerely believe would score well by that metric, it comes across as caring more about something other than being helpful. Perhaps the advice-giver wants to sound smart, or simply wants to avoid the possibility of having to admit they were wrong (if you make your advice hard to follow, that reduces the chance of somebody actually climbing that mountain and then pointing out to you if your suggestion didn't work). So it's not just that the advice-giver is being unhelpful, it's that they're being a dick.

For a long time, I would hear pieces of tech advice that I knew would probably give a good result if I followed them to the letter (i.e. would do well under CBR), but something would nag at me, not only making me think that I probably would not end up with a good result, but making me resent the advice-giver for some reason that I couldn't precisely define. Now, I think, I've precisely defined it: I should have told them, "If you gave this advice to 50 people, and some other comparable advice to another similar group of 50 people, and if we measured the results by looking at everybody in each group without getting into arguments over whether they 'properly followed' the advice or not, you must be aware that the advice you just gave me would score worse than any number of alternatives that you could have supplied with just a little more effort." Unfortunately that's not very compact.

So, if someone asks you for general technical guidance, I submit you will be doing them a favor if you keep WABR in mind. I would also advocate for it as a way to settle disputes over which of two pieces of third-party advice is actually "better".

According to my own rule, though, I'm not sure how many people reading this will actually keep this approach in mind next time they're giving technical advice. On the other hand, it's hard to imagine an alternative exhortation that would achieve a better result.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Too long, didn't read.

    • by noh8rz10 ( 2716597 ) on Monday April 07, 2014 @10:16AM (#46683975)

      Too long, didn't read.

      quite long, but I read a bit and found it much more interesting than I expected. Money shot:

      WABR is the measurement by which a person's advice is helpful to other people, so if a person is giving advice that they can't possibly sincerely believe would score well by that metric, it comes across as caring more about something other than being helpful.

      this is for all the people who tell people to install linux rather than windows. it's more of an ideological thing than a desire to help.

      Perhaps the best advice you can give is to tell people to install all the software updates and to use a modern browser. this isn't perfect protection by any means, but it is like wearing a condom on your computer.

      • by Chrisq ( 894406 ) on Monday April 07, 2014 @10:31AM (#46684173)

        this is for all the people who tell people to install linux rather than windows. it's more of an ideological thing than a desire to help.

        I think often it is a desire to help by someone who misjudges the ability, desire to learn, and time someone is prepared to put towards solving a problem. I have heard people advise owners of old XP-based laptops to upgrade to linux because it's free. The people giving the advice often enjoy tinkering, see time spent getting it working and learning the new interfaces as fun, pick up new tech easily, and assume that the others will be the same. The person receiving the advice may see the time spent as boring, difficult, and wish they'd bought a new copy of windows (or a new laptop) instead.

        • I think often it is a desire to help by someone who misjudges the ability, desire to learn, and time someone is prepared

          And perception.

          Example: A friend of mine was still using MS Office 2003 because he hated MS Office 2007. Then, one day, he received an Office 2007 document that Office 2003 could not handle. I asked him to give me a copy of the file, then opened it in Open Office. He happily did what he needed to do, saved his changes, copied the file back to his PC and emailed it to whomever needed it. Then he asked me what version of Office I was running. When I showed him, he said "That's not acceptable. No one will be a

      • If somebody gave me the advice to keep windows and install the virus checker, then I'd ignore the advice and install Linux. At the end of the trial period, I'd hopefully be virus free, so that would be +1 to them under this system.
    • Well, obviously the sucker needs to install both McAfee and Norton, just to make sure his machine will be pest free for a few days...
  • don't go tecky on someone who doesn't understand what the word computer means. Ask them some basic questions on their knowledge on the subject and go from there. Adapt to their knowledge and understanding. If they learn slow, you need to teach them slow. If they learn like sponges...teach them fast and strong.
    • don't go tecky on someone who doesn't understand what the word computer means. Ask them some basic questions on their knowledge on the subject and go from there. Adapt to their knowledge and understanding. If they learn slow, you need to teach them slow. If they learn like sponges...teach them fast and strong.

      Also - don't take advice from Bennett Haselton. He comes across as quite a douchebag.

  • by oneiros27 ( 46144 ) on Monday April 07, 2014 @10:00AM (#46683793) Homepage

    How quickly it gets to the point.

    • by jellomizer ( 103300 ) on Monday April 07, 2014 @10:31AM (#46684169)

      Also how much it makes sense.
      If you are already a technical person and the advice makes sense, then it probably is good; if not, then it is probably BS.

      For example:
      "Don't use IE, use Firefox, because Firefox is more secure because it is Open Source" is bad advice, because being Open Source doesn't make it secure by magic.

      "Don't use IE, use Firefox, because Firefox is more secure because it will get updated regularly with fixes, because it is supported by a large community of developers who are interested in keeping it secure, while IE gets a longer interval between updates." This is better advice because it makes sense.

      We can explain things without getting overly technical. But you can't assume just because the person isn't technical that they are an idiot either.

       

    • by Anonymous Coward on Monday April 07, 2014 @10:55AM (#46684463)

      Even simpler: If it's written by Bennett it's obviously crap.

    • by u38cg ( 607297 )
      If it comes from Bennett, RUN LIKE HELL.
  • by NotDrWho ( 3543773 ) on Monday April 07, 2014 @10:12AM (#46683931)

    Someone will ask a question. This will illicit:

    10 responses from people who don't know what the fuck they're talking about
    3 responses from people trying to sell some solution that probably won't even work
    5 joke responses
    8 responses along the lines of "You're stupid to be asking this question."
    1 response that actually answers the question and provides useful information--this response is buried somewhere under all the responses above.

    • Re: (Score:2, Offtopic)

      You forgot - 2 responses pointing out that the past tense of breed is bred, not breeded.

      I propose it go between 8 responses... and 1 response...

      In other words some completely off topic response that points out the original person asking the question is either bad at grammar or can't spell.

      • In other words some completely off topic response that points out the original person asking the question is either bad at grammar or can't spell.

        See the post underneath this one QED

        illicit

        "Elicit". Unless you're asking how to break the law.

    • Re: (Score:3, Informative)

      by Qzukk ( 229616 )

      illicit

      "Elicit". Unless you're asking how to break the law.

    • by Anonymous Coward

      On Slashdot it's more like:
       
      10 responses to "just install Linux"
      3 responses from people who don't seem to speak english.
      5 joke responses (x10 if it's a science article)
      4 posts about how the free market has failed.
      A post that is modded +5 Insightful for blaming big [industry]/government/religion.
      And a reply to the +5 Insightful that explains that there's no way for some concept to work in this universe that is largely ignored or modded as overrated.

    • Exactly. And add this one:

      4 responses "google it, moron" (They tell you to search Google, when you already arrived at the forum in question because you have already researched on Google)
  • by Anonymous Coward

    If I wanted to read the article, I wouldn't have come to Slashdot.

  • You are not going to change human nature; most of us are lazy and will choose convenience over effectiveness. Also, you have a limited number of hours in a day, and only so many years to live. Simple changes that do not inconvenience you too much and do not take too much time out of your schedule are much more likely to be effective than more profound but nominally more effective changes.
    • Basically, this guy's advice is "let's all just succumb to inertia and think in the short-term only!!"

      If something's too hard, even though it'll produce better long-term results, it isn't worth doing according to this guy, because too many people are stupid and lazy and want quick fixes rather than to do things right, which takes more time. In the short term, this will seem like a winning strategy, but over the long term so much cruft will build up that we'll be in a much worse position.

      • If one piece of advice produces better long-term results than another piece of advice, you could measure this using WABR as well. You just have to check in after three years to see how both groups are doing, while still following the WABR rule of collecting results from everybody and not just from the people that you think "did it right". I mentioned that in the weight-loss example -- what you really want to do is check in to see who's kept the weight off three years down the road.
        • The problem is this depends on your definition of "long term". If choice 1 works out better 3 years down the road, but choice 2 works out much, much better 15 years down the road, we should probably pick choice 2. However, it takes a long time to see these results. So taking a wait-and-see approach really doesn't work that well for very long-term choices. You need to make intelligent decisions, rather than leaving things to evolution.

          Think about this problem: you have two choices for how to handle energ

          • I think this is true, just that it doesn't invalidate WABR, it just means WABR is impossible to use in this case. For energy usage, clearly the long-term consequences are what we care about, we just can't measure them in the short term. So we have to actually think, like you said :)

            WABR might be better for helping people to reduce their own personal carbon footprint. If someone tells me to reduce my carbon footprint by taking the bus, I'm going to point out that it takes twice as long to get anywhere u
        • Get your own damn blog.
  • by jqh1 ( 212455 )

    Does advice that crosses the TLDR threshold score well with CBR but poorly with WABR? From TFA, [brackets added]:

    > (if you make your advice hard to follow [read], that reduces the chance of somebody actually climbing that mountain [reading it]
    >and then pointing out to you if your suggestion didn't work). So it's not just that the advice-giver is being unhelpful, it's that they're being a dick.

    what is the TLDR threshold anyway? I'd love to see a quantification of the amount of information that can fi

    • by Pope ( 17780 )

      These days on Reddit, it looks like 3 sentences is enough for people to self TL;DR.

    • by tepples ( 727027 )

      Does advice that crosses the TLDR threshold score well with CBR but poorly with WABR?

      BH is aware of this irony [slashdot.org]. This is a first coherent draft to get the information out there in hopes that someone can improve it for better WABR effectiveness.

      what is the TLDR threshold anyway? I'd love to see a quantification of the amount of information that can fit inside it

      Above the fold [wikipedia.org] is related.

  • What happened to the one question per post rule? That should apply to the OP too.

    I am only going to answer one of the questions. " ... the metric by which ...(I)... usually judge tech advice"
    I judge tech advice (and most advice) by asking if the person giving the advice has done it before. If I am trying to set up a webserver, I will take the advice of someone who sets them up for a living over someone who has just read the manual. There is little substitute for practical hands-on experience.

    That go
  • by LordLimecat ( 1103839 ) on Monday April 07, 2014 @10:36AM (#46684225)

    Give actual advice that mitigates the most serious threats with the least possible effort, in order to seek the widest possible compliance. Giving "effective but not practical" advice is unlikely to be helpful.

    The real answer is that the person giving advice needs to be experienced both with tech and with customer interaction. Advice like "install Chrome, because it keeps your plugins up to date and mitigates the most serious flaws with no user interaction" is helpful. "Abandon windows for linux" is not helpful, nor is it helpful to think that you can give sufficient advice to make the asker an expert in technology.

    In the example given, the proper answer of "what to do with new laptop" would have been:
      1) Get Crashplan. It has sane defaults, and a cheap "backup to cloud" that requires no configuration and has encryption built in. It's literally set-it-and-forget-it.
      2) Get Avast AV. It's free, and has been well regarded for many years.
      3) Use Google docs, or Sky Drive. Pick one, and stick with it. It is recommended to pick one "ecosystem" in the cloud, and stick with it. If you go Google, you probably want an android as well. If you go Sky Drive, you probably want WinPhone. If you want iDevices, just get a Mac because iTunes sucks on windows and you don't want "worlds colliding" (Seinfeld had it right!).
      4) Use Chrome. It still has the best auto-update scheme out there and is still regarded as one of the most secure browsers; using it generally removes the biggest malware vectors (out of date java / flash / adobe). If you truly care about privacy concerns, they can be addressed in the settings menu: Google it.

    Pretty simple stuff, avoids common pitfalls of being a relentless fanboy, and addresses the most pressing concerns users will face.

    • I think this sounds like great advice and is better than 90% of the suggested "improvements" that were posted on my "4 tips for your new laptop" article. In particular thanks for the pointer about Chrome. Although can you explain what's the benefit of Crashplan's backup if you're already saving documents to Google docs / Sky Drive / Dropbox?
      • Syncing solutions are not backups, for many reasons, and they are not intended as backups. It's too easy to wipe a file out of Dropbox, and that deletion gets synced. A proper backup system doesn't allow that.

        Crashplan has the benefit that it by default grabs all data in the user's profile, it allows backing up to a local device (free) or a friend's computer (also free), and does versioning. It also will retain prior copies of a file. Dropbox etc may have versioning, but they might wipe out an older copy

        • Some people posted responses to my original article (in which I told people to use Dropbox) saying that Dropbox actually does maintain a version history, and if you delete a file (or make an update that you regret), you can get the old one back. I haven't tried it.

          Are you saying the main difference is that Crashplan's versioning goes back further? (Or that you think it probably does, if you're not sure how it works on Dropbox?)
          • It is possible, either on purpose or by accident, to permanently remove data from Google Drive and Dropbox. Backup plans generally do not allow this, though with some it is possible (in Crashplan you CAN trigger data removal by changing your backup target).

            That's part of it; part of it is the fact that it syncs, which can cause unnecessary consequences.

      • You don't even know the difference between actual backup and dropbox/google docs document sharing/syncing.

        Do you even read slashdot, or do you just come here to post your awful shit?

        • Most of the people who wrote this in response to the first article were unaware of the fact that Dropbox maintains a version history of your file for you.
          • Still fundamentally different things. Dropbox is meant for continuous live access (and is actually a handy way to get some semblance of version control for word docs and other binary files that don't play well with VCSs). It is also meant for syncing across computers--which doesn't make sense at all if you are using it to back up system-specific information/configuration files/etc. It will protect the documents you store in it, but it makes disaster recovery a huge pain since you are starting from scratc
            • I understand but I still don't see why Dropbox isn't good enough for the scenario you're describing. It gives you "quick and easy access" to the stuff you want restored from backup, after all.

              Unless you're talking about a system image or something so that you can get back up and running quickly without having to re-install all of your needed programs? In which case Dropbox wouldn't be appropriate.
              • Syncing solutions make it far too easy to change the "backup" copy. They are focused on syncing, not read-only backups. Backup solutions focus on immutable backups. The real issue is that you generally have far fewer assurances of data integrity with a syncing service, and if something nasty happens on the server (which is not impossible) you lose everything because the nastiness is pulled down. With a backup service that is not the case. This is also why rsync is a poor backup plan: it's all too easy to

  • by prefec2 ( 875483 ) on Monday April 07, 2014 @10:37AM (#46684233)

    "WABR" and "CBR" are different metrics and they can support to reason about different questions. While CBR allows you to compare two applied advises by their effectiveness, the WABR metrics answers to ability of people to apply one of the two advises. For example, the given Linux vs. Windows+virus scanner comparison, has two different solutions. As it is easier to apply the virus scanner advice, more people will be able to apply it. So if your goal is to reduce virus infections in general, it might be the solution which provides better results in shorter time. However, the other solution has the higher potential to get rid of problem after all. The second solution is more radical, but in the end (when implemented) the result is better.

    In an engineering context you would go for the better implementation advice even if it is more expensive (more learning time, more money, etc.) to switch technology. This is especially true in risky environments. However, for average people this might not be the selected route. In the end they take the easier path and end up with more complex solutions which do not fully achieve what they need, but they are even more unwilling to change because it was so complicated to get there. So if you want to do them a favor, give them the more complicated advice and support them in getting there.

    BTW: The example with "eat less, exercise more" is flawed, as to be measurable you must specify limits. It is like suggesting to use more virus checkers or update more often. Furthermore, eating and living habits are hard to change, as you have to change behavioral patterns. In such cases, small steps, reflecting on behavior patterns, and avoiding situations which cause those patterns to be triggered are more helpful. In technology it is merely about helping people to be brave enough to make the transition to a new system. They do not really have to learn new behavior.

  • by Anonymous Coward

    On the other hand, it's hard to imagine an alternative exhortation that would achieve a better result.

    Translation: Because I can't think of a better way, I must be right?!

  • by raymorris ( 2726007 ) on Monday April 07, 2014 @10:43AM (#46684297) Journal

    I actually read the whole thing because I'm killing time waiting for something. I think the conclusion is mistaken, though it does have a kernel of a good idea in it. Taken strictly, his suggestion is dangerous.

    It may be that more people will follow the advice of "wear your seatbelt while you text and drive" than "don't text and drive". Still, the former is bad advice.

    Both measures are actually important - what gets the best results (best practice) AND what's most likely to be followed. In the example of avoiding viruses, it would be false to teach that running Avast is the best security from viruses. Running FreeBSD is several orders of magnitude more secure from viruses. The best advice, therefore, acknowledges that fact:

    For security-sensitive systems, consider a secure OS such as FreeBSD or Linux. (The national security agency uses Linux for their top-secret systems). If you decide security isn't important enough to leave Windows, then AT LEAST run up-to-date antivirus. For Windows users, we recommend the following anti-virus.

    That, I think, is the best advice. In security, I regularly encounter people who have been confused, having been taught the "at a minimum, do this" in a way that led them to believe that minimum is the best that can be done.

    To address the weight loss analogy, the best advice would consider both, as follows:
    Try to exercise 1-10 hours per week. A morning jog EVERY morning is great. At minimum, park in the back of the parking lot at work and walk two minutes to the door.

    • Good point, perhaps the best advice is something along the lines of, "If security is a very high priority, switch to Linux, otherwise, install such-and-such." I mentioned in the article that you could segment your advice by different cases -- If your scenario is A, do B, but if your scenario is C, do D -- and still call the whole thing "one piece of advice", as long as it's still judged based on the average results achieved by everybody who hears it, without nit-picking whether they "followed" it or not.
      • That's true, you did point out you can put two pieces of advice together, as we did in the "if security is a high priority" scenario.
        I guess I'm taking it a little further. WABR, measuring _effectiveness_, is certainly a valid and important way of measuring how "good" a suggestion is. I'm thinking also that correctness, accuracy, is also an important measure of goodness, separate from its part in effectiveness.

        Some correct, accurate advice could be nearly impossible to follow, so it might have no effect o

      • Another thing that came to my mind when reading this is that the people who are most motivated are the ones who most need to hear the truth. Optimizing for the majority, who are unmotivated, is optimizing for those who are the lowest priority.

        An example that came to mind is alcoholism, hardcore, chronic alcoholism. There is a process by which a hopeless alcoholic can recover, but it's extremely hard. 95% of people who are "interested in cutting down on their drinking" won't do it. The other 5% have had their lives so utterl

  • Since when has Slashdot of all places become accepting of general mediocrity over personal excellence?

    " Perhaps the advice-giver wants to sound smart, or simply wants to avoid the possiblity of having to admit they were wrong (if you make your advice hard to follow, that reduces the chance of somebody actually climbing that mountain and then pointing out to you if your suggestion didn't work). So it's not just that the advice-giver is being unhelpful, it's that they're being a dick."

    Well, I'm glad the autho

    • Since when has Slashdot of all places become accepting of general mediocrity over personal excellence?

      I didn't read it as intended for Slashdot users giving advice to other Slashdot users, who are on the whole more likely to comply. I read it as intended for Slashdot users giving advice to the general public.

  • ha, ha, breeded...ha, haha, ha, breeded...
  • by feenberg ( 201582 ) on Monday April 07, 2014 @10:50AM (#46684409)

    Economists and doctors have been using the WABR concept for many years now. They call it judging results by "intention to treat". So if 100 people are offered a training program or medicine, and only 90 complete the course of "treatment", the base for the percentage successes is 100, not 90. This is a pretty important idea when judging any experimental treatment on humans who can decline after enrolling. It wasn't so much a problem when the treatment was fertilizer on a field.

    • Economists and doctors have been using the WABR concept for many years now. They call it judging results by "intention to treat". So if 100 people are offered a training program or medicine, and only 90 complete the course of "treatment", the base for the percentage successes is 100, not 90. This is a pretty important idea when judging any experimental treatment on humans who can decline after enrolling. It wasn't so much a problem when the treatment was fertilizer on a field.

      Thank you. I was going to post this. His entire premise about the results being skewed by people dropping out of a study is incorrect. If the one program of bland diet and strenuous exercise worked well, but 90% of the participants dropped out because they hated it then that program's standard ITT stats would reflect this (and it would be scored lower than a more moderate program that had fewer dropouts).

      It was at this point I stopped reading the submitter's premise. This counts as irony, right? Giving advi

      • Thanks for the pointer to the "intention to treat" term -- I knew vaguely that there was a term for this in medicine specifically, but I couldn't find it.

        I knew this wasn't an entirely new idea. I just think there isn't enough general awareness of the importance of judging advice by this metric, even casually given advice.

        There must be doctors working with Michelle Obama who are aware of the "intention to treat" rule, for example. And yet they still signed off on the "Let's Move" campaign, which proba
        • I've read a few of your posts so far. Something that comes across to me is that you seem to be building all your ideas from scratch. I'm not going to tell you that there's no point in doing that, but when you come to basically the same conclusion as the rest of the world came to decades ago, that's a sign that you're not insane, not a sign that you have some revolutionary idea. Everyone goes through stages where they think that the entire world is controlled by people who don't know what they're doing; but

          • I should have been more clear that I didn't presume to think this was a new idea. (Perhaps people took it the wrong way that if I genuinely thought this was a new idea, I was so naïve that they didn't need to read the rest of it.)

            Rather, I think this is something that a lot of people kinda know, but keep forgetting. I think it's something we should all keep foremost in our minds when giving advice ourselves, or in critiquing advice given by other people. I mean, surely you agree that there is a lo
  • How would you tell someone to prepare for their first year at Burning Man?

    It's absolutely identical to the Gathering of the Juggalos, but hugely more pretentious.

  • Long winded, but nice.

    It reminds me of the "coffee is for closers" scene in Glengarry Glen Ross.

    There you have Alec Baldwin, yelling and insulting salesmen, telling them they are crap and they should only get paid for results. That companies hire people to do things, not to 'try' and fail, and they should get no 'good leads' and no coffee because they are incompetent closers.

    The problem is he himself is NOT doing a job. Alec Baldwin - despite his claim to be rich and successful - may or may not be tel

    • You completely missed the point of that movie.

      It is a fairly accurate representation of pure commissioned sales jobs. They do only get paid when they close. They do sell garbage to suckers. The higher the commission, the worse the deal they are selling: e.g. timeshares, funeral plots, whole life insurance, over the counter stocks.

      The point of that movie is that _everybody_ doing that job was a scumbag. Jack Lemmon was just as much a douche as Alec Baldwin, just an unsuccessful douche. They were all thie

      • I understand the movie and in general agree with you about the point - they were scum who did not know they were scum. But it does not in fact change that Alec Baldwin was acting as a motivational speaker to a bunch of salesmen and his advice did not work and could not work.

        Alec Baldwin's speech was designed to serve the point of the MOVIE, not to actually do the job he was told to do (motivate the douchey salesmen).

        Instead of motivating him, it insulted him. While he deserved it, it doesn't change th

        • Don't ask about my second job out of school.

          First and second level sales managers also only make money when someone closes. If someone isn't a closer they want them out, not taking up a desk and sucking down coffee. The motivation in Baldwin's speech was for 'the closers'. New guys have maybe 3 months to produce. There are always more wanna-be sales weasels. Only closers matter to the managers.

          At least I was running their computers etc. Not actually selling. On the other hand I saw a $100,000 commish on

  • by robot256 ( 1635039 ) on Monday April 07, 2014 @10:54AM (#46684449)

    In recent years at least, this is precisely the method they have used to develop CPR training for the general public. Even if a more complicated routine would result in a better chance of survival in any given case, they have to make the rules simple enough that people can remember and apply them years later and under stress. This increases the statistical survival rate overall, which is exactly the point.

    But I agree with everyone else, you could have explained this to a mildly intelligent person in about 1/4 of the words.

    • by Zalbik ( 308903 )

      you could have explained this to a mildly intelligent person in about 1/4 of the words.

      This

      I find it ironic that a post describing the benefits of "whole-audience based results" so completely and utterly fails its own success criteria....

    • I think you're right, someone could probably write an article making a similar point that would score better under WABR, one reason being that it could probably be made shorter. I just wanted to get the point out there because right now there was nothing else making this point, that I could find (at least with regard to tech advice).
  • by gurps_npc ( 621217 ) on Monday April 07, 2014 @10:59AM (#46684523) Homepage
    Time and time again, people give crappy dating advice, not because their advice is wrong per se, but because it is worded in a way so as to ensure that the person is LEAST likely to actually do it correctly.

    A prime example is 'be yourself'. Most people who are still dating don't know who they are. Saying 'be yourself' is not in any way helpful. The proper way to word that bit of advice is 'do not try to be something you are not'. The advice is at heart the same, but the second way is fundamentally better advice because it is far more likely to be understood and used.

    Another example from the dating world is "Be confident". That is about the worst advice possible, because 1) confidence, like height, weight, your bank account, etc., is something not under your direct, immediate control. You can't decide to be confident any more than you can decide to be tall, thin, or wealthy. Yes, with years of hard work, you can become those things (tall requires HGH injections before your bones close - but it still takes years of work). And also 2) most women are incredibly bad at detecting confidence, often mistaking disinterest, a slight alcohol buzz, practiced smoothness, ignorance or mere blind chance for confidence. Here, the proper advice is not to 'be confident', but instead to practice with girls you don't want to date before you ask the girl you want to date out. That method helps a lot with the behaviors that women mistake for a lack of confidence.

    • Exactly, WABR would be the right test for all of these and I do think your alternatives would work better.

      Although, virtually no dating "advice" for men can ever work really well, because studies have shown that when you ask women under lie-detector conditions, the thing they care overwhelmingly about is physical looks:
      http://psycnet.apa.org/index.c... [apa.org]
      and this corresponds pretty well with actual dating preferences as revealed by which men get the most inquiries on dating sites, etc. This might actuall
      • I know that women, for the most part, care a LOT more about superficial stuff than they admit - studies say height is the single most important factor.

        But I do believe my alternative advice does help at least a little bit. Among other things, the advice to talk to women you don't want to date may end up introducing you to someone that, while not preferable at first, you may decide is acceptable after you get to know her.

  • First off, kudos for sharing your codification of a useful and rational approach to evaluating advice, which I think is extensible to plenty of other things (don't buy your parents an extremely complicated DVR, don't buy your kid a Standard Transmission if you're not planning on teaching them how to use it, you probably don't need another infomercial kitchen appliance, etc.). But you missed a very important part of giving advice: don't treat your audience poorly! For example, claiming that your version of g
    • I think these are both excellent points -- that if you are trying to increase utility for a single person, then WABR advice should be based on estimates for the population that you think that person is part of. And if you're giving advice to a group, then the goal should depend on whether you're trying to achieve best-possible outcomes for a subset of the group or only pretty-good outcomes for most people in the group. Thanks.
  • I'd phrase it like this:

    If the advice you gave was too difficult to follow, you didn't take your audience into account. / If the advice they need requires extra knowledge/effort, be there to help them implement it.

    On the whole however I think the idea is spot on. Could do with some <h1> and <h2> lines to help the TL;DR crowd.

    • Anybody who has tried to put a bog-standard user on Free Software Only laptops (Yeelong or X60 exclusively) with only Free Software and no proprietary.... knows that the user runs screaming back to the motherly proprietary vendors with reinforced assurance that the FSF are nuts. And we all lose.

  • by njnnja ( 2833511 ) on Monday April 07, 2014 @12:04PM (#46685329)

    Halfway through the dieting example it became clear that the author is completely unaware of multiple regression techniques, instrumental variables, or bayesian analysis, let alone experiment design. One would expect at least a cursory literature search (or even google) before writing so much about what is effectively a solved problem. He probably invents his own sort methodologies, revolutionizes page ranking algorithms, and rolls his own cryptographic hashes too.

    • Do you have a better alternative, and an argument as to why the alternative is better? (I will assume that all answers of the form, "Yes I have a better alternative, but I'm not going to say it" are equivalent to "I don't have a better alternative.")
      • by u38cg ( 607297 )
        If someone claims to be Napoleon, rational debate will not make him any less certain that he is a short and somewhat acquisitive European emperor.
      • by njnnja ( 2833511 )

        Yes I have a better alternative, and I'll do my best to help you out in the length of a comment. Everyone who runs social science experiments nowadays knows that there are problems with interventions; namely, some people who are assigned to the intervention group will not do it (e.g., proper diet and exercise) and some people who are assigned to the control group will actually do the intervention (e.g. they will eat healthy and exercise even if you don't tell them to). Modern statistics deals with these is

        • I understand, but all of that sounds like it's supporting the approach I was proposing: Divide your volunteers randomly into two groups, give Program 1 to the first group and Program 2 to the second group, and then check at the end of some time period which group has better results.

          Would you suggest modifying that approach in any way based on your knowledge of statistics?
          • by njnnja ( 2833511 )

            tl;dr Yes I would modify your approach. You are proposing a solution that grossly oversimplifies the problem by making a huge assumption that rarely holds in real life. It's not even wrong [wikipedia.org]

            Take 100 volunteers, divide them randomly into two groups...
            ...But if you're giving your advice to 50 people in Group 1, and someone else is giving different advice to 50 people in Group 2, the samples are large enough that the proportion of unmotivated people is going to be about the same in each group -

            That is a huge assumption that will not be true. Simply dividing a group randomly does not make the raw results coming out the other end meaningful. Do the 50 people in group 1 have the same starting weights as group 2? Same disposable incomes? Same amount of free time? Same stress level at home? Same family history? What

            • Well, in any case, all of this seems to be saying that it would be hard to actually measure WABR in practice. However I don't think that detracts from my main point, because when giving advice you're almost never going to have the opportunity to split people into large test groups and actually measure the WABR of two different pieces of advice. You just have to consider if two different pieces of advice were given to exactly identical groups of people (which can't happen in real life, but the concept is wel
              • by njnnja ( 2833511 )

                What I know is that the issues that you are trying to address, namely, exposure to an intervention vs. effectiveness of that intervention, is exactly what real researchers deal with all the time when they say something like "Telling people to eat less and exercise more is/is not an effective weight loss strategy." It's not hard to get appropriate metrics for it and interpret those metrics to make a conclusion like that if you know what you are doing. Just because you are ignorant of them doesn't mean it can

  • "What advice would you give someone who just bought a new laptop? What would you tell someone about how to secure their webserver against attacks? For that matter, how would you tell someone to prepare for their first year at Burning Man?

    Lotion. Lots of it.

  • oh god. shut up! (Score:4, Insightful)

    by iamagloworm ( 816661 ) on Monday April 07, 2014 @12:58PM (#46685931)
    can we please stop posting this asshole's diarrhea?
  • The writing is terrible, as others have mentioned. I won't rehash, but how could you get anyone to agree when they can't maintain interest?

    Also, the premise is poor - at least as I understood it - "Advice is only good if it is followed"?? People don't do simple things, so the problem is not the advice, it is the inherent laziness or apathy of most people.

    Using your example - advice to quit smoking is not good as people still smoke. Sorry, the advice is still valid, but for some reason people feel that

    • I think it works better as a comparison between two different pieces of advice, than as an absolute measurement.

      For example, if one person tells their friends "Quit smoking!", and the other person tells their friends about a specific smoking cessation program or e-cig product that is known to produce good results, then the second person will extend more of their friends' lives on average. Isn't that a good thing? Why would anybody want to be the first person instead of the second?
  • The headline was confusing enough that after

    "What advice would you give someone who just bought a new..."

    I was expecting 'judge'.

  • Are you guys changing the name to Bennett Haselton's blog of Ignorance? It seems you've already changed the content you post to at least one Haselton post a week, and he's a fucking moron.

    Since I've already asked how much it costs to get regular front page posting like he does, let me try this a different way:

    Who do I have to blow to get such front page posts all the time?

    Or are you guys letting him post this crap to the front page because he lets you blow him?

  • How do you get this shit to the front page Bennet? Serious question: I have a bunch of articles about where bears shit and how wet water is that I want to share with the /. audience.

  • Zzzzzz..... *urp* *wake up*

    Is the article over yet?

    Wake me up when bozo boy learns to follow his own advice and be concise.

  • When asked for advice, I usually start with a "best practices" answer, and then follow up with "but if that's too much, at least do ..." Basically, the best answer, followed by the answer that's most likely to be followed if they decide the best answer is too difficult/time-consuming/more effort than it's worth/whatever.
  • ...try a 4-year degree in education. As a second-year teacher with just a BS Ed. (no Masters work yet), this exact problem (optimizing presented information so it is useful and effective to the majority of your audience) is a daily struggle. I like where this is going, but to anyone who thinks this is long: teachers learn this in college and then spend their careers perfecting the methodology - at least those that don't get burned out along the way...

    Oh, and he's only dealing with people who WANT the informati

  • It is not always necessary for an article to be well written for it to have a good idea at its core. As a technical writer and former network manager who spent over a decade on Usenet giving and reading advice, and who has also studied education, I know that the success rate of any advice is very audience-dependent. If your audience is going to be regular people, then you have to give advice that regular people can or will follow. If your audience is tech people, then you can be more technical. Duh. Sometime
