The Power Of Deep Computing

IBM's May 24 announcement that it is funding a Deep Computing Institute made news on the Web, but little offline. That's a shame. Deep Computing is a hugely significant convergence: deep corporate pockets, the open source software model, artificial intelligence, powerful new 3-D visualization programs, a new generation of supercomputers and some of the best researchers in the world. It won't solve all the world's problems, but it will sure tackle them in a radical new way - especially the ones whose solutions have been beyond reach.

Think what Hollywood's late and loopy sci-fi producer Ed Wood ("Plan 9 From Outer Space", "Night of the Ghouls" and other spectacularly dreadful classics) might have done with the idea of a Deep Computing Institute. It would probably be in a gothic tower filled with giant machines like the Johnniac that sits in the old computer museum in a Silicon Valley NASA station, a monstrous thing with bulbs, tubes and heavy metal casing. At some point the machines would surely run amok, vaporize everybody in sight and take over the minds and power systems of the world.

Sometimes it seems that computing is outrunning the imagination of sci-fi writers and producers. Reality is constantly overtaking them.

We now do have a Deep Computing Institute (DCI). This week, one of the biggest technology stories of the year - perhaps of several years - the institutionalization of deep computing by one of the most powerful corporations on earth - was buried deep inside the news pages of most of the country's newspapers, if it was reported at all.

This has always been the media's big problem with technology - the less significant the story (pornography, for one), the more widely it is disseminated. That's why 80 per cent of Americans actually believe the Net was partly to blame for Columbine, that cyberstalkers are likely to pounce on their children, or that video games turn kids into killers.

As a result, people are continuously blindsided. When it comes to technology, Deep Computing is a blockbuster of an idea.

Typically, hardly any Americans even know what was reported all over the Web Tuesday - the world's largest computer maker is creating a Deep Computing Institute (see Tuesday's Slashdot and http://www.zdnet.com/zdnn/stories/news/0,4586,2264162,00.html), an evolutionary movement in technological and computing history that could affect many more people's lives than online pornographers ever will.

Deep computing techniques involve supercomputers, advanced software, complex mathematical formulas and a consortium of researchers who try to take on some of the world's most complicated and elusive challenges - the weather, for one, and business problems like complex airline personnel and flight scheduling.

Supercomputing isn't some utopian fantasy, and it is vastly more significant than the raging hype about Internet stocks. NASA and the National Weather Service have been using supercomputers for several years (so, presumably, have the NSA and CIA). Instead of saying there's a 40 per cent chance of rain tomorrow afternoon, the NWS can now say it will rain from 2:15 to 3:30 p.m. Instead of making forecasts for standard 30-kilometer grids, supercomputers can narrow them to one kilometer (the storm will be in Queens, not the Bronx).

It's logical that Deep Computers will be asked to consider some of the world's most intractable social as well as business problems, especially in an era when politicians would much rather investigate people's sex lives than even discuss serious issues.

Perhaps, and blessedly, the politicians won't have to. The world is getting a potent new tool for problem solving and decision making.

The UN or other international monitoring agencies might use supercomputers to predict famine or natural catastrophes. They might spot, or even foresee, the spread of dread plagues like AIDS. Failing that, they could greatly accelerate the search for cures.

William Pulleybank, director of the new institute, says deep computing will use extensive computation and sophisticated software algorithms to tackle problems previously beyond reach. Pulleybank says he thinks it's time to use scientific modeling for decision making. Supercomputers could, for example, help utilities plan power plant use and daily trading strategies in the spot market for electricity.

Semi-super computers have already been used to reduce crime in cities like New York by collating vast amounts of data, tracking police reports and incidents, and predicting where serial attackers and robbers might strike again. The DCI's machines will be a lot bigger.

In the digital age, information really is power, and the DCI will have an enormous chunk of it, plus the means to sort and visualize it. IBM is spending $29 million to set up the DCI, which will be linked up to an advisory board selected from universities, government labs and corporations.

Deep Computing brings a number of technology's most significant contemporary forces together: artificial intelligence, the growing power of computers to store and analyze vast amounts of data, even the open source model of distributing software. Unthinkable just a few years ago, open source is becoming synonymous with rapid innovation and creativity, even among the world's most powerful corporations.

As part of IBM's project, the DCI will publish the IBM Visualization Data Explorer on the Net. This is a powerful software package that can be used to create three-dimensional representations of data. The underlying programming code for Data Explorer will be given away as open source software to researchers at the Institute's Web site, beginning May 25. The site will be located at http://www.research.ibm.com/dci/software.html

The creation of the DCI is a big move for IBM, as well as for computing, partly because it's an expensive risk, but even more so because it reflects the kind of creative risk-taking so rare among big corporations. The term "deep computing" was inspired by the company's Deep Blue chess-playing computer, which defeated world chess champion Garry Kasparov in 1997.

Futurists like Ray Kurzweil and Freeman Dyson frequently cite the chess match as a landmark in the evolution of artificial intelligence (AI), a first step toward computers that will inevitably match the human brain in memory storage and decision making.

Kurzweil (The Age of Spiritual Machines) expects computers to store as much information as the human brain (and pass the Turing Intelligence Test) early in the next century. IBM is positioning itself and its institute as the premier center of this advanced kind of problem-solving supercomputing, a far more visionary step than anything Bill Gates (who has tons of cash in the bank, and is - unlike IBM's executives - slobbered over continuously in the media as a millennial visionary) has ever advanced.

If Kurzweil and other inventors are right, supercomputers are about to become a lot more super.

Deep Computers might help us sort through still-confusing statistics on issues like homelessness: how many, and where? Or spot famines and monitor population overcrowding, pollution, global warming, asthma and deforestation, all problems shrouded in confusing statistics and conflicting data. They could track job opportunities and coordinate a changing economy with educational and training institutions.

If some of the most specialized existing data on the planet were focused on specific medical problems, treatment and research would be greatly accelerated. Supercomputers could collect and visualize medical research on cancer and other dread diseases, even as thousands of disparate researchers inch toward various possibilities for a cure in hundreds of different places.

Perhaps supercomputing could do for ethnic and regional warfare what it does for weather: warn us about where it's likely to occur. Is political unrest - Rwanda, Kosovo - cyclical or predictable in some cases, as crime has been found to be?

Human behavior is, in many ways, less predictable than weather and power needs. Deep computing can't present miraculous cures for humanity's problems, but it just might permit society to approach its biggest problems in a new way, as networked computing is beginning to do in so many other areas of American life.

IBM has also further legitimized the open source model of distributing information and programs. "Where open source really works," Pulleybank says, "is where you build a community to accelerate innovation. And we think this advanced visualization application will attract that kind of community."

The kind of community Pulleybank is describing - some of the world's best researchers choosing tasks and problems for an ever more powerful new generation of supercomputers, rapidly speeding up research, collation and problem solving, while giving anybody who wants it free access to visualization software - is amazing, if it actually comes to be.

The DCI could be a sort of digital Los Alamos running globally and openly, working continuously with some of the best minds in the world to go after problems whose solutions have been beyond reach.

What a big and fascinating story. What a shame so few people will get to read or see it.

Some questions: Will the DCI really accelerate research and problem solving? How powerful do computers have to be to use IBM's visualization software? Can this model really approach problems in a completely different way?


Comments:
  • by Anonymous Coward

    Contrary to popular belief, not all problems are amenable to `beowulf' style computing. In fact, not many of what I personally consider the most interesting problems can be handled well by such clusters.

    What Beowulf clusters do handle very well are what are known as `perfectly' or `embarrassingly' parallel problems - Monte Carlo studies, for example (a minimal sketch follows this comment).

    Systems of coupled linear equations (e.g., weather) are not quite so simple.
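
    To make the distinction concrete, here is a minimal sketch of an embarrassingly parallel Monte Carlo job - in Python, with an illustrative worker count and sample size. Each worker runs with no communication at all, which is exactly the shape of problem a Beowulf-style cluster handles well:

    import random
    from multiprocessing import Pool

    def count_hits(n_samples):
        # Count random points that land inside the unit quarter-circle.
        hits = 0
        for _ in range(n_samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        n_workers, per_worker = 4, 250_000
        with Pool(n_workers) as pool:
            # The workers never talk to each other -- that independence
            # is what makes the problem "embarrassingly" parallel.
            hits = sum(pool.map(count_hits, [per_worker] * n_workers))
        print("pi is roughly", 4.0 * hits / (n_workers * per_worker))
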
  • by Anonymous Coward
    You really write good articles about social topics. I, for one, enjoyed the Hellmouth stories very much. You are, however, to say it in a very friendly way, technically uneducated. Thus, please, refrain from writing about technical topics, even if you believe that you understand what is going on.

    A few clarifications:

    * Supercomputers are not called 'Super' because they are super. Rather, the name reflects the way these computers work. Supercomputers usually are vector-processing machines, meaning they can very efficiently apply uniform operations to large data sets, which is called SIMD parallelism (a minimal sketch follows at the end of this comment).

    * Supercomputers have been used for more than a few years by a lot of parties including the military (atomic weapons simulations) and car manufacturers (crash simulations, aerodynamics).

    * This is not a risk for IBM. 29 million dollars is at best pocket money for an organization of this size.

    * Artificial intelligence has nothing to do with supercomputing. Also, Deep Blue's slaying Kasparov was not a breakthrough but simply inevitable, because at some point brute force wins over knowledge and intuition. It's just a question of quantity.

    * Computers reaching the level of complexity of the human brain does not mean anything. We are all very well aware that hardware will at some point pass this level of complexity. What we need is the software to put this hardware to use, and things don't look that bright in this department.

    bye
    schani
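
    As an aside, the uniform-operations point above can be sketched minimally (assuming Python with NumPy; the single vectorized expression is the style that maps naturally onto SIMD/vector hardware):

    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # One operation applied uniformly across whole arrays -- the
    # SIMD/vector style a traditional supercomputer is built for.
    c = 2.0 * a + b

    # The scalar (one-element-at-a-time) equivalent, for contrast:
    # c = [2.0 * a[i] + b[i] for i in range(len(a))]
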
  • by Anonymous Coward
    Just a little clarification on your clarification:

    In recent years, "supercomputers" don't use SIMD. They use explicit parallelism SISD chips. Basically, true SIMD machines went under when CRAY was purchased by SGI. SGI still makes some of the most powerful "supercomputers" in the world, like Red Mountain, etc., which are basically a huge number of SGIs with a fast backplane.

    Whether the term "supercomputer" originated due to vector processing or not, it generally applies to all "big iron", incredibly powerful computers.

    As an aside about military use of supercomputers, a couple of years ago, I spoke to a programmer who did one dimensional atomic explosion simulations during the sixties. The idea was that an explosion was spherical and thus uniform for any vector beginning at the sphere's center going out to a specific distance. This was at a time when a PDP-7 was considered a "supercomputer".

    Jason
  • by Anonymous Coward on Thursday June 03, 1999 @04:44AM (#1868764)
    I know it's easy to say that Deep Computing will bring us the HAL 9k. I know it's easy to say that the world would be better if we just keep our Leave-It-To-Beaver attitude and try to keep the 1950's version of a "real family" and not let technology get in the way. It's just too damn easy. But things don't happen that way. People begin to think differently.

    Times Change.

    IBM sees and understands this. What a concept! Bringing all of these powerful computing and intellectual minds together to make something that may benefit everyone. But don't let anyone even mention AI. Because that's "Artificial Intelligence," and that means Terminators, 13th Floors, Matrixes, and all sorts of other science-fiction-caused myths that everyone is afraid of.

    Riddle me this: if Microsoft had begun this project, what do you think the media response would be? If you thought the world saw them as Big Brother before, this would have been the boiling point. "There goes Bill, trying to take over the world again." But no, this is IBM, a company that went against the grain, I believe, not only to help and bring something new and exciting to everyone, but to show everyone that it was not afraid to take chances, to put money into something that has no real starts and endings, something that may bring us greatness - or failure.

    I applaud everyone at IBM who had this notion. I applaud everyone who thinks this can work and is willing to work at it. I do not applaud or comprehend those who are so stuck in their own misconceptions about what Deep Computing is, with 1984 so firmly on their minds, that they cannot see the benefits. This might bring us Terminators. This might bring us Star Trek. It might bring us a new kind of Super-Twinkie, but that doesn't matter. The point is we're trying, we're trying to get something better out of this world, out of these things called computers that we have so long slaved over to make faster and better. Artificial Intelligence is only as good as you make it. Supercomputing is only as good as you make it. If you want to do both or either, you have to be willing to jump off that bridge first - because you're the only one who realizes that it's not all water down there; it may just be cotton.

    Evan Erwin
    obiwan@frag.com
    Systems Administrator
    The Citizens Bank of East Tennessee

  • "Big computers can model complex problems."

    And this is supposed to be news?
  • Posted by Perkolater:

    Isn't Super-Twinkie opening for Gran Torino tomorrow night at the Cat's Cradle? Those kids put on a damn good show.

    Where's my rimshot?

    -David
  • Posted by TheCanoleCaptain:

    The Deep Computing Institute will be able to make some great progress as long as they don't get mired in difficulties deciding what to work on next. So many things to compute - so few computers... :)

    What's really missing here is the power of Massively Parallel Computing. The folks at SETI@Home have really landed a great piece of work. Instead of spending a ton of money on in-house computing resources, convince a million individuals around the world to spare their computing resources while they are not using them. I have 2 computers at work running 24/7 churning on nothing most of the time, and 3 at home running part time. Cooperatively leveraging the massively parallel resources of the net, while understanding the chaotic nature of each individual node, will be the internet's greatest benefit to mankind - and to the individuals who are truly able to make it work. :)
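
    The SETI@Home pattern described above - a central pile of independent work units farmed out to volunteer nodes - can be sketched minimally in Python, with threads standing in for volunteer machines (all names and numbers here are illustrative):

    import queue
    from threading import Thread

    def crunch(unit):
        # Stand-in for analyzing one chunk of telescope data.
        return sum((unit * i) % 97 for i in range(100_000)) / 100_000.0

    def volunteer(work, results):
        # Each volunteer node pulls independent units until none remain;
        # nodes never talk to each other, only to the central queue.
        while True:
            try:
                unit = work.get_nowait()
            except queue.Empty:
                return
            results.append((unit, crunch(unit)))

    work, results = queue.Queue(), []
    for unit in range(32):          # 32 independent work units
        work.put(unit)
    nodes = [Thread(target=volunteer, args=(work, results)) for _ in range(4)]
    for t in nodes:
        t.start()
    for t in nodes:
        t.join()
    print(len(results), "work units completed")
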
  • Posted by The Apocalyptic Lawnmower:

    Dear AC,

    I do not completely agree with your definition of "supercomputers". The scientific community can probably not even agree on one, except that all definitions of supercomputers have in common that they are relatively fast.

    Actually, I've seen a definition of the term "supercomputer" that said it is a relative term, depending on the current state of the art. At that time (according to that source) a supercomputer meant any computer that had a peak performance of more than 1 GFLOPS. Now, it would probably be 5 or 10 GFLOPS, given that your home computer can have a P3 or K6-2 with vector instructions that can peak over 1 GFLOPS (the peak figures are simple arithmetic - see the sketch after this comment).

    Typically, it does not matter what the architecture is, as long as it can be viewed as a single machine. Yes, vector parallel computers have been the most powerful for years, and the latest generations of these computers are still supercomputers.

    However, massively parallel computers should not be ruled out. The question of Beowulf clusters being included in these is interesting; the fact is, they can provide the required performance.

    - ze Apocalyptic Lawnmower
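
    For what it's worth, the peak figures in such definitions come from simple arithmetic; a minimal sketch (the machine parameters below are hypothetical, chosen only to illustrate the calculation):

    def peak_gflops(clock_ghz, flops_per_cycle, n_procs):
        # Theoretical peak = clock rate x FLOPs issued per cycle x processors.
        return clock_ghz * flops_per_cycle * n_procs

    # A hypothetical desktop with 2-wide vector units, vs. a
    # hypothetical 64-processor vector machine.
    print(peak_gflops(0.5, 2, 1))    # 1.0 GFLOPS -- near the old cutoff
    print(peak_gflops(0.5, 4, 64))   # 128.0 GFLOPS -- big-iron territory
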

  • >> Unfortunately I do not have the time to write
    >> either an introductory tutorial on scientific
    >> computing

    The shorter version: see under "numeric integration" and/or "matrix/tensor manipulation"...

    >> Er.. Jon, who do you think used what you call
    >> "deep computing" before? Some kids in a garage?

    some graduate student at MIT with a secretary, maybe (viz. the original Connection Machine). Granted this is not representative, but then again, things like Beowulf (don't start) are making embarrassingly parallel simulation much more accessible to the average lab or business. And that appears to be the (muddled) point that Katz is getting at, i.e. that IBM has effectively recognized that Joe Average might have something to contribute back to the field. Not trivial.

    Beyond that, there's some sort of relatively understandable source for people to go look at now when you need to explain why on earth you'd want a 1024-node Origin 2000 or a real Beowulf. "No, it doesn't play Quake any faster..."

    >> Jon, I don't think you understand what
    >> supercomputers do.

    Kaa, I don't think you are acknowledging what some people use supercomputers to do. Realtime visualization ("intelligence amplification" in some peoples' jargon) is at least as useful as numerical simulation (more often complementary to it, as a tool for extracting useful conclusions) and provably more so than AI in a general sense.

    If you do understand this, then you're purposely ignoring these uses in order to flame Katz, which, while tempting, is irresponsible. Not least because the general readership of Slashdot isn't going to rebut you, and you know it.


    >> They have not magically acquired any
    >> problem-solving technology. All they do is
    >> crunch numbers, usually vectors and matrices,
    >> really really fast. The class of problems
    >> suitable for these machines is not big at all.

    Now you're really misleading people. Relational databases are quite useful to the general computing public -- you are using one whenever you post to Slashdot or make a withdrawal at the ATM. This single application alone has probably done more to advance the practice of high-end computer engineering than AI, the NSA, and the NSF put together. (the military... well, that's inevitable) Anyways, not everyone uses their SP2 (or what have you) for CFD, molecular mechanics, or other noble intellectual pursuits. In fact, I'd bet that a minority of them do.

    One of the major uses of ultra-high-end (nonspecialized, i.e. non-vector-processor) machines is to serve as the backend for OLAP (database analysis and decision support) in huge corporations. Data visualization ala IBM's suite of mining products is a major application for these people, and (perhaps equally so, though not necessarily using the same toolset) for scientific users who get to sift through reams of simulation results. It's a whole hell of a lot easier to render a fracture simulation in realtime after applying the appropriate transforms to the experimental results than it is to try and grasp the same results as raw data. (although sometimes the opposite is true -- whoever said "a debugger doesn't replace good thinking, but good thinking doesn't replace a debugger" must have had the same experience) Likewise (and this is where I am coming from) a good tool for getting useful conclusions from protein folding simulations or ligand docking can literally be worth millions of dollars (esp. to Big Pharma).



    >> Believing that increased specialized processing
    >> power will solve the world's political and
    >> social problems is naive at best.

    To put it gently. This is the crux of my argument against your post, perhaps paradoxically. The tools and thought from top-notch researchers (which IBM hires quite a few of) are critical to the effective use of big iron. That's why, in my estimation, the formation and dialogue with the public about "Deep Computing" (what a silly name -- I'd rather see "the Grand Challenge Institute") by IBM actually is significant. Besides, maybe some kids in a garage will find enough use for a pile of P90's running DDX to get a grant and do something useful. Don't rule it out, and don't forget that developing similar tools from scratch would waste months/years of their life. VTK, a competing model to DX, has been open for quite some time, and research applications of it have been quite clever -- there was even an article in Linux Journal on how to use VTK for engineering simulation analysis a while ago. If you think that making the tools to create better predictive models available is inconsequential, maybe you haven't had to come up with one in a while! It's a real pain in the ass -- as you seem to point out.


    >> You are
    >> confusing ability to solve a problem (e.g.
    >> build a good predictive model) and raw
    >> computing power.

    I hope he's not, but I wanted to say the same about your post. I'm not supporting Katz in general -- his wide-eyed optimism bothers me -- but I do think you were overly harsh and might turn some people off from a vibrantly interesting field which (thank god) is getting some of the recognition and money which it deserves.


    >>>> If some of the most specialized existing data
    >>>> on the planet were focused on specific
    >>>> medical problems, treatment and research be
    >>>> greatly accelerated.

    >> The meaning of this sentence is beyond me. Does
    >> it mean that if medical researchers read each
    >> others publications we would be able to ...

    So an heuristic approach to relating useful information within the avalanche of academic literature produced each month would be unhelpful to active researchers? Realtime visualization of otherwise indistinguishable tissues (see this month's _Scientific American_ and try not to vomit when they refer to visualization as "Virtual Reality") is not an advance for neurosurgeons? Sifting faster and more effectively through the flood of genomic and proteomic data published each day is of no interest to patients or insurers?

    Have you been working in CFD, many-body simulations, or some other "Grand Challenge" field for so long that you have forgotten about the mundane uses that the unwashed masses have for big iron? Katz may not necessarily know what he's talking about, but this happens to be correct. And your puny little Ultra won't put a dent in most of these problems. Making tools for using real Big Iron more affordable and visible could be the difference between budgeting $3 million for a Microsoft junkware upgrade and buying a UE10K or setting up a farm of parallel & distributed compute nodes at some places.


    If you want to continue this dialogue offline, for better or worse (please feel free to flame the shit out of any hyperbole in my reply, for starters), please do so. I am about 8 months out of the loop WRT real supercomputing, but the release of the DX source and patents was as exciting to me as most anything in recent memory. More importantly, it looks like I'm going back to the Big Iron, so we may be able to use these tools for day-to-day business, even more so than at my current job (where the market research/data analysis crew was delighted that tools like DX are now available for use on lower-end hardware -- they can afford to wait a week for results I used to get in 30 minutes). All in all I view IBM's announcements as very significant, far more so than the latest JVM or the newest Microsoft vaporware update, and I agree with Katz in that respect.

    As for politicians... well, you're right, that part of the article is beyond hope. However, people at places like the Santa Fe Institute actually do work on simulating social and economic developments, so Katz may not be 100% off base in that respect. I don't know enough about the accuracy of those simulations to say.

  • > Deep Blue's slaying Kasparov was not a breakthrough but simply inevitable...

    Kasparov was not at his best in his match against Deep Blue; he resigned one game when there was a line of play that would have given him a draw, and in another game he made a gross error in the opening, effectively giving away that game.

    Inevitable, maybe; but avoidable at that time.
  • Instead of saying there's a 40 per cent chance of rain tomorrow afternoon, the NWS can now say it will rain from 2:15 to 3:30 p.m. Instead of making forecasts for standard 30-kilometer grids, supercomputers can narrow them to one kilometer (the storm will be in Queens, not the Bronx).
    Here's a first cut on what the computational costs of that are: to improve the horizontal resolution dx by a factor of 30, you have to improve dy and dz by the same, and dt by either a factor of 30 (in a transport-dominated situation) or 30^2 = 900 (in a diffusion-dominated situation; the transition from transport-dominated to diffusion-dominated tends to happen somewhere in the general neighborhood of 1 KM, btw). Going from a 30 KM horizontal resolution to a 1 KM resolution therefore requires at least a 30^4 = 810,000-fold increase in the compute power required, as well as a corresponding increase in the quality of the observational data used to drive the forecast. (Fortunately, satellite data is starting to be useful for that, but not everything is visible from satellites -- particularly not the underlying soil moisture below 1 cm, and satellites don't have truly adequate temporal frequency, either.)

    <SHAMELESS PLUG>

    That fourth-power relationship is why my group is running our (meteorology+air pollution) forecasts this summer at 15 KM resolution instead of, say, 12, which would have been roughly two and a half times as expensive, since (12/15)^4 ≈ 0.41 (checked in the sketch below); see http://envpro.ncsc.org/projects/NAQP/ [ncsc.org]

    </SHAMELESS PLUG>
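
    The scaling is easy to check directly; a minimal sketch (assuming the transport-dominated case, where dt refines at the same rate as dx):

    def cost_ratio(dx_old_km, dx_new_km):
        # Refining dx by a factor k also refines dy, dz and dt by k,
        # so compute cost grows like k**4.
        return (dx_old_km / dx_new_km) ** 4

    print(cost_ratio(30, 1))    # 810000.0 -- the 30 KM -> 1 KM jump
    print(cost_ratio(15, 12))   # ~2.44 -- why 15 KM instead of 12
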

  • Before you can solve a problem using a computer you need to know how to solve the problem. Here is an analogy: if you don't know how to get somewhere, getting a faster car will not help you get there.

    I disagree. Advances in supercomputing allow a scientist to simulate ever more complex phenomena. Supercomputing is rarely about instant results. Many simulations can take months or years of supercomputer time. Any increase in computational power is spent on increasing the complexity of the simulation, not on reducing simulation time.

    In cosmology, many of the theoretical underpinnings have only been proved mathematically. Applying these theorems to empirical results requires a great deal of computational power. Some problems are known to be unsolvable at the moment, because the computational requirements cannot be met by current technology.
  • It's not the actual ? character it's printing, it's printing ? as in "unknown character" because the word processor is using a code from within the section marked as unprintable control codes by the ISO standard.
    Just typical MS behaviour really.
  • rather than the universally used 0x27
  • I agree with you. Jon seems to have no clue here.

    He says for example:

    Perhaps supercomputing could do for ethnic and regional warfare what it does for weather: warn us about where it's likely to occur. Is political unrest - Rwanda, Kosovo - cyclical or predictable in some cases, as crime has been found to be?

    First of all, computers don't help that much in weather prediction. Weather is an inherently chaotic phenomenon - a sneeze by Jon Katz can lead to a hurricane in Japan. The apparent success of short-term weather forecasting comes from information gained from satellites. Just for a month, keep track of the five-day forecast and see how often it is right!

    Before you can solve a problem using a computer you need to know how to solve the problem. Here is an analogy: if you don't know how to get somewhere, getting a faster car will not help you get there.

    ...richie

    I disagree. Advances in supercomputing allow a scientist to simulate ever more complex phenomena. Supercomputing is rarely about instant results. Many simulations can take months or years of supercomputer time. Any increase in computational power is spent on increasing the complexity of the simulation, not on reducing simulation time.

    Yes, but to program a simulation you need to start with a mathematical model of the phenomena. If you don't understand the phenomena well enough to have a mathematical model, you can't write a simulation.

    In that respect weather is easy. We know the physics of air - i.e., if it gets hot, it goes up.

    ...richie

  • You are right. The equations that describe smooth flows are only approximations of reality.

    But this just remakes my point. Since our model is faulty, even for movement of air, no matter how much computing power we throw at the problem the results will not get much better.

    ...richie

  • was Deep Blue beating Kasparov. Kasparov psyched himself out and played poorly. IBM seemed to know this as well, as they immediately retired Deep Blue after the match in order to do stuff like weather computations (you have to give them credit for quitting while they're ahead). I wouldn't call this a significant event for AI, but rather a REALLY bad day for Kasparov.
  • A project currently in development, Cosm (http://cosm.mithral.com/), is developing a multiplatform scheme to promote distributed computing in a big way. Eventually it will be a whole scheme for anyone to develop their own projects to run on their own computers, or perhaps, if the cause is "noble", other people would want to run it on their computers.

    Take a look at it. It's exciting stuff, and it's just the beginning. Did I mention it's also open source? Join the development.

    Scott Dier
    dieman!ringworld.org
  • ...has more lives than a cat, apparently.

  • IBM's move to fund more basic research is certainly good news for everyone, but as a working scientist I am happy that this is not big news in the mainstream press. Here's why:
    • The public's expectations for science have often been way out of proportion to reality. Think about neural networks, AI, chaos and complexity theory, etc. While all of these topics have introduced either new insights or new technology, none of them have or will supply us with any sort of holy grail.
    • I never found the exact funding level for the DCI, but if the quoted level of $29M is correct, then this really isn't that big a deal. $29M will get you a research lab with between 50 and 200 research scientists. If the scientists have a working budget, then count on 50, but if they get money from outside sources, then count on 200. This is probably a mere fraction of what IBM pays for advertising.
    • ``Deep Computing'' is advertisement-speak, not science-speak.

    In short, I'd rather see science over-deliver and under-promise than over-promise and under-deliver.

  • Jon,

    Like many people who've replied to this article, I'm a bigger fan of your social writings. While you're right that Deep Computing didn't get much press, it's also not going to be as impressive as you imagine.

    Remember the old saw about "those who do not study history are condemned to repeat it"? Many times in history have people come upon ideas like yours. After Newton, science brought about a great flurry of new ideas. The Enlightenment ensued, with great optimism for solving problems. The Universe was seen as one large machine, ticking to Newtonian laws. One only needed to discover the rules and everything would be revealed.

    Even into the early 1900s, science had this sort of optimism. Ever hear of David Hilbert and his great list of unsolved mathematical problems? What about blackbody radiation? Both of these brought this idealistic science to its knees. Goedel smashed Hilbert's grand ideas of proving all mathematical problems. Heisenberg and Planck smashed classical physics. Some things just can't be calculated or proved.

    In my view, this has been something of the hallmark of 20th Century science. Scientists know they may never know the "real answer," but they'd like to get close. More recently, chaos theory makes this even more apparent. The story about the butterfly in Tokyo affecting the weather is in other comments.

    Back to the point: DC will likely not produce the sweeping effects we'd love to see. No matter how good the models we make, they'll never do all we want, nor should they. I'd be depressed if I knew the weather more than a few days in advance. :-)

    On the other hand, DC will certainly make some scientific research easier. It will certainly be easier to model a few hundred thousand particle interactions in physics and chemistry (my particular interest). But we have to remember that unfounded optimism is just that.

    My $0.02,
    -Geoff Hutchison
  • I believe the term is that things are deterministic but not computable. Read Penrose's
    • The Emperor's New Mind
    for a reasonably plain-english explanation of how this is possible.
  • ye gad, netscape seems to take <ul> to mean <blockquote>

    weird. ah well the title deserves emphasis anyhow
  • Weather is causal (sp?). It's not like a hurricane in Japan would just come from nowhere; Jon Katz's sneeze would've caused it (not too sure about a sneeze causing a hurricane, but you get the idea) and meteorologists would see it coming.

    The meaning behind the "sneeze causing a hurricane across the globe" is based on the well-founded (i'm assuming) notion that the global weather system is mathematically chaotic.

    One of the properties of chaos is Sensitive Dependence on Initial Conditions (SDIC). Consider the possible conditions of weather (or any chaotic system) as the set X. The function f(x) maps current conditions to conditions a small time into the future. Mathematically speaking, f maps X -> X. Successive iterations of f map further and further into the future.

    SDIC says that the value dy = |f(n)(x) - f(n)(x + dx)| can be made as large as possible, no matter how small the value of dx -- simply by choosing an appropriate number n. To clarify, f(3)(x) = f(f(f(x))). (/. doesn't let me use superscripts: =P CmdrTaco)

    This means that this is possible:
    f(2days)(world) = calm weather in Asia
    f(2days)(world + JKatz's sneeze) = typhoons

    Or any number of other possible combinations...Chaos means that, over time, no matter how precise our measurement of initial conditions can get, it's still not precise enough.
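
    The effect is easy to demonstrate with a standard chaotic toy system; a minimal sketch using the logistic map at r = 4 in Python (the tiny perturbation stands in for the sneeze):

    def f(x):
        # Logistic map at r = 4.0: a standard chaotic map on [0, 1].
        return 4.0 * x * (1.0 - x)

    x, x_nudged = 0.3, 0.3 + 1e-12   # dx = 1e-12: the "sneeze"
    for n in range(1, 51):
        x, x_nudged = f(x), f(x_nudged)
        if n % 10 == 0:
            # dy = |f(n)(x) - f(n)(x + dx)| grows until it is order 1.
            print(n, abs(x - x_nudged))
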

  • second try - /. barfed the first time

    Not being a mathematician or meteorologist, I can't be 100% sure, but I'm pretty confident that your statement is inaccurate.

    Despite, or maybe even because of, chaos theory, a certain amount of accuracy can be found. It may not help us discover the weather a week from now, but there are instances where knowing the weather 15 minutes from now, and down to the meter, would be enormously useful.

    I had a feeling I wasn't precise enough. You make some good points. Chaos theory says that we can't predict the weather deterministically for all future times, given knowledge of one state.

    What, in essence, your post is talking about is limiting the size of X -- which is the range of possible conditions, and placing an upper bound on n, the number of iterations made on the initial conditions.

    I wasn't trying to say that supercomputers are useless for weather -- more just an explanation of an accepted but often not understood maxim of chaos.

  • My Math is a bit rusty, but IIRC, chaos theory says that things are deterministic but not predictable (because we do not have enough computing power), meaning that given one initial condition, only one set of final results may be expected. For example, those nice Julia sets always give you the same result if you use the same input parameters.

    yes -- things are deterministic, if you can know initial conditions precisely. In the real world, this is never possible. And as the treatise on SDIC above points out, once you have uncertainty, those uncertainties can have as large an effect as is possible.

  • There's a deceptive notion that wealth is only passed around, never created, and that economics is therefore a zero-sum game. This is untrue. Whenever parties transact voluntarily, each expects to profit by the transaction, and that happens a lot. If economics is a positive-sum game, it's possible for one person to get wealthier without directly diminishing the wealth of others.

    Suppose you're a wealthy person living among the less wealthy. You understand that economics is a positive-sum game. Assuming your wealth weren't diminished, would an improvement in their lot make your life better or worse? Better, I think. Remember that Louis XIV was the wealthiest guy for hundreds of miles around in his time, but all the wealth of Versailles couldn't buy him a flu shot or a ballpoint pen. Over-centralization of wealth does not benefit the wealthy, contrary to intuition.

    If large corporations could end world hunger (and the problem is one of distribution, not food supply), they would gain a much larger customer base. They'll do it as soon as (A) the cost of doing so falls below the profit available from the larger customer base, and (B) they find out how.


  • How are "First Katz Flame Post!" replies any less lame than "First Post!" replies?

    Back up your criticism with some real content or stay away from the damn keyboard.
  • What about that perl script called "de-moron-izer" that converts it for you?

    or just s/?/'/g and then edit for real question marks

    -Doviende

    "The value of a man resides in what he gives,
    and not in what he is capable of receiving."

  • I would tend to have to agree. I've been reading his articles for a while now, and every time I read one all I can think is "Jeeze, why can't this idiot use a decent editor". I mean, if this was microsoft.com, I'd expect to see this sort of crap...
  • There are lots of problems that "supercomputers" can't solve. Complexity theory certainly proves that. So does Goedelian incompleteness. But this doesn't matter. There are also lots of problems that CAN be solved.

    Also, today's supercomputer is tomorrow's desktop.

    If you think that the project is dangerous, then you could join in and try to steer it in a more desirable direction. Of course, I really think that we can't predict where this will lead, but that's part of what the future is about. It has good AND bad possibilities. And also just about any other dichotomy that you could construct. Including possible/impossible.
  • You need to consider whether something can actually be intelligent if it's not allowed to think along certain lines, however. I suspect you'd have great trouble doing the two things.
  • Sure. One more supercomputing site, probably with hardware matching close to 5% of the hardware the NSA employs already. I'm sure that'll change the world. After all, it's well known that what's really needed to get rid of famine, poverty and homelessness is more number-crunching. Get a grip, Katz. These issues are political, and will remain so even if supercomputers _can_ help analyse some problems better.
  • This isn't entirely true. If you look into what chaos theory has to say about this, there are observable patterns in seemingly random data. Like Katz said, history has shown that crime patterns are cyclical. Global weather patterns, while difficult to predict on a local or even national scale sometimes, become orders of magnitude easier to predict accurately when you change the time frame from days to decades, centuries, or even millennia. So why not a computer that can tell us that in 25 years, there will be starvation in some such part of the world? Or that if certain crop-growing areas fail to rotate their crops correctly, their yields will start decreasing in 8.4 years or something? Even vague ideas about what will happen in the future can be exploited for gain, whether that gain is business or humanitarian. So, with enough background data, we actually can use a (relatively) simple simulation to get accurate results. You just can't expect too much granularity from those results.
  • i beg to differ. we do not know the "physics of" air, or any other fluid, in real world scenarios. fluid dynamics, especially those aspects of it that deal with turbulent situations, is amazingly young and undiscovered.

    - pal
  • I agree. Basically what I got from this is that these computers will be way faster than the old fast computers and will be able to do way more than the old fast computers. *snore*
  • I'd like to point out that this well-thought-out debate is healthy. It's good that one person can say, "JonKatz, you're a frappin' loonie, and you don't know from beans." And someone else can then root around and eventually find facts which support Katz's articles. But as a journalist--even an opinion journalist whose medium happens to provide built-in b.s. detection--he has a responsibility to do some fact-checking.
    His shrinking fan base (and objective thinkers who /.'s flamer club hasn't wooed yet) are covering his butt, the way newsroom editorial staff cover reporters. The difference is that it happens after publication.

    Jon--I know you have important things to say, and your opinions, though oft-flamed, are worth reading. But if you don't back them up with solid facts (names of researchers, universities, studies, etc.) then eventually nobody else will back you up either, and you'll be relegated to the ZineWriters' "I-have-a-PC-and-an-ISP-so-I-can-publish-my-opinions" graveyard.

    Jabbo covered your ass this time. I'd be tempted to help just to prove the power of this medium, but if you keep publishing articles with no solid grounding in reality, then eventually, you'll reap only flames.

    As my high school English teacher said--
    When in doubt, B.S. : Be Specific.

    I hope this doesn't come off as a flame... I truly intend this to help both posters and flamers.

    That's my 2 cents.
    Penny for your thoughts?
    jurph
  • Or any number of other possible combinations...Chaos means that, over time, no matter how precise our measurement of initial conditions can get, it's still not precise enough.

    Not being a mathematician or meteorologist, I can't be 100% sure, but I'm pretty confident that your statement is inaccurate.

    Despite, or maybe even because of, chaos theory, a certain amount of accuracy can be found. It may not help us discover the weather a week from now, but there are instances where knowing the weather 15 minutes from now, and down to the meter, would be enormously useful.

    For example, if data crunching were not a bottleneck, satellite weather data may be able to give us an additional 10 or 15 minutes of warning of an approaching tornado, as opposed to 5 or 6 minutes, when the tornado is already clearly forming. Likewise, once it has formed, knowing its path down to the meter can only help those who may be stuck in the tornado's path. I'm sure that absurd amounts of computing power would only help hone the precision of our current estimates.

    Another instance in which this capability is useful might be hurricane travel prediction, and wind speed prediction. One would know whether to evacuate, how and where to prepare, and what to do, if one knew that one had a 98% probability of being in the path of n-mph winds, or had to deal with only m-mph winds.

    Or knowing hours in advance that a storm capable of closing an airport is on the way, giving airborne flights time and resources enough to find alternative destinations, rather than finding out half an hour too late and an hour too far from the next nearest airport.

    Sure, we may not be able to predict weeks in advance; but even if it's only an hour in advance, deep computing may make weather prediction much more accurate and much more useful.



    -AS
  • First of all, computers don't help that much in weather prediction. Weather is an inherently chaotic phenomenon - a sneeze by Jon Katz can lead to a hurricane in Japan.

    Not being a mathematician, I think that chaotic phenomena are not inherently random; it's just that they become statistically more unpredictable the further prediction is pushed, i.e., more precision or more time.

    So computers, with massive amounts of satellite feeds, should be able to do ever more precise prediction with current info, rather than longer range forecasts that have increasing chances of inaccuracy as time passes.

    The apparent success of short-term weather forecasting comes from information gained from satellites. Just for a month keep track of the five-day forecast and see how often it is right!

    The issue being that with computers, one would be able to massively increase the accuracy of 5-day forecasts, and increase the detail of 1-day forecasts. One would be able to compute not only with satellite data, but with ground-based tracking instruments measuring humidity, sunlight, air pressure, wind speeds, particulates, etc.

    I think we do know how to 'solve' the problem; the real issue is getting enough computational power to actually tackle the data. Another poster mentions that a single day's data gives us enough to crunch for a month, which means the predictions are useless unless one can get accuracy for the next month. If, however, one can crunch in one day the data one gets in a day, that increases the knowledge of the next day, and the next day plus this day's knowledge increases the accuracy of the following days.

    I agree that ethnic and regional warfare is tough to tackle, but weather is not.

    Well, not *as* tough.


    -AS
  • What do we care?

    What if increased computation allows us to predict hurricane path and windspeeds? Or tornado paths down to the meter? It would enable people to prepare properly, and with more foresight than saying, 'Geez, there's a tornado coming straight for my house!'.

    Or if airports and airborne flights could have a decent warning, enough to switch destination airports ahead of time and schedule alternate passage for their passengers, with advanced storm-watch technology?

    Or what if we could actually predict an earthquake an hour ahead of its arrival, by crunching the ground-based seismic data being fed continuously from instruments all over a region?

    We certainly do have enough data; it's nothing more than planting more seismographs, tapping into satellite feeds, placing more barometric sensor packages, observing ocean currents and temperatures, observing the reflectivity of cloud cover - all these millions of little details that *have* had to be ignored, until now, because the computers weren't fast enough to deal with them in a reasonable amount of time.

    There is so much more that I am not creative enough to list here, but it does exist.

    As for politicians... that's out of my league, and I feel Jon shouldn't have used that as a topic in his essay.


    -AS
  • One can only create scarcity if one has a monopoly, I think =)

    If there are 3 companies producing *anything*, unless they collude, the ones who withhold service get screwed over by those that don't. Competition works, in this manner.

    The problem with world hunger is not food production, it's allocation and distribution. I can't support my claims because I don't have the research results in front of me, but during the worst famine years of Ethiopia, a much-talked-about starving country, they had more food per capita than many non-starving countries (a lot of donations and some pretty good agricultural yields, I think).

    What kept the people hungry? Lack of highways and transports to get the food to cities and people. Political turmoil and strife that kept food rotting in warehouses, on docks and wharves, stuck in vans that won't be driven.

    I'm not sure what you mean by Monsanto creating genetic locking mechanisms... To do what? So that farmers can't grow anything else? I'm confused by your statement.


    -AS
  • That was never my point, that corporations will solve world/global problems. If they do, it will always be incidental and accidental, with profit and growth being their primary motivation.

    The original poster was mentioning that corporations won't solve world hunger; my counterpoint is not that they will, but that they *can*, if they can be convinced they will make a profit, and if they think they can dominate the market.

    The free market will only solve what problems people think to pose to the free market; before FedEx and UPS, shipping and mail were thought to be too awkward and inconvenient for business to handle, so a government-run monopoly was formed. Guess what? We now have businesses that specialize in shipping and postage. Likewise, we take a very stupid and silly way to solve global hunger: send lots of food, even if it all rots in the sun, undelivered, unconsumed.

    The strength of a free market is, ostensibly, efficiency; if you are inefficient, a competitor who can will take advantage of that inefficiency to make more money.

    In this case, if some genius can pose the problem of world hunger in terms of market, control, and profit, then same genius can solve that problem with market economics.

    The problem is how to pose social problems as something one can 'profit' from.


    -AS
  • by Anonymous Shepherd ( 17338 ) on Thursday June 03, 1999 @04:34AM (#1868807) Homepage
    Yes.

    Corporations like FMC, who sell chemicals and compounds, farming and processing machinery, food companies like Nabisco or Campbells, or sundry like Dixie Queen, I'm sure would solve world hunger in a flash, if it could boost their profits...

    And I'm sure, sometime, someone will figure out how to make a bunch of money, right? It's a captive market, people starving, with little competition.

    Of course, the tragedy is that perhaps people who need this the most can least afford it... But if starvation were an issue of food supply, rather than socio-political infrastructure, the capitalists and profiteers would have done something by now.


    -AS
  • I agree wholeheartedly
  • I see ?s all over the web instead of 's, so when I was reading the article... I really didn't know what the hell you guys were bitchin about... Kinda petty if you ask me. I can still read it.
  • Looks like Jon has once again exposed his shallow abilities to research an issue. Take a look at IBM's "Deep Computing" [ibm.com] website. Has been around for a while.

    I for one tend to bug my show off ultra-techie friends by posing them a challenge from their Mathematics section. [ibm.com]

  • My Math is a bit rusty, but IIRC, chaos theory says that things are deterministic but not predictable (because we do not have enough computing power), meaning that given one initial condition, only one set of final results may be expected. For example, those nice Julia sets always give you the same result if you use the same input parameters.
  • You probably have to either context search/replace the '?' or 1,$s/?[ ]/' or something similar, else it'd replace his real question marks.
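
    A minimal sketch of that context-sensitive replacement (in Python; the heuristic assumed here is that a '?' wedged directly between word characters, as in "it?s", is a mangled apostrophe, while a real question mark is followed by whitespace or end of line):

    import re

    def demoronize(text):
        # Replace '?' only when it sits between word characters
        # ("it?s", "world?s"); real question marks are left alone.
        return re.sub(r"(?<=\w)\?(?=\w)", "'", text)

    print(demoronize("It?s the world?s problem. Really? Yes."))
    # -> It's the world's problem. Really? Yes.
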
  • what do we care?

    there's not even 1 in 1000 predictions that hits reality. Besides applauding JK's effort in providing some interesting predictions, what do we really care?

    Indeed, even if we had a supercomputer a million times more powerful nowadays, we wouldn't be able to solve most of the problems we would like to solve. It's not simply about the power of the computing device. Do we really understand how our world works? If we don't, even the most complex simulation can't approach reality.

    And, even if the device is fast enough, do we have the resources to provide the huge reality data feed to the computing model? Without that, even the most complex models won't work.

    It's a dangerous thing to have the supercomputer help the cleverest predict everything and implement policies... we do need some stupid politicians to represent the average Joe to balance the power. Luckily, our world is far too complex, and so, right now, the usefulness of stupid politicians is low.
  • How powerful do computers have to be to use IBM's visualization software?

    Although much of the world wasn't exposed to DCI, or just didn't care, we all were. And we all want it. I think for DCI to work, they will have to keep their ideas and processes not just on the cutting edge, but truly advanced. Once someone else gets capabilities and access to their ideas, they will only be improved upon and cultivated further by independent parties.


    Can this model really approach problems in a completely different way?

    Remember HAL9000? There's something scary to me about fully functional AI.
  • > If you don't know how to get somewhere, getting a faster car will not help you get there.

    No offense, but that's a lousy analogy, especially when you're talking about the kinds of massive simulations being referred to. Take a major metropolitan area. Somewhere in a 15x15 city block area is a big red "X". Neither of us knows where exactly the "X" is, other than the 15x15 block area, and we'll know when we find it. If I take a U-haul moving truck, and you take a Ferrari, barring speed limits, stop signs, pedestrians, and other such annoyances, who stands a better chance of finding it first?


  • by Kaa ( 21510 )
    Dear Jon,

    Please, please stick to the subjects that you at least have some approximation of a clue about. It is painfully obvious that you have no idea at all about heavy-number-crunching big-iron computing: how it works, where it is needed, and what uses it has. Really. Stick to human-interest stories, OK?

    Kaa
  • by Kaa ( 21510 ) on Thursday June 03, 1999 @07:00AM (#1868818) Homepage
    Earlier I posted a message basically saying Katz doesn't have a clue about big number-crunching. A number of people, mostly ACs, asked me to provide evidence. Unfortunately I do not have the time to write either an introductory tutorial on scientific computing, or a sentence-by-sentence refutation of Katz's article. Instead I'll just use a couple of quotes from the article to, hopefully, demonstrate the mind-boggling cluelessness of the author.

    one of the biggest technology stories of the year - perhaps in several years - the institutionalization of deep domputing (sic!) by one of the most powerful corporations on earth

    Er... Jon, who do you think used what you call "deep computing" before? Some kids in a garage? Massive number-crunching was *always* the domain of government, academia, and large corporations -- only they had and have the resources to do it. I don't know how you can get more institutional than that. Besides, are you telling us that IBM is just now getting into supercomputers??

    It?s logical that Deep Computers will be asked to consider some of the world?s most intractable social as well as business problems

    Perhaps supercomputing could do to ethnic and regional warfare what it does to weather: warn us about where it?s likely to occur. Is political unrest - Rawanda (sic!), Kosovo - cyclical or predicable in some cases, like crime has been found to be?


    Jon, I don't think you understand what supercomputers do. They have not magically acquired any problem-solving technology. All they do is crunch numbers, usually vectors and matrices, really really fast. The class of problems suitable for these machines is not big at all. Does it mean that, say, weather forecasting will become more precise? Yes, sure. But that's a function of the growth of processing power in general and has nothing to do with supercomputers specifically. Believing that increased specialized processing power will solve the world's political and social problems is naive at best. You are confusing the ability to solve a problem (e.g. build a good predictive model) with raw computing power.
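    In miniature, the whole trade looks something like this (an illustrative Python/numpy sketch, not anything IBM ships):

        import numpy as np

        # Dense linear algebra: the bread and butter of big iron,
        # shrunk down to a desktop-sized example.
        a = np.random.default_rng(1).random((500, 500))
        b = np.random.default_rng(2).random((500, 500))

        c = a @ b        # the kernel supercomputers run at enormous scale
        print(c.shape)   # (500, 500) -- no problem-solving magic, just arithmetic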

    its time to use scientific modeling for decision making

    Welcome to the real world, pal. In front of me is a Sun Ultra 1, a middle-powered workstation. It runs a whole bunch of scientific models which are used in decision making all the time. What do supercomputers have to do with decision support? I don't know, and I don't think you do either.

    If some of the most specialized existing data on the planet were focused on specific medical problems, treatment and research be greatly accelerated.

    The meaning of this sentence is beyond me. Does it mean that if medical researchers read each other's publications we would be able to ... aahh, no, this is hopeless.

    I could go on and on about AI, forecasting models, hope that increasing computation speed will solve social problems, etc. etc., but really, the article is beyond salvation.

    Kaa
  • I agree. Artificial Intelligence has, since its creation, been plagued by grandiose, unsubstantiated claims. This is understandable -- computers able to pass the Turing Test really would be a major step forward, and journalists and theorists get excited about what they could do. But building Yet Another Supercomputer probably isn't going to get us there; at least, building them in the past for these exact purposes hasn't gotten us there.

    As far as supercomputing goes, it sounds like they're building some cool machines, able to do some cool number crunching. That is how other journalists have spun the story; they were wise enough not to hype their chickens prior to hatching.
  • It's a Mac/PC thing. Jon is probably typing a ' but his word processor is using smart quotes to convert it to the curly variety.

    The curly quote is a different character on the Mac and on Windows, hence the ?. Everything above the traditional 7-bit, 127-character ASCII charset can cause this problem.

    There's no simple mapping between the Mac and PC charsets, because DOS uses a different one from Windows, and some text editors still favour the DOS charset, I believe.

    Jon either needs to:

    (i) turn off smart quotes
    (ii) write his text in unicode
    (iii) write his text in an HTML editor that will convert the curly quote to some sort of &XXX; code.

    He definitely doesn't want to put it through a Mac->PC converter, because then all the Mac people out there will see ? or something similar.
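    (For what it's worth, once you know the code points the cleanup is trivial -- a minimal sketch in modern Python; the keys are the standard Unicode points for the curly characters, which is an assumption about where the text ends up, since the Mac and Windows byte encodings for them differ:)

        # Map the "smart" punctuation back to plain 7-bit ASCII.
        SMART_TO_ASCII = {
            0x2018: "'", 0x2019: "'",   # curly single quotes
            0x201C: '"', 0x201D: '"',   # curly double quotes
            0x2013: "-", 0x2014: "--",  # en/em dashes
        }

        def dumb_quotes(text):
            # str.translate maps code points to replacement strings.
            return text.translate(SMART_TO_ASCII)

        print(dumb_quotes("It\u2019s \u201clogical\u201d"))   # -> It's "logical"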
  • Stick to human-interest stories, OK?

    Huh? -- This is a human interest story. It's not like he tried to detail the algorithms and hardware used in this number crunching; he dealt with the impact such a scheme could have on the world.

    I personally like this article a lot better than his last.

    -John
  • I rather like to remember that Deep Blue was the successor to Deep Thought, a chess-playing computer obviously named after Douglas Adams' creation.
    How often does science-fiction parody impact the future?
  • There are certain buzzwords that are sure to make semi-informed technophiles starry-eyed. Mr. Katz invokes two of the most effective of these: AI and supercomputer. As with most articles written for the consumption of the masses that use these buzzwords, this article paints a rosy picture of the future ("When we get to the Emerald City, the Wizard will solve all of our problems, Toto!").

    However, reality is more complex than he realizes.

    Although I am convinced that artificial intelligence is possible, I am skeptical that it will be such a huge advance over an existing technology known as "intelligence". Humans have possessed intelligence for millennia (at least since H. habilis, and probably earlier, but I'm not a paleontologist), but somehow we're still not living in Utopia. Mr. Katz's bold prediction that AI will solve social problems ignores the fact that attempts to solve society's ills have consumed countless processor-years on the organic supercomputers installed in the human frame, all without much useful result. It would be nice to think that an AI philosopher could discover an idea that would make all of humanity behave decently towards one another, but I don't expect it.

    As for the other buzzword: a supercomputer is just a peek at tomorrow's microcomputers. Think of Moore's law, and start counting off powers of two. I find microcomputers much more interesting because they can be owned by ordinary individuals, rather than being the sole property of Big Government, Big Business, and Big Science.
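    Counting off those powers of two makes the point concrete. A back-of-the-envelope sketch in Python (the machine numbers are hypothetical round figures, not anyone's spec sheet):

        import math

        desktop_flops = 1e9      # assume a ~1 GFLOPS desktop
        super_flops = 3e12       # assume a ~3 TFLOPS top-end machine
        doubling_months = 18     # the usual Moore's-law rule of thumb

        doublings = math.log2(super_flops / desktop_flops)
        years = doublings * doubling_months / 12
        print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
        # ~11.6 doublings, roughly 17 years until the desktop catches up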

  • This is it. I can't stand it anymore.



    I sure wish you?d just quit using whatever POS word processor you use to write your stuff, and switch to edit, edlin, vi, emacs, pico, joe, or a god-damned typewriter. Hell, a telegraph key would be better.

    I?m sick and tired of having these fsckin' question marks popping up EVERYWHERE in your articles Jon.



    Please fix this. Oh, I?m sticking them in on purpose here. It really is annoying, isn't it?

    I'm using NT and IE 4.0. (Damn POS work machine!)

    -Josh
  • I do not see where he dealt with the social impact!

    The story just reinforced my perception of Jon as an American media entity that could not be more stereotypical.

    Everything he says is true up to a point. IBM has done great by the OpenSource crowd, and it has good ideas and a long track record in creating the most powerful computers, the software to make use of that power, and the means to harness the minds of brilliant researchers. Thus they will probably realize all the technical advances they set out to achieve and will be able to differentiate in the weather forecast between the Bronx and Queens.

    But the point is moot! IMHO it is empty 'hail technica', something most geeks are prone to, and I always wonder whether Jon caters to the crowd or truly believes that we just need more technical advances to solve all problems.

    All this information will be worthless to society as a whole, because most western countries are full of people who are getting more and more depressed by technology. They do not understand it, and they resent us geeks for letting them know it, day after day.

    More and more politicians in the western countries use the information given to them for slagging off their opponents, not for bettering their countries.

    As long as there is a continued lack of leadership, all the data will be useless, serving only to make an arbitrary point for somebody in need of an extra couple of million {Votes|Monetary Units}.

    Corporate entities are at least honest about the use of any advantage they are given. The whole point of their existence is to extract the most financial gain from any opportunity.


    Hmmm... this is rather political for a /. discussion, but we must always ask ourselves whether we are still in contact with the RealWorld(tm).

    read you later.
    [enter your fav. quote here, I am too lazy]

    P.S. If this gives you flame-jimmies in any form, please refrain from criticizing my orthography or grammar. Save your electrons.
  • their businesses are based on creating scarcity! They aren't interested in feeding starving children in ____ countrie(s). Monsanto etc. want to create genetic mechanisms to lock farmers into their product; solving world hunger isn't in the game plan.
  • my ramblings were meant to point out that corporations don't necessarily solve all social problems. To think that the free market will take care of extremely marginalized populations is kind of foolish... what's profitable doesn't always align with nice goals like ending world hunger or ending gun violence, etc.
  • Check out http://www.newsmaps.com/ [newsmaps.com] (preferably from a T1) and you'll see an interface to data which I suspect is a lot easier to use than the IBM system. It branched off of a government lab, and at first we, too, had a jazzy 3D interface like IBM's; although it looks cool, it's not the best way to display things. The map paradigm that we've adopted is easier for people to understand and provides the same information, again organizing the content automatically into areas of similar content.

    The basic premise is that you map documents onto a two-dimensional plane, where the proximity of documents reflects how related they are (i.e., if two documents sit right next to each other, there's a high likelihood that they are related). A landscape is then added as a third dimension representing the density of the information. Labels are attached to the mountains and peaks to give you some idea of how things are laid out, and you can fully interact with the map to view the documents in each area.
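    (Not our actual pipeline -- just a sketch of the general idea, in Python with scikit-learn: embed documents so that nearby points mean related content, then read local point density as the terrain height. The toy documents are made up.)

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        docs = [
            "supercomputers crunch weather models",
            "weather forecasting uses numerical models",
            "chess programs search game trees",
        ]

        tfidf = TfidfVectorizer().fit_transform(docs)               # documents -> term vectors
        coords = TruncatedSVD(n_components=2).fit_transform(tfidf)  # project to a 2-D plane

        for doc, (x, y) in zip(docs, coords):
            print(f"({x:+.2f}, {y:+.2f})  {doc}")
        # related documents land near each other; density peaks become "mountains"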

    This is cool stuff, and although I'll admit I'm plugging my own company, I think it's worth a look to get an idea of where information visualization is heading.

    -Nic

  • Deep Computers might help us sort through still confusing statistics on issues like homelessness: how many and where?

    If only we had a supercomputer to solve all of our problems... You forget that most of the problems in the world occur because of greed and avarice coupled with uneven policies. Once all the data is present, can you really expect the politicians to do the right thing with it? And how will the validity of the data be confirmed?

    Deep computing will do little to solve any real problems.
  • I concur, Kaa. Katz is in another world with his presumptions.

    He ignores the influence of banking and well-financed power mongers in the scheme of social problems.

    Does anyone realize that Hitler and Lenin received funding from bankers to initiate their campaigns?

    The murder of the Romanov family was financed. Germany's recovery and rise to power after WWI was financed. Computations won't tell you this; only a good review of history will.

    The AMA, the Bar Association, and other trade guilds with strangleholds on entire populations have more effect on people than computation does.

    People seem to forget that computers don't solve problems. People do.
  • There are two types of "engineers / scientists" necessary for technology to progress: research engineers and product engineers. 98% of engineers are product engineers: they take a technological innovation and make something useful out of it. The research engineers represent 2%, but that 2% allows the other 98% to continue making products. An industry gets stale and doesn't progress if 100% of the work is put into product development.

    I am a product engineer. I work in the networking industry for a large networking equipment company (the largest), doing ASIC (chip) design. I am dependent upon the physicists and engineers working at IBM / Bell Labs / etc. to come up with better, faster, smaller, lower power semiconductor technology. Without them, I would be dead in the water. Obviously, that is only an example that relates to me, but research in general allows the world to continue moving forward.

    On a side note, I know that Microsoft is a bad word around here, but years ago they set up a research center with a mission similar to Deep Computing's, dedicated to solving the tough problems.

    http://research.microsoft.com

    Todd
  • Remember HAL9000? There's something scary to me about fully functional AI.

    I second that, and add that it is pretty exciting too.

    Probably the scariest thing about AI is that if we can't raise our kids not to blow up their schools, how do we expect to raise a computer not to blow up the planet?

    Isaac Asimov's three laws of robotics are (IMHO) the best way to keep the "Matrix" in the realm of fiction.

    Hey, but what the h#!! do I know?
  • My last 3 sneezes have caused tornadoes, and even though this won't hold up in criminal court, the civil proceedings are gonna kill me. Anybody know a good lawyer for this sort of thing?
  • He dealt with the human interest, but without any clue about the technology. Supercomputing has been around for decades. Repeating what the IBM press release says and adding extra, wrong details does not make good journalism.
  • I am not a mathematician. I don't know that much about chaos theory. But I have used Crays, SGIs, IBM SP/2s, even KSRs since I was 20. Right now I am struggling with a computer model of a car, so you could say I am a 'deep computing' power user.

    With all of Mr. Katz's optimism, he doesn't seem to know what these things do: they crunch numbers. A lot of numbers, really, really fast. That's all. What bigger, faster machines allow people like me to do is solve bigger and more complex problems that we already know how to solve.

    See the point? We know how to model a car (which is what I do), or an airplane engine, an oil reservoir, a skyscraper; we can even make a damn good approximation of a city's weather (the accuracy of which, BTW, is AFAIK a function of data points, not computing power, and even with all the resolution in the world you can't get much better forecasts than a week or so out, because chaos kicks in). But we have no idea how to computationally solve social problems.

    Now, say you gather really, really good info on homelessness and poverty in America, and then train a great neural network on a massive IBM SP/2 to crunch that data. What will you get? Correlations, which any scientist knows are only clues, not answers, not cause and effect. In the absence of good understanding and good ideas about a problem, really big and expensive calculators are not much better than my old HP-48.
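    A toy illustration of why, in Python (the numbers are entirely made up; any resemblance to real statistics is accidental): two series that merely share a trend correlate almost perfectly, and the number-cruncher reports the correlation, not whether one causes the other.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(100)
        homelessness = 50 + 0.5 * t + rng.normal(0, 2, 100)   # invented upward trend
        modem_sales = 10 + 1.3 * t + rng.normal(0, 5, 100)    # unrelated upward trend

        r = np.corrcoef(homelessness, modem_sales)[0, 1]
        print(f"correlation: {r:.3f}")   # close to 1, yet no causal link whatsoever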

    Case in point: I was peripherally involved, a few years ago, with the effort to build the NASP (National Aerospace Plane, aka X-30), a multi-billion-dollar fluke. The great thing about the X-30 was that it was supposed to fly so fast, we had to simulate everything computationally. NASA and the DoD had all the computing power in the world to throw at that one. Well, because of the absence of good physics theory about what happens to an airplane at speeds greater than Mach, I dunno, 15 or so, the NASP was cancelled.

    We need thinkers, not calculators; software, not hardware. So look around: what is much more important is that fast number-crunching is becoming affordable to more institutions and even individuals; then the engineers and mathematicians of the world (and much, much later, the social scientists) can play with new theories and new algorithms. And this site has actively supported Beowulf, perhaps the greatest effort in that direction.

    In the long run, Beowulf will be much more important than 'Deep Computing'...

    Just my $0.02...
  • This stuff is going to be used to figure out how to sell you more Danielle Steel books, or to determine who is the right market for direct marketing.

    Do you think corporations are in this to solve world hunger?
  • Actually it's a bug in the HTML parsing (I came across a detailed explanation for it a few weeks ago, but can't find it offhand), and it's not exclusively a M$ problem. I =used= to see ? instead of ' (and some other substitution that I don't recall, though both seem to have gone away as of about a week ago), and I use Netscape 3.04 on WFWG and the /. fastload pages exclusively.

  • Weather: inherently chaotic, but according to a well-known model. A butterfly flaps its wings in India and one week later it rains in New York, sure -- but the weather sats will pick up the changes that are necessary to make it rain between day two and day four. (Depending on how much power you throw at the problem, weather is predictable between three and five days ahead of time.) Also, while weather systems are chaotic, they are also bounded -- I don't care how hard that butterfly flaps its wings; if there's not enough water in the air over New York, it's not going to rain. Basically, weather prediction is all about how much data (in accuracy and precision) you can throw at a model and how fast you can crunch that model. It has historically been susceptible to increases in computing power; no reason to expect that to change now.
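    ("Chaotic but bounded" in one toy Python example -- the logistic map, a standard stand-in, not a weather model: two nearly identical starting points diverge quickly, yet every iterate stays inside [0, 1].)

        def logistic(x, r=4.0, steps=30):
            # Iterate x -> r*x*(1-x), the textbook chaotic map.
            xs = [x]
            for _ in range(steps):
                x = r * x * (1 - x)
                xs.append(x)
            return xs

        a = logistic(0.400000)
        b = logistic(0.400001)          # a "butterfly flap" of a perturbation
        for n in (0, 10, 20, 30):
            print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
        # the trajectories separate within ~20 steps, but never leave [0, 1]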

    Chess: not only a matter of raw horsepower, though Deep Blue had plenty of it; it's also a matter of achieving a complex enough program that it could be *trained*. That people only 'see' two moves deep is compensated for by intuition & pattern-matching, two things computers are notoriously bad at. Think about the Turing test: does it matter how something gets done?

    Quick question: what's the overlap between Deep Computing and the Grand Challenge problems/projects?

    -_Quinn
  • It's not just Deep Computing that didn't make the headlines; the media tends to ignore any science story. I think this is for two reasons:

    1) Deep down, people want to turn on their news and say things like "Oh, how awful" and "What is this world coming to?" Stories that show the progress of mankind don't tend to evoke this kind of reaction.

    2) Most of the general public, who use their home computers for Solitaire and deer-hunting games, probably wouldn't understand the story or its implications anyway. Aren't newspapers written at something like the 5th-grade level?

    -NG


    +--
    Given infinite time, 100 monkeys could type out the complete works of Shakespeare.
  • Sure, supercomputers would/do help with weather forecasts.

    Weather is causal. It's not like a hurricane in Japan would just come from nowhere; Jon Katz's sneeze would've caused it (not too sure about a sneeze causing a hurricane, but you get the idea) and meteorologists would see it coming.

    I heard some meteorologist say that enough data is already being collected to _accurately_ forecast the weather a week out (or something); it would just take a month to crunch all of the factors. If these factors could be taken into account faster, then we could get more accurate forecasts for later dates.

    I could be completely wrong, but I'm pretty sure about this. If anyone knows for sure, please comment.

    ODiV
  • Yes, SETI@home has now managed to garner almost 500K users, and their numbers are still growing; and yes, not all problems are easily parallelizable. However, loosely coupled computation can and will provide huge resources for more global projects such as SETI. Right now we are still looking mostly at "old" problems to solve with new technologies; I'm sure there will be many more "new" problems that can be solved with technology.

    As for what IBM is doing on the visualisation front, that's something we really need: as more and more information is made available, it becomes easier to be swamped, and I salute what they are doing. IBM has come a long way from the days when it tried to stitch up customers with proprietary technology (as Microsoft does now), not only with this new Deep Computing Institute but even with more mundane stuff like GMR disk drives. All the best, DCI.
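    (Loosely coupled computing in miniature -- not SETI@home's actual protocol, just the shape of it, sketched in Python: carve a big job into independent work units, farm them out, and merge whatever comes back. The work itself here is a placeholder sum.)

        from multiprocessing import Pool

        def analyze(work_unit):
            # Stand-in for an expensive, fully independent computation.
            lo, hi = work_unit
            return sum(i * i for i in range(lo, hi))

        if __name__ == "__main__":
            units = [(i * 100_000, (i + 1) * 100_000) for i in range(8)]
            with Pool() as pool:                 # units need no communication
                partials = pool.map(analyze, units)
            print(sum(partials))                 # merge the results "server-side"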