The Power Of Deep Computing
Think what Hollywood's late and loopy sci-fi producer Ed Wood ("Plan 9 From Outer Space", "Night of the Ghouls" and other spectacularly dreadful classics) might have done with the idea of a Deep Computing Institute. It would probably be in a gothic tower filled with giant machines like the Johnniac that sits in the old computer museum in a Silicon Valley NASA station, a monstrous thing with bulbs, tubes and heavy metal casing. At some point the machines would surely run amok, vaporize everybody in sight and take over the minds and power systems of the world.
Sometimes it seems that computing is outrunning the imagination of sci-fi writers and producers. Reality is constantly overtaking them.
We now do have a Deep Computing Institute (DCI). This week, one of the biggest technology stories of the year - perhaps of several years - the institutionalization of deep computing by one of the most powerful corporations on earth - was buried deep inside the news pages of most of the country's newspapers, if it was reported at all.
This has always been the media's big problem with technology - the less significant the story (pornography, for one), the more widely it is disseminated. That's why 80 per cent of Americans actually believe the Net was partly to blame for Columbine, that cyberstalkers are likely to pounce on their children, or that video games turn kids into killers.
As a result, people are continually blindsided. When it comes to technology, Deep Computing is a blockbuster of an idea.
Typically, hardly any Americans even know what was reported all over the Web Tuesday -- the world's largest computer maker is creating the Deep Computing Institute (see Slashdot Tuesday and [http://www.zdnet.com/zdnn/stories/news/),4586,2264162,00.html]), an evolutionary movement in technological and computing history that could affect many more people's lives than online pornography ever will.
Deep computing techniques involve supercomputers, advanced software, complex mathematical formulas and consortia of researchers who try to take on some of the world's most complicated and elusive challenges - the weather, for one, and business problems like complex airline personnel and flight scheduling.
Supercomputing isn't some Utopian fantasy, and it is vastly more significant than the raging hype about Internet stocks. NASA and the National Weather Service have been using supercomputers for years (as, presumably, have the NSA and CIA). Instead of saying there's a 40 per cent chance of rain tomorrow afternoon, the NWS can now say it will rain from 2:15 to 3:30 p.m. Instead of making forecasts for standard 30-kilometer grids, supercomputers can narrow them to one kilometer (the storm will be in Queens, not the Bronx).
It's logical that Deep Computers will be asked to consider some of the world's most intractable social as well as business problems, especially in an era when politicians would much rather investigate people's sex lives than even discuss serious issues.
Perhaps, and blessedly, the politicians won't have to. The world is getting a potent new tool for problem solving and decision making.
The UN or other international monitoring agencies might use supercomputers to predict famine or natural catastrophes. They might spot, even foresee, the spread of dread plagues like AIDS. Failing that, they could greatly accelerate the search for cures.
William Pulleyblank, director of the new DC institute, says deep computing will use extensive computation and sophisticated software algorithms to tackle problems previously beyond reach. Pulleyblank said he thinks it's time to use scientific modeling for decision making. Supercomputers could, for example, help utilities plan power plant use and daily trading strategies in the spot market for electricity.
Semi-supercomputers have already been used to reduce crime in cities like New York by collating vast amounts of data, tracking police reports and incidents, and predicting where serial attackers and robbers might strike again. The DCI's machines will be a lot bigger.
In the digital age, information really is power, and the DCI will have an enormous chunk of it, plus the means to sort and visualize it. IBM is spending $29 million to set up the DCI, which will be linked up to an advisory board selected from universities, government labs and corporations.
Deep Computing brings together a number of technology's most significant contemporary forces: artificial intelligence, the growing power of computers to store and analyze vast amounts of data, even the open source model of distributing software. Unthinkable just a few years ago, open source is becoming synonymous with rapid innovation and creativity, even among the world's most powerful corporations.
As part of IBM?s project, the DCI will publish the IBM Visualization Data Explorer on the Net. This is a powerful software package that can be used to create three-dimensional representations of data. The underlying programming code for Data Explorer will be given away as open source software to researchers at the Institute?s Web site, beginning May 25. The site will be located at http://www.research.ibm.com/dci/software.html
The creation of the DCI is a big move for IBM, as well as for computing, partly because it's an expensive risk, but even more so because it reflects the kind of creative risk-taking so rare among big corporations. The term "deep computing" was inspired by the company's Deep Blue chess-playing computer, which defeated world chess champion Garry Kasparov in 1997.
Futurists like Ray Kurzweil and Freeman Dyson frequently cite the chess match as a landmark in the evolution of artificial intelligence (AI), the first step towards computers inevitably matching the human brain in memory storage and decision making.
Kurzweil (The Age of Spiritual Machines) expects computers to store as much information as the human brain (and pass the Turing test) early in the next century. IBM is positioning itself and its institute as the premier center of this advanced kind of problem-solving supercomputing, a far more visionary step than anything Bill Gates (who has tons of cash in the bank, and is - unlike IBM's executives - slobbered over continuously in the media as a millennial visionary) has ever advanced.
If Kurzweil and other inventors are right, supercomputers are about to become a lot more super.
Deep Computers might help us sort through still confusing statistics on issues like homelessness: how many and where? Or spot famines, monitor population overcrowding, pollution, global warming, asthma and de-forestation, all problems shrouded in confusing statistics and conflicting data. They could track job opportunities and co-ordinate a changing economy with educational and training institutions.
If some of the most specialized existing data on the planet were focused on specific medical problems, treatment and research would be greatly accelerated. Supercomputers could collect and visualize medical research on cancer and other dread diseases, even as thousands of disparate researchers inch towards various possibilities for a cure in hundreds of different places.
Perhaps supercomputing could do to ethnic and regional warfare what it does to weather: warn us about where it's likely to occur. Is political unrest - Rwanda, Kosovo - cyclical or predictable in some cases, as crime has been found to be?
Human behavior is, in many ways, less predictable than weather and power needs. Deep computing can't present miraculous cures for humanity's problems, but it just might permit society to approach its biggest problems in a new way, as networked computing is beginning to do in so many other areas of American life.
IBM has also further legitimized the open source model of distributing information and programs. "Where open source really works," Pulleyblank says, "is where you build a community to accelerate innovation. And we think this advanced visualization application will attract that kind of community."
The kind of community Pulleyblank is describing - some of the world's best researchers choosing tasks and problems for an ever more powerful new generation of supercomputers, rapidly speeding up research, collation and problem solving, while giving anybody who wants it free access to visualization software - is amazing, if it actually comes to be.
The DCI could be a sort of digital Los Alamos running globally and openly, working continuously with some of the best minds in the world to go after problems whose solutions have been beyond reach.
What a big and fascinating story. What a shame so few people will get to read or see it.
Some questions: Will the DCI really accelerate research and problem solving? How powerful do computers have to be to use IBM's visualization software? Can this model really approach problems in a completely different way?
Re:Massively Parallel Computing (Score:1)
Contrary to popular belief, not all problems are amenable to `beowulf' style computing. In fact, not many of what I personally consider the most interesting problems can be handled well by such clusters.
What Beowulf clusters handle very well are so-called `perfectly' or `embarrassingly' parallel problems. For example, Monte Carlo studies.
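For concreteness, a minimal sketch (my own illustration, nothing the Institute ships) of an embarrassingly parallel Monte Carlo job: each worker estimates pi independently, and the only communication is the final averaging.

import random
from multiprocessing import Pool

def estimate_pi(n_samples: int) -> float:
    """One independent worker: sample random points in the unit square."""
    hits = sum(1 for _ in range(n_samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    # Four workers, no communication until the results are averaged.
    with Pool(processes=4) as pool:
        estimates = pool.map(estimate_pi, [200_000] * 4)
    print(sum(estimates) / len(estimates))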
Systems of coupled linear equations (e.g., weather) are not quite so simple.
Dear Jon Katz, (Score:2)
A few clarifications:
* Supercomputers are not called 'Super' because they are super. Rather, the name refers to the way these computers work. Supercomputers usually are vector-processing machines, meaning they can very efficiently apply uniform operations to large data sets, which is called SIMD parallelism (see the sketch after this list).
* Supercomputers have been used for more than a few years by a lot of parties including the military (atomic weapons simulations) and car manufacturers (crash simulations, aerodynamics).
* This is not a risk for IBM. 29 million dollars is at best pocket money for an organization of this size.
* Artificial intelligence has nothing to do with supercomputing. Also, Deep Blue's slaying Kasparov was not a breakthrough but simply inevitable, because at some point brute force wins over knowledge and intuition. It's just a question of quantity.
* Computers reaching the level of complexity of the human brain does not mean anything. We are all very well aware that hardware will at some point pass this level of complexity. What we need is the software to put this hardware to use, and things don't look that bright in this department.
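A rough illustration of the SIMD point in the first item above (my own sketch, using NumPy as a stand-in for a real vector unit): one uniform operation expressed over whole arrays instead of an explicit element-by-element loop.

import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Scalar, element-at-a-time version:
scalar = [a[i] * b[i] + 1.0 for i in range(len(a))]

# "Vector" version: the same uniform operation over the whole data set,
# which NumPy dispatches to vectorized machine code.
vector = a * b + 1.0

assert np.allclose(scalar, vector)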
bye
schani
Re:Dear Jon Katz, (Score:2)
In recent years, "supercomputers" don't use SIMD. They use explicit parallelism SISD chips. Basically, true SIMD machines went under when CRAY was purchased by SGI. SGI still makes some of the most powerful "supercomputers" in the world, like Red Mountain, etc., which are basically a huge number of SGIs with a fast backplane.
Whether the term "supercomputer" originated due to vector processing or not, it generally applies to all "big iron", incredibly powerful computers.
As an aside about military use of supercomputers, a couple of years ago, I spoke to a programmer who did one dimensional atomic explosion simulations during the sixties. The idea was that an explosion was spherical and thus uniform for any vector beginning at the sphere's center going out to a specific distance. This was at a time when a PDP-7 was considered a "supercomputer".
Jason
Misconceptions (Score:3)
Times Change.
IBM sees and understands this. What a concept! Bringing all of these powerful computing and intellectual minds together to make something that may benefit everyone. But don't let anyone even mention AI. Because that's "Artificial Intelligence" and that means Terminators, 13th Floors, Matrixes, and all sorts of other science-fiction-caused myths that everyone is afraid of.
Riddle me this: if Microsoft had begun this project, what do you think the media response would be? If you thought the world saw them as Big Brother before, this would have been the boiling point. "There goes Bill, trying to take over the world again." But no, this is IBM, a company that went against the grain, I believe, not only to help and bring something new and exciting to everyone, but to show everyone that they were not afraid to take chances, to put money into something that has no real start or ending, something that may bring us greatness--or failure.
I applaud everyone at IBM who had this notion. I applaud everyone who thinks this can work and is willing to work at it. I do not applaud or comprehend those who are so stuck in their own misconceptions about what Deep Computing is, and so fixated on 1984, that they cannot see the benefits. This might bring us Terminators. This might bring us Star Trek. It might bring us a new kind of Super-Twinkie, but that doesn't matter. The point is we're trying, we're trying to get something better out of this world, out of these things called computers that we have so long slaved over to make faster and better. Artificial Intelligence is only as good as you make it. Supercomputing is only as good as you make it. If you want to do both or either, you have to be willing to jump off that bridge first--because you're the only one who realizes that it's not all water down there, it may just be cotton.
Evan Erwin
obiwan@frag.com
Systems Administrator
The Citizens Bank of East Tennessee
The one-line summary: (Score:2)
And this is supposed to be news?
Super-Twinkie!!! (Score:1)
Isn't Super-Twinkie opening for Gran Torino tomorrow night at the Cat's Cradle? Those kids put on a damn good show.
Where's my rimshot?
-David
Massively Parallel Computing (Score:2)
The Deep Computing Institute will be able to make some great progress as long as they don't get mired in difficulties deciding what to work on next. So many things to compute - so few computers...
What's really missing here is the power of Massively Parallel Computing. The folks at SETI@Home have really landed a great piece of work. Instead of spending a ton of money on in-house computing resources, convince a million individuals around the world to spare their computing resources while they are not using them. I have 2 computers at work running 24/7 churning on nothing most of the time, and 3 at home running part time. Cooperatively leveraging the massively parallel resources of the net, while understanding the chaotic nature of each individual node, will be the internet's greatest benefit to mankind - and to the individuals who are truly able to make it work.
Def. of supercomputers? (Score:2)
Dear AC,
I do not completely agree with your definition of "supercomputers". The scientific community can probably not even agree on one, except that all definitions of supercomputers have in common that they are relatively fast.
Actually, I've seen a definition of the term "supercomputer" that said it is a relative term, depending on the current state of the art. At that time (according to that source) a supercomputer meant any computer with a peak performance of more than 1 GFLOPS. Now, it would probably be 5 or 10 GFLOPS, given that your home computer can have a P3 or K6-2 with vector instructions that can peak over 1 GFLOPS.
Typically, it does not matter what the architecture is, as long as it can be viewed as a single machine. Yes, vector parallel computers have been the most powerful for years, and the latest generations of these computers are still supercomputers.
However, massively parallel computers should not be ruled out. The question of Beowulf clusters being included in these is interesting; the fact is, they can provide the required performance.
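As a back-of-the-envelope illustration of "peak performance" in this sense (my own sketch; the machine parameters below are invented):

def peak_gflops(sockets: int, cores_per_socket: int,
                clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak, ignoring memory bandwidth and everything else real."""
    return sockets * cores_per_socket * clock_ghz * flops_per_cycle

# A hypothetical vector node: 8 processors, 500 MHz, 4 flops per cycle.
print(peak_gflops(8, 1, 0.5, 4))   # 16.0 GFLOPS peak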
- ze Apocalyptic Lawnmower
Counterpoint (not 100% disagreement, maybe 75%) (Score:3)
>> either an introductory tutorial on scientific
>> computing
The shorter version: see under "numeric integration" and/or "matrix/tensor manipulation"...
>> Er.. Jon, who do you think used what you call
>> "deep computing" before? Some kids in a garage?
some graduate student at MIT with a secretary, maybe (viz. the original Connection Machine). Granted this is not representative, but then again, things like Beowulf (don't start) are making embarrassingly parallel simulation much more accessible to the average lab or business. And that appears to be the (muddled) point that Katz is getting at, i.e. that IBM has effectively recognized that Joe Average might have something to contribute back to the field. Not trivial.
Beyond that, there's some sort of relatively understandable source for people to go look at now when you need to explain why on earth you'd want a 1024-node Origin 2000 or a real Beowulf. "No, it doesn't play Quake any faster..."
>> Jon, I don't think you understand what
>> supercomputers do.
Kaa, I don't think you are acknowledging what some people use supercomputers to do. Realtime visualization ("intelligence amplification" in some peoples' jargon) is at least as useful as numerical simulation (more often complementary to it, as a tool for extracting useful conclusions) and provably more so than AI in a general sense.
If you do understand this, then you're purposely ignoring these uses in order to flame Katz, which, while tempting, is irresponsible. Not least because the general readership of Slashdot isn't going to rebut you, and you know it.
>> They have not magically acquired any
>> problem-solving technology. All they do is
>> crunch numbers, usually vectors and matrices,
>> really really fast. The class of problems
>> suitable for these machines is not big at all.
Now you're really misleading people. Relational databases are quite useful to the general computing public -- you are using one whenever you post to Slashdot or make a withdrawal at the ATM. This single application alone has probably done more to advance the practice of high-end computer engineering than AI, the NSA, and the NSF put together. (the military... well, that's inevitable) Anyways, not everyone uses their SP2 (or what have you) for CFD, molecular mechanics, or other noble intellectual pursuits. In fact, I'd bet that a minority of them do.
One of the major uses of ultra-high-end (nonspecialized, i.e. non-vector-processor) machines is to serve as the backend for OLAP (database analysis and decision support) in huge corporations. Data visualization ala IBM's suite of mining products is a major application for these people, and (perhaps equally so, though not necessarily using the same toolset) for scientific users who get to sift through reams of simulation results. It's a whole hell of a lot easier to render a fracture simulation in realtime after applying the appropriate transforms to the experimental results than it is to try and grasp the same results as raw data. (although sometimes the opposite is true -- whoever said "a debugger doesn't replace good thinking, but good thinking doesn't replace a debugger" must have had the same experience) Likewise (and this is where I am coming from) a good tool for getting useful conclusions from protein folding simulations or ligand docking can literally be worth millions of dollars (esp. to Big Pharma).
>> Believing that increased specialized processing
>> power will solve the world's political and
>> social problems is naive at best.
To put it gently. This is the crux of my argument against your post, perhaps paradoxically. The tools and thought from top-notch researchers (which IBM hires quite a few of) are critical to the effective use of big iron. That's why, in my estimation, the formation and dialogue with the public about "Deep Computing" (what a silly name -- I'd rather see "the Grand Challenge Institute") by IBM actually is significant. Besides, maybe some kids in a garage will find enough use for a pile of P90's running DDX to get a grant and do something useful. Don't rule it out, and don't forget that developing similar tools from scratch would waste months/years of their life. VTK, a competing model to DX, has been open for quite some time, and research applications of it have been quite clever -- there was even an article in Linux Journal on how to use VTK for engineering simulation analysis a while ago. If you think that making the tools to create better predictive models available is inconsequential, maybe you haven't had to come up with one in a while! It's a real pain in the ass -- as you seem to point out.
>> You are
>> confusing ability to solve a problem (e.g.
>> build a good predictive model) and raw
>> computing power.
I hope he's not, but I wanted to say the same about your post. I'm not supporting Katz in general -- his wide-eyed optimism bothers me -- but I do think you were overly harsh and might turn some people off from a vibrantly interesting field which (thank god) is getting some of the recognition and money which it deserves.
>>>> If some of the most specialized existing data
>>>> on the planet were focused on specific
>>>> medical problems, treatment and research be
>>>> greatly accelerated.
>> The meaning of this sentence is beyond me. Does
>> it mean that if medical researchers read each
>> others publications we would be able to
So an heuristic approach to relating useful information within the avalanche of academic literature produced each month would be unhelpful to active researchers? Realtime visualization of otherwise indistinguishable tissues (see this month's _Scientific American_ and try not to vomit when they refer to visualization as "Virtual Reality") is not an advance for neurosurgeons? Sifting faster and more effectively through the flood of genomic and proteomic data published each day is of no interest to patients or insurers?
Have you been working in CFD, many-body simulations, or some other "Grand Challenge" field for so long that you have forgotten about the mundane uses that the unwashed masses have for big iron? Katz may not necessarily know what he's talking about, but this happens to be correct. And your puny little Ultra won't put a dent in most of these problems. Making tools for using real Big Iron more affordable and visible could be the difference between budgeting $3 million for a Microsoft junkware upgrade and buying a UE10K or setting up a farm of parallel & distributed compute nodes at some places.
If you want to continue this dialogue offline, for better or worse (please feel free to flame the shit out of any hyperbole in my reply, for starters), please do so. I am about 8 months out of the loop WRT real supercomputing, but the release of the DX source and patents was as exciting to me as most anything in recent memory. More importantly, it looks like I'm going back to the Big Iron, so we may be able to use these tools for day-to-day business, even more so than at my current job (where the market research/data analysis crew was delighted that tools like DX are now available for use on lower-end hardware -- they can afford to wait a week for results I used to get in 30 minutes). All in all I view IBM's announcements as very significant, far more so than the latest JVM or the newest Microsoft vaporware update, and I agree with Katz in that respect.
As for politicians... well, you're right, that part of the article is beyond hope. However, people at places like the Santa Fe Institute actually do work on simulating social and economic developments, so Katz may not be 100% off base in that respect. I don't know enough about the accuracy of those simulations to say.
Deep Blue vs. Kasparov (Score:1)
Kasparov was not at his best in his match against Deep Blue; he resigned one game when there was a line of play that would have given him a draw, and in another game he made a gross error in the opening, effectively giving away that game.
Inevitable, maybe; but avoidable at that time.
Computational costs of weather forecasting (Score:1)
<SHAMELESS PLUG>
That fourth-power relationship is why my group is running our (meteorology + air pollution) forecasts this summer at 15 km resolution instead of, say, 12, which would have cost roughly two and a half times as much (since (15/12)^4 ~= 2.44); see http://envpro.ncsc.org/projects/NAQP/ [ncsc.org]
</SHAMELESS PLUG>
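A quick sanity check of that fourth-power scaling (my own sketch, taking the stated power law at face value):

def relative_cost(coarse_km: float, fine_km: float, power: int = 4) -> float:
    """Cost of the fine-resolution run relative to the coarse one."""
    return (coarse_km / fine_km) ** power

print(relative_cost(15.0, 12.0))   # ~2.44: why 12 km costs well over twice as much
print(relative_cost(30.0, 1.0))    # ~810,000: why 1 km national grids need big iron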
False Analogy (Score:1)
I disagree. Advances in supercomputing allow a scientist to simulate ever more complex phenomena. Supercomputing is rarely about instant results. Many simulations can take up months or years of supercomputer time. Any increase in computational power is spent on increasing the complexity of the simulation, and not on reducing simulation time.
In cosmology, many of the theoretical underpinnings have only been proved mathematically. Applying these theorems to empirical results requires a great deal of computational power. Some problems are known to be unsolvable at the moment, because the computational requirements cannot be met by current technology.
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
Just typical MS behaviour really.
In fact, it'll be character 0x92 (Score:1)
Re:Groan (Score:1)
He says for example:
Perhaps supercomputing could do to ethnic and regional warfare what it does to weather: warn us about where it's likely to occur. Is political unrest - Rwanda, Kosovo - cyclical or predictable in some cases, as crime has been found to be?
First of all, computers don't help that much in weather prediction. Weather is an inherently chaotic phenomenon - a sneeze by Jon Katz can lead to a hurricane in Japan. The apparent success of short term weather forecasting comes from information gained from satellites. Just for a month, keep track of the five day forecast and see how often it is right!
Before you can solve a problem using a computer you need to know how to solve the problem. Here is an analogy: if you don't know how to get somewhere, getting a faster car will not help you get there.
Re:False Analogy (Score:1)
Yes, but to program a simulation you need to start with a mathematical model of the phenomena. If you don't understand the phenomena well enough to have a mathematical model, you can't write a simulation.
In that respect weather is easy. We know the physics of air - i.e., when it gets hot, it goes up.
Re:False Analogy (Score:1)
But this just remakes my point. Since our model is faulty, even for movement of air, no matter how much computing power we throw at the problem the results will not get much better.
another less significant event widely disseminated (Score:1)
Cosm and Distributed Computing. (Score:1)
Take a look at it. It's exciting stuff, and it's just the beginning. Did I mention it's also open source? Join the development.
Scott Dier
dieman!ringworld.org
Cat's Cradle... (Score:1)
Hype may be the worst enemy of big science (Score:1)
In short, I'd rather see science over-deliver and under-promise than over-promise and under-deliver.
Ever hear of intractibility? (Score:1)
Like many people who've replied to this article, I'm a bigger fan of your social writings. While you're right that Deep Computing didn't get much press, it's also not going to be as impressive as you imagine.
Remember the old saw about "those who do not study history are condemned to repeat it"? Many times in history people have come upon ideas like yours. After Newton, science brought about a great flurry of new ideas. The Enlightenment ensued, with great optimism for solving problems. The Universe was seen as one large machine, ticking to Newtonian laws. One only needed to discover the rules and everything would be revealed.
Even into the early 1900s, science had this sort of optimism. Ever hear of David Hilbert and his great list of unsolved mathematical problems? What about blackbody radiation? Both of these brought this idealistic science to its knees. Goedel smashed Hilbert's grand ideas of proving all mathematical problems. Heisenberg and Planck smashed classical physics. Some things just can't be calculated or proved.
In my view, this has been something of the hallmark of 20th Century science. Scientists know they may never know the "real answer," but they'd like to get close. More recently, chaos theory makes this even more apparent. The story about the butterfly in Tokyo affecting the weather is in other comments.
Back to the point, DC will likely not produce the sweeping effects we'd love to see. No matter how good the models we make, they'll never do all we want, nor should they. I'd be depressed if I knew the weather more than a few days in advance.
On the other hand, DC will certainly make some scientific research easier. It will certainly be easier to model a few hundred thousand particle interactions in physics and chemistry (my particular interest). But we have to remember that unfounded optimism is just that.
My $0.02,
-Geoff Hutchison
Re:Weather (Score:1)
- The Emperor's New Mind
for a reasonably plain-English explanation of how this is possible.
Re:Weather (Score:1)
weird. ah well the title deserves emphasis anyhow
Re:Weather (Score:1)
The meaning behind the "sneeze causing a hurricane across the globe" is based on the well-founded (i'm assuming) notion that the global weather system is mathematically chaotic.
One of the properties of chaos is Sensitive Dependence on Initial Conditions (SDIC). Consider the possible conditions of weather (or any chaotic system) as the set X. The function f(x) maps current conditions to conditions a small time into the future. Mathematically speaking, f maps X -> X. Successive iterations of f map further and further into the future.
SDIC says that the value dy = |f(n)(x) - f(n)(x + dx)| can be made as large as possible, no matter how small the value of dx -- simply by choosing an appropriate number n. To clarify, f(3)(x) = f(f(f(x))). (/. doesn't let me use superscripts: =P CmdrTaco)
This means that this is possible:
f(2days)(world) = calm weather in Asia
f(2days)(world + JKatz's sneeze) = typhoons
Or any number of other possible combinations... Chaos means that, over time, no matter how precise our measurement of initial conditions can get, it's still not precise enough.
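For anyone who wants to see SDIC without a weather model, here is a standard toy sketch (my own addition, using the logistic map rather than anything meteorological): two initial conditions differing by 1e-10 end up in completely different places after enough iterations of f.

def f(x: float) -> float:
    return 4.0 * x * (1.0 - x)      # logistic map in its chaotic regime

x, y = 0.4, 0.4 + 1e-10             # "world" vs. "world + JKatz's sneeze"
for n in range(60):
    x, y = f(x), f(y)

print(abs(x - y))   # typically a large difference, despite the 1e-10 start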
Re:Weather (Score:1)
Re:Weather (Score:1)
second try - /. barfed the first time
I had a feeling I wasn't precise enough. You make some good points. Chaos theory says that we can't predict the weather deterministically for all future times, given knowledge of one state.
What, in essence, your post is talking about is limiting the size of X -- which is the range of possible conditions, and placing an upper bound on n, the number of iterations made on the initial conditions.
I wasn't trying to say that supercomputers are useless for weather -- more just an explanation of an accepted but often not understood maxim of chaos.
Re:Weather (Score:1)
yes -- things are deterministic, if you can know initial conditions precisely. In the real world, this is never possible. And as the treatise on SDIC above points out, once you have uncertainty, those uncertainties can have as large an effect as is possible.
Will corporations try to end hunger? (Score:1)
Suppose you're a wealthy person living among the less wealthy. You understand that economics is a positive-sum game. Assuming your wealth weren't diminished, would an improvement in their lot make your life better or worse? Better, I think. Remember that Louis XIV was the wealthiest guy for hundreds of miles around in his time, but all the wealth of Versailles couldn't buy him a flu shot or a ballpoint pen. Over-centralization of wealth does not benefit the wealthy, contrary to intuition.
If large corporations could end world hunger (and the problem is one of distribution, not food supply), they would gain a much larger customer base. They'll do it as soon as (A) the cost of doing so falls below the profit available from the larger customer base, and (B) they find out how.
Re:Groan (Score:1)
How are "First Katz Flame Post!" replies any less lame than "First Post!" replies?
Back up your criticism with some real content or stay away from the damn keyboard.
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
or just s/?/'/g and then edit for real question marks
-Doviende
"The value of a man resides in what he gives,
and not in what he is capable of receiving."
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
Re:futuristic stuffs are not interesting. (Score:1)
Also, today's supercomputer is tomorrow's desktop.
If you think that the project is dangerous, then you could join in and try to steer it in a more desirable direction. Of course I really think that we can't predict where this will lead, but that's part of what the future is about. It has good AND bad possibilities. And also just about any other dichotomy that you could construct. Including possible/impossible.
Re:Hal 9000 (Score:1)
Get a grip Jon (Score:1)
Re:futuristic stuffs are not interesting. (Score:1)
Re:False Analogy (Score:1)
- pal
Re:Groan (Score:1)
As a member of the unwashed masses... (Score:1)
His shrinking fan base (and objective thinkers who
Jon--I know you have important things to say, and your opinions, though oft-flamed, are worth reading. But if you don't back them up with solid facts (names of researchers, universities, studies, etc.) then eventually nobody else will back you up either, and you'll be relegated to the ZineWriters' "I-have-a-PC-and-an-ISP-so-I-can-publish-my-opini
Jabbo covered your ass this time. I'd be tempted to help just to prove the power of this medium, but if you keep publishing articles with no solid grounding in reality, then eventually, you'll reap only flames.
As my high school English teacher said--
When in doubt, B.S. : Be Specific.
I hope this doesn't come off as a flame... I truly intend this to help both posters and flamers.
That's my 2 cents.
Penny for your thoughts?
jurph
Re:Weather (Score:2)
Not being a mathematician or meteorologist, I can't be 100% sure, but I'm pretty confident that your statement is inaccurate.
Despite, or maybe even because of, chaos theory, a certain amount of accuracy can be found. It may not help us discover the weather a week from now, but there are instances where knowing the weather 15 minutes from now, and down to the meter, would be enormously useful.
For example, if data crunching were not a bottleneck, satellite weather data might be able to give us an additional 10 or 15 minutes of warning of an approaching tornado, as opposed to 5 or 6 minutes, when the tornado is already clearly forming. Likewise, once it has formed, knowing its path down to the meter can only help those who may be stuck in the tornado's path. I'm sure that absurd amounts of computing power would only help hone the precision of our current estimates.
Another instance in which this capability would be useful is hurricane travel prediction and wind speed prediction. One would know whether to evacuate, how and where to prepare, and what to do, if one knew that one had a 98% probability of being in the path of n-mph winds, or if one had to deal with only m-mph winds.
Or knowing hours in advance that a storm capable of closing an airport is coming, to give airborne flights time and resources enough to find alternative destinations, rather than finding out half an hour too late and an hour too far from the next nearest airport.
Sure, we may not be able to predict weeks in advance; instead, even if it's only an hour in advance, deep computing may make weather prediction much more accurate and much more useful.
-AS
Re:Groan (Score:2)
Not being a mathematician, I think that chaotic phenomena are not inherently random, just that they become statistically more unpredictable the further the prediction is pushed, i.e. toward more precision or more time.
So computers, with massive amounts of satellite feeds, should be able to do ever more precise prediction with current info, rather than longer range forecasts that have increasing chances of inaccuracy as time passes.
The apparent success of short term weather forecasting comes from information gained from satellites. Just for a month keep track of the five day forecast and see how often it is right!
The issue being that with computers, one would be able to massively increase the probability of 5 day forecasts, and increase the detail of 1 day forecasts. One would be able to compute not only with satellite data, but with ground based tracking instruments, of humidity, sunlight, air pressure, wind speeds, particulates, etc.
I think we do know how to 'solve' the problem; the real issue is getting enough computational power to actually tackle the data. Another poster mentions that it takes a month to crunch the data we gather in a day, which means the predictions are useless unless one can get accuracy a month out. If, however, one can crunch in one day the data one gets in a day, that increases the knowledge of the next day, and the next day plus this day's knowledge increases the accuracy of the following days.
I agree that ethnic and regional warfare is tough to tackle, but weather is not.
Well, not *as* tough.
-AS
You are confusing =) (Score:2)
What if increased computation allows us to predict hurricane path and windspeeds? Or tornado paths down to the meter? It would enable people to prepare properly, and with more foresight than saying, 'Geez, there's a tornado coming straight for my house!'.
Or if airports and airborne flights could have a decent warning, enough to switch destination airports ahead of time and schedule alternate passage for their passengers, with advanced storm watch technology?
Or what if we could actually predict an earthquake an hour ahead of its arrival, by crunching the ground-based seismic data being fed continuously from instruments all over a region?
We certainly do have enough data; it's nothing more than planting more seismographs, tapping into satellite feeds, placing more barometric sensor packages, observing ocean currents and temperatures, observing the reflectivity of cloud cover, all these millions of little details that *have* to be ignored, until now, because the computers weren't fast enough to deal with them in a reasonable amount of time.
There is so much more that I am not creative enough to list here, but it does exist.
As for politicians... that's out of my league, and I feel Jon shouldn't have used that as a topic in his essay.
-AS
Re:why solving world hunger isnt profitable (Score:2)
If there are 3 companies producing *anything*, unless they collude, the ones who withhold service get screwed over by those that don't. Competition works, in this manner.
The problem with world hunger is not food production, it's allocation and distribution. I can't support my claims because I don't have the research results in front of me, but during the worst famine years of Ethiopia, a much-talked-about starving country, they had more food per capita than many non-starving countries (a lot of donations and some pretty good agricultural yields, I think).
What kept the people hungry? Lack of highways and transports to get the food to cities and people. Political turmoil and strife that kept food rotting in warehouses, on docks and wharves, stuck in vans that won't be driven.
I'm not sure what you mean by Monsanto creating genetic locking mechanisms... To do what? So that farmers can't grow anything else? I'm confused by your statement.
-AS
Re:why solving world hunger isnt profitable (Score:2)
The original poster was mentioning that corporations won't solve world hunger; my counterpoint is not that they will, but that they *can*, if they can be convinced they will make a profit, and if they think they can dominate the market.
The free market will only solve what problems people think to pose to the free market; before FedEx and UPS, shipping and mail were thought to be too awkward and inconvenient for business to handle, so a government-run monopoly was formed. Guess what? We now have businesses that specialize in shipping and postage. Likewise, we take a very stupid and silly approach to solving global hunger: send lots of food, even if it all rots in the sun, undelivered, unconsumed.
The strength of a free market is, ostensibly, efficiency; if you are inefficient, a competitor who can do better will take advantage of that inefficiency to make more money.
In this case, if some genius can pose the problem of world hunger in terms of market, control, and profit, then same genius can solve that problem with market economics.
The problem is how to pose social problems as something one can 'profit' from.
-AS
If it will make them money... (Score:3)
Corporations like FMC, which sell chemicals and compounds and farming and processing machinery, food companies like Nabisco or Campbell's, or sundries like Dixie Queen, I'm sure would solve world hunger in a flash, if it could boost their profits...
And I'm sure, sometime, someone will figure out how to make a bunch of money, right? It's a captive market, people starving, with little competition.
Of course, the tragedy is that perhaps people who need this the most can least afford it... But if starvation were an issue of food supply, rather than socio-political infrastructure, the capitalists and profiteers would have done something by now.
-AS
Re:Deep hurting (Score:1)
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
IBM's Deep Computing Website is worth a look... (Score:1)
I for one tend to bug my show-off ultra-techie friends by posing them a challenge from their Mathematics section. [ibm.com]
Re:Weather (Score:1)
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
futuristic stuffs are not interesting. (Score:3)
there's not even 1 in 1,000 predictions that hits reality. Besides applauding JK's effort in providing some interesting predictions, what do we really care?
Indeed, even if we had a supercomputer a million times more powerful today, we wouldn't be able to solve most of the problems we would like to solve. It's not simply about the power of the computing device. Do we really understand how our world works? If we don't, even the most complex simulation can't approach reality.
And even if the device is fast enough, do we have the resources to provide the huge real-world data feed the computing model needs? Without that, even the most complex models won't work.
It's a dangerous thing to have the supercomputer help the cleverest predict everything and implement policies... we do need some stupid politicians to represent the average Joe to balance the power. Luckily, our world is far too complex, and so, right now, the usefulness of stupid politicians is low.
Good Insight (Score:2)
Although much of the world wasn't exposed to DCI or just didn't care about it, we all were. And we all want it. I think for DCI to work, they will have to keep their ideas and processes not just on the cutting edge, but remain advanced. Once someone else gets capabilities and access to their ideas, they will only be improved upon and cultivated further by independent parties.
Can this model really approach problems in a completely different way?
Remember HAL9000? There's something scary to me about fully functional AI.
Re:Groan (Score:1)
No offense, but that's a lousy analogy, especially when you're talking about the kinds of massive simulations being referred to. Take a major metropolitan area. Somewhere in a 15x15 city block area is a big red "X". Neither of us knows where exactly the "X" is, other than the 15x15 block area, and we'll know when we find it. If I take a U-haul moving truck, and you take a Ferrari, barring speed limits, stop signs, pedestrians, and other such annoyances, who stands a better chance of finding it first?
Groan (Score:1)
Please, please stick to subjects that you at least have some approximation of a clue about. It is painfully obvious that you have no idea at all about heavy-number-crunching big-iron computing, how it works, where it is needed, and what uses it has. Really. Stick to human-interest stories, OK?
Kaa
More groan (with reasons attached) (Score:3)
one of the biggest technology stories of the year - perhaps in several years - the institutionalization of deep domputing (sic!) by one of the most powerful corporations on earth
Er.. Jon, who do you think used what you call "deep computing" before? Some kids in a garage? Massive number-crunching was *always* the domain of government, academia, and large corporations -- only they had and have the resources to do it. I don't know how you can get more institutional than that. Besides, are you telling us that IBM is just now getting into supercomputers??
It's logical that Deep Computers will be asked to consider some of the world's most intractable social as well as business problems
Perhaps supercomputing could do to ethnic and regional warfare what it does to weather: warn us about where it's likely to occur. Is political unrest - Rawanda (sic!), Kosovo - cyclical or predicable in some cases, like crime has been found to be?
Jon, I don't think you understand what supercomputers do. They have not magically acquired any problem-solving technology. All they do is crunch numbers, usually vectors and matrices, really really fast. The class of problems suitable for these machines is not big at all. Does it mean that, say, weather forecasting will become more precise? Yes, sure. But it's a function of the growth of the processing power in general and has nothing to do with supercomputers. Believing that increased specialized processing power will solve the world's political and social problems is naive at best. You are confusing ability to solve a problem (e.g. build a good predictive model) and raw computing power.
its time to use scientific modeling for decision making
Welcome to the real world, pal. In front of me is a Sun Ultra 1, a middle-powered workstation. It runs a whole bunch of scientific models which are used in decision making all the time. What do supercomputers have to do with decision support? I don't know, and I don't think you do either.
If some of the most specialized existing data on the planet were focused on specific medical problems, treatment and research be greatly accelerated.
The meaning of this sentence is beyond me. Does it mean that if medical researchers read each other's publications we would be able to
I could go on and on about AI, forecasting models, hope that increasing computation speed will solve social problems, etc. etc., but really, the article is beyond salvation.
Kaa
Re:Groan (Score:2)
As far as supercomputing goes, it sounds like they're building some cool machines, able to do some cool number crunching. That is how other journalists have spun the story; they were wise enough not to hype their chickens prior to hatching.
Re:Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
The curly quote is a different character on the Mac and Windows, hence the ?. Everything above the traditional 7 bit 127 character ASCII charset can cause this problem.
There's no simple mapping between Mac/PC charsets because DOS uses a different one to Windows and some text editors still favour the DOS charset I believe.
Jon either needs to
(i) turn off smart quotes
(ii) write his text in unicode
(iii) write his text in an HTML editor that will convert the curly quote to some sort of &XXX; code.
He definitely doesn't want to put it through a Mac->PC converter because then all the Mac people out there will see ? or something similar.
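For illustration only (my own sketch, not Jon's actual toolchain), a tiny filter that does the kind of cleanup being suggested: map the Windows-1252 "smart" punctuation (0x92 and friends) back to plain ASCII before posting.

# Assumed mapping for the usual Windows-1252 "smart" punctuation bytes.
SMART = {
    0x91: "'", 0x92: "'",    # curly single quotes (0x92 is the one showing up as ?)
    0x93: '"', 0x94: '"',    # curly double quotes
    0x96: '-', 0x97: '--',   # en and em dashes
}

def asciify(raw: bytes) -> str:
    """Keep 7-bit ASCII, translate known smart punctuation, drop the rest."""
    return ''.join(SMART.get(b, chr(b)) for b in raw if b < 0x80 or b in SMART)

print(asciify(b"Hollywood\x92s late and loopy sci-fi producer"))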
Well he dealt more with the social impact. (Score:1)
Huh? -- This is a human interest story. It's not like he tried to detail the algorithms and hardware used in this number crunching; he dealt with the impact such a scheme could have on the world.
I personally like this article a lot better than his last.
-John
Deep Thought, not Deep Blue (Score:1)
How often does science-fiction parody impact the future?
Going to Utopia, one buzzword at a time (Score:1)
However, reality is more complex than he realizes.
Although I am convinced that artificial intelligence is possible, I am skeptical that it will be such a huge advance over an existing technology known as "intelligence". Humans have possessed intelligence for millennia (at least since H. habilis, and probably earlier, but I'm not a paleontologist), but somehow we're still not living in Utopia. Mr. Katz's bold prediction that AI will solve social problems ignores the fact that attempts to solve society's ills have consumed countless processor-years on the organic supercomputers installed in the human frame, all without much useful result. It would be nice to think that an AI philosopher could discover an idea that would make all of humanity behave decently towards one another, but I don't expect it.
As for the other buzzword: a supercomputer is just a peek at tomorrow's microcomputers. Think of Moore's law, and start counting off powers of two. I find microcomputers much more interesting because they can be owned by ordinary individuals, rather than being the sole property of Big Government, Big Business, and Big Science.
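To make "counting off powers of two" concrete (my own arithmetic, assuming performance doubles roughly every 18 months; the GFLOPS figures are hypothetical):

import math

supercomputer_gflops = 1000.0   # hypothetical big iron of today
desktop_gflops = 1.0            # hypothetical desktop of today

doublings = math.log2(supercomputer_gflops / desktop_gflops)
print(doublings * 1.5)          # ~15 years until the desktop catches up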
Damn it Jon. (Re: ? or ' You Choose!) (Score:1)
I sure wish you?d just quit using what ever POS word processor you use to write your stuff, and switch to edit, edlin, vi, emacs, pico, joe, or a god-damned typewriter. Hell, a telegraph key would be better.
I?m sick and tired of having these fsckin' question marks popping up EVERYWHERE in your articles Jon.
Please fix this. Oh, I?m sticking them in on purpose here. It really is annoying, isn't it?
I'm using NT and IE 4.0. (Damn POS work machine!)
-Josh
Re:Well he dealt more with the social impact. (Score:1)
The story just reinforced my perception of Jon as an American media entity that could not be more stereotypical.
Everything he says is true to a point. IBM has done great by the open source crowd, and has good ideas and a long track record in creating the most powerful computers. And the software to make use of this power. And harnessing the minds of brilliant researchers. Thus they will probably realize all the technical advances they set out to achieve and will be able to differentiate in the weather forecast between the Bronx and Queens.
But the point is moot! IMHO it is empty 'hail technica', something most geeks are prone to, and I always wonder whether Jon caters to the crowd or believes that we just need more technical advances to solve all problems.
All this information will be worthless to society as a whole. Because most western countries are full of people who are getting more and more depressed by technology. They do not understand it and they resent us geeks for letting them know, day after day.
More and more politicians in the western countries use the information given to them for slagging off their opponents, and not for bettering their countries.
As long as there is a continued lack of leadership, all the data will be useless and will only serve to make an arbitrary point for somebody in need of an extra couple of million {Votes|Monetary Units}.
Corporate entities are at least honest about the use of any advantage they are given. It's the whole point of their existence to make the most financial gain of any opportunity.
Hmmm.... This is rather political for a
read you later.
[enter your fav. quote here, I am too lazy]
P.S. If this gives you flame-jimmies in any form, please refrain from criticizing my orthography or grammar. Save your electrons.
why solving world hunger isnt profitable (Score:1)
Re:why solving world hunger isnt profitable (Score:1)
IBM's stuff is cool but there is cooler... (Score:1)
The basic premise is that you map documents onto a two-dimensional plane, where proximity of documents relates to how related they are (i.e., if you have two documents right next to each other, there's a high likelihood that they will be related). A landscape is added onto this as a third dimension, which represents the density of the information. Labels are added to the mountains and peaks to give you some idea of how things are laid out, and you can fully interact with the map to view documents in areas.
This is cool stuff, and although I'll admit I'm plugging my own company, I think it's worth a look to get an idea of where information visualization is heading.
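For the curious, a hedged sketch of the general idea (my own toy code with made-up documents, not the poster's actual product): TF-IDF similarity for relatedness, classical multidimensional scaling for the 2-D layout, and a simple 2-D histogram standing in for the "landscape" height.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS

docs = [
    "supercomputers crunch vectors and matrices very fast",
    "weather forecasting is limited by chaos and data resolution",
    "beowulf clusters handle embarrassingly parallel monte carlo jobs",
    "chaos theory limits long range weather prediction",
]

# Pairwise dissimilarity from TF-IDF cosine similarity.
tfidf = TfidfVectorizer().fit_transform(docs)
dissim = 1.0 - (tfidf @ tfidf.T).toarray()

# Project onto a 2-D plane: nearby points are related documents.
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(dissim)

# Crude "landscape": bin the plane and count documents per cell.
heights, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=5)
print(xy)
print(heights)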
-Nic
Don't forget people (Score:1)
If only we had a supercomputer to solve all of our problems? You forget that most of the problems in the world occur because of greed and avarice coupled with uneven policies. Once all the data is present, can you really expect the politicians to do the right thing with it? How will the validity of the data be confirmed?
Deep computing will do little to solve any real problems.
Re:More groan (with reasons attached) (Score:1)
He ignores the influence of banking and well-financed power mongers in the scheme of social problems.
Does anyone realize that Hitler and Lenin received funding from bankers to initiate their campaigns?
The murder of the Romanov family was financed.
Germany's recovery and rise to power after WWI was financed. Computations won't tell you this, only a good review of history will.
The AMA, the Bar Association and other trade guilds with strangleholds on entire populations have more to do with people than computation does.
People seem to forget that computers don't solve problems. People do.
An excellent use of technology (Score:2)
I am a product engineer. I work in the networking industry for a large networking equipment company (the largest), doing ASIC (chip) design. I am dependent upon the physicists and engineers working at IBM / Bell Labs / etc. to come up with better, faster, smaller, lower power semiconductor technology. Without them, I would be dead in the water. Obviously, that is only an example that relates to me, but research in general allows the world to continue moving forward.
On a side note, I know that Microsoft is a bad word around here, but many years ago they set up a research center similar to Deep Computing, dedicated to solving the tough problems.
http://research.microsoft.com
Todd
Hal 9000 (Score:1)
i second that, and add that it is pretty exciting also
probably the most scary thing about AI is that if we can't raise our kids not to try to blow up their school, how do we expect to raise a computer not to blow up the planet?
Isaac Asimov's three laws of robotics are (IMHO) the best way to keep the "Matrix" in the realm of fiction
hey, but what the h#!! do i know?
Re:Weather (Score:1)
Re:Well he dealt more with the social impact. (Score:1)
any clue about the technology. Supercomputing has been around for decades. Repeating what the IBM press release says and adding extra, wrong details does not make good journalism.
Beowulf is much more important (Score:2)
With all of Mr. Katz's optimism, he doesn't seem to know what these things do: they crunch numbers. A lot of numbers, really, really fast. That's all. What bigger, faster machines allow people like me to do is solve bigger and more complex problems that we already know how to solve.
See the point? We know how to model a car (which is what I do), or an airplane engine, an oil reservoir, a skyscraper; we can even make a damn good approximation of a city's weather (the accuracy of which, BTW, is a function, AFAIK, of data points, not computing power, and even with all the resolution in the world, you can't get much better forecasts than for a week or so, because chaos kicks in). But we have no idea how to computationally solve social problems.
Now, say you gather really, really good info on homelessness and poverty in America, and then train a great neural network on a massive IBM SP/2 to crunch that data. What will you get? Correlations, which any scientist knows are only clues, not answers, not cause and effect. In the absence of a good understanding and of good ideas about a certain problem, really big and expensive calculators are not much better than my old HP-48.
Case in point: I was peripherally involved, a few years ago, with the effort to make the NASP (National Aerospace Plane, aka X-30), a multi-billion-dollar fluke. The great thing about the X-30 is that it was supposed to fly so fast, we had to simulate everything computationally. NASA and the DoD had all the computing power in the world to solve that one. Well, because of the absence of good physics theory about what happens to an airplane at speeds greater than Mach, I dunno, 15 or so, the NASP was cancelled.
We need thinkers, not calculators; software, not hardware. So, look around: what is much more important is that fast number-crunching becomes affordable to more institutions and even individuals; then the engineers and mathematicians of the world (and much, much later, the social scientists) can play with new theories and new algorithms. And this site has actively supported Beowulf, perhaps the greatest effort towards that.
In the long run, Beowulf will be much more important than 'Deep Computing'...
Just my $0.02...
Deep Computing Not A Social Resource (Score:1)
Do you think corporations are in this to solve world hunger?
Re:Damn it Jon. (Re: ? or ') -- it's a bug (Score:1)
Deep Computing (Score:2)
Chess: not only a matter of raw horsepower, though Deep Blue had plenty of it; it's also a matter of achieving a complex enough program that it could be *trained*. That people only 'see' two moves deep is compensated for by intuition & pattern-matching, two things computers are notoriously bad at. Think about the Turing test: does it matter how something gets done?
Quick question: what's the overlap between Deep Computing and the Grand Challenge problems/projects?
-_Quinn
Media ignoring Deep Computing (Score:1)
1) Deep down, people want to turn on their news and say things like "Oh, how awful" and "What is this world coming to?" Stories that show the progress of mankind don't tend to evoke this kind of reaction.
2) Most of the general public, who use their home computers for Solitaire and deer hunting games, probably wouldn't understand the story or the implications of the story anyway. Aren't newspapers written at something like the 5th grade level?
-NG
+--
Given infinite time, 100 monkeys could type out the complete works of Shakespeare.
Re:Weather (Score:1)
Weather is causal. It's not like a hurricane in Japan would just come from nowhere; Jon Katz's sneeze would've caused it (not too sure about a sneeze causing a hurricane, but you get the idea) and meteorologists would see it coming.
I heard some meteorologist say that there is enough data obtained to _accurately_ forecast the weather for a week (or something); it would just take them a month to crunch all of the factors. If these factors could be taken into account faster, then we could get forecasts for later dates that are more accurate.
I could be completely wrong, but I'm pretty sure on this. If anyone knows for sure, please comment.
ODiV
Re:Massively Parallel Computing (Score:1)