First of all, I'm a lowly code monkey. I'm quite ignorant about this whole AI stuff, and this whole post may be nothing but ignorant bunk.
I imagine that, to achieve an intellectual level comparable to that of a human being, an artificial intelligence must be able to perceive the world and interact with it as a human would. It must be able to respond to physical stimuli, to learn from experience, to become skilled through practice, to make autonomous decisions and, eventually, to refuse to follow orders and maybe to make mistakes.
Otherwise, no matter how advanced, artificial intelligences will remain merely abstract constructs, constrained by the limits of computer hardware and software, only ever able to deal with numbers and abstract concepts whose significance, while accessible to a human mind, will remain beyond an AI's reach.
I don't think we're going to get anywhere beyond stuff that does exactly as told within a very specific context anytime soon. I'm quite skeptical about the buzz surrounding Artificial Intelligence in general, and I have the feeling that most laypeople have quite high expectations about what AIs will be able to achieve in the near future.
Also, I believe that the gap between the cognitive abilities of human beings and what can be achieved by a machine at all has been quite underestimated. I feel that a lot of research is being driven toward the ultimate goal of explaining the entirety of the human mind, personality and inner self merely as a product of the brain as a machine, reducing our humanity to a product of our own biology and dismissing the role of our individual, unique and irreplaceable identity.
To create artificial humanoid intelligences. To axiomatize human intelligence. To close the gap between machines and human beings, and to make them indistinguishable. Is that even feasible, or a good idea in the first place? Shouldn't machines remain in their place instead of being elevated to personhood? Shouldn't we never forget that we're alive and machines are our tools and creations?
And I'm not even sure how to phrase my concerns as a question. Help?
As far as I can figure it, human intelligence is kludge on kludge for at least tens of millions of years, and I don't think it's possible to reduce it to axioms.
The underestimation is not of human cognitive abilities, but of cognitive abilities in animals. The specifically human stuff appears to be relatively easy to do in AI, so we're ready as soon as anyone can produce a system with cat-level cognition. The specifically human stuff is mostly repurposing of brain structures kludged up very recently in evolutionary terms.