Human Level AI by 2025?

Maybe. But more interestingly: with that sort of power, even if we don't have real human-level AI, we will no doubt be able to make computers seem like they have it. A much easier problem.

Embedded Link

How To Predict The Future
Today’s post is a guest post from William Hertling, author of the award-winning Avogadro Corp: The Singularity Is Closer Than It Appears and A.I.


Post imported by Google+Blog. Created By Daniel Treadwell.

6 Responses to “Human Level AI by 2025?”

  1. Both exciting and somewhat scary!

  2. Tom Deloford says:

    So there's a difference between actual and perceived intelligence?

  3. Depends if you believe Alan Turing's definition of intelligence or not…

  4. Tom Deloford says:

    I guess I never questioned it. All I really know about AI now is that I never want to see a Prolog command prompt ever again.

  5. What's interesting about the Turing test is that it's not a test for intelligence but a definition of it. Basically, it takes "you know it when you see it" literally.

    It's a purely behavioral definition, so if you don't buy the philosophical foundations you may have a problem with it. But we don't have any other definition of intelligence; we don't know what it is or how to define it other than by what we see. That gets quickly into psychology, and people like the Churchlands even argue that all of (what they somewhat condescendingly call "folk") psychology is completely wrong-headed.

    Personally, I think it comes down to levels of description and our strong tendency toward anthropocentrism. If you look at the intelligent behavior of swarms or colonies (ants, for instance), it starts to make more sense to think about the "organism" at the level of the colony, not the individual ant. We don't think about humans at the level of the cell or the neuron. Just because humans in the West in the 21st century like to think of themselves as individuals defined by the bodily separation of actors doesn't mean all animals (or intelligent agents) work the same way…

    There's a great twist on the Arthur C. Clarke quote that "any sufficiently advanced technology is indistinguishable from magic." Reconfigured by another writer: "any sufficiently advanced technology is indistinguishable from nature."

    Humans operate on a very specific scale of time and space, so plants don't look intelligent to us; yet watch a time lapse and they appear to have intelligent behaviors, especially across larger scales, not just individual plants. Or take the octopus: it is not a social species, so the intelligence we think about in terms of communication and technology makes no sense whatsoever if you don't cooperate much with other members of your species.

    Basically, humans are particularly smart not because of what's in our heads but because we have culture that we use to offload processing to the world. The faster and smarter the culture, and the more information it holds, the smarter, faster, and more intelligently we can act (with the exception of Luddites like creationists). As a side issue, that's why anti-intellectualism and anti-science make me so angry. Science is not a theory; it's a method and a tool. Ignoring the methods and tools we know to be most productive actively damages the collective intelligence of society.

    If we create machines that are active agents within our culture and that look intelligent by using cheats and scripts (by tricking our emotional responses to visuals, for instance with smiling faces and such), then they will start to feel like intelligent things, maybe even human-level intelligent agents (just as we attribute anthropomorphic intelligent behavior to cats and dogs, which is nonsense). But they won't be, and a rigorous enough Turing Test should ultimately be able to root those out. On the other hand, I do think beating the Turing Test is an engineering problem that will be solved within our lifetime. Whether you want to call that "human intelligence" or not is up to you.

    Ultimately, I like the Turing Test because it might be the worst definition of human intelligence (except for all the other ones), but it is scientific, repeatable, and falsifiable within the confines of human culture. If the test gives a positive, then that is intelligent, because the definition of "intelligence" is a positive on the test.

    Personally, I can't wait to have a robotic dog I can have a conversation with to search what movies are on tonight and buy me tickets.

    The movie "Ted" is the near future, not a comedy…

    Hmmm. That was a longer response than I thought it was going to be.

  6. Tom Deloford says:

    Wow… interesting stuff, I will need some time to digest all of these points! Certainly an area to watch. I think you are right, ultimately; it's closer to reality than most might believe.

Leave a Reply