More armchair apocalyptic speculation
Inspired by his never-ending quest for progress, in 2084 man perfects the Robotrons: A robot species so advanced that man is inferior to his own creation. Guided by their infallible logic, the Robotrons conclude: The Human race is inefficient, and therefore must be destroyed.
Robotron 2084 is an excellent classic arcade game. (Its two-joystick control system is essential to the feel of the game, so downloadable versions do not do it justice.) But is it also a prophecy? Did Eugene Jarvis tap into some subconscious gift of foresight when he penned the throwaway attract-mode text I quote above? 2084 seems like a fairly reasonable date for some AI creation to surpass humans in every meaningful way. Or for us to turn ourselves into the Robotrons via genetic engineering, nanotechnology, cybernetic enhancement, etc. If anything, 2084 is a bit late. It would be fun to be alive for the crossover.
I hope humanity is not destroyed by Robotrons or any equivalent entity. But this is purely a selfish hope, not a moral one. If robots can be better than people, more power to 'em. We're just biological robots, after all. I hope the Robotrons surpass us morally, intellectually, and spiritually. Not that it would be hard to do so. Maybe they'll keep a few of us around as a reminder of their heritage, or as a cautionary tale about bad brain design. But then, maybe the Robotrons won't need such mental crutches.
If you are interested in this subject, I can recommend this book:
The Age of Spiritual Machines: When Computers Exceed Human Intelligence
Its author, Ray Kurzweil, has a new one out, too (which I haven't read):
The Singularity is Near: When Humans Transcend Biology
Comments
Kurzweil can be quite over-the-top, and AI has been a lot harder than initially thought. But in some ways, Google is already Skynet, Jr. And goodness knows what algorithms the NSA has cooked up under this secretive administration. They're not here yet, but our society is probably already pregnant with the Robotron embryos.
I believe the math involving technological singularity requires an exponential "downhill" slide, and there -is- a threshold issue to overcome: the singularity does not happen until the "design" barrier is overcome. The two most popular routes are AI and biology.
I believe the issue with AI lies in the inherent limitations of mathematics. I'm not an authority on this, and I will defer to anyone who shows greater mastery of the subject, but I think it's an important point and it must be raised.
The Turing model of mathematical intelligence has one flaw: the assumption that mathematics can be used to solve any mathematical problem. It can't. Turing himself proved that some well-posed problems, like the halting problem, are undecidable by any algorithm.
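For what it's worth, the limit being gestured at here is usually stated as the halting problem. Here's a minimal Python sketch of Turing's diagonal argument; the `make_contrarian` helper is hypothetical, purely for illustration:

```python
def make_contrarian(halts):
    """Given a claimed halting oracle halts(program, input),
    build a program that the oracle must misjudge."""
    def contrarian(program):
        if halts(program, program):
            while True:        # loop forever if the oracle says "halts"
                pass
        return "halted"        # halt if the oracle says "loops forever"
    return contrarian

# Any concrete oracle is wrong about the contrarian run on itself.
# Example: the oracle that always answers "loops forever"...
never_halts = lambda f, x: False
contrarian = make_contrarian(never_halts)
# ...is contradicted, because contrarian(contrarian) in fact halts:
print(contrarian(contrarian))  # prints "halted"
```

Whatever the oracle predicts about `contrarian(contrarian)`, the program does the opposite, so no perfect `halts` can exist. That's the genuine mathematical limit, though whether it constrains AI any more than it constrains human brains is another question.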
The biological model requires a bit more...err...dissection, but the same principles largely apply. The threshold issue is whether it becomes possible to intelligently redesign the human brain so that it transcends its human limitations. The problem is that, unless they get rid of the human body entirely, there's an inherent limitation here as well: in how any such design can be processed biologically.
The question here is whether such inherent flaws form impenetrable barriers. I'm not saying that they necessarily do. But a theory that doesn't take these threshold issues into account seems to be a flawed thesis.
I guess you could argue "on the assumption that we get past these threshold issues," but that really doesn't do anything for the theory's applicability to our current existence. :)