Tuesday, July 11, 2006

More armchair apocalyptic speculation

Inspired by his never-ending quest for progress, in 2084 man perfects the Robotrons: A robot species so advanced that man is inferior to his own creation.

Guided by their infallible logic, the Robotrons conclude: The Human race is inefficient, and therefore must be destroyed.

Robotron 2084 is an excellent classic arcade game. (Its two-joystick control system is essential to the feel of the game, so downloadable versions do not do it justice.) But is it also a prophecy? Did Eugene Jarvis tap into some subconscious gift of foresight when he penned the throwaway attract-mode text I quote above? 2084 seems like a fairly reasonable date for some AI creation to surpass humans in every meaningful way. Or for us to turn ourselves into the Robotrons via genetic engineering, nanotechnology, cybernetic enhancement, etc. If anything, 2084 is a bit late. It would be fun to be alive for the crossover.

I hope humanity is not destroyed by Robotrons or any equivalent entity. But this is purely a selfish hope, not a moral one. If Robots can be better than people can, more power to 'em. We're just biological robots, after all. I hope the Robotrons surpass us morally, intellectually, and spiritually. Not that it would be hard to do so. Maybe they'll keep a few of us around as a reminder of their heritage, or as a cautionary tale about bad brain design. But then, maybe the Robotrons won't need such mental crutches.

If you are interested in this subject, I can recommend this book:

The Age of Spiritual Machines: When Computers Exceed Human Intelligence

Kurzweil's got a new one out, too (which I haven't read):

The Singularity is Near: When Humans Transcend Biology


Blogger Anthony said...

Funny coincidence. I've also been reading about the technological singularity. I find the premises a bit spurious myself - it presumes that a machine that CAN be used to design a better machine WILL be.

9:57 AM, July 11, 2006  
Blogger Zachary Drake said...

Well, eventually someone will do it, if only to try to predict the financial markets or something. I think one reason why technological advancement has such an inevitable quality is that there are so many different reasons to engage in it, and these reasons can be very powerful. "Arms race" situations of all kinds can easily crop up. Someone with no intention of building the Robotrons could end up doing it just because they're trying to build a better house-cleaning robot than some other company. Or more likely, some hacker takes the relatively dull set of innovations developed for robotic space probes, or some descendant of the Tamagotchi or the Roomba, and twists them to loftier/more sinister purposes.

Kurzweil can be quite over-the-top, and AI has been a lot harder than initially thought. But in some ways, Google is already Skynet, Jr. And goodness knows what algorithms the NSA has cooked up under this secretive administration. They're not here yet, but our society is probably already pregnant with the Robotron embryos.

11:01 PM, July 11, 2006  
Blogger Anthony said...

One retort to that, with two subparts.

I believe the math involving the technological singularity requires an exponential "downhill" slide, and there -is- a threshold issue to overcome - the singularity does not happen until the "design" threshold is crossed. The two most popular theories for crossing it are AI and biology.

I believe the issue with AI lies in the inherent limitations of mathematics. I'm not an authority on this - I will defer to anyone who shows greater mastery of the subject - but I think it's an important point and it must be raised.

The Turing model of mathematical intelligence has one flaw - the assumption that mathematics can be used to solve any mathematical problem. It can't - think of dividing by zero, or Turing's own halting problem, which shows there are well-posed questions no algorithm can answer.

The biological model requires a bit more...err...dissection, but the same principles largely apply. The threshold issue is whether it becomes possible to intelligently design the human brain so that it can transcend its human limitations. The issue with this is that, unless they get rid of the human body, there's an inherent limitation as well - in the manner in which the method can be processed biologically.

The question here is whether such inherent flaws form impenetrable barriers for our threshold issue. I'm not saying that they necessarily do. BUT, a theory that doesn't take these threshold issues into account seems to be a flawed thesis.

I guess you could argue "on the assumption that we can get past these threshold issues," but that really doesn't do anything for the theory's applicability to our current existence. :)

8:27 AM, July 12, 2006  
