Technological Singularity

This is a term I fished out of Mass Effect (a sci-fi RPG that spans three game titles). You can look into the series further if you’d like, but the term I want to focus on here is “technological singularity.” Wikipedia defines it as the “theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will ‘radically change human civilization, and perhaps even human nature itself.’” At that point, the future of human society becomes unpredictable in relation to its past.

Turing’s famous test asks whether a machine, or an artificial intelligence, can ever pass itself off as human: whether it can imitate human behavior so effectively that it becomes indistinguishable from a person. In Do Androids Dream of Electric Sheep? (don’t worry, no spoilers), it seems that androids do end up imitating humans quite well, and they in fact exceed human capabilities in areas like intelligence. I could bring in more from the novel, but I don’t want to spoil anything, so I’ll just hope that “technological singularity” is a concept we can keep in mind as we read. What exactly about that singularity makes it so threatening to human society, and why do we seem to keep making strides toward it?

Along with this idea, I wonder how it complicates Darwin’s claim that natural selection/God “works solely by and for the good of each being, [so that] all corporeal and mental endowments will tend to progress towards perfection” (267). Maybe it’s not a comparison worth making, but how might the fact that these androids were made by humans call that definition of perfection into question? And how might humankind’s own actions, in creating the very androids that would bring about the singularity, pervert that idea of perfection?
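Since the Turing Test keeps coming up, here is a minimal sketch of the protocol itself, just to make the setup concrete: a judge exchanges messages with two hidden respondents and has to guess which one is the machine. The respondent functions and the judge below are made-up placeholders, not anything from Turing or from Dick’s novel.

```python
# A minimal sketch of the imitation game: a judge exchanges messages with two
# hidden respondents and must guess which one is the machine. Both respondents
# here are placeholders that answer identically, so the judge can only guess.
import random

def human_respondent(question: str) -> str:
    return "I'd have to think about that."   # placeholder human answer

def machine_respondent(question: str) -> str:
    return "I'd have to think about that."   # a machine imitating that answer

def run_imitation_game(questions, judge) -> bool:
    """Return True if the machine fools the judge."""
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)              # hide which respondent is which
    transcripts = [[r(q) for q in questions] for r in respondents]
    guess = judge(questions, transcripts)    # index the judge thinks is the machine
    return respondents[guess] is not machine_respondent

# A judge who can't tell the transcripts apart is reduced to coin-flipping,
# which is exactly the outcome the test is probing for.
naive_judge = lambda qs, ts: random.randrange(2)
print(run_imitation_game(["Do you dream of electric sheep?"], naive_judge))
```

The machine “passes” not by being clever in the abstract, but by making the judge’s guess no better than chance.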

I do wonder, however, whether the idea of “technological singularity” can be brought into conversation with H.G. Wells’s The Time Machine. We know that the Time Traveller goes to a very distant future where the human race has evolved (or devolved) into two distinct species. I’m wondering if there was a point in that timeline where a technological singularity could have played a significant role.


3 Responses to Technological Singularity

  1. fearthefin says:

    I’m really glad you brought this idea of technological singularity into the discussion! (And I wonder, as someone familiar with sci-fi games, whether you noticed the link between the Terrans of Bloodchild and those of StarCraft?)

    As far as it applies to The Time Machine, I think that would be pretty hard to say. Though Wells does seem to have a theory that “industry” would bring about the perfection/devolution of humankind, I don’t think he could ever have predicted how “technology” would affect us, simply because the kind of technology we have now (computers, artificial intelligence) was pretty much unfathomable at the turn of the century.

    I think your question of how technological singularity affects Darwin’s ideal of natural selection is certainly worth asking. I think it really throws a wrench in what Darwin was getting at. Obviously, humans like to play God, to prove their superiority and invincibility to the natural world (and to processes like natural selection). In that sense, I think humans are the ones essentially enacting natural selection on other beings; encroaching on natural land and resources for our own gain is a good example. (I’m reminded of the scene in The Matrix where Agent Smith calls human beings a virus, a plague.) So it’s not too much of a stretch to assume that humans are playing God in creating technology that can perform perfectly. That’s not to say it could fool the Turing Test, but through a form of selection, choosing the traits that are successful and forgoing the ones that fail or don’t meet standards, humans could move technology toward perfection (see the sketch below).
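    To make that selection loop concrete, here is a toy sketch: keep the designs that best meet a standard, vary the survivors, and repeat. Everything in it (the target, the fitness measure, the numbers) is invented purely for illustration; none of it comes from the post or from Darwin.

    ```python
    # Toy "selection by standards": humans keep the designs that best meet a
    # target and rebuild the next batch from varied copies of the survivors.
    import random

    TARGET = 100.0                    # the "standard" a design is measured against

    def fitness(design: float) -> float:
        return -abs(TARGET - design)  # the closer to the target, the better

    population = [random.uniform(0, 50) for _ in range(20)]  # initial designs
    for generation in range(30):
        # Keep the best quarter: the traits that "meet standards" survive ...
        survivors = sorted(population, key=fitness, reverse=True)[:5]
        # ... and the rest of the batch is slightly varied copies of them.
        population = [s + random.gauss(0, 1.0) for s in survivors for _ in range(4)]

    best = max(population, key=fitness)
    print(f"best design after selection: {best:.2f} (target {TARGET})")
    ```

    Nothing here needs to fool anyone; the designs simply drift toward the standard because the ones that miss it never get copied.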

  2. h0p3d1am0nd says:

    I think you raise some really interesting points, bringing together Turing, Dick, Darwin, and Wells: quite a conversation, indeed.

    You say that Wikipedia defines “technological singularity” as the “theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence that will ‘radically change human civilization, and perhaps even human nature itself.’” I think Wells definitely describes a time when human civilization is different, but not necessarily because of A.I. (at least, not in any way that’s directly evident to readers). I agree with the point made by another commenter: “industry” could perhaps equate to “technology” in Wells’s world, though that could also be a stretch.

    I really think it’s intriguing to consider this in a Darwinian, survival-of-the-fittest sort of way, as you briefly suggest. In Dick’s work, the androids only get to survive if they can “act human” enough to pass the tests. Much like Turing’s test, these tests aren’t flawless, but they do strive to tell a person from a machine. The problem in Dick’s novel is that machines and humans are moving in opposite directions: machines are becoming more human-like, while humans are relying more on machines, even to emote (the mood organ, for example). In a society where the humans have become gray and worn down, the less susceptible they are to emotion, the better they’ll survive. The opposite is true of the androids: the more human and empathetic they seem, the higher their chances of survival. I think you’re completely right that this is an example of technological singularity. The androids’ superhuman intelligence has altered human civilization, and the line between human and android is blurring. You raise a good question, though: why do humans continue to improve these androids if it only makes things less clear?

  3. What will happen when the Singularity takes hold? Currently, an engineer, trained and particularly well suited to the task, designs a machine around some operational principle. But when he designs a machine that can not only build, but also design, still further machines, and when those machines become more adept at designing than their original human engineer, under what rules will design and development continue? The first machine, it might be expected, won’t be half as clever as the third, so the third machine will be capable of designing units far superior to those of the first (which itself produces units far superior to those of the engineer). And if the first machine designs a handful of machines, and each second-generation machine then designs another handful, there will be multiple third-generation machines, a handful squared, some more capable than others. These more capable machines will produce still more capable machines, while the less capable machines will produce units inferior to the output of their more capable peers.

    Natural selection won’t drive the evolution of machines; machines don’t die. But assuming that each machine designs progeny that operate somewhat like their parent, some branches, some machine lineages, will be more successful, and thus more prolific, than others. These branches, outpacing the competition, will thrive and come to represent the bulk of future units.
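    Here is a rough sketch of that branching dynamic, with every parameter invented just to show the shape of it: each machine designs a handful of successors whose capability varies around its own, and more capable machines manage to produce more of them. To keep the toy small, only the newest generation does any designing.

    ```python
    # Toy machine lineages: no machine dies, but more capable machines design
    # more (and on average slightly better) successors, so capable branches
    # come to dominate the fleet. All numbers are invented for illustration.
    import random

    def simulate(generations: int = 4, handful: int = 3) -> list[float]:
        machines = [1.0]                   # capability of the first machine
        for _ in range(generations):
            progeny = []
            for capability in machines:
                # A more capable machine designs a bigger handful of successors,
                # each inheriting a capability somewhere near its parent's.
                count = max(1, round(handful * capability))
                progeny += [capability * random.uniform(0.9, 1.3)
                            for _ in range(count)]
            machines = progeny             # only the newest generation designs here
        return machines

    fleet = simulate()
    print(f"{len(fleet)} machines; most capable: {max(fleet):.2f}")
    ```

    Run it a few times and the descendants of the most capable early machines make up most of the fleet, which is the “bulk of future units” point above.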
