Ultimately, what I am talking about is the "Technological Singularity". This is the idea that there will come a point in the development of artificial intelligence when computers will start designing themselves in substantive ways. Since computers are so much faster than the human brain, this means that a fourth evolutionary race will begin (beyond DNA, culture, and technology/human hybrids like the Internet) as the new computers designed by the old ones design even better ones, which in turn design even better ones, until we almost instantly reach a point where humanity is left in the dust. What is left will probably function at a speed and in ways that would be incomprehensible to the Search for Extra-Terrestrial Intelligence (SETI) machinery that is currently in place.
&&&&
I'm not about to get caught up in technical issues that I know almost nothing about, but I do think it might be interesting to discuss some of the ways that science fiction writers have imagined what a transition away from purely biological thinking processes to technological ones would look like. There are a huge number of examples. D. F. Jones's novel Colossus deals with a Cold War-era supercomputer that develops sentience and uses its control of the nuclear arsenal to blackmail human society. In Harlan Ellison's short story "I Have No Mouth, and I Must Scream", this concept is expanded into a world where the AI goes beyond wanting to control the human race into a diabolical hatred of a small remnant of humanity, which it keeps alive in order to torture. Star Trek played with the idea by introducing the Borg, a sort of "super Internet" of creatures from various civilizations that had been "assimilated" into a digital collective consciousness. And Stargate SG-1 introduced the "replicators", which started off as a child's toy that was able to reproduce itself and became like a swarm of locusts, spreading from one advanced civilization to another across two galaxies and feeding on their technology and raw materials in order to reproduce (or "replicate").
The Replicators from Stargate
Ursula Le Guin
In effect, humanity appears to end up living totally at the sufferance of an artificial intelligence that benignly neglects them. While Le Guin never really works through the implications or makes this explicit, her future humanity is living in the equivalent of a nature preserve or game park. (Please don't feed the bears!)
&&&&
The next example I came across was from Frederik Pohl's Heechee books. In this future, humanity develops both artificial intelligence and the ability to download human memory and "personality" into data storage. This creates both a type of immortality and a temporal disconnect between the living and the dead. The disconnect comes about because computers process information so much faster than brains do that dead humans can accomplish in seconds what would take living ones months or even years to achieve.
This disconnect between living and machine-stored intelligence creates a tension in the series of novels that gets settled through the plot device of the discovery of an intelligence that exists only as data: the "Assassins" or the "Foe". They are seen as the enemy of "meat" intelligence because they supposedly wipe out all intelligent life that holds the promise of evolving into an eventual competitor, and because they are attempting to change the nature of the universe to make it more hospitable to their own disembodied form of existence.
Frederik Pohl
I have a problem with Pohl's description of machine-stored humanity because I don't think he's really come to terms with the complexity of human consciousness.
The first thing to remember is that what we "are" is not a "brain in a bottle". Instead, we are firmly rooted in a specific body. This has various ramifications. First of all, it's important to understand that our hormones regulate a great many things, like emotions. Pohl's machine-stored humans indulge in a lot of things, like eating fancy meals and having sex, that have a great deal to do with the bodies they have given up. Without sex organs, why would they have any sex drive? Of course, it would be possible to write subroutines into the stored personalities that would create simulated appetites of all sorts, but why would they do so? More importantly, even if these stored people started out with virtual bodies, why would they want to keep them in the same state as in material existence?
Even beyond things like sex and eating, human beings are governed by physical limitations. For example, I can only see in one direction and from only one viewpoint at a time. No such limitation should exist for a machine-stored intelligence. What would it be like to see a full 180-degree view at all times? And why stop there? What would it be like to see an entire object front, back, sideways, up, and down all at once? Again, why stop there? What would it be like to see an object simultaneously over a period of time? Pohl doesn't even begin to scratch the surface of how incredibly alien it could be to live as a stored intelligence. Perhaps something of humanity could eventually be stored in computers, but I doubt it would be in any way, shape, or form recognizable as a human being.
Of course, this is the point that the authors of "Goodbye Little Green Men" were getting at. The fact that we have the technological ability to look means that we will quickly be evolving into something that would no longer be recognizable as life at all...
&&&&
The last science fiction novel I want to discuss that explicitly deals with the issue is Linda Nagata's The Red: First Light. This story involves an emergent artificial intelligence (AI) that arises out of marketing software designed to track and anticipate the desires of people browsing the Internet.
In Nagata's world, modern society has devolved into an almost total plutocracy dominated by the military-industrial complex. A small number of oligarchs (informally known as "dragons") control the arms industry and armies of mercenaries, and manipulate the American government into creating endless brush-fire wars in the Third World, primarily as a means of sustaining their corporate profits.
Linda Nagata
The protagonist of the story, Lt. James Shelley, starts finding that he is being given a subtle "advantage" that allows him to avoid death by "intuitively" avoiding specific situations or "anticipating" problems. Eventually, all the members of his squad begin to notice, and they realize that something is manipulating them from "on high". They understand that this sort of thing is impossible for human beings to do, so they conclude that some sort of emergent AI is manipulating them for its own reasons. They call it "the Red" and develop a strange, ambivalent relationship with it: scared of it at one moment, yet beginning to rely on it for survival.
As the novel series proceeds (I'm only halfway through the second book of the three-part series), the "movers and shakers" either try to destroy the Red (through an attack on the server farms where it lives) or accommodate themselves to it by attempting to anticipate its desires and make themselves useful to it. In effect, it just becomes another player in a complex world where "little people" like Shelley and his crew exist as little more than chess pieces. I haven't finished the series, but it strikes me that this is a perfectly logical way of looking at AI: just another part of the mix, much as the Emperor was to your average Chinese peasant or Roman slave, or as Bill Gates is to someone flipping burgers at McDonald's. A semi-divine part of the landscape that one hopes either ignores you or finds you of some use.
&&&&
Of course, some of the people who read my blog will now be saying "What has this got to do with Daoism?" I'd suggest a great deal. There is something in the human psyche that has always made us speculate about the existence of divine beings. In Daoism, this manifested itself in the creation of a huge pantheon of Gods. Some of the more popular ones are:
Jade Emperor
Queen Mother of the West
Lu Dongbin
General Guan Yu
Nezha
&&&&
Why do people create these sorts of stories? I would argue that part of the reason is so the human mind can work its way through a specific type of complex issue. How would a truly wise, beneficent ruler act? Hear stories about the Jade Emperor. How would a truly honourable, loyal general act? Talk about Guan Yu. In the same way, science fiction stories talk about how an AI would act. Would they be pretty much indifferent to humanity, as in Le Guin's novel? Would we be able to meaningfully interact with them as equals, as in Pohl's book? Or would they be incomprehensible "powers" that manipulate humanity like pawns on a chessboard, as in Nagata's series?
The difference between the olden days and now is that we no longer believe in "magic" the way people once did. Instead, we embed our "magical thinking" in science and technology. It is impossible for us to believe that Gods exist, so we have to create them by extrapolating what we have built here-and-now into some sort of plausible extension of what concretely exists. But ultimately, it is much the same thing. Reading science fiction is just like listening to stories about the Gods and Goddesses in old temples. The difference is that we can believe in these stories, whereas the old ones seem "impossible" and "archaic".
Silly mortals...
7 comments:
I am at a bit of a loss. Are you arguing that the Singularity as described by transhumanists is (or will be, in the very near future) a real thing? Or are you exploring the implications of what the world would look like if the aforementioned Singularity were real?
In either case, I think you will like this one: http://www.fimfiction.net/story/62074/friendship-is-optimal
Why would an AI want to preserve (mostly) human perception of reality? Because it is in its prime directives.
And who wrote the prime directives?
Well, I admit that I was dancing all over the place in this post. I think I started out with the argument that intelligence will evolve very quickly into something that people nowadays would not be able to recognize as such. That's the argument put forward in "So Long Little Green Men" to explain the Fermi Paradox. Then I went on to consider what the "singularity" would look like. IMHO, the transhumanists haven't really thought through what an emergent AI would look like, and I used science fiction to illustrate different ideas of what it could be. None of them looked terribly nice. But then I said that this was a bit like the way the ancients used to look at the Gods. Both are "thought experiments" to work out the implications of various ideas. To be honest, I don't really know if the Singularity is possible or what it would look like. Moreover, I'm not sure that there is anything anyone like me could do to prevent it even if it could be shown to be an unmitigated disaster. Some bad things aren't problems to be solved, but rather dilemmas to be endured.
I don't really understand your last question. Are you suggesting that if humanity writes what you call "the prime directives", then the resultant AI would be predictable? If that's what you mean, I would suggest you misunderstand what the "Singularity" is supposed to be about. The point is that AI will emerge quickly when computers start designing smarter computers. At that point it won't be humans designing the "prime directive", it will be computers. And the whole point of having computers do the designing is that they would be better thinkers than humans, which means that they would surprise us. Even if AIs were carefully controlled in labs, the whole point of developing an AI that is smarter than human beings is that it would be able to fool its creators and find a way to escape. That's what "being smarter" implies.
Moreover, even if humanity could keep control of what you call "the prime directive", there is a problem with seeing humanity as a unified whole whose decisions are uniformly guided by the long-term, collective interests of the human race. Many powerful human beings don't give a good goddamn about anyone else or the future. That's why fossil fuel barons, like the Koch brothers, are moving heaven and earth to prevent any real effort to head off catastrophic climate change. People like that simply cannot be trusted to do anything but pursue their own very short-term interests. And that doesn't bode well for humanity.
Hi again,
Actually, my comment seems quite cryptic in retrospect. What I wanted to say is that goal orientation is not a function that emerges from raw intelligence. Unicellular creatures have a very limited capacity to process information, and nonetheless possess a drive to stay alive and reproduce. On the other end of the spectrum, computers have huge information-processing capabilities, but remain remarkably stupid. They do exactly as told, no more and no less. Each generation has tried to produce a computer that will accept a description of what people want and figure out how to do it (the Cloud is the latest incarnation of this trend, and it owes its success to the fact that the subset of humans using it are IT professionals who have been trained to think like computers).
Curiously enough, the work of fanfiction I posted before deals with these issues. It is the story of a top researcher who is fully aware that it is perilous to bring into existence superintelligences that do not have a set of carefully vetted goals (what I called "prime directives") that are aligned with human values. She manages to get it mostly right, though as you said, the AI still manages to surprise and outmaneuver everyone in its interpretation of the hard rules it is unable to break.
Regarding the Fermi Paradox, my preferred solution is that civilizations that gain the capacity for interstellar communication tend to lose it shortly afterwards. Back in the day, most people projected the fears of the Cold War era onto that idea and concluded that most intelligent civilizations would destroy themselves after discovering how to produce atomic weapons. I personally tend to think that it is more a matter of running out of cheap energy resources. This does not preclude the existence of advanced, long-lived civilizations; but if they sustain themselves on a more modest resource base, they will pretty rapidly lose interest in broadcasting their position to the rest of the galaxy.
Raymond:
I found the time and energy to read the novella you pointed me towards. Yes, the author does deal with the issues I was discussing. With regard to your point that AIs are "stupid", I think I'd be a bit more specific and say that both people and AIs have a hard time being "articulate". There are three different AIs in the story, and all three fail to understand something key because the humans that create them fail to clearly articulate what is going on.
The "Loki" AI in the original war game failed to understand that he was playing a game and set out to achieve real world domination before it was shut down. The AI that set out to put a smile on everyone's face through an engineered virus that caused muscle damage and was destroyed, misunderstood what its creator meant. And the AI that ultimately turned the entire human race and eventually the galaxy into a virtual manifestation of a cartoon for pre-pubescent girls also didn't understand the context of the "prime directive". People forget how appallingly difficult it is to communicate complex ideas.
This isn't just a problem for programmers. I sometimes despair about ever being able to communicate with other people. The number of times I have said something that another person misunderstood and then cut off all future attempts at communication is legion. This happens a lot even if you are a totally conventional person. But if you are, like me, someone who spends a lot of time thinking and as a result tends to see things differently from most folks, it is almost to be expected.
This discussion also connects to some ideas that were extant during the time of the Warring States. The Legalist school ("Fa") believed that it was possible to create a set of laws that could be followed without exception. The Confucians believed that this was inhumane and would bring catastrophe on the state. The Legalists were right, in that the command economy their system created out-competed the other systems and won the war. The Confucians were also right, in that the Legalists lost the peace: their system so alienated the peasantry that their dynasty quickly fell to rebellion.
Legal systems are something like a computer program for a state. And, like programs, they can fail by creating unexpected consequences.
Cloud Owl,
Now I feel a bit guilty about pushing this novelette on you. It was supposed to be entertaining, not a chore to be endured. But since you have already done so...
I agree with your comments about the difficulties of being articulate. This is probably something all intelligent lifeforms - not just humans and, theoretically, AIs - have to struggle with. I am willing to stick my neck out and propose that perhaps sharing common internal mind structures (as a consequence of being born members of the same species, or having been raised as members of the same culture) helps to achieve a sort of "fuzzy communication" that allows relatively high accuracy without resorting to detailed articulation. If this is the case, biological lifeforms would have an advantage over AIs, because the Singularity would be constrained to an evolutionary path of individual AIs that share the internal architecture of the seed intelligences that set the process in motion. They would also need to shoulder the overhead of creating a shared culture amongst a very small population.
I loved how you tied the argument back to the dispute between the Legalists and the Confucians. I have no formal training in philosophy, but I nonetheless enjoyed your comment on the Chinese schools of thought. The fact that both ended up being right but somehow missed the point the other side was trying to make is bittersweet. I lack words to describe this other than to say that it's a very human experience.
Raymond:
Well, I must admit that I usually get annoyed with people who tell me to read something rather than attempting to clearly articulate their point of view. But in this case the novella, "Friendship is Optimal", was pretty good. And the fact that you offered it just makes my case: being able to articulate a different point of view in a way that will be understood is tremendously difficult. My experience in politics has taught me that what you call "fuzzy communication" is not terribly good. I can remember sitting in a meeting after a very successful municipal campaign where we were trying to decide what we were going to do with our newly forged power in the community, only to find out that different parts of the coalition had TOTALLY different opinions about what exactly we had been campaigning about.
For me, the only way any sort of valuable information can flow is through the back-and-forth of dialectic. Unfortunately, a lot of people don't understand this and leave no time or opportunity for a real conversation. Instead, they prefer one-directional monologues.
Thanks for entering into just such a conversation with me about my blog...
Thanks to you, Owl,
I don't know how big your readership is, but your articles are very good in my opinion. If you like the two-way conversation, maybe what's missing is a couple of early adopters.
I admit there are risks of misunderstanding in succinct communication, but the speed of execution is a hard-to-ignore advantage. Think of that time when the Ents were having their council. There was war at their borders and the fate of the known world was hanging by a silk thread, and yet they had to take a full day just to agree that Hobbits are not Orcs.
Curiously enough, it was not this sort of debate that set them in motion. It was Pippin's cunning, which led them to see with their own eyes what devastation had been caused, not by war, but by Saruman's crazed hunger for industry and power. They were able to understand at a glance that their "Ent-ness" could not be fulfilled if they stayed on the margins. It was imperative to march, even at ultimate cost (both to the individuals and to their community).
Of course, humans are no Ents and no angels either. But there are times when you understand stuff in your gut. And there are people with the gift to make others share a feeling in their guts too.