SHRDLU on Film

The glory days of Artificial Intelligence — those heady days when researchers thought genuine intelligence and perhaps even consciousness would transcend the bonds of human brains and sit comfortably on computer hardware — well, it was a fun time to be alive. And check out this gem:

http://www.csail.mit.edu/videoarchive/history/aifilms

An awesome historical collection of AI videos housed at MIT.

This one’s my favorite:


A silent, grainy, low-res film of Terry Winograd’s awesome SHRDLU in action. For those of you youngsters out there, SHRDLU was really quite a remarkable achievement in computer programming. And the fact that Winograd’s first release of the software was in 1968 makes it all the more remarkable. The idea was to create a program that could parse real English sentences and respond intelligently to them. Winograd realized that he couldn’t tackle the problem in an unlimited domain (that is, his program wasn’t going to be able to talk about everything a human might be able to), so he limited the program’s world to a virtual table top with some blocks on it, and a virtual robotic arm that could pick up and move those blocks. Within this tiny domain, SHRDLU was really very impressive. Frankly, I’m still impressed by the program today, even though it’s more than 40 years old.

Here’s a sample excerpt from a human-SHRDLU interaction:

Person: WHAT DID THE RED CUBE SUPPORT BEFORE YOU STARTED TO CLEAN IT OFF?
Computer: THE GREEN PYRAMID.
Person: THERE WERE FIVE BLOCKS TO THE LEFT OF THE BOX THEN.
Computer: NO, ONLY FOUR OF THEM: THE RED CUBE, TWO LARGE GREEN CUBES AND A LARGE RED BLOCK.
Person: PUT THE BLUE PYRAMID ON THE BLOCK IN THE BOX.
Computer: OK. (does it)
Person: IS THERE ANYTHING WHICH IS BIGGER THAN EVERY PYRAMID BUT IS NOT AS WIDE AS THE THING THAT SUPPORTS IT?
Computer: YES, THE BLUE BLOCK.
Person: DOES A STEEPLE
Computer interrupting: SORRY, I DON’T KNOW THE WORD “STEEPLE”.
Person: A “STEEPLE” IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.
Person: ARE THERE ANY STEEPLES NOW?
Computer: NO.
Person: BUILD ONE.
Computer: OK. (does it)
Person: CALL THE BIGGEST BLOCK “SUPERBLOCK”.
Computer: OK.
Person: HAVE YOU PICKED SUPERBLOCK UP SINCE WE BEGAN?
Computer: YES.
Person: WHY DID YOU DROP IT?
Computer: BY “IT”, I ASSUME YOU MEAN THE SUPERBLOCK.
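The flavor of the "tiny domain" trick can be suggested with a toy sketch. To be clear, this is nowhere near Winograd's actual approach (SHRDLU used a full procedural grammar written in Lisp); it's just a crude keyword-driven blocks world, with every name invented for illustration:

```python
# A toy blocks-world interpreter. A crude illustration of the limited-domain
# idea only, NOT SHRDLU's actual parser; all names here are invented.
world = {  # which object rests on which support
    "red cube": "table",
    "green pyramid": "red cube",
    "blue block": "table",
}
vocabulary = {"red", "green", "blue", "cube", "pyramid", "block", "table"}
function_words = {"what", "is", "on", "the"}

def respond(sentence: str) -> str:
    words = sentence.lower().rstrip("?.").split()
    # Like SHRDLU, complain about vocabulary we don't know.
    unknown = [w for w in words if w not in vocabulary | function_words]
    if unknown:
        return f'SORRY, I DON\'T KNOW THE WORD "{unknown[0].upper()}".'
    # Handle a single sentence pattern: "what is on the <object>"
    if words[:4] == ["what", "is", "on", "the"]:
        target = " ".join(words[4:])
        supported = [obj for obj, base in world.items() if base == target]
        return supported[0].upper() + "." if supported else "NOTHING."
    return "I DON'T UNDERSTAND."

print(respond("What is on the red cube?"))  # GREEN PYRAMID.
print(respond("What is on the steeple?"))   # SORRY, I DON'T KNOW THE WORD "STEEPLE".
```

Even this ten-minute toy shows why the restricted world helps: with only a handful of objects and sentence patterns, "understanding" reduces to lookups. SHRDLU's achievement was doing vastly more than this (pronoun resolution, new definitions, planning) while still inside the closed world.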

Really impressive language processing.

AI researchers (and philosophers) at the time were a little too impressed with SHRDLU, and thought that genuinely intelligent computers were surely close at hand. In fact, Winograd himself, after wrestling for years with SHRDLU and computational language processing, came to the conclusion that AI researchers were generally far too optimistic about their achievements, at least with regard to the supposed intelligence of their creations. Winograd wrote:

“Most current computational models of cognition are vastly underconstrained and ad hoc; they are contrivances assembled to mimic arbitrary pieces of behavior, with insufficient concern for explicating the principles in virtue of which such behavior is exhibited and with little regard for a precise understanding.” [Winograd, 1987]

This was a cold shower for a lot of optimistic AI researchers, and a bolster to philosophers who opposed functionalism. But this is a topic for another post.

Happy AI film viewing!

Are We Living In A Computer Simulation?

We recently explored Cartesian skepticism, and its dark conclusion that we can’t know for sure that the external world exists. This post is in a similar vein, as it asks the question: Are we unknowingly living in a computer simulation? One difference between this dark idea and Descartes’ is that if we are indeed living in a computer simulation, there definitely would exist an external world of some sort — just not the one we think there is. Our simulators, after all, would have to live in some sort of an external world in order for there to be computers upon which they could simulate us. But, of course, on this scenario the world that we think of as existing would be a mere virtual creation, and so for us (poor, unknowingly simulated beings) the depressing Cartesian conclusion would remain: the external world as we conceive it does not truly exist.

Of course, if you’ve been even a marginal part of contemporary culture over the last decade or two, you know the movie “The Matrix”, the premise of which is that most of humanity is living mentally in a computer simulation. (Physically, most of humanity is living in small, life-sustaining pods, in a post-apocalyptic real world of which they have no awareness.) You no doubt see the parallel between “The Matrix” and the topic of this post. (Other movies with similar premises include “Total Recall” and “Dark City”, and surely many more that I can’t think of off the top of my head. Which makes me think we have to do a philosophy-in-the-movies blog post soon…) But rest assured that this is no banal foray into Keanu Reevesean metaphysics. (“Whoa.”) The subject of existing in a computer simulation has been pored over to a dizzying extent by philosophers. There’s a lot of meat on this philosophical bone.

Nick Bostrom’s Simulation Argument

Nick Bostrom, a philosopher at Oxford, has developed a most interesting argument, the gist of which is that, under some plausible assumptions, we are quite probably living in a computer simulation. His clever argument concerns advanced civilizations whose computational technology is so powerful that they can easily and cheaply run realistic simulations of their ancestors — people like you and me.

If these advanced civilizations are possible, then, says Bostrom, one of these three hypotheses must be true:

(1) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage wind up going extinct. (The Doom Hypothesis)

(2) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage see no compelling reason to run such ancestor simulations. (The Boredom Hypothesis)

(3) We are almost certainly living in a computer simulation. (The Simulation Hypothesis)

Bostrom claims that (1) and (2) are just as likely as (3), but, really, it’s fairly straightforward to argue that both are actually false. The Boredom Hypothesis, in particular, seems rather implausible. Though we don’t know what such an advanced civilization would think of as worth its time, it’s not unlikely that at least some significant fraction of advanced societies would run such easy and cheap simulations, either out of anthropological curiosity or just for entertainment. (A lot of our best scientists surely play video games, right?) The Doom Hypothesis is slightly more plausible. Perhaps there’s a technological boundary that most civilizations cross that is inherently dangerous and destructive, and only a negligible fraction of civilizations make it over that hurdle. But it’s still tempting, and not unreasonable, to think that such a barrier isn’t inherent to social and scientific progress.

So, if civilizations don’t generally extinguish themselves before reaching computational nirvana, and if they don’t think that the idea of running ancestor simulations is a silly waste of time, then we have a clear path to the Simulation Hypothesis. Say that a thousand civilizations reach this computational stage and start running ancestor simulations. And say these simulations are so easy and inexpensive that each civilization runs a trillion of them. That’s a quadrillion simulated civilizations overall. Now divide that quadrillion by however many real civilizations there are in the universe, which is presumably far fewer than a quadrillion, and you get the odds that any given civilization is a simulated one. Say, for the sake of argument, that there are a million real civilizations in the universe. The odds are then a billion to one against your living in a real civilization. The far more likely proposition is that you are living in a simulated one.
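The back-of-the-envelope arithmetic can be spelled out explicitly. The numbers below are the purely illustrative ones from the paragraph above, not data about anything:

```python
# The post's illustrative numbers -- pure assumptions, not data.
advanced_civs = 1_000             # civilizations that reach the simulation stage
sims_per_civ = 1_000_000_000_000  # a trillion ancestor simulations each
real_civs = 1_000_000             # assumed real civilizations in the universe

simulated = advanced_civs * sims_per_civ    # a quadrillion simulated civilizations
odds_against_real = simulated // real_civs  # simulated civilizations per real one

print(f"{simulated:,} simulated civilizations")  # 1,000,000,000,000,000
print(f"odds of being simulated: {odds_against_real:,} to 1")  # 1,000,000,000 to 1
```

The force of the argument is that the ratio stays enormous under almost any choice of inputs: as long as simulations vastly outnumber real civilizations, a randomly chosen civilization is overwhelmingly likely to be a simulated one.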

Functionalism

One key assumption upon which this argument relies is that things like minds and the civilizations in which they reside are in fact simulatable. This is a contentious claim.

The theory behind the claim that minds can be simulated is often labeled “functionalism” — it gets its traction from the idea that minds might be able to emerge from hardware other than human brains. If we meet an alien from an advanced civilization, learn her language, and converse with her about the meaning of life, we’d like to say that she has a mind. But if, upon scanning her body, we discover that her brain is in fact made up of hydraulic parts rather than our electro-chemical ones, would her different hardware mean that she isn’t possessed of a mind? Or would it be the case that, in fact, minds are the kind of software that can run on different sorts of hardware?

If this is indeed the case, then minds can be classified as functional things — that is, a mental state (say, of pondering one’s own significance in an infinite cosmos) is not identical with any particular brain state, but is some sort of functional state that can be realized on all different sorts of hardware. And if this is true, then there’s no reason, in principle, that a computer couldn’t be one of those sorts of hardware.
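This “same function, different substrate” idea has a natural analogy in software, where one interface can be realized by very different implementations. A loose illustration only — the class names are invented, and no philosopher would accept this as an argument, just as a picture of the claim:

```python
from abc import ABC, abstractmethod

class Mind(ABC):
    """A mind characterized by what it does, not by what it is made of."""
    @abstractmethod
    def ponder(self, question: str) -> str: ...

class ElectroChemicalBrain(Mind):
    """Our hardware: neurons."""
    def ponder(self, question: str) -> str:
        return f"neurons firing over: {question}"

class HydraulicBrain(Mind):
    """The alien's hardware: valves and fluid."""
    def ponder(self, question: str) -> str:
        return f"valves cycling over: {question}"

# On functionalism, both realize the same mental state: the same functional
# role (pondering), on entirely different physical substrates.
for mind in (ElectroChemicalBrain(), HydraulicBrain()):
    print(mind.ponder("one's significance in an infinite cosmos"))
```

Bostrom needs exactly this picture to go through: if minds are interface-like in this way, then a sufficiently powerful computer is just one more substrate on which they can run.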

Given our “successes” in the field of Artificial Intelligence (AI), I have long been skeptical of our ability to create minds in computers. And there’s a proud tradition in philosophy of this sort of skepticism — John Searle, for instance, is one of the more famous anti-AI philosophers out there. (You may have heard of his Chinese Room argument.) But, by and large, I think it is fair to say that most philosophers do come down on the side of functionalism as a philosophy of mind, and so Bostrom feels comfortable using it as a building block to his argument.

I can’t, in this post, get into the debate over AI, functionalism, and the mind, but I will pick on one interesting aspect of the whole simulation issue. Every time I think about successful computer simulations, my mind goes to the simulation of physics rather than the simulation of mental phenomena. Right now, I have a cat in my lap and my legs are propped up on my desk. The weight and warmth of my cat have very diverse effects on my body, and the extra weight is pushing uncomfortably on my knees. My right calf is resting with too much weight on the hard wood of my desk, creating an uncomfortable sensation of pressure that is approaching painful. My right wrist rests on the edge of my desk as I type, and I can feel the worn urethane beneath me, giving way, in spots, to bare pine. My cat’s fur fans out as his abdomen rises with his breathing — I can see thousands of hairs going this way and that, and I stretch out my left hand and feel each of them against my creviced palm. The fan of my computer is surprisingly loud tonight, and varies in pitch with no discernible rhythm. I flake off one more bit of urethane from my desk, and it lodges briefly in my thumb’s nail, creating a slight pressure between my nail and my flesh. I pull it out and hold it between my thumb and finger, feeling its random contours against my fingerprints.

At some point, you have to wonder if computing this sort of simulation would be just as expensive as recreating the scenario atom-for-atom. And maybe if a simulation is as expensive as a recreation, in fact the only reliable way to “simulate” an event would actually be to recreate it. In which case the idea of functionalism falls by the wayside — the medium now matters once again; i.e., feeling a wood chip in my fingernail is not something that can be instantiated in software, but something that relies on a particular sort of arrangement of atoms — wood against flesh.

Who knows, really? Perhaps future computer scientists will figure out all of these issues, and will indeed usher in an era of true AI. But until it becomes clearer that this is a reasonable goal, I’ll stick with my belief that I am not being simulated.

If I am being simulated, a quick aside to my simulator: Perhaps you don’t like meddling in the affairs of your minions, but I could really use a winning lottery ticket one of these days. Just sayin’…