Google Ngrams

Have you checked out Google Ngrams? You can search Google’s corpus of digitized books for words and phrases, and graph how their frequency has changed over time.

Here’s what I got by searching for “dualism” versus “functionalism” (with a couple of other terms thrown in for comparison’s sake).

[Ngram chart: “dualism” vs. “functionalism”]

Dogs Are People, Too

A very interesting article in the New York Times, on mapping brain activity in dogs. (And a very nice use of neuroscience to break free of the bonds of behaviorism.)

Although we are just beginning to answer basic questions about the canine brain, we cannot ignore the striking similarity between dogs and humans in both the structure and function of a key brain region: the caudate nucleus.

[M]any of the same things that activate the human caudate, which are associated with positive emotions, also activate the dog caudate. Neuroscientists call this a functional homology, and it may be an indication of canine emotions.

The ability to experience positive emotions, like love and attachment, would mean that dogs have a level of sentience comparable to that of a human child. And this ability suggests a rethinking of how we treat dogs.

We’ll talk about animal rights in a future post. Even without this scientific exploration into animal sentience, there are serious ethical issues with the way we think about the treatment of animals.

http://www.nytimes.com/2013/10/06/opinion/sunday/dogs-are-people-too.html?_r=0

SHRDLU on Film

The glory days of Artificial Intelligence — those heady days when researchers thought genuine intelligence and perhaps even consciousness would transcend the bonds of human brains and sit comfortably on computer hardware — well, it was a fun time to be alive. And check out this gem:

http://www.csail.mit.edu/videoarchive/history/aifilms

An awesome historical collection of AI videos housed at MIT.

This one’s my favorite:


A silent, grainy, low-res film of Terry Winograd’s awesome SHRDLU in action. For those of you youngsters out there, SHRDLU was really quite a remarkable achievement in computer programming. And the fact that Winograd’s first release of the software was in 1968 makes it all the more remarkable. The idea was to create a program that could parse real English sentences and respond intelligently to them.

Winograd realized that he couldn’t tackle the problem in an unlimited domain (that is, his program wasn’t going to be able to talk about everything a human might be able to), so he limited the program’s world to a virtual table top with some blocks on it, and a virtual robotic arm that could pick up and move those blocks. Within this tiny domain, SHRDLU was really very impressive. Frankly, I’m still impressed by the program today, even though it’s more than 40 years old.

Here’s a sample excerpt from a human-SHRDLU interaction:

Person: WHAT DID THE RED CUBE SUPPORT BEFORE YOU STARTED TO CLEAN IT OFF?
Computer: THE GREEN PYRAMID.
Person: THERE WERE FIVE BLOCKS TO THE LEFT OF THE BOX THEN.
Computer: NO, ONLY FOUR OF THEM: THE RED CUBE, TWO LARGE GREEN CUBES AND A LARGE RED BLOCK.
Person: PUT THE BLUE PYRAMID ON THE BLOCK IN THE BOX.
Computer: OK. (does it)
Person: IS THERE ANYTHING WHICH IS BIGGER THAN EVERY PYRAMID BUT IS NOT AS WIDE AS THE THING THAT SUPPORTS IT?
Computer: YES, THE BLUE BLOCK.
Person: DOES A STEEPLE
Computer interrupting: SORRY, I DON’T KNOW THE WORD “STEEPLE”.
Person: A “STEEPLE” IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.
Person: ARE THERE ANY STEEPLES NOW?
Computer: NO.
Person: BUILD ONE.
Computer: OK. (does it)
Person: CALL THE BIGGEST BLOCK “SUPERBLOCK”.
Computer: OK.
Person: HAVE YOU PICKED SUPERBLOCK UP SINCE WE BEGAN?
Computer: YES.
Person: WHY DID YOU DROP IT?
Computer: BY “IT”, I ASSUME YOU MEAN THE SUPERBLOCK.

Really impressive language processing.
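To get a feel for the blocks-world idea, here’s a minimal sketch of a toy interpreter in the same spirit. To be clear, this is a hypothetical illustration, not Winograd’s program: SHRDLU used a full grammar of English and a PLANNER-based reasoner, while this sketch just pattern-matches two command shapes over a tiny made-up world (the object names and phrases below are invented).

```python
import re

class BlocksWorld:
    """A toy blocks world, loosely inspired by SHRDLU's limited domain.
    Understands only two sentence patterns: "PUT THE X ON THE Y" and
    "WHAT IS ON THE X?". Everything else is politely rejected."""

    def __init__(self, objects):
        self.objects = objects   # id -> descriptive phrase, e.g. "blue pyramid"
        self.on = {}             # id -> id of the object supporting it

    def find(self, phrase):
        """Return ids of objects whose description contains all the words."""
        words = set(phrase.split())
        return [i for i, desc in self.objects.items()
                if words <= set(desc.split())]

    def command(self, text):
        text = text.lower().strip().rstrip(".?!")
        m = re.match(r"put the (.+) on the (.+)", text)
        if m:
            src, dst = self.find(m.group(1)), self.find(m.group(2))
            if len(src) == 1 and len(dst) == 1:
                self.on[src[0]] = dst[0]
                return "OK."
            return "I DON'T KNOW WHICH ONE YOU MEAN."
        m = re.match(r"what is on the (.+)", text)
        if m:
            dst = self.find(m.group(1))
            if len(dst) == 1:
                held = [self.objects[i] for i, s in self.on.items()
                        if s == dst[0]]
                return ("THE " + held[0]).upper() if held else "NOTHING."
        return "SORRY, I DON'T UNDERSTAND."

world = BlocksWorld({"b1": "blue pyramid", "b2": "red cube",
                     "b3": "green cube"})
print(world.command("PUT THE BLUE PYRAMID ON THE RED CUBE."))  # OK.
print(world.command("WHAT IS ON THE RED CUBE?"))               # THE BLUE PYRAMID
```

Even this trivial sketch shows why the restricted domain mattered: ambiguity resolution (“which one you mean”) is easy when there are three objects, and hopeless when the domain is the whole world. SHRDLU’s real accomplishment was doing far more than this, with real grammar and real reasoning, on 1960s hardware.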

AI researchers (and philosophers) at the time were a little too impressed with SHRDLU, and thought that genuinely intelligent computers were surely close at hand. In fact, Winograd himself, after wrestling for years with SHRDLU and computational language processing, came to the conclusion that AI researchers were generally far too optimistic about their achievements, at least where the supposed intelligence of their creations was concerned. Winograd wrote:

“Most current computational models of cognition are vastly underconstrained and ad hoc; they are contrivances assembled to mimic arbitrary pieces of behavior, with insufficient concern for explicating the principles in virtue of which such behavior is exhibited and with little regard for a precise understanding.” [Winograd, 1987]

This was a cold shower for a lot of optimistic AI researchers, and a boost to philosophers who opposed functionalism. But that is a topic for another post.

Happy AI film viewing!