Sunday, June 1

Shiny, Happy Programs

For the past few weeks, I have been enjoying Dr. Ashok Goel's Design Intelligence Lab workshop. The discussions have been terrific, and we are working toward a rather interesting taxonomy and theory of intelligence.

As part of this workshop, we are working our way through Marvin Minsky's latest book, The Emotion Machine. Last week, we discussed the second chapter of the book, Attachments and Goals, wherein Dr. Minsky suggests that we acquire new goals and societal learning from our imprimers, those with whom we form attachments.

This set me thinking: are programs "happy"?

Consider the humble email program. You've told it to fetch your email periodically and sort it according to your wishes. To me, this raises several questions:

* Does it really have a "goal" of performing its tasks? [What *would* Minsky say?]
* Does your assessment of an email as "spam/ham" correspond in any way to praise or shame, in the sense that this input imprints upon the program some statistical notion of correct behavior? Is directed learning reducible to imprimer modeling? (Side question: can we even call this learning? See the sketch after this list.)
* Does your email program then feel (in its own sense) some computational pride at the correct categorization of your email?
* Would an emotional email program be a better, more efficient one?
* Would we even notice?
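
On the "statistical notion of correct behavior" point: nothing above pins down an algorithm, but a naive Bayes spam filter is one concrete way to read it, where the user's spam/ham verdicts are the only training signal, the "praise or shame" that imprints the statistics. The sketch below is purely illustrative; `TinySpamFilter` and its methods are hypothetical names, not any real mail client's API.

```python
from collections import defaultdict
import math

class TinySpamFilter:
    """A toy naive Bayes classifier: each spam/ham verdict from the user
    nudges the word statistics -- one reading of Minsky-style imprinting."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.message_counts = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        # The user's judgment (the "praise or shame") updates the statistics.
        self.message_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def spam_probability(self, text):
        # Score both labels in log space, with Laplace smoothing so that
        # unseen words don't zero out a whole message.
        total = sum(self.message_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"])) or 1
        scores = {}
        for label in ("spam", "ham"):
            score = math.log((self.message_counts[label] + 1) / (total + 2))
            label_total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label].get(word, 0)
                score += math.log((count + 1) / (label_total + vocab))
            scores[label] = score
        # Normalize the two log scores into a probability of "spam".
        m = max(scores.values())
        exps = {k: math.exp(v - m) for k, v in scores.items()}
        return exps["spam"] / (exps["spam"] + exps["ham"])

mail = TinySpamFilter()
mail.learn("cheap pills buy now", "spam")    # shame
mail.learn("meeting notes attached", "ham")  # praise
print(mail.spam_probability("buy cheap pills now"))     # high
print(mail.spam_probability("notes from the meeting"))  # low
```

Whether updating a few word counts deserves the name "learning," let alone "pride," is of course exactly the question.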

Of these questions, I think I'm bothered most by the last one.
