Friday, May 8
Two steps along the long road
My second semester of this reconsidered walk of life has come and gone.
I'm experiencing a gradual dawning of the responsibility and effort required to land a PhD. It isn't the looming qualifier, the classwork, or the eventual thesis development and defense. It's the great expectations placed upon me (and placed by me upon myself) for any work or comment. I'm finding that a PhD is to be earned stoutly, robustly. The history and tradition of this long road compels in me a new-found, deep respect for my fellow and past scientists, and for science itself.
The popular image of the meek scientist has just become unconjurable for me, for I believe there is nothing meek about discovery, invention, and creation. We are plopped into our jungles, machete at the ready, vague map to the coast, feeling and breathing in the new land. "Meek" cannot even be formed from the available alphabet.
I have found that I'm not really prodigal any longer.
I am on fire. I am in motion.
I am at peace.
Monday, January 19
Drives, Behaviors, Emotions
I've been reading a lot of Cynthia Breazeal's work in prep for the HRI project. One thing I'm struck by is the architecture she developed for Kismet, and how it's based upon drives.
A drive is something that motivates the system to do something else. It has a range of values, but more importantly it has a homeostatic range which the overall system attempts to maintain. Think thermostat, but instead of a particular temperature, add a little slop to it. Suppose I set the thermostat to 70 and allow a variance of 2 degrees. If the ambient temperature drops below 68, the heat is turned on; if it rises above 72, the air conditioning kicks in.
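A minimal sketch of that thermostat-with-slop, just to pin the idea down (the function and its names are mine, purely for illustration):

```javascript
// A toy homeostatic regulator: setpoint 70, allowed variance 2.
// Within the band [68, 72] nothing happens; outside it, we act.
function regulate(setpoint, variance, ambient) {
  if (ambient < setpoint - variance) return "heat";
  if (ambient > setpoint + variance) return "cool";
  return "idle"; // inside the homeostatic range
}

console.log(regulate(70, 2, 67)); // "heat"
console.log(regulate(70, 2, 71)); // "idle"
console.log(regulate(70, 2, 73)); // "cool"
```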
Now, apply that to the mind for a robot. Breazeal identifies three fundamental drives (social, stimulation, and fatigue); my work last semester with Dr. Thomaz showed that later work with Breazeal added two more (novelty and mastery) to facilitate learning and exploration.
A social drive would want to keep a robot engaged socially, but not too much. If it falls too low, then the robot becomes lonely, and would try to seek out companionship or some other social interaction. If it's too high (too many people, too much movement, people too close for comfort, whatever *that* means!), then the robot would become asocial, and perhaps turn away.
A stimulation drive would make the robot seek out changes in its environment. Too little change, and the robot gets "bored"; too much, and it would be overloaded. In Kismet's case, overloading the stimulus leads to it closing its eyes and turning away. The novelty drive that I looked at this fall seems to be a particular elaboration on this, but it involves the robot's belief system (a topic for another post).
The fatigue drive would let the robot tire of interacting, and rest. Perhaps it might do this if its other drives were at the extreme end for too long. Perhaps it would do this periodically anyway (a robotic circadian rhythm). The intention of resting in a robot would be to let the other fundamental drives reset to a neutral state. Here's a design consideration: should the other drives immediately reset to neutral, or should they decay back, allowing for varying amounts of "sleep"/"rest" depending upon exactly how stimulated or social the robot was?
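To make that design question concrete, here are the two policies as I'd sketch them (my own framing and names, nothing from Breazeal's actual implementation):

```javascript
// Two candidate "rest" policies for a drive (my own framing).
const NEUTRAL = 0;

// Option 1: snap instantly back to neutral on rest.
function restReset(drive) { drive.value = NEUTRAL; }

// Option 2: decay toward neutral each tick; a more extreme drive
// therefore needs a longer "sleep" to recover.
function restDecay(drive, rate = 0.1) {
  drive.value += (NEUTRAL - drive.value) * rate;
}

const social = { value: 8 }; // badly over-stimulated
restDecay(social);
console.log(social.value); // 7.2: partway home, still "tired"
```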
Breazeal holds that a robot is in a state of "well-being" when all of its drives are in their respective homeostatic ranges. She maps the state of the drives into a matrix of emotions for Kismet, which then themselves cause behaviors (for example, "fear" could be caused by an abrupt, overwhelming stimulus, and cause the robot to "escape").
Architecturally, all of this is interesting, for it implies three levels with close, asynchronous interactions: drives cause behaviors and emotional stances; behaviors cause drive changes; emotional stances allow or inhibit behaviors. This lets us envision a robot's idle loop.
I'm going to prototype the system in JavaScript (but as object-oriented as possible) and push a primitive version up sometime this week.
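As a down payment on that prototype, here's the shape of the idle loop I have in mind. Every name in it is a placeholder of my own devising, a sketch of the three-level architecture rather than anything from Kismet:

```javascript
// The three-level loop from above: drives propose behaviors and set an
// emotional stance; the stance gates behaviors; behaviors move the drives.
const robot = {
  drives: [{ name: "social", value: 0, update() { this.value -= 1; } }],
  appraise() { // crude stance: distressed if any drive leaves its band
    return this.drives.some(d => Math.abs(d.value) > 5) ? "distressed" : "content";
  },
  proposeBehaviors() { // a lonely drive proposes seeking company
    return this.drives
      .filter(d => d.value < -5)
      .map(d => ({ run: () => { d.value += 10; console.log("seeking", d.name); } }));
  },
};

function tick() {
  robot.drives.forEach(d => d.update());          // behaviors/time move the drives
  const stance = robot.appraise();                // drives set the emotional stance
  const behavior = robot.proposeBehaviors()[0];   // drives suggest behaviors
  if (stance === "distressed" && behavior) behavior.run(); // stance gates them
}

setInterval(tick, 100); // the idle loop
```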
Wednesday, January 7
Robots and Humans and ...?
This spring, I'm taking Human-Robot Interaction, a course taught by Andrea Thomaz. It's only the second time this course has ever been given.
The course is structured (and graded) as a discussion seminar/project. Each class session, we're going to have a student panel discuss and debate two or three relevant papers. We're to write a one-page critique of each paper and hand that in at the end of each class. There will be a few other assignments tossed in as we go along.
The biggest portion of the grade in the class is a team project, where we have to come up with something clever, implement it, write about it, and present it. The sky's the limit, but since this is HRI, the requirements are that it must be on a robot, it must involve human interaction with said robot, and we must evaluate it using human subjects.
Reading and writing about the papers is going to be straightforward (and, I'm thinking I'll get a good jump on things since *all* of the papers have already been posted). The project's so wide open at the moment that I'm much more concerned about it. What do you think I should create? A robot that scowls? An emotive Roomba? A punitive pillow?
Monday, January 5
A semester begins.
The second semester of the PhD begins today. My courses this time are Directed Study (a constant companion, it seems, until the quals are passed) and Human-Robot Interaction, to which I'm very much looking forward. My big challenge of the semester is to dial in a problem definition for my main area of research.
I've written a fair opus on how I think analogical reasoning arises. I'll post a copy of the paper on my tech student site for those interested.
Sunday, August 24
So it begins.
Last week, classes began. Words fail to describe how I felt, finally being there. A new gestalt tag for me, certainly: Keith McGreggor, PhD student. At long last. Wow!
This semester, I'm taking 12 hours, divided among three classes:
* CS 7001 Introduction to Graduate Studies
* CS 8803 Knowledge-Based Artificial Intelligence
* CS 8903 Special Projects (with Ashok Goel)
I have a perfectly fine and well-fitted office in VentureLab (3rd floor of the Centergy Building), but I'm rumored to actually have a desk of my own in the Design Intelligence Lab (2nd floor of TSRB, across the courtyard from Centergy); I've not yet nested there. Group meetings for the DIL begin after Labor Day. I'm joining up with a research effort within DIL called STAB ... I'll know soon what that entails. I'll also be working on a rather neat bit of intuition I've had over the summer as an initial research area, and working with Dr. Goel and the group on a general theory of mind.
Ashok Goel has adopted me as a student and I him as my advisor; I hope to prove worthy of this great kindness he has extended to me.
I expect to post more frequently now, as I need to set some sort of rhythm for myself. Perhaps once a week isn't too difficult a goal. For a shiny, happy, finally landed PhD student.
Monday, June 2
Time from Causality
The representation of time and causality in computation is difficult. Creating an intelligent program to reason about cause and effect based upon the temporal sequence of events is therefore equally difficult. I'm wondering if we're tackling the wrong problem by attempting to represent time at all.
To quote directly from the Wikipedia:
Time is a component of the measuring system used to sequence events, to compare the durations of events and the intervals between them, and to quantify the motions of objects. Time has been a major subject of religion, philosophy, and science, but defining time in a non-controversial manner applicable to all fields of study has consistently eluded the greatest scholars.
Among prominent philosophers, there are two distinct viewpoints on time. One view is that time is part of the fundamental structure of the universe, a dimension in which events occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time. The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable.
The common word above is "sequence." Whether we think of time as infinitely smooth, or somehow discrete, we can look at it as "A, then B." This is remarkably (if merely lexically) similar to our traditional expression of causality: "if A, then B." (At this point, I want to set aside any linguistic arguments about how we might express causality.)
If we declare statements like "if A, then B" or "A causes B," what do these mean exactly? The answer depends upon your frame of reference!
In a physical sense, the cause of a particular event is the immediate and salient antecedent (or antecedents) for that event. This is a problem in our smooth spacetime: because spacetime is continuous, the immediate antecedent for any event is an infinitesimal deviation of that event from itself, and so the only possible immediate antecedent for any event is the event itself.
Quantum physics gives us a break, however, because we now must think in terms of possibilities, for the collapse of a wave function is not deterministic. According to Mr. Feynman, we must sum up all the possible histories, however improbable, to account for something's state. So, let's take a slightly different look at the statement "if A then B" in that light.
We would rewrite "A causes B" as the counterfactual statement "if not A, then necessarily not B," and interpret it over possible states: gather all of the possible states where not A, and ask how they project onto the states where B. If that projection is zero (no not-A state is also a B state), then every not-A state is a not-B state, and we can say that A causes B.
What's neat about this is that the projection is measurable, probabilistically: the fraction of the not-A states that land among the not-B states is just P(not B | not A). This gives causality a strength metric.
Returning, then, to what this might mean for computation: I believe that it might be possible to construct a system where the probability of something being in state A (or, conversely, the probability of not A) is computable. It's therefore likely that the causality relationship between A and B could be computed as well.
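A back-of-the-envelope version of that computation, assuming the system can sample its own states (the observations here are invented, purely for illustration):

```javascript
// Estimate the causal strength P(not B | not A) from observed states.
const samples = [
  { a: false, b: false },
  { a: false, b: false },
  { a: false, b: true },  // a counterexample: not A, yet B anyway
  { a: true,  b: true  },
];

function causalStrength(states) {
  const notA = states.filter(s => !s.a);
  if (notA.length === 0) return NaN; // no evidence either way
  return notA.filter(s => !s.b).length / notA.length;
}

console.log(causalStrength(samples)); // 2/3: fairly strong, but not certain
```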
This computational causality between A and B represents an internal view of causality only. Whether an external causality ever matters to a computational intelligence, I shall leave to a later post.
Quanta et qualia
I've been exploring quantum physics lately, looking for analogies between how smooth spacetime seems to emerge from the frothing probabilistic quantum soup and the way smooth consciousness and explanation emerge from the complexity of our neural connections. I'm particularly drawn toward a theory called Quantum Graphity.
Quantum Graphity asserts that on a fundamental level the universe is like a dynamic graph with vertices and edges. At high energies, this graph is highly connected and symmetric. But at low energies, it condenses into a system that has properties such as a geometry, thermodynamics and locality (the property that distant objects cannot influence each other). (Quoted from here.)
I find an interesting analogy here. The substantial neural connections in our brains are a graph of sorts, where the only thing that matters is that two nodes are connected (i.e., there is no inherent meaning attached to that connection or to the nodes themselves). The high and low energy notions are analogous to whether or not these neural connections are suppressed or enhanced by neurochemicals (dopamine, etc.).
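Just to fix the analogy in my head, here's a toy version, entirely my own invention and not Quantum Graphity's actual mathematics:

```javascript
// A toy "graphity-ish" graph: connections are all that exist; an
// energy-like parameter suppresses or enhances them, the way I imagine
// neurochemicals modulating neural connections.
function condense(edges, energy) {
  // At high energy (nearly) everything stays connected;
  // at low energy only the strongest connections survive.
  return edges.filter(e => e.weight >= 1 - energy);
}

const edges = [
  { from: 0, to: 1, weight: 0.9 },
  { from: 1, to: 2, weight: 0.4 },
  { from: 0, to: 2, weight: 0.7 },
];
console.log(condense(edges, 1.0).length); // 3: highly connected
console.log(condense(edges, 0.2).length); // 1: locality emerges
```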
The dive continues.
Sunday, June 1
Neural Darwinism
Just finished reading Gerald Edelman's latest book, Second Nature: Brain Science and Human Knowledge. It's a summary of his theory of Neural Darwinism, an account of the emergence of consciousness (both primary and higher consciousness) through the selectionist evolution of brain structures.
Very fascinating, provocative, and utterly coherent. Wondering now how to construct an intelligent architecture based upon it.
I'll be reading his prior book Wider than the Sky shortly.
[Edelman won the Nobel Prize for Medicine in 1972, and founded the Neurosciences Institute in 1981.]
Shiny, Happy Programs
For the past few weeks, I have been enjoying Dr. Ashok Goel's Design Intelligence Lab workshop. The discussions have been terrific, and we are working toward a rather interesting taxonomy and theory of intelligence.
A part of this workshop is that we are working our way through Marvin Minsky's latest book, The Emotion Machine. Last week, we discussed the second chapter of the book, Attachments and Goals, wherein Dr. Minsky suggests that we acquire new goals and societal learning from our imprimers, those with whom we form attachments.
This set me thinking: are programs "happy"?
Consider the humble email program. It's been told by you to periodically gather your email and distribute it according to your wishes. To me, this raises several questions:
* Does it really have a "goal" of performing its tasks? [What *would* Minsky say?]
* Does your assessment of an email as "spam/ham" correspond in any way to praise or shame in the sense that this input imprints upon the program some statistical notion of correct behavior? Is directed learning reducible to imprimer modeling? (Side question: can we even call this learning?)
* Does your email program then feel (in its own sense) some computational pride at the correct categorization of your email?
* Would an emotional email program be a better, more efficient one?
* Would we even notice?
Of these questions, I think I'm bothered most by the last one.
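If I had to caricature that spam/ham imprinting in code, it might look something like this. A whimsical sketch, every name my own, and not a claim about how any real mail program works:

```javascript
// A caricature of "imprinting": your spam/ham verdicts nudge a running
// score of correct behavior, which the program could read as praise or shame.
const mailer = {
  pride: 0.5, // computational "well-being", between 0 and 1
  receiveVerdict(wasCorrect) {
    const reward = wasCorrect ? 1 : 0;
    this.pride += 0.1 * (reward - this.pride); // slow-moving average of praise
  },
};

mailer.receiveVerdict(true);  // you confirmed a classification: praise
mailer.receiveVerdict(false); // you corrected one: shame
console.log(mailer.pride.toFixed(2));
```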
Wednesday, April 23
Reading in.
A big problem I face as I restart my academic career is getting a sense of the state of the art. I've kept a weather eye on the field, and dabbled here and there over the years with various AI (or AI-like) programming, but nothing ultra elaborate and certainly not long-term research worthy. So, I'm reading in. Here's the current stack:
- The Society of Mind by Marvin Minsky
- The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind by Marvin Minsky
- A Brief Tour of Consciousness by V.S. Ramachandran
- Consilience: the Unity of Knowledge by Edward O. Wilson
- On Intelligence by Jeff Hawkins
- Reasoning about Uncertainty by Joseph Y. Halpern
- The Road to Reality by Roger Penrose
- Creativity: Flow and the Psychology of Discovery and Invention by Mihaly Csikszentmihalyi
- The Robotics Primer by Maja J. Mataric
Sunday, March 9
the fork in my road
Two weeks ago, on a typical Monday afternoon, I received an email notifying me that I had been accepted into the PhD program at Georgia Tech's College of Computing. That email was the long-awaited fork in my road.
I've taken a path now that will return me to my roots, and I have a lot of explaining to do, particularly to those now stunned by my apparent departure from the tried and true.
Friends, I know it may seem as if you never really knew me at all, but you really did.
So, pull up a chair: I've waited a long, long time to tell you my story.
Welcome to my new blog, the prodigal PhD.