The representation of time and causality in computation is difficult. Creating an intelligent program to reason about cause and effect based upon the temporal sequence of events is therefore equally difficult. I'm wondering if we're tackling the wrong problem by attempting to represent time at all.

To quote directly from Wikipedia:

> Time is a component of the measuring system used to sequence events, to compare the durations of events and the intervals between them, and to quantify the motions of objects. Time has been a major subject of religion, philosophy, and science, but defining time in a non-controversial manner applicable to all fields of study has consistently eluded the greatest scholars.
>
> Among prominent philosophers, there are two distinct viewpoints on time. One view is that time is part of the fundamental structure of the universe, a dimension in which events occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time. The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable.

The common word above is "sequence." Whether we think of time as infinitely smooth, or somehow discrete, we can look at it as "A, then B." This is remarkably (if merely lexically) similar to our traditional expression of causality: "if A, then B." (At this point, I want to set aside any linguistic arguments about how we might express causality.)

If we declare statements like "if A, then B" or "A causes B," what do these mean exactly? The answer depends upon your frame of reference!

In a physical sense, the cause of a particular event is the immediate and salient antecedent (or antecedents) for that event. This is a problem in our smooth spacetime: because spacetime is continuous, the immediate antecedent for any event is only an infinitesimal deviation of that event from itself. The only possible answer, then, is that the immediate antecedent for any event is the event itself.

Quantum physics gives us a break, however, because we now must think in terms of possibilities, for the collapse of a wave function is not deterministic. According to Feynman, we must sum over all the possible histories, however improbable, to account for something's state. So, let's take a slightly different look at the statement "if A, then B" in that light.

We would rewrite "A causes B" as the counterfactual statement "if not A, then necessarily not B," and interpret it this way: consider all of the possible states where not A, and all of the possible states where B. If the projection of the not-A states onto the B states is zero — that is, no possible history contains B without A — then we can say that A causes B.

What's neat about this is that the magnitude of that projection can, in principle, be measured probabilistically. This gives causality a strength metric rather than a binary verdict.

Returning to what this might mean for computation: I believe it might be possible to construct a system where the probability of something being in state A (or, conversely, the probability of not A) is computable. It's therefore likely that the causality relationship between A and B could be computed as well.
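As a rough sketch of the idea, suppose we could sample the possible states of such a system. Then the "projection of not-A onto B" becomes an estimable conditional probability, and one candidate strength metric is how rarely B occurs without A. Everything below is my own illustration, not anything from the post: the toy world, the 0.5 and 0.05 probabilities, and the `causal_strength` function are all invented for the example.

```python
import random

random.seed(0)

def sample_state():
    # Hypothetical toy world: A fires with probability 0.5, and B fires
    # whenever A does, plus a small amount of noise when A doesn't.
    # "A causes B" should therefore score highly here.
    a = random.random() < 0.5
    b = a or (random.random() < 0.05)
    return a, b

def causal_strength(samples):
    """Estimate the post's metric: the overlap ("projection") of the
    not-A states onto the B states, measured as P(B | not A).
    Strength = 1 - P(B | not A): near 1 when B almost never
    happens without A, near 0 when B is indifferent to A."""
    not_a = [(a, b) for a, b in samples if not a]
    if not not_a:
        return None  # A always holds; the counterfactual is untestable
    p_b_given_not_a = sum(b for _, b in not_a) / len(not_a)
    return 1.0 - p_b_given_not_a

states = [sample_state() for _ in range(100_000)]
print(causal_strength(states))  # should land near 0.95 for this toy world
```

Note that this only estimates the internal, sampled view of the relationship; it says nothing about mechanisms outside the system being sampled, which matches the internal/external distinction below.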

This computational causality between A and B represents an internal view of causality only. Whether an external causality ever matters to a computational intelligence, I shall leave to a later post.

## 2 comments:

I'll preface this comment by pointing out that I know exactly nothing about what I am talking about here.

Is it possible that there is some sort of reverse Newtonian fallacy going on here? Just because time acts in odd ways on an atomic scale does not mean that time need be so odd on a macro scale. Things happen, then other things happen, and so on. Don't get all philosophical on us just because of how photons behave. :-)

An interesting way to look at it.

Seems NotA projected onto NotB should be computable based upon infinite measurements and computations, but at what cost?

Assumptions reduce the computational burden, but then we have "localized" the algorithm, right?

Paul, don't emitted photons "tick away" time according to their energy (by Planck's constant)? It's not so far-fetched to consider computing each elementary transaction underlying a macroscopic observation.
