Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
April 16, 2010
If I were to tell you that we have not the faintest idea how the universe came into being out of nothing, I would not be telling any of you anything you don't already know full well. How can we get "something" from nothing? The ancient Greek philosophers said, "Nothing comes from nothing." The ancient Hebrews started with Yahweh. Every society has its own creation myth for the origin of "the world," of life, of its people.
I am about to try to state a creation myth. I do not believe it. But I do not disbelieve it either. There will be just enough sense in what I say that the right skeptical response seems to be, "Well ... maybe."
Wolfgang Pauli, the famous physicist, famously said of one young physicist's theory that it "was not even wrong!" It is a wonderful line and, I hope, a true tale. But in some defense I say back, "Any idea at all cannot be worse than no idea."
I think, however, we are safe. I do not envision armies of the converted rising to defend my speculations below.
It all has to do with some odd characteristics of "Possibilities". I am going to try to build a prototheory in which a sudden explosion of possibilities underlies the creation of the universe out of nothing but "The Possible."
I begin with the fact that Quantum Mechanics admits, as one consistent interpretation of the famous Schrodinger equation (one I have discussed in previous blogs), the view that what is "waving" is a set of ontologically real possibilities.
A first clue that we might want to take an ontologically real "possible" seriously arises from the 19th Century American philosopher Charles Sanders Peirce. Peirce noted that there are three categories of the Modal: the Actual, the Possible, and the Probable. This is related to the "Law of the Excluded Middle" in logic:
Consider the conjoined statement, ("Princess Diana died in a car crash" AND "Princess Diana did not die in a car crash"). This can be schematized as (A AND NOT A). Taken together, "A AND NOT A" is a contradiction, forbidden by classical logic (strictly, by the Law of Non-Contradiction, the twin of the Law of the Excluded Middle).
Now flip a coin 10,000 times and consider the statement, ("The probability of 4723 heads is 0.214 AND the probability of 4723 heads is NOT 0.214"). Again this is a logical contradiction, forbidden by the same law.
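Whatever its actual value (the 0.214 above is illustrative), the probability of exactly 4723 heads is one definite number, so a statement about it and its negation cannot both be true. A minimal Python sketch, computing the exact binomial probability:

```python
from math import comb

# Exact probability of getting exactly k heads in n fair coin flips:
# P(k) = C(n, k) / 2**n.  Python's big integers keep the arithmetic exact.
def prob_heads(k, n):
    return comb(n, k) / 2 ** n

p = prob_heads(4723, 10000)
# p is one definite number, so "P = p" and "P != p" cannot both hold:
# probability statements obey classical two-valued logic.
```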
Now consider the two slit experiment and the statement, ("It is possible that the photon went through the left slit AND it is possible that the photon did not go through the left slit"). This statement is not only not a contradiction; on the Copenhagen interpretation of Quantum Mechanics, the statement is true!
As Peirce pointed out, Actuals and Probabilities obey the Law of the Excluded Middle.
Possibilities do not obey the Law of the Excluded Middle.
This fact alone is an important clue that an interpretation of quantum mechanics in terms of an ontologically real "possible" is a legitimate interpretation, and can be taken as one line of evidence for an ontologically real "possible."
Pause to take this in. Empedocles said that only Actuals exist in the world. Aristotle seemed to say that both Actuals and Potentia were ontologically real. The early 20th Century philosopher and mathematician Alfred North Whitehead believed that both Actuals and Possibles were ontologically real. He held that Actuals give rise to Possibles, which give rise to Actuals, which give rise to Possibles.
In an earlier post, I pointed out that with the interpretation of quantum mechanics in which quantum possibilities, like Feynman's sum over all possible histories, are ontologically real, a very important feature arises: the constructive and destructive interference of the Schrodinger wave that gives the famous pattern of light and dark bands in the two slit experiment. If the "possibility waves" are ontologically real, then interference must be interpreted as ontologically real interactions among ontologically real possibilities "out in the world." As real possibilities they can partially or completely block one another in destructive interference, when Possibility peaks coincide in space with Possibility troughs, yielding a dark band on the photodetector screen. Conversely, these ontologically real possibilities can augment one another when wave peaks coincide or wave troughs coincide.
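The blocking and augmenting just described can be sketched numerically. Below is a minimal, idealized Python illustration (unit-amplitude waves and a bare phase difference, not a full two-slit calculation): the intensity at a point on the screen is the squared magnitude of the sum of the two slit amplitudes.

```python
import cmath
import math

def intensity(phase_diff):
    """Squared magnitude of the summed possibility amplitudes from two slits."""
    a_left = cmath.exp(0j)                # possibility wave via the left slit
    a_right = cmath.exp(1j * phase_diff)  # possibility wave via the right slit
    return abs(a_left + a_right) ** 2

bright = intensity(0.0)      # peaks coincide: constructive, a bright band
dark = intensity(math.pi)    # peak meets trough: destructive, a dark band
```

With zero phase difference the waves augment one another (intensity four times that of a single slit); at a phase difference of pi they block one another completely.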
Then, on this interpretation of quantum possibilities, in a sum over the history of all possible pathways photons can take to the light detector via the two slits, those ontologically real possibilities "out there" can interact with one another.
It is, in fact, a mind bending idea. But let's hold onto it.
This interpretation suggests that the Possible is real.
But first, let's backtrack to Newton. His laws, say for billiard balls on a table, predict a succession of only Actuals. The Actual positions and momenta of the balls exactly determine, via integration of his laws of motion, the next Actual positions of the balls. There are no Possibles, except perhaps in the weak sense of the forward and backward time trajectory of the deterministic Newtonian system.
With Einstein's wonderful General Relativity and the four dimensional block universe of space-time, there are only world lines of events that weave through the block universe. In this block universe, time itself disappears. This disappearance is called the problem of time in General Relativity. But even more strongly, there are no Possibles at all. All is purely Actual.
It seems deeply interesting to me that Einstein's General Relativity, widely regarded as the highest culmination of classical physics, deals only with Actuals. Yet Einstein received his Nobel Prize for the photoelectric effect, a major step towards modern Quantum Mechanics which can deal with possibles.
It is commonly realized that fitting General Relativity together with Quantum Mechanics is very hard because GR is a strongly non-linear theory and Quantum Mechanics is a linear theory. I'll return to this below, for the linearity of Quantum Mechanics is the heart of this blog.
But beyond non-linearity and linearity, there can be a metaphysical difference between General Relativity, with no Possibilities, and Quantum Mechanics which at least has a consistent interpretation in terms of Ontologically real Possibilities. The two cannot match one another, if this claim is true, and these fundamental ontological differences may be a deep reason for difficulties finding a theory of quantum gravity.
So: I begin with the assumption of an ontologically real Possible. In my creation myth, The POSSIBLE WAS, before a single universe emerged.
The next thing I want us to do is to start without any laws at all. I want the laws of the universe to emerge. Indeed, I want the laws of the universe to emerge by a kind of abiotic selective advantage, out of the Possible, all on their own, and naturally.
Next, let's notice two huge clues:
First, consider all the known subatomic particles in the Standard Model. It is a famous fact that these form a mathematical group. That is, they
reflect the symmetries of an underlying mathematical structure with the property that each particle can stay the same, can convert to some other
particles, can revert back to the initial particle and most importantly, this entire process gives rise to exactly the same set of particles.
Why should this be true? One can imagine particles giving rise to jets of ever new particles forever. Why should the particles form a self-recreating group?
I will suggest that any such group has an enormous abiotic selective advantage in an early universe or pre-universe, compared to particles that jet off in streams of ever new particles. In biological terms, the group of particles is a "self maintaining set" in the minimal sense that, once formed, the set recreates only itself. Let the jets of particles jet away; the self maintaining set "gets to exist," even as quantum objects. Moreover, if the particles in the group can ever multiply, so that particle number in the pre-universe or early universe is not conserved, the group becomes the abiotic analogue of a "replicator": it produces more of exactly itself in The Possible.
Second, Quantum Mechanics has two magical properties. It is a linear wave equation, and the squares of the amplitudes of all the waves, representing all the possibilities, add exactly to 1.0. The latter property means that a global property of the amplitudes is exactly conserved. Each property confers what I am again going to call an enormous "abiotic selective advantage" on such a set of Possibilities.
The first, linear, property of the Schrodinger wave equation, say for an electron in a box, or potential well, yields mathematical solutions called eigenfunctions, showing the space-time pattern of amplitudes for the position of the electron in the potential well. But in fact, mathematically, a linear theory has two further magical properties. There is an essentially infinite set of eigenfunction solutions to the Schrodinger equation for the electron in the potential well. More amazingly, since the theory is linear, all the infinitely many possible sums and differences of any pair of eigenfunction solutions are also solutions of the Schrodinger wave equation.
Thus there are vastly, indeed infinitely, many possible solutions to the Schrodinger partial differential equation. The possibilities of the Schrodinger equation can diversify wildly, yet their squared amplitudes sum to 1.0, so a feature of their total amplitudes is exactly conserved. In The Possible, solutions, or possibilities, derived from the Schrodinger equation can explode, yet in total, via the sum to 1.0 of their squared amplitudes, the ontologically real possibility amplitudes do not disappear. In a pre-universe, such possibilities have enormous selective advantage compared to possibilities for which these properties do not hold.
These abiotic selective advantages will be the basis for my hoped for natural emergence of both the group property of our particles and something like the linear Schrodinger equation linking the behaviors of those particles.
For those of us not familiar with eigenfunctions, a familiar guitar string can help. Its ends are fixed. It can vibrate in its fundamental mode, or any harmonic above that, to infinite frequencies in classical physics. These patterns of vibration are the eigenfunctions of the equations for the guitar string. Just as in quantum theory, the sums and differences of these different string vibrations correspond to different proportions of the diverse harmonics of the base tone.
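The guitar-string analogy can be made concrete. A sketch, assuming a string of unit length with fixed ends: each harmonic sin(n pi x) vanishes at both ends, and because the string's equation of motion is linear, any weighted sum of harmonics is again an allowed vibration that still satisfies the boundary conditions.

```python
import math

def harmonic(n, x):
    """n-th standing-wave eigenfunction of a unit-length string, fixed ends."""
    return math.sin(n * math.pi * x)

def superposition(x):
    # A mix of the fundamental and the third harmonic: also a valid vibration,
    # precisely because the string's equation of motion is linear.
    return 0.8 * harmonic(1, x) + 0.6 * harmonic(3, x)

# The superposition still respects the fixed ends:
ends = (superposition(0.0), superposition(1.0))
```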
In short, a first magical property of the linear Schrodinger equation is that it yields an infinite spray of Possibilities. We'll see below that this does not seem to be true of most possibilities and that fact is central to my creation myth.
And again, the other amazing property of the Schrodinger equation is that the squares of the absolute values of the amplitudes of the ontologically real possibility waves sum exactly to 1.0. So, as Max Born first pointed out, these squared amplitudes can be interpreted as probabilities (probabilities, not mere possibilities): if the electron in the potential well is measured, the probability of its being detected at such and such a spot and moment is given by the squared amplitude for that possibility.
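A sketch of the Born rule, with hypothetical amplitudes chosen so the squares sum to 1: squaring the absolute values of the amplitudes yields a probability distribution over outcomes, and sampling from it models detecting the electron at one spot.

```python
import random

# Hypothetical amplitudes for three detection positions; complex phases allowed.
amplitudes = {"left": 0.5, "center": 0.5, "right": 0.5 + 0.5j}

# Born rule: probability = |amplitude|^2.
probabilities = {pos: abs(a) ** 2 for pos, a in amplitudes.items()}
total = sum(probabilities.values())   # 1.0, up to floating-point rounding

def measure(rng=random):
    """Sample one outcome with Born-rule weights: a jump to a single Actual."""
    positions = list(probabilities)
    return rng.choices(positions, weights=[probabilities[p] for p in positions])[0]
```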
Thus, for my creation myth, we have two remarkable features of Quantum Mechanics and the Standard Model which unites the strong, weak and electromagnetic forces, but not gravity. The particles form a self maintaining group. The probabilities of the Schrodinger equation, by always summing to 1.0, maintain themselves. Both exactly.
Let's turn to what we know in real life about possibilities in biological evolution and our practical life as, I hope, free willed agents.
3.7 billion years ago, life emerged on the earth, or jumped here via space. In any case, over these eons, species have come to be, created opportunities, or niches, for other species to come into existence and make a living, and gone extinct, in a rolling wave of becoming. Each species creates possibilities, adaptive opportunities, for other species, which in turn create opportunities and also block other opportunities for still other species to come into existence. Thus, I have written about the non-prestatable emergence, by Darwinian preadaptation, of the swim bladder from the lungs of lungfish. But once there were swim bladders, we can imagine bugs that could only live in swim bladders arising in evolution. It is not fanciful: a very small bacterium lives only in the lungs of sheep. So the unforeseeable coming into existence of some species and organs creates the possibilities for other species to come into existence. And presumably the same process blocks the coming into existence of still other species. Given the wolf, a similar predator cannot easily come to occupy its niche.
In evolution, selective opportunities - or selective possibilities - arise, are selected, enable some other further possibilities, swim bladder to swim bladder bug, but block other possibilities, wolf blocking evolution of near wolves from alternative founder species.
Now let's try our real life. You go to your lawyer about founding a business, business plan all worked out. You start talking. You say, "But of course the plan is reasonable. I've assessed the risks as I must. They check out. But of course, if X, which is quite unlikely to happen, did happen, that would ruin or at least lower the likelihood of this part of my plan. On the other hand, if Y then occurred, now possible because X occurred, it would tend to wipe X out, so my plan would be safe. On the other hand...."
Your lawyer, looking at her watch, says, "Enough. We can go down these alleys of ever more remote possibilities until Doomsday. We don't know; let's cut back to the short term and get real."
We all know this experience. In short, in our real lives, opportunities seem to have likelihoods and to enable or block one another. Moreover, we are aware that as we extend, in our planning imagination, to possibilities further in the future that tend to be enabled or blocked by earlier opportunities, they become ever more unlikely, precisely because they become ever more "contingent": X only happens if Y does not happen, but could, and Z occurs first to make X more possible, and...
Both in human life and planning and in the evolution of the biosphere, possibilities, like the quantum wave constructive and destructive interferences, enable or block one another.
But something critical is different about these possibilities compared to Quantum Mechanics, which, thanks to its linearity, gives rise to an infinite set of eigenfunction solutions of possible behaviors for the Schrodinger equation for the electron in the potential well; the infinite set of the sums and differences of these solutions are further possibilities; and, more magically, the squares of the amplitudes of all these possibilities sum to exactly 1.0, a sum that is conserved.
No: normal, common variety possibilities do not give rise to an infinite, or at least vast, spray of new possibilities. Nor do they have a known measure, a likelihood attached to each possibility, call those likelihoods "amplitudes," such that some simple mathematical operation acting simultaneously on all the likelihoods, like squaring, makes them sum to exactly 1.0.
In short, our familiar possibilities are not like quantum possibilities at all.
We do not have the faintest idea how to mathematize the likelihoods, or amplitudes, of normal possibilities which, in some sense, match how the biosphere evolves and the world in which we humans live. But it is important to point out that those amplitudes, if we try to mathematize them, would very likely be non-linear and interact non-linearly.
Why nonlinearly? Well, we're just creating a creation myth. However, it is mathematically true that there are vastly many nonlinear partial differential equations. Linear partial differential equations like the Schrodinger wave equation are a "set of measure zero" in the space of all mathematical partial differential equations. So if one picked a pot full of partial differential equations as a zeroth order trial mathematical model of our familiar possibilities propagating in time and space, almost all would almost certainly be non-linear.
I am about to propose that such non-linear partial differential equations for possibles would be expected to give rise to possibles that block and enable one another, more or less as we are familiar with in our everyday and evolutionary experiences of possibilities.
The next point to consider is mathematical, and a difficult area concerning partial differential equations. Some partial differential equations are known to have a set of solutions, eigenfunctions. This set is the spectrum of the partial differential equation. But some partial differential equations are known NOT to have solutions, hence do not have eigenfunctions and a spectrum of solutions. The relative density of arbitrary linear and arbitrary non-linear partial differential equations which have solutions, eigenfunctions and a spectrum of solutions is, I feel confident, not yet known.
I am going to hope the mathematicians one day prove that arbitrary linear partial differential equations are far more likely to have solutions, eigenfunction spectra, than do arbitrary non-linear partial differential equations. One day, we may know. If non-linear partial differential equations do not have solutions, as is already known for some nonlinear partial differential equations, then such nonlinear partial differential equations propagate no possibilities at all!
But a further point that seems likely, and may well be known, is that only linear partial differential equations will allow all possible infinitely different sums and differences of solutions to be further solutions. Such equations generate infinitely many possibilities.
By contrast, why should linear sums and differences of a set of solutions to some non-linear partial differential equation also be solutions? If not, for those, probably more rare, nonlinear partial differential equations that even have solutions, they cannot generate the infinite set of all possible sums and differences of solutions as further solutions.
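This contrast can be checked numerically on a toy example of my own (not from the post itself): one explicit finite-difference time step of the linear equation u_t = u_xx respects superposition exactly, while the same step for a nonlinear variant, u_t = u_xx + u^2, does not.

```python
def linear_step(u, dt=1e-4, dx=0.1):
    """One Euler step of the linear equation u_t = u_xx (fixed ends)."""
    return [u[i] if i in (0, len(u) - 1)
            else u[i] + dt * (u[i-1] - 2*u[i] + u[i+1]) / dx**2
            for i in range(len(u))]

def nonlinear_step(u, dt=1e-4, dx=0.1):
    """One Euler step of the nonlinear equation u_t = u_xx + u**2."""
    return [u[i] if i in (0, len(u) - 1)
            else u[i] + dt * ((u[i-1] - 2*u[i] + u[i+1]) / dx**2 + u[i]**2)
            for i in range(len(u))]

u = [0.0, 0.5, 1.0, 0.5, 0.0]
v = [0.0, 1.0, 0.5, 0.2, 0.0]
w = [a + b for a, b in zip(u, v)]

# Superposition test: does step(u + v) equal step(u) + step(v)?
lin_gap = max(abs(a - (b + c)) for a, b, c in
              zip(linear_step(w), linear_step(u), linear_step(v)))
non_gap = max(abs(a - (b + c)) for a, b, c in
              zip(nonlinear_step(w), nonlinear_step(u), nonlinear_step(v)))
```

For the linear step the gap is zero up to rounding; for the nonlinear step it is not, because (u + v)^2 differs from u^2 + v^2 by the cross term 2uv.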
Then if we can imagine "mathematizing" possibilities by arbitrary non-linear and linear partial differential equations, linear partial differential equations are special in that those that do have a spectrum of solutions also have all the possible sums and differences of those solutions as further solutions. So such linear partial differential equations are expected to generate vastly more possibilities than, I hope, non-linear partial differential equations.
Therefore, this may be a hint that the subset of linear partial differential equations that have vastly many possible solutions may have, as suggested above, the "proliferative advantage" in a Possible seething with vastly many arbitrary nonlinear, and many fewer linear, partial differential equations, in my attempt even to imagine mathematizing propagating possibilities.
Then just perhaps, the linear Schrodinger partial differential equation is "the winner" in this space of "The Possible." Its possibilities proliferate wildly and it "wins". If so, the start of quantum mechanics emerges on its own by a rough but natural abiotic natural selection.
The Schrodinger equation, however, operates on photons, electrons and the other quantum particles and degrees of freedom of the Standard Model. And here I have noted another clue above: why do the particles of the Standard Model form a GROUP, all transforming into one another? Why don't particles generate jets of ever new particles?
Could this Group property, of obvious selective advantage in a soup of possible types of particles since it recreates itself, possibly emerge on its own in the Possible?
Thus, one more preamble then my creation myth. Some years ago Walter Fontana, then at the Santa Fe Institute, did a wonderful computer experiment. Lisp is a computer language. Lisp expressions can act on Lisp expressions to yield Lisp expressions. Fontana populated his computer with 60,000 random Lisp expressions. Random pairs of expressions bumped into one another, one was chosen at random to act on the other. Fontana iterated the process for a long time.
He also created "selective conditions": if the total number of Lisp expressions in the computer pot grew beyond 60,000, he randomly threw out Lisp expressions to get back down to 60,000. So he selected for Lisp expressions that got themselves formed easily.
Here is what he found. First, he saw a very long sequence of ever new Lisp expressions, then began to see some of the same Lisp expressions. In due course, a Lisp expression arose that could copy any Lisp expression, including itself. This copier Lisp expression took over the computer pot and became the only expression. Note that the copier is a self maintaining Lisp expression.
Then Fontana tired of copiers, simply disallowed them, and reran his experiment. He got a wonderful result: a collectively autocatalytic set of Lisp expressions that each made one another. This collectively autocatalytic set acts as an "identity operator" on itself in the vast space of Lisp expressions. A second wondrous property of Fontana's collectively autocatalytic sets is that they formed a mathematical algebra. It is not a group, for it lacks an identity operator under which a Lisp expression stays the same and, more importantly, it does not have inverses. That is, expression A acting on B gives expression C, but B acting on C does not, in general, give A.
Fontana's algorithmic chemistry, or, as it was dubbed at Santa Fe, "AlChemy," demonstrates that random rules can evolve to form a self maintaining set, and in his first experiment that set could also reproduce.
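A toy stand-in for Fontana's experiment can be sketched (my own simplification, not his Lisp system): "expressions" are just type indices, a random table says what applying one type to another produces, and we iteratively discard any type not produced by some pair inside the current set. Whatever survives is a self maintaining set in the minimal sense used above: every member is made by members.

```python
import random

random.seed(0)
N = 12
# Random "algorithmic chemistry": applying type a to type b yields table[a][b].
table = [[random.randrange(N) for _ in range(N)] for _ in range(N)]

def self_maintaining_core(types):
    """Largest subset in which every member is produced by some internal pair."""
    s = set(types)
    while True:
        produced = {table[a][b] for a in s for b in s}
        kept = s & produced
        if kept == s:
            return s
        s = kept

core = self_maintaining_core(range(N))
# Every member of the core is made by applying members of the core to each other.
```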
I'm ready for my creation myth: In the Beginning was the ontologically real Possible and it was without Word and all was Void. But it was full of an interacting, seething broth of ever becoming, enabling, blocking ontologically real possibilities. On average, the number or total likelihood of these propagating and interacting possibilities stayed roughly constant.
We can, in principle, try to test whether the possibilities stayed roughly constant, using random sets of non-linear partial differential equations as a zeroth order mathematical model of this Possible. The amplitudes of these equations, and even of newly interacting nonlinear partial differential equations, would on average neither grow nor die out. Any ontologically real possibility was as likely to be more or less blocked as it was to be enabled.
Thus, the total "amount" or number of possibility stayed low. On average, not much changed in The Possible. (At least I can hope so.)
But one non-day, a set of Quantum Mechanical possibilities came forth from the Possible, that is, from Actual nothingness, and there was a sudden burst of a vast number of Possibilities due to the linearity of the Schrodinger equation describing their ontologically real behavior. And magically, the squared amplitudes summed to 1.0, so there was something conserved in the vast sea of possibilities. The proliferative advantages of the Schrodinger equation vastly outpaced all other possibilities in the Possible. (Since the electroweak and strong forces have been unified, my creation myth actually needs those partial differential field equations to generate all the burst of possibilities.)
And lo, particles forming a group came forth and even replicated, preserving exactly themselves as a Group identity, and were describable by the same Quantum Mechanics. The particles proliferated and persisted. Later they would stop proliferating.
Suddenly there was a vast set of ontologically real possibilities, the different excited states of all the particles as they transformed as a group into one another. Or it all started just with photons, quarks and gluons, and later the whole particle group formed.
If we are allowed to assign an energy to each of these modes, or to the spectrum of eigenfunctions of the partial differential equations, there was an explosion of a vast amount of energy, where the energy of a photon is proportional to its frequency.
You may object, "Where did the energy come from?" But that sudden emergence of energy from nowhere is already postulated in the Big Bang, which is obviously still magical, so I don't see why I cannot magically say that the diverse modes of the Schrodinger equation, and the photons and other particles it describes, have the energies they do. (I hope the LHC finds the needed Higgs particle to give mass and energy to the particles.) And in any case a universe with gravity and spacetime and the Standard Model does not, my physicist friends tell me, conserve energy.
Do I believe my creation myth? No. But myths can become a shared framework that later can become science.
Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
April 7, 2010
It is possible that the view of consciousness I have been exploring, that the mind-brain system is a quantum cohering-decohering-recohering system in an environment, may shed light on the mighty and still unresolved "Measurement Problem" in Quantum Mechanics. The issue I wish to address is the fact that my perception of a quantum process capable of interference phenomena can destroy that interference. How can my perception even conceivably "do this"?
As a scientist, I hold what I shall say below with high skepticism, but interest, for a natural interpretation of receiving sensory information from the world on this theory, plus Shor's theorem about quantum error correction, seems to imply that perceiving a quantum event, such as a photon's pathway through one of the two slits in the two slit experiment, can, in physical principle, partially or even perhaps completely destroy the interference pattern detected on the photodetector screen. To Be Is To Be Perceived, I shall want to say.
A fine discussion of the measurement problem can be found in the Stanford Encyclopedia of Philosophy entry "Measurement in Quantum Theory" (Google "John von Neumann" + "Quantum Measurement"). As I shall report from this source, more than 70 years after the Schrodinger equation, the attempts to formulate the measurement problem all seem to have inconsistencies or severe problems.
I will follow the above article in part. It points out that the background views are those of Locke and Kant. Locke, a realist, thought our sensory awareness of the world mirrored the real world "out there". By contrast, Kant, using the metaphor of spectacles, thought our perception of the world was via a veil of perception imposed by our minds and the conditions of knowing. We could never know the real, noumenal world. Niels Bohr took a position, given the two slit and other experiments in which human perception altered the outcome, of thinking in extended Kantian terms of our perception as somehow partially being "constitutive of the world out there".
This raises a mighty mystery, central to what I want to write about: How can our perceiving or knowing, alter what is "out there"? How can that perceiving or knowing be, as Bohr insisted, "constitutive of the world"?
More recently Zeh has said: "Heisenberg's original hope that the quantum system was disturbed during the measurement is not tenable. Instead, various systems (the observed one, the apparatus, the observer, and the environment) get entangled."
I gather that Zeh's view about the entanglement between the (conscious) observer and the observed one and the rest above, is a prominent one among physicists.
How can such entanglement arise? And might any such entanglement be a clue to consciousness itself?
I will propose what might be a new way to think about this.
The next, obligatory step in this discussion is to briefly outline the formalization of Quantum Mechanics and the measurement problem by the brilliant John von Neumann. He imagined and formalized the measurement process in two steps. First, the Schrodinger equation propagates unitarily, i.e., preserving probability, the superposition of wave functions of the quantum system, S, that the linear Schrodinger equation describes, toward the measuring system, M. In interacting with the measuring system, M, S and M become quantum entangled, in some sense one system. In his famous step two, M randomly puts all the probability that was in the superposition of all the possibilities in S into only one possibility. This is the famous "collapse of the wave function," or quantum "jump." There is no law or cause determining which of the possibilities the system jumps to, but there is a probability distribution for those jumps, the familiar quantum probabilities given by the Born rule: square the absolute value of the amplitude of each wave to get the probability that the quantum measurement jump will "land" on that now single possibility.
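Von Neumann's two steps can be caricatured in a few lines (a toy sketch with made-up amplitudes, not real dynamics): step one leaves S and M in an entangled superposition; step two picks a single branch at random with Born-rule weights, with no law selecting which branch.

```python
import random

# Step one: S, in the superposition a|0> + b|1>, interacts with apparatus M,
# yielding the entangled joint state a|0>|m0> + b|1>|m1>.  Amplitudes here
# are illustrative, chosen so |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8
entangled = {("0", "m0"): a, ("1", "m1"): b}

def collapse(state, rng=random):
    """Step two: the lawless 'jump'. One branch survives, Born-rule weighted."""
    branches = list(state)
    weights = [abs(amp) ** 2 for amp in state.values()]
    return rng.choices(branches, weights=weights)[0]
```

There is no rule inside `collapse` that determines the outcome of any single call; only the long-run frequencies (0.36 and 0.64 here) are lawful.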
It turns out that no one can formulate this model without contradictions or paradoxes, as the cited article makes clear.
I just may have something helpful to say here, but all caution is required. In my previous posts on whether law describes the processes of decoherence and recoherence in detail, I have tried to argue, using philosopher Sir Karl Popper's reasoning, as follows. In a Special Relativity setting, consider an event A with a future and past light cone separated by a region of potential simultaneity. Then consider an event B in the future light cone of A. B's past light cone encompasses all of A's past light cone, and more: regions outside A's past light cone, called "space-like separated" from A. But as Popper argued, an observer at A cannot know what lies outside of A's past light cone, hence cannot have a "law" for what happens until just before B. The observer cannot have a law in the real sense that he or she cannot know the conditions and possible processes in the space-like separated regions outside the past light cone of A, and until just before B, that may affect what happens at B. I just transfer Popper's argument to a quantum decoherent setting, with moving detectors to create a Special Relativity setting, and conclude that no function, F, at A maps the spacetime region around A into its future until just before B. If this is right, it seems there is no law for how decoherence to classicity, or classicity "for all practical purposes," happens in any specific case. If not, there is no solution to this part of the problem of Measurement in the sense of a sufficient law. This is one effort to substantiate von Neumann's postulate that there is no law for how M collapses the wave function, the famous "quantum jump" noted above. It remains to be seen whether this qualitative argument can be made rigorous beyond the above, if needed.
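The light-cone geometry that Popper's argument relies on reduces to a simple inequality. A sketch in 1+1 dimensions, in units with c = 1: event B lies in A's future light cone, and can be influenced by A, exactly when the spatial separation does not exceed the elapsed time.

```python
def in_future_light_cone(a, b, c=1.0):
    """True iff event b = (t, x) lies in the future light cone of event a."""
    (ta, xa), (tb, xb) = a, b
    return tb > ta and abs(xb - xa) <= c * (tb - ta)

A = (0.0, 0.0)
inside = (1.0, 0.5)   # time-like separated: A can affect this event
outside = (1.0, 2.0)  # space-like separated: invisible to an observer at A
```

Events like `outside` are exactly those whose conditions an observer at A cannot know, which is the gap Popper's argument exploits.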
But this does not yet deal with the entanglement between the observer, the observed system, the apparatus, and the environment. This is a major part of the "measurement problem." How might the conscious observer possibly play a role and be "constitutive of the world?"
My next step is to take a theory of the mind-brain system as a quantum cohering-decohering-and-recohering system, and possibilities as ontologically real, as a serious proposition.
On this theory, what could perception, say of a visual field, be? Bear in mind that we know about the classical behaviors of receptive fields, including edge detectors, in the behavior of retinal ganglia and visual cortical cells, so I do not want to stray far from established neuroscience.
The natural interpretation of receiving information from the outside world, say a visual field, is that the quantum information being received makes the partially decoherent mind-brain system become more coherent via some analogue of Shor's error correction theorem. (This postulate mirrors the chlorophyll molecule, which is excited to a quantum coherent state by absorbing a photon and remains in a coherent state, helped, we think, by its wrapper of antenna protein.)
I now come to an utterly striking feature of Shor's error correction theorem. The theorem states that information from a "quantum environment" can enter a quantum system and detect decoherence in a few qubits (quantum bits) of a more numerous entangled multi-qubit encoding of those fewer logical qubits. The Shor process detects the decoherence via the equivalent of a quantum measurement. Having detected decoherence in those few qubits, the algorithm can make the decohering degrees of freedom recohere: it error-corrects decohering qubits by restoring their coherence. But critically, the theorem says that the decoherence in the "system" does not disappear; instead the decoherence is transferred to the quantum environment!
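The detect-and-correct step can be illustrated in the simplest possible setting. The toy below is my classical simulation of the three-qubit bit-flip repetition code, a much-simplified stand-in for Shor's nine-qubit code (which also corrects phase errors); everything here is an illustrative sketch, not the theorem itself.

```python
# Toy classical simulation of the 3-qubit bit-flip repetition code,
# a simplified stand-in for Shor's 9-qubit code. Illustrative only.

import random

def encode(bit):
    return [bit, bit, bit]          # logical 0 -> 000, logical 1 -> 111

def noisy_channel(code, flip_prob=0.2):
    """The 'environment' may flip at most one qubit, for simplicity."""
    code = code[:]
    if random.random() < flip_prob:
        code[random.randrange(3)] ^= 1
    return code

def syndrome(code):
    """Parity checks q0^q1 and q1^q2 locate a single flipped qubit
    without reading out the logical bit itself."""
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> flipped qubit
    s = syndrome(code)
    if s in fix:
        code[fix[s]] ^= 1                      # push the error back out
    return code

random.seed(0)
for bit in (0, 1):
    for _ in range(1000):
        assert correct(noisy_channel(encode(bit))) == [bit] * 3
print("all single-flip errors detected and corrected")
```

In the full quantum version the syndrome measurement is what transfers the decoherence out of the encoded system, which is the feature the argument above leans on.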
Now let's put this remarkable statement into our tentative theory that to perceive, the mind-brain system becomes more quantum coherent via a physical analogue of Shor's theorem, as with the antenna protein and chlorophyll. The conclusion is that the outside quantum environment becomes less coherent! That is, the increased coherence of the mind-brain system would acausally make the outside quantum world decohere! But this means that for the mind-brain quantum-cohering-decohering-recohering system to perceive, the world it is perceiving can, or must, acausally become more classical, or entirely so!
The perceiving observer and the observed system can possibly, or must, become entangled via the Shor quantum error correction algorithm, taken together with the hypothesis that the mind-brain is a quantum cohering-decohering-recohering system.
"To Be (classical) Is, (can be), To Be Perceived".
This is the clue about which I write. It might be meaningless, beside the point, or very big. For what is implied is that perception of the pathway the photon took through the two slits, via Shor's transfer of decoherence from the mind-brain system to the photon, just might actually acausally induce decoherence of the photon and destroy interference bands!
This could be a clue to a major part of the Measurement problem.
Do I believe this? Certainly not yet. It is a long step from my perception causing decoherence in the "outside world" to that decoherence bearing on a specific incoming photon in the two-slit experiment. But it is striking that in Shor's theorem, a system that becomes more coherent via injection of quantum "information" transfers decoherence to, and thereby causes decoherence in, the quantum environment from which the information came.
The above may constitute the first clue we have about why and how conscious observer measurement in quantum mechanics could yield decoherence of the observed system and loss of the interference patterns that are the hallmark of quantum behavior.
It may also be the first clue about how "I," the observer, can be "constitutive of the world," as Bohr would want it.
I confess that the fact that this theory could even conceivably account for some of the outstanding problems with quantum measurement via conscious human observation somewhat increases my own sense that the theory of the mind-brain as a quantum cohering-decohering-recohering system may have real plausibility. Not only does this theory seem, in principle, to answer the philosophy-of-mind problems that have plagued us since Descartes, namely how mind can act on matter and how we can have a responsible free will, it may also help with conscious-observer-induced quantum measurement and the loss of interference in the two-slit and other experiments, a feature of the real world and a central mystery of Quantum Mechanics.
To tie this to neurobiology, even tentatively, seems both too brave and also needed. I must imagine that from photoreceptor to the central nervous system, quantum behavior can arise and be correlated. But widespread quantum entanglement is surely not ruled out in the sensory-to-brain system, and may span the brain and beyond. So I do not think quantum-cohering-decohering-recohering behavior in a neural brain is ruled out by the existence of neurons and action potentials. In a previous blog I suggested looking at neurotransmitter molecules in synaptic junctions, their post-synaptic receptors, and transmembrane channel proteins in dendrites for signs of quantum behavior.
Is this hypothesis impossible, given chlorophyll and the antenna protein? No. Rather, it may be a new hope. The hypothesis put forward in this blog has a number of implications, some of which may be testable.
Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
March 30, 2010
The famous philosopher of mind, John Searle, said, "Not only do we have no idea what consciousness is, we have no idea of what it would be like to have an idea of what consciousness is." Saint Augustine had an idea: consciousness is our Soul, and exists by direct connection to the Mind of God. The Catholic Church is willing to accept evolution of the human body, but not of the conscious mind, or soul. Descartes just posited mind, Res Cogitans, in his famous dualism of Res Extensa, i.e., matter and machines including the human body, and Res Cogitans, the conscious mind.
Modern Connectionists often regard consciousness as just another emergent property of sufficiently complex computational systems. I worry, since millions of water buckets pouring water into one another, above and below a bucket-level threshold for 1 or 0, could constitute a complex computer. I just have a hard time believing they would be conscious.
Meanwhile, these computational systems are algorithmic and typically classical, so they run afoul of my hoped-for critique in my blog "Is the Mind Algorithmic?" that the mind is not algorithmic. Worse, a classical physical system runs afoul of the question of how mind can act on matter at all, when it seems that each state of matter is sufficient for the next state of matter (but see Daniel Dennett's "Freedom Evolves" for a response). Often we are driven to one or another brute or subtle form of epiphenomenalism, in which the brain runs the show and consciousness tags along, ineffective in "acting on matter" or having a "responsible free will." Daniel Dennett in "Freedom Evolves" again takes on some of these problems, but not consciousness itself. My co-blogger Ursula Goodenough and Tom Clark take these issues on from a well-nuanced naturalist stance, but, pace Tom and Ursula, it still seems epiphenomenal to me, despite their avoidance of the term, for I cannot see in the naturalist stance what "mind" does or how it does it. And they do not say what consciousness "is." Well, no fault of theirs! Searle, of the Chinese Room argument, is right: so far, we have no idea what consciousness is, nor even what it would be like to have an idea of what consciousness is.
We have had the problem of consciousness for thousands of years, and here I am, about to offer a working hypothesis! Perhaps "fools rush in where angels fear to tread", but, frankly, I don't believe in angels and, foolish or not, I will tread.
What I will say rests on two simple, but major premises:
First, as I have argued in the past several posts ("Is the Mind Algorithmic?," "How Can Mind Act On Matter?," and "Towards A Responsible Free Will"), I think these antique problems in the philosophy of mind just might be open to elucidation given the hypothesis that the human mind-brain system is quantum coherent, decohering to classicity, and recohering partially or completely to quantum coherence. This "Poised Realm" surely cannot arise in just any physical system, for the decoherent loss of phase information is not easily recoverable. But the chlorophyll molecule, coherent for at least 7,000 femtoseconds when the normal time scale of decoherence is 1 femtosecond, or 10 to the -15 seconds, is amazing. More, it is thought that decoherence there is either prevented or reversed by the evolved antenna protein that wraps the chlorophyll and, in line with Shor's theorem about quantum error correction, may be partially correcting inevitably lost phase and amplitude information. We can test this with mutant antenna proteins.
Second, it seems to be a coherent and consistent interpretation of the Schrodinger wave equation that what is "waving" are possibilities that are ontologically real. This interpretation is radical. As I have noted, Empedocles argued that what is real in the universe are Actuals and only Actuals. Yet Aristotle, in various senses, argued for the ontological reality of both Actuals and Potentia, and Alfred North Whitehead in the early 20th Century argued similarly that ontologically real Actuals give rise to ontologically real Possibles, which give rise to Actuals, which give rise to Possibles, so that ontological reality is both Actual and Possible.
With Whitehead, I am going to assume a metaphysics in which Actuals and Possibles are ontologically real. One cannot avoid a metaphysics. For Newton and Einstein, only Actuals are ontologically real. Quantum Mechanics admits, as one interpretation, ontologically real Possibles. Therefore the step to taking seriously an ontologically real Possible may not be as great as we tend to think three centuries after Newton. There are, of course, other interpretations of Quantum Mechanics, from epistemological Possibles, to the Many Worlds interpretation of Everett, to Bohm's Implicate Order interpretation. Thus the detailed experimental verification of Quantum Mechanics allows, but does not prove, an ontologically real Possible.
Since we have not had an idea about the hard problem of consciousness for perhaps 2500 years, there is no harm in trying the two hypotheses above: Mind-Brain is quantum coherent-decohering to classicity and recohering partially or completely to coherence; and the Possibilities in the coherent or partially coherent states or realms of the mind-brain system are ontologically real.
Then my step is utterly simple: Consciousness is, really is what I just said. Consciousness arises in a very special physical mind-brain system, perhaps in a special subset of neuronal circuits, able to sustain quantum coherence, decohere to classicity (or classicity for all practical purposes to keep some physicists happy), and recohere partially or completely to quantum coherence. The Schrodinger wave equation describes ontologically real Possibilities waving. In the mind-brain system, that is consciousness.
Why in the world should one wish to undertake a radically different metaphysics? Well, we have not understood consciousness, and perhaps this will help. More, if the quantum possible is ontologically real, it may lead to new physics in many places.
The most striking evidence I will adduce for this jump is, in fact, something we all know perfectly well. Ready? Just where is the possibility of my going to the movies? Is the possibility under the refrigerator? Well, no, I looked.
The possible does not seem to be locatable in space in any precise sense.
Now, what of your actual conscious experience, say of your visual field? Is that conscious experience locatable in any precise sense in space? No. We all know, and comment to ourselves, that our awareness of the philosopher's "qualia," or experiences, is not locatable in space in any precise sense.
I am deeply struck by this parallel: neither Possibilities nor experiences, "qualia," are locatable in any precise way in space. Is it fair to think that this similarity might be an identity with respect to the mind-brain? Why not? The similarity does not, of course, prove my claim, but it is striking.
Just because I feel a bit ornery: Where are hopes and fears located? Where is what we imagine? Where is what we intuit?
If we adopt a metaphysics in which both the Actual and the Possible are ontologically real, a new world opens before us, and consciousness can be our direct experience of that real Possible in our mind-brain identity system.
There is another feature of our experience that we all know, called roughly "the stream of consciousness." In my past blog "Towards A Responsible Free Will," I noted that the state space of the human cortex, idealizing each neuron to be active, 1, or inactive, 0, is 2 raised to the 10 to the 10th power, a hyper-astronomical number vastly exceeding the 10 to the 80th particles in the known universe. I argued for a partially quantum random walk along trajectories in this state space, and identified our decisions as movement in the Poised Realm to full decoherence, so that something classical and specific happens via this acausal decoherence to classicity in parts of neurons, i.e., neurotransmitter molecules, their receptors, and transmembrane channels on dendrites and perhaps axons.
Then, in our full lives, our decisions often, if not always, reflect many of the decisions taken throughout our lives. Watch a skilled artist paint. He or she chooses where the next brush stroke and color will go based on enormous experience and a kind of "flow freedom." This seems to reflect a very large number of past decisions along trajectories in neural state space. Or tally your own stream of consciousness. These experiences fit with what I have said and am saying.
Now, can quantum mechanics reach spatially beyond the detailed coherent, partially decoherent, partially recoherent, or fully recoherent behavior of any single receptor of a neurotransmitter? Almost certainly "Yes." We have not yet discussed the phenomenon of quantum "entanglement." It is a confusing concept but, again roughly: if a single photon splits into two lower-energy photons that fly off in different directions, then, since the two were once one photon, the two are entangled. Astonishingly, as Einstein, Podolsky and Rosen realized in the 1930s, if an aspect of one of the two entangled photons is detected, say its polarization, that detection instantaneously implies a restriction on, or correlation with, the polarization of the second entangled photon. This has been amply demonstrated by amazing experiments and is now fully accepted. It is believed that no causal signal can be transferred between the entangled photons, as that would break Einstein's speed-of-light limit for causal interactions. So the correlation one sees is taken as a non-local feature of Quantum Mechanics.
Via this non-local feature of Quantum Mechanics, neurotransmitter receptors all over the brain might in principle be correlated, although not directly touching one another or interacting causally at all. Thus, Quantum Mechanics does not limit the range of entangled interactions, even to within the brain of a single individual!
Thus I raise a third, contentious issue: Many of us have had "strange" experiences where distant related experiences between people are correlated across space. We are told to write these off as unscientific. We are told we have many such experiences and only remember the ones that are strikingly correlated. Perhaps. Perhaps not. I have had such experiences associated with the hit and run death of my own daughter, Merit, at age 13. I have never before or since had such experiences. Until we understand consciousness, I am not willing to write such experiences off as mere coincidences that I remember because they were so emotionally pregnant. Maybe. Maybe not. If such experiences were partially quantum non-local events, one would expect them to be evanescent and hard to replicate. Does that prove such experiences are real? Of course not. Are they ruled out? Not in my mind-brain system.
Can one imagine turning the above ideas into a research program? Yes, certainly. I would consider starting with the chlorophyll molecule wrapped by normal and mutant antenna proteins, measure quantum coherence durations in both cases, then try to carry out, at the current outer limit of feasibility, a quantum computation of the chlorophyll plus antenna protein in a cellularly organized water environment, where that organization is due to molecular crowding by proteins and other molecules in the cell. Given Shor's theorem about quantum error correction of phase and amplitude information by injection of information into a decohering quantum system, and a model of decoherence from the chlorophyll to the antenna protein, ordered water, and an environment beyond, we can attempt to see whether the antenna protein can partially suppress decoherence or, more likely I think, induce recoherence. This is almost feasible. It may become feasible in the near future.
If one found that the antenna protein does indeed induce recoherence, we would have evidence that an evolved molecular system can induce recoherence in the face of the inevitable loss of phase and amplitude information from the open quantum "system" to an "environment". That would be a huge step. From it, one could look for signs of quantum behavior in neurotransmitter molecules, their receptors, and transmembrane channels in dendrites. I stress again that this anatomical hypothesis is obviously very tentative.
Is it conceivable that there is evidence for an ontologically real Adjacent Possible? I report a remarkable experiment noted by one of the commenters to a previous blog. The experiment consisted of subjects shown a sequence of emotionally neutral and distressing images. The experiment monitored eye blinks and pupil dilations as signatures of distress. Amazingly, with a probability of false positives reported to be 0.0009, the subjects responded just before the distressing images were shown! Do I believe the results? No. Should they be repeated? Of course. Suppose it is repeatable? What in the world could conceivably be going on?
Well, an Adjacent Possible, if Ontologically Real, would seem to be in Einstein's Special Relativity future light cone, i.e., more simply, the future. What if we actually showed that we can be conscious of a future Possibility that will become Actual in a moment? Do I believe it? Again, no, of course not, but it is a fascinating experiment and certainly can be carefully repeated.
Just a few more comments. CP (Charge-Parity) symmetry is violated, as is well established in the decays of neutral kaons; given the CPT theorem, this implies that time-reversal symmetry is violated as well. Thus, time must have a direction. We do not understand time, and no one knows what this symmetry violation is ultimately due to. But the physicists' sensible metaphysics is, with Empedocles, that only Actuals are real in the world. We are not logically bound to this metaphysics, but Newton's success has surely persuaded us: Actuals give rise to Actuals on the billiard table. It is a "crazy" thought, one about which Paul Dirac could well say "Not even wrong," but in an ontology in which the future has ontologically real possibilities, those future possibilities might give a missing physical sense to time-reversal violation and an arrow of time.
Note that we clearly experience the "flow" of time. How do we manage to experience that flow? Do we make a kind of mental movie of past Actual moments and view it? Maybe. But this movie of the flow of time does not seem to stop abruptly at the present instant. It somehow seems to flow seamlessly into the future. Why? For myself, it seems that the flow of the past flows into future possibilities in my life. Is that true for all of us? Even if it were, it would not prove that we are aware of a future ontologically real Possible. Nevertheless, might our sense of the flow of time be via awareness of immediate future real Possibilities? It is just conjecture at this point, but not necessarily impossible. The image and eye-blink experiment described above, if reconfirmed and extended, could actually test this radical idea.
I end, having proposed experimental and computational avenues to explore. I am likely to be wrong. But to my utter astonishment, in my next blog, "To Be Is To Be Perceived: A Clue To The Quantum Observer Measurement Problem," I found an unexpected use of Shor's quantum error correction theorem that seems able to yield a quantum entanglement of a quantum-cohering-decohering-recohering conscious observing mind-brain system and its quantum environment, yielding decoherence, perhaps to classicity, in the observed environment, as with conscious observation in the two-slit experiment. Just such an entanglement of the conscious observing mind and its observed quantum environment is a very common view among physicists. But we have lacked an idea of how this entanglement of observing mind and observed environment might arise physically. This surprising possibility, I think, supports the hypotheses that the mind-brain really is quantum cohering-decohering-recohering and that consciousness really may be awareness of an ontologically real Possible.
Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
March 26, 2010
In this blog I will attempt to formulate a conceivable, if scientifically still improbable, theory of a responsible free will. The problem is formidable, so any potential progress is to be welcomed, if also taken with scientific skepticism, as we try to formulate testable hypotheses. I will attempt just such hypotheses below.
Here is the problem: However we conceive of the mind-brain system, if it is deterministic, as is, for example, Newtonian classical physics, then we can have no free will at all.
So if I kill the old man in the wheelchair, it is not my responsibility. Conversely, if quantum events, such as the purely chance radioactive decay of a nucleus, yield non-determinism, hence a "free" will, and I happen by chance to kill the old man in the wheelchair, I can hardly be responsible for the chance event that led to his death.
We seem ineluctably caught on the horns of a dilemma. My co-blogger Ursula Goodenough has discussed approaches to this problem in her blog "My I Self," as has Tom Clark. Please see their efforts. Also, Daniel Dennett's "Freedom Evolves" is well worth reading.
I think I agree with neither Ursula nor Tom, for despite their nuanced efforts, I feel they remain in a stance in which mind is an epiphenomenon of a deterministic brain. I disagree with Dennett because his mind remains algorithmic and, as I argued in my blog "Is The Mind Algorithmic?," I do not think the mind is algorithmic.
The two stances I am taking place me on a track in which I hope to show, as in my last blog, "How Can Mind Act On Matter?," that it is indeed possible for a mind-brain system to be quantum coherent, decohere to classicity, and return partially or totally to coherence. Then mind has consequences for the classical matter of the brain by acausal decoherence to classicity, not by acting classically and causally on the classical matter of the brain.
The above arguments rest on plausible physics, but remain very much to be proven. To do so requires some hypothesis about where and when my hoped for quantum decoherence and recoherence may take place in the brain. I will suggest in this blog that candidate loci include the neurotransmitters in the synaptic vesicles, their post synaptic receptors, and transmembrane channels in the dendrites and axons of nerves. This is an extremely tentative hypothesis, but conceivably open to empirical test.
There are two central parts to my effort here to find conceivable grounds for a morally responsible free will: 1) The mind-brain system of 10 to the 11th neurons can perform a partially random walk among ontologically possible alternative patterns of neural activities, somewhat random at each step, yet intensely correlated as a specific historical walk over a long time, perhaps even a lifetime, of a single brain. There is, then, no law for this walk, yet taken as a whole it is not random. 2) I propose that the poised mind-brain system as a whole can tune how quantum and how classical it is in the Poised Realm between quantum coherence and decoherence to classicity, in specific regions of the brain. I will hypothesize that approaching classicity allows the poised mind-brain system, hovering between quantum and classical, to "decide," and can result in a specific classical event, such as the firing of a specific neuron or set of neurons.
In order for an approach to classicity to constitute a "decision" with specific classical consequences, I must claim that recoherence and decoherence can tune the absence or presence of such a decision and its consequences via acausal decoherence. Thus the responsible free will I seek "acts" acausally, hence is not subject to the horns of the dilemma stated above, which is posed in terms of classical physics without recourse to quantum mechanics. Yet both the quantum and the classical are true of the real world. (Some physicists would say that even the classical world retains a residue of quantum behavior.)
I begin a long step removed from the brain. Consider a familiar random walk of a classical particle on a two-dimensional lattice. At each discrete time step, the particle moves one step to the right and randomly moves one step up or down on the lattice. Over many time steps, the particle carries out the familiar random walk, so well studied in stochastic processes. Now the first thing we know is that this walk is random at each step, but over hundreds of steps or more the walk is very strongly correlated. Thus, if the walk drifts below its starting position on the lattice, it will tend to remain below that starting position for a long time, a behavior captured by the arcsine law. Thus, while each step is random, the entire walk, taken as a correlated whole, is not random.
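This contrast between step-wise randomness and whole-walk correlation is easy to see in simulation. The sketch below is my illustration, with arbitrary walk lengths and sample sizes: it estimates the arcsine-law prediction that the fraction of time a walk spends above its start piles up near 0 and 1, not near 1/2.

```python
# Fair-coin random walks: each step is random, yet whole walks spend long
# stretches on one side of the start (the arcsine law). Illustrative sketch.

import random

random.seed(1)

def walk(n):
    """Positions of an n-step +/-1 random walk starting from 0."""
    pos, path = 0, []
    for _ in range(n):
        pos += random.choice((-1, 1))
        path.append(pos)
    return path

def frac_positive(path):
    """Fraction of time the walk spends strictly above its start."""
    return sum(1 for p in path if p > 0) / len(path)

fracs = [frac_positive(walk(1000)) for _ in range(2000)]

# Arcsine law: extreme fractions (near 0 or 1) are MORE likely than
# balanced ones (near 1/2), even though every step is a fair coin flip.
extreme = sum(1 for f in fracs if f < 0.1 or f > 0.9) / len(fracs)
middle  = sum(1 for f in fracs if 0.4 < f < 0.6) / len(fracs)
print(extreme > middle)   # True: whole walks are correlated, not random
```

So randomness at each step is fully compatible with strong correlation of the walk taken as a whole, which is the point I need.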
The immediate connection to the quantum horn of the responsible free will dilemma is that, in this horn of the dilemma, one considers only a single random quantum event, say the radioactive decay of one nucleus. But similarly, if one considers only one step of a random walk, it is, indeed, entirely random. There is no long term history which can become correlated along the walk, as shown by the arcsin law.
I now take a second step on my way to the real human brain by reconsidering a "vast" chemical reaction graph with 600,000 atoms: 100,000 each of carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, CHNOPS, the atoms of organic chemistry. As I discussed in my blog "Can A Changing Adjacent Possible Change History?", the reaction graph among all possible molecules these 600,000 atoms could form is indeed vast. Consider all possible small proteins of length 200 amino acids. Each amino acid has about ten atoms, so each such protein has only about 2,000 atoms compared to our 600,000. Even for such small proteins, as I've discussed, at the Planck time scale of 10 to the -43 seconds, if the 10 to the 80th particles in the universe were doing nothing but making these proteins, it would take 10 to the 39th times the history of our universe to make all proteins of length 200 just once. The time scale to make all possible molecules of up to 600,000 CHNOPS atoms is vastly longer, so even in a closed thermodynamic system the time to equilibrium vastly exceeds the lifetime of our universe, and the "hypopopulated" reaction system will, in effect, take a random walk on this vast reaction graph, where I am quite confident that fluctuations will not damp out.
This intuition that fluctuations will not damp out is readily testable using the chemical master equation, simulated by the well-known Gillespie algorithm, and so can be studied now.
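A minimal version of the Gillespie stochastic simulation algorithm gives the flavor. The two-reaction network, species names, and rate constants below are hypothetical placeholders of my own, chosen only to echo the kind of single-copy "frontier" chemistry discussed here; they are not a model of the 600,000-atom system.

```python
# Minimal Gillespie stochastic simulation algorithm (SSA) sketch for a tiny
# reaction network with single-copy counts, where fluctuations dominate.
# Species and rate constants are hypothetical placeholders.

import math
import random

def gillespie(counts, reactions, t_max):
    """counts: dict species -> copy number.
    reactions: list of (rate_constant, reactant_names, product_names)."""
    t, trajectory = 0.0, [(0.0, dict(counts))]
    while t < t_max:
        # Propensity of each reaction given current copy numbers
        props = [k * math.prod(counts[s] for s in rs)
                 for k, rs, _ in reactions]
        total = sum(props)
        if total == 0:
            break                              # nothing left can fire
        t += random.expovariate(total)         # waiting time to next event
        _, rs, ps = random.choices(reactions, weights=props)[0]
        for s in rs:
            counts[s] -= 1
        for s in ps:
            counts[s] += 1
        trajectory.append((t, dict(counts)))
    return trajectory

random.seed(2)
reactions = [(1.0, ("A", "B"), ("C", "D")),    # A + B -> C + D
             (0.5, ("D",), ("B",))]            # D -> B (assumed recycling)
traj = gillespie({"A": 5, "B": 1, "C": 0, "D": 0}, reactions, t_max=50.0)
print(traj[-1][1]["C"] > 0)    # True: C eventually appears by chance events
```

With copy numbers this small, each trajectory is a jagged, history-dependent walk; averaging over runs would recover the master-equation behavior.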
For my conceptual purposes, imagine a region on this reaction graph in which molecules that are adjacent to one another by legal reactions form a coherent "blob" on the graph, with each molecule present in only a single copy. At the edge of the blob lies the Adjacent Possible on this vast graph. Consider a specific reaction at the frontier of the blob: A + B, present on the graph, can react to form C + D in the Adjacent Possible. This reaction is "shifted to the left"; that is, the substrates exist, a single copy each, while the products do not yet exist. So A + B may, or may not, in the next short time form C + D by an acausal quantum process. But C + D do not yet exist classically, so they cannot form A + B.
The consequence is that, in due course, C + D will virtually certainly be formed by a quantum event.
The next critical step, completely implausible for the reaction system I discuss but useful to make a conceptual point, is to imagine that the classical B molecule can recohere partially (or completely) to a quantum state, as Hans Briegel of the Physics Department at the University of Innsbruck says is possible. (I do not think such recoherence is physically possible in the case above, since lost phase information cannot easily be recovered, but I will argue that it may well be possible in the mind-brain system below.) All I need for the point I want to make in this hypothetical example is that if B can recohere sufficiently that it cannot participate in the reaction in which A + B form C + D, then that reaction is blocked by the hoped-for recoherence of B.
Notice that I am supposing a new means to allow or disallow a chemical reaction, or ligand binding event: if one of the partners can recohere such that the event can no longer occur, it is blocked. This is a new idea as far as I know, and may be testable.
Before we ask if it is testable, let's ask if, given known physics, it is possible. A quantum chemist would treat B as a quantum object and solve the time-dependent Schrodinger equation for the molecule. But to do so, the quantum chemist typically idealizes the locations and number, N, of the nuclei as fixed in space, and solves for the behavior of the electron cloud. In other words, the quantum chemist assumes fixed classical nuclei for the B molecule and worries about the electron cloud and the formation of bonds among the N nuclei to form the B molecule. But is that idealization realistic? Well, perhaps not. First, the nuclei move in three-dimensional space. More deeply, my physicist friends inform me that in quantum field theory even the number of nuclei in a spatial region is indeterminate.
If even the number of nuclei is indeterminate in quantum field theory, part of a hoped for wave equation of the universe, and if recoherence can reach back to quantum field theory, then the "existence" of the classical B molecule is "gone". The A + B reaction to C + D may, in fact, be blocked by the recoherence process. Ultimately, this should be testable experimentally.
As noted above, it seems virtually impossible for such recoherence to happen on my vast reaction graph. Lost phase information cannot be recovered easily. But it may not be impossible for the mind-brain system. Recall that the chlorophyll molecule is surrounded by an evolved antenna protein and is quantum coherent for at least 7000 femtoseconds at 77K, when decoherence within 1 femtosecond, 10 to the - 15 seconds, is a typical decoherence time. It is thought that the antenna protein either prevents decoherence, or, in analogy with Shor's quantum error correction theorem, where injection of phase and amplitude information can yield recoherence in decohering degrees of freedom, the antenna protein may interact with the quantum coherent but decohering chlorophyll molecule to yield recoherence to some degree with respect to phase and amplitude information.
Now let's turn to the human brain, with its 10 to the 11th nerve cells. As an underestimate, assume that only 10% of these are in the cortex, hence 10 to the 10th neurons. Assume for consideration that we can idealize any neuron to be firing, 1, or not firing, 0. Then the possible states of the cortex number 2 raised to the 10 to the 10th power! This is hyper-astronomical. There are only 10 to the 80th particles in the known universe, infinitesimal compared to the possible states of the binarized cortex.
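The arithmetic behind "hyper-astronomical" is easiest in logarithms, since the number itself cannot be written out, let alone stored. A quick sketch:

```python
# Comparing 2**(10**10) with 10**80 in log10 terms, since the former
# overflows any direct representation. Illustrative arithmetic only.

import math

neurons_cortex = 10**10                          # binarized cortical neurons
log10_states = neurons_cortex * math.log10(2)    # log10 of 2**(10**10)

print(round(log10_states / 1e9, 2))  # ~3.01 billion decimal DIGITS
print(log10_states > 80)             # True: dwarfs the 10**80 particles
```

That is, the state count has about three billion decimal digits, while the particle count of the known universe has a mere 81.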
Now consider neural processes over a lifetime in this cortex. Before we enter quantum considerations, let this process be classical but a familiar noise driven stochastic process. Then, like the random walk above which may be random at each step, we can anticipate that over a long history the walk in the vast state space of the neural system is non-random yet lawless.
I now add the quantum decoherence-recoherence ingredients to this conceptual and neural model. The neuroanatomy of the brain has axons of neurons ending in synaptic junctions. In the presynaptic region of the incoming neuron are vesicles filled with different numbers of specific neurotransmitter molecules. The axon synapses on the dendrites of the postsynaptic neuron. Transmission of activity from the upstream neuron to the downstream neuron is achieved by release of the neurotransmitter molecules, which diffuse across the synaptic cleft and are bound by receptors on the postsynaptic membrane of the downstream neuron. In turn, this binding induces a small voltage change in the postsynaptic membrane. The summation of these "depolarizations", achieved via transmembrane channel proteins allowing ion passage through the membrane, is accumulated at the axon hillock of the nerve cell and, if it exceeds a "threshold", the neuron initiates an action potential which propagates down its axon.
Now let me suppose that the recoherence of neurotransmitter molecules in the presynaptic vesicles can render them incapable of binding to the postsynaptic receptor molecules, and/or that recoherence of the postsynaptic receptor can render it incapable of binding the neurotransmitter, and/or that recoherence of the transmembrane ion channel proteins can render them incapable of allowing ion flux. In the hypothesized case, recoherence has blocked, or tended to block, activation of the specific downstream neuron.
Conversely, decoherence of these molecules to classicity permits activation of specific downstream neurons.
Then we can conceive of "deciding" by the mind-brain system as a controlled acausal decoherence at specific loci in the brain that enables specific classical events to happen.
In the hypothesized quantum decohering-recohering brain system we are considering, we have at our disposal for a responsible free will only the above. But that may well suffice. The now quantum random walk in the vast state space of neural activities may be random at any single step, but it is both true that the walk as a whole is highly correlated, non-random but non-lawful, and that decoherence at specific sites in one or more neurons and dendrites can acausally yield specific neural activities in the brain. The mind-brain system thereby acausally decides and acts to make specific classical events happen. Therefore, and critically, the "walk" in the space of neural activities is no longer merely random and correlated. Our decisions change specific neural activities in specific ways and change the walk.
I think we have here a conceptual and hopefully a physical foundation for a responsible free will, no longer bound by the familiar, purely classical, conception of the mind-brain system, nor the specter of mere quantum chance.
Originally published on NPR's 13.7 Cosmos and Culture Blog. See original entry for comments.
March 15, 2010
Since Descartes, three central problems have plagued our thinking about mind and matter: 1) How can mind act on matter? It seems that the current state of the brain is sufficient for the next state of the brain, so there is nothing for mind to do. And there seems no way for mind to act on matter anyway. 2) How can we have a morally responsible free will? 3) The "hard problem", what is consciousness, after all? Three and a half centuries of trying have produced no satisfactory answers to any of these questions. Indeed, John Searle, a famous philosopher of mind, has famously quipped, "Not only do we have no idea what consciousness IS, we have no idea what it would even be like to have an idea what consciousness is!".
Notwithstanding three and a half centuries of difficulty, in the next three blogs I will propose what I take to be the start of answers to all three questions. I follow Roger Penrose, who first suggested in his "The Emperor's New Mind" that consciousness has something to do with quantum mechanics. However, Penrose ties his attempt to quantum gravity.
I will take a radically different approach. To issue 1), I will propose in this blog that mind and brain are "identical", but that the total system is both quantum and classical; more, that it is a system poised in a realm between persistent acausal decoherence from quantum coherence, partially or completely, to "classicity", and recoherence, partially or completely, to quantum coherence. Thus quantum mind can have consequences for classical matter acausally, without acting causally on matter.
In principle, this answers "how mind can act on matter". It does not act causally on matter, but can do so acausally, by acausally losing phase information from the mind-brain system to the environment, hence decohering from quantum "mere possibilities" to classical matter. I will present below the evidence, theoretical and now experimental, that supports this possible poised state of the mind-brain system, reversibly decohering and recohering from quantum to classical and back.
In the next blog, I will confront the familiar dilemma: If the mind-brain system is deterministic, we have no free will at all. If the mind-brain system is sometimes quantum, hence purely random on any interpretation of quantum mechanics, then we are not morally responsible for our random actions. I will show that a partially decoherent-recoherent mind-brain system is almost certainly lawless in its detailed behavior, that this behavior can nonetheless be highly non-random in its historical "becoming", and that there is an interpretation of my hypothesis for the poised mind-brain system which allows the system to "decide" and acausally yield specific classical behaviors. This will be my basis for a morally responsible free will. I emphasize that at this stage it is more important to conceive of a possible answer than to be right in scientific detail.
In the third blog I will make the concrete suggestion that consciousness, the hard problem, can be solved if we identify this poised quantum decohering-recohering mind-brain system with consciousness itself. I will propose speculative, but possible and ultimately testable, identifications of this hypothesis with specific neural correlates comprised of the neurotransmitter molecules in synaptic vesicles, and/or the post-synaptic receptors for those neurotransmitters, and/or transmembrane ion and other channels on dendrites and nerve axons. Again, any possible hypothesis, particularly a testable one, can hope to be more helpful than Searle's quip above.
As a scientist, I think my hypothesis, at present, is "just possible", but certainly improbable, ultimately testable, and may be the most hopeful and investigable set of hypotheses we now have.
The last paragraphs are brave language. Here is why I do not like the reigning best hypothesis, which derives from the view of the mind as algorithmic: I do not think the mind is algorithmic! I discussed this in my last blog, "Is The Human Mind Algorithmic?"
A cogent discussion of the algorithmic view is put forward by Daniel Dennett in "Freedom Evolves", a fine book. Dennett rightly notes that John Conway's famous Game of Life, played on a large square lattice that can grow indefinitely in size, as specified in a moment, has the following properties: 1) The cellular automaton rules of the Game of Life define for each square on the lattice its next state as a definite, defined logical, or Boolean, function of the eight neighbors of that square and the square itself. All squares have the same logical rule. Each square can take the values 1 or 0, "white" or "black". We therefore know completely the "physics" of this completely deterministic system, in which all squares update their values, 1 or 0, simultaneously each time a discrete imaginary clock ticks. 2) Conway showed that the Game of Life could give rise to patterns of black and white squares, called gliders, which propagate across the lattice. The speed of the gliders moving across the lattice constitutes a "speed of light" in this physics, and for our purposes the size of the ever finite lattice need merely increase as fast or faster than this speed of light for the remaining results. Next, glider "guns" can emit gliders. Finally, and wonderfully, Conway showed that these gliders and guns could constitute a universal Turing machine!
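Both the deterministic local rule and the emergent glider can be seen in a few lines of Python; the grid size, step count, and wrap-around boundary are my own illustrative choices:

```python
# One synchronous update of Conway's Game of Life on an n-by-n wrapped grid.
# `cells` is the set of live (x, y) squares; the rule is purely local.
def step(cells, n):
    new = set()
    for x in range(n):
        for y in range(n):
            live = sum(((x + dx) % n, (y + dy) % n) in cells
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0))
            # Birth on exactly 3 live neighbors; survival on 2 or 3.
            if live == 3 or (live == 2 and (x, y) in cells):
                new.add((x, y))
    return new

# The standard glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):            # 4 steps move this glider one cell diagonally
    cells = step(cells, 20)

# Same shape, translated by (1, 1): an "emergent" moving object from a
# fixed, local, deterministic rule.
print(cells == {(x + 1, y + 1) for x, y in glider})  # True
```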
Now, recall the famous "Halting problem" in computer science. Here the issue is that we want an algorithm which can examine any other algorithm which is to compute an answer, print it and halt. Our hoped-for examining algorithm is asked to say whether the examined algorithm will halt in finite time as it works on its infinite Turing machine tape. Turing famously proved that the Halting problem is formally undecidable! That is, we cannot have a formal procedure, that is, an examining algorithm, which can say ahead of time what the examined algorithm will do, halt or not halt. The essential result is that, although we know completely the underlying Boolean function, or "physics", of the Game of Life on the growing two dimensional lattice, we cannot say ahead of time what the behavior of the ever-growing lattice will be.
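The undecidability itself cannot be demonstrated by running code, but the practical situation it leaves us in can: all a mechanical examiner can do is run the examined program for a bounded number of steps and report "halted" or "don't know". A Python sketch, with two hypothetical example programs written as generators so we can step them:

```python
def halts_within(program, arg, max_steps):
    """Run program(arg) as a step-limited generator. Return True if it
    finishes within max_steps; None (unknown) if the budget runs out."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True
    return None  # out of budget: halting status genuinely unknown

def countdown(n):          # halts for any n >= 0
    while n > 0:
        n -= 1
        yield

def loop_forever(n):       # never halts
    while True:
        yield

print(halts_within(countdown, 5, 100))     # True
print(halts_within(loop_forever, 5, 100))  # None -- we cannot decide
```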
The above remarkable result, firmly proven, is the poster child for "emergent" behavior. And indeed it is emergent behavior: once we have the above, the behavior on the growing lattice cannot be deduced ahead of time from the underlying deterministic physics. Wonderful result.
Nevertheless, reducible to the underlying physics of the Game of Life or not, Dennett's Turing machine glider and glider gun system remains perfectly algorithmic!
What is wrong with this beautiful emergence as a model for mind?
I see an enormous pair of problems, despite Dennett's attempt:
1) As I argued in my last blog, "Is The Mind Algorithmic?", the mind is not algorithmic. I refer the reader to that blog, but briefly: Can you finitely name all the uses of a screw driver?
2) And is there a mathematics in which all those uses are theorems? I surely do not see such a mathematics. And if not, it is not true that everything that unfolds in the universe is mathematizable. If you object that here we assume a responsible free will for the possible uses of a screw driver, well, yes I do. But the same is true for the emergence of Darwinian exaptations in the biosphere 2 billion years ago, where we need no appeal to conscious thought at all. That unfolding becoming of the biosphere, I claim, cannot be finitely, or even denumerably infinitely, prestated, nor can it be mathematized. The becoming by Darwinian exaptations cannot be denumerably infinitely prestated because to do so we would need an effective procedure to create an ordering relation, like the integers, listing the first, second, third... to infinity possible preadaptations, but in "Breaking the Galilean Spell" I think we agreed that we cannot prestate all the possible Darwinian exaptations into the Adjacent Possible of the biosphere. It seems we can have no ordered listing of those unnamable possibilities.
With respect to 1) above, can you finitely, or denumerably infinitely, name all the possible uses of a screw driver? I think you cannot. Watch: the screw driver can be used to screw in screws, pop open paint can lids, wedge doors open, wedge doors shut, peel putty, stab an assailant, spear a fish when tied to the end of a bamboo pole, chop down a (small) tree when used with a rock, or be leaned against a wall with the flat side of its tip at right angles to the wall, propping up a square piece of plywood and keeping it from falling on a valuable pot...
Notice two features of the above. First, relational features matter in most cases, for example the angle of the screw driver to the wall, and its being turned properly to support the square plywood board. To prestate all the possible relational features of the screw driver with all entities in the universe, from, say, atoms to molecules to plywood boards and bricks to fish, from two to many "things" at a time, and simultaneously to prestate all the uses or purposes to which the screw driver can be put, seems impossible. Certainly if spacetime is taken as continuous, the set of such features is second order infinite, and no finite, or even denumerably infinite, list is possible, for, again, no ordering relation among the above, a first, second, third use of a screw driver, can be devised. How would we create such an ordering?
As I argued in the previous blog, what computer scientists do is list a finite set of "affordances" of screw drivers, "is a", "does a", "has a", "needs a", and do neat things. But if the weird use to which we, or James Bond, or MacGyver wish to put the screw driver is not deducible from those affordances, then there is no algorithmic way to get to such uses. My claim is that not all possible relational features and uses of screw drivers can be deduced from any finite set of affordances, nor is there any mathematics that can entail all such relational uses, but we humans, in particular Bond or MacGyver in a pinch, find the inventive uses all the time. The mind is not algorithmic.
If not, we have, as yet, no theory of mind.
I think a deep issue arises here: I am claiming that something is impossible, but have not proved it. Is it possible to prove such a claim about no ordering being possible? I don't know how. And I will take two lines with respect to this.
First, it may be what philosophers call a "category mistake" to seek a mathematical proof from that which is not open to mathematical proof. Maybe we just need to think.
Second, I have suggested, and here again suggest, that the co-evolution of the quantum-classical system in its environment is beyond law, hence cannot be algorithmic. In particular, for a quantum system in a quantum environment, when the former's quantum degrees of freedom are roughly uniformly partially decoherent, the Schrodinger wave equation cannot be propagated unitarily to preserve probability for all the quantum possibilities. While the Schrodinger equation for a fully coherent quantum system is lawful, we simply do not know if a quantum system losing phase information to its environment is describable by a law. If such a system is not describable by law, then such processes are real and non-algorithmic, so not all that unfolds is algorithmic.
Having attempted again to counter the best present view of the mind as algorithmic, I now proceed to my own, radically different approach based on a quantum cohering, decohering, recohering mind-brain system.
Let's look again at Cartesian dualism, Res Extensa and Res Cogitans. Res Extensa according to Descartes, Newton and classical physics is a deterministic dynamical system. Then the state of the brain, Res Extensa, is sufficient for the next state of the brain, and there is nothing for mind to do. Worse, there is no way for Res Cogitans to do that something to the brain.
The situation is not helped by the mind-brain identity theory, for again philosophers do ask, and we can ask: if the brain-meat of the brain is a sufficient deterministic condition for the next state of the brain, there is nothing for mind to do. Worse, the "mind", the conscious part of the mind-brain system, has no causal way to act on the brain!
Claims to stochastic equations for the dynamics of the brain will not help us, for these describe merely "epistemological" stochasticity, as in chaos, where we cannot measure initial conditions accurately enough. The system is still deterministic ontologically.
But, critically, the above dilemma is purely stated in classical physics.
The world is not limited to classical physics, quantum mechanics also applies.
Now let me state my hypothesis again, then defend it: the mind-brain system is quantum coherent, persistently decoheres to classicity, and recoheres again to quantum coherence. Since acausal decoherence takes an interval of time of at least a femtosecond, the mind-brain system can exist in a poised realm, see previous post, in which most or all of its degrees of freedom are partially decoherent as it acausally loses phase information to the environment.
Then the mind does not act causally on the meat of the brain. Rather, the mind decoheres acausally to a classical "meat" brain state that has consequences for the classical aspects of the mind-brain system. Then mind can act on the brain, but does so acausally via decoherence.
Fine, I have suggested a solution to the problem of how mind can "act on" brain acausally. But the mind keeps acting on brain, so I am forced to assume that the classical, or classical "for all practical purposes" (as some physicists say), brain can recohere partially or completely to the coherent quantum state.
In the remainder of this blog I will discuss evidence supporting decoherence and more tentative evidence supporting recoherence. In my next blog, on a possible view of a responsible free will, I will very tentatively identify the sites of such coherence, decoherence and recoherence as neurotransmitter molecules in synaptic vesicles, their post-synaptic receptors, and transmembrane channels in dendrites and possibly axons.
Decoherence is well established and is the bane of people trying to construct quantum computers. Physicists differ on whether decoherence can proceed completely to classicity, or only to classicity "for all practical purposes", hence on whether classical behavior always maintains some residue of quantum aspects.
Four lines of evidence support recoherence:
Shor's quantum error correction theorem establishes mathematically that injection of "information" about quantum phases (and amplitudes), can correct a quantum computer such that its quantum degrees of freedom return to coherence. This, of course, requires that the quantum system be distinguished from the quantum environment that supplies the phase and amplitude information, hence that the quantum system be "open".
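For intuition only, here is the purely classical skeleton of the idea, a three-copy repetition code with majority vote; Shor's actual construction is quantum and must also protect phase information, which has no classical counterpart:

```python
import random

def encode(bit):
    return [bit, bit, bit]          # three redundant copies of one bit

def corrupt(word):
    word = list(word)
    word[random.randrange(3)] ^= 1  # "environment" flips one copy at random
    return word

def correct(word):
    return int(sum(word) >= 2)      # majority vote restores the original bit

# Any single flip is always corrected, because the injected redundancy
# carries enough information to identify and undo the error.
random.seed(1)
ok = all(correct(corrupt(encode(b))) == b
         for b in (0, 1) for _ in range(100))
print(ok)  # True
```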
Hans Briegel in the Physics Department at the University of Innsbruck, Austria, has published two theoretical papers claiming that a molecule can pass repeatedly from a quantum coherent "entangled" state to a classical state and back.
Recently, as noted by a commenter on my previous blog about a Poised Realm between quantum coherence and classical behavior, it has been proposed that a poised resonant state that is partially decoherent may account for high-temperature superconductivity.
Most striking, there is direct experimental evidence based on the familiar chlorophyll molecule, which carries out photosynthesis, and the "antenna" protein which is wrapped around the chlorophyll molecule. The normal time scale for decoherence is 10 to the -15 seconds, a femtosecond. Experimenters have shown that coherence in chlorophyll at 77K, where K is Kelvin, absolute temperature (room temperature is 300K), can last at least 7000 femtoseconds, or 7 picoseconds. This discovery is leading to a new field of quantum biology. Now it is thought that the antenna protein somehow "suppresses" decoherence. This is directly experimentally testable using mutants of the antenna protein. But there is this further thought: chlorophyll, as a quantum system, is losing phase information to its environment. It is hard to imagine how the antenna protein can prevent this loss of phase information. Instead, it seems equally or more plausible that the antenna protein is acting, like Shor's quantum error correction algorithm above, to inject phase and amplitude information into chlorophyll. No one knows, so I will assume that the antenna protein injects phase and amplitude information into the chlorophyll molecule. Then it is hard to see how that injected information can exactly match the phase information chlorophyll is losing to the environment. If not, the temporal behavior of chlorophyll as a molecule in its environment, in which many or all quantum degrees of freedom of chlorophyll are partly decoherent, cannot be described by the Schrodinger equation, which only unitarily propagates all the quantum possibilities together with all their phase and amplitude information. In short, chlorophyll is likely to be an open system gaining and losing phase and amplitude information all the time from and to its environment, and therefore it is not clear that a law describes its behavior.
In particular, we do not know how that phase information is being lost to the environment, either in general, or due to Popper's argument in a Special Relativity setting as described in a previous blog.
In summary, it is plausible that both quantum decoherence and recoherence partially or totally to a quantum coherent state can happen, perhaps lawlessly from a quantum system to its environment - the universe.
Then my proposal is that the mind-brain system is such a system: quantum coherent, decohering to classicity, perhaps only for all practical purposes, and recohering, in which mind has acausal consequences for classical brain matter without acting causally on brain matter.
How mind acts on matter has a proposed answer.
Originally published on NPR's 13.7 Cosmos and Culture Blog. See original entry for comments.
March 15, 2010
My purpose in this blog is to argue that the human mind is not algorithmic. This is a contentious issue, for one of the received views in Cognitive Science and Neuroscience is that the mind is, and must be, algorithmic.
I will present what might be called "the Standard View" of the mind as algorithmic, although calling it "the standard view" may be an overstatement, to make it clear what I shall be arguing against.
First, we need a clear statement of what an algorithm actually is. This definition can be made in terms of the famous Turing machine, the basis of all contemporary, physically classical, computers.
Turing wished to make a "formal machine" that was to be both a formalization of a Cartesian machine, yet also a formalization of human calculation. Indeed, his language is full of "doing", "moving" and so forth.
A general Turing machine consists of an infinite tape with squares on it. In each square is either nothing, that is, the square is blank, or one of a finite set of discrete, definite and different symbols. In the familiar case, all can be done with the symbols "1" and "0".
These symbols can be placed on the tape in specific positions initially. In addition, the machine has a "head" which has a discrete number of distinct and definitely different states, encoded by an alphabet of symbols that constitute its "internal states". At each moment of the computation, the head reads the square of the tape below it, blank, 1, or 0, and carries out two distinct, definite operations. Given its internal state and the read input from the tape, the machine stays where it is, or moves one square to the left or right, and writes a symbol, blank, 0 or 1 on the square below it. In addition, the head uses both its internal state and the symbol it read from the tape to move to a definite, discretely different, internal state.
This is an informal definition of the universal Turing machine. The initial distribution of symbols on the tape serves as input data to the algorithm carried out by the Turing machine, and further symbols on the tape can constitute the computer program that the machine uses to carry out its operations.
All contemporary computers are based on the Turing machine, which can be universal in the precise sense that all computable functions can be computed by the universal Turing machine, given input data and a program on the tape.
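To make the machine concrete, here is a minimal Turing machine simulator in Python; the example rule table, which increments a binary number written most significant bit first, is my own illustrative choice:

```python
# A minimal Turing machine simulator: finite states, finite symbols,
# and exactly one definite (write, move, next-state) action per
# (state, symbol) pair, as in the informal definition above.
def run_tm(tape, rules, state="right", head=0, blank=" "):
    t = dict(enumerate(tape))
    while state != "halt":
        sym = t.get(head, blank)
        write, move, state = rules[(state, sym)]
        t[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(t), max(t)
    return "".join(t.get(i, blank) for i in range(lo, hi + 1)).strip()

rules = {
    # scan right to the end of the number
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", " "): (" ", "L", "carry"),
    # add 1 with carry, moving left
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "halt"),
    ("carry", " "): ("1", "N", "halt"),
}

print(run_tm("1011", rules))  # → "1100"  (binary 11 + 1 = 12)
```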
The triumph of this conception is easy to see in the computerized world we now live in.
I want to pause to note the idea of "definiteness" emphasized by Turing. There is no ambiguity at all in the Turing machine. Given the internal state of the machine and a discrete definite symbol read from the tape, the machine can do only and exactly one definite thing, as described above.
Now, what is the presumed connection to the human mind? It comes in three steps. First, early in the 20th Century, philosophers such as Bertrand Russell, and the young Wittgenstein, sought to place knowledge of the world on the firmest possible empirical foundation. We could be wrong that a chair is in the room, they reasoned, but we could hardly be wrong that we experienced what seemed to be a chair. This was formally simplified to "atomic propositions" about "sense data", such as "red here", "A flat now", or "hard here". There were two fundamental ideas in this effort. First, one could not be wrong about one's own sense data. Second, and critically, all empirical statements about the external world were to be built up by logical connections among such sense data statements. Thus "there is a chair in the room" was to be a formal shorthand, true if and only if a set of sense data statements, linked together with AND, OR, and other logical connectors, along with "quantifiers" such as "There exists" and "For All", were true as the logical combination of sense data statements. In short, the idea was to reduce statements about the external world to a finite, necessary and sufficient set of logically interconnected sense data statements. Note again the definiteness: a logical AND, as in A AND B, is definite and is true if and only if both the A and B statements are true.
But did this hopeful attempt work? No.
In the ensuing two decades it was realized that it was impossible to set the statement, "There is a chair in the room" into one to one correspondence with a finite set of logically connected sense data statements. The technical arguments are long. But they drove Wittgenstein, who wrote the culmination of this effort in his famous "Tractatus" as a young man, to reflect deeply and return, older and puzzled, to work out why the whole effort was so much bunk. To get the flavor of what Wittgenstein said, we need his notion of a "language game". It will be critical to the issue of whether mind is algorithmic.
Consider, said Wittgenstein, the jury foreman who rises and says the words, "We find Jones guilty of murder." Now, said Wittgenstein, notice that "found guilty" assumes that we know the meanings of an interdefined circle of words such as "evidence", "admissible evidence", "legally responsible", "guilty under the law", "innocent", and so forth. Now, he argues, can we "reduce" such language to language about ordinary human actions? Suppose you do not know the meanings of the legal terms I used above; you enter the courtroom and hear and see the following: a man or woman stands up and says the words, "We find Jones guilty of murder." You do not understand what has been said. More, argued Wittgenstein, there is no way to reduce the statement "We find Jones guilty of murder" to a logically necessary and sufficient set of statements about ordinary, non-legal-termed, human actions. We cannot replace what the foreman says by a combination of statements about ordinary human actions! The legal set of words constitutes what Wittgenstein came to call a "language game."
Similarly, suppose we monitored sound vibrations in the courtroom, and made a pixel movie of the events. Could we find a necessary and sufficient set of sound vibrations and pixel configurations that is logically true if and only if "We find Jones guilty" is true? No. Human action statements are a different language game than those reporting pixels and sound vibrations.
Bear language games in mind as we go forward, for we naively think we learn some fundamental set of concepts as children, and that all other concepts are somehow logical constructions, again by definite logical rules, from the initial set of concepts that constitute a "basement language." We don't. We learn legal language on its own level. It cannot be stated in human action language alone. Philosophers have come to agree that there is no basement language.
Now, setting Wittgenstein aside, whose towering "Philosophical Investigations" came after Turing, who came after the "Tractatus", back to the steps to the human mind.
In 1943, Warren McCulloch and Walter Pitts published a seminal paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity". What these authors showed is that any logical statement of the kind Russell wanted could be computed by an appropriate acyclic, feedforward network of "formal neurons". A formal neuron is a device capable of only two states, 1 and 0, which has inputs from other "upstream" neurons in the feedforward network and computes a logical, or Boolean, function on its inputs. So an infinite formal neural network was as powerful as a universal Turing machine.
The connection to the mind came by identifying the 1 or 0 states of neurons with single sense data statements, "A flat now", "hard here", or logical combinations of them. Then the network could calculate any definite logical combination of the initial set of sense data statements, and if these represented the physical "firing" or "non-firing" states of specific real neurons, the identification was complete. The McCulloch-Pitts paper was enormously influential, and a founding document in early cybernetics.
It is but a step to the mind and hoped-for neuroscience. McCulloch and Pitts restricted themselves to feedforward networks, but one could consider networks with feedback loops. In general, such networks are a bit like a mountainous region with lakes in valleys, and draining streams in their drainage basins. The "state" of a formal neural network is the current 1 or 0 activities of all its N neurons. In the simplest case, a central imaginary clock ticks off discrete time moments. At each moment, each neuron looks at the activities, 1 or 0, of its own inputs, consults the logical, Boolean, function governing its behavior, and assumes the definite next 1 or 0 value. So the network passes from state to state along what is called a trajectory, over time steps. But the system has a finite number of states, 2 to the Nth power, so eventually the network hits a state it was in before. Since the network is deterministic, it thereafter cycles repeatedly around a "state cycle", called mathematically "an attractor". The attractor is the analogue of the lake. The different sequences of states, that is, different trajectories, that flow to the same state cycle attractor are like the streams flowing to the same lake. The lake and streams constitute the "basin of attraction" of the attractor.
Now, a network of formal neurons with feedback loops can have many attractors. The next step has been to identify attractors with things like "memories", or "classifications" that the network carries out. All states in one basin of attraction are co-classified as the same. This is the heart of Connectionism in computer science, cognitive science and neuroscience, where, in the last case, we identify a neuron firing or not as 1 or 0.
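A toy Boolean network makes the state cycles and basins concrete. The three-neuron rules below are arbitrary illustrative choices; any deterministic finite-state system must eventually revisit a state and then cycle:

```python
from itertools import product

# Three formal neurons updating synchronously, each by a fixed Boolean
# rule on the current state (rules chosen arbitrarily for illustration).
def update(state):
    a, b, c = state
    return (b, a and c, b or c)

# Follow the trajectory until a state repeats; the repeating segment is
# the state-cycle attractor (the "lake").
def attractor(state):
    seen = []
    while state not in seen:
        seen.append(state)
        state = update(state)
    return seen[seen.index(state):]

# Group all 2^3 start states by the attractor they drain into: each
# group is a basin of attraction (the "streams" plus the "lake").
basins = {}
for s in product([False, True], repeat=3):
    basins.setdefault(frozenset(attractor(s)), []).append(s)

# (cycle length, basin size) for each attractor:
print(sorted((len(c), len(b)) for c, b in basins.items()))
# → [(1, 1), (1, 1), (1, 2), (2, 4)]
```

Here the eight states fall into four attractors: three fixed points and one two-state cycle, the latter draining half the state space.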
Then, if the mind is algorithmic, most workers would say it is something like the sketch above, with variations for asynchronous updating of the formal neurons, or stochastic noise in the behavior of the formal neurons so they sometimes do the "wrong" thing, given their Boolean function and the activities of their inputs.
This is not far from one dominant view. Will it work for the human mind? I think not. We should already be deeply suspicious given Wittgenstein's language games. All is definite in the above computations, and there is no way to get from a formal network categorizing normal human behavior, if that could be done, to categorizing the outcomes of legal proceedings. One language, the legal one, cannot be reduced to the human action level. Language games are not reducible, and no one has found a means to implement diverse language games in a network of formal neurons.
Another view of the non-algorithmic character of the human mind comes from trying to do it. For example, computer scientists have invented the idea of "affordances" for object oriented programming. Here a computer object representing a real carburetor is characterized by a finite definite set of affordances, "Is a", "Has a", "Does a", "Needs a". This move is wonderful and much has been done with it.
But do formal affordances suffice? I am convinced that the answer is "No".
Consider the humble screwdriver. Can you finitely list all the uses of screwdrivers in all contexts? Great for screwing in screws, of course, but it can be used to open a paint can, prop open a door, wedge shut a door, scrape putty, be tied to the end of a stick and used to spear fish, serve as an objet d'art, a paperweight, a roller, to prop up a piece of cardboard... Note that this list contains many relational features, such as propping up a piece of cardboard against a wall. We cannot list all the relational features and purposes to which a screwdriver might be put. Think not? First consider James Bond in a pinch, or MacGyver, then see if you can list all the uses. You cannot. Yet in any concrete case, you'd race to tie the screwdriver to a bamboo stick to spear dinner on a South Pacific island. That is, we do these things, often easily, sometimes with effort; sometimes you don't think of it, but Jim invents a really novel use of the screwdriver.
Now consider our finite list of affordances. If the affordances do not contain, deducibly, "can be tied to the end of a stick to spear things like fish", you'll never deduce such a novel functionality for the screwdriver.
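The point can be made concrete with a hypothetical object-oriented sketch (the class and its affordance lists are my own illustration, not any real library's API): a formal system can only affirm uses already on its finite list, so a genuinely novel use is simply absent, not deducible.

```python
# A hypothetical screwdriver object with a finite, definite affordance set
# in the "Is a / Has a / Does a / Needs a" style. All names illustrative.
class Screwdriver:
    is_a    = {"hand tool"}
    has_a   = {"flat tip", "handle", "steel shaft"}
    does_a  = {"turn screws", "pry open paint cans", "scrape putty"}
    needs_a = {"human hand"}

    def can(self, use):
        """The formal system can only affirm uses already on its list."""
        return use in self.does_a

tool = Screwdriver()
assert tool.can("pry open paint cans")               # a listed use
assert not tool.can("be tied to a stick to spear fish")  # the novel use is absent
```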
I'll give a second example, given before. A group of engineers wants to invent the tractor, and they have a huge engine block. They mount it on a chassis, which promptly breaks. They mount the block on successively larger chassis, all of which break. At last, an engineer says, "You know, the engine block is so big and rigid, we can use the engine block itself as the chassis and hang everything off the engine block!" That counted as an invention: a new use for the engine block, using the rigidity of the block for a new functionality in the context of inventing the tractor. This is, in fact, how tractors are made. And the invention is the technological analogue of a Darwinian preadaptation, or exaptation, like the emergence of the swim bladder from the lungs of lungfish, where again a novel function, neutral buoyancy in the water column, emerged and changed evolution. We cannot prestate all possible human exaptations, as I have argued to break the Galilean Spell.
Neither the evolution of the biosphere nor the human mind is algorithmic, although the human mind can, of course, perform algorithmically. All this will bear on our philosophy of mind.
Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
March 8, 2010
My purpose in the next several posts is to begin to explore major issues in the Philosophy of Mind. My hope is that it may be possible to bring to bear material from past posts to help resolve deep issues that have been with us at least since Descartes, in 1650. To do so, we need a brief outline of these issues.
Descartes famously postulated two kinds of "substance" in the universe, res extensa and res cogitans. Res extensa is, roughly, the physical world. Res cogitans is, roughly, mind and consciousness. This view of two kinds of substance is called "dualism".
It was clear to Descartes that his dualism raised the deep issue of how mind/consciousness acts on the physical matter of the body, including the issue of how the mind can have a morally responsible free will. His own proposal was that the mind acted on the body via the pineal gland, a single gland in the brain.
With respect to res extensa, Descartes was an early mechanist: the world, and the bodies of animals and humans, were conceived of as clock-like machines, gears, escapements and so forth.
This early view was profoundly enriched by Newton's three laws of motion, universal gravitation, and the invention by Newton and Leibniz of the calculus.
Recall that Aristotle had considered four causes: formal, material, final and efficient. The formal cause of a house is the blueprint, the material cause of the house is the bricks and mortar, the final cause of the house is your decision to build it, and the efficient cause is the actual process of building the house. As I have mentioned, Aristotle also offered a model for explanation in science, deduction: All men are mortal, Socrates is a man, therefore Socrates is mortal. As Robert Rosen pointed out in 'Life Itself', with Newton's laws in differential equation form plus initial and boundary conditions, one has, say for a table of billiard balls, the initial positions and momenta of all the balls and the boundary conditions of the billiard table, and then the physicist integrates the differential equations to get the trajectories the balls will follow. But as Rosen points out, integration is exactly deduction. So Newton mathematized Aristotle's efficient cause as integration of the differential equations for a system, and that became the "new" mechanical world view, for example celestial mechanics.
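Rosen's point that integration is deduction can be made concrete in a toy sketch: given the initial Actuals (position, momentum) and the boundary conditions (the cushions), the whole trajectory follows mechanically. All numbers here are illustrative.

```python
def integrate_billiard(pos, vel, table=(0.0, 2.0), dt=0.01, steps=500):
    """Euler-integrate a one-dimensional billiard ball with elastic cushions.
    Initial conditions plus integration yield the entire trajectory:
    integration as deduction."""
    lo, hi = table
    trajectory = []
    for _ in range(steps):
        pos += vel * dt
        if pos <= lo or pos >= hi:   # boundary condition: the cushion
            vel = -vel               # elastic reflection
            pos = min(hi, max(lo, pos))
        trajectory.append(pos)
    return trajectory

# Determinism: the same initial Actuals always deduce the same trajectory.
assert integrate_billiard(0.5, 1.0) == integrate_billiard(0.5, 1.0)
```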
Note two things about Newton's triumph. First, from the perspective of an Alfred North Whitehead, or of myself, both interested in exploring ontologically REAL Possibles giving rise to ontologically real Actuals that give rise to ontologically real Possibles, Newton's and Einstein's worlds contain only Actuals. Thus, for Newton, what IS the Actual state of the billiard ball system, in terms of positions and momenta, causally determines the next Actual state of the system, as revealed by integration of his equations of motion. Second, for Newton all events have sufficient Actual causes. There is no uncaused event. The forward and past (due to the time reversibility of Newton's laws) trajectories of the billiard balls are entirely determined by the present state of the billiard balls' positions and momenta: a sequence of Actuals, each begetting the next or preceding Actual.
With this in mind, let's return to Cartesian dualism. We have res extensa, described by Newton's deterministic equations. Now pass to a deterministic set of equations describing the dynamical neural behavior of the brain. Then we confront the profound problem: How does mind/consciousness, as we experience it with our sense of free will, cause matter, res extensa, to change? The standard philosophy of mind arguments are straightforward: 1) The brain, as a physical system, is causally sufficient to generate the next Actual state of the brain, so there is nothing for mind to do. 2) Besides, there is no causal means by which mind/consciousness can act on matter.
We're stuck! One response was Idealism: Bishop Berkeley's theory that all is Res Cogitans, and that what seems to be Res Extensa seems so because it is held in the mind of God. There are a variety of Idealist positions spawned by Berkeley.
The modern view is called the "mind-brain" identity theory. It holds that causal events in the brain, say circuits of neurons firing and creating a nonlinear dynamical system among about 10 to the 11th power neurons, constitute the causal behavior of the (classical) neurons, much as in Newton: Actuals give rise to the next Actuals in a state space of the firing activities of all 10 to the 11th power neurons at each moment, hence a flow along trajectories in this state space. But, since the mind and brain are identical, this very dynamical activity IS mind and consciousness.
The mind-brain identity view is dominant today, and I agree with it, although not as formulated above.
A subspecies of this mind-brain identity theory is called "connectionism". Here is the idea. A neuron firing or not can be thought of as the truth or falsity of a proposition, say, "An edge at angle X is here in the visual field". Then the firing of all the neurons is like a computer calculating, a Turing machine, and vastly many computable functions can be carried out by the brain, computing the truth or falsity of vastly many propositions.
The neurological correlate of this is easily seen in receptive fields. Work beginning in the 1950s, by Stephen Kuffler and then Hubel and Wiesel, showed that a retinal ganglion cell at the back of your eye could be stimulated to fire by a point of light directed at a small spot on your retina, but inhibited from firing by directing the light at a small circle on your retina surrounding the central excitatory spot. This is an on-center, off-surround receptive field. Later, Hubel and Wiesel showed that there are cortical receptive fields that respond to short lines of light with specific spatial orientations, and these are called edge detectors, for it was soon realized that a set of these receptive fields firing together could detect a straight edge spanning a number of edge receptive fields oriented in the same direction.
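A toy numerical version of an on-center, off-surround field makes the idea vivid (the weights are illustrative, not measured values): the unit is excited by light on its central spot, silent under uniform illumination, and inhibited by light on the surround.

```python
# Illustrative on-center, off-surround weights: excitation at the central
# spot, inhibition in the surrounding ring. The weights sum to zero, so
# diffuse light produces no net response.
CENTER_SURROUND = [[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]

def response(light_patch):
    """Weighted sum of a 3x3 patch of light intensities over the field."""
    return sum(CENTER_SURROUND[i][j] * light_patch[i][j]
               for i in range(3) for j in range(3))

spot    = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # light on the center: excites
uniform = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # diffuse light: no net response
ring    = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]  # light on the surround: inhibits
```

Here response(spot) is 8, response(uniform) is 0, and response(ring) is -8; a row of such fields with aligned, elongated weights is, in essence, an edge detector.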
This triumph led to the "grandmother" cell theory, in which a specific neuron would fire if and only if you looked at your grandmother. This theory has fallen into disfavor, as such cells have not been found. More, this Connectionist vision of the brain typically is associated with the claim that the mind is algorithmic, a computational machine like a classical computer. In a later blog I will suggest that this view is deeply wrong, the mind is not algorithmic. But that is for later...
So what are the philosophic problems with the mind-brain identity theory?
"Well", a philosopher of mind commonly says, "let's take the "meat-brain" part of the mind-brain identity. Once again, it seems as if the Actual state of the brain is causally sufficient for the next Actual state of the brain, so again, there is nothing for mind to do." (You may wonder why the philosopher gets to ask this question if mind and brain are identical, but an actual survey of a modest number of good philosophers shows that they do ask this question.)
Worse, there seems no way for mind to act on the brain - just as in dualism.
One response is to say the conscious mind is a mere "epiphenomenon", with no power of its own to cause anything in the world of matter. Here consciousness is either an illusion or, in any case, ineffective in the real world. This may be the position of Tom Clark, and of Ursula, my co-blogger, who in her recent post says of herself, "she did this and that", meaning her brain did it, not Ursula. (I may have misunderstood a very good friend, who will surely tell me if I have.)
So what about morally responsible free will? Here is the standard dilemma. The mind-brain is a deterministic dynamical system, a la Newton, but with different "neural" equations. So you were determined by that dynamics to kill the old man in the wheelchair. Not your fault. You didn't do it. No responsible free will.
Alternatively, we have a little or a lot of quantum chance, in the simplest case like the random decay of a radioactive nucleus. So you are sauntering down the street and, by random chance, kill the same old man in the wheelchair. Not your fault, just a random event.
Again we are stuck. We can have no responsible free will.
I will offer an argument later about a coherent, decohering-recohering mind-brain system that has consequences for the classical world via decoherence, but decoherence that is acausal, so mind can have consequences for matter without acting causally on matter. I think this is actually very hopeful. It answers cleanly one of the outstanding problems in the philosophy of mind - IF the mind-brain system is quantum coherent, decohering and recohering, which none of us knows as yet.
A responsible free will is harder.
But there are further problems:
Why did consciousness evolve anyway? Suppose an algorithmic robot with sensors could calculate exactly what will happen in its world. Why bother to be "aware" of that world? Just buzz around, plug itself into power sockets, and pop oil into its joints.
If mind is a machine, such as a connectionist machine, what use is there in being conscious, in having awareness, or "qualia"?
Now you might think that an answer is that the robot cannot compute its world exactly due to measurement error, or what is called "deterministic chaos", the famous butterfly effect where a butterfly in Rio can change the weather in Chicago because tiny changes in initial conditions lead to widely varying trajectories in state space.
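Sensitive dependence on initial conditions is easy to exhibit numerically. A minimal sketch with the chaotic logistic map, a standard textbook stand-in for weather-like dynamics (the starting values are arbitrary):

```python
def divergence(x0, y0, r=4.0, steps=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from two starting
    points and record the largest gap the two trajectories ever reach."""
    x, y, gap = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

# Two initial conditions differing by one part in a billion end up
# macroscopically far apart: the butterfly effect in miniature.
gap = divergence(0.2, 0.2 + 1e-9)
```

The tiny initial difference roughly doubles each iteration, so after a few dozen steps the two trajectories are wandering the state space independently of one another.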
But this answer won't do. The unconscious but able computerized robot could sense the difference between its predictions about the world and the world itself, reset the initial conditions, and recompute. Why be conscious? There seems no answer.
This blog is at the philosophy of mind 101 level, of course, but I hope it set in place the context of the problems we face. In my next blogs I hope to take up these issues.
Post Script: In a recent blog, 'Can a Changing Adjacent Possible Acausally Change History? The Open Universe, IV', I discussed 'Vast chemical reaction graphs' and the flow of a small amount of matter on such graphs. I claimed equilibrium would never be reached in the lifetime of the universe, that fluctuations would not damp out but be history dependent, due to those very fluctuations that do not damp out, and that, if we interpret Quantum Mechanics in terms of an ontologically 'real possible', the entire process is acausal. A new study, just published in the Proceedings of the National Academy of Sciences, USA, of the famous Murchison meteorite that fell in Australia about 40 years ago, used high resolution mass spectrometry and NMR, able to detect mass differences of a single electron. The study detected 14,000 organic compounds from which, the authors say, millions of organic compounds can be made. The data suggest the meteorite may be older than the sun and carry compounds from early in the formation of the solar system. My addition is that space chemistry may indeed be a flow on a hypo-populated "vast" reaction graph as we have discussed.
With respect to the philosophy of mind for later blogs, this flow, if interpreted as I have done above, is acausal, lawless but non-random in its historical contingency, and offers a potential way out of the bind about free will as either deterministic, hence not free, or merely quantum random, like the decay of a single radioactive nucleus, hence just random, so not morally responsible, as we just happen to kill the old man in the wheelchair.
'Scientists say that a meteorite that crashed into Earth 40 years ago contains millions of different carbon-containing, or organic, molecules.'
More information: High molecular diversity of extraterrestrial organic matter in Murchison meteorite revealed 40 years after its fall, Philippe Schmitt-Kopplin et al., PNAS February 16, 2010 vol. 107 no. 7 2763-2768, doi: 10.1073/pnas.0912157107
Originally published on NPR's 13.7 Cosmos and Culture Blog . See original entry for comments.
March 3, 2010
In this blog I am going to take us beyond known physics, or at least the physics that I, a non-physicist, know is known. We are all more or less familiar with the weird world of Quantum Mechanics, and of course the classical world of Newton and Einstein. I have begun to suspect, on grounds I discuss below, that there is an entire realm, which I will call the "Poised Realm", between the two. If there is such a realm, its importance may be twofold: first, perhaps very novel physics; second, in a later blog I will advance the working hypothesis that this "Poised Realm" in the human and probably other brains IS consciousness. Since we have had no idea what consciousness "is", a new working hypothesis may, one day, be testable and become real science.
In past blogs I have introduced in outline the famous two-slit experiment, in which photons from a "photon gun", directed toward a barrier with two slits cut in it and beyond to a photodetector screen, give rise to the astonishing interference patterns of light and dark bands on the screen. As Feynman taught in his famous lectures on physics, no classical view of reality can account for this phenomenon. Now, the interference bands look something like the patterns on the bottom of a swimming pool if two pebbles were dropped into the still water of the pool. This image helps in understanding the famous Schrodinger time-dependent wave equation. This wave equation "propagates" a quantity which, like ordinary waves, undulates in time and space. Therefore there is a "phase" associated with each point on this wave. The light and dark bands of the interference pattern arise because, when the wave passes through the two slits, then, like a plane water wave hitting a similar wall with two holes, the Schrodinger waves passing through the two slits give rise to roughly semicircular spreading waves from both slits that advance and hit the photodetector. Where two peaks or two troughs meet at the same point on the photodetector, the "amplitudes" of the two waves add together to create a higher amplitude. Where a peak of one wave meets a trough of the second wave, the two cancel out entirely, leaving zero amplitude. This adding of in-phase and out-of-phase wave amplitudes is called, respectively, "constructive interference" and "destructive interference".
In the mathematics of quantum mechanics, the next step is to square the absolute value of the amplitude. By the so called "Born rule", this squared value is the PROBABILITY that a photon will be measured at that time and space location. Three points are essential here.
First, the "probability" is ontological, not epistemological. A photon may or equally well may not be detected at that spot, with an acausal probability. Note that Aristotle's Boolean law of the excluded middle, (A and Not A) cannot both be true simultaneously, is not valid here. In this quantum situation, A as well as Not A may happen. Both are true.
Second, all the phase information, i.e., where the peaks and troughs are in time and space, must be available when the waves hit the photodetector screen for the interference patterns to arise.
Third, and critically, if we distinguish between a quantum system and its environment, quantum or quantum plus classical, phase information may be lost from the quantum system to the environment and not be recoverable. This process, as described in earlier blogs, is called "decoherence".
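The amplitude arithmetic behind the bands can be sketched numerically. A minimal illustrative model of two equal-weight paths (the normalization and numbers are my own, not from any particular experiment): add the two complex amplitudes, then square the absolute value, per the Born rule.

```python
import cmath

def intensity(path_difference, wavelength=1.0):
    """Two-slit detection probability at a screen point: add the complex
    amplitudes of the two paths, then square the absolute value (Born rule)."""
    phase = 2 * cmath.pi * path_difference / wavelength
    amplitude = 0.5 * (cmath.exp(0j) + cmath.exp(1j * phase))
    return abs(amplitude) ** 2

bright = intensity(0.0)  # peaks meet peaks: constructive interference -> 1.0
dark   = intensity(0.5)  # peak meets trough: destructive interference, ~ 0
```

A half-wavelength path difference puts the two waves exactly out of phase, so their amplitudes cancel and the detection probability vanishes: a dark band.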
Now, if phase information is lost from the system to its environment, at some point so much phase information has been lost that the system alone can no longer exhibit the light-and-dark-band interference patterns that are the hallmark of quantum behavior. For this reason, many if not most physicists now say that decoherence is the best current account of how the classical world of Actual things emerges from the quantum world. The famous Copenhagen interpretation of the Schrodinger wave is that it is a wave of "possibilities". These possibilities are ontologically real. So, in decoherence, ontologically real possibilities yield the classical world. More, this transformation is acausal, for there is no causal account of the loss of phase information. Phase information is just lost into the environment.
But something radically new has emerged in recent physics: the transformation from quantum to classical is now thought by a number of physicists to be reversible. That is, the quantum possibilities can decohere to the classical world of actual events or entities, then recohere to quantum possibilities! Assume for the moment this is correct; I'll give the grounds for it below.
If quantum can convert to classical and classical can convert to quantum via decoherence and recoherence, then there may be an entire new "Poised Realm" between the quantum and classical worlds. Why? Well, it takes time for a quantum system to decohere, often on the order of a femtosecond, or 10 to the -15 seconds. That sounds short, but the shortest time scale in the universe is the Planck time scale of about 10 to the -43 seconds, so some 10 to the 28th power Planck moments pass while decoherence happens. Conversely, presumably, but not surely, it takes time for recoherence to happen. No data are available, but let's say a femtosecond. During these intervals the quantum system is losing coherence in some or all of its "quantum degrees of freedom", or recohering with respect to some or all of them. Thus there must be time periods where the system is in my "Poised Realm".
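The time-scale comparison is simple arithmetic, using the measured Planck time of about 5.4 x 10 to the -44 seconds:

```python
# How many Planck moments pass during a femtosecond-scale decoherence event?
decoherence_time = 1e-15    # seconds: a typical decoherence time scale
planck_time      = 5.4e-44  # seconds: the Planck time

planck_moments = decoherence_time / planck_time  # about 2 * 10**28
```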
Then IF the quantum to classical conversion is reversible, it is logically possible that a physical system can remain in the Poised Realm for a long time. I explore next what this might mean, for it is an entirely unexplored area of physics, with possible implications for consciousness.
Is decoherence experimentally verified? Absolutely. Decoherence is the bane of quantum computers, for the quantum degrees of freedom that are carrying out the computation are known to gradually decohere. Many lines of evidence demonstrate decoherence and it is familiar to many physicists.
Can the rate of decoherence be slowed down? Astonishing new evidence says that the answer is almost surely "Yes". Chlorophyll is the molecule in plants that carries out photosynthesis. Photons hit a receptor site and the absorbed energy migrates to a reaction center. Recent experimental work has demonstrated that the quantum coherent state of chlorophyll during this process lasts hundreds of femtoseconds, on the order of a picosecond, or even longer! So decoherence can be slowed down, and the coherent state is thought to sharply increase the efficiency of energy extraction by the plant from the photon. More, the chlorophyll molecule is surrounded by an "antenna" protein which is thought to suppress decoherence, or possibly enable recoherence. This hypothesis can be tested using mutant antenna proteins.
Long-lived coherence of a quantum state at ambient temperature is important because, until recently, most physicists would have believed that at room temperature decoherence would rapidly destroy all quantum coherence. This suggests that long-lived coherence may be biologically useful, selected for, and tuned for sundry functions.
What about the converse? What is now known about conversion from a decohered, classical (for all practical purposes) state to regain the quantum coherent state? There are, at present, two bodies of work.
First, the mathematician Peter Shor proved a quantum error correction theorem for quantum computers. This work has been expanded upon. Briefly, if quantum degrees of freedom in a quantum computer are decohering due to loss of phase information from the computer (the system) to its environment, then Shor showed that if information is added to the system from the outside, the decohering degrees of freedom can be made to recohere. This clearly says that recoherence is possible.
Second, the physicist Hans Briegel, University of Innsbruck, Austria, has published two papers showing that a quantum coherent "entangled" system can decohere to classicality, then recohere to quantum entangled coherence.
I am not a physicist or mathematician. But based on the above, I will assume that the quantum coherent to classical conversion can happen acausally by decoherence, and recoherence can also be attained.
This immediately raises four huge sets of questions. First, what determines the ratio of quantum to classical processes in the universe? Second, can a sustained, partially decoherent "Poised Realm" be attained and maintained? Third, and critically, what laws, if any laws exist, describe the behavior of a system in the "Poised Realm"? I return to this in a moment, for we really know almost nothing, though we do know one critical thing. Fourth, how can one experimentally achieve and study a possible sustained, partially decoherent "Poised Realm" physically?
I will jump to the third question: If a Poised Realm exists and persists between quantum and classical, partially decoherent, in a system in its environment, what laws, if any, apply to its behavior? At this stage the only thing that seems certain is this: the quantum system is losing phase information to its environment, therefore the Schrodinger equation for the system cannot be propagated "unitarily". "Unitarily" means that the Schrodinger wave propagates in such a way that the squares of the amplitudes of all the possibilities, interpreted as probabilities, sum to 1.0. This is the only way physicists know how to compute the forward-time (or reversed-time) behavior of the Schrodinger equation describing a quantum system. But this cannot be done for the system alone if phase information is being lost from the system to the environment. Thus, at present, we have no idea whether the behavior of a system in the Poised Realm is lawful or not, or what that law or those laws may be. Thus, if a maintained Poised Realm can be created, it seems to be a realm of very new physics.
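What "unitarily" means can be sketched with a two-state toy (the matrices are illustrative, not a model of any actual system): a unitary step preserves the Born-rule sum of squared amplitudes, while a "leaky" non-unitary step, a crude stand-in for amplitude lost to the environment, does not, which is why the system alone cannot be propagated unitarily.

```python
def apply(matrix, state):
    """Apply a 2x2 matrix to a two-component amplitude vector."""
    (a, b), (c, d) = matrix
    x, y = state
    return (a * x + b * y, c * x + d * y)

def total_probability(state):
    """Born rule: the squared amplitudes, summed over the possibilities."""
    return sum(abs(amp) ** 2 for amp in state)

s = 2 ** -0.5
unitary = ((s, s), (s, -s))              # a unitary step (a Hadamard rotation)
leaky   = ((0.9 * s, 0.9 * s), (s, -s))  # NOT unitary: amplitude leaks away

psi = (1.0, 0.0)
kept = total_probability(apply(unitary, psi))  # stays 1.0
lost = total_probability(apply(leaky, psi))    # falls below 1.0
```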
One possibility to bear in mind is a contrast. We are used, since Newton, to ordinary differential equations and partial differential equations in physics. In areas of computer science we have grown used to "agent-based models", whose agents interact to exhibit behavior, but so far we typically cannot write down differential equations for their detailed behavior. With respect to the Poised Realm, if a sustained Poised Realm can exist, we don't have any idea what happens except at the quantum and classical limits of this Realm.
The second question asks: Can a Poised Realm state be maintained? It would seem possible. If the quantum system is losing phase information to its environment, Shor's theorem assures us that addition of information from the outside can cause decohering quantum "degrees of freedom" to recohere. Thus it is conceivable that a balance can be struck between decoherence and recoherence, leading to a sustained state in the Poised Realm. Obviously, at this point, how to realize this "in principle" possibility is unknown.
If the quantum to classical transition via decoherence is reversible, then the first question looms large: What is the balance between quantum and classical processes in the universe and its "parts", and via what means, laws, or otherwise? One radical possibility is a kind of abiotic natural selection, in which bits of classical matter that are good at avoiding return to a quantum state persist. If they have variants, by accretion of different added bits of classical matter, that are even more resistant to return to a quantum state, those variants will be abiotically "selected", i.e., they will persist better.
I turn to the third and fourth questions with some initial thoughts. Anton Zeilinger, an Austrian physicist, is carrying out experiments with objects of increasing mass in the famous two-slit experiment, looking, as mass increases, for the failure of interference bands. To date, Zeilinger has used a beam of buckminsterfullerene molecules, 60-carbon-atom molecules, and shown that they exhibit interference. Now, one expects that a beam of dead rabbits flung at a two-slit apparatus would result in two distinct piles of dead rabbits, one behind each slit, with no interference pattern of bands of rabbits, no rabbits, rabbits, across the "rabbit detector". Thus, at some point, as the mass of entities in the beam increases, and presumably as their density increases, decoherence will set in and the interference bands will start to disappear.
If the above can be found, two things follow. First, at such densities where decoherence sets in, a sustained, partially decoherent state in the Poised Realm would seem to have been attained. Second, as I have noted in an earlier post, in a Special Relativity setting no law would seem to describe the decoherence process, based on Popper's argument about past and future light cones. Lawlessness in the Poised Realm, with or without a special relativity setting, might be testable. For example, the way interference bands "fade" as decoherence sets in might not be precisely repeatable or stable over time, and could be shown not to be due to experimental noise.
In a future blog, after laying a foundation in some contemporary philosophy of mind issues, and seeking answers to 350-year-old questions about how mind can "act" on matter, how we might have a responsible free will, and why consciousness might have been of selective advantage, I will offer, as noted, the obviously improbable working hypothesis that consciousness is associated with the Poised Realm plus sensors and effectors to couple to the world. Perhaps one day this could be demonstrated for the mammalian brain.