Originally posted to 13.7 Cosmos & Culture on 9/28/2016
I start with a remarkable quote:
"The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable. Granted that a definite thought, and a definite molecular action in the brain occur simultaneously, we do not possess the intellectual organ, nor apparently any rudiment of the organ, which would enable us to pass by a process of reasoning from the one phenomenon to the other. They appear together but we do not know why. Were our minds and senses so expanded, strengthened and illuminated as to enable us to see and feel the very molecules of the brain, were we capable of following all their motions, all their groupings, all their electric discharges, if such there be, and were we intimately acquainted with the corresponding states of thought and feeling, we should be as far as ever from the solution of the problem. How are these physical processes connected with the facts of consciousness? The chasm between the two classes of phenomena would still remain intellectually impassable... Let the consciousness of love, for example, be associated with a right-handed spiral motion of the molecules of the brain, and the consciousness of hate with a left-handed spiral motion. We should then know, when we love, that the motion is in one direction, and, when we hate, that the motion is in the other; but the 'Why?' would remain as unanswerable as before." [My italics.]
This was part of eminent Victorian physicist John Tyndall's 1868 Presidential Address to the Physical Section of the British Association for the Advancement of Science. As far back as 148 years ago, scientists were already puzzled by the strange fact that, once we adopt a purely materialistic description of the mind, we face the major challenge of figuring out how "molecular action in the brain" relates to thought.
The interesting aspect of Tyndall's argument is his claim that even if we did understand the hows — that is, if, as in his example, love were associated with a right-handed spiral motion of brain molecules and hate with a left-handed one — we would still have no clue how to relate the molecular machinery of emotion and thought to the subjective experience of emotion and thought.
Of course, much has happened in the burgeoning field of cognitive neuroscience in the past 150 years. With the advance of noninvasive probing technologies such as fMRI and EEG, we can follow, approximately at least, the regions of the brain where stuff happens, so to speak, as we feel love or hatred, listen to music, or meditate. Scientists have cataloged more than 100 biochemical compounds, the neurotransmitters, that carry signals across chemical synapses, the bridges between neurons or between neurons and muscle and gland cells: glutamate, acetylcholine, dopamine, epinephrine (adrenaline), histamine, etc. We now have a much clearer picture of the brain, with its roughly 85 billion neurons, each with some 15,000 connections to neighboring neurons. The pathway complexity is staggering, summarized in the "connectome," a kind of wiring diagram mapping all neural connections in an organism.
In their 2005 article proposing the connectome, authors Olaf Sporns, Giulio Tononi and Rolf Kötter remarked: "The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate." In other words, within the working hypothesis of the brain as a network of neurons and synapses, the hope is that this map will somehow address Tyndall's concern and finally clarify how "physical processes are connected with the facts of consciousness." A second goal of the map, extremely important from a medical perspective, is to "provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted." This second goal, of course, is much easier to achieve.
One immediate advantage of having a connectome is that it could, in principle, be reproduced in computers. One would then have a simplified model of the brain's connectivity, which could be used in many different ways — for example, to test the effect of localized trauma or stimulation on overall functionality. Assuming the chemical elements could be brought into the model, that is, the complex maze of neurotransmitters flowing through neural synapses, the effects of specific drugs could be tested in simulations, without the need for animal or human subjects. We can then see immediate medical applications of this approach, which, by themselves, make the effort extremely relevant for science.
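To make the idea concrete, here is a minimal toy sketch of that kind of lesion experiment. The region names and wiring below are invented for illustration — a real connectome has billions of nodes and weighted, dynamic connections — but the principle is the same: represent connectivity as a graph, remove a region to simulate localized trauma, and compare which regions remain reachable.

```python
from collections import deque

# Toy "connectome": a directed graph of hypothetical brain regions.
# The regions and their wiring are invented for illustration only.
connectome = {
    "V1": ["V2"],     # early visual area feeds a higher visual area
    "V2": ["PFC"],    # which projects to the prefrontal cortex
    "A1": ["PFC"],    # an auditory area also projects to PFC
    "PFC": ["M1"],    # PFC drives the motor area
    "M1": [],
}

def reachable(graph, start):
    """Return the set of regions reachable from `start` along connections."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def lesion(graph, region):
    """Simulate localized trauma: remove a region and all its connections."""
    return {n: [t for t in ts if t != region]
            for n, ts in graph.items() if n != region}

print(sorted(reachable(connectome, "V1")))                  # intact pathway
print(sorted(reachable(lesion(connectome, "PFC"), "V1")))   # after PFC lesion
```

In this toy model, a lesion to the hypothetical "PFC" node severs the visual area's route to motor output — the kind of structural question (goal two above) that a connectome answers readily, while saying nothing about what the intact pathway feels like from the inside.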
The hard problem, however, is whether such efforts could indeed shed light on Tyndall's question. As New York University philosopher David Chalmers remarked, the "really hard problem of consciousness is the problem of experience." We really don't know, and claims to the contrary are optimistic at best. This doesn't mean the problem is intractable; such categorical claims are dangerous in science, given that we keep surprising ourselves by solving problems that once seemed beyond us. Still, it seems reasonable to argue that some elusive element is missing here, something that translates physiological neural activity into the subjective experience of having a thought or an emotion. Enthusiasts argue that we will only know if we try, and that building connectomes and sophisticated computer programs may bring us closer to an understanding. Such arguments assume that the sheer complexity of neural connectivity will somehow engender higher consciousness through some sort of emergent collective phenomenon. It's hard to argue with the relevance of this research: even if it fails to provide a window into consciousness, it will still have a profound impact on medical diagnosis and brain pharmacology.
On the other hand, it's hard to imagine how the collective emergent behavior of neurons can become a thought or the experience of an emotion. If consciousness is, as it presumably must be, an organized state of matter, we seem to be lacking an essential component to describe it. For comparison, a building has bricks and pumps and electrical currents flowing through countless wires, controlled by on-off switches. It is a mechanical contraption, working firmly within a set of physical laws. We understand buildings, and can build and fix them, because we know the underlying physical principles by which they operate. Likewise, it is plausible that we can build brain-like systems with different kinds of sensory awareness, such as seeing or hearing, that respond to such stimuli with certain actions. Many robots already do this.
Assuming we have an approximate connectome of the human brain, and that we know most of the underlying physical and biochemical principles, we may indeed get very close to a model that mimics how we see and hear. But will that model also have the subjective experience of being? The issue here seems to be not a tentative "yes" or "no" answer, but a still-unanswerable "why not?"