Three to four years later, however, the researchers found an increase in gray matter in the posterior hippocampi, or the back part of the hippocampus, among the 39 trainees who ultimately qualified as taxi drivers. This change was not observed in the non-taxi drivers or trainees who had failed the exams. [...]
Qualified taxi drivers showed better memory performance for London-based information during the follow-up testing than controls or those who failed the test; however, they displayed “surprisingly poorer learning and memory for certain types of new visual information,” compared with controls, the researchers write online today in the journal Current Biology, “suggesting there might be a price to pay for the acquisition of their spatial knowledge.” (via.)
Cannabinoid receptors involved in placebo analgesia.
Placebo analgesia is mediated by both opioid and nonopioid mechanisms, but so far nothing is known about the nonopioid component. Here we show that the specific CB1 cannabinoid receptor antagonist 5-(4-chlorophenyl)-1-(2,4-dichloro-phenyl)-4-methyl-N-(piperidin-1-yl)-1H-pyrazole-3-carboxamide (rimonabant or SR141716) blocks nonopioid placebo analgesic responses but has no effect on opioid placebo responses. These findings suggest that the endocannabinoid system has a pivotal role in placebo analgesia in some circumstances when the opioid system is not involved. (via.)
When transcranial magnetic stimulation (TMS) was used to inhibit the activity of neurons in the posterior medial frontal cortex, the participants no longer changed their ratings of the photographs to bring them in line with the rest of the group.
Dr Klucharev believes this part of the brain is responsible for generating an “error” signal when individuals deviate from the group opinion, triggering a cascade that leads them to conform with the group view.
He said: “What if that mechanism could be suspended for a time? The group who were exposed to the TMS changed their views to a much lesser extent – they were immune to ‘group pressure’.
“Individuals differ in the strength of the error signal – which is why some people are more conformist than others. It also tells us that conformity is a rather automatic process that is based on an old evolutionary mechanism.”
Read the rest of the article at: The Telegraph
Nothing Either Good Or Bad?
You’re a 10-year-old girl in Africa, lying nervously on your back with your lower body fully exposed. You don’t know why, but the grown-ups nearby keep assuring you that the tradition they’re about to carry out is to prevent you from becoming one of those immoral outcasts. They are concerned with one thing only: keeping you pure – your virginity must be preserved. Being a female and all, you mustn’t pleasure yourself. With your consent, what is about to happen to you is called a clitoridectomy – partial or total removal of the clitoris. Without your consent – and almost surely, you have not granted it – it is called female genital mutilation (FGM).
Is FGM good or bad? There are two predominant philosophies to answering that question. Some people would say it’s bad because, out of all the ways to optimize human flourishing and well-being, FGM can’t possibly be one of them. These people are moral realists: what is right or wrong for others is right or wrong for me, because morals transcend culture, race, sex, and popular vote. It follows, then, that people with good intentions can still be wrong about moral questions. Neuroscientist Sam Harris champions this view, summarized here.
The other approach comes from the moral relativists. Their line of reasoning can be summarized as follows: as morals are the product of culture, they are relative. Who are we, the chauvinists of the West, to argue that FGM is absolutely wrong? Our version of “right” cannot be said to be better than someone else’s “right.” Moreover, science has nothing to say about moral claims, because science is about how the world is, not how it ought to be. Harvard psychologist Joshua Greene gives a brief overview championing a version of moral relativism.
The Is-Ought “Problem”
The short of it is this: moral values rely on facts about how the mind works because moral values arise from minds at work. Minds at work are very real and very measurable parts of the natural world, which renders them susceptible to full-blown scientific investigation.
Human well-being is a concept that can be described in terms of brain states. The more we understand about neuroscience and psychology, the more we will be able to determine which actions lead to, and which ones inhibit, human well-being or suffering. Again, every possible state of suffering or well-being is realized at the level of the brain.
At the outset, note what I am not saying: I am not saying that we should only use science to navigate moral grounds. Fields like behavioral economics surely contribute immensely to the dialogue too. I’m simply saying two things: moral questions can have right and wrong answers, and science can and should inform decisions that require us to invoke the concept of morality. Additionally, if we are to speak of what is morally right and wrong, we must hold this discussion with a 21st century mentality.
Putting this mentality aside momentarily, in the 1700s the Scottish philosopher David Hume famously claimed that it is a mistake to derive what is good (what ought to be the case) from facts about the world (what is the case). In other words, you can’t get an ought from an is. Whenever someone attempts to define the “good” in terms of natural properties, such as measurements of human well-being, that person is said to be committing a naturalistic fallacy.
Today, this philosophical musing oversaturates nearly every conversation about morality. Note, however, that it is usually invoked on the basis of authority (“As Hume once said…”) rather than facts (“As the data show…”). This is a fundamental difference between science and philosophy — science removes what philosophy does not: our natural urge to fool ourselves. It seems rather strange that science, a process that exercises our intellectual capabilities to the fullest, from creativity to rational and evidence-driven discourse, should be left out of the conversation on the most important moral questions in life.
A Science of Human Values
At this point, it becomes important to note that many moral questions will have answers in principle, but answers in practice might be extremely difficult or at times impossible. However, this is true of every sphere of human knowledge. In the next 10 seconds, how many times will the hearts of every human alive beat? This question has a definitive answer, but collecting the data is an expensive waste of time. This does not make cardiology a futile scientific enterprise. Similarly, while some questions in morality seem nearly impossible to reconcile (“Should we kill everyone who isn’t happy just to increase the average well-being in the world?”), this does not render morality outside the province of science.
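The heartbeat question, in fact, can be estimated even if it will never be measured exactly. Here is a back-of-envelope sketch in Python; the population and heart-rate figures are assumed round numbers for illustration, not data:

```python
# Assumed round figures (not measured data): ~7 billion people alive,
# average resting heart rate ~70 beats per minute.
population = 7_000_000_000
beats_per_minute = 70

# Total beats across all living humans in the next 10 seconds.
beats_in_ten_seconds = population * beats_per_minute / 60 * 10
print(f"{beats_in_ten_seconds:.2e}")  # prints 8.17e+10
```

The answer exists and is computable in principle; only collecting the real data is impractical, which is the point of the analogy.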
Neuroscientist Sam Harris has tackled this problem recently in The Moral Landscape. Can there be universally right and wrong ways to move about in the world? (Here’s his provocative TED Talk.) To answer this, he asks us to imagine a universe in which all conscious creatures suffer to the maximum extent possible. Everyone is in a perpetual state of misery – throwing up, bursting into flames, losing family members repeatedly, and so on. This scenario is what he calls “the worst possible misery for everyone.”
Can such a scenario be called universally bad?
He goes on,
if you think we cannot say this would be ‘bad,’ then I don’t know what you could mean by the word ‘bad’… I am saying that a universe in which all conscious beings suffer the worst possible misery is worse than a universe in which they experience well-being. This is all we need to speak about ‘moral truth’ in the context of science. Once we admit that the extremes of absolute misery and absolute flourishing – whatever these states amount to for each particular being in the end – are different and dependent on facts about the universe, then we have admitted that there are right and wrong answers to questions of morality.
So, if we ought to do anything, it’s to avoid the Worst Possible Misery; what higher priority could an ought serve? Like anything else, facts about the world, including how our brains operate, are constrained by natural laws. It follows, then, that there are ways to physically move away from, or get closer to, the Worst Possible Misery. These movements are contingent on the activities of our brains; neural activity, after all, is anchored solely to the realm of reality — of natural, measurable, explainable events. With this in mind, a science of morality becomes possible.
Harris draws an analogy between health and morality to clear up some philosophical fog. The concept of health has no clean-cut definition, but this does not make medicine any less scientific. A precise definition of life is just as hard to find, for that matter, but this does not make biology unscientific either. Imagine if we applied the moral relativist’s line of reasoning to medicine. If a man comes into the hospital throwing up constantly and bleeding profusely from his ears, yet claims that he is perfectly healthy while the doctor disagrees, would the correct response be, “Who are you, doctor, to say he isn’t healthy?”
The fact that analogous questions are no-brainers for medicine but hazy issues for morality means we’re unnecessarily granting morality the kind of immunity to scientific investigation that no other field of study has. Morality has not achieved escape velocity from the gravity of science.
Belief is Secondary to Explanation
In science, truth is not predicated on a popular vote but on data points. We are all entitled to our own opinions, but not to our own facts. One can think of an infinite set of questions that would serve as litmus tests for what is morally acceptable or reprehensible. Does it make a net contribution to human well-being to throw acid in the faces of women who try to get an education in Afghanistan? How would Afghanistan’s economic system be affected if women and men were treated equally?
If we zoom in the microscope from cultures to an individual brain, then we can begin to see where and how moral decisions are processed. To that end, neuroscientist Rebecca Saxe argues that the right temporo-parietal junction (RTPJ), an area of the brain behind and above your right ear, is necessarily involved in making moral judgments. A reader-friendly summary is here. Temporarily disrupting this brain region through transcranial magnetic stimulation (TMS; basically, a very strong magnet) led subjects to change the magnitude of their moral judgments.
“Moral judgments of attempted harm (negative belief, neutral outcome) are significantly different by TMS site (RTPJ vs. control), *P < 0.05.”
Interestingly, some of the RTPJ’s functions are impaired in children with autism, but it should be noted that the RTPJ is not solely responsible for processing moral decisions. That claim would be neuroscientifically naïve. Complex functions like morality recruit many corners of the brain, anterior and posterior, ancient and modern, molecular and behavioral. Saxe’s work elegantly takes the scalpel of science and puts what was once the province of philosophy on the dissection table.
There is also a growing body of science in support of the view that humans have an internal moral compass, independent of our cultural backgrounds. Despite his unfortunate academic snafu, psychologist Marc Hauser offers a quick summary of some current evidence for an “innate moral grammar”. Evolutionary biologist Jerry Coyne summarizes the point as succinctly as possible: our concept of morality, of right and wrong, comes from two things, namely, evolution and secular reasoning. Here’s one more data point for evolution.
Does Well-Being Actually Matter?
By now, the moral relativist will retreat to asking a breathtakingly useless question: “Why, in the scheme of the universe, should we care about well-being to begin with?” The man who valued throwing up and claimed he was healthy simply doesn’t get invited back to the conference on health and well-being; similarly, someone who doesn’t value well-being from the get-go has little to contribute to the conversation about morality besides a Hume-inspired red herring. Here, our argument has hit philosophical bedrock with a stupid shovel of a question, Harris remarks.
It’s like asking why science should value empirical reasoning, or why a circle should not actually be a square. These arguments are splendid ways to run on intellectual treadmills and not get anywhere. That said, the value of avoiding the Worst Possible Misery is the only assumption that needs to be granted.
Finally, there are very relevant situations in which science can inform us on what ought to be the case in order to avoid taking a step closer to the Worst Possible Misery. About 13% of Afghan women are literate. Life expectancy is 44 years. That is a fact. That is the is. So, have we maximized well-being? Science says no.
One recent measure reports that child mortality rates plummet worldwide when women are better educated.
The ought, then, becomes that we ought to educate women, because the alternative (oppressing women) has demonstrably failed to optimize overall well-being across cultures. The same logic can be applied to cultures that promote the bludgeoning of homosexuals or the attempted killing of two people of different cultural backgrounds just for falling in love.
Importantly, scientists surely won’t call all the shots: we can say smoking increases the likelihood of getting lung cancer but men in white coats aren’t descending on every smoker to stop them. They simply provide the facts for people to make well-informed decisions. Science can derive ought from is because it informs our decisions to a degree that is based only on reality and not on shared subjective values.
Neuroscience can act as a bridge between facts and values, even though the moral compass of each culture is not equally calibrated. Some would argue that FGM is a straw man, or at best an easy caricature of a morally repulsive situation. Yet in Africa and many Arab cultures alone, an estimated 3 million girls (of all ages) a year undergo this form of legalized torture. We need science to provide these measurements in the form of graphs to inform moral decisions – graphs contain unbiased data points that are not influenced by the coffee we didn’t have this morning, the political agenda at hand, or the culture in which we were raised. This is relevant to morality because a single scientific data point manages to transcend everything that divides us.
You’re lying on a sandy beach on a hot sunny afternoon, enjoying a few hours of much needed laziness. As you open your eyes and confront the vastness of the ocean in front of you, light of around 470 nm wavelength hits your retina, kindling an impossibly long cascade of events in your brain: a molecule called retinal changes shape, neurons fire action potentials down the optic nerve, which arrive at the lateral geniculate nucleus deep in the brain and trigger more action potentials in the primary visual cortex at the back of your head, and so on ad infinitum. At some point, the mechanical wonder of 100 billion neurons working together produces something special: your experience of the color blue. What’s special is not that you can discriminate that color from others; nor that you are aware of it and paying attention to it. It is not notable that you can tell us about it, or assign a name to it. It’s that you have a subjective, qualitative experience of the color; there is something it is like to experience the color blue. Some philosophers call these experiences qualia – meaning “what kind” – but it is not important what kind of experience you are having, just that you are having one at all. Modern science hypothesizes that subjective experience is a product of the brain, but has no explanation for it.
Study finds that CB1 receptor knockout mice have increased brain inflammation, which leads to earlier cognitive decline.
Brain aging is associated with cognitive decline that is accompanied by progressive neuroinflammatory changes. The endocannabinoid system (ECS) is involved in the regulation of glial activity and influences the progression of age-related learning and memory deficits. Mice lacking the Cnr1 gene (Cnr1−/−), which encodes the cannabinoid receptor 1 (CB1), showed an accelerated age-dependent deficit in spatial learning accompanied by a loss of principal neurons in the hippocampus. The age-dependent decrease in neuronal numbers in Cnr1−/− mice was not related to decreased neurogenesis or to epileptic seizures. However, enhanced neuroinflammation characterized by an increased density of astrocytes and activated microglia as well as an enhanced expression of the inflammatory cytokine IL-6 during aging was present in the hippocampus of Cnr1−/− mice. The ongoing process of pyramidal cell degeneration and neuroinflammation can exacerbate each other and both contribute to the cognitive deficits. Deletion of CB1 receptors from the forebrain GABAergic, but not from the glutamatergic neurons, led to a similar neuronal loss and increased neuroinflammation in the hippocampus as observed in animals lacking CB1 receptors in all cells. Our results suggest that CB1 receptor activity on hippocampal GABAergic neurons protects against age-dependent cognitive decline by reducing pyramidal cell degeneration and neuroinflammation. (via.)
Rhythms in the brain that are associated with learning become stronger as the body moves faster, UCLA neurophysicists report in a new study. The research team, led by professor Mayank Mehta, used specialized microelectrodes to monitor an electrical signal known as the gamma rhythm in the brains of mice. This signal is typically produced in a brain region called the hippocampus, which is critical for learning and memory, during periods of concentration and learning.
The researchers found that the strength of the gamma rhythm grew substantially as running speed increased, bringing scientists a step closer to understanding the brain functions essential for learning and navigation. (via.)
The starry heavens above and the moral law within — these were the two things that Immanuel Kant claimed were immune to scientific investigation. Equally untouchable was the vague abstraction known as consciousness. That was in the 1700s. This book-length post will be split into two parts, each covering one of the hot buttons of consciousness and morality, both within the framework of neuroscience. Here’s the SparkNotes version: consciousness can be explained solely in terms of orderly neural activity and is fully measurable; and morality is, and ought to be, understood in light of the brain states of conscious creatures. We can — and do — have a neuroscience of both, because we’re not in the 1700s anymore.
Part 2 on morality will be posted shortly.
[Edit]: A few decades back, a philosopher demanded a precise definition of consciousness before tackling it with the tools of science. Francis Crick responded by saying, “My dear chap, there was never a time in the early years of molecular biology when we sat around the table with a bunch of philosophers saying ‘let us define life first.’ We just went out there and found out what it was: a double helix.” By popular demand, however, I’ll give it a stab anyway: consciousness is the neural process of bringing into working memory a representation of our ‘self’, where self is defined as the body and its internal states over time (the same way we can bring to mind an image of a beach or a heaping mound of crème brûlée).
Science will fail to explain consciousness, or so the argument goes. A few weeks ago I stumbled on the most recent iteration of neuroscience’s supposed shortcomings. It appears in Boston University’s neuroscience magazine, The Nerve. The article is written by a student of both neuroscience and philosophy; it can be found here. It is a lucid and engaging commentary on where philosophy and science make points of contact to complement each other; it also stresses the instances in which they are separate spheres of inquiry. The article as a whole, however, is an example par excellence of getting philosophy right and science wrong. To be clear, I define science in the broad sense as honest, rational, and evidence-based inquiry, capable of forming testable questions about the world with high predictive power.
In terms of the basic elements from which our bodies are assembled, we are no different than the stars above or the earth beneath or the splendid variety of organisms around us. So, then, how is it that our subjective feelings, what philosophers have termed qualia – our experience of the blueness of the sky or the pain of a toothache — arise from objective and physical fluff? How does the brain achieve the mind’s “I” when it is made up of mindless, I-less, brain cells?
Well, when 100 billion brain cells get together and connect in trillions of ways, seemingly magical stuff happens. This claim makes many cringe because of our inability to completely wrap our minds around staggering complexity. As a trivial thought experiment, if I asked you to think of 2 bottles of Pepsi, you’d do it effortlessly. 5 bottles – took a second longer, but you did it. 9 bottles, kind of tough, but not impossible. 23 bottles – they’re a blur. 100 billion bottles all spewing neuropepsi in 100 trillion dizzying directions all while changing shape and responding to the timing of beverage released behind and in front of them – you get the point.
The article starts by rightfully praising the productive dialogue that occurs when philosophy and science join teams to tackle a particularly hairy problem. Philosophy is very good at asking questions, science at answering them. However, the first party foul occurs with the seemingly innocuous but misleading question: “…can and ought we reduce the mind to the components of the brain?” I’m immediately tempted to pose a simple rebuttal: what single strand of evidence suggests that the mind can be reduced to anything else? The question answers itself later: “Reducing our infinitely rich human experience to synapses and action potentials seems foolish, although they are the only observable physical phenomena that we have to work with at the present time.”
Whether it is foolish or not is irrelevant; nature doesn’t really care about our particular hunches. When something is the only observable physical phenomenon, we can either remove it or re-introduce it in a particular system while leaving all the other pieces untouched. These are loss-of-function and gain-of-function experiments, which have become the gold standard for claiming causality. Accordingly, neuroscientists manipulate “infinitely rich experiences” by perturbing the activity of that which produces them – from synapses to action potentials, genes to neurons. And so, when you’re a student of neuroscience, the mind should look very different to you. It’s physical stuff only. It looks like — it is — the brain.
If neuroscience has taught us anything in the last two decades, it’s that the separation between “mental” and “physical” phenomena simply does not exist. We know this because of the tragic loss-of-function experiments that affect millions of people each year. Broken brain pieces give rise to broken thoughts. Pharmaceutical treatments, however, help glue together these broken thoughts. No amount of philosophy will fight off depression, but a blue pill called fluoxetine is effective. Schizophrenic symptoms shatter lives, but risperidone can intervene and help turn lives around. Minds can go into fits of mania, but these can be curbed by lithium. Parkinsonian symptoms are debilitating beyond belief, but they can be temporarily kept at bay thanks to L-DOPA. Alzheimer’s has all sorts of dramatic effects on memory; donepezil can at least partly treat this kind of dementia. The general principle underlying the effectiveness of these pills is simple: physical stuff interacts only with physical stuff, and the mind is just that. Like a pill, it has a measurable mechanism of action.
If we zoom in the microscope a bit, we find that brain cells are very well equipped to perform a spectacular variety of mental functions, including consciousness. I make this claim in light of a neuron’s specialized ability to integrate all sorts of inputs from the external world while also responding to inputs from within the body. This confers the ability to represent information by changing its own structure as well as the timing and frequency of its own firing, or action potentials. For a review paper on a neuron’s properties click here and for a more in-depth review on the evolution of the synapse, go here.
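The integrate-and-fire behavior just described can be sketched with a toy model. The leaky integrate-and-fire neuron below uses textbook-style parameters chosen purely for illustration, not values from any real cell: the membrane potential integrates input, leaks back toward rest, and emits a spike whenever it crosses threshold.

```python
# Leaky integrate-and-fire sketch (illustrative toy parameters).
dt = 0.1          # time step, ms
tau = 10.0        # membrane time constant, ms
v_rest = -70.0    # resting potential, mV
v_thresh = -55.0  # spike threshold, mV
v_reset = -75.0   # post-spike reset potential, mV

v = v_rest
spikes = 0
for step in range(int(200 / dt)):  # simulate 200 ms
    drive = 20.0                   # constant input, expressed as a steady-state push in mV
    dv = (-(v - v_rest) + drive) * dt / tau   # leak toward rest + integrate input
    v += dv
    if v >= v_thresh:              # threshold crossing: emit a spike, reset
        spikes += 1
        v = v_reset
```

With a steady 20 mV push above a −70 mV rest, the cell repeatedly climbs past the −55 mV threshold and fires at a regular rate, which is the basic currency of neural representation the paragraph above describes.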
When you couple this with the presence of things that bite and move and mate and communicate, with environmental pressures, then evolution by natural selection gives you the following: you get the ability to remember, thanks to the hippocampal memory system; the ability to paint any memory with emotion in light of the amygdala emotional system; the ability to represent sensory information even when it is no longer present to the senses, courtesy of the prefrontal cortex and working memory; the ability to bind multi-sensory experiences through the thalamus and its reciprocal connections with the rest of the brain’s sensory areas; the ability to represent our own body in terms of its position in space, its motor movements, and its sensations via the parietal lobe as well as the motor and somatosensory cortices, respectively. And the list goes on. Now imagine all of these processes working in concert in one brain. This is where you happen. It is also why we experience the world and ourselves as a smoothly occurring story, not as a blooming, buzzing confusion, as psychologist William James would have called it.
Already we can begin to explain our salient, second-by-second awareness of our self over time as a product of specific neural activity — a claim that is not infeasible, especially when you consider the functional properties of a few more brain regions. Two prime candidates are the anterior cingulate cortex and anterior insular cortex, which are specifically thought to be involved in self-awareness (a reader-friendly commentary here). It is no coincidence that many of the brain regions I just mentioned balloon from around age 2 onward, which is about when we begin to experience the world not as hiccups of experience, but as a seamless story.
Let’s look closer at one relevant example. Neuroscientist Earl Miller has done pioneering work connecting memory and attention, especially with regards to the brain’s ability to hold multiple items in a given thought. One of Earl’s seminal papers can be found here with a reader-friendly commentary here. His work builds on a fascinating property of many neural networks, namely, that certain populations of cells march to specific beats. More precisely, they tend to fire at specific frequencies, or oscillations. These oscillations, termed “gamma oscillations” or “theta oscillations” depending on their firing frequencies, have been hypothesized to create optimal windows for communication between neurons across brain regions. Like passing a baton, information from one neuron is transferred to the other when the latter is most ready to receive that information. Amazingly, when multiple items are held in attention, they tend to occur during different points along these oscillations. These “items,” it has been argued, can include our sense of “self.” This is when self comes to mind. Not surprisingly, many claim this to be the first real neural correlate of consciousness.
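The oscillations-as-slots idea can be made concrete with a toy signal. In this sketch (frequencies and amplitudes are illustrative choices, not fits to recorded data), a ~40 Hz gamma rhythm is nested inside an ~8 Hz theta rhythm, so roughly five gamma cycles fit in each theta cycle, giving about five candidate "slots" per theta period:

```python
import math

# Illustrative parameters (not fitted to any dataset).
fs = 1000                                # sampling rate, Hz
t = [i / fs for i in range(fs)]          # one second of time points

theta = [math.sin(2 * math.pi * 8 * ti) for ti in t]   # ~8 Hz theta rhythm
gamma = [math.sin(2 * math.pi * 40 * ti) for ti in t]  # ~40 Hz gamma rhythm

# Gamma amplitude is modulated by theta phase: strongest at the theta peak.
envelope = [(th + 1) / 2 for th in theta]
signal = [th + e * g for th, e, g in zip(theta, envelope, gamma)]

# About 40 / 8 = 5 gamma cycles nest inside each theta cycle --
# five candidate "slots" per theta period for holding separate items.
slots_per_theta = 40 / 8
```

The hypothesis in the paragraph above is that distinct held items (possibly including the "self") occupy distinct gamma slots within the slower theta cycle.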
Suffice it to say, and this part is important, there is nothing that distinguishes the experiential from the action potential.
The beauty of the hypothesis that consciousness is the product of neural activity, and neural activity only, is that it is falsifiable with an N of 1 – and that 1 simply has not been found. Inert matter, such as single hydrogen atoms, does wonderful things when given nearly 14 billion years to interact, decay, explode, form stars, form planets, form lipid bilayers, form replicative machinery… form minds, form life.
It makes little sense that philosophers juke and dodge the slings and arrows of scientific explanation by touting our inability to fully explain a process like consciousness. Accordingly, the article continues, “There are some things that empiricism and science simply cannot answer, and the nature of mind and consciousness may be one of these things.” So in the spirit of philosophy, I propose that the following syllogism remains perfectly logical and intact: Mental experience is the product of neural activity. Neural activity is anchored only to the realm of physical laws. Therefore, mental experience can be explained fully through physical laws. Where’s the evidence to the contrary?
The “Missing Causal Link” between Mind and Brain
Onward. When creationists claim that there are missing links in the evolutionary ladder, it makes most biologists want to jump in front of traffic. In fact, museums showcase hundreds of thousands of these missing links daily and provide touchable evidence for the process of evolution. The same is true about the “missing causal links” between mind and brain. The museums, in this case, are the innumerable clever experiments that repeatedly provide evidence in support of the mind’s neural basis. The last part of this rebuttal will deal with the issue of causality in an attempt to show that the mind is just neural activity. The brain is solely responsible for producing the mind. You are nothing but your neurons, and this is fully measurable. I reiterate these “nothing but” and “is just” statements in their most extreme form, because it would be a scientific injustice not to. More importantly, the evidence gives me its permission.
First, the crux of the argument to the contrary from the same Nerve article we’ve been reviewing:
… while brain structures are active during specific tasks, one ought not suggest that the biological activity in a brain region is the sole cause for a mental event. Rather, the correlation should be stressed… the bolder claims of causality by neuroscientists ought to be reserved for philosophy because there is simply not enough evidence to prove their validity or falsehood… Although we would not exist without our brain, our perceived power of free will and thought about our physical and biological nature is evident, implying that the metaphysical has an equal or larger role in explaining the nature of reality. There is a causal link missing between brain and the mind, and philosophy calls us to be conscious of that while neuroscience presses on deeper into our brain matter for an answer that may not be in the flesh. Empirical evidence for causal relationships is sparse, and while brain scans and studies may be sources of insight on physical mechanisms of thought they can never tell us what ought to be, or provide us with the answer to the hard problem of consciousness.
This entire argument is demonstrably false. That causal link is provided thousands of times a day, and you may have even experienced it for yourself. Anyone who’s ever been “put under” with general anesthesia and then brought back to wakefulness when taken off it has single-handedly settled the score. Anesthesia — often a cocktail of inhaled or injected chemicals — shifts the balance of excitation and inhibition in the brain such that you doze off while a procedure is performed. You wake up when the anesthesia wears off.
What we have here is yet another loss-of-function (consciousness), followed by a gain-of-function (consciousness) experiment, all done millions of times a year, worldwide, explainable solely in physical terms from molecules to behavior. More fundamentally, what we have here yet again is science’s gold standard for claiming causality. What neuroscience is working on today, granted, are the nitty-gritty details of how brain cells fire in particular patterns across various brain regions to make consciousness possible. These details, however, don’t undermine a causal role for neurons with regards to consciousness; they simply support it. That is the staggering power of 100 billion brain cells organized in the right patterns and firing at the right time, and it’s all we need because that’s all there is. “The whole purpose of science is to keep us from mistaking what we’d like to be true for what really is true,” evolutionary biologist Jerry Coyne reminds us.
Additionally, the claim that “empirical evidence for causal relationships is sparse” rests on a deep misunderstanding of what correlation and causation really mean. A correlation is when two variables repeatedly co-occur or vary together, often because both are influenced by some other variable. Hot summer days in New York correlate with more ice cream trucks on the streets. Did the ice cream trucks cause the heat? Obviously not – the two simply correlate, positively.
Causality, by contrast, is established when one event is shown to be both necessary and sufficient to change another. Light causes certain proteins in your retina to change conformation, transducing its wavelength into a pattern of neural firing. No light, no conformational change. Add light back, and the conformational change returns. Repeat. As a control, use mechanical energy (sound) instead of light to try to mimic light’s effects: nothing. Ergo, light causes these changes in protein structure. These are exactly the kinds of experiments neuroscientists are capable of doing. And they do them routinely. Heck, we can do even better than that.
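The ice-cream-truck example can even be put in code. Below is a toy simulation (all numbers made up for illustration) in which daily temperature drives the number of trucks on the street: the two variables correlate strongly, yet intervening on the trucks leaves the temperature untouched, which is precisely the asymmetry that separates correlation from causation.

```python
import random

random.seed(0)

# Toy numbers, purely illustrative: hot days put more ice cream trucks
# on the street, so temperature and truck counts co-vary.
temps = [random.uniform(15, 35) for _ in range(1000)]   # daily highs, deg C
trucks = [2 * t + random.gauss(0, 2) for t in temps]    # trucks track heat

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The two variables are strongly correlated...
print("correlation:", round(corr(temps, trucks), 2))

# ...but an intervention tells the causal story: ban every truck
# (set the counts to zero). Nothing in the model that generates
# temperature depends on trucks, so the temperatures are untouched.
banned = [0.0] * len(trucks)
print("mean temp, trucks banned:", round(sum(temps) / len(temps), 1))
```

The asymmetry is the whole point: flipping the truck variable does nothing to the heat, whereas in the retina experiment flipping the light on and off reliably flips the protein's conformation – necessity and sufficiency, under controls.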
Correlations are becoming an ever-receding pocket of scientific ignorance. In the last two decades, the tools of neuroscience have undergone a dramatic shift and have enabled countless causal dissections of the brain. Can we change a robust behavior by deleting a single gene? Causality: check. How about controlling neurons and behavior specifically with light? Done. Let’s analyze the latter for a moment, because it nicely encapsulates the revolution currently happening in brain science.
In this experiment, neurons were genetically tricked into responding to light. These neurons are located in the amygdala, a teardrop-shaped structure thought to be involved in regulating states of anxiety. Shining light caused a light-sensitive protein called channelrhodopsin to open along the surface of the cell. This opening permitted an influx of ions that turned a specified population of neurons on. Remarkably, by switching brain cells on or off with a few genetic tricks and pulses of light, the authors were able to reversibly control anxiety states in their subjects by manipulating this specific neural circuit within the amygdala.
Today, we can implant electrodes in brains, have them read out the firing patterns of brain cells, and reconstruct with great confidence what those cells were representing. We can even do this in reverse and implant information of sorts by artificially stimulating neural activity through deep-tissue prostheses. Speaking of which, the fields of neural prosthetics and brain-machine interfaces build on the fact that biochemical and electrical signatures comprise the mind. How else would you be able to move a bionic arm with your thoughts?
This fundamental aspect of neuroscience has been applied across the evolutionary ladder to decode multiple levels of cognition, from fruit flies to mice, monkeys to humans. The ability to directly measure states of the mind is a potent one. Humans are no exception; whether we like it or not, consciousness arises by the same rules I just described – brain cells firing in a particular order at a particular time. Those firings are measurable and, therefore, fall within the purview of science.
And so, with your permission, dear reader, I’d like to close with one more trip up the causal ladder, to an example we can all relate to. Drinking alcohol alters your state of consciousness because of how it alters the patterns of neural firing. Alcohol enhances signaling by a molecule called GABA, which, for the sake of simplicity, we can say inhibits neural firing. Your frontal lobes are largely responsible for actively inhibiting particular actions, such as refraining from throwing your phone after a dropped call or kicking your mouse after a failed experiment. GABA inhibits this region to a large extent – thus disinhibiting your actions and making it more likely that you’ll say things you didn’t mean, do things you wouldn’t have done otherwise, drunk text your dad, and so on. It also inhibits your cerebellum – an area partly responsible for your ability to generate fine motor movements. And after alcohol makes its way to your brainstem, the boosted GABA signaling can depress breathing rates, which is a large part of why binge drinking can lead to death: not because the person threw up while sleeping, but because the brain forgot to breathe.
With GABA signaling amplified, you stumble and mumble and breathe drunkenly. And this is just a cross-section of one substance in the brain producing or affecting all sorts of behaviors; keep in mind that a normal brain entails the parallel action of millions of molecules acting on billions of neurons. All of this is traceable at the level of genes, neurons, circuits, and behavior. Just because we label our awareness as “subjective” (or a “Hard Problem”) does not entitle it to immunity from scientific scrutiny and explanation using empirical measures.
As a quick digression, and inevitably, philosophers will respond, “Well, you still can’t pinpoint ‘redness’ in my brain. It’s a lump of tissue. It’s not red. How do you explain, then, my experience of red?” But there is redness in the brain! Philosophers are just using the wrong tools. With the exception of the retina, the brain doesn’t process information in wavelengths. Hence, its information cannot be read out directly by using our eyes, which themselves are wavelength detectors. We have to probe and decode the brain’s native language – electricity and chemicals – in order to see the red in a lump of tissue.
All the staggering transformations that happen between registering red and being aware of red have very physical explanations every step of the way, but measuring them requires the right tools. The eyes of philosophers are not those tools. Electrodes and biochemical interventions are. That is why every time we describe consciousness with words we have to lie a little bit. The pure answer is in the electrophysiological bolts of micro-lightning and biochemical sneezes of neurons – measurability: check. Consequently, the separation between neural tissue and thought is a fictitious one.
I’ll set the goalposts now. To end, I will hold the article’s claims to the same rigorous process mine are held to: Is there evidence, and what would it look like? Can you be wrong, and how would you know you were wrong? Can you measure it, and what measures would you take? Does it make testable predictions, and what would they look like? If you can answer these four questions in the affirmative and provide an example of each, then your claims become empirically testable. They might even test positive for establishing contact with reality. After all, the difference between what the evidence says and what we want it to say is the difference between science and belief; and science is the kind of business you get into if you want to know reality, not if you want to believe in your own version of it.
Excellent quote from a really good article by David Eagleman:
Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?
The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.
Or, put more simply: how can our justice system be slightly less stupid and broken?
Scientists are bipolar masochists in white coats. And we are okay with that.
This post will be somewhat atypical. No science-heavy references, no hard data, no hypotheses — nothing new. It’ll be just one general and optimistic story for the non-specialist, a story and opinion that celebrates neuroscience to the bone.
Imagine taking a scalpel and making an incision on an anesthetized patient’s head. As the blade glides down a shaved patch of gelatinous skin, the blood underneath begins to flow out and glistens a bright crimson. It quickly dries to a rust on the scalpel and gauze. Yellow cubes of fat almost burst at the seams of the skin around them, and white skull finally comes into sight. Flakes of bone fly around as you drill a hole open. As soon as the hole gives way, the pink-grey custard that is your brain appears, engulfed by a spider-web of purple, bloody veins. Congratulations! Before you, finally, is the seat of everything that is You, and there is nothing You-er than your mind, than your brain. So, let’s study You.