What are the critical assumptions of neuroscience?

In light of all the celebration surrounding the discovery of a Higgs-like particle, I found it amusing that nearly 30 years ago Higgs’s theory was rejected by CERN as ‘outlandish’. This got me wondering: just how often is scientific consensus a bar to discovery? Scientists are only human, and as such can be just as prone to blindspots, biases, and herding behaviors as other humans. Clearly the scientific method and scientific consensus (e.g. peer review) are the tools we rely on to surmount these biases. Yet every tool has its misuse, and sometimes the wisdom of the crowd is just the aggregate of all these biases.

At this point, David Zhou pointed out that when scientific consensus leads to rejection of correct viewpoints, it’s often due to the strong implicit assumptions that the dominant paradigm rests upon. Sometimes there are assumptions that support our theories which, due to a lack of either conceptual or methodological sophistication, are not amenable to investigation. Other times we simply don’t see them; when Chomsky famously wrote his review of Skinner’s Verbal Behavior, he simply put together all the pieces of the puzzle that were floating around, and in doing so destroyed a 20-year scientific consensus.

Of course, as a cognitive scientist studying the brain, I often puzzle over what assumptions I critically depend upon to do my work. In an earlier stage of my training, I was heavily inundated with ideas from the “embodied, enactive, extended” framework, where it is common to claim that the essential bias is an uncritical belief in the representational theory of mind. While I do still see problems in mainstream information theory, I’m no longer convinced that an essentially internalist, predictive-coding account of the brain is without merit. It seems to me that the “revolution” of externalist viewpoints turned out to be more of an exercise in house-keeping, moving us beyond overly simplistic “just-so” evolutionary diatribes and empty connectionism, toward concepts from dynamical systems and information theory in the context of cognition.

So, really I’d like to open this up: what do you think are the assumptions neuroscientists cannot live without? I don’t want to shape the discussion too much, but here are a few starters off the top of my head:

  • Nativism: informational constraints are heritable and innate, learning occurs within these bounds
  • Representation: Physical information is transduced by the senses into abstract representations for cognition to manipulate
  • Frequentism: While many alternatives currently abound, for the most part I think many mainstream neuroscientists are crucially dependent on assessing differences in mean and slope. A related issue is a tendency to view variability as “noise”
  • Mental Chronometry: related to the representational theory of mind is the idea that more complex representations take longer to process and require more resources. Thus greater (BOLD/ERP/RT) equals a more complex process.
  • Evolution: for a function to exist, it must have been selected for by natural selection

That’s all off the top of my head. What do you think? Are these essential for neuroscience? What might a cognitive theory look like without these, and how could it motivate empirical research? For me, each of these is in some way quite helpful in providing a framework to interpret reaction-time, BOLD, or other cognition-related data. Have I missed any?

Insula and Anterior Cingulate: the ‘everything’ network or systemic neurovascular confound?

It’s no secret in cognitive neuroscience that some brain regions garner more attention than others. Particularly in fMRI research, we’re all too familiar with certain regions that seem to pop up in study after study, regardless of experimental paradigm. When it comes to areas like the anterior cingulate cortex (ACC) and anterior insula (AIC), the trend is obvious. Generally, when I see the same brain region involved in a wide variety of tasks, I think there must be some very general function which encompasses these paradigms. Off the top of my head, the ACC and AIC are major players in cognitive control, pain, emotion, consciousness, salience, working memory, decision making, and interoception, to name a few. Maybe on a bad day I’ll look at a list like that and think, well, localization is just all wrong, and really what we have is a big fat prefrontal cortex doing everything in conjunction. A paper published yesterday in Cerebral Cortex took my breath away and led me to a third, more sinister option: a serious methodological confound in a large majority of published fMRI papers.

Neurovascular coupling and the BOLD signal: a match not made in heaven

An important line of research in neuroimaging focuses on noise in fMRI signals. The essential problem of fMRI is that, while it provides decent spatial resolution, the data is acquired slowly and indirectly via the blood-oxygenation level dependent (BOLD) signal. The BOLD signal is messy, slow, and extremely complex in its origins. Although we typically assume increasing BOLD signal equals greater neural activity, the details of just what kind of activity (e.g. excitatory vs inhibitory, post-synaptic vs local field) are murky at best. Advancements in multi-modal and optogenetic imaging hold a great deal of promise regarding the signal’s true nature, but sadly we are currently at a “best guess” level of understanding. This weakness means that without careful experimental design, it can be difficult to rule out non-neural contributors to our fMRI signal. Setting aside the worry about what neural activity IS measured by BOLD signal, there is still the very real threat of non-neural sources like respiration and cardiovascular function confounding the final result. This is a whole field of research in itself, and is far too complex to summarize here in its entirety. The basic issue is quite simple though.

End-tidal CO2, respiration, and the BOLD Signal

In a nutshell, the BOLD signal is thought to measure downstream changes in cerebral blood-flow (CBF) in response to neural activity. This relationship, between neural firing and blood flow, is called neurovascular coupling and is extremely complex, involving astrocytes and multiple chemical pathways. Additionally, it’s quite slow: typically one observes a 3-5 second delay between stimulation and BOLD response. This creates our first noise-related issue: the time between successive brain volumes, or repetition time (TR), must be chosen to detect signals at this timescale. This means we sample from our participant’s brain slowly, typically every 3-5 seconds, and construct our paradigms in ways that respect the natural time lag of the BOLD signal. Stimulate too fast, and the vasculature doesn’t have time to respond. Careful stimulation timing also helps prevent our first simple confound: pulse and respiration rates tend to oscillate at slightly slower frequencies (approximately every 10-15 seconds). This is a good thing; it means that so long as your design is well controlled (i.e. your events are properly staggered and your baseline is well defined) you shouldn’t have to worry too much about confounds. But that’s our first problematic assumption; consider, for example, paradigms that use long blocks of obscure tasks like “decide how much you identify with these stimuli”. If cognitive load differs between conditions, or your groups (for example, a PTSD and a control group) react differently to the stimuli, respiration and pulse rates might easily begin to overlap your sampling frequency, confounding the BOLD signal.
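To make the aliasing worry concrete, here’s a toy calculation. To be clear, this is my own illustration, not from any paper, and the breathing rate and TR below are assumed values:

```python
# Toy illustration (my numbers, not from any study): how a physiological
# rhythm sampled once per TR can alias into the slow band where task and
# resting-state effects live.

def aliased_freq(f_signal, tr):
    """Apparent frequency (Hz) of a sinusoid of true frequency f_signal
    when sampled every `tr` seconds (standard aliasing formula)."""
    fs = 1.0 / tr                          # sampling rate in Hz
    return abs(f_signal - fs * round(f_signal / fs))

tr = 3.0      # one volume every 3 s -> Nyquist frequency of 1/6 Hz
f_resp = 0.3  # assumed breathing rate of ~18 breaths/min (0.3 Hz)

print(f"{f_resp} Hz breathing appears at ~{aliased_freq(f_resp, tr):.3f} Hz "
      f"(a ~30 s oscillation) when sampled every {tr} s")
```

In other words, a rhythm far above the Nyquist limit doesn’t vanish; it folds back down into exactly the slow frequency range where our effects of interest live.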

But, you say, my experiment is well controlled, and there’s no way my groups are breathing THAT differently! Fair enough, but this leads us to our next problem: end-tidal CO2. Without getting into the complex physiology, end-tidal CO2 is a by-product of respiration. When you hold your breath, blood CO2 levels rise dramatically. CO2 is a potent vasodilator, meaning it opens blood vessels and increases local blood flow. You’ve probably guessed where I’m going with this: hold your breath in the fMRI scanner and you get massive alterations in the BOLD signal. Your participants don’t even need to match the sampling frequency of the paradigm to confound the BOLD; they simply need to breathe at slightly different rates in each group or condition, and suddenly your results are full of CO2-driven false positives! This is a serious problem for any kind of unconstrained experimental design, especially those involving poorly conceptualized social tasks or long periods of free activity. Now imagine that certain regions of the brain might respond differently to levels of CO2.
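For the curious, the standard first-pass fix is nuisance regression: record respiration and CO2 traces alongside the scan and project them out of each voxel’s time series. Here’s a minimal sketch with simulated data (the traces, contamination weights, and series length are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of volumes (illustrative)

# Hypothetical nuisance traces: a respiration-belt signal and an end-tidal
# CO2 trace (simulated noise here, stand-ins for real recordings).
resp = rng.standard_normal(n)
co2 = rng.standard_normal(n)

# Simulated voxel time series contaminated by both physiological signals.
bold = 2.0 * resp + 1.5 * co2 + rng.standard_normal(n)

# Ordinary least squares: fit the nuisance model, keep the residuals.
X = np.column_stack([np.ones(n), resp, co2])    # intercept + covariates
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ beta                       # "denoised" time series

# The residuals are numerically orthogonal to the regressed-out covariates.
print(abs(np.corrcoef(cleaned, resp)[0, 1]))    # ~0
```

Keep in mind that this only works as cleanly as the simulation suggests when the contamination really is a linear function of the recorded traces; slower vascular saturation effects are generally thought to survive this kind of regression.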

This image is from Chang & Glover’s paper, “Relationship between respiration, end-tidal CO2, and BOLD signals in resting-state fMRI”. Here they measured both CO2 and respiration frequency during a standard resting-state scan. The image displays the results of a group-level regression of these signals with BOLD. I’ve added blue circles around the areas that respond most strongly. Without consulting an atlas, we can clearly see that bilateral anterior insula extending upwards into parietal cortex, anterior cingulate, and medial-prefrontal regions are hugely susceptible to respiration and CO2. This is pretty damning for resting-state fMRI, and it makes sense given that resting-state fluctuations occur at roughly the same rate as respiration. But what about well-controlled event-related designs? Might variability in neurovascular coupling cause a similar pattern of response? Here is where Di et al’s paper lends a somewhat terrifying result:

Di et al recently investigated the role of vascular confounds in fMRI by administering a common digit-symbol substitution task (DSST), a resting-state scan, and a breath-holding paradigm. Signals related to resting state and breath-holding were then extracted and entered into multiple regression with the DSST-related activations. This allowed Di et al to estimate which brain regions were most influenced by low-frequency fluctuation (ALFF, a common resting-state measure) and by purely vascular sources (breath-holding). In the figure above, the regions marked with blue arrows were the most suppressed, meaning the signal explained by the event-related model was significantly correlated with the covariates; regions in red are where the signal was significantly improved by removal of the covariates. The authors conclude that “(results) indicated that the adjustment tended to suppress activation in regions that were near vessels such as midline cingulate gyrus, bilateral anterior insula, and posterior cerebellum.” It seems that indeed, our old friends the anterior insula and cingulate cortex are extremely susceptible to neurovascular confounds.

What does this mean for cognitive neuroscience? For one, it should be clear that even well-controlled fMRI designs can exhibit such confounds. This doesn’t mean we should throw the baby out with the bathwater, though; some designs are better than others. Thankfully, it’s pretty easy to measure respiration with most scanners, so it is probably a good idea at minimum to check whether one’s experimental conditions do indeed create differential respiration patterns. Further, we need to be especially cautious in cases like meditation or clinical fMRI, where special participant groups may have different baseline respiration rates or stronger parasympathetic responses to stimuli. Sadly, I’m afraid that, looking back, these findings greatly limit our conclusions in any design that did not control for these issues. Remember that the insula and ACC are currently cognitive neuroscience’s hottest regions. I’m not even going to get into resting state, where these problems are all magnified tenfold. I’ll leave you with this image from Neuroskeptic, estimating the year’s most popular brain regions:

Are those spikes publication fads, every-task regions, or neurovascular artifacts? You be the judge.

Edit: As many of you had questions or comments regarding the best way to deal with respiration-related issues, I spoke briefly with resident noise expert Torben Lund at yesterday’s lab meeting. Removal of respiratory noise is fairly simple, but the real problem is end-tidal CO2. According to Torben, most noise experts agree that regression techniques only partially remove the artifact, and that an unknown amount is left behind even following signal regression. This may be due to slow vascular saturation effects that build up and remain irrespective of sheer breathing frequency. A very tricky problem indeed, and certainly worth researching.
Note: credit goes to my methods teacher and fMRI noise expert Torben Lund, and to CFIN neurobiologist Rasmus Aamand, for introducing and explaining the basis of the CO2/respiration issue to me. It was Rasmus in particular whose sharp comments led to my including respiration and pulse measures in my last meditation project.

New Meditation Study in Neuroimage: “Meditation training increases brain efficiency in an attention task”

Just a quick post to give my review of the latest addition to imaging and mindfulness research. A new article by Kozasa et al, slated to appear in Neuroimage, investigates the neural correlates of attention processing in a standard color-word Stroop task. A quick overview of the article reveals it is all quite standard; two groups matched for age, gender, and years of education are administered a standard RT-based (i.e. speeded) fMRI paradigm. One group has an average of 9 years’ “meditation experience”, described as “a variety of OM (open monitoring) or FA (focused attention) practices such as “zazen”, mantra meditation, mindfulness of breathing, among others”. We’ll delve into why this description should give us pause in a moment; for now, let’s look at the results.

Amplitude of bold responses in the lentiform nucleus, medial frontal gyrus, middle temporal gyrus and precentral gyrus during the incongruent and congruent conditions in meditators and non-meditators.
Results from incon > con, non-meditators vs meditators

In a nutshell, the authors find that meditation practitioners show faster reaction times with reduced BOLD signal for the incongruent (compared to congruent and neutral) condition only. The regions found to be more active for non-meditators compared to meditators are the (right) “lentiform nucleus, medial frontal gyrus, and pre-central gyrus”. As this is not accompanied by any difference in accuracy, the authors interpret the finding as demonstrating that “meditators may have maintained the focus in naming the colour with less interference of reading the word and consequently have to exert less effort to monitor the conflict and less adjustment in the motor control of the impulses to choose the correct colour button.” In the conclusion, the authors review related findings and mention that differences in age could have contributed to the effect.

So, what are we to make of these findings? As is my usual style, I’ll give a bulleted review of the problems that immediately stand out, and then some explanation afterwards. I’ll preface my critique by thanking the authors for their hard work; my comments are intended only for the good of our research community.

The good:

  • Sensible findings; reaction-time and BOLD differences are demonstrated in areas previously implicated in meditation research
  • Solid, easy to understand behavioral paradigm
  • Relatively strong main findings (p < .0001)
  • A simple replication. We like replications!
The bad:
  • Appears to report uncorrected p-values
  • Study claims to “match samples for age”, yet no statistical test demonstrating equivalence is shown. Qualitatively, the groups seem different enough to be cause for worry (77.8% vs 65% college graduates). Always be suspicious when a test is not given!
  • Extremely sparse description of style of practice, no estimate of daily practice hours given.
  • Reaction-time based task with no active control
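On the matching point: a test of this kind takes one line to run. Here’s a sketch of a two-proportion z-test on the reported graduate rates; note that the group sizes (n = 20 per group) are hypothetical, since I don’t have the paper’s actual ns in front of me:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic with a pooled standard error."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # pooled SE of the difference
    return (p1 - p2) / se

# Reported graduate rates; the group sizes (20 each) are hypothetical.
z = two_prop_z(0.778, 20, 0.65, 20)
print(f"z = {z:.2f}")  # |z| < 1.96 -> no detectable difference at alpha = .05
```

Of course, a non-significant z only fails to detect a difference; it does not demonstrate equivalence, which is exactly why the test (or better, an equivalence test) should be reported.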

I’ll preface my conclusion with something Sara Lazar, a meditation researcher and neuroimaging expert at Harvard’s Massachusetts General Hospital, told me last summer: we need to stop going for the “low-hanging fruit of meditation research”. There are now over 20 published cross-sectional, reaction-time-based fMRI studies of “meditators” and “non-meditators”. Compare that to the incredibly sparse number of longitudinal, actively controlled studies, and it is clear that we need to stop replicating these findings and start determining what they actually tell us. Why do we need active controls in our meditation studies? For one thing, we know that reaction-time-based tests are heavily biased by the amount of effort one expends on the task. Effort is in turn influenced by task demands (e.g. how you treat your participants, expectations surrounding the experiment). To give one in-press example, my colleague Christian Gaden Jensen at the Neurobiology Research Unit in Copenhagen recently conducted a study demonstrating just how strong this confounding effect can be.

To briefly summarize, Christian recruited over 150 people for randomization to four experimental groups: mindfulness-based stress reduction (MBSR), non-mindfulness stress reduction (NMSR), wait-listed controls, and financially motivated wait-listed controls. This last group is the truly interesting one; they were told that if they had top performance on the experimental tasks (a battery of classical reaction-time-based and unspeeded perceptual-threshold tasks) they’d receive a reward of approximately $100. When Christian analyzed the data, he found that the financial incentive eliminated all reaction-time-based differences between the MBSR, NMSR, and financially motivated groups! It’s important to note that this study, fully randomized and longitudinal, showed something not reflected in the bulk of published studies: that meditation may actually train more basic perceptual sensitivities rather than top-down control. This is exactly why we need to stop pursuing the low-hanging fruit of uncontrolled experimental design; it’s not telling us anything new! Meditation research is no longer exploratory.

In addition to these issues, there is another problem more specific to meditation research: the extremely sparse description of the practice itself. Less than one sentence total, with no quantitative data! In this study we are not even told what the daily practice actually consists of, or its quality or length. These practitioners report an average of 9 years’ practice, yet that could be 1 hour per week of mantra meditation or 12 hours a week of non-dual zazen! These are not identical processes, and our lack of knowledge about this sample severely limits our ability to assess the meaning of these findings. For the past two years (and probably longer) of the Mind & Life Summer Research Institute, Richard Davidson and others have repeatedly stated that we must move beyond studying meditation as “a loose practice of FA and OM practices including x, y, z, & other things”. Willoughby Britton suggested at a panel discussion that all meditation papers need to have at least one contemplative scholar on them or risk rejection. It’s clear that this study was most likely not reviewed by anyone with a serious academic background in meditation research.

My supervisor Antoine Lutz and his colleague John Dunne, authors of the paper that launched the “FA/OM” distinction, have since stated emphatically that we must go beyond these general labels and start investigating effects of specific meditation practices. To quote John, we need to stop treating meditation like a “black box” if we ever want to understand the actual mechanisms behind it. While I thank the authors of this paper for their earnest contribution, we need to take this moment to be seriously skeptical. We can only start to understand processes like meditation from a scientific point of view if we are willing to hold them to the highest of scientific standards. It’s time for us to start opening the black box and looking inside.

Switching between executive and default mode networks in posttraumatic stress disorder [excerpts and notes]

From: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895156/?tool=pubmed

Daniels et al, 2010

We decided to use global scaling because we were not analyzing anticorrelations in this paradigm and because data presented by Fox and colleagues66 and Weissenbacher and coworkers65 indicate that global scaling enhances the detection of system-specific correlations and doubles connection specificity. Weissenbacher and colleagues65 compared different preprocessing approaches in human and simulated data sets and recommend applying global scaling to maximize the specificity of positive resting-state correlations. We used high-pass filtering with a cut-off at 128 seconds to minimize the impact of serial autocorrelations in the fMRI time series that can result from scanner drift.

Very useful methodological clipping!
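For readers who haven’t seen it in practice, here is a rough sketch of the kind of high-pass filtering the excerpt describes: regressing out a low-frequency discrete cosine basis, in the style SPM uses. The TR and run length below are my own illustrative assumptions; only the 128-second cutoff comes from the quote.

```python
import numpy as np

tr = 2.0        # seconds per volume (assumed)
n = 256         # volumes in the run (assumed)
cutoff = 128.0  # high-pass cutoff in seconds, from the excerpt

# Number of low-frequency cosine regressors spanning periods >= cutoff.
k = int(np.floor(2 * n * tr / cutoff)) + 1
t = np.arange(n)
dct = np.array([np.cos(np.pi * j * (2 * t + 1) / (2 * n))
                for j in range(1, k)]).T

def highpass(y):
    """Remove slow drifts by projecting out the low-frequency DCT basis."""
    X = np.column_stack([np.ones(n), dct])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# A linear scanner drift spanning 5 signal units is almost entirely removed.
drift = np.linspace(0.0, 5.0, n)
print(np.ptp(highpass(drift)))  # small residual, far below the original 5
```

The design choice here is the cutoff: too short and you filter out slow task-related signal; too long and scanner drift and serial autocorrelation leak through, which is why 128 seconds is such a common compromise.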

The control condition was a simple fixation task, requiring attention either to the response instruction or to a line of 5 asterisks in the centre of the screen. We chose this control task to resemble the activation task as closely as possible; it therefore differed considerably from previous resting state analyses because it was relatively short in duration and thus necessitated fast switches between the control condition and the activation task. It also prompted the participants to keep their eyes open and fixated on the stimulus, which has been shown to result in stronger default mode network activations than the closed-eyes condition.60

Good to remember: closed-eyed resting states result in weaker default mode activity.

To ensure frequent switching between an idling state and task-induced activation, we used a block design, presenting the activation task (8 volumes) twice interspersed with the fixation task (4 volumes) within each of 16 imaging runs. Each task was preceded by an instruction block (4 volumes duration), amounting to a total acquisition of 512 volumes per participant. The order of the working memory tasks was counterbalanced between runs and across participants. Full details of this working memory paradigm are provided in the study by Moores and colleagues.6 There were 2 variations of this task in each run concerning the elicited button press response; however, because we were interested in the effects of cognitive effort on default network connectivity, rather than specific effects associated with a particular variation of the task, we combined the response variations to model a single “task” condition for this study. The control condition consisted of periods of viewing either 5 asterisks in the centre of the screen or a notice of which variation of the task would be performed next.
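A quick bookkeeping check on the volume counts in this excerpt. Only the block lengths (instruction 4, task 8, fixation 4), the 16 runs, and the 512-volume total come from the paper; the exact within-run ordering is my assumption.

```python
# Bookkeeping sketch for the block design described above.

INSTR, TASK, FIX = "instr", "task", "fix"

# One run: two task blocks, each preceded by an instruction block and
# followed by a fixation block -> 32 volumes per run (assumed ordering).
run = ([INSTR] * 4 + [TASK] * 8 + [FIX] * 4) * 2

session = run * 16        # 16 imaging runs per participant
print(len(session))       # 512, matching the reported total acquisition
```

The arithmetic works out: 32 volumes per run times 16 runs reproduces the 512 volumes per participant reported in the methods.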

Psychophysiological interaction analyses are designed to measure context-sensitive changes in effective connectivity between one or more brain regions67 by comparing connectivity in one context (in the current study, a working memory updating task) with connectivity during another context (in this case, a fixation condition). We used seed regions in the mPFC and PCC because both these nodes of the default mode network act independently across different cognitive tasks, might subserve different subsystems within the default mode network and have both been associated with alterations in PTSD.8

This paradigm is very interesting. The authors have basically administered a battery of working memory tasks with interspersed rest periods, and carried out ROI inter-correlation, or seed analysis. Using this simple approach, a wide variety of experimenters could investigate task-rest interactions using their existing data sets.
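For those who haven’t run one, a PPI model essentially boils down to adding the product of a (centered) task-context regressor and the seed region’s time series to the design matrix. A bare-bones sketch with simulated signals follows; the block lengths and the seed trace are illustrative, and I’ve skipped the deconvolution step used in proper implementations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512  # volumes, matching the paradigm above

# Psychological regressor: a centered task-vs-fixation context vector.
# The 16-volume alternation here is illustrative, not the paper's timing.
psych = np.tile(np.repeat([1.0, -1.0], 16), n // 32)

# Physiological regressor: the seed region's time series (simulated here;
# in the study this would be the extracted mPFC or PCC signal).
physio = rng.standard_normal(n)

# The PPI term is the elementwise product of context and seed signal.
ppi = psych * physio

# Full design: intercept, both main effects, and the interaction of interest.
X = np.column_stack([np.ones(n), psych, physio, ppi])
print(X.shape)  # (512, 4)
```

Including both main effects alongside the interaction is what lets the PPI coefficient be read as a context-dependent change in coupling, rather than a main effect of task or of seed activity.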


The limitations of our results predominantly relate to the PTSD sample studied. To investigate the long-lasting symptoms that accompany a significant reduction of the general level of functioning, we studied alterations in severe, chronic PTSD, which did not allow us to exclude patients taking medications. In addition, the small sample size might have limited the power of our analyses. To avoid multiple testing in a small sample, we only used 2 seed regions for our analyses. Future studies should add a resting state scan without any visual input to allow for comparison of default mode network connectivity during the short control condition and a longer resting state.

The different patterns of connectivity imply significant group differences with task-induced switches (i.e., engaging and disengaging the default mode network and the central-executive network).

My response to Carr and Pinker on Media Plasticity

Our ongoing discussion regarding the moral panic surrounding Nicholas Carr’s book The Shallows continues over at Carr’s blog today, with his recent response to Pinker’s slamming of the book. I maintain that there are good and bad (frightening!) things in both accounts: namely, Pinker’s stolid refusal to acknowledge the research I’ve based my entire PhD on, and Carr’s endless fanning of a one-sided moral panic.

Excerpt from Carr’s Blog:

Steven Pinker and the Internet

And then there’s this: “It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people.” Exactly. And that’s another cause for concern. Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study. Patricia Greenfield, a professor of developmental psychology at UCLA, warned in a Science article last year that our growing use of screen-based media appears to be weakening our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”

As someone who has enjoyed and learned a lot from Steven Pinker’s books about language and cognition, I was disappointed to see the Harvard psychologist write, in Friday’s New York Times, a cursory op-ed column about people’s very real concerns over the Internet’s influence on their minds and their intellectual lives. Pinker seems to dismiss out of hand the evidence indicating that our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts. He goes so far as to assert that such media “are the only things that will keep us smart.” And yet the evidence he offers to support his sweeping claim consists largely of opinions and anecdotes, along with one very good Woody Allen joke.

Right here I would like to point out the kind of leap Carr is making. I’d really like a closer look at the supposed evidence demonstrating that “our intensifying use of the Net and related digital media may be reducing the depth and rigor of our thoughts.” This is a huge claim! How does one define the ‘depth’ and ‘rigor’ of our thoughts? I know of exactly one peer-reviewed, high-impact paper demonstrating a loss of specifically executive function in heavy-media multitaskers. While there is evidence that, generally speaking, multitasking can interfere with some forms of goal-directed activity, I am aware of no papers directly linking specific forms of internet behavior to a drop in executive function. Furthermore, the HMM paper included in its measure of multitasking ‘watching TV’, ‘viewing funny videos’, and ‘playing videogames’. I don’t know about you, but for me there is definitely a difference between ‘work’ multitasking, in which I focus and work through multiple streams, and ‘play’ multitasking, in which I might casually surf the net while watching TV. The second claim is worse: what exactly is ‘depth’? And how do we link it to executive functioning?

Is Carr claiming people with executive function deficits are incapable of, or impaired in, thinking creatively? If it takes me 10 years to publish a magnum opus, have I thought less deeply than the author who cranks out a feature-length popular novel every 2 years? Depth involves a normative judgment of what separates ‘good’ thinking from ‘bad’ thinking, and to imply there is some kind of peer-reviewed consensus here is patently false. In fact, here is a recent review paper on fMRI creativity research (is this depth?) indicating that the existing research is so incredibly disparate and poorly defined as to be untenable. That’s the problem with Carr’s claims: he oversimplifies both the diversity of internet usage and the existing research on executive and creative function. To be fair to Carr, he does go on to do a fair job of dismantling Pinker’s frighteningly dogmatic rejection of generalizable brain-plasticity research:

One thing that didn’t surprise me was Pinker’s attempt to downplay the importance of neuroplasticity. While he acknowledges that our brains adapt to shifts in the environment, including (one infers) our use of media and other tools, he implies that we need not concern ourselves with the effects of those adaptations. Because all sorts of things influence the brain, he oddly argues, we don’t have to care about how any one thing influences the brain. Pinker, it’s important to point out, has an axe to grind here. The growing body of research on the adult brain’s remarkable ability to adapt, even at the cellular level, to changing circumstances and new experiences poses a challenge to Pinker’s faith in evolutionary psychology and behavioral genetics. The more adaptable the brain is, the less we’re merely playing out ancient patterns of behavior imposed on us by our genetic heritage.

Here is my response, posted on Nick’s blog:

Hi Nick,

As you know from our discussion at my blog, I’m not really a fan of the extreme views given by either you or Pinker. However, I applaud the thorough rebuttal you’ve given here to Steven’s poorly researched response. As someone doing my PhD in neuroplasticity and cognitive technology, it absolutely infuriated me to see Steven completely handwave away a decade of solid research showing generalizable cognitive gains from various forms of media practice. To simply ignore findings from, for example, the Bavelier lab, that demonstrate reliable and highly generalizable cognitive and visual gains and plasticity borders on the unethically dogmatic.

Pinker isn’t well known for being flexible within cognitive science, however; he’s probably the only person even more dogmatic about nativist modularism than Fodor. Unfortunately, Steven enjoys a large public following, and his work has really been embraced by the anti-religion ‘brights’ movement. While on some levels I appreciate this movement’s desire to promote rationality, I cringe at how great scholars like Dennett and Pinker seem totally unwilling to engage with the expanding body of research that casts a great deal of doubt on the 1980s-era cogsci they built their careers on.

So I give you kudos there. I close as usual, by saying that you’re presenting a ‘sexy’ and somewhat sensationalistic account that while sure to sell books and generate controversy, is probably based more in moral panic than sound theory. I have no doubt that the evidence you’ve marshaled demonstrates the cognitive potency of new media. Further, I’m sure you are aware of the heavy-media multitasking paper demonstrating a drop in executive functioning in HMMs.

However, you neglect in the posts I’ve seen to emphasize what those authors clearly did: that these findings are not likely to represent a true loss of function but rather are indicators of a shift in cognitive style. Your unwillingness to declare the normative element in your thesis regarding ‘deep thought’ is almost as chilling as Pinker’s total refusal to acknowledge the growing body of plasticity research. Simply put, I think you are aware that you’ve conflated executive processing with ‘deep thinking’, and are not really making the case that we know to be true.

Media is a tool like any other. Its outcome measures are completely dependent on how we use it and on our individual differences. You could make this case quite well with your evidence, but you seem to embrace the moral panic surrounding your work. It’s obvious that certain patterns, including the ones probably driving your collected research, will play on our plasticity to create cognitive differences. Plasticity is limited, however, and you really don’t play on the most common theme in the mental-training literature: balance and trade-off. Your failure to acknowledge the economical and often conservative nature of the brain forces me to lump your work in with the decade that preceded your book, in which it was proclaimed that violent video games and heavy metal music would rot our collective minds. These things didn’t happen, except in those who were already at high risk, and furthermore they produced unanticipated cognitive gains. I think if you want to be on the ‘not wrong’ side of history, you may want to introduce a little flexibility to your argument. I guess if it makes you feel better, for many in the next generation of cognition researchers, it’s already too late for a dogmatic thinker like Pinker.

Final thoughts?

Google Wave for Scholarly Co-authorship: excerpt from Neuroplasticity and Consciousness Abstract

Gary Williams and I are working together on a paper investigating consciousness and neuroplasticity. We’re using Google Wave for this collaboration, and I must say it is an excellent co-authorship tool. There is nothing quite so neat as watching your ideas flow and meld together in real time. There are now new built-in document templates that make these kinds of projects a blast. As an added bonus, all edits are identified and tracked in real time, letting you easily keep track of who wrote what. One of the most surprising things to come out of this collaboration is the newness of the thoughts. Whatever it is we end up arguing, it is definitely not reducible to the sum of its parts. As a teaser, I thought I’d post a thread from the wave I made this morning. This is basically just me rambling on about consciousness and plasticity after reading the results of our wave. I wish I could post the movie of our edits, but that will have to wait for the paper’s submission.

I have an idea I want to work in that was provoked by this paper:

Somewhere in here I still feel a nagging paradox, but I can’t seem to put my finger on it. Maybe I’m simply trying to explain something I don’t have an explanation for. I’m not sure. Consider this a list of thoughts that may or may not have any relationship to the kind of account we want to make here.

They basically show that different synesthetic experiences have different neural correlates in structural brain matter. I think it would be nice to tie our paper to the (likely) focus of the other papers: the idea of changing qualia / changing NCCs. Maybe we can argue that, due to neural plasticity, we should not expect ‘neural representations’ for sensory experience to be identical between any two adults; rather, we should expect that every individual develops their own unique representational qualia that are partially ineffable. Then we can argue that this is precisely why we must rely on narrative scaffolding to make sense of the world; it is only through practice with narrative, engendered by frontal plasticity, that we can understand the statistical similarities between our qualia and those of others. Something is not quite right in this account though… and our abstract is basically fine as is.

So, I have my own unique qualia that are constantly changing; my qualia and NCCs are in dynamic flux with one another. However, my embodiment pre-configures my sensory experience to have certain common qualities across the species. Narrative explanations of the world are grounded in capturing this intersubjectivity; they are linguistic representations of individual sense impressions woven together by cultural practices and schemas. What we want to say is that I am able to learn about the world through narrative practice precisely because I am able to map my own unique sensory representations onto those of others.

I guess that last part of what I said is still weak, but it seems like this could be a good element to explore in the abstract. It keeps us from straying too far from the angle of the call, though, maybe. I can’t figure out exactly what I want to say. There are a few elements:

  • Narratives are co-created, coherent, shareable, complex representations of the world that encode temporality, meaning, and intersubjectivity.
  • I’m able to learn about these representations of the world through narrative practice; by mapping my own unique dynamic sensory experience to the sensory and folk psychological narratives of others.
  • Narratives encode sensory experience in ways that transcend the limits of personal qualia; they are offloaded and are no longer dynamic in the same way.
  • Sensory experience is in constant flux and can be thrown out of alignment with narrative, as in the case of much psychopathology.
  • I need some way to structure this flux; narrative is intersubjective and it provides second order qualia??
  • Narrative must be plastic as it is always growing; the relations between events, experiences, and sensory representations must always be shifting. Today I may really enjoy the smell of flowers and all the things that come with them (memory of a past girlfriend, my enjoyment of things that smell sweet, the association I have with hunger). But tomorrow I might get buried alive in some flowers; now my sensory representation for flowers is going to have all new associations. I may attend to a completely different set of salient factors; I might find that the smell now reminds me of a grave, that I remember my old girlfriend was a nasty bitch, and that I’m allergic to sweet things. This must be reflected in the connective weights of the sensory representations; the overall connectivity map has been altered because a node (the flower node) has been drastically altered by a contra-narrative sensory trauma.
  • I think this is a crucial account, and it helps explain the role of the default mode in consciousness. On this account, the DMN is the mechanism driving reflective, spontaneous narrativization of the world. These oscillations are akin to the constant labeling and scanning of my sensory experience. That they persist in sleep probably indicates that this process is highly automatic and involved in memory formation. As introspective thoughts begin to gain coherency and cluster together, they gain greater roles in my overall conscious self-narrative.
  • So I think this is what I want to say: our prefrontal default mode system is in constant flux. The nodes are all plastic, and so is the pattern of activations between them. This area is fundamentally concerned with reflective self-relatedness and probably develops through childhood interaction. Further, there is an important role for control here. I think that a primary function of social-constructive brain areas is the control of action. Early societies developed complex narrative rule systems precisely to control and organize group action. This allowed us to transcend simple brute force, to begin to coordinate action, and to specialize into various agencies. The medial prefrontal cortex, the central node, fundamentally invoked in acts of social cognition and narrative comprehension, has massive reciprocal connectivity to limbic areas, and also to prefrontal areas concerned with reward and economic decision making.
  • We need a plastic default mode precisely to allow for the kinds of radical enculturation we go through during development. It is quite difficult to teach an infant, born with the same basic equipment as a caveman, the intricacies of mathematics and philosophy. Clearly narrative comprehension requires a massive amount of learning; we must learn all of the complex cultural nuances that define us as modern humans.
  • Maybe sensorimotor coupling and resonance allow for the simulation of precise spatiotemporal activity patterns. This intrinsic activity is like a constant ‘reading out’ of the dynamic sensory representations that are being constantly updated through neuroplasticity; whatever the totality of the connection weights, that is my conscious narrative of my experience.
  • Back to the issue of control. It’s clear to me that the prefrontal default system is highly sensitive to intersubjective or social information/cues. I think there is really something here about offloading intentions, which are relatively weak constructions, into the group, where they can be collectively acted upon (like in the drug addict/rehab example). So maybe one role of my narration system is simply to vocalize my sensory experience (I’m craving drugs. I can’t stop craving drugs) so that others can collectively act on them.
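The ‘flower node’ idea above can be sketched as a toy weighted graph whose association weights get re-learned after a contra-narrative event. To be clear, this is purely an illustrative sketch and not anything from our paper: the node names, the numeric weights, and the `update_node` helper are all invented for the example.

```python
# Toy sketch of the 'flower node' example: a sensory node's association
# weights are blended toward a new, trauma-driven set of associations.
# All names and numbers here are illustrative, not empirical.

def update_node(graph, node, new_assocs, learning_rate=1.0):
    """Blend one node's association weights toward new associations.

    learning_rate = 1.0 fully overwrites the old weights; smaller
    values model a more conservative, economical brain that only
    partially re-weights existing associations.
    """
    old = graph.get(node, {})
    updated = dict(old)  # associations not mentioned in new_assocs persist
    for assoc, weight in new_assocs.items():
        updated[assoc] = (1 - learning_rate) * old.get(assoc, 0.0) + learning_rate * weight
    graph[node] = updated
    return graph

# Before: 'flowers' carries pleasant associations.
sensory_map = {
    "flowers": {"old girlfriend": 0.8, "sweetness": 0.9, "hunger": 0.4},
}

# After a contra-narrative trauma (buried alive in flowers), the same
# node acquires a very different set of salient weights.
update_node(
    sensory_map, "flowers",
    {"grave": 0.9, "old girlfriend": -0.6, "allergy": 0.7},
    learning_rate=0.5,
)
```

The point of the sketch is just the structural claim in the bullet: the overall connectivity map changes because one node’s weights change, while untouched associations (here, ‘sweetness’ and ‘hunger’) persist unless further experience revises them.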

Well there you have it. I have a feeling this is going to be a great paper. We’re going to try and flip the whole debate on its head and argue for a central role of plasticity in embodied and narrative consciousness. It’s great fun to be working with Gary again; his mastery of philosophy of mind and phenomenology is quite fearsome, and we’ve been developing these ideas forever. I’ll be sure to post updates from GWave as the project progresses.

Snorkeling ’the shallows’: what’s the cognitive trade-off in internet behavior?

I am quite eager to comment on the recent explosion of e-commentary regarding Nicholas Carr’s new book. Bloggers have already done an excellent job summarizing the response to Carr’s argument. Further, Clay Shirky and Jonah Lehrer have both argued convincingly that there’s not much new about this sort of reasoning. I’ve also argued along these lines, using the example of language itself as a radical departure from pre-linguistic living. Did our predecessors worry about their brains as they learned to represent the world with odd noises and symbols?

Surely they did not. And yet we can also be sure that the brain underwent a massive revolution following the acquisition of language. Chomsky’s linguistics would of course obscure this fact, preferring us to believe that our linguistic abilities are the amalgamation of things we already possessed: vision, problem solving, auditory and acoustic control. I’m not going to spend too much time arguing against the modularist view of cognition, however; chances are, if you are here reading this, you are already pretty convinced that the brain changes in response to cultural adaptations.

It is worth sketching out a stock Chomskyan response, however. Strict nativists, like Chomsky, hold that our language abilities are the product of an innate grammar module. Although typically agnostic about the exact source of this module (it could have been a genetic mutation, for example), nativists argue that plasticity of the brain has no potential other than slightly enhancing or decreasing our existing abilities. You get a language module, a cognition module, and so on, and you don’t have much choice as to how you use that schema or what it does. The development of language, on this view, wasn’t something radically new that changed the brain of its users but rather a novel adaptation of things we already had and still have.

To drive home the point, it’s not surprising that notable nativist Steven Pinker is quoted as simply not buying the ‘changing our brains’ hypothesis:

“As someone who believes both in human nature and in timeless standards of logic and evidence, I’m skeptical of the common claim that the Internet is changing the way we think. Electronic media aren’t going to revamp the brain’s mechanisms of information processing, nor will they supersede modus ponens or Bayes’ theorem. Claims that the Internet is changing human thought are propelled by a number of forces: the pressure on pundits to announce that this or that “changes everything”; a superficial conception of what “thinking” is that conflates content with process; the neophobic mindset that “if young people do something that I don’t do, the culture is declining.” But I don’t think the claims stand up to scrutiny.”

Pinker makes some good points; I agree that a lot of hype is driven by the kinds of thinking he mentions. Yet I do not at all agree that electronic media cannot and will not revamp our mechanisms for information processing. In contrast to the nativist account, I think we’ve better reason than ever to suspect that the relation between brain and cognition is not 1:1 but rather dynamic, evolving with us as we develop new tools that stimulate our brains in unique and interesting ways.

The development of language massively altered the functioning of our brain. Given the ability to represent the world externally, we no longer needed to rely on perceptual mechanisms in the same way. Our ability to discriminate among various types of plants, or sounds, is clearly subpar compared to that of our non-linguistic brethren. And so we come full circle. The things we do change our brains. And it is the case that our brains are incredibly economical. We know, for example, that only hours after limb amputation, our somatosensory neurons invade the dormant cells, reassigning them rather than letting them die off. The brain is quite massively plastic; Nicholas Carr certainly gets that much right.

Perhaps the best way to approach this question is with an excerpt from social media. I recently asked my fellow tweeps:

To which an astute follower replied:

Now, I do realize that this is really the central question in the ‘shallows’ debate. Moving from the basic fact that our brains are quite plastic, we all readily accept that we’re becoming the subject of some very intense stimulation. Most social media or general internet users shift rapidly from task to task, tweet to tweet. In my own workflow, I may open dozens and dozens of tabs, searching for that one paper or quote that can propel me to a new insight. Sometimes I get confused and forget what I was doing. Yet none of this interferes at all with my ‘deep thinking’. Eventually I go home and read a fantastic sci-fi book like Snow Crash. My imagination of the book is just as good as ever, and I can’t wait to get online and start discussing it. So where is the trade-off?

So there must be a trade-off, right? Tape a kitten’s eyes shut and its visual cortex is re-assigned to other sensory modalities. The brain is a nasty economist, and if we’re stimulating one new thing we must be losing something old. Yet what did we lose with language? Perhaps we lost some vestigial abilities to sense and smell. Yet we gained the power of the sonnet, the persuasion of rhetoric, the imagination of narrative, the ability to travel to the moon and murder the earth.

In the end, I’m just not sure it’s the right kind of stimulation. We’re not going to lose our ability to read. In fact, I think I can make an extremely tight argument against the specific hypothesis that the internet robs us of our ability to deep-think. Deep thinking is itself a controversial topic. What exactly do we mean by it? Am I deep thinking if I spend all day shifting between 9 million tasks? Nicholas Carr says no, but how can he be sure those 9 million tasks are not converging around a central creative point?

I believe, contrary to Carr, that internet and social media surfing is a unique form of self-stimulation and expression. By interacting together in the millions through networks like Twitter and Facebook, we’re building a cognitive apparatus that, like language, does not function entirely within the brain. By increasing access to information and the customizability of that access, we’re ensuring that millions of users have access to all kinds of thought-provoking information. In his book, Carr says things like ‘on the internet, there’s no time for deep thought. it’s go go go’. But that is only one particular usage pattern, and it ignores ample research suggesting that posts online may in fact be more reflective and honest than in-person utterances (I promise, I am going to do a lit review post soon!)

Today’s internet user doesn’t have to conform to whatever Carr thinks is the right kind of deep thought. Rather, we can ‘skim the shallows’ of Twitter and Facebook for impressions, interactions, and opinions. When I read a researcher, I no longer have to spend years attending conferences to get a personal feel for them. I can instead look at their Wikipedia page, read the discussion page, and see what’s being said on Twitter. In short, skimming the shallows makes me better able to choose the topics I want to investigate deeply, and lets me learn about them in whatever temporal pattern I like. YouTube with a side of Wikipedia and blog posts? Yes please. It’s a multi-modal, whole-brain experience that isn’t likely to conform to ‘on/off’ dichotomies. Sure, something may be sacrificed, but it may not be. It might be that digital technology has enough of the old (language, vision, motivation) plus enough of the new that it just might constitute or bring about radically new forms of cognition. These will undoubtedly change our cognitive style, perhaps obsoleting Pinker’s Bayesian mechanisms in favor of new digitally referential ones.

So I don’t have an answer for you yet, ToddStark. I do know, however, that we’re going to have to take a long, hard look at the research reviewed by Carr. Further, it seems quite clear that there can be no one-sided view of digital media. It’s no more intrinsically good or bad than language. Language can be used to destroy nations just as it can tell a little girl a thoughtful bedtime story. If we’re too quick to make up our minds about what internet-cognition is doing to our plastic little brains, we might miss the forest for the trees. The digital media revolution gives us the chance to learn just what happens in the brain when it’s got a shiny new tool. We don’t know the exact nature of the stimulation, and finding out is going to require a look at all the evidence, for and against. Further, it’s a gross oversimplification to talk about internet behavior as ‘shallow’ or ‘deep’. Research on usage and usability tells us this: there are many ways to use the internet, and some of them probably get us thinking much more deeply than others.

A defense of vegetarian fMRI (1/2)

Recently there’s been much ado about a newly published fMRI study of empathetic responding in vegetarians, vegans, and omnivores. The study isn’t perfect, which the authors admit, but I find it interesting and relatively informative for an fMRI paper. The Neurocritic doesn’t; rather, he raises some seemingly serious issues with the study. I promised on Twitter I’d defend my claim that the study is good (and that The Neurocritic could do better). But first, a motivated ramble to distract and confuse you.

As many of you might realize, neuroscience could be said to be going through something like puberty. While the public remains infatuated with every poorly worded research report, researchers within the neurosciences have to view brain mapping through an increasingly skeptical lens. This is a good thing: science progresses through the introduction and use of new technologies and the eventual skeptical refinement of their products.

And certainly there are plenty of examples of shoddy neuroscience out there, whether it’s reports of voodoo correlations or inconsistencies between standard fMRI analysis packages. Properly executed, attention to these issues and a healthy skepticism of the methods will ultimately result in a refined science. Yet we must also be careful to apply the balm of skepticism in a refined manner: neuroscientists are people too, and we work in an increasingly competitive field where there are few well-defined standards and even less clarity.

Take an example from my lab that happened just today. We’re currently analyzing some results from a social cognition experiment my colleague Kristian Tylen and I conducted last year. Like many fMRI results, our hypotheses (which were admittedly a bit vague when we made them) were not exactly supported by our findings. Rather, we ended up with a scattered series of blobs that appeared to mostly center on early visual areas. This is obviously boring and unpublishable, and after some time we decided to do a small-volume correction on some areas we’d discussed in a published paper. This finally revealed some interesting findings somewhere around the TPJ, which brings me to the point of this story.

My research has thus far mostly focused on motor and prefrontal regions. We in neuroimaging can often fall victim to what I call ‘blob blindsight’, where we focus so greatly on a single area or handful of areas that we forget there’s a wide world of cortex out there. Imagine my surprise when I tried to get clear about whether our finding was situated in exactly the pSTS, the TPJ, or the nearby inferior parietal lobule (IPL), only to discover that these three areas are nearly indistinguishable from one another anatomically.

All of these regions are involved in different aspects of social cognition, and across the literature there is no clear anatomical differentiation between them. In many cases, researchers will just lump them together as pSTS/TPJ, regardless of the fact that a great deal of research has gone into explicitly differentiating them. Now what does one do with a blob that lands somewhere in the middle, overlapping all three? More specifically, imagine the case where your activation focus lands smack dab in the middle, or a few voxels to the left. Is it TPJ? Or IPL? Or is it really the conjunction of all three, and if so, how does one make sense of that, given the wide array of functions and connectivity patterns for these areas? The IPL is part of the default mode network, whereas the TPJ and pSTS are not. It’s really quite a mess, and the answer you choose will likely depend upon the interpretation you give, given the vast variety of functions allocated to these three regions.
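The labeling problem can be made concrete with a toy nearest-landmark sketch: rank candidate labels by the distance from your peak coordinate to each region’s landmark. To be clear, the MNI-style coordinates below are illustrative placeholders I’ve invented for the example, not authoritative atlas values; real localization would involve probabilistic atlases and anatomical inspection, not a three-point lookup.

```python
import math

# Hedged sketch of the labeling ambiguity: the coordinates below are
# purely illustrative placeholders, NOT atlas values.
landmarks = {
    "pSTS": (-54, -44, 8),
    "TPJ":  (-52, -54, 22),
    "IPL":  (-48, -58, 34),
}

def label_peak(peak, landmarks):
    """Rank candidate labels by Euclidean distance (mm) to the peak."""
    return sorted(
        (math.dist(peak, xyz), name) for name, xyz in landmarks.items()
    )

# A blob peak that lands 'smack dab in the middle' sits close to one
# landmark but not meaningfully far from the others -- the label you
# report ends up being an interpretive choice, not a measurement.
peak = (-51, -52, 21)
ranking = label_peak(peak, landmarks)
```

The design point is that nothing in the ranking tells you whether a few millimeters of difference is anatomically meaningful, which is exactly why blobs in this neighborhood get reported as pSTS/TPJ conjunctions.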

The point of all this, which begins to lead into my critique of TNC’s critique, is that it is not a simple matter of putting one’s foot down and claiming that the lack of an expected activation, or the presence of an unexpected one, is damning or indicative of bad science. It’s an inherent problem in a field where hundreds of papers are published monthly with massive tables of activation foci. To say that a study has gone awry because it doesn’t report your favorite area misses the point. What’s more important is to evaluate the methods and explain the totality of the findings reported.

So that’s one huge issue confronting most researchers. Although there are some open-source ‘foci databases’ out there, they are underused and hard to rely on. One can of course try to pinpoint the exact area, but in reality the chance that you’ll have such a focused blob is pretty slim. Rather, researchers have to rely on extra-scanner measures and common sense to make any kind of interesting theoretical inference from fMRI. This post was meant to be a response to The Neurocritic, who took issue with my taking issue with his taking issue with a certain vegetarian fMRI study… but I’m already an hour late coming home from work and I’m afraid I’ve failed to deliver. I did take the time this afternoon to go thoroughly through both the paper and TNC’s response, however, and I think I’ve got a pretty compelling argument. Next time: why The Neurocritic is plain wrong 😉