Friday, December 16, 2011

Breathing, Fast and Slow

Mice breathe at three speeds. At rest, they breathe at ~3Hz. When they smell something interesting, they start rapid-sniffing at ~8Hz. And when mice encounter something malodorous, they hold their breath, but still breathe about once every second. Given the differences between these breathing regimes, how can the olfactory system encode odor information across all cases?

Breathing Fast

The best attempt to address this question was by Cury and Uchida (although credit should be given to Verhagen and Wachowiak for the first crack). Cury recorded from mitral cells in freely moving rats while the rats performed odor tasks. During the tasks, the rats switched between normal and fast-breathing, which allowed Cury to compare the neurons' firing during both conditions. They found that the spike timing (or odor code) does not depend on the duration of the breath (see below; similar to what I blogged about, arguing against a phase-mapping of the sniff cycle). They also noted that during fast breathing, there could be hysteresis, where activity during one breath bleeds into the next.
Response of a mitral cell during breaths of different duration. Top. Raster plot of a neuron's spikes, aligned to breath onset. Inhalation is shown as a dark grey area, and the full breath in light grey. Bottom. PETH of firing during breath during rapid and slow breathing. Note, there is no odor present, but the result holds for odor responses as well.
From Cury and Uchida, 2010.
Seeing this, they turned to the population, and asked how the odor information evolved over the course of one breath. They calculated the population spike distance between different odors, and found that for both fast and slow breathing, the odor representations started to diverge 40ms following inspiration, and were maximally separate around 50-80ms. Following the peak, the representations "converged" (is there a word for "move closer together, but not bunched"?) for the rest of the breathing cycle. Notably, there is only a small difference between fast and slow breathing in the amount or speed of information.

Population spike distance ("inter-odor distance") for fast ("discrimination") and slow ("stay") breathing regimes. The distance peaks between 50-80ms before plateauing for the rest of the breath. This timecourse of the distance is reminiscent of structural plasticity following uncaging.
From Cury and Uchida, 2010.
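To make the flavor of this analysis concrete, here is a toy sketch of an inter-odor distance timecourse. All numbers here are synthetic, and Cury and Uchida's actual metric and bin sizes may differ; the point is just that the distance between two odors' population vectors, computed bin by bin, jumps once the codes diverge:

```python
import numpy as np

def inter_odor_distance(rates_a, rates_b):
    """Euclidean distance between two population rate matrices
    (n_neurons x n_timebins), computed separately for each time bin."""
    return np.linalg.norm(rates_a - rates_b, axis=0)

# Toy data: 50 neurons, 10ms bins over a 300ms breath.
rng = np.random.default_rng(0)
baseline = rng.poisson(2, size=(50, 30)).astype(float)
# Make the two odor responses diverge ~40ms (bin 4) after inspiration.
resp_a, resp_b = baseline.copy(), baseline.copy()
resp_a[:25, 4:] += 5   # odor A drives one half of the population
resp_b[25:, 4:] += 5   # odor B drives the other half

dist = inter_odor_distance(resp_a, resp_b)
print(dist[:4])         # zero before the codes diverge
print(dist[4:].mean())  # large once they separate
```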
Given these results, what is the role of fast sniffing? Rodents fast-sniff when they encounter a novel odor, or when they are searching for an odor source. People have hypothesized that fast-sniffing might give more information about the odor. However, the data above argue that there is no more information during fast-sniffing than slow. Rather than more information during a breath, fast-sniffing may provide a more frequent sampling of the environment.

In the discussion, Cury and Uchida also note the robustness of the odor coding across different breathing regimes. This means the odor code is invariant with respect to inhalation amplitude or duration. In another part of their paper they look at how behaviour relates to odor coding, and found that the responses are relatively insensitive to top-down processing like attention as well.

Breathing Slow

While Cury and Uchida showed that the first 150ms of odor coding during fast and slow breaths is the same, it still leaves the question of why so many cells respond after 150ms. For example, the cell below responds to an odor by firing a burst of action potentials at XXXms (top panel below). However, when the mouse fast-breathes during the early trials, this odor response does not have time to evolve. This can be clearly seen if you truncate and align all responses to a 150ms window (lower panel).

Raster plot of a cell's response to 3-Hexanone. Spikes shown in black, and inspiration in blue. Odor presented from 6-8s. A. With only the first respiration aligned, you can see a clear response to the odor approximately 200-300ms following inspiration. After the burst of activity, the cell is inhibited again, then returns to basal firing before the next breath. During the first two trials, the mouse breathes faster, and there is no odor response. B. Same response, with each breath truncated to 150ms, as if the mouse were fast-sniffing. The response is no longer evident.
The cell above also shows what happens to the odor code when the animal stops breathing. It is inhibited after its burst of spikes, but the inhibition only lasts ~500ms. Afterwards, it simply returns to its basal firing rate. In other words, the odor code only seems to exist for 500ms. It is easy to sense this perceptually: if you sniff something and then hold your breath, the sensation dissipates rapidly.

So, why do many mitral cells only respond to odors after 150ms, which is after the subject has identified the odor? I have two hypotheses.

Hypothesis 1 (why?): The slow breaths do contain more odor information overall, which is contained in the spikes later in the breath. In another figure of the paper, Cury and Uchida show that their odor predictor monotonically increases its accuracy over the entire breathing cycle. However, behavioural data argues against this: when rodents are performing a freely moving odor discrimination task, they fast-sniff.

To test this hypothesis, you could measure a subject's odor discrimination or detection thresholds while it employs a fast- or slow-breathing strategy. If the slow-breathing threshold is lower, it would imply that the late spikes add information. However, if the thresholds are the same (or fast-sniffing lower), then it would argue against the information hypothesis. In the end, this may be difficult to test in rodents, as you need to force them to employ a specific breathing strategy.

Hypothesis 2 (how?): The responses after 150ms are vestigial, due to continued ORN input, or reverberations in the olfactory bulb. ORNs can have complex temporal responses that last for seconds. It is possible that they continue feeding odor-specific information to the olfactory bulb after the olfactory bulb no longer needs it. This input would manifest itself as late-arriving responses.

Another possibility is that the olfactory bulb contains recurrent connections, like dendro-dendritic inhibition, that allow the structure to reverberate. For example, stimulating the olfactory nerve can cause mitral cell activity that lasts for seconds. The mitral cells may already have all the information they need after 150ms, but continue firing due to these reverberations.

This hypothesis is more easily testable. You could record from mitral cells in OMP-Halorhodopsin mice. Then, during each sniff, you could turn on the light after 150ms, shutting down ORN input to the mitral cells. If the mitral cell activity is abolished, then the cells require continued ORN input to fire; if, however, the mitral cells continue to fire, then the ORN input is not needed.

Monday, December 12, 2011

Delineating cognitive and neuro sciences

I was discussing possible PhD labs with a student, and she said, "I was thinking about some fMRI labs, but..."

"Yeah, that's not neuroscience," I said.

I've long held this belief, and when I mention it to other neuroscientists, they generally agree. But I've not heard it vocalized (birdsong term!) publicly, out of politeness or politics (fMRI gets a lot of funding and publicity, so badmouthing it looks envious). Today I'm going to lay out what I think the difference is between cognitive science and neuroscience, and why the distinction has blurred.

The difference

What irks me most is the application of the term "cognitive neuroscience" to research that uses techniques like fMRI or EEG.  In my mind, there is a clear delineation between cognitive science and neuroscience, due to differences of technique and perspective. If I could summarize the difference in one sentence I'd say, "if it ain't describing neurons, it ain't neuroscience."

The difference between neuroscience and cognitive science is most clear on the technical side. Neuroscience techniques focus, naturally, on individual neurons (patch, imaging, extracellular recording) or groups of neurons (voltage sensitive dyes, wide field imaging, immunoblotting). Cognitive techniques, on the other hand, look at areas of the brain as a whole, like EEG, fMRI, or DTI.

The techniques one uses in turn determine the types of questions one can answer.* In broad terms, neuroscience techniques have specific measures, and so the questions neuroscientists ask are concerned with concrete, well defined inputs and outputs. How does this neuron represent that stimulus? How do two signaling molecules interact? We want to break the brain down like a car, so we can better understand it.

* Intuitively, one might think that the questions drive the technical split, and thus the difference between neuroscience and cognitive science. Certainly, individuals (and labs) are interested in questions, and acquire the techniques to answer those questions. Yet, questions can be answered on multiple levels. For example, if you're interested in sensory perception, your techniques can range from biophysics to neuroscience to psychophysics. You can explore the question via neuroscience or cognitive science.
In the end, I think identity is driving this distinction. I feel kinship with people who perform similar techniques, even in different systems. I feel like I could walk into any neuroscience lab, and start producing data within a few months, while it would take longer to become competent in a psych lab. So when I think of cognitive neuroscientists, I think of not-me, and would like clear labels to distinguish us.

Cognitive science's techniques, on the other hand, are more imprecise, but have much more interesting model organisms: humans and primates. Thus, they leverage their animal model by asking more slippery questions** regarding topics like consciousness, decision making, or emotion, which neuroscience is unable to answer at the moment. They move beyond treating the brain like a black box as in psychology, and do the best they can with the tools available.

** Dale Purves, the former chair of Duke Neurobiology, switched from cellular neuroscience to cognitive science late in his career. Whenever he went to talks, he would quasi-troll people by asking simple questions like, "What is a decision?", and arguing with their answer.

Then there is, of course, the overlapping field of cognitive neuroscience (I love Wikipedia's attribution of the term to two guys in a cab). While others may think fMRI is cognitive neuroscience, in my mind, the only cognitive neuroscientists are those who attempt to address those fuzzy cognitive questions by sticking electrodes in primates and humans.

Why the confusion

So why have cognitive scientists coopted the prefix "neuro?" It's a pure status play.

Cognitive science is a small step removed from psychology, and psychology, despite decades of normalization, is still a dirty word to the public. When people think psychology, they think clinical psychology, people lying on couches, and the bizarre, foundational theories of Freud. They don't think rigorous science, they think feelings.

What the public doesn't realize is that most psychology, and most research psychology, is non-clinical, and encompasses cognitive, developmental, and other psychologies. And that non-clinical psychology has undergone tremendous improvement over the last few decades (say, post-Skinner), and employs all the standard tools of science like control groups, replication, etc. I enjoy me a good pop-psychology book.

So, given the low status of psychology, and neuroscience's higher relative status (neuro is an obscure Greek prefix, always a good sign), cognitive scientists doing fMRI rebranded the field "cognitive neuroscience." And lo, the NIH money flowed.

You can see other fields doing this as well, like neuroeconomics. There are some true neuroeconomists doing risk/reward research in primates, but most of it is just rebranded experimental economics.


To recapitulate: cognitive scientists and neuroscientists use different techniques to answer different questions. But cognitive scientists are wary of being mistaken for psychologists, and so coopted the term "neuro." It's mostly just a matter of semantics, but I thought fMRI people should know, when they call themselves neuroscientists, we ain't buying it.

(And I should reiterate here that I like psychology and cognitive science. They study the brain through a different lens, which is important. I just take issue with the misapplication of "neuroscience.")

Saturday, December 3, 2011

Neuroscience graffiti

You know you've been in the lab too much when you start to see LFPs everywhere:

(Pardon the crappy quality, but the Swiss being Swiss, this was covered up a few days later when I came back for a good picture.)

Thursday, November 24, 2011

You're gonna carry that weight

In my last post, I presented data that suggests that the odor code dramatically changes between the first and subsequent breaths. Later, however, I discovered a subtle mistake (to me at least) in my analysis, which slightly changed the result. Too often in science, we see the end point of research, and don't see how it evolved. Today I'm going to show how I found my mistake, what the mistake was, and how fixing that mistake modified the result.

Calculating spike distance

Previously, I argued that the odor code evolves over multiple breaths. After presenting some example cells where the odor code shifted between the first and second breath, I turned to the population level, and showed this figure:

A. Schematic of population vector. B. Distance between breaths. Breath identity shown underneath.
On the left is the schematic for the analysis. For each cell, I binned the response to an odor over a breathing cycle into 8 bins. To add more cells to the population, I added 8 bins for each cell to the bottom of the population vector. To get different observations, I repeated this for each breath that I recorded.

It's possible to calculate the "population spike distance" by just calculating the Euclidean distance between the population vector for each breath. Yet that method is fairly noisy, as most cells are uninformative. When I tried that simple method, the result was similar to that shown above, but not as clean. To make the differences more obvious, I performed PCA on the population vector, and then calculated the distances using the first five principal components (shown above). Here, each of the five components was informative, and the distances between the breaths were much clearer.
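In numpy, that PCA-then-distance pipeline looks roughly like this (a minimal sketch with made-up dimensions; the variable names are my own, and the real analysis ran on recorded spike counts):

```python
import numpy as np

def breath_distances(pop_vectors, n_pcs=5):
    """pop_vectors: (n_breaths, n_cells * n_bins) matrix, one row per
    breath. Projects the rows onto the top principal components, then
    returns the pairwise Euclidean distances between breaths."""
    X = pop_vectors - pop_vectors.mean(axis=0)        # center the data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    scores = X @ Vt[:n_pcs].T                         # PC scores per breath
    diff = scores[:, None, :] - scores[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Toy data: 12 breaths, 20 cells with 8 bins each
rng = np.random.default_rng(1)
pop = rng.poisson(3, size=(12, 20 * 8)).astype(float)
D = breath_distances(pop)
print(D.shape)   # one distance for every pair of breaths
```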

The problem discovered

After looking at different breaths for the same odor, I next wanted to investigate the differences between odors. Rather than look at "spike distances," though, I used a prediction algorithm.

To do the prediction, I built population vectors as I did above, but instead of observing different breaths for the same odor, I observed different odors for the same breath. Once again, I transformed the data via PCA and took the first 5-10 principal components. I then created a sample population vector for an individual trial, and calculated the distance between the sample vector and the average vectors for each odor. The "predicted" odor is the one with the smallest distance from the trial. This was repeated for all trials to get the average prediction rate. When I did this, I got the prediction rates shown in the top panel:

n= 105 neurons, and 6 odors, split between two sets of three odors. 10-12 trials.
Here, the predictions are between three different odors, so the chance level is 33%. The first five breaths are pre-odor breaths, while breaths 6-10 are during the odor. As you can see, the odor breaths are >95% correct, which is great. However, many of the pre-odor breaths have prediction rates >50%, which is obviously bad (I have different control breaths for each odor; the pre-odor prediction chooses among the three control breaths).
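The prediction step itself is just a nearest-centroid rule. A stripped-down sketch (the real analysis works in PC space; here I use raw firing-rate space and synthetic, well-separated odors for illustration):

```python
import numpy as np

def predict_odor(trial_vec, centroids):
    """Nearest-centroid prediction: the predicted odor is the one whose
    average population vector is closest to this trial's vector."""
    dists = np.linalg.norm(np.asarray(centroids) - trial_vec, axis=1)
    return int(np.argmin(dists))

# Toy example: three odors with well-separated mean responses
rng = np.random.default_rng(2)
n_cells = 30
centroids = np.array([np.full(n_cells, 8.0 * o) for o in range(3)])
trial = centroids[1] + rng.normal(0, 0.5, n_cells)  # noisy trial of odor 1
print(predict_odor(trial, centroids))
```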

In playing with the data, I then noticed something odd: as I increased the number of bins or principal components that I used to make predictions, the pre-odor predictions got higher (panel B above). With 20 bins and 20 PCs, I could get pre-odor prediction rates of >70% for each odor! I wasn't making predictions, but was over-fitting my model to the data so that it could never be wrong!

And this is where I realized my mistake. When you do PCA, the algorithm tells you how much each component describes the variance in the data. The first few principal components (PC) account for most of the variance, while the later components account for less. When I was doing my prediction algorithm (and my distance calculations above), I was weighting each PC equally, and over-representing principal components which didn't carry much meaning.

Once I realized this, it was a simple procedure to weight each PC according to its variance, and re-run the prediction (panel C above). Following that, the pre-odor predictions are at chance; the positive control is finally working. The downside to this correction, however, is that the odor prediction during the odor was now between 60-80%. This lower predictive ability makes more sense, though, given the trial-to-trial noise in the signal, and the relatively low number of neurons in the population vector.
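One way to implement that weighting (my description above is loose, so treat this as a sketch): scale each PC score by the fraction of variance the component explains, before computing any distances, so that low-variance PCs cannot dominate.

```python
import numpy as np

def weighted_pc_scores(X, n_pcs=5):
    """Project rows of X onto principal components, scaling each
    component by the fraction of variance it explains, so that
    low-variance PCs contribute little to downstream distances."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_frac = S**2 / (S**2).sum()     # variance explained per PC
    scores = Xc @ Vt[:n_pcs].T
    return scores * var_frac[:n_pcs]   # down-weight weak components

rng = np.random.default_rng(5)
X = rng.normal(size=(20, 40))          # e.g. 20 breaths x 40 features
W = weighted_pc_scores(X)
print(W.shape)
```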

Back to breath distance

Having realized my error, I returned to my original analysis on the breath distance, and added the proper weightings. When I did this, the results were slightly different:

n=11 odors from 5 experiments with >15 cells.
The control breaths are still quite distant from the odor breaths. However, the first breath is no longer so different from the subsequent breaths. Indeed, it appears that there is an evolution in the code over the first few breaths before the code stabilizes. The stark difference between the different breaths had blurred.

I'm guessing that this is a tyro analysis mistake. I only stumbled upon it because I figured a reviewer would want to see pre-odor prediction rates to compare to those during the odor. I know that when I read a paper, I rarely delve into the detailed methods of these more complicated analyses. And if I do, they aren't always informative. Given how often people perform analysis by themselves, with custom code, it's easy to forget how many simple, subtle mistakes one can make. The only way to avoid them is to gain experience, and to constantly question whether what you're doing actually makes sense, and agrees with what you've already done.


Today I found an even BIGGER problem with my odor prediction. When I was creating my "average spike population" I was including the test trial in the population. And I was once again getting pre-odor prediction rates near 100%. Excluding the test-trial from the average population made everything MUCH more sensible.
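For anyone doing a similar analysis, the fix is to recompute the centroid with the test trial held out (leave-one-out). A sketch with synthetic data (the array shapes and numbers are invented for illustration):

```python
import numpy as np

def loo_predict_rate(trials_by_odor):
    """Leave-one-out nearest-centroid prediction rate.
    trials_by_odor: list (one entry per odor) of (n_trials, n_features)
    arrays. The centroid for the test trial's own odor is recomputed
    with that trial held out, so the test trial never leaks into
    its own average."""
    n_correct = n_total = 0
    for o, trials in enumerate(trials_by_odor):
        for i in range(len(trials)):
            centroids = [
                np.delete(t, i, axis=0).mean(axis=0) if o2 == o
                else t.mean(axis=0)
                for o2, t in enumerate(trials_by_odor)]
            d = [np.linalg.norm(trials[i] - c) for c in centroids]
            n_correct += (int(np.argmin(d)) == o)
            n_total += 1
    return n_correct / n_total

rng = np.random.default_rng(4)
# Well-separated synthetic odors: prediction should be near perfect
odors = [np.full(30, 8.0 * o) + rng.normal(0, 0.5, (10, 30))
         for o in range(3)]
print(loo_predict_rate(odors))
```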

Thursday, November 10, 2011

The odor code for the first breath is different from subsequent breaths

As I mentioned last post, I have been recording from the olfactory bulb of awake, head-fixed mice. In general, the responses I'm seeing are in accord with those reported by Shusterman and Rinberg: about half of cell-odor pairs respond to a given odor. In trying to quantify these responsive cell-odor pairs, I stumbled upon another finding, that the odor code for the first breath is different from the rest.

One cell's response

(Brief methods: To look at whether a given breath is responsive, I segmented the recordings into breaths, and fit each breath to a standard breath length (if a given breath was longer than the average breath length, I deleted all spikes after the end of the standard breath; if a given breath was shorter, I assumed the rest of the time included no spikes). To quantify whether breaths were "responsive," I compared a breath's tonic firing rate to the control, pre-odor breaths (using ANOVA with p<0.05, and Tukey's post-hoc testing); and I tested whether the "phase," or timing, of spikes within the breath differed from that of the pre-odor breaths (using a Kolmogorov-Smirnov test; here I used p<0.02 as the threshold for significance, as p<0.05 yielded many false positives when comparing different control breaths). And when I looked at the data, it was obvious that some cells had strikingly different codes for the first breath versus later breaths.)
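The fit-to-standard-breath step in those methods amounts to the following (a sketch; the 350ms standard length here is an arbitrary placeholder for the average breath):

```python
import numpy as np

def standardize_breath(spike_times, breath_len, std_len=0.35):
    """Fit one breath's spike times to a standard breath length.
    Spikes after the standard window are dropped (long breaths);
    short breaths are kept as-is, with the remaining time assumed
    spike-free. spike_times are seconds relative to inhalation onset."""
    spikes = np.asarray(spike_times, dtype=float)
    return spikes[spikes < min(breath_len, std_len)]

# Long breath: spikes past the 0.35s standard window are deleted
print(standardize_breath([0.05, 0.2, 0.4], breath_len=0.5))
# Short breath: spikes kept; the tail is treated as silent
print(standardize_breath([0.05, 0.2], breath_len=0.25))
```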

One example is shown below (this is the same neuron-odor pair from the previous post, albeit different trials). The top panel shows the PSTH of the cell's response to amyl acetate, with 40ms bins. You can see that before odor presentation, the neuron fired irrespective of phase. During the first sniff of the odor, there was a strong, transient burst of activity in the middle of the breathing cycle. However, in the subsequent sniffs, the cell was inhibited.
This cell is excited during the first breath before becoming inhibited. (top) PSTH of the response with 40ms bins. The odor is applied at t=0s. Blue dashed lines represent inspiration. (middle) PSTH of the response with a single bin for each breath. The cell is inhibited during breaths 2-4. (bottom) Cumulative distribution of spikes during the ctl breath (black), first breath (blue), and second breath (red).
While seeing the difference by eye is nice, I wanted to test it quantitatively and without bias. To detect tonic changes, I averaged the firing rate for each breath, as shown in the middle panel. In this example, over the whole breath, the first sniff's firing rate does not differ from the control breaths'. However, the three subsequent breaths are all inhibited.

To look at the phasic changes, I plotted (for ten trials) the cumulative spike times for control breaths and breaths during the odor (bottom panel, above). Before the odor, the spikes occur without phase bias (black line), while during the first breath you can see that most spikes come between 150-200ms of the breathing cycle. However, on the second breath, the phasic nature of the response has begun to dissipate.
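The phase comparison can run directly on those cumulative spike-time distributions. Here is a minimal two-sample Kolmogorov-Smirnov statistic (scipy's ks_2samp does the same with a p-value; the spike phases below are synthetic):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of two sets of spike phases."""
    a, b = np.sort(a), np.sort(b)
    allv = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, allv, side='right') / len(a)
    cdf_b = np.searchsorted(b, allv, side='right') / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(3)
ctl = rng.uniform(0, 0.35, 200)      # phase-free control spikes
odor = rng.normal(0.18, 0.02, 200)   # spikes clustered mid-breath
print(ks_statistic(ctl, odor))                        # large: phasic response
print(ks_statistic(ctl, rng.uniform(0, 0.35, 200)))   # small: same distribution
```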

Population changes in the odor code

The transformation between the first and second breath can take many forms. The example above shows a neuron that switches from a strong, phasic, excitatory response to an inhibitory response. Many other neurons are inhibited during the first breath but not afterward. Below is a more subtle example, where the neuron does not respond to ethyl butyrate during the first sniff. However, on the subsequent sniffs, the timing of the response shifts to earlier in the breathing cycle.
This cell appears to not respond during the first breath, but has a phasic response during later breaths.
Rather than exhaustively quantify how individual cells change their code between the first and second breath, I took a different approach and looked at the population code. I have three experiments where I was able to record from at least ten cells at the same time. For these three experiments, I created a population vector of the responses to odor, and calculated the "spike distance" between the representation of each breath. Another way to think of spike distance is how dissimilar two representations are: small spike distances imply similar population representations. All distances were normalized to the average distance between control breaths.

The odor code for the first breath is as different from control as it is from the 2nd breath. A. For each breath during an odor, I created a "population" vector, where for each breath and cell, I broke the response into eight bins. To reduce the dimensionality, I performed PCA. To calculate the distance, I used the first five PCA scores for each breath. B. Normalized "spike distance" between odor code for different breaths. All distances were normalized to the average distance between control breaths (C) for an experiment-odor pair. All post-odor breaths are distant from the control breaths (white). The first breath is also different from the 2nd-4th breaths (blue). However, the 2nd-4th breaths are relatively similar to each other (red).
The population response during all of the odor breaths are distant from the control breaths (white bars). The distance here presumably encodes the presence of the odor. However, if you look at the distance between the first breath and subsequent breaths (blue bars), you can see they are also quite far apart. In fact, the first breath is almost as distant from the other breaths as it is from the control breaths. In contrast, the distance between breaths 2-4 is much lower, and almost comparable to the distance between control breaths.

When I showed this to my boss he was not impressed, and said they had already shown this in a previous paper. And indeed, buried in three panels of Fig. 4, they did show something similar (below). There are some significant differences, though. First, those experiments were in anesthetized animals, rather than awake animals. Second, I've shown that individual cells use strikingly different codes between breaths. Third, they did not create their population vector to consider a cell's firing as a whole. This could change the interpretation of the results. In any case, asking around the lab, no one seemed to remember this was even in the paper.

The velocity of the population representation is highest during the first odor and post-odor breaths. A. The population vector contains the firing of each cell in a given time-bin. B. Cross-correlation and distance for the population during a given odor. C. The velocity of the population vector (how much the distance changes) is highest at the beginning and end of odor presentation.
From Bathellier, et al, 2008.
Given this result, one needs to be careful with how one characterizes "responsive" cells in the olfactory bulb. First, when determining whether a cell-odor pair is responsive, one needs to always look at more than just the first breath. For example, the second cell shown above was responsive during the second breath, but not the first. Second, when characterizing responses, it is difficult to say whether a cell was excited or inhibited, as cells can be both excited and inhibited in a given breath, as well as be excited for one breath but not others. While it may be unsatisfying, it is probably best to just call them "responsive" cells.

These results also gave me an idea for an experiment to test whether the difference in coding is perceptually important. It is now possible to stimulate the olfactory epithelium via Channelrhodopsin while mice sniff (using an OMP-ChR2 line), which makes it possible to mask an odor response with olfactory white noise. To test how important different sniffs are to perception, you would start by establishing the detection threshold for an odor. Then you could measure the detection threshold while masking either the first or second sniff with the olfactory noise. There are a few possible results. First, the threshold might not change at all, as both the first and second sniff contain sufficient information to detect an odor. Second, the sensitivity could be equally decreased when either sniff is blocked. This would also imply there is equal information in each sniff. The third possibility is that masking the first sniff would decrease sensitivity far more than masking the second sniff (which is what I expect). It has been shown that mice and humans can detect odors in a single sniff. And in daily life, no odor is as strong as its first whiff. The difference in odor coding between the first and second sniff might be one step towards explaining why.

While this is pretty basic analysis, I had to perform this en route to doing more sophisticated comparisons while trying to measure a form of plasticity in the odor code. This is also the first complete data I've shown from this lab. I would appreciate any feedback on this, as it's always useful to get a perspective outside the insular confines of a lab. Were the figures legible? The analysis convincing? Or is this entirely un-novel?

Tuesday, November 1, 2011

Don't Warp Yo Breath

In my last post, I briefly reviewed a paper from the Rinberg lab where they recorded from the olfactory bulb of awake, head-fixed mice. When they were analyzing their neurons' responses, they performed a time-warping manipulation on the data that increased the precision of the responses. Today I'm going to present some counter-evidence that shows why their time warping is a bad idea.

Time Warping of Neuronal Responses

The first figure of their paper clearly explains how they time-warped their data. They recorded extracellularly from mitral cells in the olfactory bulb while presenting head-fixed mice with odorants. In their recording, as in previous studies, if they did not perform any temporal alignment, they saw very weak responses to odorant application (black traces, below). However, it is well known that sniffing can influence olfactory bulb activity, so they realigned all of their mitral cell activity to the first inhalation following odor onset (blue traces, below). When they did this, they found that the mitral cell responses were quite strong, and around 59% of odor-cell pairs were responsive.

Aligning responses to inhalation reveals odor responses. a. Diagram of inhalation alignment, and temporal warping. Odor presentation in yellow. c. Spike rasters to odor under three alignment paradigms: to odor (black); to inhalation onset (blue); and time-warped (red). d. Peri-stimulus time histograms of the responses in c.
From Shusterman, et al, 2011.
Not satisfied with the precision of their responses, they performed one more manipulation. The breathing cycle, while fairly regular, does vary; and they reasoned that duration of the breathing cycle could influence neuronal activity. To normalize this, they fit curves to both inhalation and exhalation, and then stretched time (and moved spikes) until the breaths fit a standard breathing cycle (red traces, above). When they did this, they found that both the precision and magnitude of responses were increased.
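To be explicit about what this manipulation does, here is a simplified piecewise-linear version of the warp (Shusterman et al. fit curves to the respiration trace, so the details of their warp differ; the standard inhalation and exhalation durations below are invented):

```python
import numpy as np

def warp_spikes(spike_times, inhal_dur, exhal_dur,
                std_inhal=0.1, std_exhal=0.25):
    """Piecewise-linear time warp of one breath's spikes onto a
    standard cycle: the inhalation segment is stretched to std_inhal
    seconds and the exhalation segment to std_exhal seconds.
    spike_times are seconds relative to inhalation onset."""
    spikes = np.asarray(spike_times, dtype=float)
    return np.where(
        spikes < inhal_dur,
        spikes * std_inhal / inhal_dur,
        std_inhal + (spikes - inhal_dur) * std_exhal / exhal_dur)

# A short breath (60ms inhale, 140ms exhale): spikes get pushed later
print(warp_spikes([0.03, 0.1], inhal_dur=0.06, exhal_dur=0.14))
```

Note that a spike at a fixed latency after inhalation lands at a different warped time depending on the breath's duration, which is exactly the problem with the manipulation.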

This did not sit well with me for three reasons. First, if you take the perspective of a neuron in the olfactory bulb, it means that the neuron has to somehow keep track of where it is in the breathing cycle, not in terms of time, but in terms of phase. To do this, 50ms following inhalation, a neuron has to know when the next inhalation is going to come. They're psychic! [Update: a commenter noted that the OB could receive an efference copy from brainstem respiratory centers. I am not aware of any evidence that it does, however. Which of course does not mean it does not exist.]

Second, I think that the timing of mitral cell responses is in large part dictated by the temporal dynamics of the olfactory epithelium. The olfactory epithelium, in turn, has its kinetics determined by the taus of the G-protein signaling cascade, and the concentration of odorants in the epithelium. The kinetics depend on inhalation onset and intensity, not phase.

The third reason I have a problem with time-warping is that I have counter-evidence.

Mitral Cell Response Timing is Independent of Breath Length

While I wait for my mice, I have been performing head-fixed recordings from the olfactory bulb of awake mice. In general, I've been getting population responses in line with what the Rinberg lab has suggested: ~50% of the odor-cell pairs in the OB differ from baseline. Like the Rinberg lab, the Carleton lab aligns responses to inhalation onset. Yet, unlike the Rinberg lab, we have not performed any time warping.

After seeing the Shusterman paper, I took a closer look at my data, to see whether time-warping makes sense. In general, respiration is regular enough that time-warping would have little effect on the responses. However, I found a few cases where time warping is a bad idea.

Below is a raster plot of the firing of one neuron in response to amyl acetate at 20x dilution. I have zoomed in on the first second following odor onset. Inhalations are denoted by blue lines, while spikes are shown in black. We trigger odor delivery by waiting for an exhale, which is why the inhalation times are non-random. Following the first sniff, you can see that this neuron fires vigorously, with some delay. (I should say that this response is easily one of the highest firing rates in my data set.)
Raster plot of a neuron's response to Amyl Acetate. Amyl Acetate application began at t=6s. Blue lines are inhalations, black lines are spikes. Ten trials shown.
We can also align this data by moving time such that the first post-odor breaths all occur at the same time. If you do this, you can more clearly see the strong response to the odor. This response is fairly long, and has a high firing rate (>100Hz at its peak).
Same raster plot as above, except aligned to the first inhalation following odor. Here the response is much clearer.
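The alignment itself is simple; here is a minimal sketch in Python (my own hypothetical function, assuming per-trial lists of spike times and inhalation times, in seconds):

```python
def align_to_first_sniff(spike_trials, inhale_trials, odor_onset):
    """For each trial, shift spike times so that the first inhalation
    after odor onset sits at t = 0."""
    aligned = []
    for spikes, inhales in zip(spike_trials, inhale_trials):
        # first inhalation at or after odor onset
        first = next(t for t in sorted(inhales) if t >= odor_onset)
        aligned.append([s - first for s in spikes])
    return aligned
```

This is a pure shift of each trial, with no stretching of time, which is why it cannot hurt precision the way warping can.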
And now I can finally address the issue of whether time-warping is a good idea. In the example above, there are two trials with short breaths, trials 6 & 7, and two trials with long breaths, trials 8 & 10. Despite the different breath lengths, all four trials fire at high rates between 6.2 and 6.3 s. If you were to time-warp these trials, you would move the spikes from trials 6 & 7 later in time, and the spikes from trials 8 & 10 earlier. Both manipulations would decrease precision.

This is just one extraordinary example, but it shows that time-warping can have deleterious effects on precision. In my view, if you are recording from the olfactory bulb, you should align all your responses to breath onset, and truncate your breaths to the same standard breath.

I hope you are convinced that time-warping your data is a bad idea for mitral cells in the olfactory bulb. If I've missed anything, please let me know in the comments.

Thursday, October 27, 2011

Walk Along the Paper Trail: Rinberg Redoubt

It's been too long since I wrote about, you know, actual science. Today I'm going to cover two recent papers from the Rinberg lab, which has been on fire lately.

"Precise" temporal coding in the olfactory bulb

While the core coding strategies of the visual, auditory, and somatosensory systems are generally well defined, much less is known about coding in olfaction. Many people have imaged the glomeruli that receive input from olfactory sensory neurons, which is certainly useful, but far fewer have recorded from mitral cells directly in vivo. Basic questions, like how many odors a neuron responds to, or what a typical response looks like, remain unanswered. To answer these questions, Shusterman and colleagues recorded from awake, head-fixed mice while presenting odorants.

When they aligned mitral cell spikes to odor onset, they found that responses were quite "sparse" (panel a, top; panel c/d, black trace). (People love talking about "sparseness" in olfaction without ever really defining it well.) However, given how important the sniff is, they then aligned their responses to sniff onset (panel a, middle; panel c/d, blue trace). When they did so, they found that odor responses were in fact quite strong. Of the 467 cell-odor pairs they recorded, 59% responded; of those responses, approximately half were excitatory and half inhibitory.
Aligning responses to inhalation reveals odor responses. a. Diagram of inhalation alignment, and temporal warping. Odor presentation in yellow. c. Spike rasters to odor under three alignment paradigms: to odor onset (black); to inhalation onset (blue); and time-warped (red). d. Peri-stimulus time histograms of the responses in c.
From Shusterman et al (2011)
While breathing is fairly regular, they went one step further, and "warped" all the breaths to a standard reference breath. To do this, they segregated each breath into inhalation and exhalation phases, and stretched or shrunk time to fit these standard phases. When they did this, they saw an increase in the magnitude and precision of neuronal responses (more on this later).

Next they asked when these responses arrive during the breathing cycle. To do this, they calculated the first point at which the response was significantly different from the baseline response (panels a-c below). Once they had identified the latency of each response, they plotted the cumulative latency distribution, and showed that the responses "tiled" the breathing cycle. In their words, "The latency distribution of these sharp events tiled the sniff cycle (range, 43–324 ms; Fig. 4d). Given this precision, reliability and sniff cycle tiling, we estimate that in each 10-ms window of a sniff in the presence of odor, a new ensemble comprising roughly 0.5% of mitral/tufted cells (approximately 250 cells in the mouse) will begin a sharp excitatory response."

Neuronal responses tile the breathing cycle. a-c. Three example responses during baseline (grey) and to odor (color). The latency to response is shown as vertical black bar. d-e. Response latencies for excitatory (red) and inhibitory (blue) cell-odor pairs.
From Shusterman et al 2011
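The latency measure can be caricatured in a few lines (a sketch, not their actual statistics: they tested for a significant deviation from baseline, whereas I simply threshold at z baseline standard deviations):

```python
def response_latency(odor_rates, base_mean, base_sd, bin_ms=10, z=3.0):
    """Latency (ms) of the first bin whose firing rate deviates from
    baseline by more than z baseline SDs; None if no bin does."""
    for i, rate in enumerate(odor_rates):
        if abs(rate - base_mean) > z * base_sd:
            return i * bin_ms
    return None
```

Run over every cell-odor pair, the resulting latencies are what get plotted as the cumulative "tiling" distribution.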
Perhaps this is a semantic argument, but I dispute that responses tile the entire breathing cycle. They claim that the latencies fall in the range of 40-325 ms, but looking at the distributions above, a majority of responses fall between 60 and 200 ms, or about half the breathing cycle. This makes intuitive sense: the delay is due to the time it takes for the sensory neurons' G-protein signaling to run its course; and the dropoff after exhalation can be explained by a lack of odorants in the olfactory epithelium.

In the later stages of the paper, they looked at the precision of the responses, and how well they predict the odors (which I'll skip). To calculate the jitter of responses, they calculated the standard deviation of the first spike following a long inter-spike interval. For the simple breath-aligned responses, they found the jitter was ~23 ms (panel a; below). They then calculated the jitter in warped time, and found the jitter was much lower, ~11 ms. From this they conclude: 1. that the olfactory system has much more precise firing than previously believed; and 2. that warping breaths is a good idea. As a second way to look at the value of warping breaths, they calculated the peak response amplitude in real time and warped time, and found that responses were generally larger in warped time.

Time warping increases response precision. a. The jitter of excitatory responses is larger in real time than in warped time. b. The amplitude of excitatory responses is higher in warped time than real time.
From Shusterman et al 2011.
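As I read it, the jitter calculation works roughly like this (my own sketch of the described method; the 50 ms ISI threshold is my assumption, not theirs):

```python
import statistics

def first_spike_jitter(trials, min_isi=0.05):
    """Across trials, take the first spike preceded by an inter-spike
    interval longer than min_isi (the putative response onset) and
    return the standard deviation of those onset times."""
    onsets = []
    for spikes in trials:
        spikes = sorted(spikes)
        for prev, curr in zip(spikes, spikes[1:]):
            if curr - prev > min_isi:
                onsets.append(curr)
                break
    return statistics.pstdev(onsets) if onsets else None
```

Computing this once on breath-aligned spike times and once on warped spike times is the comparison behind their ~23 ms vs ~11 ms figure.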
While I do not speak for or represent the Carleton lab, I will say the lab was quite surprised when this paper came out. We had been recording in awake head-fixed mice for some time, and not seen responses as strong as this. Since the paper has come out, though, we have started to get responses more in line with their data. (Which makes me wonder about how we find the results we are looking for.) It is nice to see, though, that such a simple reporting of olfactory bulb coding can be published in a high impact journal.

My biggest issue with the paper is those warped breaths. The breathing cycle is generally regular enough that warping breaths will yield only mild increases in precision. Yet, I have to wonder, how does a neuron know where it is in the breathing cycle? In other words, if it has been 50 ms since the end of inhalation, how does a neuron know whether the next breath is 100ms or 200ms away? You can solve this problem with synfire chains a la birdsong representations in HVC, but there the motor and auditory pathways are linked, whereas here they are divorced. I hope to post counterexample data in the future.

Second Paper

To follow that paper, they next turned to behaviour. They asked, if individual neurons code the phase of sniffing, can the mice perceive sniff phase as well?

To artificially stimulate different sniff phases, they used mice that express channelrhodopsin in the olfactory epithelium (OMP-ChR2). They then implanted a cannula with a light fibre in the epithelium, and stimulated with 5mW pulses. To show that mice were receiving artificial olfactory input, they showed that mice could discriminate between air puffs with light vs air puffs without light, in a go/no-go task.

Once they established that mice could sense the light stimulation, they set out to test whether mice could discriminate light delivered at different times during the sniff cycle. They used a small cannula to detect the start of inhalation and exhalation, and stimulated either 32 ms following inhalation or 32 ms following exhalation (panel a, top). Light during inhalation was the go stimulus (lick); light during exhalation was no-go. As you can see below, the mice were initially at chance, but quickly learned to discriminate the two stimuli.
Mice can discriminate between different parts of sniff phase. a. They detected the breathing cycle, and stimulated with light either 32ms following inhalation (go) or 32ms following exhalation (no-go). After training, mice were able to discriminate between the stimuli. b. Here, no-go stimuli were presented 50-100ms following the go stimuli. Mice were able to discriminate differences of ~10-20ms.
From Smear et al 2011
Once they established that mice can discriminate inhalation from exhalation, they then set out to find the minimum discrimination interval. Here the go stimulus was 32 ms after inhalation, and the no-go stimulus came between 5-100 ms after the go time (panel b, above). In general, mice were able to discriminate ~12 ms differences between the two.

Between these two papers, the Rinberg lab showed that neurons can encode the precise timing of olfactory information. The odor tuning of neurons is rather broad, with 59% of cell-odor pairs responding. And this precise stimulus timing is discriminable to the mice, which means it could well be behaviourally relevant.


Shusterman R, Smear MC, Koulakov AA, & Rinberg D (2011). Precise olfactory responses tile the sniff cycle. Nature neuroscience, 14 (8), 1039-44 PMID: 21765422

Smear M, Shusterman R, O'Connor R, Bozza T, & Rinberg D (2011). Perception of sniff phase in mouse olfaction. Nature PMID: 21993623

Monday, October 17, 2011

Where do all the nice mice go?

I wait for mice. I wait for them to wake from anesthesia; for them to replicate the viruses I inject; for them to copulate, conceive, gestate, give birth, and wean new mice. And I wait for mice I don't even have: for my boss to order new strains, for the paperwork to be filled out, the courier to ship, and the vet to quarantine.

Unless (until?) we can shorten mouse gestation or virus replication, the day-to-day waiting cannot be mitigated. But those other delays - those not bound by nature - seem solvable. What I would like to see is a way for individual researchers (or labs) to have access to a wide variety of mice in a timely fashion.

Beaucoup de souris

This logistical issue has only arisen recently. In olden times, most people worked with WT mice, so you never had to order mice or worry about maintaining multiple lines. Now there are dozens of mouse lines with Cre or GFP expression in interesting subsets of neurons, like specific cortical layers, neuromodulatory centers, or inhibitory neurons. And the issue is accelerating, as new mouse lines are constantly being developed. For example, in last month's Neuron, Josh Huang announced a batch of mice that express Cre in GABAergic neurons. The Feng lab released a batch targeting neuromodulatory centers. And that's not counting ongoing projects like GENSAT or the Allen Brain Atlas that generate, collect, and distribute lines.

You might be thinking, "Options are a good problem. Just be selective, and only order the mice that are interesting." Which is what we do. Yet it's impossible to know how useful a mouse line is without testing it. For example, if you want a line expressing Cre in a specific cortical layer, you have many choices, which differ in density and selectivity of expression. The only way to differentiate them is to test them all.

As it stands now, if you guess wrong and order a lemon of a mouse line, you have no recourse. Just months of delayed projects, and the decision whether to maintain that line for possible future use, or just to end it.

Costs defined

In economic terms, I believe these issues can be categorized as transaction and variable costs. The transaction costs are those beyond the price of buying mice, like the time spent filling out paperwork or the time spent waiting. The variable costs are those involved in housing more mouse lines.

The way research is organized now significantly increases both costs. Research is performed in small labs in small departments spread around the world, each of which maintains its own mouse colony. While some of these colonies are unique, most are redundant. Whenever a lab wants to work on a new mouse line, it has to wait months for the colony to breed up (a transaction cost), and incurs thousands of dollars in housing costs (a variable cost).

Given the increasing importance of mouse lines to research, the explosion in the number of mouse lines, and the costs associated with maintaining independent mouse facilities, it may be time to rethink how we organize research. (I am entirely aware that reorganizing all of neuroscience research for easy mouse access is crazy. But bear with me.)

Cutting Costs

Two solutions spring to mind, both focused on reducing transaction and variable costs.

First, instead of transferring mice, we could transfer people. If I want to work on the KX25 mouse line, I could pack my electrodes and lickometer, and fly to the lab that has them. No quarantine, no breeding, just TSA screening.

This solution is, of course, impossible. On the human side: who gets credit/authorship for the experiments? do we want to burden people with moving every few months? etc. On the technological side, it would require a quantum leap in equipment standardization or portability, so that one could perform experiments anywhere. (It's never a good idea to bet against technological progress, but for now techniques are evolving too quickly for this to be feasible soon.)

The other way to reduce these costs is to simply reduce the number of locations, concentrating more science in fewer places, maybe a dozen around the US. Each location could be specialized to further reduce costs. For example, you could group together everyone interested in cortical systems neuroscience, and stock all the mouse lines they typically use. Then whenever someone wanted a mouse, they could check availability and have something the next day, or within a month. And if the mouse line didn't work out, it would not be a problem, since you wouldn't have spent much time or money ordering and housing it.

Costs and Benefits

What are the downsides or risks to this idea? There are certainly cultural issues. If we reduced the number of research institutions, would we lose diversity? Would people in the mega-facilities collaborate, or compete? What would the academic career path look like without the grad-student, post-doc, faculty progression? (Wait, ditching that would be a bonus.) What would the funding process look like? Would people accept having even fewer choices of locations to work?

There are political issues too. The smaller research universities would be losing significant federal funding, which would displease congressmen. And the universities themselves might be scared of losing so many teachers, but then again, most researchers hate teaching.

Yet, there might be additional benefits beyond simply reducing costs. Paul Graham writes about tech startups, and has noted that the right environment is essential for startups to thrive. In Silicon Valley, people think start-ups are cool rather than risky; the social network of useful people is denser; there are venture capitalists to pitch to.

By concentrating research in fewer locations, neuroscience may benefit in similar ways. If you're working in olfaction, you can discuss new experiments every week with others, rather than waiting for conferences. Or if you have an idea for a set of experiments, the funding agency might be there for you. Imagine the seminars!

Given the government sure as heck won't do this, who might? The best bets are Howard Hughes, or very large research universities (Harvard, UCSD, Penn). Right now universities generally try to hire people with a diversity of interests, rather than focusing on specific subjects. Instead, they should probably narrowly focus. Once you have 5-10 faculty all sharing mice, those costs might be half of what they would be otherwise.

I should really be a department chair. I've already figured out how to boost a department's reputation by authoring wiki textbooks. And now I've figured out how to reduce a significant non-payroll cost. Only problem is my short Trail of Papers.

Saturday, October 15, 2011

Mechanisms Ratings System

I love video games, especially first person shooters. In the early days, computers were slow, and weren't able to render complex objects, so level designers filled space with simple items like crates, and barrels. This led to the Crate Review System, whereby games were rated by how long it took before you saw a crate.

Ever since I wrote about the staple scientific phrase, "The mechanisms ... are not fully understood," I smile when I see it. And today I would like to debut the Mechanisms Ratings System: the number of words in a paper or review before it invokes the catchphrase. Today's rating: "Polyrhythms of the Brain," which starts, "The mechanism by which multiple brain structures interact to support working memory is not yet fully understood."

Words to mechanism: 1
Words in mechanism sandwich: 11
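For the curious, the rating is trivially computable (a throwaway sketch of my own; hyphenation and punctuation edge cases left as an exercise):

```python
def mechanism_rating(text, phrase="mechanism"):
    """Return the number of words before the first word containing
    'mechanism'; None if the catchphrase never appears."""
    for i, word in enumerate(text.split()):
        if phrase in word.lower():
            return i
    return None
```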

Saturday, October 8, 2011

Figure Misadventures: **

This week in lab meeting, we covered a recent paper from the Mizrahi lab in Israel. Two points.

First, it's amazing how standards have increased. They performed chronic, in vivo, two-photon imaging over nine months, used a dopaminergic-specific GFP line, and all they got was a Journal of Neuroscience paper. Now, there are reasons it's only J. Neuroscience: they counted cell bodies, not spines, and as everyone knows, cells exist for their spines to be counted; and their only non-control result was a 13% increase in dopaminergic neuron number in the olfactory bulb (below, lower panel).

c. Gains and losses of dopaminergic neurons in the olfactory bulb.  Note the super-significant ** (p<0.01!!!) at the third time point. d. Overall change in dopaminergic neuron number. 13%!.
What I really love, though, is the ** in the top panel of the figure, denoting p < 0.01, compared to those dirty *s over the other data points (p < 0.05). To get those stars, they performed repeated Student's t-tests rather than an ANOVA, which is such a ubiquitous sin it's like Catholics using birth control. It's indisputable that there are more neurons gained than lost at all time points. Yet one of the authors was compelled to include the **. Why? Would someone not believe the data if one of the points was p < 0.01? Is there some standard that all p-values < 0.01 must get a **? Are they trolling anal-retentive people, like me? Or, more likely, someone thoughtlessly figured, "why not?"
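To see why uncorrected repeated t-tests are a sin: under the null hypothesis every p-value is uniform on [0, 1], so testing five time points at alpha = 0.05 gives a familywise false positive rate of 1 - 0.95**5, about 0.23. A quick simulation (my own toy example, not their data):

```python
import random

random.seed(0)
n_experiments, n_tests = 100_000, 5

# Under the null, each of the 5 per-timepoint p-values is uniform
# on [0, 1]. Count experiments where at least one test "reaches
# significance" at 0.05 purely by chance.
hits = sum(
    any(random.random() < 0.05 for _ in range(n_tests))
    for _ in range(n_experiments)
)
rate = hits / n_experiments
print(rate)  # roughly 0.23, not 0.05
```

An ANOVA (or a Bonferroni correction) exists precisely to keep that familywise rate at the nominal 0.05.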

Extra *s are not inherently dumb, but they reflect imprecise thinking. In science, we can't hope to be that certain (p < 0.01). We can only observe until we're pretty sure, then wait for independent verification. I worry that people who note **s when they're not meaningful might be prone to noticing *s when they don't exist.

Monday, September 19, 2011

Way too much about bitter taste perception

Last time I repined that there aren't enough "organic" reviews out there, so today I'll give it a go myself.

Theories of bitterness

When you eat food, you identify it via its smell, its texture in your mouth, and how it activates taste cells on your tongue. The classic "taste modalities" are sweet, sour, salty, bitter, and umami. For sweet, sour, salty, and umami, there is a single taste receptor; the only information you get from those modalities is the degree of activation. In contrast, there are over twenty-five bitter receptors, called T2Rs.

So is bitter taste similar to the other modalities, a single labeled line, or is it more complex? Some people (the Zuker lab chief among them) propose that, like sweet and sour, we can only detect the extent to which something is bitter. That is, there is a single "labeled  line" for all bitter tastes. This would reduce our sense of taste to five labeled lines.  The ability to discriminate between similar tastes - e.g. between two citrus fruits - would be due to extra information from olfaction.

There is the alternative possibility that bitter is not a single labeled line, but more than one line. Individual bitter taste cells could express a subset of T2Rs, have individualized receptive fields, and discriminate between different bitter compounds. Today I'm going to review the evidence for both theories at the level of the tongue, brain, and behaviour.

Bitter on the tongue

The bitter receptors were discovered circa 2000, and reported in a series of papers. In one of those papers (Adler et al., 2000), the authors performed in situ hybridization against multiple T2Rs on the tongue. They found that similar numbers of cells were labeled whether they used probes for 1, 2, 5, or 10 T2Rs (see below, left). Notably, they state that probes for 2+ receptors labeled 20% of taste cells, while a single-receptor probe labeled 15%; the difference could be simple labeling inefficiency. They further verified this by performing double-label fluorescent in situs, and found that "most" cells coexpressed multiple receptors. From this, they concluded that individual bitter taste cells in the tongue express most T2Rs, and are sensors for bitterness generally.

Individual taste cells express multiple bitter taste receptors. c. In situ label using 10 probes for T2Rs. The number of cells labeled here is similar to single label probes. d. Fluorescence in situ double-labeling for T2R3 (green) and T2R7 (red). Most cells express both T2Rs.
From Adler et al 2000.
The next year, the Roper lab reported potentially contradictory results. Caicedo and Roper performed confocal calcium imaging on isolated rat tongues while applying five bitter tastants. Of the taste cells they imaged, 18% (69/374) responded to at least one of the bitters, but most responsive cells responded to only one or two of the tastants (see below).

Individual taste cells respond to only a subset of bitter tastants. A. Three example taste cells each respond to different tastants (denatonium, quinine, cycloheximide, phenylthiocarbimide, and sucrose octaacetate). B. Response for all responsive cells to bitters. Cell ID on left, tastants on top. From top to bottom, cells respond to more tastants.
From Caicedo and Roper, 2001.
It is hard to reconcile these two results. I am not an expert on in situs, but it is possible that the probes are not 100% specific (as is often the case for antibody staining). Yet I think you have to trust that the researchers were competent. I would only emphasize that "most" receptors is not "all" receptors, so the results are not completely mutually exclusive.

In 2005, Zuker fired back. Bitter signaling uses a G-protein coupled cascade that signals through PLCβ2; PLCβ2 KO mice lose all bitter taste. Mueller et al took PLCβ2 -/- mice, and then re-expressed PLCβ2 under a single T2R promoter, like mT2R5 (m for mouse). When they did this, bitter taste was fully rescued.

Expression of PLCβ2 under the promoter of a single bitter receptor rescues all bitter taste. The "relative response" is an inverse measure of how much mice licked bitter compounds. Control mice do not lick bitters. PLCβ2 -/- mice cannot taste bitter, and so lick bitters. PLCβ2 expressed behind the promoters for mT2R5, mT2R32, or mT2R19 rescues bitter perception.
From Mueller et al 2005.
However, again, I feel this is not quite conclusive. If T2R expression overlaps randomly, then even if no individual taste cell expresses all T2Rs, the population of mT2R5-expressing cells could collectively express all the other T2Rs, and hence allow full recovery of bitter sensitivity.

Bitter in the brain

There's a lot more interesting stuff about bitter taste on the tongue - for example, the Meyerhof lab has identified the ligands for many human taste receptors - but let's move to the brainstem (and beyond!).

Few labs have recorded from the taste areas of the brainstem. Chief among them are David V. Smith and the Traverses at Ohio State. In 2006, Geran and Travers recorded from the nucleus of the solitary tract (NST) of rats while applying the classic tastants plus a set of bitters. They found that some cells in the NST responded differentially to cycloheximide and denatonium. And if the brain can discriminate between different bitters, surely the tongue must as well...

Individual NST neurons can discriminate denatonium (DEN) from cycloheximide (CHX). Pardon the figure, it's excised from a MUCH larger one. y-axis is response rate, x-axis is neuron ID. On the right are the bitter sensitive neurons (B-best). The first three neurons respond to denatonium, while the rest do not. There also may be quinine neurons.
From Geran and Travers, 2006.
As far as I know, no one has presented animals with multiple bitters while electrically recording from gustatory cortex. However, two weeks ago I covered a recent paper on Ca2+ imaging in gustatory cortex. While the main focus of that paper was taste hotspots, they also presented the following figure in the supplementary data. They applied multiple bitters while imaging cortex, and found that not all cells responded to all the bitters presented. There was some unreliability in their results - many "responsive" cells only responded in a subset of trials - but this does raise the possibility that cortical cells can discriminate between different bitters.

Gustatory cortical neurons may be able to discriminate between different bitters. left. Map of cells responsive to three bitters: denatonium, cycloheximide, and quinine. middle. Overlaid map of the cells at left, color coded for cells that respond to all 3 bitters (red), 2 bitters (yellow), or 1 bitter (white). right. Bar chart of the number of cells that respond to each bitter.
From Chen et al, 2011.
Bitter in the "mind"

So what about, you know, bitter perception itself? This has been rather ill studied. Many people have shown that typical "bitter" stimulants are aversive. Only a couple have tested whether animals can discriminate between them.

The best paper I've found tested whether rats could discriminate between quinine and denatonium (I wonder if scientists choose these chemicals so often because they're much easier to pronounce and remember than sucrose octaacetate). To ensure that the rats were not discriminating between different intensities of bitterness, they measured the aversiveness of each chemical, and used iso-yucky concentrations.

Thirsty rats were allowed to lick a water bottle for five seconds. The lick rate over the last 3 seconds of the trial determined the "stimulus licks," which were normalized to water licks. Dashed rectangles denote equivalently aversive concentrations.
From Spector and Kopka, 2002.
Once they had determined the equivalent concentrations to use, they employed a two-alternative forced choice task to measure discrimination. They first validated their system by testing whether animals could discriminate between quinine and KCl (below, left). Then they switched quinine for denatonium to see if the rats noticed, and found that the rats continued to discriminate between denatonium and KCl, as if quinine and denatonium were the same. As a positive control, they swapped in NaCl for denatonium, and found that the rats needed a few sessions to relearn the new task (middle). Finally, they directly tested whether rats could discriminate quinine from denatonium, and found that discrimination was at chance level (right).

Rats are unable to discriminate between quinine and denatonium. See above for details.
From Spector and Kopka, 2002.
Similar experiments were performed in flies, using the proboscis extension reflex as a measure of palatability (Masek and Scott, 2010). I'm sure someone has tested this in humans, but I have not read the study yet. My main issue with these experiments is that they use such high, aversive concentrations of the bitter stimuli that they may be beyond a discriminatory range. For example, bitter taste may have two functions: discrimination and aversion. At low concentrations, certain chemicals may be useful for discrimination, and non-toxic; at high concentrations, they signal toxicity. It would be interesting to see how discrimination works at lower concentrations.

After all this data, what can you conclude? The evidence for a single bitter labeled line comes from the taste input and output: the receptors and the behaviour. In the brain, however, it seems that individual cells can discriminate between bitter tastants. It's certainly possible that taste neurons discriminate between bitters before discarding the information as useless. I think the main issue here is the old scientific problem of "just because you can't detect it doesn't mean it's not there." Reading these papers has certainly suggested a few experiments to me.


Adler, E., Hoon, M. A., Mueller, K. L., Chandrashekar, J., Ryba, N. J. P., & Zuker, C. S. (2000). A novel family of mammalian taste receptors. Cell, 100(6), 693-702.

Caicedo, A., & Roper, S. D. (2001). Taste receptor cells that discriminate between bitter stimuli. Science, 291(5508), 1557-1560. doi:10.1126/science.1056670

Chen, X., Gabitto, M., Peng, Y., Ryba, N. J. P., & Zuker, C. S. (2011). A Gustotopic Map - Supplement. Science, 333(6047), 1262-1266. doi:10.1126/science.1204076

Geran, L. C., & Travers, S. P. (2006). Single neurons in the nucleus of the solitary tract respond selectively to bitter taste stimuli. Journal of Neurophysiology, 96(5), 2513. doi:10.1152/jn.00607.2006

Mueller, K. L., Hoon, M. A., Erlenbach, I., Chandrashekar, J., Zuker, C. S., & Ryba, N. J. P. (2005). The receptors and coding logic for bitter taste. Nature, 434, 225-230. doi:10.1038/nature03366

Spector, A. C., & Kopka, S. L. (2002). Rats fail to discriminate quinine from denatonium: implications for the neural coding of bitter-tasting compounds. The Journal of Neuroscience, 22(5), 1937-41.