
Thursday, October 27, 2011

Walk Along the Paper Trail: Rinberg Redoubt

It's been too long since I wrote about, you know, actual science. Today I'm going to cover two recent papers from the Rinberg lab, which has been on fire lately.

"Precise" temporal coding in the olfactory bulb


While the core coding strategies of the visual, auditory, and somatosensory systems are generally well defined, much less is known about coding in olfaction. Many people have imaged the glomeruli that receive input from olfactory sensory neurons, which is certainly useful, but far fewer have actually recorded from mitral cells in vivo. Basic questions, like how many odors a neuron responds to, or what a typical response looks like, remain unanswered. To answer these questions, Shusterman and colleagues recorded from awake, head-fixed mice while presenting odorants.

When they aligned mitral cell spikes to odor onset, they found that responses were quite "sparse" (panel a, top; panels c/d, black trace). (People love talking about "sparseness" in olfaction without ever really defining it well.) However, given how important the sniff is, they then aligned the responses to sniff onset (panel a, middle; panels c/d, blue trace). When they did so, they found that odor responses were in fact quite strong. Of the 467 cell-odor pairs they recorded, 59% responded; and of those responses, approximately half were excitatory and half were inhibitory.
Aligning responses to inhalation reveals odor responses. a. Diagram of inhalation alignment and temporal warping; odor presentation in yellow. c. Spike rasters to odor under three alignment paradigms: to odor onset (black); to inhalation onset (blue); and time-warped (red). d. Peristimulus time histograms of the responses in c.
From Shusterman et al. (2011).
While breathing is fairly regular, they went one step further and "warped" all the breaths to the same reference breath. To do this, they segregated each breath into inhalation and exhalation phases, and stretched or shrunk time to fit these standard phases. When they did this, they saw an increase in the magnitude and precision of neuronal responses (more on this below).
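To make the warping concrete, here is a minimal sketch of how I might implement it in Python. The piecewise-linear mapping and the "standard" phase durations are my assumptions, not the authors' code:

```python
import numpy as np

T_INH, T_EXH = 0.15, 0.25  # assumed standard phase durations (s), not from the paper

def warp_spikes(spike_times, inhale_on, exhale_on, breath_end):
    """Map one breath's spikes onto a reference breath, piecewise linearly."""
    spikes = spike_times[(spike_times >= inhale_on) & (spike_times < breath_end)]
    warped = np.empty_like(spikes)
    inh = spikes < exhale_on
    # Stretch/shrink the inhalation phase to the standard duration T_INH...
    warped[inh] = (spikes[inh] - inhale_on) / (exhale_on - inhale_on) * T_INH
    # ...and the exhalation phase to the standard duration T_EXH.
    warped[~inh] = T_INH + (spikes[~inh] - exhale_on) / (breath_end - exhale_on) * T_EXH
    return warped

# Plain sniff alignment is the special case with no rescaling:
# aligned = spikes - inhale_on
```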

Next they asked when these responses arrive during the breathing cycle. To do this, they calculated the first point at which the response differed significantly from the baseline response (panels a-c below). Once they had identified the latency of every response, they plotted the cumulative latency distribution, and showed that the responses "tiled" the breathing cycle. In their words, "The latency distribution of these sharp events tiled the sniff cycle (range, 43–324 ms; Fig. 4d). Given this precision, reliability and sniff cycle tiling, we estimate that in each 10-ms window of a sniff in the presence of odor, a new ensemble comprising roughly 0.5% of mitral/tufted cells (approximately 250 cells in the mouse) will begin a sharp excitatory response."
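The latency measure itself is simple enough to sketch. Here is a rough version (my three-standard-deviation threshold is a stand-in; the paper's actual statistical test is more careful):

```python
import numpy as np

def response_latency(odor_counts, blank_counts, bin_s=0.01, n_sd=3.0):
    """First bin (in s after inhalation) where the odor PSTH leaves baseline.

    odor_counts, blank_counts: (trials x bins) spike-count arrays aligned to
    inhalation. Returns None if no bin deviates from the blank-sniff baseline.
    """
    odor_psth = odor_counts.mean(axis=0)
    blank_psth = blank_counts.mean(axis=0)
    mu, sd = blank_psth.mean(), blank_psth.std()
    deviant = np.abs(odor_psth - mu) > n_sd * sd  # catches excitation or inhibition
    hits = np.flatnonzero(deviant)
    return hits[0] * bin_s if hits.size else None
```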


Neuronal responses tile the breathing cycle. a-c. Three example responses during baseline (grey) and to odor (color); the latency to response is shown as a vertical black bar. d-e. Response latencies for excitatory (red) and inhibitory (blue) cell-odor pairs.
From Shusterman et al. (2011).
Perhaps this is a semantic argument, but I dispute that the responses tile the entire breathing cycle. They claim that the latencies fall in the range of 43-324 ms, but looking at the distributions above, the majority of responses fall between 60 and 200 ms, or about half the breathing cycle. This makes intuitive sense: the initial delay is the time it takes for the sensory neurons' G-protein signaling to run its course, and the dropoff after exhalation can be explained by a lack of odorant in the olfactory epithelium.

In the later stages of the paper, they looked at the precision of the responses, and at how well they predict odor identity (which I'll skip). To calculate the jitter of a response, they took the standard deviation of the first spike following a long inter-spike interval. For the simple breath-aligned responses, they found the jitter was ~23 ms (panel a, below). They then calculated the jitter in warped time, and found it was much lower, ~11 ms. From this they conclude: 1. that the olfactory system fires much more precisely than previously believed; and 2. that warping breaths is a good idea. As a second way to look at the value of warping breaths, they calculated the peak response amplitude in real time and warped time, and found that responses were generally larger in warped time.
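As I read it, the jitter computation boils down to something like this (the ISI threshold is an assumption on my part; a sketch, not their code):

```python
import numpy as np

def first_spike_jitter(trials, min_isi=0.05):
    """SD across trials of the first spike that follows an ISI > min_isi (s).

    trials: list of sorted 1-D arrays of spike times, aligned (or warped)
    to inhalation onset.
    """
    onsets = []
    for spikes in trials:
        isis = np.diff(spikes)
        long_gaps = np.flatnonzero(isis > min_isi)
        if long_gaps.size:
            onsets.append(spikes[long_gaps[0] + 1])  # spike ending the first long gap
    return np.std(onsets) if len(onsets) > 1 else np.nan
```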

Time warping increases response precision. a. The jitter of excitatory responses is larger in real time than in warped time. b. The amplitude of excitatory responses is higher in warped time than in real time.
From Shusterman et al. (2011).
While I do not speak for or represent the Carleton lab, I will say the lab was quite surprised when this paper came out. We had been recording in awake, head-fixed mice for some time, and had not seen responses as strong as these. Since the paper came out, though, we have started to get responses more in line with their data. (Which makes me wonder about how we find the results we are looking for.) It is nice to see, though, that such a straightforward characterization of olfactory bulb coding can be published in a high impact journal.

My biggest issue with the paper is those warped breaths. The breathing cycle is generally regular enough that warping breaths should yield only mild increases in precision. Yet I have to wonder: how does a neuron know where it is in the breathing cycle? In other words, if it has been 50 ms since the end of inhalation, how does a neuron know whether the next breath is 100 ms or 200 ms away? You can solve this problem with synfire chains a la birdsong representations in HVC, but there the motor and auditory pathways are linked, whereas here they are divorced. I hope to post counterexample data in the future.


Perception of sniff phase


To follow that paper, they next turned to behaviour. They asked: if individual neurons encode sniff phase, can mice perceive sniff phase as well?

To artificially stimulate at different sniff phases, they used mice that express channelrhodopsin in the olfactory epithelium (OMP-ChR2). They then implanted a cannula with a light fibre in the epithelium, and stimulated with 5 mW light pulses. To show that the mice were receiving artificial olfactory input, they showed that mice could discriminate between air puffs with light and air puffs without light in a go/no-go task.

Once they established that mice could sense the light stimulation, they set out to test whether the mice could discriminate light stimulation delivered at different times during the sniff cycle. They used a small cannula to detect the start of inhalation and exhalation, and stimulated either 32 ms following inhalation or 32 ms following exhalation (panel a, top). When light was presented during inhalation, the mice were expected to lick (go); when light was presented during exhalation, they were not (no-go). As you can see below, the mice were initially unable to discriminate the stimuli, but quickly learned to tell light during inhalation from light during exhalation.
Mice can discriminate between different parts of the sniff cycle. a. They detected the breathing cycle, and stimulated with light either 32 ms following inhalation (go) or 32 ms following exhalation (no-go); after training, mice were able to discriminate between the stimuli. b. Here, no-go stimuli were presented 50-100 ms following the go stimulus. Mice were able to discriminate differences of ~10-20 ms.
From Smear et al. (2011).
Once they established that mice can discriminate inhalation from exhalation, they set out to find the minimum discriminable interval. Here the go stimulus was 32 ms after inhalation, and the no-go stimulus fell 5-100 ms after the go stimulus (panel b, above). In general, mice were able to discriminate differences of ~12 ms between the two.
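The triggering logic here is simple enough to sketch offline. Below is a toy version that computes when the light would fire from a recorded pressure trace; the sign convention and the zero-crossing detection are my assumptions (the real experiment did this in closed loop, in hardware):

```python
import numpy as np

def stim_times(pressure, fs, on_inhale=True, delay_s=0.032):
    """Times (s) at which light would fire: a fixed delay after each
    inhalation onset (go) or exhalation onset (no-go)."""
    inhaling = pressure > 0.0                # assume positive pressure = inhalation
    edges = np.diff(inhaling.astype(int))    # +1 at inhale onset, -1 at exhale onset
    target = 1 if on_inhale else -1
    onsets = np.flatnonzero(edges == target)
    return onsets / fs + delay_s
```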

Between these two papers, the Rinberg lab showed that neurons can encode the precise timing of olfactory information. The odor tuning of neurons is rather broad, with 59% of cell-odor pairs responding. And this precise timing of stimuli is discriminable to the mice, which means it is possibly important for detection.


References


Shusterman R, Smear MC, Koulakov AA, & Rinberg D (2011). Precise olfactory responses tile the sniff cycle. Nature Neuroscience, 14(8), 1039-1044. PMID: 21765422


Smear M, Shusterman R, O'Connor R, Bozza T, & Rinberg D (2011). Perception of sniff phase in mouse olfaction. Nature. PMID: 21993623

Monday, October 17, 2011

Where do all the nice mice go?

I wait for mice. I wait for them to wake from anesthesia; for them to replicate the viruses I inject; for them to copulate, conceive, gestate, give birth, and wean new mice. And I wait for mice I don't even have: for my boss to order new strains, for the paperwork to be filed, the courier to ship, and the vet to quarantine.

Unless (until?) we can shorten mouse gestation or virus replication, the day-to-day waiting cannot be mitigated. But those other delays - those not bound by nature - seem solvable. What I would like to see is a way for individual researchers (or labs) to have access to a wide variety of mice in a timely fashion.

Beaucoup de souris

This logistical issue has only arisen recently. In olden times, most people worked with WT mice, so you never had to order mice, nor worry about maintaining multiple lines. Now there are dozens of mouse lines with Cre or GFP expression in interesting subsets of neurons, like specific cortical layers, neuromodulatory centers, or inhibitory neurons. And the issue is accelerating, as new mouse lines are constantly developed. For example, in last month's Neuron, Josh Huang announced a batch of mice that express Cre in GABAergic neurons. The Feng lab released a batch targeting neuromodulatory centers. And that's not counting ongoing projects like GENSAT or the Allen Brain Atlas that generate, collect, and distribute lines.

You might be thinking, "Options are a good problem. Just be selective, and only order the mice that are interesting." Which is what we do. Yet it's impossible to know how useful a mouse line is without testing it. For example, if you want a line expressing Cre in a specific cortical layer, you have many choices, which differ in the density and selectivity of expression. The only way to differentiate them is to test them all.

As it stands now, if you guess wrong and order a lemon of a mouse line, you have no recourse. Just months of delayed projects, and the decision whether to maintain that line for possible future use, or just to end it.

Costs defined

In economic terms, I believe these issues can be categorized as transaction and variable costs. The transaction costs are those beyond the purchase price of the mice, like the time spent filing paperwork, or the time spent waiting. And the variable costs are those involved in housing more mouse lines.

The way research is organized now significantly increases these costs. Research is performed in small labs in small departments spread around the world, each of which maintains its own mouse colonies. While some of these colonies are unique, most are redundant. Whenever a lab wants to work on a new mouse line, they have to wait months for the colony to breed up (increased transaction costs), and incur thousands of dollars in housing costs.

Given the increasing importance of mouse lines to research, the explosion in the number of mouse lines, and the costs associated with maintaining independent mouse facilities, it may be time to rethink how we organize research. (I am entirely aware that reorganizing all of neuroscience research for easy mouse access is crazy. But bear with me.)

Cutting Costs

Two solutions spring to mind, both focused on reducing transaction and variable costs.

First, instead of transferring mice, we could transfer people. If I want to work on the KX25 mouse line, I could pack my electrodes and lickometer, and fly to the lab that has them. No quarantine, no breeding, just TSA screening.

This solution is, of course, impossible. On the human side: who gets credit/authorship for the experiments? do we want to burden people with moving every few months? etc. On the technological side, it would require a quantum leap in equipment standardization or portability, so that one could perform experiments anywhere. (It's never a good idea to bet against technological progress, but for now techniques are evolving too quickly for this to be feasible soon.)

The other way to reduce these costs is simply to reduce the number of locations, concentrating more science in fewer places, maybe a dozen around the US. Each location could specialize to further reduce costs. For example, you could group together everyone interested in cortical systems neuroscience, and stock all the mouse lines they typically use. Then whenever someone wanted a mouse, they could check availability, and have animals the next day, or within a month. And if a line didn't work out, it would not be a problem, since you wouldn't have spent much time or money ordering and housing it.

Costs and Benefits

What are the downsides or risks to this idea? There are certainly cultural issues. If we reduced the number of research institutions, would we lose diversity? Would people in the mega-facilities collaborate, or compete? What would the academic career path look like without the grad-student, post-doc, faculty progression? (Wait, ditching that would be a bonus.) What would the funding process look like? Would people accept having even fewer choices of locations to work?

There are political issues too. The smaller research universities would be losing significant federal funding, which would displease congressmen. And the universities themselves might be scared of losing so many teachers, but then again, most researchers hate teaching.

Yet there might be additional benefits beyond simply reducing costs. Paul Graham writes about tech startups, and has noted that the right environment is essential for startups to thrive. In Silicon Valley, people think start-ups are cool rather than risky; the social network of useful people is denser; and there are venture capitalists to pitch to.

By concentrating research in fewer locations, neuroscience may benefit in similar ways. If you're working in olfaction, you can discuss new experiments every week with others, rather than waiting for conferences. Or if you have an idea for a set of experiments, the funding agency might be there for you. Imagine the seminars!

Given that the government sure as heck won't do this, who might? The best bets are Howard Hughes, or very large research universities (Harvard, UCSD, Penn). Right now universities generally try to hire people with a diversity of interests, rather than focusing on specific subjects. Instead, they should probably focus narrowly. Once you have 5-10 faculty all sharing mice, those costs might be half of what they would be otherwise.

I should really be a department chair. I've already figured out how to boost a department's reputation by authoring wiki textbooks. And now I've figured out how to reduce a significant non-payroll cost. Only problem is my short Trail of Papers.

Saturday, October 15, 2011

Mechanisms Ratings System

I love video games, especially first person shooters. In the early days, computers were slow, and weren't able to render complex objects, so level designers filled space with simple items like crates, and barrels. This led to the Crate Review System, whereby games were rated by how long it took before you saw a crate.


Ever since I wrote about the staple scientific phrase, "The mechanisms ... are not fully understood," I smile when I see it. And today I would like to debut the Mechanisms Ratings System, which scores a paper or review by the number of words before it invokes the catchphrase. Today's rating: "Polyrhythms of the Brain," which starts, "The mechanism by which multiple brain structures interact to support working memory is not yet fully understood."


Words to mechanism: 1
Words in mechanism sandwich: 11
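
For the truly dedicated, scoring a paper is a few lines of Python (my own throwaway code, obviously):

```python
import re

def mechanisms_rating(text):
    """Words before the first 'mechanism(s)', and the length of the
    'sandwich' between it and the 'not ... fully understood' clause."""
    words = [re.sub(r"\W", "", w).lower() for w in text.split()]
    start = next(i for i, w in enumerate(words) if w.startswith("mechanism"))
    end = next(i for i, w in enumerate(words) if w == "not" and i > start)
    return start, end - start - 1

opener = ("The mechanism by which multiple brain structures interact "
          "to support working memory is not yet fully understood.")
print(mechanisms_rating(opener))  # -> (1, 11)
```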

Saturday, October 8, 2011

Figure Misadventures: **

This week in lab meeting, we covered a recent paper from the Mizrahi lab in Israel. Two points.

First, it's amazing how standards have increased. They performed chronic, in vivo, two-photon imaging over nine months, and used a dopaminergic-specific GFP line, and all they got was a Journal of Neuroscience paper. Now, there are reasons it's only J. Neuroscience: they counted cell bodies, not spines, and as everyone knows, cells exist for their spines to be counted; and their only non-control result was a 13% increase in dopaminergic neuron number in the olfactory bulb (below, lower panel).

c. Gains and losses of dopaminergic neurons in the olfactory bulb. Note the super-significant ** (p < 0.01!!!) at the third time point. d. Overall change in dopaminergic neuron number. 13%!
What I really love, though, is the ** in the top panel of the figure, denoting p < 0.01, compared to those dirty *s over the other data points (p < 0.05). To get those stars, they performed repeated Student's t-tests rather than an ANOVA, which is such a ubiquitous sin it's like Catholics using birth control. It's indisputable that more neurons are gained than lost at all time points. Yet one of the authors was compelled to include the **. Why? Would someone not believe the data unless one of the points hit p < 0.01? Is there some standard that all p-values < 0.01 must get a **? Are they trolling anal-retentive people, like me? Are they going to focus their research on that time point? Or, more likely, did someone thoughtlessly figure, "why not?"
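To see why this is a sin, here's a toy demonstration with fabricated numbers: run five uncorrected t-tests on data with no true effect, and the chance of a spurious * somewhere is roughly 1 - 0.95^5, or about 23%. The fuller fix for their design would be an ANOVA (factors: gain/loss x time), but even a Bonferroni correction beats naked repeated t-tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Fabricated data: "gained" vs "lost" at 5 time points, n=8 mice per group,
# drawn from the same distribution, so there is no true difference.
gained = [rng.normal(10, 2, 8) for _ in range(5)]
lost = [rng.normal(10, 2, 8) for _ in range(5)]

# The sin: one uncorrected t-test per time point.
p_each = [stats.ttest_ind(g, l).pvalue for g, l in zip(gained, lost)]

# A minimal fix: Bonferroni-correct for the five comparisons.
p_corrected = [min(1.0, p * len(p_each)) for p in p_each]
print(np.round(p_each, 3), np.round(p_corrected, 3))
```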

Extra *s are not inherently dumb, but they reflect imprecise thinking. In science, we can't hope to be that certain (p < 0.01). We can only observe until we're pretty sure, then wait for independent verification. I worry that people who note **s when they're not meaningful might be prone to noticing *s when they don't exist.