Pages

Monday, August 5, 2013

Paper trail expansion: odor afterimages in the olfactory bulb

OR: I couldn't call this blog Trail of Papers, and not publish, now could I?

My work of the last two years has just been published in PNAS. I know you, dear reader, are busy, so here are four key panels from the paper.

(With this publication, I am on the post-doc job market. While I am applying to a few specific labs, I'm interested in seeing how this blog could influence my career. If any American labs are looking to hire someone to do chronic, in-vivo recordings using optogenetics, send me an e-mail. If you want to know how I think, this blog is probably an accurate representation. Here is my CV.)

Mitral cells have an odor afterimage

So I was recording the responses of mitral cells in the olfactory bulb while mice smelled odors, and I noticed something odd. Some cells continued to respond after the odor had ended, and this post-odor response could last for a few seconds:

PSTH from a mitral cell responding to ethyl butyrate. The cell was excited during the odor, but inhibited afterwards.
In the example above, the cell switched its firing from excitation during the odor to inhibition during the post-odor period. Other cells could switch the other way, from inhibition to excitation, or could show more subtle shifts in phase between the odor and post-odor (Fig. 3 of the paper). Around 30% of cells had some sort of post-odor response, and these post-odor responses could last for over ten breaths after the odor was gone.
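(For readers who haven't built one, here is a rough sketch of how a breath-aligned PSTH like the one above could be computed. The function name, the bin count, and the idea of binning by breath phase are my own illustration, not the actual analysis code.)

```python
import numpy as np

def breath_aligned_psth(spike_times, breath_onsets, n_bins=20):
    """Average spike counts across breaths, binned by phase within each breath.

    spike_times   : 1D numpy array of spike times (s)
    breath_onsets : 1D numpy array of inhalation onset times (s)
    n_bins        : number of phase bins per breath cycle
    """
    counts = np.zeros((len(breath_onsets) - 1, n_bins))
    for i, (start, stop) in enumerate(zip(breath_onsets[:-1], breath_onsets[1:])):
        spikes = spike_times[(spike_times >= start) & (spike_times < stop)]
        phase = (spikes - start) / (stop - start)  # position of each spike in the cycle, 0-1
        counts[i], _ = np.histogram(phase, bins=n_bins, range=(0, 1))
    # convert mean counts per bin into a firing rate, using the mean breath duration
    mean_duration = np.mean(np.diff(breath_onsets))
    return counts.mean(axis=0) / (mean_duration / n_bins)
```

Computing this separately for breaths during the odor and for breaths after it is what makes the afterimage visible.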

To see whether these post-odor responses were odor specific, we performed some population analyses, and found that the post-odor responses were unique for each odor. However, as you can see above, these post-odor responses were not simply a recapitulation of the odor responses. Given that the post-odor responses differ from the odor responses, yet still contain odor information, we dubbed them odor afterimages.

Most glomeruli do not have afterimages

Some labs have previously reported that olfactory sensory neurons in Drosophila can have long-lasting "super sustained" or "ultra prolonged" responses, and recordings from the olfactory nerve of rats also sometimes show off responses. To see whether our mitral cells' odor afterimages were simply the result of sustained input from the receptors, Sam (the 2nd author) performed calcium imaging of glomeruli, which contain the terminals of olfactory sensory neurons. While 5-10% of glomeruli could have sustained activity, most did not:

Response of three glomeruli to acetophenone. Some glomeruli did have a post-odor response, but most did not.
Given that some glomeruli did have post-odor responses, we tried to compare the post-odor activity in glomeruli and mitral cells using a template matching prediction algorithm. The glomerular prediction was above chance for only one breath after the odor, showing that the glomerular post-odor responses did not contain much odor information. In contrast, the prediction from mitral cell responses remained above chance for over ten breaths. So while some of the olfactory bulb inputs may be long-lasting, there seems to be more information in the mitral cell responses than can be explained by simple peripheral processes.
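(If you are curious what I mean by template matching, here is a minimal sketch: classify each trial's population vector by correlating it against per-odor templates built from the remaining trials. The names and the leave-one-out, correlation-based matching are my simplification, not the exact algorithm in the paper.)

```python
import numpy as np

def template_match_accuracy(responses, odor_labels):
    """Leave-one-out template matching.

    responses   : array, shape (n_trials, n_cells), e.g. spike counts on one breath
    odor_labels : array, shape (n_trials,), odor identity of each trial
    """
    odors = np.unique(odor_labels)
    correct = 0
    for i in range(len(responses)):
        # build per-odor templates, excluding the held-out trial
        mask = np.ones(len(responses), dtype=bool)
        mask[i] = False
        templates = [responses[mask & (odor_labels == o)].mean(axis=0) for o in odors]
        # assign the held-out trial to the most correlated template
        corrs = [np.corrcoef(responses[i], t)[0, 1] for t in templates]
        correct += odors[np.argmax(corrs)] == odor_labels[i]
    return correct / len(responses)
```

Running this breath by breath after odor offset, once on glomerular signals and once on mitral cell spiking, gives the kind of accuracy-versus-breath comparison described above.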

Olfactory bulb activity in the absence of sniffing

Since we found that glomerular activity was not as strong as mitral cell activity during the post-odor period, we investigated what types of feedback there could be during the post-odor. To look at this, we did my favourite crazy experiment: we recorded from an olfactory bulb while blocking its nostril. In this condition, all the activity in the bulb with the blocked nostril (ipsilateral) would have to come from the contralateral, unblocked nostril, via cortical connections. When we recorded from the bulb with the nostril blocked, almost all spiking activity was gone, as has been seen previously. However, we did find LFP activity:

LFP activity in the mitral cell layer of the OB in response to an odor with the nostril open (top) or blocked (bottom). With the nostril open, the odor elicits a large LFP deflection early, and gamma frequency oscillations after 50-100 milliseconds. With the nostril blocked, there is an LFP deflection, but it is delayed, and contains less gamma frequency activity. Breathing activity shown above LFP (inspiration is up). Total time ~1 sec.
There are a couple of interesting things about the blocked-nostril LFP. First, the sign of the activity is flipped, which may signal a switch from excitation to inhibition, or some other process. Second, the activity with the nostril blocked is delayed. This could reflect the time it takes for information to get processed in the contralateral olfactory bulb, and then relayed to the ipsilateral olfactory bulb. The gamma frequency activity in the open nostril condition also seems to coincide with the onset of activity in the blocked condition. This is not a particularly conclusive or informative result, but I love the idea of olfactory responses in the absence of odor input, and the strange nature of these responses.
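(If you wanted to quantify the gamma content of traces like these, a standard approach is to band-pass filter the LFP and take the envelope of the analytic signal. This is a generic sketch with my own choice of band edges and filter order, not the analysis from the paper.)

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power(lfp, fs, band=(40.0, 90.0)):
    """Instantaneous gamma-band power of an LFP trace.

    lfp  : 1D numpy array of LFP samples
    fs   : sampling rate (Hz)
    band : (low, high) edges of the gamma band (Hz)
    """
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, lfp)          # zero-phase band-pass filter
    return np.abs(hilbert(filtered)) ** 2   # squared envelope = instantaneous power
```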

Ringing in the olfactory bulb

Since we did not see strong input from the glomeruli into the olfactory bulb, we hypothesized that the odor afterimage was maintained centrally rather than from the periphery. This gave us the idea that perhaps we could generate an "artificial afterimage." So we took some Thy1-ChR2 mice, which express ChR2 in mitral cells, stimulated the heck out of the olfactory bulb, and found that mitral cells could continue firing for up to ten breaths after the end of photostimulation:

PSTH of firing in a mitral cell in a Thy1-ChR2 mouse during and after photostimulation at 20 Hz. It was difficult to record spikes during photostimulation, perhaps due to inhibition. Afterwards, the cell had rebound activity for a few seconds.
Some cells were inhibited after light stimulation, others were excited, and others were completely unaffected. I'm not current on ringing activity in other brain areas, but I think it's interesting that if you put the olfactory system into an active state, it takes a few seconds for the brain to calm down. We did most of these Thy1-ChR2 experiments in anesthetized mice, because in the one awake mouse I stimulated, the mouse started freaking out and drooling. I don't want to know what it smells like when your entire olfactory bulb is activated.
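(To put a number on this rebound, one simple approach is to compare firing in matched windows before and after the light, trial by trial. A sketch, with made-up variable names and a paired non-parametric test standing in for whatever statistics you prefer:)

```python
import numpy as np
from scipy.stats import wilcoxon

def rebound_modulation(rates_pre, rates_post):
    """Compare baseline and post-stimulation firing rates across trials.

    rates_pre  : firing rates (Hz) in a pre-stimulation window, one value per trial
    rates_post : firing rates (Hz) in a matched post-stimulation window
    Returns the mean rate change and a Wilcoxon signed-rank p-value.
    """
    rates_pre, rates_post = np.asarray(rates_pre), np.asarray(rates_post)
    stat, p = wilcoxon(rates_post, rates_pre)
    return (rates_post - rates_pre).mean(), p
```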

So those are my favourite four figures of the paper. There are a lot more interesting tidbits scattered throughout, and I hope the paper is readable to a general audience. Hopefully my blogging over the last few years has improved my writing style. I suggest you check it out.

Thursday, July 25, 2013

The nature of the lab

OR: Mike reads Coase

This is the 4th in a series of posts wherein I attempt to apply economics principles to neuroscience. Econoneuroscience, if you will. Previous posts covered transaction costs, specialization, and currency.

Why is most research done in labs of five to fifteen people? Certainly, one can see why labs don't just consist of a solo PI: people working together get more done than one person alone. On the other side, one can see why there aren't just a few mega-labs of hundreds of people. Ignoring the issues of funding, or finding qualified people, coordinating that many people is a task in itself. But why have labs settled into this middle ground of size?

The question of why companies exist, and what factors influence their size, has been bandied about in economics for over a century. The seminal paper on this topic is probably Ronald Coase's The Nature of the Firm. His basic argument is that firms (labs) exist to minimize transaction costs, and grow to the point at which marginal transaction costs equal the cost of organizing within the firm. In this post, I will first summarize Coase's paper, then consider how it can be applied to neuroscience.

Coase's Nature of the Firm

(If you prefer a more straightforward summary, Wikipedia is decent.)

Coase starts by observing that in capitalist economies, prices help individuals and firms decide what to buy; in other words, resources are allocated by the price mechanism. For example, if I am a scientist, and Aperture Research will pay me more money than Black Mesa, all else being equal, I will work for Aperture Research. In contrast, firms are not capitalist societies, and do not have a price mechanism; resources are allocated by managers. That is, if my boss transfers me onto a new project, I generally say yes. Quoting DH Robertson, in the sea of market prices these top-down managed firms are like "lumps of butter coagulating in a pail of buttermilk."

If, in general, the price mechanism is more efficient at allocating resources than planning, why do firms exist? Coase first considers two odd ideas. First, he suggests that someone might take a pay cut in order to be managed by someone else, but rejects this idea since people like to "be their own master." (Interestingly, I think there is some validity to this idea. People often trade the inconsistent, but potentially higher, income of being a freelancer or consultant for a guaranteed salary.) He also suggests the converse, that people may take a pay cut to manage other people. Again, however, this is contrary to reality, where managers get paid more than employees.

Eventually, he flat out states that firms exist in order to minimize transaction costs. For example, rather than hiring employees daily, and negotiating a contract for each day, firms employ people for months or years. And when you hire a long-term employee, you reduce the cost of ensuring the work is done right, since the employee has a history. As a modern bonus, firms are a great way to minimize taxes, since "purchases" within a firm are untaxed, while outsourced purchases are taxed.

If firms reduce transaction costs, why aren't all workers organized into gigantic firms? Coase proposes that the cost of coordination increases as the firm size increases. Without the guidance of prices, a manager makes mistakes in resource allocation, and these mistakes multiply as his or her attention is spread thinner. These coordination costs also increase with other parameters, like the spatial size of the firm. This is why two firms that do the same thing can exist in different cities, while firms in the same city specialize in different products. Technologies that reduce distances, like phones, airplanes, and the internet, would theoretically allow for the organization of larger groups of people.

So why do firms exist, and how large can they get? Firms exist because they reduce transaction costs, and they will reach a size such that marginal transaction costs are equal to the cost of coordination.
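(To make the marginal condition concrete, here is a toy sketch, with entirely made-up cost curves that are mine and not Coase's: keep bringing transactions in-house as long as the market transaction cost exceeds the marginal cost of organizing them internally.)

```python
def optimal_firm_size(transaction_cost, organizing_cost, max_size=1000):
    """Grow the firm while the n-th transaction is cheaper to organize in-house
    than to buy on the market.

    transaction_cost(n) : cost of buying the n-th transaction on the market
    organizing_cost(n)  : marginal cost of coordinating the n-th transaction internally
    """
    n = 0
    while n < max_size and organizing_cost(n + 1) < transaction_cost(n + 1):
        n += 1
    return n

# e.g. a flat market transaction cost, with coordination costs that rise with size
print(optimal_firm_size(lambda n: 10.0, lambda n: 0.5 * n))  # -> 19
```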

The nature of the lab

Given this framework, I would like to consider the organization of science from the perspective of labs-as-firms, and funding agencies-as-firms.

Historically, the size of labs has continually increased. Ramon y Cajal typically published as sole author, there are many famous dyads from the post-war period (Fatt and Katz, Hodgkin and Huxley, Hubel and Wiesel, etc.), and even as recently as ~1990, papers were published with only a few authors. Today, many papers have at least a half-dozen authors.

So what does the typical lab do in-house, versus what does it outsource? The most obvious thing labs do themselves is experiments, as this allows them to control all the variables. If a lab wanted to outsource its experiments, each time it wanted to try something new, it would have to enumerate the parameters, and trust the outsourcing agency to follow its instructions. Similarly, data analysis and light software development also fit under this umbrella, in that they are done often, and with slightly different parameters each time. (Computational neuroscience labs are an interesting corner case, in that they often outsource data collection, and mine existing data sets for new insight. This allows them to reduce costs by not performing experiments, and forces them to specialize in analysis to generate comparative advantage.) Labs do sometimes outsource data acquisition, which we call collaboration. Collaborations are usually limited, however, to reduce transaction costs. For example, if I needed another lab to do Western blots for me every 6-12 months, I would collaborate; if I needed them monthly, I would do them myself.

Labs outsource many non-scientific tasks. Some tasks are capital intensive, like virus development, or the fabrication of recording amplifiers. Other labs and firms specialize in these tasks, decreasing their costs, and we purchase them so infrequently that the transaction costs here are relatively low. On the cheap side, commodities like pipette tips or wild-type mice are outsourced, too. Here again, other firms are able to specialize in production, and compete with each other to lower highly visible, known prices.

There are a small number of tasks that fall somewhere in between. The Carleton lab has developed our own olfactometer, since there are none commercially available (to my knowledge). Yet we don't do all the development ourselves, but rather work together with the department's shop, which provides expertise in electrical engineering. I have also argued in the past that we could outsource our transgenic line maintenance, but that would require a reorganization of how science is done.

In general, I would argue that given their funding constraints, labs do a good job of finding their optimal size, and outsource the correct tasks.

The nature of the funding agency

What is the next level up from the lab? Some might argue that a department is a firm, but I think departments are more like holding companies: they have lots of disparate labs that occasionally coordinate, but they're more like lumps of butter than a stick.

Instead, I think the most interesting large firm-like organizations in science are funding agencies. The NIH, for example, has the job of disbursing large amounts of money to try to improve human health. Some of this money it disburses in a top-down, firm-like manner through its intramural funding program. As I understand it, for the internal programs, one gets a funding level for a period of ~5 years; at the end of those five years, one gets reviewed, and receives a new funding level going forward.

For the rest of the money, the NIH funds outside labs through competitive grants. And as someone who has applied for these grants, I can tell you the transaction costs here are high. For my grad school fellowship, I probably spent a man-month (or is it person-month now?) writing it, making figures, and discussing it with my boss. I understand that full-fledged R01s can take many months of time. For all of this time spent, one has only a chance of getting funded. And from the NIH's perspective, they spend a lot of time, through their program officers and review committees, trying to figure out who to give money to.

From the Coasian perspective, then, Janelia and Max Planck have the right idea of giving trusted scientists a large chunk of money under intermittent review. Rather than engaging in the high transaction cost process of reviewing applications, they simply hire people. Thinking about it from this view, I actually agree with their policy of preventing people from applying for grants. I wouldn't want scientists I'm paying to perform research to spend their time creating transaction costs applying for more money. Just perform the research, and if you do well enough, you'll get more money at the next review.

Monday, July 1, 2013

My friend Clay

Summer courses have started at Woods Hole, so it's time for another embarrassing story.

I was a tech for the imaging section of the neuroscience summer course around five years ago. I arrived a week early, during the e-phys section, to incubate some slice cultures. One evening after work some people from the course headed over to the local bar, the Kidd, and one of the physiologists introduced me to his older friend Clay.

"What do you study?" Clay asked me.

"I study AMPA receptor trafficking during LTP. Do you know much about it?"

"A little bit, but fill me in on the details."

I went on to explain that there are two main subunits of AMPA receptors at the synapse, GluA1 and GluA2. During LTP there is a change in inward rectification, which means that GluA1 receptors specifically are inserted into the synapse.

"Oh yeah?" Clay asked.

Seeing that he might not remember the intricacies of rectification, I explained in detail what inward rectification is, and how a positively charged amino acid in the receptor pore prevents positive ions from flowing in.

The next day I found out that Clay was in fact Clay Armstrong, one of the first people to study how ion pores affect rectification in potassium channels.

Monday, January 28, 2013

Minimal optical stimulation

The Isaacson lab just published a paper about feedback connections from piriform cortex to the olfactory bulb. To do this, they expressed ChR2 in cortex, then cut slices of the olfactory bulb, patched various cells, and photostimulated the cortical axon terminals. Standard optogenetics in 2013.

They quantified the strength of the cortical feedback onto various cell types, and they found that the EPSCs were stronger onto one cell type, deep short axon cells (dSAC), than another cell type, granule cells (GC). Curious to know whether the dSAC current was larger due to convergent input from multiple fibres, or simply because individual synapses were stronger, they employed a new technique: minimal optical stimulation.

Photostimulation of cortical fibres yields larger currents in dSACs (~300 pA) than GCs (~30 pA).
From Boyd et al., 2012.
Minimal (electric) stimulation

Before writing about minimal optical stimulation, a brief history of minimal (electric) stimulation. I did some Google Scholaring, and it appears that the first paper to use minimal stimulation was from McNaughton, Barnes, and Andersen in 1981. McNaughton was interested in the strength of synaptic connections onto granule cells in the hippocampus. As they note, "While there exist abundant anatomical data on numbers and kinds of afferent fibers making synaptic contact on many types of neuron, these data are, in only a relatively few cases, matched by quantitative physiological data on the efficacy of these synapses..." Given all the recent discussions about the utility of the "connectome," it is interesting to see the same questions being asked thirty years ago.

McNaughton wanted to measure the synaptic currents of single fibres, but the fibres were too small to be isolated at the time. To attack the problem from a different angle, they recorded from a cell, then placed a stimulating electrode in the fibre bundle, and gradually increased the strength of stimulation. For some stimulation locations, they saw a gradual, ramped increase in the evoked response, presumably due to the recruitment of increasing numbers of fibres (below left). In other cases, they saw a step-wise increase in synaptic current, which remained steady for above-threshold stimulation (below right). The step-wise increases were presumably due to the recruitment of single fibres.
Gradual increases of stimulation intensity can either cause a gradual increase in synaptic potentials (A), or step-wise increases in synaptic strength (B).
From McNaughton et al., 1981.
Thus (I believe) was born minimal stimulation: the stimulation of fibres at low intensity (and with a high failure rate), such that you are probably stimulating only a single fibre. The moniker "minimal stimulation" itself was not widely used until ~1990, when Malenka, Richard Tsien, and Charles Stevens published a few papers using the technique.
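(The analysis that goes with minimal stimulation is simple: call trials whose amplitude is indistinguishable from baseline noise "failures," and take the mean of the successes as the single-fibre amplitude. A sketch, with my own arbitrary 3-SD noise threshold:)

```python
import numpy as np

def single_fibre_estimate(amplitudes, noise_sd):
    """Estimate the single-fibre response from minimal-stimulation trials.

    amplitudes : evoked response amplitudes (pA), one value per trial
    noise_sd   : standard deviation of the baseline noise (pA)
    """
    amplitudes = np.asarray(amplitudes)
    successes = amplitudes[amplitudes > 3 * noise_sd]  # trials that clear the noise
    failure_rate = 1 - len(successes) / len(amplitudes)
    return successes.mean(), failure_rate
```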

Minimal optical stimulation

Ok, back to Boyd. They wanted to know whether the large dSAC EPSC was due to a single large fibre, or convergent input from multiple fibres; phrased differently, they wanted to know what the single fibre aPCx->dSAC strength was. Since this is 2013, and we have optogenetics, they used minimal optical stimulation (UPDATE: I originally thought this was the first paper to use this technique, as a Google search did not reveal other papers using the term "minimal optical stimulation." However, Sam Reiter in the comments mentioned this paper by Franks from the Axel lab, which used the technique without naming it. Perhaps there are even more papers out there I am unaware of. Remember folks: always name your techniques.). They write,
In these experiments, we reduced light intensity to the point at which clear failures of synaptic responses were observed on > 50% of trials and we measured the average amplitudes of successes in each cell... The average amplitude of the single-fiber EPSC was actually somewhat larger for inputs onto GCs compared to dSACs (29.8 ± 4.6 pA and 17.0 ± 3.8 pA for GCs (n = 17) and dSACs (n = 10), respectively; K-S test, p = 0.04). Together, these data suggest that dSACs receive stronger excitation than GCs due to a higher convergence of feedback inputs.
Minimal optical stimulation of GCs shows that they have a larger single-fibre current than dSACs.
From Boyd et al., 2012.
While they did not quantify this, the similarity between the "full" and single-fibre currents onto GCs implies that they receive cortical feedback from just one neuron. In contrast, dSACs could receive cortical feedback from up to twenty different cortical cells.

I do have a few questions regarding this result. First, and trivially, why did they use a K-S test instead of a t-test? Did the variability of the GC EPSCs mess up the t-test? Second, to what degree is the minimal optical stimulation reliably activating the same cortical fibre? For GCs, it is clear that they are receiving input from just a few fibres. For the dSACs, where the cells are presumably receiving input from multiple fibres, it seems possible that each photostimulation could activate a different fibre. If this were true, minimal optical stimulation would not be measuring single-fibre conductances, but rather the average single-fibre conductance. You might be able to detect this by comparing the variance of the GC conductances to that of the dSACs; if the minimal optical stimulation were stimulating different fibres, you would expect the variance to be higher. Finally, given this single result, I am unclear on what role multi-vesicular release could be playing at the dSAC synapse. It seems possible, if somewhat unlikely, that dSACs receive input from just a few fibres, but that each fibre has a high dynamic range, and is able to release multiple vesicles.
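(The variance comparison I am suggesting would be easy to run on the success amplitudes. A sketch with made-up numbers standing in for the real data, using Levene's test for equal variances:)

```python
from scipy.stats import levene

# hypothetical single-fibre success amplitudes (pA), one value per cell
gc_success_amps = [28.1, 31.5, 26.9, 35.2, 30.4]
dsac_success_amps = [12.3, 22.8, 9.7, 25.1, 14.6]

# if each flash activated a different fibre in dSACs, their success amplitudes
# should be more variable than the GCs'
stat, p = levene(gc_success_amps, dsac_success_amps)
print(f"Levene's test for equal variances: W = {stat:.2f}, p = {p:.3f}")
```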

I wonder how this technique could be improved. As with minimal electric stimulation, with minimal optical stimulation you never know which axon or synapse you're activating. However, it is much easier to move a light spot than an electrode, so it should be possible to reduce the area and vary the location of photostimulation to reduce the potential number of fibres. Of course, if your photostimulation is so precise that you can identify the actual axon you are stimulating, it is no longer a minimal stimulation technique, but targeted stimulation of individual fibres. This may still be useful given the problems with 2-photon stimulation of channelrhodopsin.

In any case, I thought minimal optical stimulation was a nice blend of an old method with a new technology. Minimal electric stimulation is still in wide use today, and perhaps minimal optical stimulation will soon be as well. Hopefully someone will more extensively explore the potential of this technique.

References

Boyd AM, Sturgill JF, Poo C, & Isaacson JS (2012). Cortical feedback control of olfactory bulb circuits. Neuron, 76 (6), 1161-74 PMID: 23259951

McNaughton BL, Barnes CA, & Andersen P (1981). Synaptic efficacy and EPSP summation in granule cells of rat fascia dentata studied in vitro. Journal of neurophysiology, 46 (5), 952-66 PMID: 7299453

Thursday, January 3, 2013

What if I call you

It's grad school interview season, so here is a story from my first interview at the University of Pennsylvania.

I quickly fell in love with Penn. The campus was urban, like Case, but had some Ivy-status je ne sais quoi. I took an evening to see a game at the Palestra, a temple of college basketball. Ate oily cheese-steaks and gawked at medical curiosities at the Mutter Museum. Philly was perfectly grungy.

The interview was not perfect. In undergrad, I modeled intracellular calcium dynamics. Kwabena Boahen asked me what the shortcomings of my model were, and I had no good answer. I have no doubt that I was an arrogant know-it-all, too, like many young scientists.

In general, though, things went well from my perspective.

At the goodbye party, I ran into Mikey Nusbaum, the director of graduate studies. Nusbaum, by all accounts, was a phenomenal DGS, and genuinely interested in student welfare. He was also something of an eccentric scientist, wearing an earring, raising horses, and letting people call him Mikey.

Like Nusbaum, some of my friends call me Mikey, which led to this exchange:

Me: "I noticed people call you Mikey. My friends call me Mikey too."

Nusbaum: "Oh, I really don't care what people call me."

Flash to freshman year of college, when there were two Steves in my suite, and I was trying to figure out what to call them. One of the Steves said the same thing as Nusbaum, "I don't care what you call me." And I responded the same way at Penn as I did as a freshman.

Me: "What if I call you jackass?"

I didn't get in.