Monday, August 29, 2011

Annals of Underwhelming Papers: The Pink Panthers*

As I explore the background literature on taste perception, I try to read deeply rather than broadly. Most recently, I read a batch of papers about taste coding in the brainstem. Taste information goes from taste bud cells on the tongue to the nucleus of the solitary tract (NST) in the brainstem, then to the parabrachial nucleus (PBN), and then to the thalamus and onward (in primates, taste info does not travel via the PBN). A few labs have recorded from these areas in primates and rats, most notably David V Smith's. Today I'm going to briefly describe one of his papers, and why it left me unsatisfied.

Stim and Response

A couple posts ago, I described how the arcuate nucleus of the hypothalamus regulates feeding. The arcuate nucleus does not directly project to any taste areas, but does project to the lateral hypothalamus, which then projects to NST and PBN. In today's paper, Cho et al recorded from NST while stimulating the lateral hypothalamus.

They recorded 99 taste responsive neurons in NST via glass micropipette (they do not say how many total neurons they recorded from). To stimulate LH, they simply stuck an electrode in there, and stimulated with square pulses at 0.33 Hz. Approximately half (49/99) of the taste-sensitive NST neurons responded orthodromically to LH stimulation. The taste receptive fields and LH sensitivity were distributed across all taste modalities.

Taste responses of LH-responsive and non-responsive neurons. All taste modalities are represented in both groups.
From Cho et al, 2002.
Of the hypothalamic stimulation responses, a majority were excitatory (43/49), while the rest were inhibitory. Only two neurons responded antidromically to stimulation. To rule out fibres of passage, they applied a glutamate agonist in the LH (which activates cell bodies but not passing axons), and saw that it could still affect the NST. Finally, they stimulated the LH while applying tastes, and found that the combination caused more firing than a tastant alone. This is not surprising, given that a majority of LH input was excitatory.

Le Pink Panther

This is still a state-of-the-art experiment for recordings from NST, so why is it disappointing? In the age of molecular mouse models, electrical stimulation is antiquated. There are multiple cell types in the LH, and stimulating all of them at once is just too dirty. This nine-year-old paper is already outmoded.

It reminds me of old movie comedies, like The Pink Panther.  My dad loves those movies, and as a kid I liked them too. But if you watch them now, they are predictable, obvious.

Most papers inevitably turn pink, as we stand on giants' shoulders. New methods outdate the old, and people are just more thorough now. More famous examples of Pink Panthers would be the early papers in LTP by all the big bears like Malenka, Malinow, Kauer, etc., where they applied a few antagonists and called it a Science paper.

Other old papers still hold up in a timeless way, like Fatt and Katz, or Hubel and Wiesel. While they're simple, and have been surpassed technically, the core results are still clean. We still describe visual cortical neurons in terms of orientation selectivity. These are the Some Like It Hots and Airplane!s of neuroscience.

The Pink Panthers deserve a large, hospitable wing of the Annals of Underwhelming Papers, where they can live out their senescence. I am going to read all of David V Smith's papers, and take from them what I can. And hopefully outdo them.

* As always, my disappointment in a paper should not reflect on the scientists who performed the experiments. Nor do these opinions reflect those of the lab.

Monday, August 22, 2011

Compendium of Analyses, Part II, Ensembles

A couple weeks ago, I listed a variety of standard analyses which can be used to characterize single neurons. Today, I'm going to cover analyses that describe how populations of neurons encode information. These analyses are more complex than the single-cell analyses, and I do not know the details of how they are all implemented. Unlike the single-cell analyses, which are all from a similar Weltanschauung, each population analysis requires a slightly different perspective.


Cross-correlation

The simplest population analysis requires looking at the smallest population: two neurons. One way to do this is via cross-correlation, which answers the question, "how often do two neurons fire near each other in time?"

To do this, you start with the spike trains of two neurons.  For each spike neuron A fires, you identify the spikes that neuron B fires around the same time, and note the time difference. As you repeat this, you will build a histogram of these time differences, centered on t=0 lag. If two neurons' spikes are uncorrelated, the histogram will be flat, as the neurons fire at random times; if the neurons' firing is correlated, you will see peaks in the histogram.

Cross-correlation between two gustatory cortex neurons. The two neurons' firing is normally uncorrelated (thin traces), but when two tastants are applied, they become correlated (purple and blue lines).
From Katz et al, 2002
Given the simplicity of this analysis, you would think it's trivial to implement, but it's not.  I was playing around with some olfactory data, and looked into using MATLAB's xcorr() function, given there's a neuroscience blog named after it. And xcorr works great, for analog data. However, action potentials are digital. Someone else in the lab had implemented an autocorrelation using pdist(), but that doesn't work for pairs of neurons. So I had to root around the internet for a quick and dirty implementation of cross-correlation for spikes. You would think this would be standard by now.


Local field potentials

When you think about action potentials, it's natural to take a cell-centric view, and think about how ions flow in and out of cells. From an expanded perspective, though, large groups of neurons can significantly affect the electrical milieu around them.  This is called the local field potential (LFP), which you can measure during extracellular recording. The LFP typically oscillates; in the olfactory bulb, the most prominent oscillations are at the gamma frequency.

While the LFP does not reflect population coding in the traditional sense, it does reflect population firing, and the modulatory state.  Modulatory centers can change the LFP's amplitude or frequency, both changing how individual neurons fire, and the environment all neurons fire in.

(Update from July 2012: Looking back on this, it's  embarrassing that I didn't mention the actual analyses you can use to look at LFPs. In any case, for the curious, the basic ones are: power spectrum analyses of epochs (using FFTs), spectrograms (via wavelet decomposition or FFTs), coherence/correlation, and spike-triggered LFPs (and vice-versa)).
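To sketch the first of those, here is a bare-bones FFT power spectrum of an LFP epoch (Python/numpy for illustration; the function name and arguments are my own, and real analyses would use windowing or Welch's method):

```python
import numpy as np

def lfp_power_spectrum(lfp, fs):
    """Power spectrum of one LFP epoch via FFT (fs = sampling rate in Hz)."""
    lfp = np.asarray(lfp, dtype=float)
    lfp = lfp - lfp.mean()                      # remove the DC offset
    spectrum = np.fft.rfft(lfp)
    power = np.abs(spectrum) ** 2 / len(lfp)    # power at each frequency
    freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
    return freqs, power
```

Feed it a gamma-band oscillation and the peak of the returned power lands at the oscillation frequency.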

Population response vectors and PCA

For single-cell analyses, you typically describe neurons in terms of what stimuli they respond to. On the population level, you need to reverse this, and ask: how is a stimulus encoded by the population?

The easy way to do this is to create a population response vector for different stimuli. To do this, you calculate the firing rate of each neuron you recorded from following a stimulus. Then you take all the firing rates, and put them into a vector, which gives you the population spike response. Then you repeat this for different stimuli, or time points. To find out how similar or different the representations of two stimuli are, you just subtract the population vector for one stimulus from the other, and take the magnitude of the difference, the population spike distance.

Schematic of population spike response. Each cell responds to a stimulus (top row). You convert these responses into a single number, the firing rate, then put these responses into a vector, where each row is a cell. You can then look at how the population representation changes over time, or with different stimuli, by subtracting one population vector from another.
From Bathellier et al., 2008
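The recipe above is simple enough to sketch in a few lines (Python/numpy for illustration; the names are mine, and "distance" here is just the Euclidean norm of the difference):

```python
import numpy as np

def population_vector(spike_trains, t_start, t_end):
    """Firing-rate vector (one entry per neuron) in a response window.

    spike_trains: list of spike-time arrays, one per recorded neuron.
    """
    rates = [np.sum((st >= t_start) & (st < t_end)) / (t_end - t_start)
             for st in map(np.asarray, spike_trains)]
    return np.array(rates)

def population_distance(vec_a, vec_b):
    """Euclidean distance between two population response vectors."""
    return np.linalg.norm(vec_a - vec_b)
```

Build one vector per stimulus (or per time bin) and the pairwise distances tell you which stimuli the population represents similarly.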
Population spike vectors can get unwieldy for large numbers of neurons, so people often reduce the dimensionality via principal component analysis (PCA). I have covered PCA before, but the basic idea is that the responses of different neurons in the population vector are correlated, and you can create artificial variables called "principal components" that capture this correlation.  If you are lucky, the first few components will explain a majority of the variance.

Once you have the population vectors (or principal components) for a set of responses, you can really have fun. For example, you can see whether the population spike distance between odors is correlated with their perceived similarity. Or you can see how the principal components of an odor response change over time, forming dynamic cycles (below). Principal components are a great way to make data easier to visualize and manipulate.

This shows the PCA response over time to different odor mixtures in the zebrafish. Trajectories start at the arrowheads. The different stimuli generally follow one of two trajectories.
From Niessing and Friedrich, 2010
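A minimal PCA sketch via the SVD, applied to a trials x neurons response matrix (Python/numpy for illustration; this is the textbook computation, not the pipeline of any of these papers):

```python
import numpy as np

def pca(responses, n_components=2):
    """PCA of a trials x neurons response matrix via SVD.

    Returns the projections onto the first n_components, plus the
    fraction of variance each of those components explains.
    """
    X = responses - responses.mean(axis=0)        # center each neuron
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    projections = X @ Vt[:n_components].T         # low-dimensional trajectories
    explained = (S ** 2) / np.sum(S ** 2)
    return projections, explained[:n_components]
```

Projecting each time bin's population vector onto the first two or three components is exactly what produces trajectory plots like the one above.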
Hidden Markov Models

Another way to think about the population response is that neuronal firing represents a "brain state." For taste, this state could be, "I am tasting something sweet." This idea of neuronal firing reflecting internal states can be represented by a Hidden Markov Model (HMM). You assume that the animal has an internal state that you do not know (it is "hidden"), and try to infer the state from the firing patterns of neurons. The math of how this is done is complex, but it involves positing a set of hidden states, and then iteratively adjusting the state parameters to see which assignments best fit the data.

The beauty of HMMs is that rather than worrying about receptive fields, and firing rates, you simply try to measure the "state" of a set of neurons. This further frees you from trying to guess when states start and end, and lets you find state transitions as they naturally occur. The downside is that it is more difficult to interpret what these states mean in human terms.

A. Spike trains from 10 neurons in GC in response to sucrose. Different states identified by HMM are numbered 1-4. B. Four more example responses. Note that the state change occurs at different times, something that would be missed by typical PSTH analyses. C. Firing rates of the neurons in each state.
From Jones et al, 2007.
Hidden Markov Models have rarely been used in neuroscience. I tried implementing them in MATLAB for some of our data. MATLAB has an HMM function (hmmdecode), but like xcorr(), it is built around a single observation sequence. All of our olfactory bulb recordings are multiple observations of digital data. At some point I'm going to have to write some code myself, or ask the Katz lab for it, to see if it yields anything interesting.*
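To make the state-inference idea concrete, here is a toy Viterbi decoder for a discrete-emission HMM. It assumes you already have transition and emission probabilities (fitting those from data is the hard part, via Baum-Welch); this is a Python sketch of my own, not the Katz lab's code:

```python
import numpy as np

def viterbi(obs, log_trans, log_emit, log_init):
    """Most likely hidden-state sequence for one observation sequence.

    obs: list of emission symbol indices
    log_trans[i, j]: log P(state j at t+1 | state i at t)
    log_emit[i, k]: log P(symbol k | state i)
    log_init[i]: log P(state i at t=0)
    """
    n_states = log_trans.shape[0]
    T = len(obs)
    score = np.zeros((T, n_states))            # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int)  # backpointers for the best path
    score[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans   # cand[i, j]: come from i, land in j
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, obs[t]]
    # trace the best path backwards from the most likely final state
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 1 - 1, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Run on spiking data (with spike counts as the emission symbols), the decoded path is exactly the kind of state sequence shown in the Jones figure above, with transitions wherever the data demand them.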


That's all for today. To be over-reductionist, I would say most analyses are some variation on receptive fields (single neurons) and population response vectors. There are obviously many more analyses out there, like stimulus (odor) prediction, or spectral power, which I will save for when I better understand them. Hopefully this is useful as an inventory of all the simple, standard things you can ask of data.

* It amazes me how much time people spend re-implementing simple techniques. For example, implementing an HMM for multiple observations is a moderately tricky thing, but should be general enough to be useful for anyone analyzing multiple spike trains. Yet there's no code on the internet.

There are some reasons for this. No one wants to put their code out and find there's a typo. And each lab's experiments are different enough that code does not generalize completely. Shoot, I don't have any code out there myself. But reusable code for simple things like HMMs, or cross-correlation, would save man-years of time. Which is why I really hope the NIH develops better software for neuroscientists.

Monday, August 15, 2011

Neuroscience is Computer Science, Part II

Labrigger noted that the NIH has posted a request for information on what tools would speed neuroscience research.  As I've written before, neuroscience research is now heavily dependent on computing tools, and the programming skills of researchers.  As such, I wrote to them regarding what tools I'd like to see developed:
I would like to see a wide variety of software tools developed.
I have performed neuroscience analyses using expensive proprietary programs, or custom-designed software written by amateurs. For example, in my previous lab, we analyzed data using Vector NTI, which costs $1000/year, and has an awful, time-wasting user interface.  In my current lab, we analyze multielectrode data using software written by the Buzsaki lab. This software works, but is slow, requiring a high-powered computer to run overnight to analyze one dataset.
The NIH has already developed one nice piece of software, ImageJ, which is used extensively throughout neuroscience. The programs I would like to see (from my own experience) are an update of ImageJ, a DNA analysis tool (primer design, sequence alignment, etc.), spike identification, and spike clustering. Given the computationally intensive nature of some of these programs, they should furthermore take advantage of the power of modern graphics processors.
The current software tools we use are expensive, slow, and generally inadequate. A small amount of money spent by the NIH developing and standardizing these tools would save researchers money, time implementing solutions themselves, and time actually using the programs.

Thursday, August 11, 2011

Mouse Escapism

If you were a mouse, with human intelligence, could you escape from the lab?

The first step is getting out of the cage. Since mice can't really power their way out of a cage, you'd probably have to wait for a dumb human to pull you out. Cages are normally opened when transferring mice, or during experiments.  Animal facilities are well sealed, so you wouldn't want to make your break during a cage transfer. This leaves experiments.

Many experiments start with anesthesia, usually injected i.p. When I do injections, I often just let the mouse sit on top of the cage for a second before holding it down. This would be the time, as a mouse, to run for it. If the human is more diligent - she doesn't let go of your tail for a moment - you'll have to bite her, and hope she lets go.

Once you're out of the cage, the next step is to get "down the floor, and out the door," as my Dad used to say to get me out of the house to school. Having never been a mouse, nor read the relevant scientific literature, I don't know how far a mouse can fall without injury. Let's estimate 50cm, or about 1.5'. The counters in my lab are about 1m tall, so you'd need at least one step in between. The best opportunities would probably be garbage cans, partially opened drawers, or stepping stools. The Carleton lab has a rolling paper towel cylinder that's about 50cm off the ground, which could work as well. You could grab the towel, then jump to the floor like Rapunzel.

Of note, when running as a mouse, it's best to stick to the edge of the room, and keep your tail close to your body. Tails may be good for balance, but they're even better for getting grabbed.

Now you're on the floor. Hopefully, your former handler is freaking out, rather than being mindful and closing the door. You need to break for the door, hopefully under the awnings of drawers or refrigerators. Time is essential here, since a closed door means a short trip back to the cage, and a vengeful death.

Once you're in the hallway, you can afford to be more patient: no one can lock down a whole floor of a building. The next goal is getting out of the building. As a human, stairs would be the best way to avoid detection, but as a mouse, you probably don't want to climb down flights of stairs, even if the steps are only 10cm each. You must take the elevator. But you don't want to go down the elevator now, in the middle of the day, with people around. No, you should wait until evening when the maintenance people arrive, with their lumbering wheeled garbage bins that you can hide under.

So you're in the hallway, and want to wait until evening before riding the elevator to freedom (a rodent Underground Railroad, if you will*). You need to hunker down. In the US, this would be difficult, as the hallways are just walls and doors. But here, in Geneva, you are in luck! Space is so valuable that they put lockers in the hallway, but the lockers don't snugly fit their niche. As a black mouse (oh please don't be blanche or agouti!), you should be inconspicuous. In the gap behind a locker you can make your souris refuge.

As a Maus without a watch, you'll have to be alert for evening's onset, namely a janitor and his cart trundling by. As he passes, scurry out, and use the garbage cart for cover; you may even hitch a ride, if you dare. You'll need to be a little lucky, and hope he takes the elevator directly to the basement. From the basement you are almost free! Be calm, find a nook, and wait for the humans to dwindle-dawdle off. Then saunter to the garage doors, find a gap, and slip into the world. Now instead of running from humans, they will run from you! Only now you must worry about other, more primal predators.

* Ok, that may have been too much.

Monday, August 8, 2011

Walk Along the Paper Trail: Trail of T1Rs

Taste receptors on the tongue transduce the chemical information of food into neuronal signals.  As of now, people have identified a wide variety of receptors for sweet, sour, salty, bitter, umami, fat, carbonation, and water. Three of the taste qualities - bitter, sweet, and umami - are transduced by G-protein coupled receptors. This is the story of how the sweet and umami receptors were discovered.

T1R, the third

In the 70s, multiple labs reported that different strains of mice had different sensitivities to saccharin, and dubbed the more sensitive mice "Sac" tasters.  They eventually traced the responsible genes down to the distal end of chromosome four, which they dubbed the "Sac" locus.

Circa 2000, the Zuker and Buck labs sifted through the genome, and reported the discovery of a set of putative taste receptors. Gene clustering and immunostaining showed that these receptors could be broadly divided into two groups: T1Rs and T2Rs (Taste # Receptors). One of the T1Rs, T1R3, was traced to the Sac locus.

Nelson et al (2001) tested whether T1R3 was indeed the Sac gene by knocking in the whole Sac locus into "non-taster" mice.  They then tested the mice's sweet sensitivity, and found that the Sac/non-taster mutant mice had a lower threshold for sweet tastes, turning them into taster mice.

Sac taster mice have a low threshold for saccharin (open red circles). Non-taster mice have a high threshold (filled black circles). Non-taster mice with knocked in Sac locus have a low threshold (filled red circles).
From Nelson et al 2001.
They then performed in situ hybridization, and found that T1R3 was expressed all over the tongue, while T1R1 was expressed at the front of the tongue, and T1R2 was expressed at the back of the tongue. Notably, T1R1 and T1R2 were never expressed alone, but always in combination with T1R3.

T1R, part deux

They next tried to find what these receptors recepted.  To do this, they expressed the receptors in HEK cells. Mouse T1Rs did not express well in the human HEK cells, so they used rat T1Rs instead. Since they did not know the G-proteins these receptors were coupled with, they used the promiscuous G-proteins Gα15 and Gα16. Then to read out the activity, they detected calcium with Fura-2.

They tested each receptor individually, but could not detect responses in cells with individual receptors.  Then they moved on to combinations of receptors, starting with T1R2+3, which responded to a variety of sweet tastants like sucrose (top left, below). For controls, they left out the G-proteins, and saw no response.
T1R2+3 responds to sweet tastants. (Top right) Dose-response curve for sweet tastants. (Bottom) Timecourse of [Ca] following tastants application.
From Nelson et al 2001.
Mice have different sensitivities to sweet compounds, and Nelson reasoned that this could be due to the receptors themselves.  They graphed the receptors' dose-response curves, and found that the receptors' EC50s generally agreed with the behaviour (top right, above). For example, sucrose has an in vivo threshold of 20mM, and in the HEK cells it was ~50mM. They also sloppily characterized the kinetics of the response, showing it had low latency (<1s), a slow decay, and partial inactivation during prolonged exposure.

T1R, 1

When they first reported the results in 2001, they could not get heterologous expression of T1R1, but based on its similarity to T1R2 and T1R3, thought it was a sweet receptor.  Six months later, though, they reported T1R1's function.

While most people are familiar with the tastes of sweet, sour, salty, and bitter, few people can name umami. Umami was described around 1900; it is the "delicious" taste elicited by amino acids, most familiar in daily life as MSG.

In 2002, Nelson used the same heterologous expression system to look at T1R1+3, and found that it could respond to some L-amino acids (figure below, right panel). For reasons that aren't clear, researchers had found that purines like IMP could sensitize responses to L-amino acids, which was recapitulated here. Notably, IMP alone does not elicit a response.

Calcium responses of cells expressing T1Rs. b. Cells expressing T1R2+3 respond to "sweet" tasting amino acids, but not other amino acids. c. Cells expressing T1R1+3 respond to L-amino acids, but not sweet amino acids. This response is sensitized by purines like IMP.
From Nelson et al 2002.
While most amino acids taste like umami, some D-isoform amino acids are perceived as sweet. They tested these D-isoforms on the T1R1+3 expressing cells, and found they did not elicit a response.  However, application of D-amino acids to T1R2+3 cells did elicit a response. The percept and the cell biology match.

In the final figure of the paper, they performed a set of random little experiments. First, they looked at how T1R3 can modulate sweet responses in the taster vs non-taster mice. Their hypothesis was that non-taster T1R3 could not form stable heteromers with T1R2, but could with T1R1. They expressed T1R1+3 and T1R2+3, using both taster and non-taster mutants for T1R3, then immunoprecipitated them using T1R1 or T1R2. They then western blotted for T1R3, and could see it in bands for both taster and non-taster pull-downs (panel a). T1R3 was perfectly capable of forming dimers with both T1R1 and T1R2. They also tested non-taster T1R3's effect on umami, and found there was none (panel b).

a. Pull-downs of T1R1 or T1R2 also bring down taster and non-taster T1R3. b. Non-taster T1R3 only affects the sucrose response. c. Human T1R1 is highly sensitive to MSG (open circles). This is further sensitized by IMP (closed circles). Other amino acids shown in grey. d. Mouse T1R2 does not respond to aspartame.
Having done all the experiments so far on mouse and rat T1Rs, they performed a few on human taste receptors. As mentioned before, MSG strongly elicits an umami response, so they tested human T1R1+3's response to MSG, and found it had a low threshold compared to other amino acids (panel c). It's also known that mice cannot taste aspartame (an artificial sweetener) while humans can.  This was shown in HEK cells as well, where human T1R2+3 could detect aspartame, but mouse T1R2+3 could not.


These papers present a nice, clean set of experiments which have held up to this day. It's amazing there were only six months between the papers. In the first, they claimed problems with expressing T1R1 in human cells (which I tend to believe, given they called T1R1+3 a sweet receptor), but apparently that was resolved.

In the introduction and discussion, they noted that perceptually, there are far fewer sweet tastants than bitter ones. People speculated that this would imply a smaller number of sweet receptors than bitter.  This was found to be true: there is only one sweet receptor, while there are ~20 bitter receptors.

Another point of interest is the dissimilarity between human and rodent T1Rs. They plotted the similarity of various G-protein coupled receptors in humans and mice, and most of them are >90% similar. In contrast, the T1Rs in mice and humans are only ~70% similar. These differences have functional implications, like how humans can taste aspartame, but mice cannot. They speculate these differences could reflect the different dietary concerns of the animals, while neural circuits need to be reliable. Certainly there is less natural selection against animals with weird taste than animals with malfunctioning synapses.

Most G-protein coupled receptors are conserved in humans and mice. T1Rs, however, have some variation.
From Nelson et al 2001.

Nelson, G., Hoon, M., Chandrashekar, J., Zhang, Y., Ryba, N., & Zuker, C. (2001). Mammalian sweet taste receptors. Cell, 106(3), 381-390. DOI: 10.1016/S0092-8674(01)00451-2

Nelson, G., Chandrashekar, J., Hoon, M., Feng, L., Zhao, G., Ryba, N., & Zuker, C. (2002). An amino-acid taste receptor. Nature, 416(6877), 199-202. DOI: 10.1038/nature726

Thursday, August 4, 2011

Paper Flashcards

I enjoy finding new "life hacks," even if most of them don't stick. After reading Checklist Manifesto, I lightly binged on writing protocols/checklists for the new lab, since most were in French. Now that the protocols are routine, I consult and update them less.  One of my GTD projects should be "Write short checklists for protocols."

This week I'm trying out a new hack, using the flashcard program Mnemosyne, which I found at the end of a blog trail.  Mnemosyne's flashcards come with a twist, which exploits how people create long-term memories. People learn best when they are repeatedly exposed to a subject over the course of days and weeks, rather than cramming it into a few hours or one day. This phenomenon is called spaced learning.

Rather than showing you random flashcards, Mnemosyne utilizes spaced learning by scheduling flashcards for you over a period of days and weeks. Whenever you see a flashcard, you rate how well you know it from 0-5.  If you remember the card well (e.g. rating 4-5), Mnemosyne will not show you the card again for weeks.  However, if you remember it poorly (e.g. 1-2), Mnemosyne will schedule it for a few days later, just before you forget it.  In that way, you're constantly reinforcing tenuous knowledge.
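The scheduling logic amounts to something like this toy sketch (Python; this captures the general idea, not Mnemosyne's actual algorithm, and the thresholds and multiplier are my own made-up numbers):

```python
def next_interval(prev_interval_days, grade, easiness=2.5):
    """Toy spaced-repetition scheduler.

    grade: your self-rating from 0-5. Forgotten cards come back
    immediately; shaky cards come back in a few days; well-remembered
    cards come back after ever-longer intervals.
    """
    if grade < 2:                # forgotten: start over tomorrow
        return 1
    if grade < 4:                # shaky: review again in a few days
        return max(2, prev_interval_days)
    # solid: push the review out by a multiplicative factor
    return max(6, round(prev_interval_days * easiness))
```

Each successful review multiplies the interval, so a card you always remember quickly recedes to monthly review, while a troublesome card keeps reappearing.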

To give the program a spin, I downloaded some vocabulary flashcards for French, and hiragana/katakana flashcards for Japanese. It's only been a few days, but it seems to work well.

Then I started thinking about how this could be applied to science. You could create flashcards for signaling cascades, which certainly have a lot of connections to remember. As a systems neuroscientist, I could create cards for anatomical connections, but few of those are relevant at any given time.

One thing that's frustrated me as I learn a new literature is the patchwork nature of my memory for papers. For example, I know T1R# receptors are responsible for sweet and umami taste, but I can't recall whether they share the T1R1, T1R2, or T1R3 receptor. I just google it. Yet I can name fifteen players from the 1997 Cleveland Indians World Series team.*

I've tried to reinforce my paper memory by always taking notes on them in Mendeley.  And I write the Walk Along the Paper Trail series in part to make me remember the papers I write about.  But these efforts don't cover nearly enough papers, nor do they contain the repeated, spaced learning that works well.

I've started to build a Mnemosyne database of papers.  On one "side," I write the first and last authors of the paper and the year. The other side contains the methods and key findings.  Hopefully by doing this I can refer to papers by name, instead of calling them things like, "that Carlson lab paper where they invented the empty neuron system." After a few days' use, I can remember ~15 papers by name, lab, date, and key findings.

The downside of this memory is that it is rote, ignoring connections between papers, and serious analyses of them.  But I think that is an issue best resolved via other methods.

*Ok, I'll try: Sandy Alomar, Jim Thome, (2nd base is killing me, Tony Fernandez? [yes]), Omar Vizquel, Matt Williams, David Justice, Marquis Grissom, Manny Ramirez, Chad Ogea, Paul Nagy, Julian Tavarez, Brian Anderson, Plunk, Jose Mesa, Brian Giles, Assenmacher, Mike Jackson... Ok, 15. Can't believe I forgot Jaret Wright, Orel Hershiser, and Julio Franco.

Monday, August 1, 2011

Compendium of Analyses, Part I: Single Cells

In thinking about data analysis, I realized that I don't have a clear idea of what analyses are possible, or what their advantages and shortcomings are. So I'll just list them all, in two parts: single cell and ensemble techniques. As I continue reading, I will hopefully add more esoteric methods.


Peri-stimulus time histograms

The simplest of all analyses is the peri-stimulus time histogram (PSTH; or for non-stimulated responses, the peri-event time histogram).  You record from a neuron while repeating a stimulus, and sum the spikes following a trigger into bins.

Response of neurons in gustatory cortex to different tastants; the triggering event is a lick.
From Katz et al, 2001. My review.
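The binning recipe is only a few lines (a Python/numpy sketch for illustration; the window and bin size are arbitrary choices of mine, which matters, as we'll see):

```python
import numpy as np

def psth(spike_times, event_times, window=(-0.5, 1.0), bin_size=0.05):
    """Peri-stimulus time histogram: firing rate around repeated events.

    spike_times and event_times in seconds; returns bin centers and the
    rate in spikes/s, averaged over events (trials).
    """
    spike_times = np.asarray(spike_times)
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for ev in event_times:
        rel = spike_times - ev                  # spike times relative to event
        rel = rel[(rel >= window[0]) & (rel < window[1])]
        c, _ = np.histogram(rel, bins=edges)
        counts += c
    rate = counts / (len(event_times) * bin_size)
    return edges[:-1] + bin_size / 2, rate
```

Re-running with different bin_size values is the fastest way to see how much the apparent response depends on the binning.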
PSTHs are quick, dirty, and reveal the obvious, but have significant limitations. First, PSTHs are highly sensitive to bin size.  In the figure above, in each time bin, the neurons respond to different tastants. Bin size can drastically change how neurons are characterized: before Katz's paper, people guessed that only 10% of gustatory cortical neurons encoded taste; using 500ms time bins, Katz estimated 30-40% of neurons were taste sensitive.

PSTHs also ignore the context surrounding an event.  For example, simple PSTHs are inappropriate if stimuli are presented on an oscillating background, unless you otherwise compensate for oscillations. Furthermore, the identity of triggering events may not be obvious for rapidly changing stimuli.  To look at those, reverse-correlation is a better technique.

Spike phase

For neurons that function in an oscillating context, spike timing matters less than spike phase.  The example below is a recording from the olfactory bulb following an odor presentation.  In the PSTH on the left, there is an obvious tonic response to the odor.  On the right is a phase plot, where the breathing cycle was transmuted from seconds into degrees, with inhalation at zero degrees.  From the phase plot, you can see that the preferred spike phase changed over the course of the response. What is especially interesting are cells that maintain the same firing rate over moderate bin sizes (~300ms), but change their phase.

Left: PSTH of OB neuron spiking following an odor. Right: Spike phase of this neuron.  0 degrees = start of inhalation (see breathing plot on left). The neuron's preferred phase of the breathing cycle changes between breathing cycles.
From Bathellier et al 2008.
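Computing spike phase is just a change of coordinates: find the cycle each spike falls in, and express its time as a fraction of that cycle (a Python/numpy sketch; the function name and conventions are mine):

```python
import numpy as np

def spike_phases(spike_times, cycle_starts):
    """Phase (in degrees) of each spike within its breathing cycle.

    cycle_starts: inhalation onset times defining the cycles;
    0 degrees = inhalation onset, 360 = the next onset.
    """
    cycle_starts = np.asarray(cycle_starts)
    phases = []
    for t in spike_times:
        i = np.searchsorted(cycle_starts, t, side='right') - 1
        if i < 0 or i >= len(cycle_starts) - 1:
            continue                         # spike falls outside recorded cycles
        period = cycle_starts[i + 1] - cycle_starts[i]
        phases.append(360.0 * (t - cycle_starts[i]) / period)
    return np.array(phases)
```

Because each cycle is rescaled to 360 degrees, cycles of different durations become directly comparable, which is the whole point of the phase plot.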
Phase plots are obviously only useful in oscillating contexts, although these contexts can vary.  In the hippocampus, Buzsaki has found that there is phase precession relative to the theta oscillation. In olfaction, the phase is usually defined around the breathing cycle. In gustation, it is the licking cycle.

"Receptive fields"

Once you have many PSTHs or phase diagrams for a neuron, the next step is to organize the PSTHs. In other words, you have to identify which stimuli cause the neuron to respond, given some definition of "response." The set of all stimuli that a neuron responds to are its "receptive field."

The term "receptive field" comes from the visual system.  Retinal or visual cortical neurons are excited by stimuli in one location, and inhibited by stimuli in other locations.  You can similarly plot their orientation and direction selectivity.  For the hippocampus, the receptive field is the animal's location in space (its place field).

Receptive fields of V1 neurons. Excitatory regions shown in red, inhibitory in blue. You can see orientation selectivity in panels A-C.
From Tanaka and Ohzawa, 2009.
For olfactory or gustatory neurons, the "receptive field" is simply the set of odorants or tastants that elicit responses (which unfortunately is harder to visualize than a nice center-surround). In the olfactory bulb, there is a lot of disagreement as to how sparse or dense mitral cells' receptive fields are, in part due to the semantics of a "response."  In the anesthetized state, mitral cells respond with large firing rate changes, while in the awake animal these are less pronounced.  However, phase changes do seem to persist in the awake state.

Stimulus (de-)composition

In thinking about these analyses, I remembered the Lin and Katz paper where they used gas chromatography to deconstruct a natural odorant into its components. While this is not an analysis technique per se, I think it's important to consider how stimuli are combined in the brain.  For example, do the component odorants in a natural odorant combine linearly or non-linearly? How do odorant receptor neurons and mitral cells respond to mixtures of odorants? While people have made stabs at answering these questions, the answer remains elusive. In the visual system, retinal and visual cortical neurons have been found to have highly non-linear receptive fields.

A. Gas chromatography of cloves. B. Intrinsic imaging of OB during gas chromatography. The clove response (C) is more than just a sum of B. D&E. An artificial mixture of clove components elicits a map similar to the clove map.
From Lin and Katz, 2006.
Reverse correlation techniques

As mentioned before, PSTHs are not good at detecting responses to rapidly changing stimuli.  In the visual cortex, this has been addressed via reverse correlation techniques.  Rather than look at the spike following an event, reverse correlation looks at which stimuli precede a spike.

This is most useful in visual cortex, where it is possible to present movies, record neurons' responses, and then use reverse correlation to decode their spatiotemporal receptive fields.  That is, besides the spatial information of the receptive field, you also gain information on how the receptive field changes over time; for example, whether a certain area is excitatory at one time and inhibitory at another.
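The spike-triggered average at the heart of reverse correlation can be sketched like so (Python/numpy for illustration; real implementations also correct for correlations within the stimulus, which this ignores):

```python
import numpy as np

def spike_triggered_average(stimulus, spike_bins, n_lags=10):
    """Reverse correlation: average the stimulus preceding each spike.

    stimulus: array of stimulus frames (time x features)
    spike_bins: spike count per frame (same time axis)
    Returns an estimate of the spatiotemporal receptive field
    (n_lags x features), with lag 0 = the frame of the spike itself.
    """
    stimulus = np.asarray(stimulus, dtype=float)
    sta = np.zeros((n_lags, stimulus.shape[1]))
    n_spikes = 0
    for t, count in enumerate(spike_bins):
        if count and t >= n_lags - 1:
            # frames t-n_lags+1 .. t, flipped so lag 0 comes first
            sta += count * stimulus[t - n_lags + 1:t + 1][::-1]
            n_spikes += count
    return sta / max(n_spikes, 1)
```

If a neuron reliably fires one frame after a particular stimulus feature, that feature shows up in the lag-1 row of the average, which is exactly the spatiotemporal structure described above.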

This technique is obviously less useful in olfaction and gustation, where stimuli vary less rapidly. And unlike movies, natural chemical stimuli cannot easily be recorded for playback to an animal.

That's it for the single cell analyses.  The point on the single cell level is to identify which stimuli make a neuron spike, or stop spiking (ignoring subthreshold effects).  Things get more complicated on the population level, where spike timing becomes more important. (I did not mention spike timing separately here, although it can be very precise, both in vision and audition. I would be surprised if spike timing on the 1-5ms scale mattered for chemosensation.)


Bathellier, B., Buhl, D. L., Accolla, R., and Carleton, A. (2008). Dynamic ensemble odor coding in the mammalian olfactory bulb: sensory information at different timescales. Neuron 57, 586-98.

Katz, D. B., Simon, S. A., and Nicolelis, M. A. L. (2001). Dynamic and multimodal responses of gustatory cortical neurons in awake rats. The Journal of Neuroscience 21, 4478-89.

Lin, D. Y., Shea, S. D., and Katz, L. C. (2006). Representation of natural stimuli in the rodent main olfactory bulb. Neuron 50, 937-49.

Tanaka, H., and Ohzawa, I. (2009). Surround suppression of V1 neurons mediates orientation-based representation of high-order visual features. Journal of Neurophysiology 101, 1444-62.