
Tuesday, May 29, 2012

Walk Along the Paper Trail: Val de Laurent, Stopfer Stead

This is part 2 in an arbitrarily long series where I review papers from Gilles Laurent's lab.

Last time I looked at four early papers from Laurent's lab, which described the basics of how insect olfaction works. They showed that when you present a locust with an odor, the LFP in its olfactory centers oscillates at 20Hz, and antennal lobe (AL) neurons fire action potentials in sync with the LFP. Macleod and Laurent further showed that blocking GABA-A transmission abolishes the LFP oscillations, presumably via local neurons (LNs) in the AL.

Today I'm going to cover three papers by Mark Stopfer, a post-doc in the Laurent lab around the turn of the millennium.

And bees do it too!


Laurent started investigating insect olfaction in locusts, which do not have well-defined, manipulable behaviours. To start doing behaviour, Stopfer and Laurent repeated all of the locust experiments in honeybees, which extend their proboscis in response to sugars (proboscis extension reflex, PER). And when I say repeated, I mean almost literally repeated: they showed that the honeybee MB LFP oscillates at 30Hz, that GABA-A inhibitors abolish the oscillations, that projection neurons respond to subsets of odors, etc.

The first three figures of Stopfer et al. (1997) recapitulate previous work, only this time in the honeybee. a, figure from Macleod and Laurent that shows (top) PN response to odors, and (bottom) the LFP-PN correlogram before (left) and after (right) picrotoxin. b, parts of figures from Stopfer et al. (1997) that show the same thing in honeybees.
Having established that the honeybee's olfactory processing is similar to the locust's, they performed a conditioning experiment. They trained bees to associate the odor octanol with sugar, then tested the bees' PER in response to three different odors: octanol (C), a similar odor, hexanol (S), and a dissimilar odor, geraniol (D). Saline-treated bees were able to discriminate between the similar odors, and had a higher PER for the conditioned odor (see below); they did not respond to the dissimilar odor. They then repeated the conditioning experiment in bees with a GABA-A inhibitor in the AL, and found these bees were not able to discriminate between the conditioned and similar odors; the bees were still able to discriminate the dissimilar odor. The impairment wore off after one hour, showing the problem was in discrimination, not learning.

GABA-A blockade reduces odor discrimination. Bees were tested on their PER (y-axis, percent that extend) to conditioned (C), similar (S), and dissimilar (D) odors. Saline-treated bees were able to discriminate the odors, while picrotoxin-treated (PCT, a GABA-A inhibitor) bees were not.
Having griped about the first three figures, I must admit this is a nice result. I would have liked to see a larger odor set, perhaps finding a companion odor to their "dissimilar" odor, geraniol. The Carlson lab, when investigating odorant receptor responses, used dozens of odors and characterized their similarity. Perhaps Stopfer got tired of bees after using 1000(!) of them.

In the discussion, Stopfer claims that these experiments show how important the LFP is in shaping responses to similar odors, but does not explain how. My guess is that since PNs still fire APs in the presence of PCT, the effect of the LFP desynchronization is not in the AL, but downstream in the mushroom body (MB). I believe the paper that investigated this is Perez-Orive et al., 2002.

Short-term adaptation


Two years later, Stopfer and Laurent published a paper using a protocol near to my heart: they simply manipulated the stimulus and saw what happened. They returned to locusts, recorded intracellularly from PNs and LNs, and recorded the LFP in the MB. They then presented odors to naive locusts ten times (1s odor pulse, 0.1Hz), and found that the responses evolved over the ten presentations. During the first trial, the PNs and LNs had strong EPSPs, and PNs fired many action potentials. On subsequent trials, however, the EPSP amplitudes and action potential counts both shrank (see below). While the amplitudes decreased, the precision increased: the LN EPSPs and PN action potentials both became synchronized with the LFP. In contrast to the decreased strength of the AL neurons' responses, the LFP in the MB increased in strength over the trials.

LN, PN, and MB LFP responses all change with repeated stimulation. a, example traces from the LFP in the MB, and intracellular recordings from LNs and PNs. Trial number below. b, during early trials (top), the PN action potentials are not synchronized to the LFP. During later trials (bottom), they are synchronized.
Having observed the basic phenomenon, they then varied the stimulus. They found the adaptation occurred for odor pulses ranging from 0.25-2s, and for inter-pulse intervals of 2.5-25s. They tested whether the adaptation could reset, and found that after a 2 min interval the adaptation was diminished, and after 12 minutes it was nearly gone. They tested whether the adaptation was odor specific, and found that it was, although it could transfer to similar odors. Finally, they tested whether the adaptation was central or peripheral by splitting the antenna in two with a barrier, applying the odor ten times to half the antenna, and then testing for adaptation on the other half; the adaptation was still there, showing it was central (having blocked the nostril of a mouse myself, I would love to see that experimental setup).

Having explored the stimulus space, they then tried to find the advantage of this adaptation. To do this, they created a classic odor predictor, turning the spike trains of PNs into points in an n-dimensional space, calculating the distance from a given trial to a template for each odor, and assigning the trial to the closest odor. To differentiate early and late responses, they used two templates: the first was simply the response to the first trial; the second was the average response of the last three trials. When predicting the odor by comparing to the first trial, the predictor got worse with each trial; when predicting by comparing to recent trials, the predictor got better. Thus, they concluded that while PNs fire less after repeated stimulation, the increase in precision (and thus decrease in noise) increased the information in the system.

Odor prediction diagram, and results. Top, they compared a given trial, a_t, to a template, b_t. For their "first trial" predictor, each trial was compared to b_1. For the "last 3 trials" predictor, each trial was compared to the average of the last three trials. Bottom, odor prediction failure rate for both scenarios. d, the first trial predictor is best for early trials, but quickly gets worse. g, in contrast, the average predictor gets better with each trial.
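To make the template-matching idea concrete, here is a minimal Python sketch of that kind of nearest-template predictor. The Euclidean distance, the dictionary of templates, and all of the variable names are my assumptions for illustration; the paper's actual procedure differs in its details.

import numpy as np

def predict_odor(trial, templates):
    """Return the odor whose template is closest (Euclidean distance) to this trial.
    trial: binned PN spike counts, concatenated across cells, as one long vector.
    templates: dict mapping odor name -> template vector of the same length."""
    distances = {odor: np.linalg.norm(trial - t) for odor, t in templates.items()}
    return min(distances, key=distances.get)

def running_template(previous_trials, n=3):
    """Template for the 'last 3 trials' predictor: mean of the most recent trials."""
    return np.mean(previous_trials[-n:], axis=0)

For the "first trial" predictor the template never changes; for the "last 3 trials" predictor it is recomputed as trials accumulate, which is why it tracks the adapting responses and the first-trial predictor does not.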
While this adaptation is interesting, it is difficult to draw parallels to mammalian olfaction. Each trial here is analogous to a single sniff in mice, and indeed I have found that the odor representation changes with each sniff in mice. However, the time scales are very different. In mice, sniffs are separated by 250ms, not 2.5s; and the adaptation resets in the 10s between trials, not the 2 min shown here.

Furthermore, the increase in precision may not be important for mammals. Mice have been shown to perceive odor identity in a single sniff.

Concentration coding

In 2003, Stopfer published another paper that used a simple protocol to examine the olfactory system. This time, he presented locusts with odors ranging in concentration over three orders of magnitude. They found that the MB LFP and LN EPSPs both increased in magnitude with increasing concentration. In contrast, PN firing rates were flat across concentrations, although PN spikes became more precisely phase-locked to the LFP. This increase in precision could explain how the LFP amplitude increased despite the flat firing rate.

They then focused on individual PNs, and found that PNs could respond completely differently to the same odor at different concentrations. There was no clear trend in these changes.

Four example neurons' response to odors at different concentrations. I leave finding the changes as an exercise for the reader.
Given these different responses at each concentration, they asked how a downstream neuron could possibly identify an odor across concentrations. To do this, they built a population vector for the entire response, binning cells' responses into 50ms bins. They then ran a clustering algorithm to see how similar each odor-concentration representation was. Different concentrations of the same odor clustered together, rather than intermingling with other odors.

Different concentrations of the same odor cluster together. Dendrogram of similarity for three odors (red, blue, and green), at five concentrations (numbers at right). Different concentrations of the same odor generally cluster together.
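For those curious, here is a rough Python sketch of building the population vectors and clustering them. The 50ms bins come from the paper; the use of SciPy, average linkage, and Euclidean distance are my own assumptions, not necessarily what they did.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def population_vector(spike_times_by_cell, t_start, t_stop, bin_size=0.05):
    """Concatenate each cell's binned spike counts (50ms bins) into one long vector."""
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    counts = [np.histogram(times, bins=edges)[0] for times in spike_times_by_cell]
    return np.concatenate(counts)

def cluster_conditions(vectors, labels):
    """Hierarchically cluster one population vector per odor-concentration condition."""
    Z = linkage(np.vstack(vectors), method="average", metric="euclidean")
    return dendrogram(Z, labels=labels, no_plot=True)

If the population code is concentration-tolerant, the dendrogram should group the same odor's concentrations together, as in the figure above.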
They also ran various odor prediction algorithms, and showed odor trajectories in PCA space. The upshot of all of this was that while an individual neuron's response to an odor can vary with concentration, the population as a whole is more "stationary," which would allow decoder neurons to maintain odor identity across concentrations.

In the final figure of the paper, they took a peek at the decoder by recording Kenyon cells in the MB. Kenyon cells could be split into two groups: some cells responded to an odor at all concentrations (30%), while others responded to odors only at a specific concentration (15%).

To date, I believe this is the most complete exploration of how concentration is coded in the mitral cells/antennal lobe. Some work has been done in anesthetized rodents, but the standard now is to record in awake animals. There have also been a large number of imaging studies in glomeruli that show that more glomeruli (and thus ORNs) are active as concentration increases. Whether this would translate into more neurons being active at higher concentrations, a shift to earlier spike times, or some other phenomenon is unclear. How this would be decoded downstream in cortex is also unclear.

That's it for today. Only a dozen or so more papers to go! I like this post-doc centered format, so next time I'll take a look at Rainer Friedrich or Rachel Wilson's work.


Citations


Stopfer M, Bhagavan S, Smith BH, & Laurent G (1997). Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature, 390 (6655), 70-4 PMID: 9363891


Stopfer M, & Laurent G (1999). Short-term memory in olfactory network dynamics. Nature, 402 (6762), 664-8 PMID: 10604472


Stopfer M, Jayaraman V, & Laurent G (2003). Intensity versus identity coding in an olfactory system. Neuron, 39 (6), 991-1004 PMID: 12971898

Thursday, May 24, 2012

Walk Along the Paper Trail: Val de Laurent, Valley Floor

Like Casablanca on an AFI top 100 list, you knew this was coming: the Laurent lab overview. Gilles Laurent is the forefather of the dynamics of olfactory coding, and his neurotree includes descendants like Rainer Friedrich and Rachel Wilson. In the next few posts I'm going to cover the significant papers since he started his lab. Today I will cover results from four papers that initially described the locust olfactory response.

Projection neurons respond dynamically

(The anatomy of the insect olfactory system is fairly similar to the mammalian system. Odorant receptor neurons bind odorants, and synapse onto neurons in the antennal lobe (AL; the analogue of the olfactory bulb). The AL contains two types of neurons: projection neurons (PNs; rough mitral cell analogues) that project to Kenyon cells in the mushroom body (MB; a very rough analogue of olfactory cortex); and local neurons (LNs) that synapse within the AL. For more details you can see this review of insect olfaction.)

PN responses to odors are highly dynamic.
From Laurent, et al (1996).
After a few minor papers, the Laurent lab began cooking in the mid-90s when they recorded extra- and intracellularly from neurons in the AL and MB while presenting locusts with odors. They noted two things: that the LFP oscillates at 20Hz in both the AL and the MB; and that PNs fire action potentials in response to odors not tonically, but dynamically, with periods of excitation and inhibition (see right; Laurent and Davidowitz, 1994; Laurent et al., 1996). Around 10% of PNs responded to any given odor, showing that PNs have an odor receptive field. The PNs' responses were reliable: repeated odor presentations separated by minutes would elicit the same response.

They furthermore discovered that the intracellular responses were correlated with the LFP. The Vm of LNs oscillated in near synchrony with the LFP; in contrast, PNs depolarized around 1/4 cycle after the LNs. When PNs fired action potentials during odor presentation, some of these action potentials were synchronized with the LFP, occurring during the rising phase; at other times, the action potentials fired out of phase (see below; PN2 is phase-locked during the early cycles, but not for cycles 14-16). When PNs fired in sync with the LFP, the phase of the action potentials was the same for all PNs and all odors, which means that odor identity is not encoded in the phase of the action potentials, only in their timing (Wehr and Laurent, 1996). By looking at the phase, they found that while PN responses are reliable on a long time scale, on a cycle-by-cycle basis they are only 20-90% reliable.

PN action potentials are synchronized with the LFP. Top: LFP trace, with labeled cycles. Bottom: PSTH from two PNs. Both PNs fire APs at the same phase.
From Wehr and Laurent (1996).
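As an aside, "phase" here just means where in the 20Hz LFP cycle each spike lands. The papers measured this directly from the LFP cycles; a quick modern way to get something similar is to band-pass the LFP around the oscillation and take the Hilbert phase at each spike time. A sketch, where the band edges and function names are my assumptions rather than anything from the papers:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phases(lfp, spike_indices, fs, band=(15.0, 30.0)):
    """Phase (radians) of each spike relative to the ~20Hz LFP oscillation."""
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    narrowband = filtfilt(b, a, lfp)       # isolate the oscillation
    phase = np.angle(hilbert(narrowband))  # instantaneous phase at every sample
    return phase[spike_indices]            # phase at the spike samples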
Given that neurons fire dynamically, and in sync with the LFP, they next asked whether the firing between neurons was correlated. To look at this, they simply calculated the cross-correlation of the membrane potential between two neurons over time. They found that pairs of projection neurons would become transiently correlated for periods of a few hundred milliseconds.

Cross-correlation between pairs of neurons. The odor was presented for 1s (see bar left), and different pairs of neurons were correlated at different times following the odor.
From Laurent and Davidowitz (1996).
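Something like the sliding-window cross-correlation below captures the idea; the window length, step, and maximum lag are numbers I made up for illustration, not values from the paper.

import numpy as np

def sliding_xcorr(vm1, vm2, fs, window=0.2, step=0.05, max_lag=0.025):
    """Normalized cross-correlation between two Vm traces, in sliding windows."""
    w, s, m = int(window * fs), int(step * fs), int(max_lag * fs)
    rows = []
    for start in range(0, len(vm1) - w, s):
        a = vm1[start:start + w] - np.mean(vm1[start:start + w])
        b = vm2[start:start + w] - np.mean(vm2[start:start + w])
        cc = np.correlate(a, b, mode="full")              # lags -(w-1)..+(w-1)
        cc = cc / np.sqrt(np.sum(a * a) * np.sum(b * b))  # normalize to [-1, 1]
        mid = w - 1
        rows.append(cc[mid - m: mid + m + 1])             # keep only +/- max_lag
    return np.array(rows), np.arange(-m, m + 1) / fs      # (windows x lags), lag times

Plotting the rows as a function of time gives the kind of image in the figure above: transient bands of correlation that appear and disappear over a few hundred milliseconds.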
LNs control the LFP


As mentioned before, the phase of the LNs' voltage preceded that of the PNs by 1/4 cycle. To better understand the relationship between LNs and PNs, they recorded intracellularly from LN-PN pairs, and found that depolarizing current injection into LNs could hyperpolarize PNs. This explains why the LNs' phase precedes the PNs': PNs need to be released from inhibition before firing.

LNs are GABAergic, so to see how LNs affect PN responses to odor, they bathed the antennal lobe in picrotoxin, a GABA-A inhibitor. Picrotoxin reduced the 20Hz oscillations in the LFP, but did not affect the response of PNs. Obviously, since the LFP was reduced, the PNs' action potentials were no longer correlated with the LFP.

Picrotoxin abolishes 20Hz oscillations in response to odor, but not PN action potentials. Top: LFP; middle: PN voltage; bottom: PN-LFP cross-correlation. Left: This PN fires action potentials in response to odor, in sync with the LFP. Right: Following picrotoxin, the neuron still fires action potentials at the same time, but no longer in sync with the LFP.
From Macleod and Laurent (1996).
I'm going to stop here for now. In these four papers from 1994-1996, Laurent and colleagues showed that there are 20Hz LFP oscillations during odor presentation, and that the projection neurons fire action potentials in dynamic groups of neurons. Using pharmacology they found that the 20Hz oscillations are generated by local neurons, but that projection neurons still respond to odors absent the synchronizing oscillations.

These papers from the 90s put in stark relief how much neuroscience has changed. The papers are much shorter. For this post I read five papers that included 28 figures combined, with zero supplemental data. In comparison, when I did retrospectives on the Carlson lab or Katz lab, I could base a post around two papers from the aughts, since each paper contained more experiments. This is not an indictment of Laurent, but rather a comment on changing standards.

Another aspect of the different standards is that these papers were more descriptive and less quantitative than contemporary papers. They were able to state things like, "The temporal structure of the response of individual PNs to a given odor was consistent and reliable," (Laurent et al., 1996, p. 3839) without actually quantifying how reliable each cell was (these techniques, or the computer power to employ them, may not have been widely available, and indeed the Laurent lab eventually developed many of them). This descriptive nature makes the papers feel much less formal, almost like a series of long, thought-out blog posts, each an incomplete fragment of a story. Only the last paper, Macleod and Laurent, feels like a completely new set of experiments that set out to test the system.

Next time I'll cover papers from 1999-2004, which use more varied model organisms and stimulus protocols to explore the dynamics of the response.

Citations

Laurent, G., & Davidowitz, H. (1994). Encoding of Olfactory Information with Oscillating Neural Assemblies. Science, 265 (5180), 1872-1875 DOI: 10.1126/science.265.5180.1872

Laurent G, Wehr M, & Davidowitz H (1996). Temporal representations of odors in an olfactory network. The Journal of Neuroscience, 16 (12), 3837-47 PMID: 8656278

MacLeod K, & Laurent G (1996). Distinct mechanisms for synchronization and temporal patterning of odor-encoding neural assemblies. Science, 274 (5289), 976-9 PMID: 8875938

Wehr M, & Laurent G (1996). Odour encoding by temporal sequences of firing in oscillating neural assemblies. Nature, 384 (6605), 162-6 PMID: 8906790

Monday, May 21, 2012

My recording setup

A few months ago on xcorr.net, Patrick showed how they set up their data acquisition and processing cluster. I picked up a few pointers from it (NoMachine is awesome), I love seeing how people do things, and Methods sections are blatantly inadequate, so here's how I record and process spikes.

Recording

We record from awake, head-fixed mice using Michigan-style probes from NeuroNexusTech. Most of the time we use their 4x2 tetrode array, which has two diamond-configuration tetrodes on each of four shanks (see below).

Following the craniotomy, I use a micromanipulator to enter the dorsal olfactory bulb at a 20-45 degree angle, then move down ~300 um to make sure none of the shanks are touching the skull. The mitral cell layer is ~100-250um below the olfactory bulb surface, which means the lower set of tetrodes often has spikes on them. It's difficult to record from these neurons, however, because even with solid head fixation, the mouse can still generate movement artifacts. To avoid these artifacts, I usually penetrate through the entire olfactory bulb to the medial mitral cell layer, ~1000-2000um deep (depending on penetration angle and anterior-posterior location), basically spearing the olfactory bulb on the recording electrodes. Once the electrodes are in place, I let the brain settle for 5-10 minutes. Since the electrodes are at an angle, and depending on the curvature of the bulb, it is possible to get spikes on both the lower and upper tetrodes at the same time (my record for cells recorded at a single site is 30, while my labmate's is 41).

Recording setup for head-fixed mouse. I cement a headpost to the skull of the mouse using Syntac dental adhesive and Miris 2 Dentin dental cement. The headpost is attached to the recording crane by a simple screw. We record the breaths from the right nostril while presenting the odor to the left nostril. I use a micromanipulator to move the electrode.
To actually record the spikes, we use a Neuralynx Digital Lynx acquisition system and their Cheetah software. One frustration I have with Neuralynx is that when recording on wall power, the system is highly susceptible to line noise, despite our best efforts at grounding and filtering. We continue to use old 12V batteries that they no longer support. For online monitoring of the recording we use LabView, which is honestly a black box to me. At the end of a recording day, we upload the data to a server. A typical experiment can run 5-10GB.

Spike identification

Only two people in the lab perform multielectrode recording, so we each have a "personal" computer. Mine contains an 8-core 2.2GHz Xeon processor and 24GB of memory. Since our files aren't extraordinarily large, the local hard drive has 2TB of space; if it fills up, we just upload the data to a server.

For filtering, spike detection, and sorting, we use the NDManager suite of software originally from the Buzsaki lab. The NDManager suite runs in linux, and my old linux box was crashing intermittently, so I got the wise idea of updating the computer to Debian. However, the newest distro of Debian installs KDE4, while NDManager requires KDE3 libraries. To get around this, I had to uninstall KDE4, install KDE3, pin the KDE3 libraries, then reinstall KDE4 (which I couldn't have done without the help of a linux guru).

I had been running a ~5 year old version of the software, and all of the subroutines have been changed in the new version. The new version of NDManager stores all pertinent information about a recording in an .xml file, including things like file names, sampling frequency, number of electrodes, and electrode groupings. One nice feature of the updated NDManager is that you can assign a single tetrode to multiple spike-detection groups. NDManager also provides access to a set of scripts that filter the data and detect spikes, with their parameters again stored in the .xml file.

So here is, step-by-step, how I turn Neuralynx data into MATLAB-compatible files.

1. Convert from Neuralynx .Ncs files to a wideband .dat file. Here you don't need NDManager, and can just run:
"process_nlxconvert -w -o [output_name] [input_files]"
(Note: process_nlxconvert requires the extension of the Neuralynx data to be .ncs (rather than .Ncs). To convert the extensions, just run "rename.ul Ncs ncs *.Ncs".) I have a script set up to copy files from the server to my analysis computer, and run this program. The limiting step here is our antiquated 100Mbit/s network, rather than CPU power (it amazes me that universities don't have Gigabit set up when Case had it a decade ago). Once this is done, you can look at your data in neuroscope.

For the next four steps, you need to run NDManager to set the parameters in the .xml file. You can then run each script independently from the command line.
2. Downsample the data for the LFP: ndm_lfp. The only parameter is the sampling rate, for me 2713 Hz.

3. Hipass filter the data for spike detection: ndm_hipass. This script runs a "median" filter which, rather than assigning the median value, subtracts the median value. This is indeed a hi-pass filter, but this initially confused the heck out of me.

The default filter width is ~10 samples, and is set up for recordings at 20kHz (I believe). Since our recordings are at 32kHz instead of 20kHz, this filter width is ~0.6ms, meaning the filter was subtracting the spikes rather than the background. Given that I thought it was a normal median filter, and it was filtering out the features, I assumed the filter size was too large, and decreased it. This, of course, made things worse. Once I realized what the filter was actually doing, I increased the filter width to 22, and it seems to be working adequately.
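Here is my understanding of what this filter is doing, sketched in Python (the real implementation lives in the NDManager/ndm tools; the function and parameter names below are mine):

import numpy as np
from scipy.ndimage import median_filter

def median_hipass(signal, width=22):
    """'Median' high-pass filter: subtract a running median instead of keeping it.
    The running median tracks the slow baseline; subtracting it leaves the fast
    transients (spikes). If width is much shorter than a spike, the median tracks
    the spike itself and the spike gets subtracted away, which is the problem above."""
    baseline = median_filter(signal, size=width, mode="nearest")
    return signal - baseline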

4. Extract spikes: ndm_extractspikes. The three important parameters here are the threshold, refractory period, and search window. The threshold is the level above the baseline noise that a spike must exceed to be detected. This will depend on the quality of the filtering above, and the noisiness of your signal. I was missing some spikes initially, so I now use a threshold value of 1.2.

The refractory period is the time after a spike where the program will ignore other spikes. Since I am recording on tetrodes, I don't want a spike on electrode A to interfere with electrode B, so I have this set to a relatively low value, 5 samples.

Finally, there is the peaksearchlength parameter. When the program detects a spike, it verifies that it is indeed the beginning of a spike, rather than the tail end of one. I have found that setting this value too low yields detection of spikes like this:

Example of an incorrectly extracted spike when the peak search window is too small. Top: waveform. Bottom: autocorrelogram.
Right now, I have this value set to 40 samples, or just over 1ms. To verify that the spike detection is actually working, you can load the spikes into neuroscope (load the .fet files into Klusters (see below), then save the .clu file, and open it in neuroscope).
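To illustrate what these three parameters do, here is a toy, single-channel version of the detection logic in Python. This is not ndm_extractspikes itself, just my sketch of the idea, run on a positive-going filtered signal:

import numpy as np

def detect_spikes(x, noise_sd, threshold=1.2, refractory=5, peak_search=40):
    """Toy threshold detector on one filtered channel.
    threshold: multiples of the noise SD a sample must exceed
    refractory: samples to ignore after an accepted spike
    peak_search: samples to scan forward so we align on the true peak,
                 not on a late crossing in a spike's tail."""
    crossings = np.where(x > threshold * noise_sd)[0]
    spikes, last = [], -np.inf
    for i in crossings:
        if i - last < refractory:
            continue                                      # too close to the previous spike
        peak = i + int(np.argmax(x[i:i + peak_search]))   # align to the local maximum
        spikes.append(peak)
        last = peak
    return np.array(spikes, dtype=int)

With too small a peak_search, a crossing in a spike's tail can get accepted as a new spike, which produces truncated waveforms like the one in the figure above.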

5. Spike characterization: ndm_pca. This computes the principal components of each spike waveform, which are used as features for clustering.

All of the above steps in NDManager take 30 min for a typical experiment, and can be run in parallel.

Spike sorting

At this point I have identified spikes on all my tetrodes, and am ready to group them into (hopefully) neurons using KlustaKwik. To run the clustering, I wrote a short script that creates a screen window for each tetrode. Figuring out the exact screen parameters took a bit of Googling:
#!/bin/bash
# Launch KlustaKwik on tetrodes 1-8, each in its own window of a detached screen session.
screen -dmS "multi_KK$1"
for i in {1..8}; do
   screen -S "multi_KK$1" -X screen "$i"
   # "stuff" types the command into window $i; the trailing carriage return runs it
   screen -p "$i" -S "multi_KK$1" -X stuff "KlustaKwik $1 $i$(printf '\r')"
done
Here $1 is the base name of the files to be operated on (KlustaKwik reads the .fet.# feature files and writes .clu.# cluster files), and multi_KK is an arbitrary session name that doesn't start with a number. KlustaKwik uses an EM algorithm, which runs in O(n^2) time. Short recordings of ~30 minutes generally get clustered within 30 minutes. Longer recordings, or tetrodes with a large number of spikes, can take over 8 hours. If sorting is taking too long, you can run the sorting with the "-Subset 2" option, which halves the number of spikes considered for clustering, and reduces the running time to one quarter.

Finally, since the automatic clustering isn't perfect, I run Klusters to finish the clustering. This typically involves deleting clusters that don't contain true spikes, based on waveform and autocorrelogram, and combining clusters by looking at spike waveforms and the cross-correlation between clusters. (This step is surprisingly taxing given how trivial it is. I have a theory for why: while each of these decisions individually is straightforward, you make them constantly, every 5-10 seconds, for as long as you can tolerate it. This induces a low-level form of decision fatigue.) Following this, I have *.res and *.clu text files that contain spike times and cluster IDs, respectively, which can be read into MATLAB.

Monday, May 14, 2012

Navel Gazing: First Blogiversary

It's been 13 months since I started both my post-doc and this blog, so it's time for a performance review.

The blog
In the past year I've published 56 posts, about one a week. At the start I maintained a Mon/Thurs posting schedule, but have since slacked off.

~20 people subscribe to the blog RSS. The most popular posts are the paper summaries, which get a spike of traffic from researchblogging.com, and then strong residual traffic from people Googling the papers. The paper summaries typically receive 50-150 hits, depending on the profile of the paper. The most popular post on the entire blog is my review of the Zuker lab's recent paper on taste hotspots in gustatory cortex, which I published within a month of the paper. That is also my personal pick for best paper summary.

The most disappointing posts, hit-wise, are the data presentations. The current journal system is bullshit (delaying dissemination of information for no gain in reliability) and utterly arbitrary, so I take pride in posting my scientific results in (hopefully) meaningful chunks. I realize these would primarily be of interest to other chemosensory people, but the lack of interest (and feedback) is disheartening.

The weirdest part of blogging for the past year has been the solicitations. One person asked me to post sponsored links, while another wanted to write a guest post. I was like, "Really? You want to sponsor a blog that gets five hits a day? That's worth your (and my) time?"

Journal flashcards
I'm terrible at remembering the specifics of papers, like being able to refer to them by author, so I tried to create journal flashcards using the spaced-learning program Mnemosyne. I kept up with my flashcard review for a few months, before the habit got interrupted, and never restarted. Six months later, I can't recall the specifics of most papers, but I can recall the gists.

Besides spaced learning, the key to flashcards is overlearning: that is, putting more information on the flashcards than you need to know. That way, if you forget the details, you remember the core. For example, I know that a whole host of papers have been published on the roles of AgRP and NPY neurons in feeding behaviour, and that, for example, AgRP knockout mice starve to death, but I can't remember the names of the authors of those papers. Citing them is probably the best way to consolidate that knowledge.


Protocols
The other process I was interested in was writing checklists (viz. protocols), both long (read-do) and short (do-verify). I have a Google doc containing detailed protocols for wet-lab stuff including everything from recipes for anesthesia to passwords for ordering. This has been perpetually useful. I also made a Google doc for data processing and analysis, which I updated frequently at the start. As I continue to write functions, however, I have neglected to update this. Part of it is due to the one-off nature of many data analysis functions, which means I rarely refer to it. I also wrote do-verify checklists for my experiments, but never used them since most mistakes during an experiment are fixable, and I'm working alone, so communication is nonexistent.

I have some more personal reflections on my first year as a post-doc. Those, however, are best discussed over beer.