
Monday, May 21, 2012

My recording setup

A few months ago on xcorr.net, Patrick showed how his lab set up their data acquisition and processing cluster. I picked up a few pointers from it (NoMachine is awesome), and since I love seeing how people do things, and Methods sections are blatantly inadequate, here's how I record and process spikes.

Recording

We record from awake, head-fixed mice using Michigan-style probes from NeuroNexusTech. Most of the time we use their 4x2 tetrode array, which has two diamond-configuration tetrodes on each of four shanks (see below).

Following the craniotomy, I use a micromanipulator to enter the dorsal olfactory bulb at a 20-45 degree angle, then move down ~300 um to make sure none of the shanks is touching the skull. The mitral cell layer is ~100-250 um below the olfactory bulb surface, which means the lower set of tetrodes often has spikes on it. It's difficult to record from these neurons, however, because even with solid head fixation the mouse can still generate movement artifacts. To avoid these artifacts, I usually penetrate through the entire olfactory bulb to the medial mitral cell layer, ~1000-2000 um deep (depending on penetration angle and anterior-posterior location), basically spearing the olfactory bulb on the recording electrodes. Once the electrodes are in place, I let the brain settle for 5-10 minutes. Since the electrodes are at an angle, and depending on the curvature of the bulb, it is possible to get spikes on both the lower and upper tetrodes at the same time (my record for cells recorded at a single site is 30, while my labmate's is 41).

Recording setup for head-fixed mouse. I cement a headpost to the skull of the mouse using Syntac dental adhesive and Miris 2 Dentin dental cement. The headpost is attached to the recording crane by a simple screw. We record the breaths from the right nostril while presenting the odor to the left nostril. I use a micromanipulator to move the electrode.
To actually record the spikes, we use a Neuralynx Digital Lynx acquisition system and their Cheetah software. One frustration I have with Neuralynx is that when recording on wall power, the system is highly susceptible to line noise, despite our best efforts at grounding and filtering, so we continue to run it off old 12V batteries that they no longer support. For online monitoring of the recording we use LabView, which is honestly a black box to me. At the end of a recording day, we upload the data to a server; a typical experiment runs 5-10GB.

Spike identification

Only two people in the lab perform multielectrode recording, so we each have a "personal" computer. Mine contains an 8-core 2.2GHz Xeon processor and 24GB of memory. Since our files aren't extraordinarily large, the 2TB local hard drive is plenty of space; if it fills up, we just upload the data to a server.

For filtering, spike detection, and sorting, we use the NDManager suite of software, originally from the Buzsaki lab. The NDManager suite runs in Linux, and my old Linux box was crashing intermittently, so I got the wise idea of updating the computer to the newest Debian. However, the newest Debian release installs KDE4, while NDManager requires KDE3 libraries. To get around this, I had to uninstall KDE4, install KDE3, pin the KDE3 libraries, then reinstall KDE4 (which I couldn't have done without the help of a Linux guru).

I had been running a ~5 year old version of the software, and all of the subroutines have changed in the new version. The new version of NDManager stores all pertinent information about a recording in an .xml file, including things like file names, sampling frequency, number of electrodes, and electrode groupings. One nice feature of the updated NDManager is that you can assign a single tetrode to multiple spike-detection groups. NDManager also provides access to a set of scripts that filter and detect spikes, whose parameters are again stored in the .xml file.

So here is, step by step, how I turn Neuralynx data into MATLAB-compatible files.

1. Convert from Neuralynx .Ncs files to a wideband .dat file. Here you don't need NDManager, and can just run:
"process_nlxconvert -w -o [output_name] [input_files]"
(Note: process_nlxconvert requires the extension of the Neuralynx data to be .ncs rather than .Ncs. To rename the files, just run "rename.ul Ncs ncs *.Ncs".) I have a script set up to copy files from the server to my analysis computer and run this program (a rough sketch is below). The limiting step here is our antiquated 100Mbit/s network, rather than CPU power (it amazes me that universities don't have Gigabit set up when Case had it a decade ago). Once this is done, you can look at your data in Neuroscope.
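For reference, the copy-and-convert script isn't anything fancy. Here is a minimal Python sketch of the same idea (the server path, local path, and output name are made up, and it assumes rsync and process_nlxconvert are on your PATH):

import glob, os, subprocess

raw_dir = '/mnt/server/2012_05_21_exp1'    # hypothetical recording directory on the server
work_dir = '/data/2012_05_21_exp1'         # hypothetical local working directory

subprocess.check_call(['rsync', '-av', raw_dir + '/', work_dir + '/'])
os.chdir(work_dir)

# process_nlxconvert wants lowercase .ncs extensions
for f in glob.glob('*.Ncs'):
    os.rename(f, f[:-4] + '.ncs')

# build one wideband .dat file from all the channels
subprocess.check_call(['process_nlxconvert', '-w', '-o', 'exp1'] + sorted(glob.glob('*.ncs')))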

For the next four steps, you first need to run NDManager to set the parameters in the .xml file. You can then run each script independently from the command line:
2. Downsample the data for the LFP: ndm_lfp. The only parameter is the output sampling rate, for me 2713 Hz.
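Just to make that concrete, downsampling one channel from 32 kHz to 2713 Hz looks roughly like this in Python (this is only an illustration, not what ndm_lfp actually does internally; the array is a stand-in for a real channel):

import numpy as np
from scipy.signal import resample

fs_wide, fs_lfp = 32000, 2713               # wideband and LFP sampling rates
wideband = np.random.randn(10 * fs_wide)    # stand-in for 10 s of one channel
lfp = resample(wideband, int(len(wideband) * fs_lfp / fs_wide))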

3. High-pass filter the data for spike detection: ndm_hipass. This script runs a "median" filter which, rather than replacing each sample with the local median, subtracts the local median from it. This is indeed a high-pass filter, but it initially confused the heck out of me.

The default filter width is ~10 samples, and is set up for recordings at 20kHz (I believe). Since our recordings are at 32kHz instead of 20kHz, this filter width is ~0.6ms, meaning the filter was subtracting spikes rather than subtracting the background. Given that I thought it was a normal median filter, and it seemed to be filtering out the features, I assumed the filter size was too large and decreased it. This, of course, made things worse. Once I realized what the filter was actually doing, I increased the filter width to 22 samples, and it seems to be working adequately.
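For the curious, the median-subtraction idea is only a couple of lines of Python (this is not the ndm_hipass code; note that scipy's medfilt requires an odd kernel, so 23 samples here instead of the 22 I set in NDManager):

from scipy.signal import medfilt

def median_hipass(x, width=23):
    # the running median tracks the slow background (LFP), so subtracting it
    # leaves the fast part of the signal, i.e. the spikes
    return x - medfilt(x, kernel_size=width)    # 23 samples at 32 kHz is ~0.7 ms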

4. Extract spikes: ndm_extractspikes. The three important parameters here are the threshold, the refractory period, and the peak search window. The threshold is the level above the baseline noise that a spike needs to reach to be detected; it depends on the quality of the filtering above and the noisiness of your signal. I was missing some spikes initially, so I now use a threshold value of 1.2.

The refractory period is the time after a detected spike during which the program ignores further threshold crossings. Since I am recording on tetrodes, I don't want a spike on electrode A to interfere with detection on electrode B, so I have this set to a relatively low value, 5 samples.

Finally, there is the peaksearchlength parameter. When the program detects a threshold crossing, it verifies that it is indeed the beginning of a spike, rather than the tail end of one. I have found that leaving this value too low yields detections like this:

Example of an incorrectly extracted spike when the peak search window is too small. Top: waveform. Bottom: autocorrelogram.
Right now, I have this value set to 40 samples, or just over 1ms. To verify that spike detection is actually working, you can view the spikes in Neuroscope (load the .fet files into Klusters (see below), save the .clu file, and open it in Neuroscope).
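To make the three parameters concrete, here is a toy threshold-crossing detector for one filtered channel. This is only meant to illustrate what the parameters mean, not how ndm_extractspikes is implemented, and the median-based noise estimate is my own assumption:

import numpy as np

def detect_spikes(x, threshold=1.2, refractory=5, peak_search=40):
    noise = np.median(np.abs(x)) / 0.6745                # robust noise estimate (an assumption)
    crossings = np.flatnonzero(x < -threshold * noise)   # negative-going threshold crossings
    spikes, last = [], -np.inf
    for t in crossings:
        if t - last < refractory:                        # too close to the previous spike: skip
            continue
        peak = t + int(np.argmin(x[t:t + peak_search]))  # look ahead for the true trough
        spikes.append(peak)
        last = peak
    return np.array(spikes)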

5. Spike characterization: ndm_pca. This computes the principal components of each spike waveform for later clustering.
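The gist of it, as a sketch (not the ndm_pca code; 'waveforms' here is a hypothetical n_spikes x n_samples array of snippets from one channel):

import numpy as np

def waveform_pcs(waveforms, n_components=3):
    centered = waveforms - waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt are the principal components
    return np.dot(centered, vt[:n_components].T)               # n_spikes x n_components feature matrix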

All of the above steps in NDManager take 30 min for a typical experiment, and can be run in parallel.

Spike sorting

At this point I have identified spikes on all my tetrodes, and am ready to group them into (hopefully) neurons using KlustaKwik. To run the clustering, I wrote a short script that creates a screen window for each tetrode. Figuring out the exact screen parameters took a bit of Googling:
#!/bin/bash
# $1 = base name of the recording files (see below)
screen -dmS multi_KK$1                  # start a detached screen session
for i in {1..8}; do                     # one window per tetrode
   screen -S multi_KK$1 -X screen $i
   # "stuff" types the command into window $i; ^M is a literal carriage return (type Ctrl-V Ctrl-M)
   screen -p $i -S multi_KK$1 -X stuff "KlustaKwik $1 $i^M"
done
Where $1 is the base name of the *.fet.#/*.clu.# files to be operated on, and multi_KK is an arbitrary name that doesn't start with a number. KlustaKwik uses an EM algorithm, which runs in O(n^2) time. Short recordings of ~30 minutes generally get clustered within 30 minutes; longer recordings, or tetrodes with a large number of spikes, can take over 8 hours. If sorting is taking too long, you can run it with the "-Subset 2" option, which halves the number of spikes considered for clustering and so cuts the running time to roughly a quarter.
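For a slow tetrode, the command in its screen window would then look something like this (the file base name and tetrode number are just placeholders):

KlustaKwik 2012_05_21_exp1 3 -Subset 2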

Finally, since the automatic clustering isn't perfect, I use Klusters to finish the job. This typically involves deleting clusters that don't contain true spikes, based on waveform and autocorrelogram, and combining clusters by looking at spike waveforms and the cross-correlograms between clusters. (This step is surprisingly taxing given how trivial it is. My theory for why: while each of these decisions is individually straightforward, you make them constantly, every 5-10 seconds, for as long as you can tolerate it, which induces a low-level form of decision fatigue.) Following this, I have *.res and *.clu text files that contain spike times and cluster IDs, respectively, which can be read into MATLAB.
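Both files are plain text with one number per line (the .clu file additionally has the total number of clusters as its first line), so if you happen to be in Python rather than MATLAB, reading them is just as simple (file names are placeholders):

import numpy as np

spike_times = np.loadtxt('exp1.res.1', dtype=int)   # spike times, in samples
clu = np.loadtxt('exp1.clu.1', dtype=int)
n_clusters, cluster_ids = clu[0], clu[1:]           # first line is the cluster count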

2 comments:

  1. Hi Mike,

    I was reading your blog because I want to do the same thing that you were doing here. I have data recorded by Neuralynx and I want to use Klustaview for clustering the data. I need to change the data from .ncs to .dat but I am not sure how I can do it. Neuralynx has a tool which can convert .ncs files to Matlab variables, but from that point I don't know how to make .dat files.
    I wanted to try NDManager but I do not have Linux; I have Mac OS and Windows. If I am right, the NDManager handbook is written for the Linux version, so I don't know how to work with it on Windows. I was wondering if you could help me with that.
    Thanks

  2. I have not used this system for a while, and I no longer use Klustaview or NDManager. But I can tell you what I do now. I switched to Python a while ago, and I store my code for converting .ncs files on GitHub at:
    https://github.com/map222/MPNeuro/tree/master/nlxio

    To convert a bunch of .ncs files to a .dat binary, I change my current working directory to the directory with the files. Then in Python I run:

    from MPNeuro.nlxio import nlx_to_dat
    nlx_to_dat.load_nlx_save_dat(<filename>, <# tetrodes>)

    After running this there is a .dat file in the same directory. load_nlx_save_dat calls some other functions written by someone else that handle the loading of .Ncs files, stored in the __init__.py in the nlxio module.

    Not sure how much that helps if you don't use Python. You might use this to make the .dat files, and then the other software to view them.

