Thursday, March 22, 2012

Notes on mouse licking

We recently got two things I've been waiting for: transgenic mice, and a lickometer. In anticipation of recording from taste cortex, I've started training mice to do a simple licking task: lick 9+ times following a short tone. (An FR5 lick schedule, borrowed from Sid Simon's lab.) (Doing this has made me realize the unique nuisance of investigating taste: unlike other sensory systems, like olfaction or whisking, you need to train your mice before every experiment.)

Since the task is so simple, I designed a short, straightforward training protocol (see below). The first goal (Stage 1) was to get the mouse to lick following a tone. To reinforce this behaviour, every time the mouse licks within the "lick window," they get a water reward of 1-2uL (the mice are, of course, water deprived). Once the mice learn to lick after the tone, I gradually decrease the number of rewarded licks, until they only get water for a single lick in the middle of the window (Stage 2). To ensure they lick the whole time, the mice get a water reward after 9 licks. Finally, once the mice have completed Stage 2, we are ready to try (potentially aversive) tastants in the middle of the task (Stage 3). Once the mice perform Stage 3 correctly, we can record.
Learning protocol. (Top) Tone timecourse. (Middle) Liquid dispensed during licks. H = water; T = tastants. (Bottom) Example lick history. In Stage 1, each lick during the "lick window" is rewarded with water. In Stage 2, we gradually reduce the number of rewarded licks until only a single lick in the middle is rewarded. After nine licks, the mice get a water reward as well. In Stage 3, we give tastants in the middle of the lick window, instead of water.
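To make the staged schedule concrete, here is a minimal sketch of the reward logic in Python. Stage numbers follow the protocol above; the function and parameter names are my own invention, not the actual rig software, and I've collapsed Stage 2's gradual reduction to its final form.

```python
# Hypothetical sketch of the staged reward schedule. Stage 2's gradual
# removal of rewarded licks is collapsed to its end state here.

def dispense(stage, lick_number):
    """What the lickometer delivers for a given lick inside the lick window.

    Returns "water", "tastant", or None (unrewarded lick).
    """
    if stage == 1:
        # Stage 1: every lick inside the lick window earns water
        return "water"
    if stage in (2, 3):
        # Stage 2 (final form): only the middle lick is rewarded, plus
        # water from lick 10 onward to keep the mouse licking.
        # Stage 3: the middle reward becomes a (possibly aversive) tastant.
        if lick_number == 5:
            return "tastant" if stage == 3 else "water"
        if lick_number >= 10:
            return "water"
        return None
    raise ValueError(f"unknown stage: {stage}")
```

The point of the structure is visible in the code: Stage 3 differs from Stage 2 only in what the middle lick delivers, so a mouse that has mastered Stage 2 already knows the motor task.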



Since we just got the lickometer equipment, I ran a pilot study with two mice, for two weeks. At the end of the two weeks, both mice were performing Stage 2.2 fairly well. Here is the lick-o-gram for the last 200 trials of training for Mouse 2 (M2):

Raster plot of licks during task. Mice were cued by a 200ms, 4kHz tone. Following that, they had a 2s lick window (blue line). In the lick window, they received water (1-2uL) on the fifth lick, and every lick from #10 onward.
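A raster like the one above is usually summarized as a peri-tone lick histogram. Here is a sketch of that binning step, assuming lick timestamps are stored per trial with tone onset at t = 0; the data layout is my assumption, not the actual acquisition format.

```python
# Sketch: bin per-trial lick times (seconds, tone onset = 0) into a
# peri-tone histogram. Trial/timestamp layout is assumed for illustration.

def lick_histogram(trials, bin_width=0.25, window=(0.0, 10.0)):
    """Return (bin_edges, mean licks per trial per bin).

    trials: list of lists of lick timestamps, one inner list per trial.
    """
    start, stop = window
    n_bins = int((stop - start) / bin_width)
    edges = [start + i * bin_width for i in range(n_bins + 1)]
    counts = [0] * n_bins
    for licks in trials:
        for t in licks:
            i = int((t - start) / bin_width)
            if 0 <= i < n_bins:
                counts[i] += 1
    mean = [c / len(trials) for c in counts]
    return edges, mean
```

Averaging across trials this way makes the pattern in the raster (licking through the window, pausing, then restarting before the next trial) easy to quantify.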
Some observations:

1. Individual mice behave differently. The mice we're using, C57/Bl6, have been inbred for over twenty generations, and are practically clones. Despite the genetic similarity, the two mice I trained had different behaviour. Mouse 1 (M1) didn't hide in the behaviour tube, and refused to lick the water spout at first. In contrast, M2 hid in the behaviour tube, and had to be coaxed out; and quickly started licking the water spout. I don't know whether these differences are due to epigenetics, genetic drift, small differences in how I handled them, or something inscrutable. But it did surprise me.

2. The best way to get a mouse to move forward is to (gently) pull his tail.

3. Mice may solve your task in unexpected ways. The goal of training was for the mice to lick following the tone. M2 didn't quite understand this, but did know that if he licked a lot, he would get water. So he just licked continuously, ignoring the tone, and got his water. While I don't mind this in practice - since I just want them to lick - I wanted to see how well they could learn the complete task, so I extended the inter-trial interval, and M2 stopped licking constantly.

Continuing this theme, I do not think the mice quite understood that the tone preceded the lick window. In the lick-o-gram above, the mouse stopped licking after 4 seconds, and restarted after 7 seconds. This makes me believe the mouse knew there was water every ~10s, rather than being cued by the sound. Also notice that the mouse did not really perform the task for the first 5 or so trials, then did well. MICE, WHAT ARE YOU THINKING!?!?

4. Motivation happens in bouts. The mice were water deprived before the experiment, and performed quite eagerly for the first 100 or so trials. However, after a while they would stop, only to restart the task for 5-10 trials. Two hypotheses: licking thousands of times gets tiring after a while; or a few hundred uL of water slakes their thirst for a bit.

5. When shaping mouse behaviour, you must move step-wise. In moving from Stage 1 to Stage 2, I started by removing the reward for licks 6-9, and the mice continued to perform pretty well (in general, they seem to lick for a few seconds after free-licking time). When removing the reward for the initial licks, the mice continued to perform when I dropped the first lick, but if I stopped rewarding the first two licks, they gave up, and simply stopped licking. So I will have to be patient when removing the initial reward licks.

Saturday, March 10, 2012

Neuron vs Nature Neuroscience

One reason I loathe the current publishing system is the proliferation of supplemental figures: those figures that no one reads, but take lots of effort to produce. I thought my PNAS paper was unusually bad when I had five main figures and ten supplemental ones, but I've noticed that that ratio is becoming routine. So in the tradition of hotornot.com, and my previous Nature vs Science, I'll pit Neuron and Nature Neuroscience against each other to see who's worse at requiring supplemental figures.

I simply looked at the two most recent issues of both Neuron* and Nature Neuroscience, and counted up the number of main and supplemental figures for each paper (tables were counted as figures; this ignores figure size):
Number of main (x-axis) and supplemental figures (y-axis) for articles in recent issues of Neuron (black crosses) and Nature Neuroscience (blue dots). Dashed line is for main figs == supplemental figs.
As you can see, Nat Neuro requires many more supplemental figures than Neuron (p<0.01, two-tailed t-test). On average, Nat Neuro requires 1.7 supplemental figures per main figure, while Neuron requires 0.77.
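For anyone who wants to rerun this kind of comparison, the test is a two-sample t-test on the per-paper ratios of supplemental to main figures. A sketch with Welch's unequal-variance version, implemented from scratch; the figure counts below are invented for illustration, not the counts tallied from the actual issues.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and Satterthwaite degrees of freedom."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2 = va / len(a) + vb / len(b)
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / len(a)) ** 2 / (len(a) - 1)
                     + (vb / len(b)) ** 2 / (len(b) - 1))
    return t, df

# Hypothetical supplemental:main figure ratios, one value per paper
nat_neuro = [1.5, 2.0, 1.8, 1.4, 1.9]
neuron = [0.6, 0.9, 0.8, 0.7, 0.85]
t, df = welch_t(nat_neuro, neuron)
```

With the statistic and degrees of freedom in hand, the two-tailed p-value comes from the t distribution (e.g. `scipy.stats.t.sf(abs(t), df) * 2`).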

What can you conclude from this? Depending on your opinion of each journal's editorial rigor, and the Elsevier boycott, you should probably submit your first manuscript to Neuron.

*  While compiling these numbers, I saw the Mooney lab has a new paper. Goddamn his lab has been crushing it lately. They labeled specific cell populations using viruses (in the zebra finch), and showed that following deafening, only the striatothalamic (HVCX) projecting neurons underwent synaptic remodeling. In contrast, the motor-projecting neurons (HVC-RA), were stable. The striatothalamic pathway has long been hypothesized to be responsible for plasticity in the system, and this is the best evidence to date.

Thursday, March 8, 2012

The extent of the scientific market

Last year, I wrote about how the new age of genetic mouse models means the transaction costs for distributing material (viz. mice) between labs has gone up. I argued that this, in turn, creates strong incentives towards agglomeration, or grouping together labs with similar interests (or at least similar mouse models). Today I want to explore a related economic idea: if we reorganize scientists into larger groups, we will have an opportunity to redefine their roles, increasing specialization.

Adam Smith on the division of labor


On one of the many econ blogs I read, I stumbled on Adam Smith's classic idea, "the division of labour is limited by the extent of the market." (Is it more cliché to quote Adam Smith, or a dictionary?) I've long felt that neuroscience labs aren't nearly specialized enough, so I turned to the Wealth of Nations, to see what Smith wrote.



I'm no economist, but this is what I understood from the first three chapters. In the first chapter, Smith observed that specialized workmen have higher productivity than generalists, and hypothesized that productivity improves through the effects of the division of labor. He observed three reasons for this: 1.) increased skill of workers; 2.) reduced transaction costs in switching between tasks; and 3.) invention of machines that improve task-specific productivity. Of these, I think the first is most important for neuroscience, and my post today.


Then in chapters 2 and 3, Smith speculated on how and why labor was divided. First, labor was able to be divided because people could trade their goods. If I'm adept at shoemaking, I can make more shoes than other people, then trade the shoes for food. Micro-econ 101.


Second, Smith observed the quote that started me thinking: that the division of labour is limited by the extent of the market. His explanation was brief, but the basic idea is that if I'm a good shoemaker, and can make one shoe per day, I need to live in a place that can absorb 250 shoes per year; I can't work as a shoemaker in a hamlet of ten people. On the other hand, if I lived in a city of a million people, I could further specialize the shoemaking into its various components, and increase productivity even further.


So what does this have to do with neuroscience? The two key questions are: how can we divide the labor; and what is the extent of the market?

The division of labor

Neuroscience is the most integrative biological science, which makes dividing labor straightforward: by discipline.


To make this concrete, I'll draw on my experience in grad school. In the Yasuda lab, we studied the cellular mechanisms of synaptic plasticity. On a purely theoretical level, we studied cell signaling pathways (although surprisingly nothing about learning and memory). Then on a primary technical level, we performed experiments via imaging on a microscope. Once we had data, we had to analyze it. This could be quite complicated, involving software design, statistics, and simple modeling. On a secondary technical level, there were preparations before each experiment, which included doing dissections, or subcloning constructs. To verify our results, we often performed Westerns.

In total, you could theoretically divide the Yasuda Lab Labor Market into: literature research; imaging and microscopy; programming (software, statistics, and modeling); surgery; molecular biology; and biochemistry. Six (to eight) jobs in a lab of ten people. In practice, there was relatively little specialization. Everyone knew a little bit about microscopy, data analysis, and molecular biology. The only specialists per se were two post-docs who performed a lot of molecular biology and biochemistry (and of course, Ryohei, who knew everything).

The extent of the market

Which brings me to the second question, what is the extent of the market? On a large scale, one might think that the market includes all of neuroscience, 30,000+ people. But remember, the market is defined by trade, and most of these scientists don't "trade" with each other often, either in people or matériel.

So what group of people can practically trade time or resources? A lab. (Or, one might argue, a department, which I address below.) However, in a lab of ten people, each working on their own project, the market is quite small. This means that the benefits to specialization are limited.

Beyond lab size, the market is also limited by the duration of employment, typically 2-5 years. Specialties take time to master, and the most useful forms of specialization take the most time. Yet if people require years to learn a specialty, they will leave as soon as they master it. This is not necessarily a problem for the entire neuroscience community, but it is for individual groups looking to increase productivity.

In summary, I think neuroscience, as a multidisciplinary field, is ripe for specialization, but is held back by the organizational structure of research units that are too small, and short employment periods.

Counter-arguments

I can think of three counter-arguments against specialization. First, there is a big push now towards interdisciplinary science. Many people believe that there are new discoveries to be made, simply by synthesizing existing fields. However, I would say specialization and collaboration/interdisciplinary science go hand-in-hand: each of the disciplines have their own experts who need to communicate.

The second counter-argument is that there is much to be gained from generalization, and seeing the big picture. This is undoubtedly true. So I would restrict my argument in favor of specialization to saying that production should be specialized, even if perspective is not. Getting a general view is what conferences and socializing are for!

Finally, increasing specialization implicitly requires improved coordination. I don't have much to say here, except to argue that we have such small working groups, and such limited specialization, that any increase in coordination costs should be easily offset by improved productivity.

Amelioration

So what can be done to increase specialization? Some larger groups, namely department-size entities, have made some progress. fMRI departments have a fairly clear split between the programmers and the cognitive scientists. Many departments have "core" facilities like 2-photon microscopes, or micro-array processing. Janelia and Max Planck institutes have professional cloners making constructs. Indeed, here in Geneva, we have a very good shop that can (with time) handle most equipment building needs. Most of these core facilities help with experimental setup or data acquisition.

We need to go farther. Working within the department paradigm, I would suggest creating department-centered positions for post-experiment processes like data analysis. This might be problematic in a department with diverse interests: molecular biologists might be unhappy about supporting (even indirectly) statisticians for systems neuroscience, and vice versa. Which only reemphasizes how important it is to focus neuroscience departments on overlapping interests. If you do it right, you just might make your market bigger, divide your labor, and conquer.

Addendum:

How does this apply to me now? The tasks I perform include: reading literature; surgery; recording; data analysis (light software development, and kludged stats); histology; and now behaviour design and running. Things that have been "specialized away" for me are mouse colony maintenance (by a tech), and equipment manufacture (by the shop). If I could specialize things further, I think surgery, data analysis, and behaviour are tricky enough that I'm still improving at them. As it is, I'm constantly torn between different tasks (Smith's #2 hypothesis), and it's hard to prioritize what to improve at.

Thursday, March 1, 2012

Remembrances of Sensors Past (Development of a PI3K FRET Sensor)

I recently got an e-mail from a colleague in the Yasuda lab (now moving to Max Planck Florida!) about a sensor I worked on before I left. Which reminded me to write a post about it.

The Sensor


The Yasuda lab specializes in making and refining fluorescent sensors for second messenger activity, most famous among them Ras, CaMKII, and Rho-GTPases (I remember one grant proposal where Ryohei proposed making sensors for all GTPases). My corner of the kingdom was to develop a FRET/FLIM sensor for PI3K. PI3K is a signaling enzyme involved in LTP, and works by phosphorylating the 3 position of the inositol ring, turning PIP2 into PIP3.

Diagram of the PI3K sensor. PI3K phosphorylates the third carbon of PIP2, creating PIP3. Our sensor consisted of mEGFP tagged to the membrane via a CAAX box, and the PH-domain of Btk tagged with mCh. Normally, the PH domain is cytosolic, due to the low basal PIP3 levels. When PI3K creates PIP3, it recruits the PH domain to the plasma membrane, bringing the mCh close to the GFP, causing FRET.
To detect PI3K activity, we modified an existing FRET sensor. I won't dive into the details of FRET nor FLIM here, but FRET is used to measure the proximity between two fluorophores. When the two fluorophores are distant, FRET is low, and when they are close, FRET is high. In this case, the donor fluorophore was mEGFP tagged to the plasma membrane via a CAAX box. The acceptor fluorophore was mCherry tagged to the PH domain of Btk, which selectively binds to PIP3. Under normal conditions, there is a lot of PIP2 at the plasma membrane, but little PIP3, which means the Btk-PH-mCh floats about in the cytosol, and FRET is low. When PI3K is active, however, it recruits Btk-PH-mCh to the plasma membrane, bringing the mCh and GFP into proximity, and causing FRET.
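The reason membrane recruitment translates into a FRET signal is the standard Förster relation: efficiency falls off with the sixth power of donor-acceptor distance, so only acceptors pulled within a few nanometers of the membrane-anchored donor contribute. A quick sketch; the Förster radius R0 of roughly 5 nm for a GFP/mCherry pair is an approximate, illustrative value.

```python
# Förster relation: E = 1 / (1 + (r/R0)^6). R0 ~5 nm for GFP/mCherry
# is an approximate value used here for illustration.

def fret_efficiency(r_nm, r0_nm=5.0):
    """FRET efficiency for donor-acceptor separation r_nm (nanometers)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

At r = R0 efficiency is exactly 0.5, and by twice R0 it is nearly zero, which is why a cytosolic acceptor produces essentially no FRET while a membrane-recruited one does.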

Most of the initial sensor development was done by an undergrad, Wei Leong Chew, under my supervision. We tried a variety of different donor-acceptor combinations, including using Akt-PH instead of Btk-PH, and trying different acceptors like dimYFP or tandem-mCh. It's important to have the donor at the plasma membrane instead of the acceptor (to be honest, one year later, I can't remember the specific technical reason why; certainly it's helpful to have a consistent donor fluorescence).

Testing in HEK cells

In any case, we got the sensor to work. To test the sensor in HEK cells, we applied EGF, which activates a wide-ranging signaling cascade, including PI3K (see below). When we did so, the cells underwent significant morphological changes, and the FRET fraction increased. Then, to reverse the process, we applied the PI3K antagonist LY294002, which caused the FRET fraction to decrease.

The PI3K sensor can detect changes in PI3K activity. Left. Images from one cell. Top are the donor, mem-GFP (localized to the plasma membrane) and acceptor (in the cytosol). Bottom shows the FRET images, where blue means low activity, and yellow/red means high. Following EGF, the FRET activity increases; following LY, it decreases. Right. Quantification of the FRET fraction. For this cell, the FRET fraction was reduced below the basal fraction following LY application.
To further verify that the sensor was detecting PI3K activity specifically, and not other phosphoinositides, we tested the sensor using a few different PH-domains. First, we tagged the donor GFP to the PH-domain of PLCdelta, which binds specifically to PIP2 (green trace, below). This sensor had a high basal binding fraction, reflecting the high basal PIP2 concentration, but was unchanged by EGF or LY application. Second, we created a point mutant of Btk-PH, E41K, which binds both PIP2 and PIP3 (red trace). This sensor had a high basal binding fraction, and was insensitive to EGF, but was reduced by LY. Finally, we created Btk-PH-R28C, which does not bind strongly to any phosphoinositides (blue trace). This sensor had a low basal binding fraction, and did not change with EGF or LY application. These experiments combined show that the sensor is PIP3 specific.
The PI3K sensor is PIP3 specific. We used three PH-domains to test PIP3 specificity. PLCdelta binds PIP2 specifically, has a high basal binding fraction, and is insensitive to PI3K drugs. Btk-E41K binds both PIP2 and PIP3, has a high basal binding fraction, and is inhibited by LY. Btk-R28C does not bind any PI, has a low basal binding fraction, and is insensitive to drugs.
While performing these experiments, I noticed one peculiarity. The basal binding fraction, and changes following drug application, all depended on the concentration of Btk-PH-mCh (see below). This is due to the non-specific nature of the FRET sensor. Most FRET sensors require direct binding between the donor and acceptor constructs, while in this sensor, we are relying on proximity. Thus, the higher the acceptor concentration, the more acceptor will be brought to the plasma membrane, and the higher the FRET. This would cause complications in neurons, where Btk-PH expression can occlude structural plasticity.
The concentration of the acceptor affects the strength of the sensor. Top. The initial FRET fraction, and changes in FRET, are all correlated with the concentration of Btk. Middle. These same variables are (mostly) independent of donor intensity. Bottom. Population average timecourse for high-Btk expressing HEK cells.


Testing in neurons


Having tested the sensor in HEK cells, we next moved to neurons. In addition to developing FRET sensors, the Yasuda lab specializes in two-photon glutamate uncaging, so we uncaged on spines to see how the sensor would react. During uncaging, we measured spine size as a proxy for synaptic strength. It was especially tricky here to get Btk expression high enough to make the sensor work. The downside of high Btk expression, however, was that it inhibited long-term structural plasticity (below, left). (We also did uncaging experiments in the presence of LY, and found that LY partially blocked late-phase structural plasticity.)

PI3K is active in stimulated spines. Left. Structural plasticity of spines stimulated by glutamate uncaging. Initial structural plasticity was strong, but the late phase was less than normal (we did paired uncaging to confirm this). Right. Change in FRET activity in the stimulated and adjacent spines. The sensor was active in the stimulated spine relatively rapidly, and stayed elevated throughout.
I measured FRET activity in the stimulated spine, nearby dendrite, and adjacent spines. In the stimulated spine, there was a relatively rapid increase in activity which persisted for the duration of the experiment (above right). This activity did not seem to spread into the dendrite or adjacent spines (data not shown). This is in comparison to CaMKII, which is spine-specific but short lived, and Ras, which is longer-lasting, but not spine specific.

That's the last of my unpublished data from the Yasuda lab. In retrospect, the story doesn't seem far from some version of complete. The neuronal data just needed some refinement to get more consistent results (the n above is quite low). At the time, though, after seven years in Durham, I was impatient to start the next phase in my career. If I had been more strategic, I probably would have finished it. I know a couple people are still working on the sensor, trying to get it to work without interfering with structural plasticity. You can probably look forward to a more expanded result in the near future (1-2 years).