Monday, May 30, 2011

A Walk Down the Paper Trail: Glomerular Coding of Natural Stimuli

Larry Katz was one of the early giants in olfaction, and I am woefully uninformed about his work. Rather than find a recent paper to cast aspersions on, I figured it would be fun to take a look at one of Katz's papers.

While we perceive odors as a whole, each odor is a collection of individual monomolecular compounds. In 2006, Lin, Shea, and Katz looked at the relationship between how the brain represents complex, "natural" odors and the individual molecules that constitute them (I put natural in quotes, since odors like coffee are not something a wild mouse would encounter, but that's just me being pedantic).  To start, they performed intrinsic signal imaging on the dorsal olfactory bulb of isoflurane-anesthetized animals while applying natural odors like cinnamon, cloves, and cumin.

Intrinsic imaging of response to natural odors.  From Lin et al. 2006
Of the 60 natural stimuli tested, 14 evoked "weak" activity (1-5 glomeruli), and 9 evoked "strong" activity (5+ glomeruli).  The glomerular responses were complex, with some glomeruli being activated quickly after odor presentation (within 1s), while others could have a delayed response more than 6s after stimulus start.

Then, to break down the odors into their components, they performed gas chromatography (GC) on a subset of the strong odors, and presented the separated components to the mouse as they eluted.  While the "strong" odors as a whole activated many glomeruli, during GC individual glomeruli were activated at different times.

Gas chromatography presented to the mouse activates individual glomeruli.
From Lin et al. 2006
One interesting aspect of this response is that the largest peaks (representing the most molecules) do not correspond to the largest glomerular responses. Lin quantified how many times each glomerulus was activated, and found that most glomeruli were activated only once, suggesting that the response to the natural odor was simply a linear summation of the responses to individual components.

The test of this is pretty obvious: just sum up the individual responses and compare the result to the whole-odor response (compare panels C and E):
 
The response to clove (C) is similar to the summation of individual components (E).  The individual components were identified from GC (compare B and D).
From Lin et al. 2006
Overall, the similarity between the summed GC response and the natural odor response was 80%.  Why not 100%?  Besides the fact that the GC response was not entirely consistent between trials, gas chromatography involves heating molecules to high temperatures (over 200°C), which could change their structure.  Indeed, when they burnt the clove, it increased the similarity between the natural map and the summed GC map.
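For the numerically inclined, the comparison itself is conceptually simple. Here's a minimal sketch of the idea in Python, with stand-in random arrays in place of real response maps and a plain pixelwise correlation standing in for whatever similarity metric the paper actually used:

```python
import numpy as np

# Stand-in data: one 2D response map per GC component, plus the whole-odor map.
# In reality these would be intrinsic-signal maps; here they're random arrays.
rng = np.random.default_rng(0)
component_maps = [rng.random((100, 100)) for _ in range(12)]
natural_map = rng.random((100, 100))

# Linear-summation prediction: just add up the single-component maps.
predicted_map = np.sum(component_maps, axis=0)

# One simple similarity measure: Pearson correlation across pixels.
r = np.corrcoef(predicted_map.ravel(), natural_map.ravel())[0, 1]
print(f"similarity between summed components and whole odor: r = {r:.2f}")
```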

One last thing they tested in this paper (and to be honest, a somewhat tangential story) was whether structurally similar molecules activated the same glomeruli.  So they applied a bunch of similarly structured molecules (alkyl aldehydes I think), to see which glomeruli they activated.  And while there was some overlap, it was not complete.  From this they conclude that the receptive fields of glomeruli are more complex than simple feature recognition, and involve multiple features of each molecule.

All in all, a pretty cool paper.  It seems a little too clean, in that the GC components usually activated single glomeruli, when we know that single molecules can activate a large number of glomeruli.  But the use of GC as a way to separate individual odor components is pretty cool (it apparently had been done before with humans), and it directly addresses the question of odor composition while dodging questions of concentration.

It also makes sense that at the glomerular layer, which in some ways simply relays olfactory sensory neuron information, the processing is simple and linear (ignoring dendro-dendritic inhibition and such).  The big question, though, is what happens downstream when these multiple signals interact, and how natural odors are represented in the mitral cell layer.  But that is a question for another trip down the paper trail.

References:

1. Lin DY, Shea SD, Katz LC. Representation of natural stimuli in the rodent main olfactory bulb. Neuron. 2006;50(6):937-49. Available at: http://www.ncbi.nlm.nih.gov/pubmed/16772174.

Monday, May 23, 2011

PhD Revisions

A couple posts ago, I laid out how I think the PhD system currently works, and linked to a couple recent Nature articles about reforming the PhD system. I want to comment on one of those articles, which lays out a few fixes to the system, but before I do so I want to briefly state what I think some of the major problems are:

1.) The cost-benefit ratio is off.  Spending six years in the prime of life should yield a tremendous payoff, and right now the reward for getting a science PhD is a post-doc or leaving academia. The payoff would be fine for three years' work, but for six it isn't close.

2.) It's uncertain.  You graduate when your committee lets you, but ambiguous deadlines are soul-crushing.

3.) The training is spotty. All too often students get stuck in labs with little to no mentoring, and have no recourse but to quit or bear down. Rather than being prevented from taking students, those PIs are typically rewarded with more funding.

This list is non-exhaustive, but let's see if the solutions proposed by the article will address any of them.

1. Jump in at the deep end aka Let the Right Ones In

Michael Lenardo at the NIH is given credit for this idea:
When too many scientists are looking for too few academic positions, PhD programmes need to admit the students most likely to succeed, and provide them with all the skills they'll need...In 2001, Lenardo created a new degree programme...for a cadre of truly elite students. It admits just 12 of the 250–300 applicants per year. Independence is stressed — students devise and write their own project plan, begin their thesis work immediately, and skip the uniform coursework — but they must meet requirements such as authoring papers.
Basically, his idea is to treat PhD students like post-docs: give them minimal training, and let them work independently.  How did it turn out?

In the ten years since the programme's inception, more than 60 students have graduated, taking slightly more than 4 years apiece. They published an average of 2.4 first-author papers out of their PhD research...Half a dozen are already working as principal investigators.
Pretty well apparently.  To be frank, this is a non-solution.  The very best people are going to survive no matter how little training you provide, but that cadre is uselessly small.  A vibrant scientific community needs more than a couple hundred people.  The rest of us actually need a little bit of love to become productive and independent.

2. Forget academia 


From Animesh Ray:
To complete a PhD in Applied Life Sciences at KGI, students must complete the master's course there, then... [do] original research, with at least one adviser from industry.... [Students learn] not only the scientific method, but also how to write a business plan and present it to venture capitalists, how to carry out market research and the ins and outs of patent legislation.
I like this idea.  Duke was single-minded about the future of its graduate students: they were to become research professors, and anything less would be disappointing.  Industry and teaching opportunities were not discussed.

In talking to professors from other universities, though, I found they often run start-ups, or do consulting.  Including straight-up business classes would help ensure people are exposed to non-academic careers, validate those careers as viable, and increase the job prospects of graduate students.  Any management courses would be helpful for future PIs. The downside may be that graduate education loses singularity of vision, but that is kinda the point.

3.  Trample the boundaries
Jacofsky did study monkeys — but also engineering, mathematics, computer science, kinesiology and neurophysiology... Nearly every new PhD programme at ASU is designed to be "transdisciplinary", says Maria Allison, dean of the graduate college. Other examples include Human and Social Dimensions of Science and Technology, Biological Design and Urban Ecology.
There has been a lot of talk this millennium about interdisciplinary research, and Duke recently started the Duke Institute for Brain Sciences (DIBS; these institutes tend to have hilarious names, like Human and Social Dimensions of Science and Technology or Institute for Brain, Mind, Genes and Behavior (... and life, man)). These institutes seem to be formally recognizing what is happening anyway: that to do good research you need to draw from multiple skillsets, like how neuroscientists now need to be computer scientists.


Yet, talking about how universities organize their faculty seems tangential to the larger issue of how to improve graduate education.  Even if students get degrees from umbrella programs rather than departments, the graduate experience is still going to have the same problems listed above. It may be even easier to get caught between advisors with weak mentoring, and an uncertain future.


The end of the article provided some upside:
Broadening the scope of a programme has advantages, however. It teaches students about their options. Jacofsky had entered his degree thinking he would one day teach university-level anthropology. Instead, he is vice-president of research and development at the Center for Orthopedic Research and Education, or CORE Institute.
Part of the downside of a PhD is that it's easy to get pigeonholed into a small subspecialty, and if interdisciplinary/non-departmental degrees help alleviate that, they could improve the cost/benefit ratio.

4. Get it online

This could work well in fields without physical work, like genomics, theoretical physics, or computational biology. It could be useful at the master's level to train smart people who have a life.  But the science I know requires working with animals, so online courses are a non-starter.

5. Skip the PhD

Rather than quote this section, I would recommend reading it in full. The basic gist is that people who want to do science immediately should be able to do so, and all that is really holding them back is employers requiring the PhD credential for hiring and promotion.

This gets back to the idea of credentialism from my previous post: having a PhD makes you easier to justify as a hire, but does not vouch for your intelligence. An undergraduate fresh out of college is just as smart as (or smarter than, given that intelligence declines with age) a PhD holder.  Theoretically, if this is true, companies that hired non-PhDs would outcompete the others, potential grad students would then work for them, and the whole system would come crumbling down.  The most successful software companies aren't obsessed with credentials, so there is no reason science employers need to be.

Conclusion

After reading this article, I don't find these solutions that interesting.  Some of them don't address the fundamental issues of graduate school (being more selective, or skipping the degree entirely), and others aren't applicable to experimental science (going online).  The idea of folding business education into science could increase opportunities and improve the cost/benefit ratio, but that does not help people who want to actually perform pure science.  And while going "transdisciplinary" could also yield opportunities, it seems like it would increase the risk of mismanagement.  Unfortunately, I don't have any better specific ideas at the moment.

Sunday, May 15, 2011

The Artificial Evolution of Constructs

The Carleton lab has begun to use ChR2 to stimulate neurons, and we want to use the most realistic stimuli possible.  One of the areas we want to stimulate fires at ~100 Hz when active, but in researching ChR2's properties, I found that it can't drive neurons past 30-40 Hz, for a couple reasons.  First, and most importantly, the channel kinetics are too slow: it takes 1-2 ms to open, and has a τoff of 8 ms, which in a perfect world might be ok, but in practice is insufficient.  Second, the conductance of the channel is modest, which makes it harder to drive firing.  Various labs have been sifting through ChR2 mutants to find versions with different absorption spectra and channel properties, so I did a Google Scholar search, and found a new variant from Thomas Oertner's lab (I met him once at Woods Hole; great guy).
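To build intuition for why a τoff of 8 ms caps the usable stimulation frequency, here is a toy simulation of my own (a crude two-state model with rough numbers from the text above, not anything from the papers below): the conductance rises during each 2 ms light pulse and decays exponentially between pulses. At 40 Hz it has time to relax back toward zero between pulses; at 100 Hz it never does, so the cell sits on a sustained depolarization of the kind that inactivates Na channels.

```python
import numpy as np

def chr2_conductance(freq_hz, tau_on=1.5, tau_off=8.0, pulse_ms=2.0,
                     dt=0.05, n_pulses=20):
    """Toy ChR2 model: normalized conductance relaxes toward 1 during the
    light pulse (tau_on) and decays toward 0 between pulses (tau_off).
    The time constants are rough numbers, not fits to real data."""
    period = 1000.0 / freq_hz                 # ms between pulse onsets
    t = np.arange(0, n_pulses * period, dt)
    g = np.zeros_like(t)
    for i in range(1, len(t)):
        light_on = (t[i] % period) < pulse_ms
        target, tau = (1.0, tau_on) if light_on else (0.0, tau_off)
        g[i] = g[i - 1] + (target - g[i - 1]) * dt / tau
    return t, g

for f in (40, 100):
    t, g = chr2_conductance(f)
    last_cycle = g[t > t[-1] - 1000.0 / f]    # steady-state cycle
    print(f"{f} Hz: peak = {last_cycle.max():.2f}, trough = {last_cycle.min():.2f}")
```

With these toy numbers, the conductance decays to a few percent of its peak between pulses at 40 Hz, but only to about a third at 100 Hz, i.e. a sustained current between pulses.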


In their paper, they report a new point mutation, T159C (TC).  ChR2's photosensitivity comes from a conformational change in bound retinal, and the T159 residue lies on the surface of the retinal binding pocket of ChR2.  To test this mutant, they transfected Xenopus oocytes and neurons with it, and found that the mutation increased the conductance of ChR2.  The downside was that it had slow kinetics and stayed open too long, leading to prolonged depolarizations that inactivated voltage-dependent Na channels (see pink trace in figure below).


To compensate for this, they added a second, known point mutation E123T (ET), which quickens both the activation and inactivation kinetics of ChR2.  The ChR2 channels with the combination of these mutations were able to induce action potentials at high frequencies (100Hz in interneurons, ~40-60Hz in pyramidal cells), while not significantly changing the resting membrane potential (blue trace in figure below).


Voltage traces from neurons during blue light stimulation at 40 Hz.  TC has a high steady current at 42 mW/mm2, while ET/TC can follow the stimulus.  At lower intensities, TC is able to follow the stimulation while ET/TC is not. From Berndt et al 2011.
Reading these papers made me think:  1.) I don't know nearly enough about ChR2;  and 2.) We all remember the breakthrough technologies, but we rarely appreciate the incremental advances that make new technologies widely practical.


A good example is the discovery and development of GFP. GFP was probably the biggest technical advance of the past twenty years, but it took decades of work to become useful. In brief, it was discovered almost fifty years ago by Osamu Shimomura as the protein responsible for bioluminescence in jellyfish.  Then, twenty-five years later, Douglas Prasher cloned the gene for GFP and Martin Chalfie expressed it in E. coli.  Even then, though, GFP wasn't ready for primetime, and Roger Tsien and others spent years modifying GFP -- increasing its stability, shifting its spectrum, increasing (or decreasing) its excitation cross-section.  The end result, EGFP, is now the standard by which all other fluorophores are measured, and is useful in a wide range of applications.


In comparison, look at red fluorescent proteins like DsRed.  You might think these are similar to GFP, and that developing them would be trivial, but the amino acid sequences are quite different.  Despite DsRed being discovered in 1999, and dozens of labs modifying it to increase its stability and brightness, red fluorophores still suck.  They are dim, express poorly, and bleach quickly. (There's a good summary of fluorophores here.  It seems there are a few decent RFPs, though not at the level of EGFP.  And the monomeric pickings are slim.)


Channelrhodopsin is still probably at the red fluorescent protein stage of development, where it's usable, but imperfect.  The ET/TC ChR2 variant described above is extremely new, having been published online at the beginning of April. From my naive view, it looks like the new standard for ChR2.  We'll see how long it takes someone to improve upon it.


Berndt A, Schoenenberger P, Mattis J, et al. High-efficiency channelrhodopsins for fast neuronal stimulation at low light levels. Proceedings of the National Academy of Sciences. 2011;2:10-15.


Gunaydin LA, Yizhar O, Berndt A, et al. Ultrafast optogenetic control. Nature Neuroscience. 2010;13(3):387-92.

Wednesday, May 11, 2011

What's a PhD worth?

Since I like bloviating about institutions, a couple friends have asked for my opinion about the recent articles in Nature about the PhD system.  While I've discussed this over beers, I have not thought about this in depth, so before considering those articles, I want to organize my thoughts.  I will consider this from the perspective of people getting PhDs, and the system that accredits them, but first I want to talk about...

What a PhD is Not


It's not about knowledge. The jejune view is that education is about imparting facts upon students, like math or writing skills.  This is partially true: the facts we learn in elementary school, like 2+3=5, are critical.  But most of the facts and theories we are taught in high school and beyond are quickly forgotten.  While I learned about the Treaty of Westphalia and the Black Plague in history class, I can only recall the vaguest details about them.

In terms of graduate school, this is also true.  The first couple years of a PhD program include classes that lay a foundation for research. But the details of those classes are quickly forgotten; I can tell you that Wnt is a trophic factor, and that trophic factors are involved in axon guidance, but I haven't a clue what Wnt signals to.

In the three to four years after classes, you do accumulate some specific knowledge in your research area, about techniques and recent findings.  But this knowledge is so specific that as soon as you venture outside your area it's only tangentially informative.* In summary, the knowledge you get in your PhD is either quickly forgotten, or too focused to be widely useful; which says nothing of how little emphasis is put on reading papers and actually acquiring knowledge.

* These tangents can certainly be important, since finding connections between disparate fields can lead to innovation, but I don't think anyone would argue that the knowledge acquired in a PhD is best used in tangents.

A PhD is also not about learning how to think critically.  A couple points here: 1. While the scientific method theoretically underpins how science is conducted and verified on a grand scale, it is less consistently employed in practice.  The typical scientific practice is to try new things, see what happens, and try to explain why after the fact. 2. A lot of the critical thinking you go through in science is troubleshooting, which is certainly challenging.  But any creative endeavour entails troubleshooting, be it debugging programs or ameliorating marketing campaigns. The value a PhD adds over those is negligible. 3. The people who try to get a PhD are sharp to begin with, and already know how to think critically.

What it could be about: 

Having considered what the PhD system is not about, I'm going to informally hypothesize (i.e. pontificate about) what it's actually about: socialization to science, signaling, credentialism, and data.

Socialization

One of my favourite bloggers, Robin Hanson at Overcoming Bias, wrote about the value of high school:
The best evidence I’ve seen that school adds great value is the stories I’ve heard about how difficult are employees who grew up in “primitive” cultures without familiar schools.  Apparently, it is not [that they] don’t know enough to be useful, but that they refuse to accept being told what to do, and object to being publicly ranked relative to co-workers.
In other posts, Hanson expanded on this idea: school is not about expressing creativity or problem solving, it is about getting your homework in on time, and giving your boss (the teacher) what they want.  While grades and intelligence are correlated, grades probably best correlate with diligence and obedience.

Similarly for a PhD: while your results are important, so are other factors, like how you talk to your committee or how involved you are in lab life.  You need to learn how to talk to others about science productively, including wrestling with your boss, or battling rivals.  You have to learn how to deal with experiments that refuse to work, troubleshooting them, and coming to work the next day when they still don't work. You have to learn how to network in science, and get others' perspectives and aid.

While none of these factors will show up directly in a thesis, they are all part of the maturation process.  When you meet a science PhD, you can hope that they've already met these challenges with some success.

Credentialism / Signaling


As more people try to do science, the need to differentiate between people has become more important.  In the past, a bachelor's in biology or chemistry was enough to start a science-y job (beyond technician), but no longer.  Now all a biology degree is good for is discussing pop science with your fellow waiters. If you have any desire to do science in a meaningful professional capacity, your recruiter needs to be able to show their boss they hired someone "qualified."

This holds outside science as well.  A PhD lets institutions like school systems, investment banks, consulting firms, and the government know that you are minimally smart and socialized.

(Of note here is that a PhD is now a minimum credential in academia. For faculty positions, or even post-docs, you will be evaluated on your publication history.  It is outside of academia that the degree credential, counter-intuitively, matters most.  People outside of academia are less able to evaluate your scientific accomplishments beyond the surface level.)

Signaling is closely related to credentialism, but socially: if you want to signal that you're a smart person (who has good genes to pass on), an easy(?) way to do that is to tack a few letters after your name, especially if one of them is D.

Data


If you produce enough of it, you get three letters after your name.  And there ain't no cheaper way to produce data than a 3rd-6th year grad student.

So that is my basic thinking about the PhD system: for the student, it's about getting a credential so you can move on to something else where you do your real work, whether it be a post-doc or in industry.  For the system, it provides cheap, smart labor.  And the socialization a PhD learns greases the relationship between institution and individual, so the PhD holder can keep working in science contentedly.

What the PhD system should be doing is another matter.

Friday, May 6, 2011

Annals of Underwhelming Papers: The Promising Knockout with No Phenotype

As a wannabe hipster, I love categorizing and listing things.  And what better to classify than the coin of the realm, papers?

For lab meeting this week, a post-doc presented a recent paper about the role of the gene Ano2 in olfaction. Ano2 encodes a Ca2+-activated Cl- channel, which was thought to help amplify the signals from olfactory sensory neurons.  In other words, it was thought to be essential for how your nose communicates with your brain.

So the authors did the sensible thing, and knocked the gene the fuck out. (I should state here, that while I will be flip, I respect the work the authors put in, and mean no disrespect.) The first four figures of the paper are dedicated to Westerns and immunostains identifying where Ano2 is expressed in the brain, where it's expressed in the olfactory epithelium, and showing that their knockout technique indeed works.  Which is where the paper gets its first category: the Inception.

Ano2 is in the MOE.  Probably best in the Supplement.
From Billig et al (2011).

Inception was a great movie, but was weighted down by minutes upon decades of exposition.  So you can go multiple layers deep in dreams? And use totems to see if  you're dreaming?  Oh look, two hours have passed, and my ass is getting sore.

An Inception paper is one that spends an unseemly number of figures and text setting up its premise before getting to the meat. Note that the issue here is not performing or showing controls, but rather one of degree: one or two control figures is fine, but we invented supplemental figures for a reason. Figures that are probably best left to the supplement include but are not limited to: North, South, (East,) and Western blots showing that something is knocked in or out; immunostains showing that molecules you thought were coexpressed were indeed coexpressed; fEPSP timecourses showing you can get LTP in your system; immunoblots showing your antibody is specific.

(The Journal of Neuroscience recently stopped accepting supplemental figures, a decision I strongly disagree with (there was some blogosphere discussion of this). When I read a paper that is not of direct interest, I want the authors to get to the point concisely, and present only the most essential, interesting figures. I do not want to sift through perfunctory controls like actin stains, or negative results. While these controls are essential, they are best put in a place only reviewers and competitors will look. Needless to say, the readability of JNeurosci papers has plummeted, and it will be interesting to see if it impacts their citations going forward. Which is not to say that interesting figures should go in the supplement, as unfortunately sometimes happens, but that is a larger discussion.)

In any case, the paper got a bit more exciting in figure 5, where they performed whole-cell patch clamp on the knockout neurons, and found that Ano2 was essential for Ca2+-induced chloride currents.  And in figure 6, they verified this by uncaging Ca2+ in cells, and showing that the KO neurons had reduced currents.

Ano2-/- neurons have smaller currents.
From Billig et al (2011).
Then they tested whether this reduced current was actually functional, and recorded from the olfactory nerve while presenting odors to the mice.  This part got a little weird to my naive eyes, as they presented the odors in both fluid phase and air phase.  As far as I know, mice's nostrils only fill with fluid when they're sick (I think...), so it seemed a bit unnatural.  Anyway, the olfactory nerve transmitted less voltage in the knockout mice! Ano2 was functional!  With the important exception that it was only for the fluid phase and not the air phase, but what's a chemical phase between friends?


Fluid phase nerve response is reduced in knockout mice.
From Billig et al (2011).
On to the behaviour!  Using an automated olfactometer (somehow the machine that presents odors is called a meter), they trained mice to discriminate between odors.  And they found that the Ano2-/- mice were perfectly able to discriminate between every odor pair and concentration difference they tested.  So while Ano2 may be important for some electrophysiological aspect of olfactory sensory neurons, it isn't essential for function.  Which makes this a Sunshine paper.

Ano2-/- mice have no trouble discriminating odors. Another KO mouse does...
From Billig et al (2011).
Sunshine started as a good science fiction suspense movie.  It had a beautiful cast (if only scientists looked like that), unexplained occurrences, and cool special effects. Halfway through the movie I couldn't wait to see how it ended.  Then it ended as a bad horror movie. (An alternative, if clunkier, name for the category: Invention of Lying.)

Like Sunshine, this paper started well (if Inceptionally slowly), and going through figure 7 I couldn't wait to see how it ended.  Until I saw there was no phenotype, and this ion channel isn't important in this system.  Kinda disappointing (but not horrifying).

In the discussion, the authors mention that humans with this gene deleted have no olfactory impairment, which should have been a tipoff.  And they hypothesize that, "The expression of Ca2+-activated Cl− channels in mammalian OSNs may be an evolutionary vestige from freshwater animals." Yeah.

As I said, it's a solid piece of work.  Shame about the phenotype.  The authors have my sympathy.

Monday, May 2, 2011

(Neuro) Biology is computer science

When I was an undergrad trying to figure out my major, I asked a professor if there was some way to combine my favourite subjects, neuroscience and computer science.  And lo! there is computational neuroscience.  What I didn't realize is that these subjects interact in a far more practical, if less significant way: to be a great biologist, you need to be a competent programmer.

I'm biased about this.  I'm a computer dork.  I got an Android phone because I like playing with ROMs, and I have written multiple website scrapers to get data that I want. As such, I have long thought that all science majors - biology and chemistry included - should require introductory programming classes, because all data analysis is done on computers.  Yet, when I tell people that, they patronizingly say, "Yeah, that's a good idea," as if it would be nice but not that useful. Let me try to convince you by speaking from experience, and looking at what types of techniques and analysis are used today.

In graduate school I worked in an imaging lab.  While the layman may think of images as pictures made of colored pixels, a computer programmer realizes what they really are: two-dimensional arrays of integers (or 3D for color images). In grad school, these images could even be 4-dimensional, as each pixel had a time dimension. When you analyze imaging data, you need to completely understand that images are just multidimensional arrays of data; how drawing an ROI is not just a circle, but means masking the data; and how to filter in time and space. While this does not require high-level math, it does require familiarity with using arrays in programs. I saw firsthand how people with little to no programming experience struggled, and were at the mercy of others' programs (including the main ones written by the boss). There was almost a division in the lab between those who could program, and those who could not.
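To make that concrete, here is a toy Python/numpy sketch of those three ideas: the movie as an array, the ROI as a mask, and filtering in space and time. The array sizes and the circular ROI are invented for illustration, not any particular lab's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Stand-in imaging movie: a (time, y, x) array of pixel intensities.
movie = np.random.rand(200, 128, 128).astype(np.float32)

# "Drawing an ROI" really means building a boolean mask over the pixel grid.
yy, xx = np.mgrid[0:128, 0:128]
roi = (yy - 64) ** 2 + (xx - 50) ** 2 < 10 ** 2      # a circular ROI

# The ROI time course is just the mean of the masked pixels in each frame.
trace = movie[:, roi].mean(axis=1)

# Filtering in space (smooth each frame) and in time (smooth the trace).
smoothed_frame = gaussian_filter(movie[0], sigma=2)                 # spatial
smoothed_trace = np.convolve(trace, np.ones(5) / 5, mode="same")    # temporal
```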

Now, I am working on an electrophysiology project, recording multiunit data.  Understanding electrophysiology obviously requires some math since it involves voltage and currents. Beyond the theory there is data analysis.  We record data off 32 electrodes, which generate gigabytes of data. From this data we need to extract spikes, which involves figuring out whether voltage changes are real or noise, and clustering spikes recorded off the different electrodes (thankfully this has been largely solved by others).  Then once we have the spikes, we have to interpret them: make histograms with different time indices; cross-correlate the spikes with each other and with the stimulus; perform principal component analyses; and run stats to see if any of it is true.  In short, we have to program.
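As a flavor of what that programming looks like, here is a stripped-down sketch of the first steps, threshold-based spike detection and a peri-stimulus time histogram, on fake data. The sampling rate, threshold rule, and stimulus times are all stand-ins, and real spike sorting is far more involved than a single threshold.

```python
import numpy as np

fs = 20000                                  # sampling rate in Hz (a typical value)
v = np.random.randn(fs * 10)                # stand-in for one electrode's voltage

# Crude spike detection: threshold at ~4x a robust estimate of the noise,
# keeping only the first sample of each downward threshold crossing.
thresh = -4 * np.median(np.abs(v)) / 0.6745
crossings = np.where((v[1:] < thresh) & (v[:-1] >= thresh))[0]
spike_times = crossings / fs                # in seconds

# A peri-stimulus time histogram around made-up stimulus onsets.
stim_onsets = np.arange(0.5, 9.5, 1.0)      # stimulus times in seconds
window, binsize = 0.5, 0.01
edges = np.arange(-window, window + binsize, binsize)
psth = np.zeros(len(edges) - 1)
for onset in stim_onsets:
    rel = spike_times[np.abs(spike_times - onset) < window] - onset
    psth += np.histogram(rel, bins=edges)[0]
psth /= len(stim_onsets) * binsize          # convert to spikes per second
```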

As neuroscience progresses from recording from a single cell in a single nucleus over a short time period to recording from many cells in different nuclei over modest time periods, the amount of data is increasing by orders of magnitude, and thus the analysis of that data is becoming more sophisticated and requires more automated (i.e. programmed) processing.  I've already shown how important programming is to imaging and electrophysiology, but the trend is pervasive.  In molecular biology, people now use microarrays to identify interesting genes, which requires statistics.  In developmental biology, people use scripts to identify synapses where synaptic markers overlap. In EM, as serial sectioning becomes feasible, you need algorithms to reconstruct spines and whole cells. Even in the land of Western blots, programming will be necessary.  As we generate more blots, we will need some stats to keep track of whether the differences are significant.
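Even that last, humble example boils down to a few lines once the bands are quantified. Here's a toy sketch with made-up intensity values and an ordinary t-test standing in for whatever statistics are actually appropriate for a given experiment:

```python
import numpy as np
from scipy import stats

# Hypothetical band intensities (normalized to a loading control) from a
# handful of blots; the numbers are invented for illustration.
control  = np.array([1.00, 0.92, 1.10, 0.97, 1.05])
knockout = np.array([0.71, 0.80, 0.65, 0.77, 0.83])

t, p = stats.ttest_ind(control, knockout)
print(f"t = {t:.2f}, p = {p:.3f}")          # is the difference significant?
```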

Other, more successful scientists have beaten me to the punch and stated that "biology is information science." This was hinted at when sequencing the first genome, and is now plain in the age of hundreds of genomes, as people try to extract meaning from them.  While neuroscience may not yet be an information science, I would say it is a computer science.