We all want to publish in Nature. Papers in Nature are (supposed to be) the complete package: reliable results that show something novel; cool techniques; a famous corresponding author. And if you want to get one, you need a title that shows you are a refined gentleperson who belongs in the Nature club.
So to help you, dear blog reader, I have scoured the archives of Nature* to decipher the ideal form of Nature titles:
[research-y verb-ing] a neural circuit for [behaviour]
For example, in hunger there are: Genetic identification of a neural circuit that suppresses appetite; and Deciphering a neuronal circuit that mediates loss of appetite. Or in anxiety there is Genetic dissection of an amygdala microcircuit that gates conditioned fear. Disambiguate is an underused verb here.
If you're feeling poetic, you can rearrange the elements. For example, you can try putting the neural stuff first, like The neural representation of taste quality at the periphery. Or in olfaction, there's Neuronal filtering of multiplexed odour representations.
If you are particularly concise, you can drop the verb altogether and just combine a couple of nouns with a preposition: The cells and peripheral representation of sodium taste in mice. Perception of sniff phase in mouse olfaction. Distinct extended amygdala circuits for divergent motivational states.
Under NO CIRCUMSTANCES are you to mention the brain region, molecular marker, or techniques you used to [verb] your [behaviour]. If you are studying how photostimulating AgRP neurons induces feeding, don't mention photostimulation or AgRP (Deconstruction of a neural circuit for hunger). Otherwise, you might end up in Nature Neuroscience (AGRP neurons are sufficient to orchestrate feeding behavior rapidly and without training).
And, most importantly, keep it short. If you're studying taste receptors, go for something like An amino-acid taste receptor. Stuff like Gustatory expression pattern of the human TAS2R bitter receptor gene family reveals a heterogenous population of bitter responsive taste receptor cells goes in the Journal of Neuroscience.
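If you would rather automate your title-smithing, here is a minimal sketch of the template in Python; the word lists are my own inventions, not an official Nature lexicon.

```python
import random

# Hypothetical word lists; swap in your own field's jargon.
VERBS = ["Genetic identification", "Deconstruction", "Genetic dissection", "Disambiguation"]
CIRCUITS = ["a neural circuit", "a neuronal circuit", "an amygdala microcircuit"]
BEHAVIOURS = ["hunger", "loss of appetite", "conditioned fear", "sodium taste"]

def nature_title():
    """Fill in the template '[research-y verb-ing] of [circuit] for [behaviour]'."""
    return f"{random.choice(VERBS)} of {random.choice(CIRCUITS)} for {random.choice(BEHAVIOURS)}"

for _ in range(3):
    print(nature_title())
```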
If you have any other examples of the beautiful Nature titles, write in the comments!
*skimmed the tables of contents
Bonus titles:
The receptors and cells for mammalian taste.
Excitatory cortical neurons form fine-scale functional networks
A family of candidate taste receptors in human and mouse
Short-term memory in olfactory network dynamics.
The detection of carbonation by the Drosophila gustatory system
The cells and logic for mammalian sour taste detection.
The subcellular organization of neocortical excitatory connections.
The receptors and coding logic for bitter taste
The cells and peripheral representation of sodium taste in mice
The molecular basis for water taste in Drosophila
Monday, August 27, 2012
The paper currency of science
OR: Mike reads chapters 4&5 of Wealth of Nations
This is the third in a series of posts wherein I attempt to apply economics principles to neuroscience. Econoneuroscience, if you will. Previous posts covered transaction costs and specialization.
While currency (or money) seems simple and obvious - you trade goods for money, and then vice versa - it has great nuance. In the age of physical currency, money ranged from gigantic stone rings to cigarettes to gold. In our digital world, most money is not physical, but simply bits representing our bank accounts.
Since science is a (haphazardly planned) market, I wondered what our currency is, and perhaps if we could do better. Luckily for me, Adam Smith covered the basics of currency just after considering specialization, which I summarize here.
Adam Smith on currency
In the first three chapters of Wealth of Nations, Smith explored how labour specialization increases productivity, and in chapters four and five, he considered the implications of specialized labor on the market. Specialized labourers produce a "superfluity" of a single good, and must trade it for other goods. For some professions, like bakers, it is easy to trade the surplus: just trade bread for a nail. For others, it is awkward. How does a shepherd get a beer, when all he has are oxen?
Rather than barter, we use money. Metals are especially useful for money because they are non-perishable, unlike bread or cows; and metals are easily divisible, so you can trade for the precise worth of goods. (Smith further describes how metals went from bars to marked coins, and how those coins got debased, but that is irrelevant for us.)
Once a society starts using money, it next needs to find the price of each commodity. Smith argues that the economically "real" value of any commodity is how much labor it took to produce. Thus, if you buy a commodity with money, you are indirectly buying the producers' labor. Now, if you're measuring commodities' worth in labor, how do you then value the labor? Sometimes an hour's work is strenuous, and other times it is easy; and people invest in education and training to increase their productivity. The answer is money: if a commodity requires arduous work, or skill to produce, it will cost more. Thus money allows us to value and exchange labour, the real commodity. (Here, Smith goes on a long exploration of the real and nominal values of labor/money/commodities, which I briefly discuss at the end of this post.)
Putting these ideas together, you can create the classic definition of money: a store of value, medium of exchange, and unit of account. So what fits these properties for science?
Identifying the scientific currency
For simplicity, I would argue that there are two types of agents in the scientific market: labs, and funding agencies. Labs specialize in producing various forms of data, and seek to trade data for funding; the funding agencies "produce" funding, and seek to purchase the best data.
"Now, wait a second Mike, isn't money the currency of science?" This has obvious merit. Labs that produce more data generally get more funding, broadly fulfilling the unit of account aspect. But thinking of funding as a medium of exchange is strange, since funding agencies "produce" funding, rather than exchange something for funding. Indeed, most labs I'm aware of don't trade funding to other labs in exchange for data, which you would expect if funding were a medium of exchange. And funding is a terrible store of value since it runs out in 3-5 years, and labs are forced to spend their entire budgets while they can. While funding is an obvious currency, it does not fit well.
Instead, I would argue that in practice, papers are the currency of science. First, papers are a unit of account. From a lab's perspective, high-profile papers theoretically contain more data, of higher value, produced with more labour, than low-profile papers; and from funding agencies' perspective, more funding is given to labs with more and better papers.
This also emphasizes the second aspect of currency, namely that it acts as a medium of exchange. Labs trade data for papers, then trade papers for funding. Labs also sometimes collaborate to produce data for papers. Funding agencies can't directly buy data, due to the circuitous route data production often takes (if only buying desired data were possible!). Instead, they must buy data after the fact, by giving funding to labs that produce papers.
Finally, papers act as a store of value. If I publish a paper in 2012, I will be able to "exchange" that paper for funding or positions, years down the line.
It may be counterintuitive to think of scientific papers as a currency, but they have all the requisite characteristics. There are, of course, many problems with this currency.
Problems
Smith noted that metals were commonly used as currency, since they are non-perishable, and easily divisible. In contrast, papers are neither. While a paper published in 2012 retains its value for a few years, that value constantly decreases; a paper published ten years ago will get you little funding or positions today. Indeed, this causes people to constantly trade papers for funding to generate more data; one might even call this inflation. I'm not sure any scientific currency can solve this problem, since ten-year-old data is almost always less valuable than new data; the ten-year-old experiments have already been done (and hopefully replicated).
Papers are indivisible as well; in other words, they make a poor unit of account. From the top-down perspective, it is difficult to compare the value of papers from different journals. Is a Nature paper worth the same funding as two Neuron papers, or four Journal of Neuroscience papers? Or perhaps we should rank the journals by impact factor, and the papers by citations? Whatever metric one comes up with will be flawed.
From the bottom-up perspective, it is hard to identify how much a paper's constituent parts are worth. Smith claimed the value of money was how much labor it can command. How much data or labour goes into a paper? Nature papers have 4-6 main figures, but can have over a dozen supplemental figures. In contrast, Neuron papers are 6-8 figures long, but have 5-8 supplemental figures. Which required more data? How does one compare different fields? Is a Western blot worth a two-photon image? And if someone uses better technology to get their data, should their paper include more figures or fewer? These are difficult questions, only made more so by filtering through the oxen of papers.
A new currency?
Biotech companies are lucky, in that they can use actual money as their currency: they produce data which is used to make products that get sold. What are we in academia to do?
Fundamentally, the problem with using papers as a currency is that they're bad units of account: they're too big, and only vaguely tied to value. It's as if we were trading oxen by only knowing their weight, and ignoring their parentage and health.
The size issue is relatively easy to solve: limit papers to just a few figures. Some people denigrate the idea of "salami science," but it's a much more precise accounting. The last paper I reviewed was published in Nature, and had six main figures and fifteen supplemental figures. In comparison, another Nature paper last year had three main figures and four supplemental (and much smaller figures to boot; both are fine papers, and I am simply commenting on size). Wouldn't a fairer accounting system have split the first paper in three? They could even be published in different journals. It would also de-emphasize the pernicious idea of "storytelling," and simply let people publish nuggets of data that may not fit into a grand arc.
The issue of trying to assign value to data is a harder nut to crack. We could try to follow Smith, and measure the man-months taken to produce data. To account for time-saving innovations, we could assign a multiplier to innovative techniques. Yet, how would we account for effort, skill, or the simple dead time in the middle of running a Western? It would be easier to value data post-hoc, rather than summing the labour inputs.
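For concreteness, here is a minimal sketch of what that Smith-style input accounting might look like; every experiment, number, and multiplier below is invented for illustration.

```python
# Hypothetical labour-input valuation: value = sum of man-months x technique multiplier.
# All figures below are made up for illustration.
experiments = [
    {"name": "Western blots",      "man_months": 2, "multiplier": 1.0},
    {"name": "Two-photon imaging", "man_months": 6, "multiplier": 1.5},  # bonus for an "innovative" technique
    {"name": "Behavioural assays", "man_months": 4, "multiplier": 1.0},
]

paper_value = sum(e["man_months"] * e["multiplier"] for e in experiments)
print(f"Labour-input value: {paper_value:.1f} weighted man-months")
# Conspicuously absent: effort, skill, and the dead time in the middle of running a Western.
```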
Ultimately, I think the best appraisal of value is the one proposed many times before: let citations and the community weigh the value of data, rather than a few, arbitrarily chosen reviewers. Community rating may be subjective and have its biases - favouring established labs, or flashy results - but science is imprecise enough that I can't think of a better metric.
My core conclusion from thinking about scientific currency - that we need to ditch peer-reviewed papers, and replace them with smaller, post-publication-evaluated data bites (in some form) - is not new. Perhaps this idea is my panacea. Yet the route to it was a fresh one. By looking at science as an exchange between labs producing data, and funding agencies providing money, you can see the fundamental question is how to value data. Regardless of other complaints about publishing - its delays, and arbitrariness - trying to connect data to funding via papers is like trying to run an economy by trading oxen for beer.
(Looking over my notes, Smith has some other interesting nuggets that did not fit into the main narrative of this post. He distinguishes between value in use (what a good can do; water has this) and value in exchange (what you can trade it for; e.g. gold is expensive). In science, anatomical studies are often useful, but don't yield high-profile papers. In contrast, many flashy papers get published in Nature and Science, but are often simply wrong.
Smith also distinguishes between the real price (in units of labor) and nominal price (in money) of commodities. These often change with supply and demand, or due to technological innovation. For example, electron microscopy has probably had a stable real and nominal value over the last 20-30 years, and both the real and nominal value of Western blots has cratered due to improved technology. In contrast, the nominal value of imaging has gone up as fluorophores improved, even as the labor necessary to produce images has gone down. This further emphasizes the difficulty in trying to value papers by their inputs.)
Saturday, March 10, 2012
Neuron vs Nature Neuroscience
One reason I loathe the current publishing system is the proliferation of supplemental figures: those figures that no one reads, but take lots of effort to produce. I thought my PNAS paper was unusually bad when I had five main figures and ten supplemental ones, but I've noticed that that ratio is becoming routine. So in the tradition of hotornot.com, and my previous Nature vs Science, I'll pit Neuron and Nature Neuroscience against each other to see which is worse at requiring supplemental figures.
I simply looked at the two most recent issues of both Neuron* and Nature Neuroscience, and counted up the number of main and supplemental figures for each paper (tables were counted as figures; this ignores figure size):
What can you conclude from this? Depending on your opinion of each journal's editorial rigor, and the Elsevier boycott, you should probably submit your first manuscript to Neuron.
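(If you want to run the same tally yourself, here is a minimal sketch in Python; the counts below are placeholders, not the data from those issues.)

```python
# Hypothetical per-paper figure counts as (main, supplemental) pairs; tables counted as figures.
# Replace these placeholders with counts from the two most recent issues of each journal.
neuron = [(7, 6), (8, 5), (6, 7), (8, 8)]
nature_neuroscience = [(6, 10), (5, 12), (7, 9), (6, 11)]

def summarize(label, papers):
    mains = [m for m, _ in papers]
    supps = [s for _, s in papers]
    print(f"{label}: mean main figures {sum(mains) / len(papers):.1f}, "
          f"mean supplemental figures {sum(supps) / len(papers):.1f}, "
          f"supplemental-to-main ratio {sum(supps) / sum(mains):.2f}")

summarize("Neuron", neuron)
summarize("Nature Neuroscience", nature_neuroscience)
```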
* While compiling these numbers, I saw the Mooney lab has a new paper. Goddamn, his lab has been crushing it lately. They labeled specific cell populations using viruses (in the zebra finch), and showed that following deafening, only the striatothalamic-projecting (HVCX) neurons underwent synaptic remodeling. In contrast, the motor-projecting (HVCRA) neurons were stable. The striatothalamic pathway has long been hypothesized to be responsible for plasticity in the system, and this is the best evidence to date.
Monday, September 12, 2011
Organic reviews
Long ago, on the nascent form of this blog, I wrote a little diatribe on the shortcomings of the peer-reviewed journal system. My basic gripe is that the system slows the dissemination of information for marginal benefit. For example, people claim peer review makes science more reliable, but it has been found that "at least 50% of published studies from academic laboratories cannot be repeated in an industrial setting." And that's for the most reproducible natural science, chemistry.
Review papers are frustrating for different reasons. Certainly, most review papers are not delayed by peer reviewers. Instead, reviews are hobbled by their very format: review articles come out every few months, and cover a field of research as a whole. Their infrequency means they become outdated as soon as another important paper comes out. And their scope means they are forced to rehash the same basic information (there are only so many ways to say AMPA receptors are important for LTP). I often find reading reviews tedious, trying to segregate what's new from what I already know.
I wish we had a form of organic review. A format wherein you could write a complete overview of a field, and then update it piecemeal as new findings emerge; wherein you didn't have to rewrite the entire review; wherein you could stay current to within a month, or even a week. In effect, I wish we had review wikis.
There are a few non-traditional review sources out there, but they are all lacking in different ways. Most obviously, there's Wikipedia. Like a lazy undergrad, I often turn to Wikipedia first when I read an unfamiliar term (what's a diencephalon, again?). The articles on popular things, like AMPA receptors, are fairly thorough, while the articles on more obscure things like T1R3 contain just enough information for me to look elsewhere. Yet Wikipedia is true to its nature as an encyclopedia, and is rarely up-to-date or technical enough to be useful to scientists.
Some people have tried to improve Wikipedia. A couple of years ago, the Society for Neuroscience tried to ameliorate the situation by launching the "Neuroscience Wikipedia initiative." Unfortunately, it appears to have netted fewer than 100 edits.
I myself have dabbled in editing Wikipedia. When I train students, I try to get them to read papers, and synthesize them into a whole. Instead of getting them to write a staid essay, I have them edit the Wikipedia page on whatever they're studying. For example, I worked with one student to develop a PI3K FRET sensor, so he added to the section on PI3K in long term memory (his username is Wc18, mine is Amphipathic).
Besides Wikipedia, there are a few other web resources that almost act like organic review. There is wikigenes, which has useful lists of citations, but lacks any bird's-eye perspective on research. Some labs have wikis, but they are often quite focused (the Hayashi lab's is quite good). And some adventurous souls have set up regular ol' web pages dedicated to their field of interest, but static webpages by their nature cannot organically evolve. In general, I'd say these alternative forms of review fail because they are too superficial, lack Weltanschauung, or are too focused.
The review wiki is such an obvious idea that it must come to fruition. The biggest obstacles are probably authorship (people want credit), reliability (no one trusts a random web page), and quality control. The easiest solution to these problems would be for a known organization to sponsor a wiki. For example, I bet a neuroscience department could gain reputation by starting an awesome, up-to-date wiki. Over time, as the wiki grew, it could serve as an alternative sort of textbook (in fact, if you search for science wikis, you'll see many hits from teachers looking for textbook alternatives). They could brand the wiki with the department or university. Then when undergrads inevitably discover the wiki, they'll assume the authors are important. The first mover advantage here would be huge.
(Why don't I take action and start a wiki? I don't have the stature to get people to use it, nor get buy-in from others to expand it.)
Until then, I will continue to read traditional reviews, and supplement them as best I can. The precious few neuroscience bloggers out there do a decent job reviewing recent papers, and in doing so comment on the state of the field. I hope some of my blog posts can do the same.
Monday, June 6, 2011
Nature vs Science
The goal of any project (after, of course, doing sound research) is publication in a top journal like Nature or Science. Over the years I've developed some biases about those two journals, like that Science publishes more speculative articles, and Nature has a crush on birdsong.
Of course, developing biases without testing them is bad science, so I decided to go through one year of neuroscience articles (from June 2010 to now; labeled as "neuroscience" by the journal), and see what trends there were. I made a spreadsheet containing each article, the date published, and a general categorization of the article (these categorizations are rough, especially for some "transdisciplinary" papers, and for papers outside my expertise, like developmental neuroscience). So what are the findings?
Nature publishes more neuroscience articles. Over the last year, Science published 52 articles tagged "neuroscience," of which twelve were cognitive neuroscience articles. In comparison, Nature published 73 articles tagged "neuroscience," of which only three I categorized as cognitive. Without getting into a discussion about the semantics of cognitive science, neuroscience, and psychology, if you work in a non-human system, you may want to try Nature first. (I am, of course, ignoring the huge issue of how many papers each journal publishes, total across fields, which may also explain this.)
Science publishes more speculative/non-traditional/hard-to-categorize articles. For example, "Human Tears Contain a Chemosignal," or "Astrocytes Control Breathing Through pH-Dependent Release of ATP." In general, I had a much harder time figuring out how to label Science articles. While I don't doubt the veracity of these articles, if you are truly pushing the envelope in terms of interdisciplinary work, Science may be a better target.
Regarding subject area, Science skews cognitive, while Nature skews towards systems and translational neuroscience. As mentioned above, Science published twelve cognitive science articles compared to Nature's three. In terms of translational neuroscience (ignoring things like addiction models), Science published three translational articles versus Nature's twelve. And in terms of systems neuroscience, over a third of Nature's neuroscience articles were in systems neuroscience, versus 20% of Science's. Of Nature's systems articles, there was a slight bias towards vision (twelve articles).
Finally, what about birdsong in Nature? In the past year, there's only been one birdsong paper, from the Fee lab. Since January 2006, there have been eleven birdsong papers in Nature, about two per year. Of those eleven, though, six came out between December 2007 and December 2008, when I got the impression that Nature had a crush on birdsong. So while I was right to think an awful lot of birdsong articles were getting into Nature, it was just a coincidental cluster. And looking through this list reminded me of a cool paper, where they looked at the temporal coding of birdsong by cooling the brain down with a Peltier device.
As a systems neuroscientist working in olfaction and taste, the conclusions seem pretty clear. Try Nature first, unless I've got a good cognitive hook to the data.
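(If you want to redo the tally from your own spreadsheet, here is a minimal sketch; the rows below are placeholders, not my actual article list.)

```python
from collections import Counter

# Hypothetical spreadsheet rows as (journal, category) pairs; replace with the real articles.
articles = [
    ("Science", "cognitive"), ("Science", "systems"), ("Science", "translational"),
    ("Nature", "systems"), ("Nature", "systems"), ("Nature", "translational"),
]

by_journal = {}
for journal, category in articles:
    by_journal.setdefault(journal, Counter())[category] += 1

for journal, counts in by_journal.items():
    total = sum(counts.values())
    breakdown = ", ".join(f"{cat} {n} ({n / total:.0%})" for cat, n in counts.most_common())
    print(f"{journal} ({total} articles): {breakdown}")
```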
Thursday, April 22, 2010
Science, delayed
In our most recent lab meeting, I presented a recent paper from Science, CKAMP44: A Brain-Specific Protein Attenuating Short-Term Synaptic Plasticity in the Dentate Gyrus. This was a great, relatively straightforward paper that: 1.) did a proteomics screen to identify a novel AMPA receptor-associating protein called CKAMP44; 2.) generated a CKAMP44 antibody; 3.) performed immunostaining and northern blots to confirm it was expressed specifically in the brain; 4.) performed westerns to show that CKAMP44 indeed associates with AMPA receptors in the brain; 5.) transfected oocytes with CKAMP44 and measured its modulation of AMPA receptor currents; 6.) generated a CKAMP44 KO mouse; 7.) and used the KO mouse to show how CKAMP44 modulates synaptic currents in slices. For the details, I would recommend reading the paper.
Normally, one would think reading such an interesting paper would be a delight. I, however, was annoyed. This paper represented years of work by the nine authors. I suspect that they initially identified the protein 3-4 years ago in the proteomics screen, and confirmed its importance using northern blots/antibodies shortly thereafter. Yet I had to wait until now to hear about it. People in the field may have known of CKAMP's existence from conferences, but the information had not disseminated through the community until the paper was published.
Isn't that insane? That in the age of the internet and instant communication, we as scientists are still waiting months and years to hear about others' research? Shouldn't we have a better system now?
I have many issues with the current publishing and review system, but the one this paper most applies to is the idea of how journal publishing works. Most of my problems with the system were inspired by Clay Shirky's recent book Here Comes Everybody, about how the internet is changing our modes of communication and work. In one chapter of the book, Mr. Shirky described how our model of news is changing. Before the internet, the model was that journalists would search out interesting stories (as well as be supplied them by publicists or interested parties), filter out the chaff, and publish the newsworthy items; simply put, they filtered, then published. This was necessary because the costs of gathering and transmitting information were high. For example, if you wanted court information, you had to actually travel to the courthouse, rather than calling them, or looking up the information online.
Now, however, the news model is radically different. With the internet, everyone has a voice (at least in theory), and can broadcast to their friends what they think is important. Many news stories now are broken on blogs, and then linked to by other blogs, until they are finally picked up by the major news outlets. In this model, then, everything is published, and then filtered by users to identify what is important and should be read.
So what does this have to do with science? The journal publishing system is stuck in the filter-then-publish mode, with editors and reviewers gatekeeping information. Their job (theoretically, again) is to verify that scientific findings are true and of interest. And to exceed their thresholds for publication, authors need to perform controls and do exciting experiments.
The problem, however, is that they don't, and can't, perform those duties. It is practically impossible for a reviewer to verify that any given work is true, whether the problems stem from falsification or sloppiness. Journals are littered with papers that were retracted or, more commonly, never reproduced. And significance is completely arbitrary, determined not by journal editors but after the fact by citations. I can name many papers in prestigious journals I consider insignificant, and Journal of Neuroscience papers that have been cited hundreds of times (e.g. Rich Mooney's 2000 J Neuroscience paper).
And the cost of this antiquated system is time. It takes time for scientists to perform all the experiments beyond the initial, interesting ones; it takes time for authors to put together "stories" (an issue for another time), write the paper, and put together pretty figures; it takes time for editors to decide whether to review it, and time for reviewers to pass judgment; and then it takes more time to actually publish it (although this time has lessened with internet publishing). And if you sum all these times together, you get year-long delays between when people do interesting experiments and when the scientific community finds out about them.
Unfortunately, despite my dislike of the current publishing system, I have no simple alternative. Whatever the new system entails, however, I hope it includes faster publishing so we can learn about new results sooner.