
Monday, August 27, 2012

The paper currency of science

OR: Mike reads chapters 4&5 of Wealth of Nations

This is the 3rd in a series of posts wherein I attempt to apply economic principles to neuroscience. Econoneuroscience, if you will. Previous posts covered transaction costs and specialization.

While currency (or money) seems simple and obvious - you trade goods for money, and then vice versa - it has great nuance. In the age of physical currency, money ranged from gigantic stone rings to cigarettes to gold. In our digital world, most money is not physical, but simply bits representing our bank accounts.

Since science is a (haphazardly planned) market, I wondered what our currency is, and whether we could do better. Luckily for me, Adam Smith covered the basics of currency just after considering specialization, which I summarize here.

Adam Smith on currency

In the first three chapters of Wealth of Nations, Smith explored how labour specialization increases productivity, and in chapters four and five, he considered the implications of specialized labor on the market. Specialized labourers produce a "superfluity" of a single good, and must trade it for other goods. For some professions, like bakers, it is easy to trade the surplus: just trade bread for a nail. For others, it is awkward. How does a shepherd get a beer, when all he has are oxen?

Rather than barter, we use money. Metals are especially useful for money because they are non-perishable, unlike bread or cows, and easily divisible, so you can trade for the precise worth of goods. (Smith further describes how metals progressed from bars to marked coins, and how those coins got debased, but that is irrelevant for us.)

Once a society starts using money, it next needs to find the price of each commodity. Smith argues that the economically "real" value of any commodity is how much labor it took to produce. Thus, if you buy a commodity with money, you are indirectly buying the producers' labor. Now, if you're measuring commodities' worth in labor, how do you then value the labor itself? Sometimes an hour's work is strenuous, and other times it is easy; and people invest in education and training to increase their productivity. The answer is money: if a commodity requires arduous work or skill to produce, it will cost more. Thus money allows us to value and exchange labour, the real commodity. (Here, Smith goes on a long exploration of the real and nominal values of labor/money/commodities, which I briefly discuss at the end of this post.)

Putting these ideas together, you get the classic definition of money: a store of value, a medium of exchange, and a unit of account. So what fits these properties for science?

Identifying the scientific currency

For simplicity, I would argue that there are two types of agents in the scientific market: labs and funding agencies. Labs specialize in producing various forms of data, and seek to trade data for funding; the funding agencies "produce" funding, and seek to purchase the best data.

"Now, wait a second Mike, isn't money the currency of science?" This has obvious merit. Labs that produce more data generally get more funding, broadly fulfilling the unit of account aspect. But thinking of funding as a medium of exchange is strange, since funding agencies "produce" funding, rather than exchange something for funding. Indeed, most labs I'm aware of don't trade funding to other labs in exchange for data, which you would expect if funding were a medium of exchange. And funding is a terrible store of value since it runs out in 3-5 years, and labs are forced to spend their entire budgets while they can. While funding is an obvious currency, it does not fit well.

Instead, I would argue that in practice, papers are the currency of science. First, papers are a unit of account. From a lab's perspective, high-profile papers theoretically contain more data, of higher value and requiring more labor, than low-profile papers; and from the funding agencies' perspective, more funding is given to labs with more and better papers.

This also emphasizes the second aspect of currency, namely that it acts as a medium of exchange. Labs trade data for papers, then trade papers for funding. Labs also sometimes collaborate to produce data for papers. Funding agencies can't directly buy data, due to the circuitous route data production often takes (if only buying desired data were possible!). Instead, they must buy data after the fact, by giving funding to labs that produce papers.

Finally, papers act as a store of value. If I publish a paper in 2012, I will be able to "exchange" that paper for funding or positions, years down the line.

It may be counterintuitive to think of scientific papers as a currency, but they have all the requisite characteristics. There are, of course, many problems with this currency.

Problems

Smith noted that metals were commonly used as currency, since they are non-perishable and easily divisible. In contrast, papers are neither. While a paper published in 2012 retains its value for a few years, that value constantly decreases; a paper published ten years ago will get you little funding or positions today. Indeed, this keeps people constantly trading papers for funding to generate more data; one might even call this inflation. I'm not sure any scientific currency can solve this problem, since ten-year-old data is almost always less valuable than new data; the ten-year-old experiments have already been done (and hopefully replicated).
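To make the depreciation concrete, here is a toy sketch; the 30%-per-year decay rate is a number I invented purely for illustration:

```python
# Toy model of a paper's depreciating "exchange value".
# The 30%/year decay rate is an illustrative assumption, not a measurement.
DECAY_RATE = 0.30

def paper_value(initial_value: float, years_old: int) -> float:
    """Value remaining after `years_old` years of constant exponential decay."""
    return initial_value * (1 - DECAY_RATE) ** years_old

for age in (0, 2, 5, 10):
    print(f"{age:2d} years old: {paper_value(100, age):5.1f}% of original value")
# Under this assumption, a ten-year-old paper retains ~2.8% of its value,
# matching the intuition that old papers buy little funding today.
```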

Papers are indivisible as well; in other words, they make a poor unit of account. From the top-down perspective, it is difficult to compare the value of papers from different journals. Is a Nature paper worth the same funding as two Neuron papers, or four Journal of Neuroscience papers? Or perhaps we should rank the journals by impact factor, and the papers by citations? Whatever metric one comes up with will be flawed.

From the bottom-up perspective, it is hard to identify how much a paper's constituent parts are worth. Smith claimed the value of money was how much labor it could command. How much data or labour goes into a paper? Nature papers have 4-6 main figures, but can have over a dozen supplemental figures. In contrast, Neuron papers are 6-8 figures long, but have 5-8 supplemental figures. Which required more data? How does one compare different fields? Is a Western blot worth a two-photon image? And if someone uses better technology to get their data, should their paper include more figures or fewer? These are difficult questions, only made more so by filtering through the oxen of papers.

A new currency?

Biotech companies are lucky, in that they can use actual money as their currency: they produce data which is used to make products that get sold. What are we in academia to do?

Fundamentally, the problem with using papers as a currency is that they're bad units of account: they're too big, and only vaguely tied to value. It's as if we were trading oxen by only knowing their weight, and ignoring their parentage and health.

The size issue is relatively easy to solve: limit papers to just a few figures. Some people denigrate the idea of "salami science," but it's a much more precise accounting. The last paper I reviewed was published in Nature, and had six main figures and fifteen supplemental figures. In comparison, another Nature paper last year had three main figures and four supplemental (and much smaller figures to boot; note that both are fine papers, and I am simply commenting on size). Wouldn't a fairer accounting system have split the first paper into three? They could even be published in different journals. It would also de-emphasize the pernicious idea of "storytelling," and simply let people publish nuggets of data that may not fit into a grand arc.

The issue of trying to assign value to data is a harder nut to crack. We could try to follow Smith, and measure the man-months taken to produce data. To account for time-saving innovations, we could assign a multiplier to innovative techniques. Yet how would we account for effort, skill, or the simple dead time in the middle of running a Western? It would be easier to value data post hoc, rather than summing the labour inputs; a toy version of the input-summing approach is sketched below.
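Here is a minimal sketch of that input-summing valuation. All man-month figures and multipliers are hypothetical; the point is how quickly the weights become arbitrary:

```python
# A minimal sketch of input-cost valuation, Smith-style.
# All figures and multipliers below are hypothetical, chosen only to
# illustrate why summing labour inputs quickly becomes arbitrary.

# (technique, man_months, innovation_multiplier)
experiments = [
    ("Western blots",         3.0, 1.0),  # routine technique
    ("two-photon imaging",    6.0, 1.5),  # newer technique; multiplier is a guess
    ("optogenetic behaviour", 9.0, 2.0),  # another guessed multiplier
]

value = sum(months * multiplier for _, months, multiplier in experiments)
print(f"Total 'labour value' of the paper: {value:.1f} weighted man-months")

# The problem: every multiplier above is arbitrary, and nothing here
# captures skill, effort, or dead time, which is why valuing data
# post hoc looks easier than summing the inputs.
```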

Ultimately, I think the best appraisal of value is the one proposed many times before: let citations and the community weigh the value of data, rather than a few, arbitrarily chosen reviewers. Community rating may be subjective and have its biases - favouring established labs, or flashy results - but science is imprecise enough that I can't think of a better metric.

My core conclusion from thinking about scientific currency - that we need to ditch peer-reviewed papers, and replace them with smaller, post-publication-evaluated data bites (in some form) - is not new. Perhaps this idea is my panacea. Yet the route here was fresh: by looking at science as an exchange between labs producing data, and funding agencies providing money, you can see that the fundamental question is how to value data. Regardless of other complaints about publishing - its delays and arbitrariness - trying to connect data to funding via papers is like trying to run an economy by trading oxen for beer.

(Looking over my notes, Smith has some other interesting nuggets that did not fit into the main narrative of this post. He distinguishes between value in use (what a good can do; water has this) and value in exchange (what you can trade it for; e.g. gold is expensive). In science, anatomical studies are often useful, but don't yield high-profile papers. In contrast, many flashy papers get published in Nature and Science, but are often simply wrong.

Smith also distinguishes between the real price (in units of labor) and nominal price (in money) of commodities. These often change with supply and demand, or due to technological innovation. For example, electron microscopy has probably had a stable real and nominal value over the last 20-30 years, and both the real and nominal value of Western blots has cratered due to improved technology. In contrast, the nominal value of imaging has gone up as fluorophores improved, even as the labor necessary to produce images has gone down. This further emphasizes the difficulty in trying to value papers by their inputs.)

Monday, August 13, 2012

Walk Along the Paper Trail: 'Attaboy! Atasoy

On a basic level, feeding (viz. eating) is regulated by opponent pathways: for example, the hormone leptin is anorexigenic (prevents feeding), whereas cannabinoids are orexigenic (cause feeding); in the arcuate nucleus of the hypothalamus, AgRP-expressing neurons are orexigenic, while POMC neurons are anorexigenic. Many of the players in feeding regulation are known, and the current task in the field is tying them together into a coherent whole. Last month, the Sternson lab published a tour de force paper that takes a step forward in this direction, which I cover here.

Opponent pathways in the arcuate nucleus

Two groups of neurons in the arcuate nucleus of the hypothalamus play essential and opposing roles in regulating food intake. POMC-expressing neurons activate anorexigenic pathways, and are stimulated by leptin. The anorexigenic nature of these neurons can be seen from selective ablation of POMC neurons via diphtheria toxin, which causes an increase in body weight and food intake. Furthermore, stimulation of POMC-ChR2 neurons causes a decrease in feeding. Multiple lines of evidence connect POMC neurons to leptin: bath application of leptin depolarizes POMC neurons, and selective leptin-receptor knockouts in POMC neurons cause increases in weight.

Selective ablation of arcuate neurons causes weight changes. c. Ablation of POMC-expressing neurons via diphtheria toxin causes an increase in weight. a. Ablation of AgRP neurons causes starvation.
From Gropp et al., 2005.
In contrast to POMC neurons, AgRP neurons are orexigenic. Slice work has shown that AgRP neurons inhibit POMC neurons via GABA release. AgRP neurons' orexigenic nature has been shown by ablation of AgRP neurons via diphtheria toxin, which causes anorexia, and by stimulation of AgRP-ChR2 neurons, which increases food intake.

Channelrhodopsin stimulation of arcuate neurons changes food intake. a. Stimulation of POMC-ChR2 neurons decreases food intake. e. In contrast, stimulation of AgRP-ChR2 neurons causes an increase in food intake.
From Aponte et al., 2011.
POMC and AgRP neurons project to a few downstream nuclei, but of interest for this paper are AgRP neurons' projections to the paraventricular hypothalamus (PVH) and the parabrachial nucleus (PBN). I don't know much about the PVH, but the PBN is the second relay in the taste circuit. A series of papers from the Palmiter lab has implicated the PBN projection as important: in AgRP diphtheria-toxin mice, you can prevent starvation by implanting a cannula in the PBN that releases GABA agonists.

There are many more players in food intake, including NPY, melanocortin, ghrelin, cannabinoids, dopamine, and insulin. If you'd like to know more, I recommend this solid (but aging) review from Morton. All you need to know for this paper is that AgRP and POMC neurons perform opposing functions in regulating food intake.

Interplay between AgRP and POMC

The main thrust of this paper is trying to understand how AgRP neurons can regulate feeding through their projections, using a variety of transgenic, viral, and optogenetic techniques. This paper has 6 main figures and 15(!?) supplemental figures, so I will only be highlighting the main points.

First, they investigated how AgRP and POMC neurons interact, advancing previous work by using optogenetics. They cut slices from AgRP-ChR2 and POMC-ChR2 mice, and patched these cells to see how they were connected. As reported previously, they found that AgRP neurons have GABAergic projections onto POMC neurons. However, there was no reciprocal POMC->AgRP connection, nor any AgRP->AgRP or POMC->POMC connections.

Since AgRP neurons can inhibit POMC neurons, they wondered whether silencing of POMC neurons alone is able to influence feeding. To silence POMC neurons, they used POMC-hM4D mice. If you are unfamiliar with hM4D, it is an artificially developed GPCR that is activated by a molecule called CNO, and reversibly silences neurons. When they gave the POMC-hM4D mice CNO, the mice did not gain weight like diphtheria toxin mice (or at least not statistically significantly in 8 mice). Statistical significance aside, it appeared that inhibition of POMC alone is not able to drive feeding, and thus AgRP probably works primarily through other pathways.

As a final step to investigate the interplay between AgRP and POMC, they used double-transgenic AgRP-ChR2/POMC-ChR2 mice, and stimulated both groups of neurons simultaneously. These mice increased their food intake, showing: 1. that AgRP activity can dominate POMC activity; and 2. that POMC inhibition is not necessary for increased food intake. From this initial set of experiments, they conclude that AgRP neurons do not work primarily via inhibiting POMC neurons.

Stimulation of both AgRP and POMC neurons leads to an increase in feeding. left. Diagram of activated neurons. i. Pellet intake during light stimulation. j. Food intake increases during stimulation of both AgRP and POMC neurons.
From Atasoy et al., 2012.
Investigating other downstream nuclei

To look at AgRP neurons' effects on the PVH and PBN, they once again used AgRP-ChR2 mice, but instead of implanting the light fibre over the hypothalamus, they put the fibre over the axons in the PVH or PBN. When they stimulated AgRP axons in the PVH, they saw an increase in food intake, showing that AgRP->PVH activity is sufficient to drive feeding. However, when they stimulated the AgRP fibres in the PBN, they did not see an increase in food intake. Thus, of AgRP neurons' three possible targets, they hypothesized that the PVH projection is the most important for food intake.

Stimulation of AgRP fibres in the PVH is sufficient to increase food intake. top. Experimental setup and food intake for PVH stimulation. bottom. Experimental setup and food intake for PBN stimulation.
From Atasoy et al., 2012.
Focus on PVH

Having identified the PVH as important, they homed in on it. First, they explored the AgRP-PVH connection in slices, and found that there is indeed strong inhibitory input from AgRP neurons to the PVH. Then, to see whether PVH inhibition is sufficient to induce feeding, they expressed hM4D throughout the PVH by using the SIM1 promoter (SIM1-hM4D). Upon administration of CNO, these mice gained weight, showing that inhibition of the PVH is sufficient. Since no one would believe a single silencing paradigm, they repeated the experiment using PSAM-GlyR, and saw the same effect. To show that PVH inhibition is necessary, they created AgRP-ChR2/SIM1-ChR2 mice and stimulated both populations simultaneously; this was not able to increase food intake. Thus, PVH activation can prevent AgRP-neuron-induced feeding, and PVH inhibition is necessary for AgRP-neuron-induced feeding.

PVH inhibition is sufficient and necessary for increased food intake. b/c. In SIM1-hM4D mice, CNO administration causes increased feeding. e. Diagram of double stimulation experiment in AgRP-ChR2/SIM1-ChR2 mice. f. Double stimulation does not cause an increase in feeding.
From Atasoy et al., 2012.
The PVH contains multiple types of neurons, and of these, they decided to focus on the oxytocin (OXT)-expressing neurons. They again performed the double stimulation protocol, this time in OXT-ChR2/AgRP-ChR2 mice, and again found that OXT neuron stimulation could prevent AgRP-neuron-induced feeding.

In the final set of experiments, they investigated whether AgRP neurons release neuropeptide Y (NPY) and GABA in the PVH. To do this, they implanted cannulas with pharmacological antagonists for each of these neurotransmitters in the PVH of AgRP-ChR2 mice. Blocking either neurotransmitter decreased the AgRP-ChR2-induced feeding, showing that both neurotransmitters are functional at the AgRP->PVH synapse.

Publishing thoughts

Phew! I told you that was a tour de force. By my count, they used eight(!) transgenic mouse lines and five different viruses. In a single sentence, they nonchalantly mentioned results that might be the starting point for a whole paper: "we have found that food deprivation increases inhibitory synaptic drive onto PVH neurons (Supplementary Fig. 12)."

To be honest, the sheer magnitude of this paper kinda pissed me off, since the results could have come out sooner if the paper had been split in two (yet more evidence of Nature's supplemental figure problem). This paper was received by Nature last September, accepted in May, and published in July; it took ten months for this to become public. Everyone's ok with this?

Scientific thoughts

Given the sheer number of experiments in the paper, I was somewhat disappointed by the two-paragraph discussion. To be fair, this is probably due to the six-page limit (which would explain the terse mention of Supplementary Fig. 12 above, and is yet another reason to dislike journals). For example, as I mentioned in the background, there is evidence that AgRP neurons' GABAergic signaling to the PBN is necessary for normal feeding. However, the PBN gets a single sentence in the discussion: "Finally, AGRP neuron projections targeting the parabrachial nucleus (PBN) in the hindbrain do not directly activate feeding, but instead they restrain visceral malaise that results from AGRP neuron ablation." Those Palmiter papers also investigated the PVH, and found that it was not important in their paradigm, so I would really like to have seen a more thorough exploration of the differences between the papers.

The most intriguing single experiment, to me, is the dual activation of both AgRP and POMC neuron populations, which implies that the orexigenic pathway is able to dominate the anorexigenic one. If I may speculate: when humans talk about satiety, we range from hungry to sated to full. Hunger is a strong feeling, motivated by blood sugar levels (or something), while fullness seems more "visceral," governed by stomach distension. However, satiety is a rather subtle feeling, since it is the default (at least in the developed world). Perhaps the framing of orexigenic vs. anorexigenic pathways is entirely wrong, and the actual opposition is between orexigenic and non-genic pathways. If provoked, we can feed while we're sated, but if we're full, stuffing more food in our face is nauseating. In any case, I look forward to their undoubtedly ongoing experiments looking at POMC neurons' projections, and to seeing how those projections overlap (or not) with AgRP's.

While I know vanishingly little about oxytocin (I leave that to cognitive scientists), in the discussion they note that oxytocin disorders in humans can lead to "insatiable hunger." What I find strange is that the body would transduce a straightforward satiety signal (leptin/cannabinoids) into another hormonal signal, oxytocin; unless, of course, oxytocin here is simply a neurotransmitter, and not an endocrine signal. Of interest to me is that oxytocin is expressed by the glial-like cells of taste buds on the tongue. While these glial-like cells do not have taste receptors themselves (the receptors are in the aptly named receptor cells), it is possible that oxytocin could indirectly modulate taste itself, similar to how leptin and cannabinoids can directly modulate sweetness.

References

Atasoy D, Betley JN, Su HH, & Sternson SM (2012). Deconstruction of a neural circuit for hunger. Nature, 488(7410), 172-177. PMID: 22801496

Monday, August 6, 2012

Is neuroscience a meritocracy?

Meritocratic failure

In a recent article, "Why Elites Fail," Christopher Hayes argues that meritocracies often fail. For example, Hunter College High School, a prestigious public school in NYC, accepts students solely on the basis of an entrance exam. For decades, the meritocratic admissions process meant that the school had a diverse student body: in 1995, 12% of the school was black, and 6% Hispanic. Today, however, with rich parents hiring private tutors for their kids, the student body has become less diverse: 3% black, and 1% Hispanic.

Hayes identifies two keys to meritocracy. First, there must be interindividual differences in ability, skill, or what-have-you. Second, people must be rewarded for their performance: high performers get promoted, while low performers are "punished." If you looked at how families perform between generations, variance in individual ability plus accountability would cause inter-generational mobility: if a parent is exceptional, and their child average, the family would change positions. In theory, the larger the variance in interindividual differences, the larger the mobility should be (I am not sure how to state this formally and correctly, but that is Hayes's argument; one toy formalization is sketched below).
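Here is one possible way to state the claim formally, as a toy simulation: a family's position each generation is the sum of a persistent inherited advantage and a fresh draw of individual ability. The larger the ability variance relative to the inherited component, the weaker the parent-child rank correlation, i.e., the more mobility. All parameters are invented for illustration.

```python
# Toy simulation of Hayes's claim: more variance in individual ability
# (relative to inherited advantage) means more inter-generational mobility.
# All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_families = 100_000
inherited = rng.normal(0, 1, n_families)  # advantage that persists across generations

for ability_sd in (0.5, 1.0, 2.0):
    parent = inherited + rng.normal(0, ability_sd, n_families)  # parent's merit
    child = inherited + rng.normal(0, ability_sd, n_families)   # child's fresh draw
    # Correlation of ranks across generations: 1 = frozen hierarchy, 0 = full mobility
    rank_corr = np.corrcoef(parent.argsort().argsort(),
                            child.argsort().argsort())[0, 1]
    print(f"ability SD = {ability_sd}: parent-child rank correlation = {rank_corr:.2f}")
```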

However, this is not what usually happens. After one round of meritocracy, the winners get to invest in the next round, so their children have advantages. Sometimes the winners, now in authority, even get to choose the winners (or losers) of the next round. The meritocracy breaks down, and becomes an oligarchy. This same basic story has played out in many different fields in the last thirty years, from college admissions ("legacy admissions") to the decrease in inter-generational income mobility in the US since 1970.

After reading the article, I wondered: has the meritocracy failed in neuroscience as well?

How to test scientific meritocracy

The scientific meritocracy can fail in many ways. Funding can go to prestigious labs, regardless of how efficient they are. Nobel laureates can publish shitty papers in high-profile journals. Prizes can go to PIs at "elite" institutions, to reflect how portentous the prize is. (I use "elite" as a shorthand for the top 5-10 neuroscience programs, based on my rankings, and my perception. Exploring what constitutes an elite institution is for another post.)

Since I pretend to be a scientist, I wanted to measure how meritocratic "science" is. One possibility would be to measure whether labs from elite institutions get preferential treatment by journals, but that would require somehow objectively measuring paper quality, and controlling for funding. Instead, I settled on something simpler, and hopefully more objective: looking at the credentials of PIs at elite institutions.

In a meritocracy, you would expect that as you looked farther into the past, the winners would have increasingly diverse backgrounds. Or, looking forward, you would expect that some percentage of talented people who start at the bottom of the hierarchy could work their way up. In terms of PIs at elite institutions, my expectation is that they would almost exclusively have done post-docs at other elite institutions. In contrast, if you looked at where they went to undergrad, I would expect a more diverse set of schools. To see whether this was true, I looked at 30+ PIs at elite institutions.

The test

The baseball writer Rob Neyer has a gimmick where he presents stats for two anonymous players, "Player A" and "Player B." Sometimes the stats would be similar, and the reader would be shocked to find that Player A was an All-Star, while Player B was a "scrub." Sometimes the stats would be dissimilar, but Players A and B would actually be the same player, playing under different conditions. The point of the gimmick was to get readers to look at the stats objectively, without the halo effect of the names.

So in that vein, I present two cohorts of researchers, and their associated institutions:

Cohort A          Cohort B
MIT               Yale
Santa Barbara     Harvard
Stetson           CalTech
Cal State Chico   UChicago
Williams College  Vanderbilt
Cambridge         Harvard
Lawrence          Brown
Vassar            Princeton
Duke              Berkeley
MIT               Berkeley
Harvard           UVA
MIT               MIT
UChicago          UVA
Stanford          Harvard
Bryn Mawr


With all due respect to the universities represented in Cohort A, most people would agree that the schools in Cohort B produce more research. So who are these two cohorts of researchers, and how are they affiliated with these institutions? Both cohorts are professors at elite neuroscience departments, but Cohort A got their bachelor's degrees before 1990, while Cohort B got theirs after 1990:

Fogies             Undergrad         Whippersnappers      Undergrad
Barres, Ben        MIT               Datta, Bob           Yale
Knudsen            Santa Barbara     Wilson, Rachel       Harvard
Newsome, William   Stetson           Ehlers, Mike         CalTech
Moore, Tirin       Cal State Chico   Scott, Kristin       UChicago
Raymond, Jennifer  Williams College  Harvey, Christopher  Vanderbilt
Shatz, Carla       Cambridge         Deisseroth           Harvard
Nicoll, Roger      Lawrence          Dolmetsch, Ricardo   Brown
Huganir, Richard   Vassar            Heiman, Miriam       Princeton
Bear, Mark         Duke              Huberman, Andrew     Berkeley
Julius, David      MIT               Potter, Chris        Berkeley
Malenka, Rob       Harvard           Shuler, Marshall     UVA
Tsien, Richard     MIT               Tye, Kay             MIT
Katz, LC           UChicago          Goosens, Kim         UVA
Callaway, Ed       Stanford          Sabatini, Bernardo   Harvard
Cline, Hollis      Bryn Mawr

(Methods: To select fogie professors, I included professors I recognized by name, or who are HHMI investigators. For the whippersnappers, I used the SFN Young Investigators award listing, and scanned the websites of departments for assistant professors. People educated outside the US were excluded. For each professor, I noted their schools for BS, PhD, and post-doc. For some professors, I could not ascertain their undergrad institution, and excluded them. This is by no means exhaustive, but I only spent a few hours doing this. Full spreadsheet.)

I have two general conclusions from this. First, if you want to be a professor at an elite institution today, you need to have gone to an elite undergrad. The "worst" school represented, UVA, is ranked #25 (for whatever rankings are worth). There are no Wisconsin-Madisons on the list, let alone places like Ohio State. Three steps removed from your PI position, where you did your undergrad is a determining factor for whether you can become a PI. As you move the credential window forward to grad school and post-docs, the credential threshold gets even higher.

Second, and more weakly, I think this shows that science has become less meritocratic over time. At first blush, I thought the fogies' schools were just generally worse. A quick Google, however, revealed that Williams, Vassar, and Bryn Mawr are well regarded liberal arts schools. So while it's not fair to conclude that the older cohort went to worse schools, I think it is fair to say they went to a more diverse set of schools, ones that did not necessarily emphasize research.

Questions I ask myself

Isn't this sample size small? Yes, but then again, there aren't many professors. If I wanted to spend more time, I would quantify rankings of both the PIs' institutions and their undergrad schools. It would also be helpful to look at non-elite institutions' PIs, to see what is happening there.

Don't elite undergrads reflect high SAT scores/intelligence? And science is a g-loaded occupation, so... I would argue that neuroscience is not as g-loaded as other fields like physics or computer science. Once you reach +1 or +2 SD, other factors like work ethic become important. A quick Google shows that elite institutions have a ~100-point SAT score gap over other places, roughly 1 SD. However, there are far fewer elite institutions, and they also have fewer students, so by numbers alone there should be just as many equally smart kids at non-elite institutions as at elite ones, and a much larger group at -1 SD (the arithmetic is sketched below).
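A back-of-envelope version of that head count, with invented enrollment numbers; the only input taken from the post is the roughly 1 SD gap between elite and non-elite student bodies:

```python
# Back-of-envelope count of "+2 SD" students at elite vs. non-elite schools.
# Headcounts are invented for illustration; the ~1 SD gap is from the post.
import math

def frac_above(z: float) -> float:
    """Fraction of a standard normal distribution above z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

elite_students = 100_000       # assumed total undergrads at elite schools
nonelite_students = 2_000_000  # assumed total undergrads everywhere else

# Elite student bodies are centred ~1 SD above the overall mean, so a +2 SD
# student is only +1 SD within the elite pool.
elite_smart = elite_students * frac_above(2.0 - 1.0)
nonelite_smart = nonelite_students * frac_above(2.0)

print(f"elite students above +2 SD:     {elite_smart:,.0f}")
print(f"non-elite students above +2 SD: {nonelite_smart:,.0f}")
# With these made-up numbers, the non-elite pool holds roughly three times
# as many +2 SD students in absolute terms, which is the post's point.
```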

Ok, if the elite kids aren't smarter, could they have some other trait? I can see this. College admissions are increasingly (and insanely) competitive, so elite colleges may screen for competitiveness. If not competitiveness, it could be some other factor. Elite institutions use extracurriculars as differentiators between applicants, and if you believe this Gladwell piece*, the extracurriculars reflect something real.

* What happened to Gladwell? People love to shit on him now, but I thought he did a good job summarizing social science for a wide audience in The Tipping Point and his older magazine articles. But now he's publishing half-baked essays on "slack," and name-checking Tyler Cowen's econ-foodie book?

What about non-Americans? I don't know much about this. My understanding is that the European college system is much more equal in terms of quality, so there is less fighting for spots (with exceptions like the French/Swiss écoles, Oxbridge, etc.). As for Asia, I think a majority of Chinese PIs in the US come from Tsinghua or Peking University, and the Indians from IITs. But American grad schools may not be equipped to identify good applicants from less famous schools.

Isn't this a lot of words for what amounts to glorified googling? Yeah.

Concluding bloviating

As mentioned at the top, society as a whole is becoming less meritocratic. It would be remarkable for science to resist this trend. I'm not sure what, if anything, can be done about it. The PIs at elite institutions are generally smart, motivated people, so from the perspective of funding agencies, why should they care whether the PIs have diverse backgrounds? And the NIH does fund non-elite institutions, just less so, if only to avoid senators asking why Idaho doesn't get any funding.

There is an opportunity for disruption here, in that elite institutions are completely overlooking talented, less-credentialed people. Some places, like Washington University, seem to specialize in being less famous but nearly as productive, in large part by finding people the elite institutions can't be bothered with. Of course, they will still lose status contests to elite institutions in publishing and prizes.

On a personal note, I knew coming to Geneva that I would need to do a second post-doc to get a job back in the US. Seeing the credentials of these people made me realize just how important status and political connections are, relative to simple productivity. Hopefully, the status requirements are much lower one step down the ladder. The post-docs I knew at Duke were able to get positions at good schools like UNC, Baylor, and BU. Whether they would have been able to get those positions had they been post-docs at those schools is another question.