July 09, 2007
Competition in science: too much of a good thing.
In the post that triggered this month's offering from me, political blogger Digby is actually talking about the corporate-welfare state known as this here USofA, but it doesn't take much to aim the same screed at the modern practice of science (my substitutions in square brackets):
The entire enterprise is designed as an exercise in conformity in which those most eager to reinforce the [existing infrastructure and prevailing dogmas] rise to the top and enforce [these things] even more rigidly. (Which is understandable. Having been through the "boot-camp" that beat every original thought and idea out of their heads until they don't even know they once had them, the next generation of [PIs] are always ready to give it even harder to those coming up behind them, if only to justify their own acquiescence to such humiliation.) And anyone who complains is reminded of that inspiring war cry of American liberty: "you can always quit."
Except, of course, most of us really can't and they know it. You can't go without health insurance and you can't afford to take a chance on a new job that might not work out because there just isn't much room to fail in our society. It takes a very brave person to put their own and their family's well being at risk when the consequences of failure are so high. Most people make the rational decision to stick with the soul destroying job, answer to a boss that treats them like a lackey and live a life of quiet desperation because to do otherwise would be irresponsible.
The same objections might be raised to this characterization of modern science as to Digby's original description of the USA: it's hyperbole and overstatement; it ignores the many bosses/PIs who treat their employees well, the many people who do live their dreams, and so on and on. That's all true, and I'd probably just eat a bullet if it weren't. But still, given the usual caveats about generalizations, I think Digby's remarks hold up pretty well whichever target you choose. Digby goes on to say:
Doesn't that work out nicely for the corporate owners of America, eh?
But the thing is, in science, I don't believe it works out well for anyone. Being a postdoc myself, that's the first point of view I see; and the life of quiet desperation is not actually an option for postdocs. Sooner or later, you pretty much have to either make a jump for the bottom rung of the faculty ladder (at which point the majority are rejected), or leave academia. You're expected, by the time you have a couple of postdocs under your belt, to have scrambled into a faculty slot (or tried and failed to do so, in which case you don't count, loser) -- there's something wrong with you if you haven't.
There used to be a position called something like "research officer", which was a bit like an assistant/tech position but required a PhD and, accordingly, paid better. Those positions were good for postdocs who decided they didn't actually want to be "promoted" away from the bench -- and as far as I can tell, they have been phased out almost completely, because most PIs would rather pay a technician than fork out for the extra skill without also having a "tenure" carrot to dangle. (Of course, it's actually the "you can always quit" stick that most of 'em are unwilling to be without.) You might, after a postdoc or two, get a position as a research assistant/tech, if you're willing to take yet another pay cut -- sometimes people do that in order to spend more time with young families and keep that all-important health insurance. (You'll have to deal with the perception that you're only doing it because you're not good enough to go on in academia proper; this may or may not hurt your tender feelings, but it will make a lot of PIs reluctant to hire you.) And, well, that's about it -- your other options are outside of academic research.
In 2003, among S&E doctorate degree holders who received their degree 4–6 years previously, 19.8% were in tenure-track or tenured positions at 4-year institutions of higher education (engineering 16.3%; life sciences 18.0%; physical sciences 16.7%; social sciences 30.8%).
The share of recent doctorate holders hired into full-time faculty positions fell from 74% to 44% from 1972 to 2003. At research universities the decline was from 60% to 31%. Conversely, the overall share of recent S&E doctorate holders who reported being in postdoc positions rose from 13% to 34% overall and from 22% to 48% at research universities.
At research universities, faculty-level jobs lacking the possibility of tenure have risen from 55% of new hires in 1989 to 70% in 2003.
So much for the cannon-fodder; what about the brass? Surely the system works to the advantage of the "corporate owners" in science -- PIs and up? Don't they get the best product at the lowest price, a benefit which naturally accrues to the public whose taxes are funding them? In a nutshell: no.
First of all, "ain't competition grand?" is the sweatshop owner's credo, and while scientists (especially postdocs) are not working in cramped, sweltering, dangerous third-world factories, they are being squeezed pretty hard in some ways. In 2005, the Sigma Xi research society published the results of an extensive survey of US postdocs which found that postdoc salaries did not compare well with overall US census data for the comparable age group (28-37; see graph). This gets worse if you consider that the average self-reported working week was 51 hours, for an hourly wage of about $14. Bear in mind that the average time spent in a doctoral degree is 8 years and the average age at degree award is 33; the opportunity cost of the postdoc path is immense and essentially unrecoverable. Factor in a roughly 1 in 5 chance of making tenure, as above, and it's not hard to see why surveyed postdocs reported job dissatisfaction at twice the rate of science/engineering PhDs in general (22% vs 11%) (figures from the factsheet and survey again). Complaints ranged from conflict with mentors to low remuneration; frustrated expectation -- the considerable likelihood of never obtaining a PI position -- was identified as a potential root cause of much of the dissatisfaction.
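The ~$14 figure above is easy to sanity-check as back-of-the-envelope arithmetic. The annual salary used below is my own assumption for illustration (a plausible 2005-era US postdoc salary); the survey itself is the source only of the 51-hour working week:

```python
# Back-of-the-envelope check of the ~$14/hour figure.
# annual_salary is an assumed value, not a number from the survey.
annual_salary = 38_000   # assumed 2005-era US postdoc salary, USD
hours_per_week = 51      # self-reported average from the Sigma Xi survey
weeks_per_year = 52

hourly_wage = annual_salary / (hours_per_week * weeks_per_year)
print(f"Implied hourly wage: ${hourly_wage:.2f}")  # prints "Implied hourly wage: $14.33"
```

Any salary in the mid-to-high $30k range gives an hourly rate in the $13-$15 band once the 51-hour week is factored in.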
Pause here to consider the plight of -- to strain the metaphor -- the sweatshop overseers, PIs. A modern PI is expected to be a researcher, a manager and team leader, and a teacher all in one. That's three jobs crammed into one worklife. The selection process for advancement focuses obsessively on research metrics (specifically, a track record in winning competitive grants), and neither management nor teaching is formally taught in the majority of graduate and post-graduate programs. So you have people being expected to excel at two jobs for which they have only whatever on-the-job training they've been lucky enough to pick up, while being judged on their success at a third job in an ever-more-competitive environment. They are not much better off than their underlings in many ways, and the same effects of pressure on performance might be expected to obtain.
Secondly, what the "owners" in a competitive system get is not cream skimmed off the top but whatever "rises", and that's not always so wholesome. A PubMed search on research misconduct returns more than 3100 hits: about 1000 published in the last five years, about 1700 in the ten years before that, and about 400 between 1979 and 1990 (though PubMed records date back to the 1960s). It's important to note that misconduct does not only refer to famous, out-and-out fraudsters like Hwang Woo-Suk. The HHS Office of Research Integrity defines misconduct according to what's known as the FFP rule: Fabrication (making data up), Falsification (altering data) and Plagiarism, but evidence suggests that these most serious offenses represent only the tip of the iceberg.
A recent survey asked more than 3400 NIH-funded scientists about a variety of unethical behaviours, ranging from FFP to inadequate record-keeping. While fewer than 2% of respondents admitted to FFP-level offences, more than 10% admitted to each of: overlooking others' use of flawed data or questionable interpretation of data; changing the design, methodology or results of a study in response to pressure from a funding source; withholding details of methods or results in papers or proposals; inadequate or inappropriate research design; dropping observations or data points on the basis of a "gut feeling that they were inaccurate"; and inadequate record keeping. Fully one in three admitted to having engaged in at least one of the ten worst behaviours (so judged by six ORI compliance officers) in the last three years. A series of focus-group interviews with working scientists identified a wide range of similar non-FFP behaviours that the authors dubbed "normal misbehaviour" -- low-key, everyday misdemeanors that study author Brian Martinson describes as "more corrosive than explosive", but no less damaging for that.
These "normal misbehaviours" were explicitly linked to job pressure, the familiar "publish or perish" motto:
The pressure to produce... is associated with a number of behaviors that do not quite reach the threshold of FFP but nevertheless are regarded by scientists as misconduct. The problems mentioned by members of our focus groups included: manipulation of the review system, (improper) control of research by funders, difficulties in assigning authorship, exploitation of junior colleagues, unreported conflicts of interest, the theft of ideas from conference papers and grant proposals, publishing the same thing twice (or more), withholding of data, and ignoring teaching responsibilities.
In a recent Nature misconduct special, Jim Giles put it this way:
Take one prestigious laboratory. Add some pressing grant deadlines and a dash of apprehension about whether the applications will succeed. Throw in an overworked lab head, a gang of competitive postdocs and some shoddy record-keeping. Finally, insert a cynical scientist with a feeling that he or she is owed glory. It sounds hellish, but elements of this workplace will be familiar to many researchers. And that's worrying, as such an environment is, according to sociologists, the most fertile breeding ground for research misconduct.
Research misconduct has also been linked to perceived unfair treatment: researchers, like anyone, are more likely to cheat the system the more they feel that they have been unjustly treated by that system. Martinson et al. found correlations between the likelihood of unethical behaviours (as described above) and perceptions of both procedural ("the game is rigged", "the old boys' network controls everything") and distributive ("too much is expected of me", "I don't get the respect or remuneration I deserve") injustice.
My point in all of this is that sweatshops rarely produce quality products -- they focus on quantity and churn out crap. A reasonable level of fair competition might select the best and brightest, but unfettered competition is rarely fair, and unfair competition is a poor selection method since it favors those who benefit from the unfairness. Under pressure and in the face of perceived injustice, people turn to ways of coping that do not improve the quality of their work. They find ways to manipulate the reward system; they cut corners, they cheat, they slack off; they turn resentful and throw sand in the gears. This may not have grave long-term consequences for the body of scientific knowledge, since science is largely self-correcting: errors that matter will eventually be found out. Nonetheless, all of these "sweatshop factors" have immediate and obvious consequences for the efficiency of the scientific endeavour.
Brian Martinson, quoted in a number of interviews about his work, says:
Competition and privatization are the great American way, but we've not stopped to ask ourselves whether we may have engendered a level of competition in science that has some dysfunctional consequences.
I believe we have done exactly that.
This work is licensed under a Creative Commons Attribution 3.0 License.
Posted by Bill Hooker at 06:05 AM