Part 2 of the essay series Research and Knowledge Accumulation
Finding a way in
In my previous essay, I proposed that the idea of knowledge accumulation was a key missing concept in our understanding of research.
How knowledge accumulation works is itself an entire topic — hence this essay series. A good way in is to start with a nearby concept: research productivity.
Researchers do research. What determines how productive they are? What determines the total research output of the system? By starting to break down our concept of research productivity, we’ll find a natural place for the idea of knowledge accumulation.
Simple models of research productivity
One straightforward way to understand research productivity is as a function from money to knowledge. Money in, knowledge out.
Of course, there are intermediate steps. Money buys researcher time. Researcher time yields quality research effort. Quality research effort yields knowledge acquired.
When we start to break down research productivity like this, it becomes possible to model where we’re having problems. Funding for research should buy researcher time. But some of that funding may be lost to bureaucratic overhead. Researcher time should yield quality research effort. But researchers may underperform relative to given standards, the standards themselves may not be high enough, and there may be inadequate overall checks to ensure quality.
Taking these into account, we get a model of research productivity as a chain of stages, each with its own losses: funding, minus bureaucratic overhead, buys researcher time; researcher time, minus losses from underperformance, weak standards, and weak checks, yields quality research effort and, through it, knowledge acquired.
We might also add a factor to account for the difficulty of discovering new things in the relevant domain: the more difficult the domain, the larger the proportional loss.
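The chain of stages and losses can be made concrete with a small sketch. This is only an illustration, not the essay's own formalism: the multiplicative form of the losses, the function name, and all the numbers below are assumptions chosen for clarity.

```python
def knowledge_produced(funding,
                       overhead_loss,      # fraction of funding lost to bureaucracy
                       performance_loss,   # researchers underperforming vs. standards
                       standards_loss,     # the standards themselves set too low
                       checks_loss,        # inadequate checks on quality
                       domain_difficulty): # harder domains => larger proportional loss
    """Toy pipeline: each stage passes on a fraction of what it receives."""
    researcher_time = funding * (1 - overhead_loss)
    quality_effort = (researcher_time
                      * (1 - performance_loss)
                      * (1 - standards_loss)
                      * (1 - checks_loss))
    return quality_effort * (1 - domain_difficulty)

# Example: 100 units of funding, moderate losses at every stage.
out = knowledge_produced(100, 0.2, 0.1, 0.1, 0.1, 0.3)
print(out)  # roughly 40.8 units of knowledge reach the end of the pipeline
```

A sketch like this makes the essay's point about diagnosis tangible: each loss parameter is a distinct place to intervene, which is why the solution types below sort into separate bins.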
A model like this lets us track knowledge production through stages and makes it easier to think about where problems are occurring and what sorts of solutions we might need. The solution types suggested by this model are: First, more money. Second, lower bureaucratic overhead. Third, higher performance from researchers, relative to given standards. Fourth, better standards. Fifth, better checks.
Putting the model to work
The model helps us parse the ongoing conversations about problems with scientific and academic research. These conversations diagnose a large number of problems and suggest a large number of solutions.
Among the problems: p-hacking, overfitting, spreadsheet errors, underpowered studies, poor experiment design, poor oversight, pressure to publish, intense competition, misaligned incentives, spotty peer review, publication bias, insufficient data sharing, insufficient sharing of methods, replication attempts without adequate context, high costs of replications, too much time on grant applications, insufficient funding, and the inherent difficulty of discovery.
Among the solutions: require lower p-values, require pre-registration, encourage replications, publish negative results, fund “uninteresting” work, teach better statistics, encourage skepticism, require data sharing, institute checklists, have journals enforce their standards, improve documentation, standardize methods, require pre-publication of data, improve supervision, improve experimental design, institute measures other than publishing for career advancement, tighten standards on peer review, streamline grant application processes, and increase funding.
These proposals can now be sorted. Teaching better statistics, encouraging skepticism, instituting checklists, improving supervision, and improving experimental design, for instance, fall under improving researcher performance. Requiring lower p-values, requiring pre-registration, improving documentation, and standardizing methods fall under raising standards. Encouraging replications, publishing negative results, funding “uninteresting” work, requiring data sharing, having journals enforce their standards, and tightening standards on peer review fall under improving the checks.
Pots, blankets, and research
While our model allows us to account for most of the discourse on the topic, fans of science may have a sense that something is missing. From what has been said above, there is as yet little to distinguish researchers from craftspeople engaged in the production of more mundane things like handmade pots or handmade blankets.
Imagine a craftsperson making a single handmade pot or handmade blanket. If there were demand, we might imagine employing a very large number of such craftspeople, all sitting in chairs at tables, making pots or blankets. It would then become necessary to institute quality control. There would be standard practices for how each piece was to be made, as well as spot checks to ensure that quality was sufficiently high. If we became worried about quality, we might think about how to train or motivate the workers, as well as whether we should improve the standards or run more rigorous checks.
This is a partial analogy for modern research. Present-day researchers are engaged in the production of papers. There are standard research practices that they are supposed to follow, as well as a variety of checks to ensure quality.
The analogy has limits, though. Research, unlike pots or blankets, needs to be crafted in a way that takes into account and fits in with other research.
For this reason, it is better to analogize research to making puzzle pieces or parts of a map. If the truth is the completed puzzle, research is crafting and fitting together the puzzle pieces. If the truth is a full map, research is creating sketches of different parts of the landscape and stitching them together.
Put differently, while there may be such a thing as the logic of accumulation for handmade pots and handmade blankets, it is quite simple: you stack them. The logic of accumulation for knowledge is more complex.
Accounting for accumulation
Happily, it is easy to modify our earlier model of research productivity to account for the problem of knowledge accumulation. We add one more stage with its own loss term: knowledge acquired, minus losses from failure to accumulate, yields knowledge accumulated.
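The extension can be sketched the same way as the earlier stages: one more multiplicative loss applied after knowledge is acquired. As before, the functional form and the numbers are assumptions for illustration only.

```python
def knowledge_accumulated(knowledge_acquired, accumulation_loss):
    """Final stage: knowledge that is acquired but never fitted together
    with other research is lost before it reaches the shared stock."""
    return knowledge_acquired * (1 - accumulation_loss)

# Even high per-piece quality can be mostly wasted if the pieces don't fit:
stock = knowledge_accumulated(40.8, 0.9)  # suppose 90% fails to accumulate
print(stock)  # only a small fraction survives as accumulated knowledge
```

The point of the extra term is that it can dominate the others: driving every earlier loss to zero still produces little accumulated knowledge if this final factor is large.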
On this model, it still makes sense to think in terms of quality control. Bureaucratic overhead is still a problem. Researchers can still be made to perform better, standards can be raised, checks can be improved. But all of these things must take into account whether the knowledge will accumulate at the end.
Putting this in terms of puzzles and maps: You could imagine people making beautiful, exquisitely handcrafted puzzle pieces. You could imagine standards for craftsmanship as well as effective quality control measures. But it may not matter if the puzzle pieces aren’t made to fit in with each other. Similarly, you could imagine explorers going into a forest, making sketches, and then coming back. There could be standards for sketches and plenty of external checks. But it may not matter if the sketches aren’t being stitched together.
The real product of research is not individually held pieces of knowledge. Knowledge production does not end with individual discovery. What matters for science and for the world is whether the knowledge can be combined in a way that yields accumulation.
A separate logic
Of course, one might say that knowledge is supposed to accumulate, and that this is part of how research should be done. That’s why researchers read each other’s papers, and that’s part of what being a good researcher is. One might then propose that “loss from knowledge failing to accumulate” should actually be tallied under the other types of losses.
Simple models like the one above are designed for use, and so the real question is a practical one: which things deserve to be broken out into their own terms and which don’t? If (1) knowledge does not accumulate automatically or easily, and (2) there is a distinct, understandable logic of knowledge accumulation, then “knowledge accumulation” should be broken out into its own term, as we have done.
In the next essay, we will dive into the concept of knowledge accumulation and learn that knowledge does not accumulate automatically or easily. In the following essay, we will begin to explore the distinct and counterintuitive logic of knowledge accumulation.