Sometimes when you mark papers about papers, you have to read the latter, continued: now reading a fine paper by Du, Leten & Vanhaverbeke, “Managing open innovation projects with science-based and market-based partners”, in the journal Research Policy. Like most papers in this esteemed journal, it includes a clear statement of hypotheses, which are then subjected to empirical tests. In Du et al., one hypothesis is that open innovation projects will perform better than other projects; a second is that formal project management helps the performance of market-based open innovation; a third is that formal project management hurts the performance of science-based open innovation (i.e., open innovation where the firm’s partner is not another firm but, say, a university). All three hypotheses are backed up with the requisite discussion of previous research, but frankly the second and third could have gone any way – positive effect, negative effect, or no effect; an effect of the same sign for both groups but bigger for one of them; whatever. The hypotheses actually offered are plausible, but there is neither strong theory nor clear prior research to say that it should be these hypotheses and not some others.
It is possible that the authors had these hypotheses clearly in mind before they started working with the data – this study, like most studies in the social sciences, gives us no way of knowing. It is also possible that the authors are HARKing – Hypothesizing After the Results are Known. This means examining some data, finding a relationship between variables, and then formulating a hypothesis for which the observed relationship would provide a test. Trouble is, doing it in this order (discovery first, hypothesis second) invalidates the statistical test: the test’s error rates are calculated on the assumption that the hypothesis was fixed before anyone looked at the data, and choosing a hypothesis because the data already favour it inflates the false-positive rate. It’s fine to do exploratory research, and fine to test hypotheses, but they are different things. So HARKing is on everybody’s list of research malpractices – see for instance this diagram from The Clinical Psychologists Bookshelf:
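The point about HARKing invalidating the test can be made concrete with a small simulation. This is a minimal sketch (not anything from the paper under discussion): every variable below is pure noise, so the null hypothesis is true for every pair. An honest researcher tests one pre-specified pair and is fooled about 5% of the time, as advertised. A HARKing researcher scans all the pairs, notices the strongest correlation, and then “tests” that pair as if it had been hypothesized in advance – and is fooled far more often. The cutoff 0.361 is the conventional critical value of |r| for p < .05 (two-sided) with a sample size of 30.

```python
import random
import statistics

def correlation(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
n, k, runs = 30, 10, 500      # sample size, number of variables, simulated studies
threshold = 0.361             # critical |r| for p < .05, two-sided, n = 30

honest_hits = harked_hits = 0
for _ in range(runs):
    # All k variables are independent noise: no real relationships exist.
    data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

    # Honest test: the pair was chosen before looking at the data.
    if abs(correlation(data[0], data[1])) > threshold:
        honest_hits += 1

    # HARKed test: scan all pairs, then report the strongest one
    # as if it had been the hypothesis all along.
    best = max(abs(correlation(data[i], data[j]))
               for i in range(k) for j in range(i + 1, k))
    if best > threshold:
        harked_hits += 1

print(f"honest false-positive rate: {honest_hits / runs:.2f}")  # close to the nominal 0.05
print(f"HARKed false-positive rate: {harked_hits / runs:.2f}")  # much higher
```

Nothing shady happens at the level of any single test; the damage is done entirely by the selection step, which the reported p-value silently ignores.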
The location of HARKing in that diagram is instructive: right alongside publication bias, between hypothesis formulation and P-hacking. Journals have their styles, and just as most styles prefer tests that show statistical significance, many like to have formal statements of hypotheses (or “propositions” as some business & management journals like to say, which seems to be a way of saying “hypotheses” without sounding too sciency). So while the presentation in this paper is consistent with HARKing, I am hesitant to lay that at the authors’ door. I suspect that journal publishers like to have the story told in this way.