BMJ. 2011 Jan 6;342:c7153. doi: 10.1136/bmj.c7153

Table 3. Responses from trialists who had analysed data on a prespecified outcome but not reported them by the time of the primary publication (n=16)

Trialist 09 (category: Bias): “It was just uninteresting and we thought it confusing so we left it out. It didn’t change, so it was a result that we . . . you know, kind of not particularly informative let’s say, and was to us distracting and uninteresting.”
Trialist 12 (category: Bias): “There was no mortality. No mortality at all.” “I had the idea that they would change with the intervention, but they didn’t change and that is why they were never reported. It was the negative result, which is also a result, but we never report on it. Normally I would have published these, I also publish negative data, but we thought that these data would explain why the intervention should work and that these data did not show, so that’s why but normally I would certainly publish negative results.”
Trialist 18 (category: Bias): “We didn’t bother to report it, because it wasn’t really relevant to the question we were asking. That’s a safety issue thing; there was nothing in it so we didn’t bother to report it. It was to keep ethics committee happy. It is not as if we are using a new drug here, it is actually an established one, just an unusual combination, so if we are using new things we report all that sort of stuff, so it’s not that experimental. We didn’t bother to report it, because it wasn’t really relevant to the question we were asking.” “They’re kind of standard things in the bone world and they never show anything unless you have 5000 people, I’m exaggerating, 500 people in each group. In the end there was nothing really in it, I mean there were no differences, it was a short duration study, very small numbers, so actually the chances of finding anything was very small. Even if we found something it was likely to be confounder, there was a statistical chance. I think the protocol was adapted from a kind of a standard one we used for lots of drug trials and some of those bits could have been left out actually.”
Trialist 22 (category: Bias): “The whole study showed that there was nothing in the [intervention]. So the whole study was actually a negative result, so I don’t think the fact that there was no effect prevented me from putting it into the paper. It was either possibly an oversight or possibly something I thought, ‘well this isn’t relevant.’”
Trialist 29 (category: Bias): “When I take a look at the data I see what best advances the story, and if you include too much data the reader doesn’t get the actual important message, so sometimes you get data that is either not significant or doesn’t show anything, and so you, we, just didn’t include that. The fact that something didn’t change doesn’t help to explain the results of the paper.”
Trialist 30 (category: Bias): “When we looked at that data, it actually showed an increase in harm amongst those who got the active treatment, and we ditched it because we weren’t expecting it and we were concerned that the presentation of these data would have an impact on people’s understanding of the study findings. It wasn’t a large increase but it was an increase. I did present the findings on harm at two scientific meetings, with lots of caveats, and we discussed could there be something harmful about this intervention, but the overwhelming feedback that we got from people was that there was very unlikely to be anything harmful about this intervention, and it was on that basis that we didn’t present those findings. The feedback from people was, look, we don’t, there doesn’t appear to be a kind of framework or a mechanism for understanding this association and therefore you know people didn’t have faith that this was a valid finding, a valid association, essentially it might be a chance finding. I was kind of keen to present it, but as a group we took the decision not to put it in the paper. The argument was, look, this intervention appears to help people, but if the paper says it may increase harm, that will, it will, be understood differently by, you know, service providers. So we buried it. I think if I was a member of the public I would be saying ‘what you are promoting this intervention you thought it might harm people—why aren’t you telling people that?’”
Trialist 32 (category: Bias): “If we had found a significant difference in the treatment group we would have reported that, and it certainly would have been something we probably would have been waving the flag about. To be honest, it would have come down to a word limit and we really just cannot afford to report those things, even a sentence used, and often you have a sentence about this, and a sentence about that, and so it doesn’t allow you to discuss the more important findings that were positive or were negative as some of our research tends to be, because I guess it’s a priority of relevance.”
Trialist 34 (category: Bias): “No I think probably, it’s possible, I am looking on the final one, but probably was each time, reduced and reduced from the start to submitting to the journal. It is very limited on numbers, probably we start to . . . it didn’t get accepted so we kind of cut and cut, I believe this is what happened.” (The manuscript went to four journals.)
Trialist 36 (category: Bias): “It’s as dull as ditchwater, it doesn’t really say anything because the outcome wasn’t different, so of course [trial treatment] is going to be more expensive and no more effective so of course it’s not going to have a health economic benefit. Because you have got two treatments that don’t really differ. I just think, we have got to find a different way of . . . so for example I said well can’t we say something about the costs within the groups of those who relapsed and those who didn’t, just so that people get a ball park, but it’s written and he [coauthor—health economist] wants to put it in as it is, and I don’t have a problem with that, it’s rather a sense of I am not sure what it tells anybody.”
Trialist 39 (category: Bias): “We analysed it and there was two patients who had the outcome, you know one in each arm, so we decided the numbers were so small that we didn’t think that adding another row to the table to describe two patients added anything.”
Trialist 41 (category: Bias): “Patients in this particular trial turned out to use very low amounts of drugs. So, there was nothing essentially to compare. The use of other drugs was not an important issue in this population. There was nothing to report. There was no reportable data, no interesting story in the secondary outcome data, and our intention was always to focus on the opiate use not on the other drugs. I did look, we do have data on other drug use, we have collected data as we promised, but essentially there is nothing to report in this data. Patients do not use other drugs heavily. We will present again, I have all the intentions, the data is available for analysis and for presentation if one of my students decide to do some work with this and help me out with this, absolutely it will get published, but I have to pick and choose what I am actually working on.”
Trialist 44 (category: Bias): “We probably looked at it but again it doesn’t happen by magic. So, I can’t imagine that there would be a difference. Why we didn’t? My guess is that we didn’t look at it because that is something that has to be prospectively collected, and so I would assume that we collected it and there was just absolutely no difference, but I don’t recall. I am pretty sure there would not be differences, it would be related to temperature, but what the results were I don’t remember at this point.”
Trialist 49 (category: Bias): “Yes because what happened is, I am left with a study where everything is [non-significant], even though we walked in believing that we would see a difference, and even though we had some preliminary information, you know anecdotal, that there should be a difference, there was no difference. So, it really turned out to be a very negative study. So we did collect that information, and again it’s a non-result, but there are only so many negative results you can put into a paper.”
Trialist 50 (category: Bias): “It didn’t add anything else to the data, it changed but it wasn’t anything that was remarkable, it wasn’t a significant change. If it had been something that either added additional strength to the data, or if it was conflicted, if it turned out it went totally against our data, and was counteractive to what we were saying, yes we would have reported.”
Trialist 54 (category: Delay in writing up of secondary outcomes beyond primary publication): “Yes, we have those data on file, and I am sorry to say that we are writing up so many papers sometimes we do not know what’s in the other papers. It has been analysed, I know, because what I know right now is that all the measurements which would be performed in this study as well as in two other studies were done, very simply because the outcome is very difficult to measure, well it’s very simple to measure, to get the antibodies is very difficult, and we got it from an organisation that gave us just enough to do the measurements. I know the results, saying from what I have in my head the outcome is going up, we have high levels of it, but I am not sure whether it was a significant increase or whether it was significant compared to the control group or the other group as mentioned in the study.”
Trialist 56 (category: Bias): “I actually disagree that this outcome is important, but that was probably a more pragmatic aspect of making sure that our protocol was funded, because I think some reviewers might have said, ‘wow you are not measuring this outcome!’ That said, there is a vast amount of literature showing that it’s of completely no relevance but it was a practical decision to make sure we got money. Once we conducted the study and reflected on our results more we just didn’t think it had that much validity in telling us very much about the condition. So for the sake of brevity we didn’t report that. I didn’t expect there would be much of a difference, and our results show that there wasn’t much of a difference.”