I want to thank Nature and Heidi Ledford for making the scientific community aware of my plight. It has seemed that I can't win for losing; up to now, I have lost every round. The first Campus Committee on Research Integrity (is that an oxymoron?) thought that my postdoc companion and I were lying. The ORI were the ones who put me on to the statistics, but apparently it did not bother them that the probability that the questionable Coulter count terminal digits were random was on the order of 1.6 × 10^-20 for 42 counts, because the probability for my own counts was 0.047 for 61 counts – just a tad below the canonical p of 0.05. Dr Price, the person in charge at the ORI, complained that it was impossible to obtain controls (which, apparently, he never asked for). However, once we had the Discovery materials in our qui tam case, we were able to analyze, as controls, well over 2,000 Coulter counts from 9 other workers in the lab and from 2 outside labs.
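For readers unfamiliar with it, the terminal-digit test behind those probabilities can be sketched in a few lines. This is a minimal illustration with invented counts, not the actual laboratory data, and it uses a plain chi-square statistic rather than the exact method in our preprint: under honest counting, the last digit of a Coulter count should be close to uniformly distributed over 0–9.

```python
from collections import Counter

def terminal_digit_chi2(counts):
    """Chi-square statistic for uniformity of terminal digits (df = 9).

    A large statistic (above the 5% critical value of 16.92 for 9
    degrees of freedom) suggests the terminal digits are not random.
    """
    digits = [abs(int(c)) % 10 for c in counts]
    expected = len(digits) / 10.0
    freq = Counter(digits)
    return sum((freq.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Hypothetical example: a run in which the digits 3 and 7 dominate.
suspect = [123, 437, 253, 867, 93, 147, 333, 557, 227, 413,
           763, 97, 283, 647, 173, 537, 323, 917, 443, 57]
stat = terminal_digit_chi2(suspect)
print(round(stat, 2))  # far above 16.92, so uniformity is rejected
```

With 42 or 61 real counts, the corresponding tail probability is what yields figures like 1.6 × 10^-20 versus 0.047.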
But the Coulter counts were actually a loss leader; it is the analysis of the colony counts that tells the tale. My husband read them to me – triplicate counts were recorded for each sample in the experiments – and I entered them into a spreadsheet. It did not take him long to notice that, in the questioned experiments but not in the experiments of other workers in the lab, the average of the 3 counts appeared with high frequency as one of the colony counts in the triple. A complete statistical analysis of the masses of data made available to us during Discovery – including the colony counts – is recorded in the preprint that Joel Pitt and I are trying to publish, which is available on both arXiv and FigShare, as listed on my website, www.helenezhill.com. After the qui tam case was closed, both the ORI and the Campus Committee on Research Integrity refused to consider our new analysis of the colony data.
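One plausible formalization of that observation is the following sketch, again with made-up triples rather than the actual lab records; the "mean appears in the triple" test checks whether the rounded average of three colony counts coincides with one of the three recorded counts, a coincidence that should be only occasional in honest data.

```python
def mean_appears_in_triple(triple):
    """True if the rounded mean of three colony counts equals one of them."""
    a, b, c = triple
    return round((a + b + c) / 3.0) in triple

def mid_triple_rate(triples):
    """Fraction of triples whose rounded mean is one of the three counts.

    An unusually high rate across many experiments, relative to other
    workers' data, is the anomaly described above.
    """
    hits = sum(1 for t in triples if mean_appears_in_triple(t))
    return hits / len(triples)

# Hypothetical triplicate colony counts, not the actual records:
triples = [(100, 110, 120), (95, 103, 99), (80, 91, 84), (60, 70, 80)]
print(mid_triple_rate(triples))  # prints 0.75
```

Comparing this rate for the questioned experiments against the rate for control workers is the essence of the colony-count analysis.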
But the question that Ledford's article raises is: why don't I give up? There are a number of reasons. First and foremost, I believe that the scientific literature must be reliable – the public and the funding agencies are counting on that – and papers that contain unreliable information should be retracted. But it goes deeper than that. The system is flawed; the fox is guarding the hen house. The Campus Committees on Research Integrity that reviewed my allegations were composed, in part, of stakeholders: administrators (one was actually a midwife), faculty in other fields who had no knowledge of radiation biology, friends of the respondents, and a lawyer. I think that, after a very cursory review to assess merit, cases such as mine should be referred to a scientifically knowledgeable ad hoc committee outside the institution with no ax to grind. Secondly, there should be a mechanism for both the complainant and the respondent to appeal. No mechanism for appeal was available to me, either at the University level or after the ORI ruling. Thirdly, the judiciary does not understand science (Appeals Court judge Dolores Sloviter states at the beginning of the oral arguments in our appeal, "we're just judges...I never had a science course in my life"). We scientists need to police ourselves. And lastly, journals should be more willing to publish articles that dispute claims made in the literature and are suggestive of misconduct.
If results can't be reproduced, this should be reported to the journal that published them. Sixteen attempts to reproduce experiments reported in 2 publications in Radiation Research (I am a co-author on one of them) failed using the same cell line, and 6 additional attempts with a closely related cell line also failed. Yet this has never been acknowledged, and Dr Howell has continued to cite those papers as recently as January 2013. Posting of raw data will go a long way toward safeguarding scientific integrity. All of the experiments that Pitt and I analyzed are posted on my website.