To the Editor: Professor Bonora has written a critique looking back at his career in diabetes research since the 1970s, and we respond by also looking back at our careers in clinical endocrinology (Prof Doi) and clinical epidemiology (Prof Doi and Dr Abdulmajeed). Prof Bonora starts by flagging the proliferation of papers that are repetitive, redundant and of little relevance to the reader [1]. He goes on to attribute this to the contribution of meta-analyses to such research waste, even though the situation is no better for ‘primary’ research, given that 85% of all research funding is actually wasted through inappropriate research questions, faulty study design, flawed execution, irrelevant endpoints, poor reporting and/or non-publication [2-4]. He then observes that, in the past, meta-analyses were virtually non-existent (implying that they were initially well regulated and subsequently became dysregulated) [1]. The reality is that meta-analysis was formalised relatively recently, by Gene Glass in 1977 [5, 6], updated by DerSimonian and Laird in 1986 [7] and updated again by Doi et al in 2015 [8, 9]. There can therefore be no expectation that, in the field of diabetes, there would be many studies reporting a meta-analysis before the 1990s. We agree that the publication rate for meta-analyses has increased, but the same is true of study designs that collect primary data, and many of these publications similarly add little to existing ones on the same topic. Far from being a problem with meta-analyses, then, such waste arises because those tasked with research in diabetes, especially clinicians, are not well trained in clinical epidemiology, the science behind good clinical research and evidence-based clinical decision making.
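
For readers unfamiliar with this methodological lineage, the essential differences between the estimators can be sketched as follows (a simplified outline in our own notation, not that of the cited papers). With study effects $y_i$ and within-study variances $v_i$ ($i = 1, \ldots, k$), the classical inverse-variance (fixed-effect) estimator is

$$\hat{\theta}_{\mathrm{IV}} = \frac{\sum_i w_i y_i}{\sum_i w_i}, \qquad w_i = 1/v_i .$$

DerSimonian and Laird retained this form but replaced the weights with $w_i^{*} = 1/(v_i + \hat{\tau}^2)$, where $\hat{\tau}^2$ is the between-study variance estimated from Cochran's $Q$:

$$\hat{\tau}^2 = \max\!\left(0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right).$$

The IVhet model of Doi et al keeps the inverse-variance weights for the point estimate but propagates heterogeneity into its variance:

$$\operatorname{Var}\!\left(\hat{\theta}_{\mathrm{IVhet}}\right) = \sum_i \left(\frac{w_i}{\sum_j w_j}\right)^{2} \left(v_i + \hat{\tau}^2\right).$$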

A large part of the problem flagged by Prof Bonora perhaps lies with clinical training (which has only recently been catching up with best practices in clinical epidemiology). Additionally, since clinicians are also data custodians, the idea that data alone generates good clinical research has fuelled the current state of affairs. If such a researcher is unable to apply the principles and methods of clinical epidemiology to conduct, appraise or apply clinical research for the purpose of preventing, diagnosing or treating diabetes, or improving care, in their patients, then their research output will end up as research waste. This was flagged in the 1990s by the late Doug Altman, who wrote in a paper titled ‘The scandal of poor medical research’ that ‘Put simply, much poor research arises because researchers feel compelled for career reasons to carry out research that they are ill equipped to perform, and nobody stops them. Regardless of whether a doctor intends to pursue a career in research, he or she is usually expected to carry out some research with the aim of publishing several papers’ [10]. The problem extends beyond clinicians to peer reviewers and editors, who serve as gatekeepers of research yet may not understand recent advances in clinical epidemiology that have dramatically changed the face of clinical research. These include methods of meta-analysis, propensity scores, instrumental variables, competing risks, marginal structural models, generalised linear models, avoidance of bias, bootstrapping, missing data analyses and more, which go hand in hand with the key requirements of relevant research questions, good data and the need for clinicians to do research. Additionally, many peer reviewers and first-line editors are not seasoned researchers, and their decision making may be influenced by institutional or author reputation [11], which may not align with key skills in clinical epidemiology [12]. This, of course, extends to the review and publication of meta-analyses.

Scientist, analyst, novelist

No one can deny that properly conducted meta-analyses are the highest level of evidence in evidence-based medicine [13] and are therefore instrumental for guidelines and standards of care and, as Prof Bonora says, are the basis for statements issued by scientific societies, national and international medicines agencies and the WHO. We agree that such studies exist because other evidence (be it experimental or observational) has been published beforehand; published studies are the essential data source for meta-analysis. But why should this evidence be treated differently from data collected for observational and experimental studies? The implication drawn from Prof Bonora’s paper [1] is that the ‘scientist’ in clinical research is defined by data access and collation, and this is far from reality. To take a parallel with clinical practice: perhaps only nurses and allied health practitioners should be called clinicians, because doctors largely make decisions (diagnostic, management and prognostic) and do not really care for the patient directly at the bedside; they do not take temperatures, administer medicines or even perform phlebotomy, so are they not really clinicians? Should they be labelled instead as ‘clinical analysts’? This is just as absurd for research, because the real science behind clinical research and evidence-based decision making lies not in data collation (usually done by IT specialists and extracted from electronic systems) but in the decision making around study design, safeguards against bias and best practices in data analysis and interpretation; altogether, the science called clinical epidemiology.

While data is essential for all research, it does not and should not define the scientist, just as the delivery of medicines to a patient does not and should not define a clinician. Yes, the meta-analysis itself ranks more highly than the papers that reported the data it contains, but authorship of a good meta-analysis is based on the same skill set required of a clinical scientist, and this skill set does not surface unless the author has been trained in evidence synthesis methods; it has been suggested that all clinical scientists therefore need to be trained in the methodology of evidence synthesis [14] and perform at least one such synthesis [15] as a primary author who understands its methodology. This would formally embed their future work in the context of existing evidence and facilitate learning of clinical epidemiology skills [16]. Prof Bonora’s emphasis on ‘people who contribute to the raw data analysis’ [1] seems misguided, because whether the data is raw or not, the clinical epidemiology skills need to be there; what Prof Bonora presumably meant was ‘people who contribute to the raw data collection’. If there is any exploitation or parasitism, it is of the latter group, since data collection is often delegated to junior researchers or research students who also do the analyses and find the gaps in knowledge through a literature review (‘novelists’), and who thus contribute in a major way to the science but hold minor positions in the eventual authorship list. In our view, the real problem is that ‘non-scientist clinicians’ (those who lack proper training in clinical epidemiology) are unable to produce output with robust designs or methods and therefore tend to produce poorly conducted repetitive investigations that contribute to research waste.

Explosion of ‘novelists’

Science is cumulative and should be conducted in the proper context; this is an essential attribute of the ‘scientist’ [17]. The introduction of formal methodology for research synthesis should have modified Prof Bonora’s views about the ‘scientist’, because it has resulted in a profound change in our thinking about the outcomes of scientific research [16]. We should now view primary research as a contribution towards the accumulation of evidence on, rather than a means towards the conclusive answer to, a scientific problem [18, 19]. Therefore, what Prof Bonora calls a ‘novelist’ [1] actually occupies the pinnacle of the research scientist’s role, which is to help define the place of ongoing research work and its contribution to the advancement of understanding of the topic under study. The ‘novelist’ is thus the scientist who has the capability (based on expertise in both research science [clinical epidemiology] and the content area) to provide researchers with sufficient information to assess what contribution any new results can make to the totality of information, and thus to permit reliable interpretation of the significance of new research [20, 21].

All of this goes against the idea put forward by Prof Bonora that this critical part of science belongs to clinicians who are ‘novelists’ but not ‘scientists’ [1]. He gives examples of multiple reviews on the metabolic syndrome and cardiovascular disease, or on non-alcoholic fatty liver disease (NAFLD) and cardiovascular disease [1], but far from being examples of ‘novelists’ at work, these are simply examples of research waste of the same type seen in primary research. The real tragedy here is that these authors are not ‘novelists’ narrating the scientific achievements of others, as Prof Bonora says, but rather non-scientists who are unable to interpret and synthesise existing research and therefore carry out repetitive studies. The whole purpose of the scientist is lost when a seasoned clinician can see only ‘linguistic acrobatics’ and a plurality of publications as the end goal of a review or synthesis of the scientific literature.

To claim that syntheses (qualitative or quantitative) are welcomed by editors simply because they attract many citations, or that editors and publishers are interested in citations mainly because they increase the impact factor, and therefore the reputation, of their journals, is to miss the point entirely. In mid-2005, The Lancet reported that ‘bad research involves not only research conducted inappropriately, but also unnecessary research, research which is done but remains unpublished, and research which is published but not in a way that justifies its existence or its relevance’ [22]. They gave the example of aprotinin to reduce perioperative blood loss: 64 trials investigating the effectiveness of aprotinin were published between 1987 and 2002, but its effectiveness had been clearly established by the 12th trial in 1992. The 52 trials that followed were unnecessary and unethical, and wasted resources, because the scientists involved failed, in the scientific interest, to review the extant literature that would have set the context for each subsequent trial. This is why The Lancet announced in 2005 that ‘From August, 2005, we will require authors of clinical trials submitted to The Lancet to include a clear summary of previous research findings, and to explain how their trial’s findings affect this summary. The relation between existing and new evidence should be illustrated by direct reference to an existing systematic review and meta-analysis. When a systematic review or meta-analysis does not exist, authors are encouraged to do their own’ [22]. This is clearly at odds with the claim Prof Bonora makes about editors and journals [1].
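
The aprotinin story illustrates what a cumulative meta-analysis does: after each new trial, the evidence to date is re-pooled, making it visible when a question has been settled. The following is a minimal Python sketch of this idea using invented effect sizes (not the actual aprotinin data), with simple inverse-variance pooling and an unadjusted 95% CI criterion; formal sequential methods would additionally correct for repeated testing.

import numpy as np

# Illustrative cumulative inverse-variance meta-analysis on invented data.
# y holds hypothetical log risk ratios from successive trials (NOT the
# real aprotinin trials); v holds their within-trial variances.
rng = np.random.default_rng(seed=1)
k = 20
true_log_rr = np.log(0.7)                 # assumed true effect (a benefit)
v = rng.uniform(0.02, 0.15, size=k)       # hypothetical variances
y = rng.normal(true_log_rr, np.sqrt(v))   # hypothetical trial results

w = 1.0 / v                               # inverse-variance weights
for i in range(1, k + 1):
    pooled = np.sum(w[:i] * y[:i]) / np.sum(w[:i])   # pooled log RR so far
    se = np.sqrt(1.0 / np.sum(w[:i]))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    settled = hi < 0                      # 95% CI wholly below RR = 1
    print(f"after trial {i:2d}: RR {np.exp(pooled):.2f} "
          f"({np.exp(lo):.2f}-{np.exp(hi):.2f})"
          + ("  <- effect established" if settled else ""))

Run on real trial data in chronological order, such an analysis shows the confidence interval excluding no effect from some trial onwards; every trial published after that point answers a question that has already been answered.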

Next, Prof Bonora suggests that synthesising the evidence in a review harms the ‘intellectual maturation of investigators, particularly early career investigators seeking to establish independence’ and that they must therefore only ‘try to address unanswered questions with their original methodologies’ [1]. The implication is that there is no need to understand previous research on the subject, or to pass that understanding on. This completely contradicts the call for an end to research waste and will only foster a system in which primary research contributes little or nothing towards resolving what is unknown.

Proposals

Prof Bonora sums up by suggesting strategies to mitigate the invasion of meta-analyses, including a proposal that puts a cap on publishing evidence syntheses [1]. We would label this a very dangerous strategy, and one which would go against the very spirit of science by capping the dissemination of scientific understanding on a topic. Our suggestion for journals and editors would be very different. We would propose that a meta-analysis on a diabetes-related topic submitted to a journal must meet several criteria: (1) it must address a relevant research question based on a gap in the knowledge of an area; (2) it must use robust methods in epidemiology and biostatistics and include both a content expert and a methodologist on the author list; (3) if previous meta-analyses on the same topic exist, the need for a new one must be justified, and the previous meta-analyses must be referenced and discussed; and (4) it must be reviewed by a methodologist.

The first point is where the whole process fails: how do we know when there is no longer a gap? At present we have no way of knowing, but we have recently been awarded a grant by the National Priorities Research Program in Qatar to try to solve this [23]. Our programme of work seeks to define when we can say that a meta-analysis is an ‘exit meta-analysis’, i.e. one that requires no further primary studies on the topic, which of course also means no further updates to the meta-analysis. We have not yet worked out what would define an exit meta-analysis, but once the project is completed this would be a major step towards reducing research waste, both from primary research designs and from meta-analytical designs.

Conclusion

We conclude that scientific progress in clinical research is defined not by the type of design a paper has but by the quality of the science behind it, which in turn is defined by an author’s knowledge of clinical epidemiology. Clinicians who reach consultant level without proper training in clinical epidemiology are not automatically experts in the research process, and we find it rather problematic that in many academic health systems in which we have worked it is implicitly assumed that they are, as is assumed in Prof Bonora’s classification of the ‘scientist’ as distinct from the ‘analyst’ and the ‘novelist’. These roles are inseparable, and in terms of clinical research the ‘scientist-only’, as defined by Prof Bonora, ranks lowest compared with the ‘scientist-analyst’ or ‘scientist-novelist’. This is because clinical research is not basic science research, in which laboratory experiments are conducted to investigate little-understood processes, and we are surprised that a seasoned clinician has used language commonly attributed to basic scientists when discussing clinical research. Finally, and most importantly, a new generation of clinical scientists, peer reviewers, editors and science-policy practitioners would benefit from an increased understanding of the methodologies and interpretation of evidence synthesis [16].