Chapter 7: Management and evaluation of the evidence
217. The requirement to have a methodology in place for the management and evaluation of evidence that is understood and agreed by the members of any review is fundamental. It ensures a shared understanding of what evidence is to be evaluated and how. Most methodologies will give consideration to the quantity, quality, form and weight or impact of each piece of evidence. Different disciplines may interpret evidence in different ways, so it is important to have clarity of approach.
218. Failure to manage evidence effectively in this way may result in a lack of agreement within a review and, more widely, doubts over the credibility of its findings.
The Mesh Review
219. Given the large and diverse number of interests represented by the membership of the Mesh Review, discussions should have taken place on what evidence should be considered and how that evidence should be evaluated. The common denominators which should have provided the framework for the methodology were the remit and the terms of reference.
220. Members of the Mesh Review had mixed recollections of the approach taken towards methodology. Some said that there was no agreement on methodology at the beginning of the process, although it was discussed. Others believed that the remit was the only document on methodology the group was provided with. A few perceived a “change” in methodology between the Interim and Final Reports.
221. As noted in earlier chapters, there did not appear to be any agreement on what kind of review this was. One member saw it as a review for the “community” but acknowledged that it was a “mix”. She recognised that there were broader interests too.
222. Similarly, other members of the Mesh Review saw this as a clinical review. Such a review may measure numbers of successful outcomes or complications from treatments and procedures. Such methodologies are among the most rigorous within the discipline of public health and would be easily recognisable by the clinical representatives and professional organisations.
223. One member saw the approach as “Purely scientific. Sometimes science and presenting evidence scientists would accept and be able to deal with is not going to answer all the questions.”
224. This approach would not take cognisance of matters such as the effect of complications and how these impacted upon and affected someone’s life. In other words, the methodology used needed to take account not only of quantitative but also qualitative evidence.
225. This was further complicated by the fact that no single methodological style was applied throughout either of the Reports. Most of the chapters had different authors. As a result, the Final Report lacked coherence in content, style and emphasis. Some of the chapters were very data-intensive, making them potentially challenging to read and understand for anyone not familiar with the statistical methodology adopted in those parts of the Report.
“You had to understand confidence intervals and I don’t know how many people there understood confidence intervals.”
“I think if an ordinary member of the public was looking at that stuff I think they would have had even difficulties with terminology.”
226. Chapter 3, which principally described the personal experiences of patients, may have been an attempt to remedy the perceived inaccessibility of the scientific data marshalled elsewhere in the Final Report. Some members of the Mesh Review recognised this dynamic, commenting that it was a mistake to try to turn Chapter 3 into “a bit of science” and that it should instead have been used to illustrate patient experiences. This may have gone some way towards portraying the impact of transvaginal mesh implants on the everyday life of a single patient. Another member, with a clear understanding of methodological approaches, echoed this, saying:
“I felt that there needed to be qualitative information because I felt that this side was not addressed adequately but it is very difficult to argue against randomized controlled trials and to me there was a group of women who were desperate for someone to listen and pay attention to their needs and understand them.”
227. It was suggested that there needed to be an ethnographic study which would have been much more powerful and scientifically rigorous than having Chapter 3 in the format in which it was presented. Another member of the Review dismissed this suggestion on the basis that it would “take too long”. The same member who proposed the ethnography felt that “the individual who was doing the research was very against qualitative research.”
228. Such studies can take time but, given the duration of the Review, this probably would have been achievable and may have presented what one member termed a more “socially cohesive result”.
229. Agreement on how to evaluate evidence would also have provided some clarity on what evidence should have been considered and what fell outwith the methodological criteria.
230. The chapter authors did not apply research criteria uniformly throughout the Report. Some authors took complete control and ownership of their particular chapter. This can be seen in Chapter 4, for example, where the author, Dr Rachel Wood, also provided a Plain English version. This was made available on her organisation’s website.
231. Otherwise, there appeared to have been a lack of consistency across the Review with regard to what could be included and what fell outwith the methodological criteria.
232. This issue was most apparent in the disagreement over an article in the journal Nature that was not included in the data analysis. The stated reason for its omission was that it did not meet the predefined methodological criteria.
233. Prior to the Interim Report, the question of what materials were to be included rested with the evidence analyst, Phil Mackie. A Government official clarified that, “It hadn’t been the practice to circulate the papers that were going to be included in the systematic review to all members.”
234. This may have been workable up to the publication of the Interim Report, but following its publication the process for consideration of evidence seems to have completely broken down. Despite the petitioners’ repeated attempts to have what they regarded as pertinent materials shared with the whole group, this did not happen, nor was this lack of agreement minuted. Pressure to publish a Final Report, combined with the lack of consensus on the clinicians’ chapter, seemed to close down opportunities to review further materials.
235. In any review the membership is likely to consist of people with a range of skills and experience, some of which may be pertinent to the subject matter of the review. There may however be people coming to the review with little or no experience in the subject matter, or certainly no detailed knowledge of the technical, legal or medical issues which might arise in a review. It is important therefore in any review that some consideration should be given at the outset to the nature of any written materials being considered by the review group and whether any support is required in understanding some of the more technical aspects which are contained therein.
236. The credibility of any published report requires that, whilst its findings may not always command agreement, the methods by which the review has reached those findings are clearly and consistently applied. We recommend that complex, technical information be presented in a way that can be understood by the range of readership likely to have an interest in the subject under review.
We recommend that a methodology to evaluate evidence should be understood and agreed by all members of a review.