
Dr Faraz Mughal, General Practitioner and NIHR In-Practice Fellow, Research Institute for Primary Care and Health Sciences, Keele University

@farazhmughal 

In the words of Christopher Whitty, Chief Scientific Adviser at the Department of Health and Social Care: '…way ahead of any other academic contribution to policy-making is rigorous and unbiased synthesis of current knowledge' (Whitty, 2015). Whitty challenges the academic community, stating that if we '…could do one thing to improve the pathway from research to policy, it would be to improve the status, quality, and availability of good synthesis' (Whitty, 2015).

Kamal Mahtani and colleagues define a complex systematic review as:

'A systematic review, performed by a multidisciplinary team, consisting of multiple components, large amounts of data from different sources or different perspectives, collectively contributing more than would be expected from their individual contributions, the individual components not being easily coordinated, analysed or disentangled' (Mahtani et al., 2018).

The new University of Oxford Complex Reviews course I attended in June 2018 explored some of the broader and more complex forms of systematic review methodology. The course aims to give participants a better understanding of the ways we can provide evidence-based conclusions that inform complex decision making for patients, policymakers, clinicians, and researchers. Some of us may be more familiar with traditional systematic review methodology, but the course also explored whether the field has kept up with changes in modern medicine and health policy.

Below, I mention some of the complex review methodologies that were introduced to us on the Complex Reviews course as possible solutions:

We learnt about and explored the role of regulatory unpublished Clinical Study Reports (CSRs) of drug trials from Tom Jefferson, who was involved with a pivotal Cochrane review on Tamiflu that used this type of data. We compared CSR data with the corresponding peer-reviewed journal publications. The contrast in the data reported and the levels of reporting bias is concerning, and raises the questions: why are these CSRs not more readily available and used, and should we be doing more reviews of CSRs to better inform our decision making?

In her guest lecture, Carol Lefebvre proposed that 'grey literature is the new black' and urged us to use these unpublished data in our reviews. Although extracting data and appraising quality can be challenging, she reminded us of the value that including grey literature adds to a systematic review. She concluded that the potential of computer science for automated searching and screening promises an exciting and bright future for systematic reviews.

We were introduced to diagnostic test accuracy (DTA) systematic reviews, which allow us to ask clinical questions about specific investigations and add to our knowledge of which tests may help a patient at which point in their health journey. Prognostic systematic reviews allow us, for example, to study the value of particular prognostic factors, improving our understanding of disease course.
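To illustrate the kind of quantity a DTA review synthesises, here is a minimal sketch of computing sensitivity and specificity from a single study's 2×2 table; the counts are invented for a hypothetical test and are not from any real study:

```python
# Minimal sketch: sensitivity and specificity from one study's 2x2 table.
# tp/fp/fn/tn counts are invented for illustration only.
tp, fp, fn, tn = 90, 10, 15, 85  # hypothetical index test vs reference standard

sensitivity = tp / (tp + fn)  # proportion of diseased patients the test detects
specificity = tn / (tn + fp)  # proportion of disease-free patients the test clears

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

A DTA review extracts a pair of estimates like these from each included study and pools them jointly, since sensitivity and specificity trade off against each other across thresholds.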

Where a meta-analysis of published aggregate data has not reliably answered one's research question, an individual patient data (IPD) meta-analysis may be appropriate to deliver a more accurate answer; it is known as the 'gold standard' of systematic reviews (Stewart and Parmar, 1993). An IPD meta-analysis refers to the collection, checking, and re-analysis of the data recorded for each patient in each study (Stewart and Parmar, 1993). It offers several added benefits, one being the standardisation of eligibility criteria, participant characteristics, and outcomes across trials; a limitation, however, is the time and cost needed to contact study authors, collect and check data, and generate a consistent dataset before analysis (Ioannidis et al., 2002).
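The re-analysis step can be sketched as a simple two-stage IPD meta-analysis: each study's patient-level data are re-analysed to give a study estimate, and the estimates are then pooled with inverse-variance (fixed-effect) weights. All patient data below are invented for illustration:

```python
# Minimal sketch of a two-stage IPD meta-analysis. Stage 1 re-analyses each
# study's raw patient data; stage 2 pools the study-level mean differences
# with inverse-variance (fixed-effect) weights. Data are invented.
from statistics import mean, variance

# Hypothetical patient-level outcomes (e.g. symptom scores) per study
studies = {
    "Study A": {"treatment": [4.0, 5.0, 6.0, 5.5], "control": [6.0, 7.0, 6.5, 7.5]},
    "Study B": {"treatment": [3.0, 4.5, 4.0], "control": [5.0, 5.5, 6.0]},
}

def study_effect(arm_t, arm_c):
    """Mean difference and its variance, computed from one study's IPD."""
    md = mean(arm_t) - mean(arm_c)
    var_md = variance(arm_t) / len(arm_t) + variance(arm_c) / len(arm_c)
    return md, var_md

# Stage 1: re-analyse each study's patient-level data
effects = [study_effect(d["treatment"], d["control"]) for d in studies.values()]

# Stage 2: inverse-variance weighted pooled estimate
weights = [1.0 / v for _, v in effects]
pooled = sum(w * md for (md, _), w in zip(effects, weights)) / sum(weights)
print(f"pooled mean difference = {pooled:.2f}")
```

Having the raw data is what makes stage 1 possible: outcomes can be re-derived to a common definition across trials before anything is pooled, which an aggregate-data review cannot do.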

With the rising number of qualitative studies published on diverse healthcare topics, we were shown ways to synthesise qualitative data to draw out themes from the growing literature. This evidence synthesis supports policymakers in decision making and in improving accessibility of a rich evidence source.

The Complex Reviews course was stimulating and interactive, and provided an arena to discuss critically, through complex review methodology, how we can bridge the research-to-policy gap. We learnt important evidence synthesis skills applicable to all primary care researchers. In answer to Whitty's challenge to the academic community, conducting more complex systematic reviews of pressing health research questions is one way to improve the quality and availability of evidence synthesis.


Declarations: I am a salaried GP in Birmingham. I am supported by an NIHR In-Practice Fellowship. I am the RCGP Clinical Fellow in Mental Health at the Clinical Innovation and Research Centre.

Acknowledgements: I am grateful to Kamal Mahtani for comments on an earlier draft and to the NIHR School for Primary Care Research Evidence Synthesis Working Group in supporting me to train in the field of Complex Reviews at the Centre for Evidence Based Medicine, University of Oxford.


References:

IOANNIDIS, J. P., ROSENBERG, P. S., GOEDERT, J. J. & O'BRIEN, T. R. 2002. Commentary: meta-analysis of individual participants' data in genetic epidemiology. Am J Epidemiol, 156, 204-10.

MAHTANI, K. R., JEFFERSON, T., HENEGHAN, C., NUNAN, D. & ARONSON, J. K. 2018. What is a 'complex systematic review'? Criteria, definition, and examples. BMJ Evid Based Med, 23, 127-130.

STEWART, L. A. & PARMAR, M. K. 1993. Meta-analysis of the literature or of individual patient data: is there a difference? Lancet, 341, 418-22.

WHITTY, C. J. 2015. What makes an academic paper useful for health policy? BMC Med, 13, 301.