
  • 1 July 2016 to 1 April 2017
  • Project No: 326
  • Funding round: FR 11

Many of the methodological practices currently used within trials are evidence-free. For example, there is little evidence for the ways we choose to recruit people to a study. As researchers funded by public money, we have a responsibility to prevent research waste wherever possible.

Measurement of 'fidelity' is one example where evidence of best practice is lacking. In a trial, practitioners are sometimes trained to change their own or their patients' behaviours as part of an 'intervention' that is then evaluated. Researchers should also assess whether the intervention is delivered or received as intended. This is known as 'implementation fidelity' and is important because if researchers have evidence that these changes are being implemented as planned, they can be confident that the study findings are due to the intervention under investigation. Additionally, if the intervention proves to be beneficial, they can feel assured that it would bring similar benefits elsewhere.

Primary care is an important setting for assessing fidelity because many studies involve whole practices and multiple practitioners, who are likely to deliver interventions in different ways with different patients. There is a lack of evidence about the best ways to measure fidelity: should it be by self-report or by more objective observations, when should it be measured, for how long, and what level is acceptable?

We propose a study to map the strategies that have been and are being used to monitor or measure what actually happens when interventions are implemented. We will conduct a systematic review of published papers, study reports and guidance to identify best practice and evidence gaps that need to be addressed. Alongside the review, we will form a 'fidelity working group' comprising patients, staff and researchers who have been involved in primary care trials, to inform the review and advise on fruitful next steps.

Amount awarded: £23,710

Projects by themes

We have grouped the projects in this document under the five SPCR themes.

Evidence synthesis working group

The collaboration will conduct 18 high-impact systematic reviews across four workstreams.