March 4, 2019
Dr Howard White
This week we are launching the second edition of the Centre’s evidence and gap map of studies of the effectiveness of interventions. This edition adds 34 studies – found on completing our search – bringing the total to 260. We will continue to update the map and publish another version within a year.
More importantly, this map includes what is called critical appraisal of the included studies. This blog explains what is meant by that. Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness and relevance in a particular context.
Explore the map for yourself here:
When I first got involved with communicating evidence from systematic reviews I was very surprised by some of the things those engaged in such communication were saying. Summaries for decision-makers were full of expressions like ‘uncertain findings’ and ‘weak recommendations’. None of this seemed likely to encourage decision-makers to listen to what you have to say!
But there is a good reason for speaking like this. As proponents of evidence-based policy we’d like there to be strong evidence for the findings we communicate to decision-makers. So an important step in reviewing evidence is to assess the quality of that evidence. ‘Quality’ is a rather value-laden abbreviation for the more correct term ‘confidence in study findings’. Research teams may encounter several issues which undermine how confident we can be in the study findings. Studies of interventions for those experiencing or at risk of homelessness are particularly prone to these problems.
A first obvious issue is that of blinding. In clinical trials of drugs, participants do not know (are blinded to) whether they are getting the real medicine or a placebo. Ideally the person giving the treatment doesn’t know either, nor does the person collecting the outcome data or the person analysing those data. For social interventions – such as those for people experiencing homelessness – it is clearly not possible to blind those receiving or administering the intervention. In some cases it may be possible to blind those collecting outcome data – for example, in a CBT intervention for residents of a shelter, hospital or prison. It is also possible to blind the researcher analysing the data, but that is very rarely done.
It may seem unfair to mark down study quality for not blinding when blinding is simply not possible. I used to think that, but have changed my mind for two reasons. First, it is not quality we are assessing; it is confidence in study findings. And, second, failure to blind does introduce sources of bias: from practitioners failing to comply with a random assignment rule for individuals or families they think really need the intervention, to researchers using well-known data mining techniques (dropping observations, rejigging variable and model definitions, and so on) to get the ‘right result’. These sources of bias are well documented: studies show larger effects being found in unblinded studies compared to blinded ones.
The second major issue for the studies in the map is attrition; that is, people who are lost to the study. Attrition causes a particular problem if it is ‘differential attrition’, that is, greater in one group than the other. It is not surprising that attrition is high for many of these interventions: homelessness can make it very difficult for people to engage, and the intervention may not have been fit for purpose, or may not have addressed someone’s most pressing need, leading to disengagement. If the intervention is a lot of hassle to take part in, or even physically demanding or painful, then people might drop out, so attrition is greater in the treatment group than the control.
But more commonly attrition is higher in the control group. Differential attrition is also likely because there is less regular contact with the control group, and they are not receiving the intervention which, if successful, might improve their situation. This is especially so for those in unstable housing, since they are likely to move and be difficult to trace. And attrition is likely to be high overall in populations with a high incidence of mental health issues and substance abuse. Having said that, a significant minority of studies in the map do manage to achieve acceptable levels of attrition. The Centre will be looking at why some research teams have managed this – and others have not – so as to produce guidance for its own research and that of others in the sector.
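To make the mechanism concrete, here is a minimal simulation sketch of how differential, outcome-related attrition can bias an estimated effect. It is not based on any study in the map; all the numbers are illustrative assumptions.

```python
# Illustrative sketch only: how differential, outcome-related attrition
# can bias an effect estimate. All numbers are made-up assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True situation: the intervention has no effect on the outcome.
treatment = rng.integers(0, 2, n)   # 1 = treatment, 0 = control
outcome = rng.normal(0.0, 1.0, n)   # same distribution in both groups

# Control participants with worse outcomes are more likely to be lost
# to follow-up (e.g. harder to trace because of unstable housing).
drop_prob = np.where((treatment == 0) & (outcome < 0), 0.5, 0.1)
retained = rng.random(n) > drop_prob

naive_effect = (outcome[retained & (treatment == 1)].mean()
                - outcome[retained & (treatment == 0)].mean())
true_effect = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

print(f"True effect (no attrition):        {true_effect:+.2f}")   # close to zero
print(f"Estimated effect after attrition:  {naive_effect:+.2f}")  # spuriously negative
```

In this sketch the intervention looks harmful purely because the control participants doing worst were the ones lost to follow-up, even though the true effect is zero.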
A final issue is sample size. Some of the studies have very small samples. This is a problem mostly because small studies are less likely to find that the programme works when it actually does. To address this, research teams should undertake and report power calculations, which determine the sample size required for their study. Only a minority of studies in our map discuss statistical power and the sample size needed for their study.
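As a rough illustration of what such a power calculation involves, here is a minimal sketch using the statsmodels Python library. The effect size, power and significance level are illustrative assumptions, not values recommended by the Centre.

```python
# Minimal power calculation sketch for a two-arm trial (illustrative only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs: a standardised effect size of 0.3, 80% power,
# and a 5% significance level (all illustrative, not recommendations).
n_per_arm = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)

print(f"Required sample size per arm: {n_per_arm:.0f}")  # roughly 175 per arm
```

The point is simply that detecting small or moderate effects requires samples that are often larger than intuition suggests.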
I haven’t raised the issue of study design. That’s because overall it is good, with a large share of randomised controlled trials. The vast majority of the studies in the map are from North America, both the United States and Canada. There are very few indeed from the UK, and those mostly from London. But the North American experience shows there are no practical constraints to RCTs in this sector which cannot be overcome. We hope the map will help others learn from that experience. And that brings me to my final point.
I started out by saying that we do critical appraisal to assess how confident we can be in study findings. But it can have another role too, which is to provide a standard for researchers to work toward in conducting their research. They don’t want their work to be branded ‘low quality’. Having transparent criteria for critical appraisal makes that less likely, as researchers know the standards to be met. So that will lead to better studies. And at the Centre we firmly believe that better studies will inform better policies.