Our Evidence and Gap Maps bring together evidence on homelessness interventions from around the world to highlight where evidence does or doesn’t exist on what works and why. This helps target research investments faster and more strategically.
Search a global evidence base
Explore individual studies in detail
Submit a study to the library
The Effectiveness Map, or ‘what works’ map, captures impact evaluations and effectiveness reviews and highlights the level of confidence we can have in their findings. The Implementation Issues Map, or ‘why things work or not’ map, focuses on the factors that affect the successful implementation of homelessness interventions. Together the two maps capture 1,397 studies on interventions, the largest resource of its kind in the world.
These Evidence and Gap Maps have been created with our partners at The Campbell Collaboration and Heriot-Watt University and they will be updated at regular intervals.
Also known as the ‘what works’ map, the Effectiveness Map contains 800 quantitative impact evaluations and effectiveness reviews of homelessness interventions. It also shows the level of confidence you can have in the findings: high, medium or low. View the map’s report, take a look at its Standards of Evidence, and view the critical appraisal.
Also known as the ‘why things work or not’ map, the Implementation Issues Map contains 597 qualitative process evaluations that examine factors which help or hinder the successful implementation of homelessness interventions. The information is currently displayed in two digital tools, barriers and facilitators; an integrated version will be available in future. Read the Implementation Issues Map Report.
We continue to add new studies as they are identified; if you know of any missing or new evidence that needs adding, please let us know.
Get in touch
The Centre is applying standards of evidence, developed with our partners at The Campbell Collaboration, to each of our tools and maps.
Each study in the map has been rated as high, medium or low for ‘confidence in study findings’. For systematic reviews in the map this rating was made using the revised version of ‘A MeaSurement Tool to Assess systematic Reviews’ (AMSTAR 2). The rating of primary studies was made using a critical appraisal tool based on various approaches to risk of bias assessment.
The two tools, AMSTAR 2 and the primary study critical appraisal tool, assess a range of items regarding study design and reporting. Some of these items are designated as ‘critical’. The overall rating for a study is the lowest rating given on any critical item.
Study design
At least 3 RCTs or 5 other studies with a combined sample size of at least 300
Attrition
High levels of attrition, especially differential attrition between the treatment and comparison groups, reduce the confidence we can have in study findings.
Outcome measure
For the study findings to be usable and meaningful there should be a clear description of the outcome measures, preferably using existing, validated approaches.
Baseline balance
We can have less confidence in study findings if there were significant differences between the treatment and comparison groups at baseline.
Blinding
The absence of blinding of participants and researchers can bias study findings. This is true even in studies where blinding is not feasible.
Power calculations
Power calculations help determine the required sample size. Without them, studies risk being underpowered, with a high likelihood of failing to correctly identify effective programmes.
Description of intervention
A clear description of the intervention is necessary to establish exactly what is being evaluated, so that effectiveness is not attributed to similar, but different, interventions.
Protocol registered before commencement of the review.
Adequacy of the literature search.
Justification for excluding individual studies.
Assessment of risk of bias in the individual studies included in the review.
Appropriateness of meta-analytical methods.
Consideration of risk of bias when interpreting the results of the review.
Assessment of presence and likely impact of publication bias.
PICOS in inclusion criteria
Rationale for included study designs
Duplicate screening
Duplicate data extraction
Adequate description of included studies
Report sources of funding
Risk of bias assessment for meta-analysis
Analysis of heterogeneity
Report conflicts of interest