Evidence-based programmes in schools: are they a realistic solution for drug and alcohol prevention?

On 26 March, Mentor will be hosting a seminar: ‘What works’ in supporting young people’s development – making evidence useful for schools and practitioners. Mentor is very supportive of the use of evidence-based programmes, but to say that their use in the UK for drug and alcohol prevention has been limited is an understatement.

There are different types of evidence (see the presentation below), but some of the strongest comes from randomised controlled trials (RCTs). Allocating participants randomly between the intervention and control groups makes it more likely that any improvement is due to the intervention and not to some other factor.
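To see the logic of randomisation in miniature, here is a toy simulation (a sketch in Python; every number in it is invented for illustration, not real trial data). Each simulated pupil has a background risk factor that affects the outcome; because allocation is a coin flip, that factor balances out across the two arms on average, and a simple difference in means recovers the intervention’s effect.

```python
import random

random.seed(1)

# Toy RCT simulation -- all figures are hypothetical, for illustration only.
N = 10_000
TRUE_EFFECT = -2.0  # assumed reduction in a 'risk score' from the intervention

treated, control = [], []
for _ in range(N):
    background = random.gauss(0, 3)        # confounder: varies pupil to pupil
    score = 10 + background + random.gauss(0, 1)
    if random.random() < 0.5:              # the random allocation
        treated.append(score + TRUE_EFFECT)
    else:
        control.append(score)

mean = lambda xs: sum(xs) / len(xs)
print(f"estimated effect: {mean(treated) - mean(control):.2f} "
      f"(true effect: {TRUE_EFFECT})")
```

Allocate instead by something non-random, say letting the most motivated schools opt in, and the background factor would no longer balance out, biasing the same comparison.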

Interest in the use of RCTs in education has recently been stirred by Ben Goldacre’s paper, Building Evidence into Education. The applicability of RCTs in classrooms is questioned by some (see the comments thread under this Guardian article), but elsewhere (in the US in particular) the approach is widely used.

Nick Axford and Louise Morpeth of the Social Research Unit at Dartington have also recently written an interesting paper (abstract here) addressing some criticisms that have been made of the promotion of evidence-based programmes within children’s services.

They start by defining a programme as “a discrete, organized package of practices, spelled out in guidance – sometimes called a manual or protocol – that explains what should be delivered to whom, when, where and how.” A programme can only be called ‘evidence-based’ when “it has been evaluated robustly, typically by randomized controlled trial (RCT) or quasi-experimental design (QED), and found unequivocally to have a positive effect on one or more relevant child outcomes.”

The first set of arguments they explore concerns the gathering of evidence. Although RCTs are widely seen as the ‘gold standard’ for programme evidence, different trials of the same programme may not agree. There may also be limitations in the design of trials, or in the reporting of their findings, which mean that the strength of the evidence base is interpreted differently depending on the judgement of the commentator.

The authors draw attention to the CONSORT and TREND guidelines, checklists designed to ensure that randomised and non-randomised trials are reported in a standard way, so that issues such as the number of people dropping out are clear to the reader.

It has been noted that evaluations led by programme developers are generally more likely to find significant effects. This could be because of bias or because there are greater resources (skills, motivation, understanding or budget) to deliver programmes well. This leads into another valid concern about evidence-based programmes: will they work in ‘ordinary’ settings without intensive support from the research team? Understanding this requires some information about how studies were carried out, for example how schools were chosen and how much support was given to them. It is entirely possible, however, for programmes to make the leap from experimental study to real-world implementation.

A more fundamental criticism is that evidence-based programmes focus too much on the immediate causes of poor outcomes for children and young people, and neglect the underlying structural factors. This can of course be true, but, as the authors point out, this criticism arguably applies to most frontline services. While tackling inequality may ultimately be a better solution for reducing many social problems, if other services are still to be provided it makes sense to ensure that these are as good as they can be. Why waste resources on educational innovations without a sensible logic model, or anti-drug adverts which have counter-productive effects?

Another concern is that a handful of programmes, mainly from other countries and backed by developers with the resources to invest in large-scale evaluations, risk elbowing out home-grown programmes better suited to particular groups. So far, only a minuscule fraction of services delivered to children and young people in the UK are evidence-based programmes, so this has not yet come to pass, but it is a concern worth bearing in mind.

Similarly, it is frequently asked whether evidence-based programmes designed and tested elsewhere will transfer across cultures, or whether they can be adapted to local conditions. From studies importing programmes into other countries, the short answer seems to be that sometimes this works and sometimes it doesn’t. This is why it would be unwise to implement a programme at scale without first testing it in the context in which it will be used.

It is important to strike a balance: valuing local knowledge and resources, avoiding wasted effort in reinventing the wheel, and recognising that not all initiatives are valuable. Sometimes, with the best of intentions, harm can be done. For example, bringing together young people with a track record of anti-social behaviour or drug use for unstructured activity has been found to risk reinforcing those norms within the group. Without evaluation, it may not be clear whether the positive benefits outweigh this risk.

It is true, however, that new and promising programmes may find it difficult to provide meaningful evidence of their impact when resources are limited. Initiatives such as Project Oracle and CAYT seek to support organisations to do this.

On the one hand, researchers and academics often express concern that evidence-based programmes may not be implemented with fidelity in the real world, and may therefore lose their effectiveness. On the other hand, professionals sometimes feel that having to implement structured programmes faithfully, without adaptation, risks de-professionalising them.

While programmes vary in how restrictive their guidelines are, in general the expertise and skills of the professional are still important. Some programmes are relatively simple, with a clear logic model to explain how inputs lead to improved outcomes. For others, the structure of the programme is more complex, and those implementing it would benefit from knowing which are the ‘core’ essential elements and which they can change.

The final criticism is the cost of the programmes. This cost should be weighed against the potential financial benefits of even a modest impact. For example, the cost of crime relating to young people’s drug use has been estimated at just under £100m per year, and the associated health-care costs at £4.3m per year. The cost of underage drinking and excessive drinking by young adults is also significant: one estimate, from 2003, is that up to 35% of all accident and emergency attendances and ambulance costs are alcohol-related.
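To make the scale of that argument concrete, here is a back-of-the-envelope sketch in Python. The £100m and £4.3m figures are the estimates quoted above; the assumed 1% impact and £50-per-pupil programme cost are hypothetical, chosen only to show the arithmetic.

```python
# Back-of-the-envelope comparison. The crime and health-care figures are the
# estimates quoted in the text; the impact and per-pupil cost are hypothetical.
annual_crime_cost = 100_000_000   # just under £100m per year (quoted estimate)
annual_health_cost = 4_300_000    # £4.3m per year (quoted estimate)

assumed_impact = 0.01             # hypothetical: a modest 1% reduction
annual_saving = (annual_crime_cost + annual_health_cost) * assumed_impact

cost_per_pupil = 50               # hypothetical programme cost
pupils_covered = annual_saving / cost_per_pupil

print(f"A 1% reduction is worth ~£{annual_saving:,.0f} a year, "
      f"enough to cover a £{cost_per_pupil}-per-pupil programme "
      f"for {pupils_covered:,.0f} pupils.")
```

Even under these cautious assumptions, a modest, reliably delivered effect could justify a non-trivial programme budget.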
