Settling Disputes In Research

One of the key influences on young people’s behaviour is the boundaries and values that parents express.

Parenting that is too laissez-faire (“I can’t tell you how to live your life, do what you think is best”), or too authoritarian (“Do as I say or else!”), tends not to be as effective as authoritative parenting (“I want to know where you are and who you are with”, “I don’t want you to drink at the party”).

We also know that the attitudes and behaviours of the parents of young people’s friends play a part in protecting against, or raising, the risks around substance misuse.

The Örebro programme (see here for the CAYT analysis of the programme) was developed in Sweden as a way of helping parents set collective boundaries to reduce alcohol use. It brings together the parents of young people who attend the same school; raises their awareness of the harms that alcohol can do to young people, and of why delaying early alcohol use is beneficial; and, perhaps most importantly, asks the parents to collectively set boundaries and expectations for their children.

The journal Addiction has been running an interesting debate about the effectiveness of the programme, after a trial carried out by researchers independent of the developers found that it does not appear to reduce or delay youth drunkenness.

The authors of the original research wrote back to suggest:

Overall, our conclusions suggest the opposite of what Bodin & Strandberg concluded. The ÖPP programme appears to influence changes in youth drunkenness, which is the aim of the programme. In addition, the effect of the programme is seemingly explained by changes in parental attitudes to youth drinking, providing strong support for the validity of the programme theory.

This prompted a further letter in the journal from Bodin and Strandberg, arguing that what they had set out to explore was not whether the programme worked under ‘laboratory’ conditions, but whether it could work in the ‘real world’. They rightly point out that this is one of the major challenges of current prevention research.

To me this sets out a critical issue for organisations like ours to think about: how do we not only pick the right interventions, but also make sure that our implementation of them has the best chance of replicating positive results?

The dangers are clear: delivering a good programme badly damages not only the reputation of the programme, but also the still-fragile reputation of the evidence-based prevention movement. However, programmes that only work when delivered by their developers aren’t of much use to the wider world and have little chance of achieving scale or sustainability.

The Society for Prevention Research – the US big sister organisation to the European Society for Prevention Research that I’m involved with – has set out standards for judging the very best programmes and interventions that include: having been tested at least twice, evaluated in real world conditions, and having clear cost information and monitoring tools.
