The case for randomised trials in policy development

One of the more exciting new developments in Australian media has been the launch of The Mandarin - a news service dedicated to in-depth coverage of the Australian public sector and policy making. I recently sat down with one of their reporters, David Donaldson, for a chat about using randomised trials to help guide better policy development. Here's a summary:

The case for randomised trials in policy development, The Mandarin, 10 September

Governments should use more randomised trials in policy development, according to federal Labor frontbencher and former economics professor Andrew Leigh.

Randomised trials are used extensively in the private sector — “you are having randomised trials done on you every time you enter a supermarket or every time you use Google”, Leigh told The Mandarin at his Parliament House office recently.

New South Wales has been conducting randomised trials with letters asking people to pay fines and tax, among other things, building on the work of the British government’s Behavioural Insights Unit.

A recent NSW trial found the number of citizens paying overdue land tax jumped from 27% to 39% after greater personalisation and a statement that “8/10 people pay their land tax on time” were introduced into legal notices. Other trials have tested six or eight possible alternatives in the layout of websites, for example, allowing exact measurements of how customers responded to the inclusion of a photo, a logo or different text.

Leigh, Labor’s shadow assistant treasurer, says the use of randomised policy trials has been gaining traction in the United States, the United Kingdom and Sweden.

“Basically, testing new policies the way in which we evaluate new pharmaceuticals, with the notion that the real problem in policy evaluation is working out the counterfactual — what would’ve happened if the policy hadn’t been put into place. And in the case of new drugs, we do that by tossing a coin — heads you’re in the treatment group, tails you’re in the control group,” he said.

With many health treatments researchers use double-blind techniques, so neither the participants nor doctors know which are the treatment and control groups, Leigh adds. You can’t do that with a job training program — people know whether or not they’re getting a job training program — “but we’ve got enough research now out of economics showing that evaluations produced that way are far more reliable than evaluations which are produced by, for example, observing people who opt-in to job training and those who don’t, who would’ve had different employment trajectories regardless.

“Because if you really want to do a job training program, you probably have a different degree of motivation than someone who really doesn’t want to do one.”

It’s useful to think about a hierarchy of evidence in evaluations, Leigh argues.

“It might be that you can’t do a randomised trial, but in that case you want to not immediately fall back on the approach of asking recipients of the program whether or not they liked it, which is sadly how some evaluations work,” he said. “Instead, you want to think about a hierarchy in which randomised trials are at the top, then natural experiments, then before and after studies, or comparisons at a single point in time with what’s going on in another place. Then down the very bottom of that, observing a single group.

“The way I like to describe it to people is that if you do a pilot, you’ve got one of the lowest levels of rigour for estimating what would have happened if you hadn’t implemented the program. A randomised trial is just a pilot with a control group: you pick twice as many people at the outset, toss a coin, and then you compare those who got and didn’t get the program — but it’s not much more expensive to run than a pilot.”
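Leigh’s coin-toss description translates directly into a simulation. The sketch below is purely illustrative — the participant count and success rates are invented for the example, not drawn from any trial discussed here — but it shows the mechanics: flip a coin for each participant, then compare outcome rates between the treatment and control groups to estimate the program’s effect.

```python
import random

random.seed(42)

def run_trial(n_participants, control_rate, treated_rate):
    """Leigh's 'pilot with a control group': toss a coin for each
    participant, record a yes/no outcome at each group's (hypothetical)
    success probability, then compare the two groups' outcome rates."""
    treatment, control = [], []
    for _ in range(n_participants):
        if random.random() < 0.5:  # heads: treatment group
            treatment.append(random.random() < treated_rate)
        else:                      # tails: control group
            control.append(random.random() < control_rate)
    # Estimated effect: difference in success rates between groups.
    return sum(treatment) / len(treatment) - sum(control) / len(control)

# With enough participants, the estimate converges on the true
# difference between the two assumed rates (0.39 - 0.27 = 0.12).
print(round(run_trial(100_000, 0.27, 0.39), 3))
```

Because the coin toss, not self-selection, decides who gets the program, the two groups are alike on motivation and everything else, which is exactly the counterfactual problem Leigh describes.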

One example Leigh cites as useful is the 1999 NSW drug court trial, “which helped to break through the fractious politics of drugs in NSW” by randomly assigning offenders to either go through a drug court and then drug treatment, or the traditional criminal justice process. Within a couple of years, it was clear those who went through the drug court were much less likely to reoffend than those who went through the traditional judicial process.

“In an environment in which people knew that the drug court and the drug treatment program were going to be more expensive,” Leigh said, “it was really important to have a rigorous randomised trial, which ended up showing that the drug court more than paid for itself through averted crimes in the year after the program finished.”

Leigh would also like to see more encouragement of “ideas percolating up” in the public service.

Asked how the public service can remain innovative, he offers praise for a 2007 ideas competition called Policy Idol run by the Victorian Department of Premier and Cabinet, where young public servants would pitch an idea to a panel of senior executives, including a deputy secretary. The winner was given a week to work offline developing a feasibility study of their idea. As Leigh described it: “Australian Idol for policy nerds!”

He sees such programs as “encouraging a culture in the public service that the public service doesn’t just implement policies but it also produces them, and that kind of idea that all good organisations have, that clever ideas can come from anywhere, not just the top. The idea was not just to identify an idea or two at the end of it, but also to help change the culture and so people were putting ideas up to managers or ways in which things could be done better.”

And although he’s sceptical the public service will ever go the way of Google, where employees are given 20% of their time to work on new projects, he believes that “allowing a little bit of flexibility of ideas generation is important and helps with staff retention as well”.
