Statistics can be useful, but they need careful interpretation and shouldn't be used in isolation; otherwise they can be more of a hindrance than a help.
It has often been said that you can prove anything you want with statistics, and it's certainly true that you can read them in many ways. I am often involved in discussions about how to measure the success of volunteer programmes. A figure often discussed is the volunteer drop-out rate. This is attractive because it's a single figure, should be quite easy to measure (assuming your organisation holds data on who volunteers and when they stop) and at first glance seems like a good measure of how much people enjoy their volunteering – after all, if they don't enjoy it they will vote with their feet and leave.

However, this statistic, like so many others, needs to be interpreted carefully. There is often an assumption that a lower drop-out rate is better, whether in comparison to other charities or to previous years. Unfortunately a comparison like this fails to take all sorts of factors into account. For example, say Charity X specialises in engaging people who are not in education or employment in volunteering as a way of gaining skills, confidence and experience. We would hope that many of these volunteers leave to go on to other things, such as paid work or more advanced voluntary roles. So a low volunteer 'drop-out' rate may not be a good thing – it could show that people are not developing in the way we hope and are therefore staying in the programme. That said, a high drop-out rate is not necessarily good either – perhaps people aren't going on to other opportunities, but are instead dropping out of the system and simply not engaging any more. Other factors therefore need to be taken into account, such as why people left and what they went on to do next. The statistic alone gives us nothing like the whole picture.
In my blog More than just fundraising I mentioned my frustration with the use of numerical measures as the way of judging a charity – for example, by considering how much of each £1 is spent on fundraising or how much is spent on overheads. Aside from the wider issues with judging charities this way (eloquently explained by Dan Pallotta in his TED talk), the numbers themselves are not the immovable, definitive measures we sometimes think they are. Take the proportion of money spent on overheads: the way this is calculated can vary widely. Charity A might count all its central office costs as an overhead, while Charity B allocates a proportion of these costs to each project, so the money is counted as a project cost rather than an overhead. (Incidentally, I think Charity B is right to do this – after all, the projects wouldn't exist without the support and activities of the central organisation, which are based at the office.) So even if their spending is identical, Charity A might look like it spends a much higher proportion of its money on overheads than Charity B. The statistic doesn't show the whole picture.
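To make that concrete, here is a minimal sketch – with entirely hypothetical figures – of how two charities with identical spending can report very different overhead percentages, purely because of how central office costs are allocated:

```python
# Hypothetical figures: both charities spend exactly the same money.
total_spend = 1_000_000    # total annual spend (£)
central_office = 200_000   # central office costs (£)

# Charity A counts all central office costs as overhead.
overhead_a = central_office
overhead_pct_a = overhead_a / total_spend * 100

# Charity B allocates three quarters of central costs to its projects
# (as support those projects could not run without), keeping only the
# remainder as overhead. The 75% split is an illustrative assumption.
allocated_to_projects = central_office * 0.75
overhead_b = central_office - allocated_to_projects
overhead_pct_b = overhead_b / total_spend * 100

print(f"Charity A overhead: {overhead_pct_a:.1f}%")  # 20.0%
print(f"Charity B overhead: {overhead_pct_b:.1f}%")  # 5.0%
```

Same money, same activities, yet one charity appears four times "leaner" than the other – the difference is entirely an accounting choice.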
And the same numbers can be presented (or interpreted) in several ways. For example, some funders and donors see it as a really positive thing that each £1 they invest in a certain non-profit is turned into £4, £5 or £20 by their chosen charity. Others might criticise the proportion of each donated £1 that is spent on fundraising, regardless of the fact that it results in a four-, five- or twenty-fold increase in the money available to help the cause. Numbers can be read many ways.
This is not to say that statistics cannot be helpful – they absolutely can, and I'm a big advocate of recording and monitoring data relevant to your organisation and your programmes, and of using it to help you understand what's going on, monitor progress and measure success. Coming back to volunteer drop-out rates, one charity I worked with asked volunteers to stay with them for at least a year in order to provide stability to the young people those volunteers supported. By comparing the number of volunteers dropping out within a year from area to area and from year to year, they were able to focus their efforts where they were most needed, monitor how well the organisation was meeting this target, and refine their volunteer recruitment and support to encourage more volunteers to stay for this time. This was a successful real-life use of a statistic, but it was not used in isolation: it was just one measure of the programme's success, considered alongside many other factors, such as key events that might have affected a large number of volunteers in a certain patch.
Recently I came across a quote about statistics from Aaron Levenstein: "Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital." I think we would do well to remember that.