Expert advice on evaluation

Posted on 16 Dec 2020

By Joshua Presser and Georgie Bailey, SmartyGrants


How do you know whether your grants program is effective? How do you know whether it’s delivering value for money?

Evaluation is critical to finding out the answers to these questions, identifying any unintended negative consequences of your program, and uncovering areas for improvement. Evaluation also provides an opportunity to review how well your policies and processes function and make any necessary improvements.

Few grantmakers would dispute any of that, but at SmartyGrants we often get asked how to put the theory into practice. How do you do evaluation?

In this article we bring you a conversation between Josh Presser, the director of special projects at SmartyGrants, and Georgie Bailey, director of business solutions, design and development, who look at what grantmakers need to consider when it comes to evaluation.

Their discussion is based on a SmartyGrants webinar, which you can watch in full here.

It builds on an earlier discussion from the same webinar, Expert advice on reporting: progress reports, acquittals and outcomes.

Georgie Bailey, director of business solutions, design and development, SmartyGrants

Is it more important to gather data from internal or external stakeholders? Should I be equally concerned about what the applicants, the grant officers, and the assessors think about the program?

Josh: Neither is more important than the other. A good program evaluation will take a holistic approach, identifying and engaging with all stakeholders who can provide an insight into how the program went.

Evaluation isn’t something you bolt on at the end as an afterthought. You should consider it part of your program design. Your evaluation strategy should clearly define the scope and purpose of the evaluation, identify key stakeholders, and set out how they will be involved in the process. A good evaluation will engage with all stakeholders, including beneficiaries.

Georgie: It’s really important to include the perspectives of all stakeholders. Different stakeholders are going to have different views of what is important. For example, there can even be divergence of views among internal stakeholders – a grants officer’s thoughts on how things run can be quite different from executives’ opinions, and different departments may have a different perspective. Anyone involved in the process should be consulted and their perspective considered in the context of the whole.

Josh Presser, director of special projects, SmartyGrants

How do you get real data from applicants/grant recipients?

Georgie: First of all, real data won’t just include positive feedback. If you really want a proper picture you have to be prepared to ask honest questions, and welcome both positive and negative feedback.

Stakeholders will be more honest if you give them the opportunity to tell their story, making it interesting, relevant and engaging for them. Applicants and grantees often go to great effort to supply information and feedback, and you don’t want them thinking that it is all going into a black hole and they’ll hear no more about it. Think about feedback – plan to share some aggregated results back to participants to let them know they were heard.

Getting real data starts with being realistic – don’t ask for data that respondents don’t have the skills or capacity to collect.

Josh: If you have cultivated an open and trusting relationship with grant recipients over the course of the grant, to the extent that they feel safe to share not only their achievements but also their challenges and difficulties, then honest, frank and fearless feedback will flow easily into your evaluation. Negative feedback isn’t a bad thing; it’s an opportunity to learn and improve. If you’ve been talking and engaging openly with grant recipients over the course of the grant, there shouldn’t be any surprises at evaluation time. As Georgie said, a good evaluation will welcome both positive and negative feedback.

In terms of unsuccessful applicants, giving honest, open and timely feedback on their applications is a good way to elicit honest feedback on your program and process. Don’t wait until the end of your program to seek feedback from unsuccessful applicants.

How can grantmakers benchmark accurately?

Josh: Benchmarking can be challenging. You need to make sure you’re comparing apples with apples and ensure all grant recipients are reporting consistently against the same metrics. Depending on the capability and experience of your grantees, your systems, and the data you are asking for, this might require some capacity building and training for some grant recipients.

Benchmarking data can be useful, but an overemphasis on benchmarking (quantitative data) can also lead to perverse outcomes. For example, a grant recipient may appear to be achieving outcomes for only a small number of clients, but those clients may have more complex needs and require a higher level of resources to achieve the same results. Likewise, another grant recipient may appear to be getting a large number of outcomes for the same inputs but may be cherry-picking less-complex clients.

Clear and open communication with grant recipients about how benchmarking data will be used is important to alleviate any concerns they may have. Using quantitative benchmarking data as the basis for discussion with grant recipients, and reviewing qualitative data alongside it, is a great way to get additional context and nuance.
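To make that cherry-picking risk concrete, here is a minimal sketch in Python – with entirely hypothetical grantee names, funding figures and complexity weights, not SmartyGrants data – of how cost per outcome can look very different once client complexity is taken into account:

```python
# Hypothetical acquittal figures for three grantees reporting against the
# same metrics. "complexity_weight" is an assumed score of how resource-
# intensive each grantee's client cohort is (1.0 = baseline).
grantees = [
    {"name": "Service A", "funding": 100_000, "outcomes": 40, "complexity_weight": 1.0},
    {"name": "Service B", "funding": 100_000, "outcomes": 15, "complexity_weight": 2.5},
    {"name": "Service C", "funding": 100_000, "outcomes": 50, "complexity_weight": 0.8},
]

for g in grantees:
    raw_cost = g["funding"] / g["outcomes"]
    # Weighting each outcome by cohort complexity gives a fairer comparison.
    adjusted_cost = g["funding"] / (g["outcomes"] * g["complexity_weight"])
    print(f"{g['name']}: ${raw_cost:,.0f} per outcome raw, "
          f"${adjusted_cost:,.0f} per complexity-adjusted outcome")
```

On the raw figures Service B looks expensive and Service C looks cheap; once cohort complexity is weighted in, all three are roughly comparable – exactly the kind of nuance that discussion with grant recipients can surface.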

When do I start collecting evaluation data?

Georgie: You should start planning for evaluation when you design your program guidelines and you should start capturing data immediately. For example, how long did your assessment process take? How many applicant questions did you have to field? Don’t wait till the end of your program to try to figure out how you can tell whether it achieved your aims.

Josh: In terms of evaluating program outcomes this depends on the type of evaluation you are undertaking. Summative evaluations review a project at the end, while formative evaluation reviews processes as they are implemented. Some grantmakers choose a combination of both. As Georgie said, whatever your approach, your evaluation strategy should be devised during the program design stage.

Collection of information about your program design and administrative processes will commence once they are operationalised. It’s useful to collect information about how things are working as you go (for example, the number of calls to your information line, common questions received, incorrect responses on forms). Some of these things can be fixed or tweaked along the way.
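As a lightweight illustration (the enquiry topics below are invented), simply tallying enquiries as they arrive is one way to spot recurring problems while there is still time to tweak a form or guideline mid-program:

```python
from collections import Counter

# Hypothetical log of enquiry topics recorded by a grants information line.
enquiries = [
    "eligibility", "budget template", "eligibility", "due date",
    "attachment upload", "eligibility", "budget template",
]

# Tallying as you go shows which questions recur, pointing at guidelines
# or form fields worth fixing mid-program rather than at evaluation time.
for topic, count in Counter(enquiries).most_common():
    print(f"{topic}: {count}")
```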

What questions do I ask?

Georgie: Again, it depends very much on what you are evaluating. If you are looking at process, you will be interested in the ‘time vampires’ – what sucked all your time? You might also look at what was difficult, what was easy, what went wrong and what worried you. Did you attract the right applicants? Did you ask the right questions to pick the right projects? Did you have the right assessors?

Evaluating your program is very much about outcomes. What was the program supposed to achieve? What are the appropriate questions to establish whether it did – and if not, why not? Did we attract the right applicants? Did we collect the right data to be able to measure change? Some of what you ask will vary from one program to the next.

Josh: As Georgie has said, the questions you will ask will be determined by the purpose of the evaluation. For example, are you aiming to assess the effectiveness (achievement of outcomes), efficiency (value for money, return on investment) or administrative processes of your grants program, or a combination of all three?

The questions you ask and the way you ask them will also depend on the stakeholders you are engaging with. For example, your approach to grant beneficiaries will necessarily be different from your approach to grant recipients or internal stakeholders (e.g. grant managers).

Can you give an example of how a lesson learnt during evaluation informed what happened to a program the following year?

Georgie: As many of us would know, it can be hard to make substantive changes to a program or policy once it is in place, particularly for politically driven programs. The main areas grantmakers can influence are processes, procedures and how outcomes are measured, as opposed to the actual policy or program guidelines.

An evaluation of our ability to measure outcomes led to the realisation that we couldn’t tell a change story. In response, we changed to a standardised data model across all our programs, which meant we could better talk about change, and also look at what we were doing more holistically. For example, we could now look at outcomes for organisations that we funded across several of our programs, and the sector they represented, not just in the separate silo of each program.
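As an illustration only – the record structure and field names below are hypothetical, not the actual model Georgie describes – a standardised data model can be as simple as every program reporting against one shared outcome record, which makes cross-program, per-organisation aggregation straightforward:

```python
from dataclasses import dataclass
from collections import defaultdict

# A hypothetical shared outcome record used by every program, so results
# can be rolled up by organisation or sector rather than per-program silo.
@dataclass
class OutcomeRecord:
    program: str
    organisation: str
    sector: str
    outcome_measure: str  # e.g. "participant wellbeing score"
    baseline: float
    result: float

records = [
    OutcomeRecord("Youth Grants", "Org X", "Community", "wellbeing score", 52.0, 61.0),
    OutcomeRecord("Arts Fund", "Org X", "Community", "wellbeing score", 55.0, 63.0),
    OutcomeRecord("Youth Grants", "Org Y", "Health", "wellbeing score", 48.0, 50.0),
]

# Because every program uses the same structure, change can be summed
# across programs for each organisation, rather than told one program
# at a time.
change_by_org = defaultdict(float)
for r in records:
    change_by_org[r.organisation] += r.result - r.baseline

print(dict(change_by_org))  # {'Org X': 17.0, 'Org Y': 2.0}
```

With every program writing to the same structure, the change story for an organisation funded under several programs can be told across all of them, not inside each silo.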

Process evaluation probably yielded the biggest benefit. Evaluation of how we were doing things led to revised processes and procedures, cutting waste, double handling and time-consuming tasks of little value. This increased efficiency and led to more effective results.

Josh: External program evaluations and reviews can be a great opportunity for change. I was managing a multi-year grants program that underwent an external evaluation by a major university. Through discussions with grant recipients and program beneficiaries, the review found that several vulnerable client cohorts were unable to meet the program’s eligibility requirements, and it recommended changes to eligibility to ensure that these groups didn’t fall through the cracks. While grant managers had known about this issue for some time, the external evaluation provided the evidence base key decision makers needed to take action, and changes were incorporated into the redesigned program.

The review also found that the program was not collecting the right data to be able to tell if desired outcomes were being achieved, and as a result, new data collection and reporting requirements were incorporated into the next iteration of the program.

Want to learn more? Check out the Grantmaking Toolkit

The Grantmaking Toolkit is free for SmartyGrants users.

The suggestions in this article are tied to the processes outlined in the SmartyGrants Grantmaking Toolkit, the definitive guide to building best practice into your grants processes and programs. The toolkit covers the nine stages of the grantmaking lifecycle, including the stage dealt with in this discussion:

Stage 9 – Evaluate and Share: evaluating your grants program.

The toolkit also includes 24 separate policy and operational templates to help you to tailor each process for your organisation.
