Step 4: Develop an evaluation plan

This section provides advice on how to create an evaluation plan with key evaluation questions and indicators of success.

You will find an evaluation plan template on the resources and templates page.

What is an evaluation plan?

An evaluation plan records the purpose of your evaluation, the key evaluation questions you will use to evaluate the success of your project, and the indicators you will use to define what ‘success’ means.

What is my evaluation’s purpose?

Before drafting key evaluation questions, it’s important to write down your evaluation purpose. This will help determine the design, resources, and timeframe.

Consider the context for the evaluation. Why is it being conducted? How will it be used? The purpose of your evaluation might be:

  • Advocacy – To make the case for the retention or addition of funding for a project that makes a difference. Here, the end reader might be government, a philanthropic organisation, your own board or organisation, or others.
  • Acquittal – To show a funding body that you have effectively and efficiently used their monies and met your objectives. Here, the end reader might be your own board, or another funding body such as government.
  • Continuous internal improvement – To support the internal improvement of your project to increase the chance of reaching your desired outcomes now and/or in the future. Here, the end readers are operational staff.
  • To contribute knowledge and practice to the field – To provide evidence of approaches to prevention that can underpin improved policies and practices (for self and others). Here the end readers might be practitioners, professional associations, or philanthropists. Your questions may be as simple as, ‘How well did our activity work? To what extent did it have the intended (or any unintended) impact on participants? Was there evidence of change?’

While evaluations often have many purposes, it is useful to keep the central purpose in mind, as the value of your evaluation can be reduced if you try to do too many things.

What are my key evaluation questions?

Key evaluation questions are the ‘big picture’ or high-level questions your evaluation is designed to answer about your project. For example, did you meet the outcomes outlined in your project logic? Was the rollout of your project effective and smooth?

These questions should be developed by considering the purpose of your evaluation, the type of evaluation being done (formative, summative, developmental), and its intended users.

Tip – key evaluation questions

State your key evaluation questions before you implement your project so you can be sure that you are collecting the right data during project implementation. Monitor your evaluation and data collection plans to ensure you are continuing to collect the data needed to answer your key evaluation questions.

Try not to have too many key evaluation questions. The Better Evaluation website suggests that five to seven questions are sufficient.

Examples of key evaluation questions(xxxv)

A simple method for building key evaluation questions is to draw on the evaluation criteria described in the ‘About evaluation’ section of this Toolkit. Here, we have taken the criteria and reframed them as questions.


  • To what extent was the project relevant or suited to (or the best way of) delivering the outcomes?
  • To what extent was the project delivered as intended by its developers and in line with the project model?
  • To what extent was the relationship between inputs and outputs timely, cost-effective and to expected standards?
  • To what extent did the project achieve (or is it expected to achieve) its objectives and results, including any differential results across groups?
  • To what extent did we reach our short-, medium- and long-term outcomes, drawing on the measures we used for assessment?
  • To what extent has the project generated (or is it expected to generate) significant positive or negative, intended or unintended, higher-level effects?
  • To what extent can the outcomes or benefits of the project be sustained, and what is required to enable this? To what degree are there indications of ongoing benefits that can be attributed to the project?

For each of these questions, we might also ask: what evidence do we have to support our answer?

Tip – Key evaluation questions versus data collection questions

Key evaluation questions are different to questions that you might ask participants while collecting data for your evaluation. A key evaluation question example is, ‘Was the project rolled out as planned?’ There are many data sources that can contribute to answering this question – but these need to be project specific. For example, you might ask participants during an interview, ‘How did you find the mentoring process?’ or ‘Was the enrolment process easy to navigate?’ Information from these interviews will then create a source of data for answering your key evaluation question.

Suggested resources:

Guides to primary prevention practice are helpful for formulating key evaluation questions. We suggest you look at:

What are indicators of success?

In evaluation, you will often hear terms such as indicators, measures, targets, and data sources. In this document we talk about indicators (measures of success), and the data sources you draw upon to measure them. Measuring indicators helps you track progress towards your outputs, outcomes, and key evaluation questions.

Develop your indicators

There are many ways to measure progress towards your goals. Indicators help you define what progress means for your specific project. For example, if you want to know whether your project rolled out efficiently, what might that mean? Does it mean that:

  • it was completed during the identified timeframe?
  • all the funds were expended?
  • the funds were used in the right way?
  • there has been little unnecessary wastage of resources?

If these are the variables that matter to you (or your stakeholders), then they can become your indicators (measures of success). Together, they help you measure whether, or to what degree, you have answered your key evaluation question about efficiency.

Measuring success

To know whether you have met your indicators, you require a means of verification – a way of showing you have made progress. To verify that you have met your indicators, you will draw on data sources (which we discuss in Step 5). For example, if one of your project logic outcomes is ‘participants understand what family violence is’, an indicator may be, ‘increase in the number of participants who can identify different types of family violence’.

Your data collection sources might include results from a pop quiz that you run with participants, observations of conversations you have in a workshop, or feedback from a survey in which participants rate their levels of understanding. Interviews might tell you further why participants had different levels of understanding – this can help with improvements to your project.

As noted, your key evaluation questions might ask about aspects of implementation (how you did things), outputs (what you produced or with whom you engaged) and outcomes (the change your project has made). When tracked over time, indicators and measures highlight progress (or a lack of it) towards the objectives and outcomes in your project logic, and towards answering the key evaluation questions in your evaluation plan.

Suggested resources:

Tools for developing indicators of success

Once you have developed some draft indicators, it can be useful to test them to see how practical and meaningful they will be. If your indicators are too broad or abstract, they will be difficult to measure (for example, ‘participants understand our content’ leads to the question, what does ‘understand’ mean?). If they are too technical, you might not be able to find an easy way of measuring them. If they will only occur years after your project has finished (e.g., all community members in the City of Smithtown regularly call out bystander behaviour), then you will not be able to measure them within your project’s timeframe.

A popular method for setting and testing meaningful goals, first proposed by Doran(xl), is the SMART method. While the words in the acronym have changed over time, SMART typically stands for:

  • Specific
  • Measurable
  • Achievable/Attainable
  • Relevant
  • Time-bound.

The SMART criteria are a lens through which to review and rework your indicators.

A note on primary prevention

The SMART approach is useful; however, keep in mind that in a field like primary prevention (as in any behaviour-change or social-impact field), outcomes are not always easy to measure(xli). As Funnell and Rogers (2011)(xlii) counsel, ease of measurement shouldn’t be the only or defining factor when drafting your outcomes.

Suggested resources:

There are many useful resources containing descriptions and examples of SMART indicators in evaluation contexts.

Before you go ...

  • Brainstorm your indicators, and some of the data sets that will help you measure progress.
  • Use the SMART methodology to test your indicators.
  • Exclude things that will be too resource intensive, that are beyond the timeframe of your project, or that are just ‘nice to know’.
  • Consider whether the data you are collecting will be too time consuming compared with the benefit it will provide.
  • Think about whether you will need ethics approval, or whether there are other ‘gateway requirements’ that might make collection more challenging. This does not necessarily mean indicators should be excluded – just that you should weigh up their value.
  • Refer to your project logic and project plan as resources for creating your indicators.
  • Don’t have too many key evaluation questions – up to seven questions is plenty.



(xxxv) Adapted from the following sources:
– Respect Victoria (2021) Monitoring and Evaluation Strategic Framework. Respect Victoria, Melbourne.
– Better Evaluation (2022) Specify the Key Evaluation Questions. Better Evaluation, Melbourne. Accessed 5/7/22.
– OECD (2021) Applying Evaluation Criteria Thoughtfully. OECD Publishing, Paris. Accessed 7/7/22.

(xxxvi) Breitenstein, S.M., Fogg, L., Garvey, C., Hill, C., Resnick, B., and Gross, D. (2010) Measuring implementation fidelity in a community-based parenting intervention. Nursing Research, 59(3), p. 158–65.

(xxxvii) OECD (2021) Applying Evaluation Criteria Thoughtfully. OECD Publishing, Paris. Accessed 7/7/22.

(xxxviii) OECD (2021) Applying Evaluation Criteria Thoughtfully. OECD Publishing, Paris. Accessed 7/7/22.

(xxxix) OECD (2021) Applying Evaluation Criteria Thoughtfully. OECD Publishing, Paris. Accessed 7/7/22.

(xl) Doran, G.T. (1981), There’s a S.M.A.R.T. Way to Write Management’s Goals and Objectives. Management Review, (70), 35–36.

(xli) P. 13, Quigg, Z., Timpson, H., Newbury, A., Butler, N., Barton E, and Snowdon, L. (2020). Violence Prevention Evaluation Toolkit. Public Health Institute, Liverpool John Moores University/Wales Violence Prevention Unit, Liverpool/Cardiff.

(xlii) Funnell, S., & Rogers, P. (2011). Purposeful Program Theory: Effective use of theories of change and logic models. John Wiley & Sons, San Francisco.