K&A plan template guidelines

Intended impact of your project (objectives):

The intended impact of your project links back to its objectives. In undertaking your research, you intend to provide new knowledge that will improve the management of natural resources. What is this impact? The following prompts may assist you in determining your intended impact:

  1. Is the project seeking to influence on-ground practice, NRM policy or NRM planning? At what stage in the project's life?
  2. Who are you trying to reach/influence through this project?
  3. Is there more than one target group? (Define each group precisely.)
  4. Why would the target group want to be involved with the project or the uptake of this research?
  5. What is the best way of reaching the target group?

Examples of intended impacts:

  • to inform policy
  • to build capacity amongst planners
  • to improve decision making processes
  • to inform an emerging scientific field.

Who (Target)

Those who are involved in, affected by or interested in your research in natural resource management can contribute to a range of activities. In the strategy we categorise them into three areas:

  • Policy - The Australian and State governments
  • Planning - e.g. 56 regional NRM groups (CMAs) and local governments
  • Practitioner - e.g. land managers, extension staff and networks, and agencies or organisations that manage land assets.

Note: These are a guide only - some individuals and organisations sit in more than one category.

Your project may be relevant to some or all of these sectors. Understanding their attitudes and practices concerning the research and/or the issue the research addresses will assist you in reaching them more effectively.

A comprehensive contact list for your target participants and audiences is fundamental to undertaking engagement and communication activities. It also assists in monitoring and evaluation.

I want to find out more:

  • Examples of target audiences

Type of engagement/how (Method)


There is a broad range of methods to manage knowledge for adoption, from direct engagement or collaborative research through to tailored communication products and, finally, indirect information provision.

Selection of your methods depends on the content, target audience, required outcomes and resources available for implementation or delivery. Not all of the methods will be applicable to all projects.

The method may often be influenced by the ‘adoptability’ of the research, e.g. relevance, trialability, skills required and cost implications.

I want to find out more:

  • Examples of methods

Monitoring and Evaluation (engagement & impact) (Measure)

Monitoring and evaluation (M&E) at the project level links to program and corporate monitoring and evaluation. The monitoring and evaluation column refers to each individual line item against which you have identified a target and method – it does not refer to evaluating your entire project. Nor should it duplicate any other M&E or project work that you do.

It is useful to consider measuring how you have undertaken these activities as well as the outputs and outcomes. For example, feedback on how a steering committee or workshop has been run can be useful for making immediate improvements while the project is still underway. Measuring outputs is the easiest step – e.g. how many events have been run and how many people attended. Capturing outcomes can be more difficult in the lifespan of a project, particularly adoption outcomes. These are more likely to be measured at a program or corporate level with techniques such as Return on Investment.

I want to find out more:

  • Examples of monitoring and evaluation techniques
  • Monitoring and Evaluation

Legacy

A K&A project plan should consider methods for project implementation as well as methods for securing the project's legacy once the research project is completed.

Your project may have important outcomes for adoption beyond the lifecycle of your project. A legacy plan ensures the research outcomes are not forgotten upon completion of your project. Managing the project’s legacy may be undertaken as part of the program.

I want to find out more:

  • Examples of legacy activities 
  • Legacy

Advice regarding media and branding

Before preparing a publication, presentation or media release contact Land & Water Australia to check any style, branding or media guidelines you should be using.

Formatting, style and branding issues are best handled early in production to minimise angst and cost.

K&A_plan_template.doc (41 KB)
Example_of_a_completed_KA_plan_V1.pdf (42.87 KB)

Examples of methods


Engagement:

  • participatory action research
  • grower or user initiated research
  • research advisory committees
  • citizens’ juries
  • public hearings
  • web-based meetings
  • grower field sites


Tailored communication:

  • policy briefings
  • tailored workshops
  • targeted issues papers
  • information centres and field offices
  • one-to-one surveys and response sheets
  • focus groups

Information Provision:

  • general publications
    • fact sheets
    • newsletters
    • project reports
    • journal articles
  • websites
  • general media
    • newspapers
    • television
    • press releases
    • advertisements

Examples of monitoring and evaluation techniques


Engagement:

  • most significant change
  • ex-post adoption survey
  • feedback on effectiveness and appropriateness of engagement processes
  • surveys on likelihood of uptake and barriers to uptake

Communication/information provision:

  • longitudinal survey of attitudes
  • feedback on usefulness, structure and content of events
  • downloads from a website
  • numbers at a seminar
  • feedback on publications

Monitoring and Evaluation

Monitoring and evaluating (M&E) your project's K&A activity, such as a workshop, gives you information to assess that activity. M&E provides an opportunity to learn and improve as you go, feeding the results back into research and employing adaptive management.

Knowing how, where and by whom your research is being heard, tested or applied can also:

  • confirm some of your assessments with evidence,
  • guide your allocation of resources, and
  • provide information for future papers and publications.

I want to find out more:

  • Monitoring & Evaluation questions
  • How to measure your impact
  • Define your performance indicators
  • Select your methods
  • Feedback on your Monitoring & Evaluation
Some_M&E_Methods_V_0_1.pdf (37.49 KB)

Define your performance indicators

As with research, developing the most relevant objectives, performance indicators, questions and methods is usually well worth the time invested.

For example: if your knowledge and adoption objectives are to increase farmers’ awareness and adoption of native grasses for grazing in Newhaven:

  • One of your performance indicators may be that 'Six months after the start of the project, 40% of farmers in the Newhaven area will have heard about the value of native grasses for grazing.'

Your evaluation methods for this performance indicator may include polling farmers at a community meeting, conducting short phone interviews with a sample of local farmers, or discussing the issue with key leaders in the farming community and/or their advisers.
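As a sketch of how survey results for such an indicator might be tallied, the figures, the sample responses and the 40% threshold below are purely illustrative:

```python
# Illustrative only: check a hypothetical awareness indicator against its target.
# The survey responses and the 40% threshold are invented for this example.

def awareness_rate(responses):
    """Proportion of surveyed farmers who had heard about native grasses."""
    return sum(responses) / len(responses)

# 1 = farmer had heard about native grasses for grazing, 0 = had not
phone_survey = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

rate = awareness_rate(phone_survey)
target = 0.40  # performance indicator: 40% awareness after six months

print(f"Awareness: {rate:.0%} (target {target:.0%}) - "
      f"{'met' if rate >= target else 'not yet met'}")
```

In practice the responses would come from your polling or phone interviews; the point is simply that a clearly stated threshold makes the indicator easy to test.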

Other performance indicators may include:

  • the number of farmers actually using/planting native grasses over the lifetime of your project, and whether this number increases over time
  • whether relevant farming groups and agricultural advisers are incorporating the value of native grasses for grazing in Newhaven in their information and advice
  • whether a workshop or webcast on native grasses for grazing in Newhaven was a success, and who for
  • whether new publications on the value of native grasses for grazing were received by the target audiences and seen as useful
  • the media and web pick-up on your research/issues

Feedback on your Monitoring & Evaluation

Discuss your survey questions with a sample of researchers and/or natural resource managers you know, and ask them for their feedback:

  • Are the evaluation questions hitting the mark?
  • Are your evaluation methods and questions going to give you information that you can use to improve your knowledge and adoption?
  • Can they see any gaps in your evaluation, or make any suggestions?


How to measure your impact

Research that contributes to different areas will need different approaches.

If your research contributes to government policy then the stakeholders, knowledge and adoption approaches, and performance indicators you use may be different to those you would use for a collaborative industry project that contributes to on-ground practice.

The two main types of monitoring and evaluation methods are qualitative and quantitative.

Qualitative evaluation methods

These allow stakeholders to explore issues and provide feedback in more depth and complexity, unconstrained by set questions. They can yield detailed evidence, examples, problems and ideas, but can be more difficult and costly to analyse and report.

Qualitative evaluation methods include:

  • focus groups
  • individual in-depth interviews
  • written comments
  • most significant change
  • quick polling face to face

Quantitative evaluation methods

These are relatively easy to analyse and report, but don’t tell you ‘why’ results are as they are.

Quantitative evaluation methods include:

  • survey questionnaires
  • special instruments such as 360-degree surveys
  • electronic surveying
  • card sorts
  • quick voting

Quantitative evaluation methods allow you to:

  • get precise measurements
  • track progress over time
  • measure strengths and weaknesses
  • compare to benchmarks

Some quantitative evaluation questions:

  • Was a draft report submitted to X Committee by a specified date?
  • How many people attended a public meeting?
  • What was the increase in the number of requests to be put on the electronic newsletter list?
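Questions like these lend themselves to simple counts and comparisons. A minimal sketch of tracking such measures, with invented attendance and mailing-list figures:

```python
# Illustrative only: the attendance and mailing-list figures are invented.
newsletter_list = {"before_event": 120, "after_event": 165}
meeting_attendance = [35, 48, 52]  # head counts at three public meetings

# Increase in newsletter sign-ups, in absolute and percentage terms
increase = newsletter_list["after_event"] - newsletter_list["before_event"]
pct_increase = increase / newsletter_list["before_event"]

print(f"Newsletter sign-ups rose by {increase} ({pct_increase:.0%})")
print(f"Total meeting attendance: {sum(meeting_attendance)}")
```

Even a simple record like this, kept consistently over the life of the project, lets you compare against benchmarks and report progress over time.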

Monitoring & Evaluation questions

Think about your research project:

  • Do your stakeholders feel informed about developments in your project, and to what degree?
  • Do your stakeholders know more about an issue or practice after a period of time?
  • Which communication activities were effective, and for which audiences?
  • Does web traffic increase to local Landcare and regional NRM agency websites following the establishment of an online discussion group about an issue or practice?
  • Can visitors to your trial sites apply the principles of what they have learnt?
  • Can visitors see any barriers to adoption, and what could encourage them to adopt related practices?
  • What are the key points that field day participants have learnt?
  • Are there any unanswered questions? If so, what are they?

Select your methods

Knowledge and adoption outcomes can be measured in a variety of ways, from short electronic surveys to in-depth case studies. A combination of evaluation methods is often the best way to go.

  • Check that the evaluation methods will answer the evaluation questions, and that they relate back to your project's objectives.
  • Consider the cost, time, resources and skills available when choosing methods.
  • Consider the ethics of your monitoring and evaluation – the people who are using your research are probably busy, and many producers and regional groups are over-surveyed.