Evaluation Checklist

Whether you are evaluating the effectiveness and efficiency of a program or service, your organizational processes, or the satisfaction of your clients or customers, successful evaluations require careful planning and thought. The "success" of your evaluation should be measured by the degree to which it provides the information and knowledge needed to optimize your business, organization or department with the least disruption. This evaluation checklist contains fifteen questions that managers should consider before beginning the evaluation process.

1) If the evaluation relates to a program or service, has a logic model been developed for it?

  • Logic models are frameworks that document a program or service's goals, the measurable objectives that support those goals, the activities undertaken to achieve them, and the related performance targets and indicators.
  • If a logic model has not been developed, this should ideally take place before beginning your evaluation process. Alternatively, the evaluator can sometimes be engaged to help you develop the logic model.

2) What are the objectives of the evaluation?

  • What do you hope to achieve through the evaluation?
  • How will you know if the evaluation goals have been reached?
  • What happens if the desired results are not achieved, or the evaluation findings are unfavorable? Is the program or organization open to change if change seems to be required?

3) Why is the evaluation being conducted at this time?

  • "Why now?" Did a specific event prompt the evaluation?

4) Where did the drive for evaluation come from?

  • Who initiated the evaluation process?
  • What was the motivation behind this request or demand?

5) Who are the primary target groups for the evaluation results?

  • What are the interests of your various constituencies, including funders, the board of directors, clients / customers, managers / supervisors, staff, etc.?

6) What level of analysis do you require?

Survey results can be presented in several report formats, ranging from tabular reports (without analysis or interpretation), to highlight reports that provide an overview of the results, to fully annotated technical reports. Complete reports include:

  • Descriptions of survey methodology and sampling processes (where applicable)
  • Respondent profiles (where applicable)
  • The survey results, including statistics and interpretation
  • Conclusions based on the findings
  • Recommendations and next steps, based on the conclusions.

Reports should contain enough information for readers to form their own conclusions about the implications of the survey results.

7) Who owns the data?

  • Who will own the data once it has been processed and the report written? The implications here concern both control and responsibility.
  • What provisions have been made to share the data with internal and external stakeholders?

8) What resources are available for the evaluation?

  • Will you use your own staff to design and manage the evaluation, or engage an external evaluator?
  • Do you have additional human resources to help with the evaluation? These may include clerical and/or technical support.
  • What financial resources are available for the evaluation? You need to develop a realistic budget that reflects the full scope of the desired evaluation.
  • Is it possible to offset some costs by having the organization absorb expenses related to printing, postage or administration of evaluation instruments, where appropriate?
  • If the evaluation is carried out internally, what access does the organization have to subcontractors for technical portions of the work, if necessary?

9) Will the evaluation be conducted internally by your staff? If so, consider the following questions:

  • What is the relevant experience and expertise of the people responsible for conducting the evaluation?
  • Will internal evaluators be expected to fulfill their normal responsibilities at the same time? How much of their time will be available for the evaluation?
  • How much autonomy and independence do internal evaluators have?
  • What resources will be available to assist them in this process? Will they have the ability to delegate parts of the work?
  • Will they feel free to present a report highlighting controversial issues or unfavorable findings, or will they feel pressured to report positive results?

10) If external evaluators will be used, how are they screened and selected?

  • Who will develop the Request for Proposal (RFP)?
  • Does the search committee have experience delivering or overseeing evaluations?
  • What are the criteria for screening proposals?
  • What qualifications and credentials best match the needs of the organization?
  • Who makes the final decision on the selection of the evaluators?

11) If external evaluators will be used, who will be the link between the evaluators and the organization?

  • What is the liaison's scope of authority for decision making?
  • Does the liaison have experience or expertise in evaluation or research methods?
  • How much time does this person have available for this task?

12) What are the organization’s timelines for conducting the evaluation?

  • Who established these timelines, and why?
  • Are the timelines realistic?
  • Are they flexible?
  • If your timelines are fixed, can the scope of the evaluation report be scaled back? For example, reporting time can be reduced by presenting aggregated data only, rather than breaking the data down by respondents’ gender, age, program, etc.
  • If your evaluation includes both satisfaction and performance surveys, it may be possible to present the satisfaction data as part of a preliminary report, with the longer-term outcome data provided later.

13) When was the last time the organization was evaluated?

  • What was the nature of the most recent evaluation (methodology, scope, intent)?
  • What were the outcomes of this evaluation, and what effect did the results have on the organization or company?
  • Were recommendations implemented? If not, why not?
  • How did staff and managers respond to the final evaluation? Was there support for the evaluation, or resistance to it?
  • Will the current evaluation be easier or more difficult as a result of past evaluations?

14) Should we set up a steering committee to oversee the work of the evaluators?

  • Use of a steering committee is recommended as a means of ensuring that your company or organization maintains full control of, and involvement in, the evaluation process. This is consistent with the OCAP principles of Ownership, Control, Access and Possession. These committees provide context for your evaluation work and are generally responsible for establishing the parameters of the evaluation, helping to develop any questionnaires, forms or focus group questions needed to conduct the study, and identifying and accessing the study sample. The committee should also be involved in reviewing the survey results to identify findings and to establish recommendations based on those findings.
  • The composition of the steering committee will vary depending on the sector, industry or the nature of the evaluation. Ideally, members of the committee represent all aspects of your operation that are involved in the processes, services or programs being evaluated.

15) And finally, is the program ready to be evaluated?

  • Is the logic model in place?
  • Are employees and managers prepared to undertake an evaluation at this time?
  • Are all necessary resources in place or easily accessible, including human, material and financial resources?
  • Have the previous questions been addressed to ensure effective and efficient design and administration of the evaluation study?