A tailor-made Monitoring and Evaluation (M&E) framework is one of the foremost components of the successful implementation of any project activity. Before suggesting tips for implementing M&E in CSR activities, it is important to understand what monitoring and evaluation each mean in the context of a project framework.
Monitoring is a periodically recurring task that allows results, processes, and experiences to be documented and used as a basis to steer decision-making and learning. It is a progress-checking mechanism designed to see whether the project is on track and meeting the outputs within the timelines set out in the project design and implementation plan.
Evaluation of a project examines how well or how badly the project has been implemented, to what extent it has achieved its results (outputs and outcomes), the challenges faced during implementation, and how they have been, or could have been, mitigated. It also provides a basis to judge whether the project can be scaled up and whether the model is replicable.
One of the most pertinent factors to keep in mind is that monitoring happens during project implementation, while evaluation takes place after the project has been implemented. Thus, even a mid-term evaluation falls in the category of monitoring, as it focuses on what hindrances are holding back the successful implementation of the project and what course corrections need to take place in order to put the project back on track.
Without tailored performance indicators and specific parameters that reflect the context and focus of the program, both monitoring and evaluation are likely to produce generic results, devoid of relevant lessons learned and useful recommendations for future programs. Thus, proper creation and management of a monitoring and evaluation framework is a critical element of the success of any developmental project.
The 8 key factors that need to be kept in mind while formulating a monitoring and evaluation framework for a project are as follows:
1. Track necessary Information only: M&E tools often end up capturing a lot of information that is not required or has no potential use for the program. For instance, if we are conducting an education program that works to improve students' learning outcomes by providing books, then the M&E team does not have to collect information about the students' mid-day meals. The idea is to collect only the information required to ascertain whether the project is on track (in the case of monitoring) and whether it has achieved the results needed to address the issues successfully (in the case of evaluation). This optimizes the cost, effort and time deployed for M&E studies.
2. Selection of Appropriate Data Collection Tools: Selecting tools suited to the nature of the project is a vital component of any monitoring and evaluation framework. It is a well-known fact that one size does not fit all; in the same way, the tools selected have to match the nature of the project, budgetary constraints and the available timeline. Defining the methods for data collection has important implications for the entire study and a direct impact on how the results will be reported.
3. Identifying M&E roles and responsibilities: It is imperative for any good monitoring and evaluation framework to assign specific roles and responsibilities within the project team. Further, assessment is to be done on the basis of specific indicators to be achieved within specified timelines. It is important to decide from the early planning stage who is responsible for collecting the data for each indicator. Data management roles should be decided with input from all team members so that everyone is on the same page and knows which indicators they are assigned and are supposed to record and track. This way, when it is time for reporting, appropriate actions can be taken based on real-time data collected from the ground, and learnings can be synthesized.
4. Indicators should be SMART: The indicators decided for any monitoring and evaluation plan should be SMART (Specific, Measurable, Accurate, Realistic & Time-bound). This implies that an indicator should be specific about what it measures or assesses, measurable quantitatively or qualitatively, able to capture data accurately, based on realistic parameters as per the on-ground situation, and tied to a specific timeline. These are the most significant elements to consider while framing performance indicators and parameters.
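The SMART criteria can also be thought of as the mandatory fields of an indicator record. The sketch below illustrates this in Python; the field names and the sample education-program indicator are invented for illustration and are not drawn from any real program:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Indicator:
    """A performance indicator holding one field per SMART criterion."""
    description: str  # Specific: what exactly is being measured
    unit: str         # Measurable: quantitative or qualitative unit
    baseline: float   # Accurate: value captured at the start of the project
    target: float     # Realistic: target grounded in the on-ground situation
    deadline: date    # Time-bound: date by which the target must be met

    def is_complete(self) -> bool:
        # An indicator is usable only when every SMART field is filled in.
        return bool(self.description) and bool(self.unit) and self.deadline is not None

# Hypothetical indicator for an education program (all numbers invented).
reading_scores = Indicator(
    description="Share of Grade 5 students reading at grade level",
    unit="percent",
    baseline=42.0,
    target=60.0,
    deadline=date(2025, 3, 31),
)

if not reading_scores.is_complete():
    raise ValueError("indicator is missing a SMART field")
```

Framing each indicator this way makes it immediately visible when a proposed indicator is missing a baseline, a unit or a deadline, before data collection begins.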
5. Use of Logical Framework Matrix (LFM): This is one of the most critical tools for ensuring that the entire project implementation plan, along with the results and the performance indicators, is drawn up in a matrix, making it easy for any implementation team to see whether the project is going to plan. It aids effective monitoring of the implementation phase by checking whether the project results (outputs and outcomes) are being met in a timely fashion. It also aids in assessing risks and assumptions and in creating a mitigation plan for them. Moreover, it helps in analyzing the roles and responsibilities of various stakeholders and in better management of the entire project. At the end of a project, it helps assess whether the project has met the outcomes and objectives it set out to achieve.
6. Create a Comprehensive Analysis Plan: Accurate analysis of data is a key component of the success of any monitoring and evaluation plan. It is very important for the data to be analyzed by a professional with prior experience in analyzing both qualitative and quantitative data sets. Analyzing quantitative and qualitative data requires different competencies, and it is imperative for the data analysis personnel to be well versed in both. The inferences drawn from the analysis should be substantiated with data comprising facts and figures (quantitative data) and/or the views, opinions and perceptions of the relevant stakeholders (qualitative data).
The M&E plan should include a section detailing what data will be analyzed and how the results will be presented. Do research staff need to perform any statistical tests to get the needed answers? If so, what tests are they and what data will be used in them? What software program will be used to analyze the data and build the reporting tables? All these elements should be given equal importance.
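As an illustration of the kind of quantitative analysis such a plan might specify, the snippet below uses Python's standard `statistics` module to compute the average gain between baseline and endline scores. The scores are invented for the example, not real program data:

```python
import statistics

# Hypothetical baseline and endline learning-outcome scores (percent)
# for the same cohort of students; all values invented for illustration.
baseline = [35, 42, 38, 50, 44, 40, 37, 46]
endline = [48, 55, 47, 62, 58, 49, 51, 60]

# Per-student improvement between the two rounds of data collection.
gains = [e - b for e, b in zip(endline, baseline)]

# Quantitative inferences to substantiate the evaluation report.
mean_gain = statistics.mean(gains)
gain_spread = statistics.stdev(gains)

print(f"Average gain: {mean_gain} percentage points (stdev {gain_spread:.1f})")
```

Deciding in advance which such computations will be run, on which indicators, and with which tool keeps the analysis phase from being improvised after the data arrives.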
7. Verification through Triangulation of Data: One of the most important factors to keep in mind while designing a monitoring and evaluation framework is that the data collected needs to be verified from multiple sources, through both primary and secondary research. This is imperative in order to increase the authenticity and accuracy of the data.
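A minimal sketch of what such a triangulation check can look like in practice, assuming hypothetical figures reported for the same indicator by three different sources (the source names, numbers and 5% tolerance are all invented for this example):

```python
# Hypothetical values of one indicator ("students enrolled") as reported
# by one primary and two secondary sources; all figures invented.
sources = {
    "field survey (primary)": 412,
    "school register (secondary)": 405,
    "government records (secondary)": 398,
}

values = list(sources.values())
spread = max(values) - min(values)

# Flag the indicator for re-checking if the sources disagree by more
# than an agreed tolerance (5% of the lowest figure, as an example).
tolerance = 0.05 * min(values)
verified = spread <= tolerance

print("verified" if verified else "needs re-checking", f"(spread = {spread})")
```

When the sources diverge beyond the agreed tolerance, the M&E team goes back to the field rather than reporting any single figure as fact.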
8. Sharing of Data with Relevant Stakeholders: The data collected and the resulting analysis should be shared with all the relevant stakeholders. This ensures that the data collected and analyzed leads to real-time change and course correction on the ground, or to better planning and design of similar future projects. Data should always be collected for a particular purpose: to inform staff and stakeholders about the success and progress of the program, and to help staff make modifications in real time. The M&E plan should also include plans for internal dissemination among the program team, as well as wider dissemination among stakeholders and donors.
For example, a program team may want to review data on a monthly basis to make programmatic decisions and develop future work plans, while meetings with the donor to review data and program progress might occur quarterly or annually. Dissemination of printed or digital materials might occur at more frequent intervals. These options should be discussed with stakeholders and the M&E team to ascertain reasonable expectations for data review and to develop plans for dissemination early in the program.
Thus, in a nutshell, monitoring forms the lifeline of any project, as it ensures that the right decisions are taken at the right time to mitigate the risks and challenges facing the project in real time, while evaluation shows to what extent the overall targets and outcomes set during the project design phase have been met after implementation. (The author, Shariq Jamal, is a Programme Management professional.)