Top 10 Challenges of Monitoring and Evaluation (and how to tackle them)

Working in Monitoring and Evaluation (M&E) can be extremely challenging at first, as you navigate deep theory and complex implementation. This article covers a few basic challenges you might encounter, as well as how best to tackle them.

#1 Bad Data

There can be no doubt that the only thing worse than no data is bad data. And you will almost certainly come across a great deal of badly structured, unverifiable data as you embark on your M&E journey. Working with data is an important part of your M&E insights discovery work, and when designing systems for Monitoring and Evaluation, it is as important to work with the data as it is to understand why data is not being collected well and verified properly. Data is living: it provides little windows into what is actually taking place, and it needs to be cultivated over its lifespan, and with care. Build systems which minimise the potential for bad data: use dropdowns cleverly, research incentives, and create inbuilt systems for accountability and verification along the impact insight assembly line. Most of all, hold to what is fundamental, and be ready to shift tack if a certain KPI, or the means of collecting it, is just creating unnecessary effort and distraction. The more pre-work you do, the leaner your logic and the cleaner your data is likely to be.
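The ideas above (constrained dropdown values, range checks, built-in verification) can be sketched as simple validation rules. This is a minimal illustration only; the field names (`district`, `attendance`, `verified_by`) and allowed values are hypothetical, not from any particular M&E tool.

```python
# A minimal sketch of rule-based record validation for survey data.
# All field names and allowed values here are hypothetical examples.

ALLOWED_DISTRICTS = {"North", "South", "East", "West"}  # dropdown options

def validate_record(record: dict) -> list:
    """Return a list of problems found in a single survey record."""
    problems = []
    # Dropdowns at entry time prevent this class of error entirely.
    if record.get("district") not in ALLOWED_DISTRICTS:
        problems.append("unknown district")
    # Range checks catch impossible values before they reach analysis.
    attendance = record.get("attendance")
    if not isinstance(attendance, int) or not 0 <= attendance <= 100:
        problems.append("attendance out of range")
    # Built-in accountability: every record names who verified it.
    if not record.get("verified_by"):
        problems.append("no verifier recorded")
    return problems

records = [
    {"district": "North", "attendance": 42, "verified_by": "field manager"},
    {"district": "Nrth", "attendance": 110, "verified_by": ""},
]
report = {i: validate_record(r) for i, r in enumerate(records)}
```

Running checks like these as data arrives, rather than at analysis time, is what keeps the verification burden small.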

#2 Changing Mindsets

Many people and organisations that work in complex settings can, at times, be sceptical of approaches to measurement and evaluation. This is understandable, given that the work they do and the change they see each day often defy neat causation and predictability. Bringing people on an M&E learning journey is absolutely critical to your success. A broad knowledge of different approaches, particularly those able to properly consider uncertainty, unexpected outcomes and true novelty, will significantly benefit you as you aim to change mindsets and bring people together around the idea that M&E can give their work a whole new voice.

#3 Aligning Perspectives

Aligning the perspectives of impact and financial management is key to ensuring that your M&E is as useful as possible and able to bring both your funders and your philanthropists on board. This is not so much about conflicting priorities as it is about ensuring matched rigour and terminology as you measure and evaluate your work. We all want the same good things, after all. As you craft your M&E strategy, think as a social financier might. Anticipate these tensions: when you design a survey question, ask what it will show in terms of measurable social benefit. Building with this language in mind will go a long way to ensuring alignment at all times. Know your hypothesis. Plan with the end in mind. Terminology is everything!

#4 Data Collection

Given that you want to avoid the wasted effort behind bad data, ensuring that the data collection and verification systems you put in place produce great data means paying careful attention to how you collect the data you are going to use in M&E. There is a great deal of guidance on data collection, but the most important principle is consistency. If people use the same definitions, and collect data under the same, defined parameters, then the data will be useful. This is more than just defining your KPIs – it's about meeting with your teams regularly to see how they are interpreting what they see in front of them, and how this matches the way they interpret the indicators. The best way to ensure this consistency and coherence is to take a collaborative approach when building. Build and name indicators from what the team know to be true within that context. Be sure not to ask things for the sake of it! Keep systems lean. And be sure to build in ways of verifying the data: ensure a manager has oversight and can check data as it is being collected.
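A simple way to surface the consistency problem described above is to flag labels that drift from the agreed definitions across teams. The sketch below assumes hypothetical team names, indicator values and a canonical label set; it is an illustration of the idea, not a prescribed tool.

```python
# Sketch: flag non-standard category labels in each team's submissions
# for one indicator. Team names and labels are hypothetical examples.

canonical = {"employed", "unemployed"}  # the agreed indicator categories

submissions = {
    "team_a": ["employed", "unemployed", "employed"],
    "team_b": ["Employed", "jobless", "employed"],  # divergent labels
}

def nonstandard_labels(values):
    """Return the labels a team used that are not in the agreed set."""
    return sorted({v for v in values if v not in canonical})

# Only teams with at least one divergent label appear in the report.
issues = {team: nonstandard_labels(vals)
          for team, vals in submissions.items()
          if nonstandard_labels(vals)}
```

A report like `issues` is less a policing tool than an agenda for the regular team meetings the section recommends: each flagged label is a conversation about how people are interpreting the indicator.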

#5 Resource Constraints

Doing M&E well takes time and committed people. As a programme grows it might also involve skilled teams – teams to collect and verify data, teams to analyse it, and strong leaders to channel insights to where they need to go. On top of this, building good M&E systems might entail technical solutions and systems for information management. The good news is that funding proposals can go into quite a lot of detail about the M&E you are proposing, and funders will not only consider funding your M&E learning journey but will appreciate seeing it in your bid for funding the programme itself. Do your homework and propose M&E (including the cost of generating the results) which will be most likely to show your impact and generate learning. Understand and articulate the resource needs for the M&E of your programme, and don't underestimate them. Scope, scope and scope again! After a pilot run with well-resourced M&E you should have no trouble using your insights to grow your funding base and your work.

#6 Designing your hypothesis

So, you have a great programme, and you want to measure its impact. The key question then is what and how to measure. Core to this is understanding M&E as a learning strategy. You will no doubt be implementing a programme, or providing developmental support, based on a certain understanding of the way the world works, and of why doing what you do will solve the problem you aim to address. Starting with your impact statement and your log frame, design a hypothesis which can remain the North Star as you then engage in complex M&E practice. A good knowledge of the subject matter surrounding your project will best enable you to ask the right questions, to make the right assumptions, and to guide you as to which stones never to leave unturned as you build a system to populate the evidence base that will answer your key questions about why and how your programme works.

#7 Closing systems, stopping learning

Remember the old adage that change is the only constant? It is worth keeping in mind as you embark on your M&E practice. When a programme has been running well, you might find the most efficient means of scale is simply to replicate (indeed this is a kind of ideal), with larger and larger numbers – the programme, as well as its M&E. But always stay open to what is truly novel. Education development research names 'the cult of efficiency' as a threat to truly developmental work! Take note of small changes and reported outcomes you never expected, and build in periodic surveys on the M&E process itself to check that it is not missing key nuggets of information, important for programmatic development, which might have been lost in the initial formalisation stages. Never stop asking questions. Never stop exploring. And never stop listening to the implementers and beneficiaries on the ground.

#8 Statistical Challenges under conditions of uncertainty

M&E is rooted in social systems, and with all this complexity, the risks of mistaking correlation for causation, or of finding just enough evidence to 'prove' a false hypothesis true, are real. It would be great if we could all afford a resident statistician, but even they may not pick up a false correlation in the complex world of development, with its cultures, practices, contradictions and mind-changing humans. Always be aware of the limitations of overly quantitative approaches in development practice. A powerful RCT might springboard you into new boardrooms, with new brains leading new funds, but precisely how your change takes place may not be something the statistics can show! The best solution is a mixed-methods approach. Know which approaches to use when, be sure to really unpack the 'black box' between output and impact measurement, and be open to not knowing what you don't know!
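The risk of a false correlation slipping through is easy to demonstrate with a toy simulation: when you track many indicators against one outcome on a small sample, some will correlate strongly by pure chance. This is an illustration only, with invented random data, not a real analysis.

```python
# Toy demonstration: with enough unrelated indicators, at least one will
# correlate strongly with an outcome purely by chance. All data is random.

import random
import statistics

random.seed(0)  # fixed seed so the demonstration is reproducible

def pearson(x, y):
    """Population Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

n = 20  # a small sample, as is common in pilot programmes
outcome = [random.gauss(0, 1) for _ in range(n)]
# 200 indicators, each completely independent of the outcome
indicators = [[random.gauss(0, 1) for _ in range(n)] for _ in range(200)]

best = max(abs(pearson(ind, outcome)) for ind in indicators)
# `best` is typically sizeable despite there being no real relationship
```

This is one face of the multiple-comparisons problem: the more indicators you screen, the more impressive your best chance correlation will look, which is exactly why pre-registered hypotheses and mixed methods matter.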

#9 Generating Insights

This is a great challenge to have, as it means you already have the data you need, in a clean enough format, that you're beginning to ask questions about what the data is telling you. It is also a difficult task, and requires that you start to bring in some theories or comparable case examples to put the data through the right analyses. As you draw insights from the data, it is a good idea to ensure that you have a good understanding of the context, and of what is important to key stakeholders. Seek to answer key questions, while also looking for what the data might be showing that you never intended it to. Once you have generated some insights, if you are not running an independent evaluation, run them by key stakeholders to see what light they may shine on your findings. Insights never sleep – keep a page in your notebook open for a list of things that 'pop' as you go about your daily M&E grind. And the best source of insights is the implementers and beneficiaries themselves. Stay engaged, especially when things are running smoothly!

#10 Impact Washing

Impact washing is more easily done than we realise. With the best of intentions, you may settle on a positive result and report it without fully understanding what it means, or whether it can be verified. Key to avoiding impact washing is to be sure that you never overstate your impact. Always check your results with other stakeholders and be conservative. When picking financial proxies, be extremely rigorous and remember the old saying: when in doubt, leave it out.

Did you find this article useful? Support our work and download all templates.

About Angela Biden

Angela Biden is a consulting strategist and M&E consultant. She has worked across a range of development and business contexts. She holds a Master's in Economics and Philosophy, and has worked at the nexus of M&E and social impact, helping those doing good do more of it, for some 15 years. From policy boardrooms, to tech start-ups, to grassroots NGOs working in the face of the world's most abject challenges, Angela is focused on conducting relevant and meaningful M&E: fit for purpose, realistic, and useful for stakeholders creating positive change.