Building an Evaluative Mindset at Hallam #5: Evaluation Implementation
This is the fifth blog post in a series contextualising the various sections of the Office for Students’ Evaluation Framework; this post focuses on the implementation of an evaluation. It explores evaluation planning, data collection and the appropriateness of data collection mechanisms, ethical practice and resourcing.
‘Don’t know where we’re going, Got no way of knowing, Driving on the road to nowhere’…the importance of an evaluation plan
An evaluation plan has been referred to as a ‘road map’ that sets out the ‘what’, ‘how’ and ‘when’ of the evaluation, which helps to clarify what you need to prioritise and plan in terms of resources, time and skills. It is also a ‘live document’ in the sense that it needs to be continually updated, especially as not all aspects of the evaluation may happen as originally intended.
There are many tools available to help evaluators think about the key questions to answer when developing a plan. As part of the Student Engagement Evaluation framework, Thomas and TSEP (2017) advise that an evaluation plan should address the following:
- What are the key evaluation activities?
- Who will lead them?
- Who else will be involved and what is their role?
- What resources or support (e.g. staff, time, budget) are required?
- When will key activities take place?
- What will be the outputs of each activity?
- How will the evaluation team work together?
- What arrangements are in place for using the results, such as the dissemination and development of recommendations? (This question has been added to the original list provided by Thomas and TSEP).
Other recommended tools include the Roles-Outcomes-Timing-Use-Resourcing (ROTUR) framework (Parsons, 2017), which was outlined in the previous blog post in this series, and the comprehensive guidance produced by Better Evaluation.
Contingencies: ‘Putting plan A plan B plan C into action’
Undertaking a risk assessment and building contingency options into planning can leave evaluators better positioned to adapt to changing circumstances (Reed, 2020). A range of potential risks to consider within an evaluation plan are shown below, with a nod to some of the prominent issues that were highlighted in 2020 (and beyond):
Risk | Considerations and mitigations |
---|---|
Ethical risks to the wellbeing, privacy and confidentiality of participants, stakeholders and evaluators | – A key ethical consideration evaluators might be faced with is determining whether it is appropriate to continue with the evaluation activity during a crisis. – It is necessary to understand what impact a crisis is having on participants and stakeholders, especially those who are the most marginalised, to ensure that the evaluation will not cause any harm, as pre-existing inequalities might be exacerbated. – Participant involvement in the evaluation planning, design and decision-making will enable their views to be represented. – There is growing recognition of the need to prioritise the well-being of evaluators/researchers alongside the safety and care of participants (Boynton, 2020). |
Different needs and capabilities | – The needs and capabilities of participants, stakeholders and evaluators may need to be reassessed. – Patton (2020) urges evaluators to be proactive when thinking about the impact of a crisis, for example, by working with stakeholders to initiate change and, if necessary, to amend an evaluation’s theory of change, evaluation design, implementation and/or timelines. – Kara and Khoo (2020b, 2020c, 2020d) have edited three books presenting a range of examples of how researchers have adapted to the Covid-19 pandemic, such as by: re-assessing aims; utilising existing systems and secondary data; and collecting primary data using techniques that are ‘non-intrusive’ for participants. |
Barriers to data collection | – A crowdsourced document has been developed to provide ideas for those who might need to consider alternative methods to ‘in-person’ approaches (Lupton, 2020). – Presenting numerous options for participation can help promote flexibility, autonomy and lessen fears about taking part, with asynchronous methods enabling participants to take part in their own time and to edit their responses (Partlow, 2020). – It is important to consider the potential risks of remote methods for sampling, transparency and inclusivity (Kara & Khoo, 2020a), for example, issues of convenience sampling and the potential exclusion of participants and communities, such as those who have no or limited access to the internet, devices and mobile data. |
Adjustments to resources (e.g. staff, budget) | – Mitigating for any changes in staff in the project team, such as through job changes or illness, can help to minimise disruption to project timelines and any detriment to the quality of the evaluation. Ensuring that the required skill set is shared across the project team will avoid any single points of dependency and cover any absences. – If there are any areas you need to build capacity in, ensure there is enough time for staff to learn these skills, or consider drawing on support that is available within the institution or the sector. – If you have a budget, Better Evaluation (2020) advocate ensuring that there is flexibility within it to take into account any changes that might occur. There is the potential to reduce costs further by focusing on capturing evidence that is pivotal to the evaluation, as opposed to data that is only potentially useful. |
Data collection principles and examples of application
There is an abundance of advice across the sector that focuses on data collection. Key summary points are shown below, followed by an example of how these principles were applied to a project at Sheffield Hallam:
- Collect data that is relevant to your evaluation needs (Thomas & TSEP, 2017). At the point of programme design, use the key questions and indicators of the evaluation to inform your decision-making about what data you need to collect, from whom and when, in addition to how it will be analysed.
- As part of a UNICEF guide to evaluation, Peersman (2014) advises to ‘start the data collection planning by reviewing to what extent existing data can be used’, before filling in any gaps with new data. Austen (2018) provides an overview of some of the existing sources of evidence within an institution.
- Individual-level data is more useful than whole-cohort data (Centre for Social Mobility, 2019), for example: for tailoring the targeting of activities (if applicable); for monitoring purposes to assess progress against targets; and for the impact evaluation, to see what difference the activity is having at various levels (individual, sub-group, cohort).
- The sample size will affect the inferences you can make from the results. Larger sample sizes are needed if the aim is to generalise findings from participants to the wider population, whereas smaller sample sizes could be sufficient if the aim of the evaluation is to describe the findings of participants. For those collecting quantitative data, a short guide has been published by the Poverty Action Lab (2018) about determining sample size and statistical power; an illustrative calculation is sketched after this list.
- Tracking participants can help to measure the long-term impact of an activity; tracking data could be collected directly from participants, such as via a survey, or from data sets held by partners, such as UCAS and HESA (Centre for Social Mobility, 2019).
- Consider involving students in the data collection process and throughout the other phases of the evaluation, with appropriate training and support given. This has the potential to empower groups and communities and capture ‘insider’ perspectives in Higher Education (Kara, 2020).
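To illustrate the sample size point above, the sketch below shows the kind of calculation the Poverty Action Lab guide discusses. It is a minimal, illustrative example only, not a prescribed method: it assumes a simple two-group comparison and uses the Python statsmodels library to estimate how many participants per group would be needed to detect a hypothesised standardised effect size at conventional significance and power levels.

```python
# Illustrative sketch: estimating the sample size needed per group
# for a two-group comparison, assuming a hypothesised effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical inputs: a small-to-medium effect (Cohen's d = 0.3),
# 5% significance level and 80% power -- conventional defaults, not prescriptions.
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)

print(f"Participants needed per group: {round(n_per_group)}")
```

In practice, the effect size, significance level and power should reflect the evaluation’s own questions, the scale of the activity and the decisions the results will inform, rather than the default values used here.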