Building an Evaluative Mindset at Hallam #5: Evaluation Implementation
This is the fifth blog post in a series contextualising the various sections of the Office for Students’ Evaluation Framework, and it focuses on the implementation of the evaluation. The post explores evaluation planning, data collection and the appropriateness of data collection mechanisms, ethical practice and resourcing.
‘Don’t know where we’re going, Got no way of knowing, Driving on the road to nowhere’…the importance of an evaluation plan
An evaluation plan has been referred to as a ‘road map’ that sets out the ‘what’, ‘how’ and ‘when’ of the evaluation, which helps to clarify what you need to prioritise and plan in terms of resources, time and skills. It is also a ‘live document’ in the sense that it needs to be continually updated, especially as not all aspects of the evaluation may happen as originally intended.
There are many tools available to help evaluators think about the key questions to answer when developing a plan. As part of the Student Engagement Evaluation framework, Thomas and TSEP (2017) advise that an evaluation plan should address the following:
- What are the key evaluation activities?
- Who will lead them?
- Who else will be involved and what is their role?
- What resources or support (e.g. staff, time, budget) are required?
- When will key activities take place?
- What will be the outputs of each activity?
- How will the evaluation team work together?
- What arrangements are in place for using the results, such as the dissemination and development of recommendations? (This question has been added to the original list provided by Thomas and TSEP).
Other recommended tools include the Roles-Outcomes-Timing-Use-Resourcing (ROTUR) framework (Parsons, 2017), which was outlined in the previous blog post in this series, and the comprehensive guidance produced by Better Evaluation.
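Purely as an illustration, the plan questions above can also be captured as a lightweight structured record, which makes gaps easy to spot before the evaluation begins. The sketch below is a hypothetical Python representation; the field names and the example entry are illustrative and are not part of the Thomas and TSEP framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationActivity:
    """One entry in an evaluation plan, loosely mirroring the
    Thomas and TSEP (2017) questions (field names are illustrative)."""
    activity: str            # What is the key evaluation activity?
    lead: str                # Who will lead it?
    contributors: List[str]  # Who else is involved, and in what role?
    resources: List[str]     # Staff, time and budget required
    timing: str              # When will it take place?
    outputs: List[str]       # What will the activity produce?
    use_of_results: str = "" # Dissemination and development of recommendations

# A hypothetical plan entry, for illustration only
plan = [
    EvaluationActivity(
        activity="Baseline survey of participants",
        lead="Evaluation lead",
        contributors=["Student researchers (data collection support)"],
        resources=["Two days of staff time", "Survey platform licence"],
        timing="Start of semester one",
        outputs=["Cleaned baseline dataset", "Short summary report"],
    )
]

# Simple gap check: flag any activity with blank fields
# (here 'use_of_results' has been left empty, so it is reported).
for entry in plan:
    missing = [name for name, value in vars(entry).items() if not value]
    if missing:
        print(f"'{entry.activity}' still needs: {', '.join(missing)}")
```

Keeping the plan in an explicit structure like this also makes it easier to treat it as a ‘live document’ and update it as circumstances change.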
Contingencies: ‘Putting plan A plan B plan C into action’
Undertaking a risk assessment and building contingency options into planning can leave evaluators better positioned to adapt to changing circumstances (Reed, 2020). A range of potential risks to consider within an evaluation plan are shown below, with a nod to some of the prominent issues that were highlighted in 2020 (and beyond):
Risk | Considerations and mitigations |
---|---|
Ethical risks to the wellbeing, privacy and confidentiality of participants, stakeholders and evaluators | – A key ethical consideration evaluators might be faced with is to determine whether it is appropriate to continue with the evaluation activity during a crisis. – It is necessary to understand what impact a crisis is having on participants and stakeholders, especially those who are the most marginalised, to ensure that the evaluation will not cause any harm as pre-existing inequalities might be exacerbated. – Participant involvement in the evaluation planning, design and decision-making will enable their views to be represented. – There is growing recognition of the need to prioritise the wellbeing of evaluators/researchers alongside the safety and care of participants (Boynton, 2020). |
Different needs and capabilities | – The needs and capabilities of participants, stakeholders and evaluators may need to be reassessed. – Patton (2020) urges evaluators to be proactive when thinking about the impact of a crisis, for example, by working with stakeholders to initiate change and, if necessary, to amend an evaluation’s theory of change, evaluation design, implementation and/or timelines. – Kara and Khoo (2020b, 2020c, 2020d) have edited three books presenting a range of examples of how researchers have adapted to the Covid-19 pandemic, such as by: re-assessing aims; utilising existing systems and secondary data; and collecting primary data using techniques that are ‘non-intrusive’ for participants. |
Barriers to data collection | – A crowdsourced document has been developed to provide ideas for those who might need to consider alternative methods to ‘in-person’ approaches (Lupton, 2020). – Presenting numerous options for participation can help to promote flexibility and autonomy and to lessen fears about taking part, with asynchronous methods enabling participants to take part in their own time and to edit their responses (Partlow, 2020). – It is important to consider the potential risks of remote methods for sampling, transparency and inclusivity (Kara & Khoo, 2020a), for example, issues of convenience sampling and the potential exclusion of participants and communities, such as those who have no or limited access to the internet, devices and mobile data. |
Adjustments to resources (e.g. staff, budget) | – Planning for changes in staffing within the project team, such as through job changes or illness, can help to minimise disruptions to project timelines and any detriment to the quality of the evaluation. Ensuring that the required skill set is shared across the project team will avoid any ‘single points of dependency’ and cover any absences. – If there are any areas you need to build capacity in, ensure there is enough time for staff to learn these skills, or consider drawing on support that is available within the institution or the sector. – If you have a budget, Better Evaluation (2020) advocate ensuring that there is flexibility within it to take into account any changes that might occur. There is the potential to reduce costs further by focusing on capturing evidence that is pivotal to the evaluation, as opposed to data that is only potentially useful. |
Data collection principles and examples of application
There is an abundance of advice across the sector that focuses on data collection. Key summary points are shown below, followed by an example of how these principles were applied to a project at Sheffield Hallam:
- Collect data that is relevant to your evaluation needs (Thomas & TSEP, 2017). At the point of programme design, use the key questions and indicators of the evaluation to inform your decision-making about what data you need to collect, from whom and when, in addition to how it will be analysed.
- As part of a UNICEF guide to evaluation, Peersman (2014) advises to ‘start the data collection planning by reviewing to what extent existing data can be used’, before filling in any gaps with new data. Austen (2018) provides an overview of some of the existing sources of evidence within an institution.
- Individual-level data is more useful than data for whole cohorts (Centre for Social Mobility, 2019), for example: for tailoring targeting of activities (if applicable); for monitoring purposes to assess progress against targets; and for the impact evaluation to see what difference the activity is having at various levels (individual, sub-group, cohort).
- The sample size will affect the inferences you can make from the results. Larger sample sizes are needed if the aim is to generalise findings from participants to the wider population, whereas smaller sample sizes could be sufficient if the aim of the evaluation is to describe the findings for participants. For those collecting quantitative data, a short guide has been published by the Poverty Action Lab (2018) about determining sample size and statistical power (an illustrative calculation is shown after this list).
- Tracking participants can help to measure the long-term impact of an activity; tracking data could be collected directly from participants, such as via a survey, or from data sets collected by partners, such as UCAS and HESA (Centre for Social Mobility, 2019).
- Consider involving students in the data collection process and throughout the other phases of the evaluation, with appropriate training and support given. This has the potential to empower groups and communities and capture ‘insider’ perspectives in Higher Education (Kara, 2020).
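To illustrate the point about sample size and statistical power, the sketch below shows a standard power calculation for comparing two groups on a continuous outcome. It is a minimal, hypothetical example: the effect size, significance level and power values are assumptions chosen for illustration, not recommendations for any particular evaluation.

```python
# Illustrative sample size calculation for comparing two groups
# (e.g. participants vs a comparison group) on a continuous outcome.
# The effect size, alpha and power below are assumed values for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,         # standardised difference (Cohen's d) we want to detect
    alpha=0.05,              # significance level
    power=0.8,               # probability of detecting the effect if it is real
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)
print(f"Approximate sample size needed per group: {n_per_group:.0f}")
```

With these assumed values the calculation suggests roughly 175 participants per group; smaller effect sizes or higher desired power would increase this figure, which is why being clear about the aim of the evaluation matters before data collection begins.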
References and Further Reading
Austen, L. (2018, February 18). ‘It ain’t what we do, it’s the way that we do it’ – researching student voices. Wonkhe. https://wonkhe.com/blogs/it-aint-what-we-do-its-the-way-that-we-do-it-researching-student-voices/
Austen, L. (n.d.). Ethical Checklist. Enhancement Themes. https://blogs.shu.ac.uk/steer/files/2020/07/Ethical-checklist.docx
Better Evaluation. (2014). Rainbow Framework. https://www.betterevaluation.org/sites/default/files/Rainbow%20Framework.pdf
Boynton, P. (2020). Do the best you can: researcher safety in a pandemic. In Kara, H., & Khoo, S. M. (Eds.), Researching in the Age of COVID-19 Volume II: Care and Resilience (pp. 119-127). Policy Press.
Centre for Social Mobility. (2019). Using standards of evidence to evaluate impact of outreach. https://www.officeforstudents.org.uk/media/f2424bc6-38d5-446c-881e-f4f54b73c2bc/using-standards-of-evidence-to-evaluate-impact-of-outreach.pdf
Collingridge, D. (2014, September 22). Validating a Questionnaire. MethodSpace. https://www.methodspace.com/validating-a-questionnaire/
Informal Science. (n.d.). Evaluation Tools and Instruments. https://www.informalscience.org/evaluation/evaluation-tools-instruments
Kara, H. (2020). Creative research methods: A practical guide (2nd ed.). Policy Press.
Kara, H., & Khoo, S. M. (2020a, October 26). How the pandemic has transformed research methods and ethics: 3 lessons from 33 rapid responses. LSE. https://blogs.lse.ac.uk/impactofsocialsciences/2020/10/26/how-the-pandemic-has-transformed-research-methods-and-ethics-3-lessons-from-33-rapid-responses/
Kara, H., & Khoo, S. M. (Eds.). (2020b). Researching in the Age of COVID-19 Volume I: Response and Reassessment. Policy Press.
Kara, H., & Khoo, S. M. (Eds.). (2020c). Researching in the Age of COVID-19 Volume II: Care and Resilience. Policy Press.
Kara, H., & Khoo, S. M. (Eds.). (2020d). Researching in the Age of COVID-19 Volume III: Creativity and Ethics. Policy Press.
Lupton, D. (Ed.). (2020). Doing fieldwork in a pandemic (crowd-sourced document). https://docs.google.com/document/d/1clGjGABB2h2qbduTgfqribHmog9B6P0NvMgVuiHZCl8/mobilebasic
Macfarlan, A. (2020, April 21). Adapting evaluation in the time of COVID-19 – Part 1: MANAGE. Better Evaluation. https://www.betterevaluation.org/en/blog/adapting-evaluation-time-covid-19-part-1-manage
Ozkan, S., & Koseler, R. (2009). Multi-dimensional students’ evaluation of e-learning systems in the higher education context: An empirical investigation. Computers & Education, 53(4), 1285-1296.
Parsons, D. (2017). Demystifying evaluation: Practical approaches for researchers and users. Policy Press.
Partlow, E. (2020). Prioritizing inclusion, ethical practice and accessibility during a global pandemic: the role of the researcher in mindful decision making. In Kara, H., & Khoo, S. M. (Eds.), Researching in the Age of COVID-19 Volume II: Care and Resilience (pp. 119-127). Policy Press.
Patton, M. (2020, March 23). Evaluation Implications of the Coronavirus Global Health Pandemic Emergency. Blue Marble Evaluation. https://bluemarbleeval.org/latest/evaluation-implications-coronavirus-global-health-pandemic-emergency
Peersman, G. (2014). Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence. https://www.unicef-irc.org/publications/pdf/brief_10_data_collection_analysis_eng.pdf
Poverty Action Lab. (2018). Six rules of thumb for determining sample size and statistical power. https://www.povertyactionlab.org/sites/default/files/research-resources/2018.03.21-Rules-of-Thumb-for-Sample-Size-and-Power_0.pdf
Reed, M. (2020, April 25). Reflex or Reflection: Three Lessons for Evaluators Amid COVID-19. AEA365. https://aea365.org/blog/reflex-or-reflection-three-lessons-for-evaluators-amid-covid-19-by-martena-reed/
Student Engagement Evaluation and Research. (n.d.). Research and Evaluation Ethics. https://blogs.shu.ac.uk/steer/evaluation/ethics/
Thomas, L. (2017). Evaluating student engagement activity: Report, evaluation framework and guidance. The Student Engagement Partnership. http://tsep.org.uk/wp-content/uploads/2017/06/Student-Engagement-Evaluation-Framework-and-Report.pdf
UK Evaluation Society. (2019). Guidelines for Good Practice in Evaluation. https://www.evaluation.org.uk/app/uploads/2019/04/UK-Evaluation-Society-Guidelines-for-Good-Practice-in-Evaluation.pdf
Willis, G. (2004). Cognitive interviewing: A tool for improving questionnaire design. Sage Publications.