Using Artificial Intelligence in Teaching and Learning Practices

The use of artificial intelligence for educational purposes has become a central topic of conversation in universities across the UK and beyond. The University of Manchester’s new initiative aims to support learning, research and teaching by creating an equitable pedagogical future for all students and staff, recognising how important and prevalent these AI tools will be in the future of higher education.
To promote this change, The University of Manchester has become the first UK university to provide Microsoft 365 Copilot access and training to all students and staff. Going beyond this, the Faculty of Science and Engineering seeks to spotlight nominated staff members who are already applying AI in their work and influencing teaching and learning practices across the faculty.
In a series of articles this semester, you will learn about case studies on the effectiveness of AI, ranging from individualised exam feedback to rapid course analysis of large data sets, all carried out in line with the University’s policies on the responsible use of AI.
Today’s issue features Charles Walkden (Reader in Pure Mathematics) and Stephie Tsai (Senior Lecturer in Strategic Management) and their experience with using Microsoft 365 Copilot for different purposes.
Charles discusses his use of this AI tool to generate alternative text (alt text) for images and graphs in Maths. UK university websites, including content within a Virtual Learning Environment (VLE) system, are legally required to comply with the Web Content Accessibility Guidelines (WCAG) 2.2. As alt text for images is a key part of this, it is vital we find the best and most efficient way of providing it in our daily practices.
Charles speaks of the benefits and limitations of Copilot when it comes to complex tasks such as this one.
Can you tell me a bit about what you were trying to do?
I co-teach a large 1st year course in Mathematics. Over the summer, I was uploading content onto Canvas and there’s a large number of very technical images and diagrams. Most of them are more complicated than just graphs of functions: it’s things like diagrams representing subsets of the complex plane, diagrams to motivate technical definitions, such as the definition of continuity of functions, and diagrams to illustrate how a counter-example or a proof works. To make the content as accessible as possible, I wanted to create alternative text for these images to be uploaded onto Canvas.
Creating alt text can be quite a time-consuming task, so finding a quicker way to do this would be really helpful! Please could you tell me more?
I first tried writing the alt text by hand, but it was taking far too long. Instead I tried using Copilot to create at least the first draft of an alt text. It was then much quicker to edit that into something suitable.
What prompts did you use?
Copilot loves making assumptions about an image and will often give its own interpretation that could be very far from what was intended. It worked considerably better if I told it the context in advance. Copilot would remember the context for other images in the same conversation, which meant I didn’t have to keep explaining the context each time.
So Copilot remembers previous context, which helps generate future alt text… Interesting! What were the results like?
To be honest, mixed. Copilot only allows you to upload one image at a time, so I couldn’t do it in batch mode. Some of the alt texts it produced were good and only needed minor editing by me. Others were too long or missed the point, and for these it was often easier to write something from scratch myself. I’m probably happy with about 70% of what it produced, but I’m now more confident about going back and fixing the bits that need improving when I have time, as well as creating alt text for my Semester 2 unit.
I’d like to know more – do you have any suggestions for useful resources?
There is good general advice on creating accessible learning material on Canvas on the FSE Getting Started with Canvas website. When it comes to good practice specific to Mathematics (though a lot is relevant to any scientific or highly numerate discipline), the London Mathematical Society have a collection of links to resources. I also wish I’d found this resource – Complex Images – Making Sense for Accessibility – and, in particular, its workflow before I started: it really clarifies how to think about and write good alternative text.
Despite Copilot’s limitations when it came to accuracy, the AI tool provided Charles with a useful starting point for generating complex alternative text for mathematical images.
Stephie, meanwhile, shared her experience of using Copilot to generate assessment feedback for students. The main benefit of using this AI tool in a multi-marker context was more consistent marking, which in turn improved student confidence at scale.
What are the benefits of using Copilot?
We developed and used a shared Copilot agent to support text feedback for an assessment with 320 submissions marked by four staff. The most distinctive benefit in a multi-marker context is improved consistency and perceived fairness. Because all markers worked from the same agent, which embedded the assessment brief, marking criteria and an agreed feedback structure, students were less likely to experience marker-to-marker variation in tone, level of detail, or how feedback was framed against the rubric. This matters especially at scale, where small differences between marker groups can feel significant to students.
Beyond the number of markers or cohort size, the tool also supported productivity and feedback quality. It helped convert markers’ brief comments into a coherent piece of written feedback more efficiently, and encouraged clearer signposting of strengths, areas for improvement, and feedforward.
Sounds very beneficial but I imagine the setup was complex. How long did it take you to develop this tool?
The development was practical and iterative rather than a one-off build. We started with a marker meeting to align on the rubric and what we wanted feedback to look like for students, including agreeing a broadly consistent feedback length and structure. Once that shared approach was agreed, the unit coordinator built and calibrated the Copilot agent through a small number of trials.
Were these the results you were hoping for?
Overall, our experience broadly matched what we hoped for: improved efficiency in drafting and stronger consistency across a large cohort. The workflow was designed so that academic judgement remained central: markers first noted their own key feedback points for each submission, and the Copilot agent then used those points to draft a structured narrative in the agreed format. This kept responsibility with markers while reducing the effort of repeatedly composing polished text from scratch.
So, it rendered the process of providing feedback more efficient?
In terms of time saving, the biggest gains were in reducing blank-page time and speeding up the production of well-structured feedback in a professional and polished tone, although the benefit was not uniform. For weaker answers in particular, markers often needed to tailor the draft further to pinpoint exactly where the work fell short against the criteria and to make the guidance sufficiently specific and actionable. In terms of consistency, the shared agent helped reduce variation between marker groups and supported student-perceived fairness.
With that in mind, what are the shortcomings you have found with using this tool?
A key limitation was the limited transparency and control in a shared-agent model. Markers could all use the same Copilot agent, but they could not see or adjust how it had been configured. This made it harder to diagnose why outputs sometimes differed across contexts and constrained fine-tuning for individual markers.
For continuous improvement, we would share configuration notes and the setup rationale alongside the developed agent. This would support light-touch calibration check-ins during marking while maintaining consistency across markers. It would also facilitate embedding and applying effective feedback practices, ensuring that guidance remains clearly linked to assessment criteria, […] and consistently focuses on constructive feedforward rather than generic remarks.
What do you see for the future of this tool?
Moving forward, we see value in treating the agent as shared infrastructure for assessment feedback, particularly for units with multiple markers. The core idea is to maintain a common baseline that reflects agreed expectations on structure, tone and length, while creating a clearer process for maintaining consistency as marking progresses. We also see strong potential in transferring lessons learned from one agent and assessment to another.
In practice, that means building a reusable approach that can support both agent setup and feedback practice: using a consistent framework for embedding the assessment brief and criteria, specifying the intended feedback length and structure, and guiding markers towards constructive, structured comments.
Both Charles and Stephie’s experiences show that Copilot can streamline time-consuming academic tasks, such as generating first-draft alt text for complex mathematical diagrams and producing consistent, well-structured feedback across multiple markers, while still requiring expert oversight to refine outputs, ensure accuracy, and manage limitations in transparency and control.
We’ll continue to share more case studies on the current uses of AI within FSE in our upcoming newsletters – so stay tuned!