Using Artificial Intelligence in Teaching and Learning Practices

For our second edition of Using Artificial Intelligence in Teaching and Learning Practices, we spoke to Cesare Giulio Ardito, Lecturer in the Department of Mathematics, and Alison Harvey, Katie Moore, and Dirk Engelberg from the Department of Materials to gather more thoughts on the use of artificial intelligence (AI) in their teaching and learning practices.
In the last issue, we discovered the advantages of AI in producing complex alternative text for mathematical images and in delivering personalised assessment feedback to students. With these benefits, however, come shortcomings. The academics we spoke to previously shared their thoughts on the faults of artificial intelligence: it was not always accurate, and so fell short in terms of efficiency. This lack of accuracy remains a concern, as Cesare Giulio Ardito discusses, remarking on the danger of miscommunication. In this issue, we hear about the various ways in which AI is employed in higher education teaching and learning practices, noting both the advantages and the disadvantages.
Cesare Giulio Ardito shares his experience of using AI
Lecturer Cesare Giulio Ardito shares with us his experience of using AI, as well as his thoughts on how students should be using it. Cesare’s view of AI is largely positive: he hails it as a useful tool for accelerating everyday tasks. He encourages students to take advantage of having access to it where possible. However, he also notes that self-discipline and integrity are key, and that students should be guided by their lecturers and taught how to use it correctly.
Tell us, what AI tools do you use, and for what purpose?
I use ChatGPT, Microsoft Copilot, and Claude daily for a very wide range of tasks, and Gemini for image generation. Truth be told, the list keeps growing: as I think about what I do week to week, more use-cases come to mind. At a high level, I use these tools as general-purpose assistants to move quickly in the drafting phase of each project and then I apply my own judgement to verify, refine, and align the result with the context and standards of my role.
In practice, this includes: drafting and improving text; troubleshooting practical issues as they arise; supporting teaching development by prototyping explanations, examples, and lesson structures to compare options efficiently; generating reports from available online sources, or searching for information in books, newspapers, or across the web; organising my work; and editing text in any way that comes to mind, whether modifying its tone or voice, translating it, or changing its format.
What do you find are the benefits and the shortcomings of AI use in higher education?
Benefits:
For me, the benefit is simple: it is excellent. In many day-to-day contexts it is better than most human assistants I could realistically access, and not only because it is faster. The combination of speed, breadth, and iteration means it multiplies my productivity, and not by a low factor: I can draft, test, refine, and reframe work at a pace that would otherwise be impossible. The net effect is that I am a much more productive person, and I can spend more of my time on the parts of the job that require subject expertise, judgement, and responsibility.
Shortcomings:
The main risk is the temptation to offload the thinking rather than using AI to support it. In practice this means you have to actively check that the output really reflects what you intended, rather than what the model decided was a “reasonable” direction. There are, of course, straightforward errors: occasionally the model is simply wrong, and it can also hallucinate details with confidence. A large fraction of issues, in my experience, come from miscommunication. The tool cannot read my mind, and if I am not explicit about the task, audience, constraints, or standard of evidence, or I have some internal assumptions that I fail to communicate, it will fill the gaps. Moreover, it is not just mistakes that I worry about: sometimes the model will produce something that looks polished but does not say what I wanted it to say. When that happens, I need to acknowledge it and start over, or it stops being my voice and the AI ends up in the driver’s seat. Good AI usage takes a genuine, strong commitment to integrity.
I encourage colleagues and students to experiment deliberately and build a personal workflow. For students in particular, I emphasise the fine line: AI can be valuable to get unstuck, to see an explanation in a different way, or to support revision, but they still need to genuinely learn the material and be able to perform independently.
What do you suggest are some points to look out for when using AI in this context?
The single most important point is communication. You have to tell students what you are doing, why you are doing it, what you think they should do, and why. That means being explicit about what is permitted, what must be their own work, and what “good” use looks like in your course. Let me remind everyone that there is no way, and there will never be a way, to police AI use through detection tools. So the practical route is the same one we learned with earlier “inevitable” tools: teach students how to use it well, what to trust, what not to trust, and how to verify.
I trialled this explicitly this year in a Foundation Year course. After securing the assessment, I wrote a short AI statement, and then unpacked it in the first lecture. I told students, frankly, that modern LLMs are on average better than they can possibly become at the content of the syllabus, and hence that it is okay to use the tool as support, but to be mindful of certain aspects: first, failure modes and verification. Second, assessment: there will be an in-person exam where AI is not available, so they should be incentivised to learn properly. Third, the learning remains worthwhile even if “the model can do the exercise”: we still need human competence and judgement, and we should be honest about why. We do not yet know whether AI can reliably generate genuinely novel advanced ideas, and even if it can, we still need independent human experts to judge correctness, or we end up in Borges’ Library of Babel.
A related point is not to lose track of good pedagogy. If something is educationally appropriate, keep doing it. I have seen many approaches where people propose changing the assessment from doing the exercise to asking AI to do it and then critically evaluate the output. That shifts the activity away from the intended learning objectives and can end up assessing something else entirely. We should not discard valuable teaching practices simply because they are now AI-vulnerable.
Finally, treat this as an ongoing, iterative process. We cannot pretend AI is not there and we should not default to invasive monitoring of student behaviour. Our job is to steer students towards good usage and simultaneously to safeguard assessment so that it remains a meaningful measure of learning.
Moving forward, how do you think we should approach AI in higher education?
I’m treating AI literacy as part of “how to study maths” rather than as an optional add-on. In the foundation course mentioned above, this worked well in practice: students referenced using AI for support, and some even sent a few model errors to me.
More broadly, I think we should not keep our heads in the sand, and I think we should keep avoiding (or start avoiding, if you haven’t) easy, flawed solutions such as AI detectors or invasive surveillance. At the same time, we should reject the idea that we need to restart from scratch and throw away centuries of pedagogy just because AI is disruptive.
So the first pillar is good, explicit communication. The other pillar is robust assessment. In-person assessment is the closest thing to a silver bullet here and should be prioritised and used whenever possible. Sure, there are drawbacks, and we were very happy to emphasise those when Covid forced us online, but many of those drawbacks can be mitigated with good design and support without giving up the in-person element. We can still innovate: remote activities are still necessary for some tasks, but they should sit inside a broader framework of varied ones, ideally including some in-person components.
Of course, this places a responsibility on lecturers to gain some practical expertise so they can tell students explicitly how AI relates to the topic of the course, what a good workflow looks like, and where the risks are. My consistent advice is: use AI as much as possible, for everything that comes to mind, with good discipline. Staff and students who do this tend to get better outcomes, and it is hard to overstate how much this matters going into the next few years.
While Cesare highlights AI as a powerful support for thinking, the MATS16402 Materials Shaping the World unit demonstrates how these principles can be embedded directly into assessment design. Faced with the reality that students were already beginning to use AI tools in their coursework, the teaching team decided that, instead of ignoring this shift, they would make AI use explicit within the coursework, embedding it as part of the task. By integrating AI into the coursework and requiring reflection on its strengths and weaknesses, students are given the opportunity to use AI in a thoughtful, deliberate way, benefitting from the right guidance, as Cesare suggests. Embedding AI in assessment in this way encourages students to engage critically with it as a useful (or not so useful) tool that they can apply to future learning, while gaining insightful feedback from academic professionals. Alison Harvey, Katie Moore and Dirk Engelberg evaluate the strengths and limitations of this coursework task.
Materials Science Academics on sampling AI use in student coursework
MATS16402 is a first-year, coursework-only unit for (currently) around 150–200 students on our Materials Science and Engineering programme. For one piece of coursework, we ask our students to write an article on a technical topic, with the format and style aimed at a lay audience (the “lay article”). We provide a choice of six technical topics aligned to the six specialisation directions of the programme. The objective of the article is two-fold: (i) to give our students the opportunity to read into potential topics related to our specialisation directions, with (ii) a strong emphasis on how the article content is communicated and made accessible to people without specialist knowledge.
In our annual unit-based review of how the content was delivered, we always have an open discussion of what has gone well and ways to improve course delivery for the next academic year. In our 2022/23 unit review, we were concerned that AI tools were likely to be used in preparing the “lay article” coursework. Rather than replace the coursework, we decided to explore incorporating AI as a delivery tool within the activity. This meant letting students use AI-generated text for their submission, but with a discussion and reflection on how the AI tools were used in their articles, outlining the strengths and weaknesses in relation to the text and figures produced, and a clear judgement on the final article. This allowed us to retain all the key aspects of the coursework while giving students the opportunity to use AI for gathering information. In parallel, it gave us information about (i) students’ proficiency with AI, and (ii) their positive and negative experiences. All students were warned that what AI produces might not get them full marks, and that we expect them to submit an article that they feel happy with. We renamed the coursework the “L(AI) article”, with the amended delivery rolled out for the first time in the 2023/24 academic year.
The new coursework was outlined to students during an in-person lecture, to emphasise what we are after, to allow students to ask questions, and to clearly outline our expectations by discussing our marking scheme.
Students were asked to prepare an article for a lay audience given a choice of the following topics:
TT: Smart textile materials for the 21st Century – where do we go from here?
BIO: The future of PLLA screws for orthopaedic fixation: their successes, failures and competitors.
METAL: How can we reduce the environmental impact of metals production without compromising alloy quality?
POLY: Should we design our plastic packaging to be compostable, recyclable, or to have a different fate?
COR: Corrosion engineering for net-zero – Is it worth it?
NANO: Design with nano-materials – health & safety implications?
As part of the coursework, the AI element of the review now required students to:
(i) use AI tools at least once and provide evidence of this by providing up to 10 pages of transcripts from their AI conversations,
(ii) reflect on the AI-generated content and decide how much (or how little) of it they wish to use in their submission. Importantly, students could decide to use anywhere from 0 to 100%, from completely rewriting the content to accepting it all, but with a clear justification and discussion of their decision. AI was not permitted to be used for the reflection.
(iii) provide two pages of reflection on the AI activity. Prompts were provided to aid students in their answers.
Marking – The final L(AI) article, 10-page transcript and two-page reflection are double-marked by two academic members of the teaching team. In parallel, students were asked, in small groups, to review, mark and discuss four to six anonymised L(AI) articles, to get an impression of the variety, breadth and quality of the articles submitted.
What were your reasons for doing this? What were your key drivers and what issues were you trying to resolve with this tool?
We assumed that students would already be starting to use AI tools and therefore, we wanted to ensure there was some component within the programme that encouraged careful reflection about the benefits and risks associated with its use.
We also wanted to even the playing field. We realised some students will already be using AI tools and exploring their possibilities, while other students may be nervous about that. By requiring all students to ‘have a play’ in this coursework we hoped to encourage those students who might not use AI by themselves to consider its uses. We felt this is an important skill for our students to develop.
We focused on adapting essential skills for our students, rather than closing our eyes and rejecting important developments. Openly encouraging students to explore AI technology, critically discussing its use and usefulness for the assignment, and allowing students to make their own decision about whether, and how much, to use it in their submission gave us confidence in the robustness of our approach.
In parallel, this gave us (teaching on the unit) insight into how AI is used by our students, and how confident they feel about using such tools.
What were your findings? Were your results what you were hoping for?
So far, we have used this activity twice. In the first year we were pleasantly surprised by the depth of reflection and the variety of articles produced by the students. Most acknowledged the benefits of using AI tools to help them structure the article and find useful information, but almost all students recognised the risk of trusting the answers and therefore the need to ‘read around’ to make sure what they were including was true. Based on the students’ reflections, we wondered if the students were in fact now ‘reading around’ more than usual!
In the second year we ran this, however, we received many very similar, almost entirely AI-produced articles, and students often used AI tools to write their reflections on the activity, completely bypassing its purpose! Interestingly, students would comment in their reflections on what was lacking in the AI-produced article, but not go on to make the changes needed to improve it.
We monitor the development of this activity closely and also try to respond to feedback from our students: we included a general overview session on AI and LLMs in the course (given by Cesare Giulio Ardito of the Maths department), and we adjust the assessment brief and the assessment introduction session every year to make sure we optimise the learning journey of our student cohort. Last year (2024/25), for example, after realising that students had used AI to write their reflections and had not addressed some key questions, we added a full face-to-face feedback session after releasing the grades, to discuss what had gone wrong and why some L(AI) articles received low marks.
As a team we are now changing the activity slightly for the upcoming student cohort. This time students must use AI to produce the article, then review it and provide 5-10 areas for improvement and act on these with evidence given in a table.
What are the benefits of implementing this method?
We believe there is a need to incorporate AI use within our degree programme. Our students will be expected to use these tools when they enter the world of work, and we do them a disservice if we do not support them in navigating how to use these tools effectively and responsibly. By including activities such as this, we give students a reason to explore the functionality of AI tools, and we also provide frameworks for considering what ethical, responsible, and effective use looks like.
From our own perspective: these tools and their ways of being used are clearly developing quickly. Using an activity such as this with our students is one way for us to gather an insight into how our students use AI.
It is also an excellent opportunity for us, on the delivery side, to better understand how confidently, and at what level, our students are willing to use AI, and, more importantly, to get a view of the AI landscape.
What are the shortcomings you have found with using this method? What are some points to look out for?
Something we’ve learnt over the last two years is that if you want to use AI in your teaching, you need to be prepared to adapt quickly. Nothing in AI is standing still, so each year may require adjustments to your approach.
Another lesson was to emphasise using AI as a tool or technique, not as the centrepiece of the coursework. The aim of the coursework is to explore potential specialisation topics, using AI as a tool to support students in gathering and evaluating information, not to replace the tasks of reading, thinking and making decisions.
Taken together, these examples highlight a pragmatic, varied approach to AI use in teaching and learning across the University. The academics featured in this issue agreed that AI can be an incredibly useful tool for supporting teaching and learning practices. However, they recognised that for this tool to be effective, rather than a hindrance to learning, it must be employed deliberately, transparently and critically. Both perspectives emphasise that AI should not replace thinking but accompany it, supporting work by making it more efficient and allowing space for deeper learning and independent judgement.
For more on AI use in the FSE department, keep an eye out for future issues in the Teaching College newsletter.