By HLF
Our FLC is about the effective and ethical use of Gen AI tools for academic writing, with the literature review as an example. For undergraduate students in STEM, there are only a few instances where they have to write a literature review: for very specific assignments in special lectures (not the ones that I teach) or for their final-year report. How, then, can the results of our research be applicable to STEM undergraduate students? To propose some answers to this question, let me first summarize a few learnings from our FLC workshop. Then, I will share a few trials in my own lectures that stemmed from our FLC discussions. Finally, I want to share a broader idea that was sparked by this FLC and that I will explore further.
We ran our two-day workshop a few months ago, and the preparation and experience with the students were eye-opening to me. One key learning was that some students use Gen AI quite a lot, for their academic studies but also in other areas of their lives: for translation, scheduling events, writing emails, and so on. Since I tried ChatGPT for the first time only after joining the FLC, it was very interesting to see how advanced some students are with this tool. Another key learning was that although they may be used to using it, they do not necessarily know how to prompt it efficiently. Our workshop was therefore very informative and helpful in introducing proper frameworks for efficient prompting. The final key learning is that the ethics of using Gen AI remains a very grey area, one that seems to fall on the shoulders of lecturers at the university. When I inquired about the practice in my own school (MAE), I found that the current requirement is simply for students to submit a declaration of whether they have used it or not, and for what purpose.
From these learnings, I wondered how to apply Gen AI, or more specifically ChatGPT, to my STEM courses. The first idea I had was to use it to improve students' laboratory reports. I teach injection moulding, a 5-week laboratory course at the end of which the students have to submit a report. In this report, the students have to write a short abstract, an introduction, a brief overview of the process, their results, a discussion of the results, a special section in which they compare injection moulding with another process of their choice, and a conclusion. Within the FLC, we enrolled a student under URECA to explore how ChatGPT can be employed to improve the quality of these reports. In general, we found ChatGPT 4 to produce higher-quality output than ChatGPT 3, with fewer mistakes and inaccuracies. In view of the workshop, I now realize that with efficient prompting the reports could probably be improved much more extensively, but this is not something I can teach during my injection moulding lab course.
The second idea I had was to use ChatGPT to promote discussion and critical thinking in class, by asking ChatGPT to answer my exam questions and then commenting on the quality of its answers. The course for this test is also on plastic manufacturing and spans only three weeks, with three hours of lectures per week. In this course, I particularly want the students to compare different plastic formulations and different processes, and to discuss their answers; indeed, each process, each industry, and each polymer has its own manufacturing approach. My exam therefore looks like a case study in which the student is an engineer who has to make decisions. After running our FLC workshop, I realized that my questions are ideal for ChatGPT because I set the context (‘You are an engineer who wants to recycle plastic toys to make cheap gifts for regular customers’) and then ask open questions (‘List 3 characteristics that the plastic toy should fulfil to be transformed into the gift and explain your answers’). For two classes (part-time and full-time), I therefore ran an exercise in which I gave the students my practice exam questions, asked for their answers, then asked ChatGPT and discussed its responses, and finally presented the answer I was expecting. It turned out that ChatGPT provides longer and more detailed answers than what I was expecting. However, some terms used by ChatGPT were not taught in my lectures, and others were, I would argue, not entirely correct. Because student participation was low, my key learnings from this activity are that (i) I should tell the students more clearly what I expect them to do, (ii) I should perhaps run the exercise in groups and let them type into ChatGPT directly instead of me, and (iii) I should find questions where I disagree with ChatGPT so we can have a good discussion.
Based on these ‘small’ classroom trials of using Gen AI, I thought of another setting in which it could be applied even more usefully in engineering: biomimicry. In biomimicry, the idea is to copy nature’s mechanisms into engineering systems to solve grand challenges. The most interesting aspect of biomimicry is that the resulting engineering solutions are likely to be more sustainable than existing approaches. However, numerous previous studies on the application and teaching of biomimicry at the undergraduate and postgraduate levels indicate that there are large knowledge gaps, in particular regarding the biological systems. Hence, I would like to explore further how we can effectively use ChatGPT to bridge this knowledge gap and support engineering students in finding biomimetic solutions to challenges. Indeed, Gen AI can be used not only to improve writing, but also to brainstorm and to learn.