Why we need to agree on what our ‘collective practice’ is

by ET

After October’s meeting, our team decided to split into two sub-teams. One would work on developing a rubric to be used in our course on academic writing and AI literacy (slated for June 2024), and the other would engage with the literature surrounding our topic of using AI in writing courses. We used November to meet and work in our sub-teams.

 

In early December 2023, we held an online meeting to update each other on the sub-teams’ work. H shared the Rubric sub-team’s efforts: they had come up with a rubric to evaluate student writing, specifically a literature review on a topic yet to be decided.

 

From the rubric sub-team’s work, we realised that while the rubric seemed able to measure students’ effective use of ChatGPT in writing a literature review, the ethical aspect was not specific enough. We discussed what the parameters of ‘ethical use’ are. Should there be evidence of oversight, where students must have read the journal article so that they can spot mistakes that ChatGPT makes? Or does ethical use of ChatGPT mean that they may use it however they wish (they may not even need to read the article), as long as they show their prompts and can justify why they wrote them that way?

 

As for the literature review sub-team, our research assistant reported that search terms had been identified, but I shared that, from my own reading, some relevant articles’ keywords seem broader and more general than the search terms the sub-team had come up with. While this does not mean that we cannot include those articles in our literature review, it does raise questions about the effectiveness of relying solely on a keyword search to filter out irrelevant articles.
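
To illustrate the point (with a hypothetical search string, not the sub-team’s actual one), an article indexed only under broad keywords such as ‘artificial intelligence’ and ‘higher education’ would be missed by a narrow query like:

    ("generative AI" OR ChatGPT) AND ("academic writing" OR "literature review")

Pairing narrower terms with their broader synonyms, or supplementing the keyword search with title and abstract screening, would lower the risk of excluding such articles.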

 

In this meeting, what interested me most was the point in our discussion where we touched on what ‘ethical use’ of Gen AI tools consists of. Is it just about following the rules set by one’s lecturer, professor, or university regarding the use of Gen AI tools? Seeing as Gen AI tools are upgraded frequently and new ones become available every day, would an institution’s rules be able to effectively regulate and guide students’ usage, or do ethical standards go beyond those rules? Our member, S, then pointed out that we need to determine our baseline for ‘ethical use’.

 

I was recently reminded of what Prof. Teo You Yenn wrote in her 2018 book This is What Inequality Looks Like: “Ethics are created from collective practices” (p. 273). Even though she was writing about the sociological issue of inequality, her statement is relevant to our team’s research. As a group of educators trying to navigate this complex Gen AI tool landscape, what is our collective understanding of AI and ethics, and how can we articulate our collective practice of ethical Gen AI tool use so that we can guide our students more effectively? For this reason, this month we will be meeting with an expert in the field of AI and ethics to learn more about what ethical Gen AI use entails.

Learnings from our FLC workshop and possible applications to a STEM classroom

by HLF

 

Our FLC is about the effective and ethical use of Gen AI tools for academic writing, with the literature review as an example. For undergraduate students in STEM, there are only a few instances where they have to write a literature review: for very specific assignments in special lectures (not the ones that I am teaching) or for their final-year report. Hence, how can the results from our research be applicable to STEM undergraduate students? To address this question, let me first summarize a few learnings from our FLC workshop. Then, I will share a few trials in my lectures that stemmed from our FLC discussions. Finally, I want to share a broader idea that was sparked by this FLC and that I will explore further.

We ran our two-day workshop a few months ago, and the preparation and the experience with the students were eye-opening for me. One key learning was that some students use Gen AI quite a lot, for their academic studies but also in other areas of their lives: for translation, scheduling events, writing emails, etc. Since I tried ChatGPT for the first time only after joining the FLC, it was very interesting to see how advanced some students are with this tool. Another key learning was that although they may be used to using it, they do not necessarily know how to prompt it efficiently. Our workshop was therefore very informative and helpful in introducing proper frameworks for efficient prompting. Finally, the last key learning is that the ethics of using Gen AI is a very grey area, which seems to fall on the shoulders of lecturers at the university. When I inquired about the practice in my own school (MAE), I found that the current requirement is simply for students to submit a declaration of whether they have used it and for what purpose, and that is all.
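
To give a flavour of such a framework (a generic sketch, not the exact template from our workshop), an efficient prompt typically spells out a role, a task, the context, and the desired output format, for example:

    Role: You are a materials engineering tutor.
    Task: Explain the main defects found in injection-moulded parts.
    Context: The reader is a third-year undergraduate writing a lab report.
    Format: A bulleted list, with a one-line cause for each defect.

Making these elements explicit usually produces more relevant and better-structured answers than a bare question.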

From these learnings, I wondered how to apply Gen AI, or more specifically ChatGPT, in my STEM courses. The first idea I had was to use it to improve the students’ laboratory reports. I teach injection moulding, a 5-week laboratory course at the end of which the students have to submit a report. In this report, the students have to write a short abstract, an introduction, a brief overview of the process, their results, a discussion of the results, a special section where they compare injection moulding with another process of their choice, and a conclusion. Within the FLC, we enrolled a student under URECA to explore how ChatGPT can be employed to improve the quality of these reports. In general, we found ChatGPT 4 to produce higher-quality output than ChatGPT 3, with fewer mistakes and inaccuracies. In view of the workshop, I now realize that with efficient prompting the reports could probably be improved much more extensively, but this is not something I can teach during my injection moulding lab course.

The second idea I had was to use ChatGPT to promote discussion and critical thinking in class, by asking ChatGPT to answer my exam questions and then commenting on the quality of its answers. The course for this test is also on plastic manufacturing and spans only 3 weeks, with 3 hours of lectures per week. For this course, I particularly want the students to compare different plastic formulations and different processes, and to discuss their answers. Indeed, each process, each industry, and each polymer will have its own manufacturing approach. My exam therefore looks like a case study where the student is an engineer who has to make decisions. After running our FLC workshop, I realized that my questions are ideal for ChatGPT because I set the context (‘You are an engineer who wants to recycle plastic toys to make cheap gifts for regular customers’) and then ask open questions (‘List 3 characteristics that the plastic toy should fulfil to be transformed into the gift and explain your answers’). For two classes (part-time and full-time), I therefore ran an exercise where I gave the students my practice exam questions, asked for their answers, then put the same questions to ChatGPT and discussed its responses, and finally shared what I had been expecting. It turned out that ChatGPT provides longer and more detailed answers than I was expecting. However, some terms used by ChatGPT were not taught in my lectures, and others, I would argue, are not 100% correct. Because of the lack of student participation, my key learnings from this activity are that (i) I should tell the students more clearly what I expect them to do, (ii) I could run it in groups and let them type into ChatGPT directly instead of me, and (iii) I should find questions where I disagree with ChatGPT so that we can have a good discussion.

Based on these ‘small’ classroom trials of using Gen AI, I thought of another setting where it could be applied more usefully in engineering: biomimicry. In biomimicry, the idea is to copy nature’s mechanisms in engineering systems to solve grand challenges. The most interesting aspect of biomimicry is that the resulting engineering solutions will likely be more sustainable than existing approaches. However, numerous previous studies on the application and teaching of biomimicry at the undergraduate and postgraduate levels indicate that there are large knowledge gaps, in particular with respect to the biological systems. Hence, I would like to explore further how we can effectively use ChatGPT to bridge this knowledge gap and support engineering students in finding biomimetic solutions to challenges. Indeed, Gen AI can be used not only to improve writing, but also to brainstorm and learn.

Short reflections after the workshop

“My experience with the students in this workshop was amazing, especially in terms of their perspectives on using generative AI for writing tasks. The students were highly engaged and highlighted how AI tools strengthen their writing skills. Despite the excitement, the students also raised significant ethical concerns. They wondered whether submitting an improved version of their writing after receiving AI feedback could be considered plagiarism. This led to a lively debate about the boundaries of originality and the role of AI in the academic writing process. I think there is a need for ongoing dialogue and education on the ethical implications of generative AI in writing.”

  • Dr. Alex Pui, CCEB

“As a faculty member with very minimal usage and experience of Gen AI and ChatGPT, I very much enjoyed the learning content of the workshop. In addition, hearing about the students’ experiences and challenges was eye-opening. The discussions about ethical use also prompted me to double-check my School’s policy and to implement clearer rules in my assignments from the coming semester.”

  • Asst. Prof. Hortense Le Ferrand, MAE

“Regarding the content of the workshop, it was helpful that the instructors addressed the principles and perspectives of good literature review writing, which is highly complementary to the workshop’s focus on the effective use of GenAI tools. If we are to conduct another round of the workshop, the materials could be restructured to give participants more time for hands-on activities, including peer-evaluating each other’s work.”

  • Dr. Poernomo Gunawan, CCEB

“I really enjoyed working with this FLC team because everyone is interested in figuring out their personal pedagogies regarding this very complex topic of GenAI in education. The workshop was small, but meaningful. As a pilot workshop, I do see many things I would like to refine – the content, the connections between each part, the flow of the content, etc. – but overall, I am energised by the potential that this workshop and, hopefully, its future iterations have for helping students navigate the uncertainty of using GenAI writing tools in academic writing.”

  • Eunice Tan, SoH

“The participants had a positive experience of the workshop. However, it would have been beneficial to have more undergraduate students involved, to better understand their perspectives on Gen AI and its applications. We may consider organizing a second round of the workshop with increased participation from undergraduate students, and refining our workshop materials to include more hands-on activities and demonstrations of prompts.”

  • Dr. Mukta Bansal, CCEB

“Previously, I was the School Academic Integrity Officer. I investigated written assignments that contained fabricated content, statistics, in-text citations, and references. I found that this was due to students using GenAI tools to complete their assignments. Hence, through the two-day workshop, I hope to guide students to use GenAI properly and ethically to augment human intelligence, and to highlight the role of human oversight in addressing the risks associated with GenAI.”

  • Asst. Prof. Sabrina Luk, SSS

“I often encounter students who either overuse or underuse generative AI in their written assignments. This has made me wonder whether there’s an optimal approach to using Gen AI in academic work, and if so, what that might look like. Being part of this Faculty Learning Community (FLC) and planning and executing this AI workshop has greatly helped answer this burning question.”

  • Dr. Felix Lena Stephanie, MAE

Tentative plans for AI Literacy Workshop

by ET

We almost didn’t have a workshop, due to a lack of participants. Undergraduates, our target sample, did not seem interested in taking part. In our June meeting, we brainstormed the reasons for the low sign-up rate (only 2 undergraduates signed up in the end) and came up with this list:

  1. the 6 workshops spread over 3 days might have looked like too much work;
  2. undergraduates might have been too busy with internships and/or part-time holiday work to join;
  3. we had not managed to reach enough undergraduates.

So, the remedies were:

  1. We emailed the student chairs we knew and asked them to send out a mass email to the undergraduates.
  2. We expanded our target sample to include Master’s and Ph.D. students.
  3. We collapsed the 6 workshops into one and rebranded it as a 2-day workshop, to be held on 1 and 2 August 2024.

This meant that IRB approval had to be sought again, and we made the necessary amendments to our protocol as well as to our informed consent sheets and recruitment advertisements. We all agreed that if we received fewer than 10 participants, we would postpone the workshop. To date, 13 participants have signed up.

Posts

H’s reflection on meeting #3 (October 2023)

E: Our first two meetings were more administrative in nature, but things really took off recently with our third meeting. Here, H reflects on that meeting:

Our faculty learning community (FLC) focuses on teaching students how to use generative artificial intelligence (AI) effectively to improve their writing. We are a group of lecturers and professors from various schools at NTU and NIE (SoH, MAE, NSSE, CCEB, PPGI), supported by students from our schools. Our aim is to develop a short course for undergraduate students in years 3 and 4. Three months into the project, I would like to reflect on three (open) challenges we discussed during our investigation.

The first challenge we had to discuss was the profile of the intended undergraduate students: from STEM or non-STEM? Initially we thought about focusing on STEM students only, with the idea that narrowing the students’ profile would allow us to go into more depth and detail about how to use the AI tool. However, we realized that there is a lot of diversity among STEM students, and that non-STEM students may be the ones more likely to benefit from learning how to write with AI, given the amount of writing they have to do. Therefore, we are now exploring the use of generative AI for STEM and non-STEM students equally. This leads to other challenges, since each community uses different types of AI tools: some tools are better for coding, others for creative writing, others for literature reviews, etc. We set our minds on ChatGPT-4 as a ‘safe’ and widely used generative AI tool, with the hope that our outcomes will be adaptable to other tools.

The second challenge we faced is the difference in points of view between educators and students. We have different worries and different interpretations. For example, when we asked students whether they would mind if peers who used the AI tool were graded higher than students who did not use it, the students said that they thought it was OK. Some of us faculty had initially thought they might perceive it as unfair. In addition, students may turn to AI tools to save time or to get better grades with less effort, whereas as faculty we worry deeply about the skills the students will acquire to prepare them for their future jobs. What happens if our students write beautiful essays in school but cannot thrive in their work environment? We will definitely include this discussion about what a student’s intention should be when using the AI tool in our final workshop.

Finally, the third challenge we tackled is what we should assume about the use of these AI tools in the future. Should we assume that AI will inevitably become part of our lives, in a similar way to how we now all use Google, or should we not make such an assumption? Assuming that AI will be used anyway sets a positive angle on the use of the tool, prompting us to think about how to use it more efficiently alongside our own critical thinking. In turn, if we do not make such an assumption, we may focus more on how to restrict and control the use of the AI tool, for example by limiting the number of prompts the students may use. From this discussion, we nevertheless all agreed that what is important are the skills and the thought processes involved when using the AI tool. How to assess them is, however, the subject of our next discussions.