by ET
After October’s meeting, our team decided to split into two sub-teams. One will develop a rubric for our course on academic writing and AI literacy (slated for June 2024), and the other will engage with the literature on using AI in writing courses. We used November to meet and work in our sub-teams.
In early December 2023, we held an online meeting to update each other on the sub-teams’ work. H shared the rubric sub-team’s progress: they have developed a rubric to evaluate student writing, specifically a literature review on a topic yet to be decided.
From the rubric sub-team’s work, we realised that while the rubric seemed able to measure students’ effective use of ChatGPT in writing a literature review, the ethical aspect was not specific enough. We discussed what the parameters of ‘ethical use’ are. Should there be evidence of oversight, where students must have read the journal articles themselves so that they can spot ChatGPT’s mistakes? Or does ethical use of ChatGPT mean students may use it as they wish (without necessarily reading the articles), as long as they disclose their prompts and can justify why they wrote them that way?
As for the literature review sub-team, our research assistant reported that search terms have been identified. However, I shared that from my own reading, I had observed that some relevant articles’ keywords seem broader and more general than the search terms the sub-team has come up with. While this does not mean we cannot include those articles in our literature review, it does raise questions about the effectiveness of relying solely on keyword searches to filter out irrelevant articles.
In this meeting, what interested me most was the point in our discussion where we touched on what ‘ethical use’ of Gen AI tools consists of. Is it simply about following the rules set by one’s lecturer, professor, or university regarding the use of Gen AI tools? Given that Gen AI tools are upgraded frequently and new ones become available every day, would an institution’s rules effectively regulate and guide students’ usage of Gen AI tools, or do ethical standards go beyond those rules? Our member, S, then pointed out that we need to determine our baseline for ‘ethical use’.
I was recently reminded of what Prof. Teo You Yenn wrote in her 2018 book This is What Inequality Looks Like: “Ethics are created from collective practices” (p. 273). Even though she was writing about the sociological issue of inequality, her statement is relevant to our team’s research. As a group of educators trying to navigate this complex Gen AI tool landscape, what is our collective understanding of AI and ethics, and how can we articulate our collective practice of ethical Gen AI tool use so that we can guide our students more effectively? For this reason, this month we will be meeting with an expert in the field of AI and ethics to learn more about what the ethical use of Gen AI tools entails.