
What student use of Grammarly tells us about writing support

In this article, John Harbord, writing advisor at the Faculty of Arts and Social Sciences (FASoS), looks at the patterns of Grammarly use and the writing challenges students face. He also explores the impact of relying on automated tools for developing academic writing skills.

Not long ago, I worked with a student, let us call her Samira, who was preparing her BA thesis. My role as writing advisor at FASoS is to support students alongside their supervisor, offering feedback and guidance on clarity of expression and effective argumentation: how to use scholarly work to support their ideas, how to engage critically with the literature without losing their own voice, or how to combine theory and data to answer a research question convincingly. Samira, though she spoke and wrote good English, lacked linguistic confidence. Her research design was sound, and she applied appropriate theory and methods to produce an interesting analysis and conclusion. When she came to submit, I felt she had done good work.

Grammarly: saviour or saboteur?

About ten days later, Samira told me she was being called up before a committee for suspected GenAI use. She confessed she had used Grammarly to suggest improvements to her sentences to avoid any infelicities of language. Lacking confidence in her second language, she wanted to be sure a diligent piece of scholarly research was not spoiled by poor English. The good news is that Samira was able to convince the committee that her use of GenAI had been superficial, and the thesis was graded. She was one of the lucky ones: other students who relied rather more heavily on Grammarly to polish their work are currently facing the prospect of sanctions. Those who have used other AI tools may also be at risk.

Samira’s experience has important implications, not just for policies on GenAI use but for how we think about students writing a thesis in an additional language. Samira has two things in common with many students at Maastricht University: she is working in a language she has not fully mastered, and high standards of language proficiency are expected of her. Though I have yet to see a thesis fail solely on the grounds of language proficiency, supervisors frequently refer students to me because of “poor English.”


Flagging versus fixing

Supervisors also frequently advise students to pay for proofreading services to ensure their English proficiency is not detrimental to the final grade. There is extensive literature on ethical proofreading, led by Nigel Harwood, a scholar at the University of Sheffield who seems to have made it his life’s work. The gist is that an ethical policy entails (1) flagging, not fixing, i.e., indicating errors without correcting them, and (2) only using proofreaders from a university-approved list who have been fully versed in this policy and have pledged to keep to it.

Honestly? How realistic is that? And university-approved proofreaders don’t come cheap. So much for equity in higher education. Less well-off students who have difficulty paying a professional proofreader have two alternatives. The first is to ask a friend or family member to check the text over. In practice, I suspect this happens a lot; indeed, students tell me they have done this and don’t see it as illegitimate. The second is to get Grammarly or ChatGPT to proofread and edit their work free of charge. Harwood’s concern, of course, is that those amateur proofreaders will be fixing, not flagging. Similarly, ChatGPT fixes the student’s work without telling them what it has fixed. I suspect many students don’t go over their thesis to see whether they agree with their proofreader’s decisions, whether human or AI.

The enforcement conundrum

It is a well-known legal saw that you should never make laws (or policies) you cannot enforce. If a student asks his girlfriend to polish the text of his thesis (and add a couple of suggestions of her own), we won’t catch him. If he uses GenAI to polish the thesis, we do, more or less, have the tools to catch him. Does that make human intervention OK (because we cannot enforce a prohibition) and AI intervention not (because we can)? These are tricky questions. It is something of a slippery slope: it is hard to permit the use of GenAI for proofreading while drawing a clear line between that and what, if a human were doing the work, would be called copy-editing and rightly considered illegitimate. Developing policies that say “this much help is OK, but that much is illegitimate” will only create headaches for supervisors and examination boards as they try to assess whether each student has stepped over the line. Our Board of Examiners currently has a backlog of weeks adjudicating cases of suspected inappropriate GenAI use, not counting the theses that have accumulated over the summer.

Redefining language proficiency standards

Instead, especially at BA level, it is perhaps time to relax our expectations of language proficiency. I regularly get students sent to me under a cloud of shame by supervisors who tell them there is “an error in every sentence.” Clearly, these colleagues feel students should meet the same standards that they themselves are required to meet when publishing. I have difficulty finding all these mistakes; apparently some people’s English standards are higher than mine. What I do find interesting, however, is that when these supposedly grammatically incompetent students bring me their drafts, the grammar is not as bad as claimed, but the writing lacks clarity because the student is unsure what they want to say. And precisely when an argument is unclear, critical readers often latch onto the grammatical errors, because these are much more tangible and easier to address. The problem is not where they think it is, and AI will probably not help students decide what they really want to say anyway.

Supporting students by reading between the grammar

How can tutors and supervisors help students become better writers without succumbing to the temptation to use GenAI to polish their work? First, don’t make it about the English: the student may well make grammatical or lexical mistakes, but that is probably not why you can’t understand them. When students write badly, it is often because they (have been made to) worry more about form than content. Most students I meet, including a lot of the weak ones, actually have something to say; they just haven’t thought through what they think and how they can persuade others that their topic is important or their argument valid. Good writing support means helping those students find what they want to say. Ask them questions like “why does this matter?” or “what do you really want to say here?” Engage with them as if you really care about what they are writing about and share their enthusiasm. Mirror what you hear them tell you: “so are you trying to say…?” If the answer is yes, your modelling may help them put it more clearly; if not, they will be able to tell you what they actually wanted to say, because now they know what you didn’t understand. I have met many staff over the years who say, “if the grammar is tidied up, I can see what the student wants to say.” I think what they are really saying is, “once the grammar is tidied up, I can give it a bad grade with a clear conscience, because I can see the ideas are unclear.” In contrast, if the ideas are well thought through and well argued, the grammar mistakes will probably hardly bother you.

I once had a student from Kyrgyzstan, Aziz. Like Samira, he was an intelligent, diligent student with a good grasp of research design and of the methodologies and theories he needed to apply, and his research was worthwhile and highly relevant to combating rural poverty in his country. The problem was, his English was bad. He came to me again and again, not because his ideas were unclear or poorly argued, but because he couldn’t use articles and prepositions correctly. Today, Aziz is a senior economist at the World Bank. Good research and clear, effective argumentation count for far more in life than grammatical and lexical correctness.

By John Harbord, writing advisor, Faculty of Arts and Social Sciences, Maastricht University.

This article is a publication of edUMinded, the Maastricht University online magazine on Teaching & Learning.
