Fordham Professors Seek to Attack Cheating at the Root and Decriminalize ChatGPT Use 

By Paige Lesperance

Sitting down at my computer, I decided to create an account on ChatGPT. I was not about to plug an algebraic equation into the chatbot or ask it to write the perfect essay. Instead, I was going to subject it to an interrogation.

“Would you consider yourself a danger to society?” was my first question. I was only half-serious, and was not surprised to receive this answer: “No, I am just a computer program created by OpenAI, and I do not possess the capability to cause harm or pose a danger to society…my actions are entirely dependent on how I am used by individuals, and I am designed to follow ethical guidelines and legal regulations.”

A sensible, modest response. Next came my real question: “Would you consider yourself a threat to education?” This one generated a longer, more defensive-sounding answer. “No,” replied the computer, “I do not consider myself a threat to education. In fact, I can be a valuable tool for educational purposes when used appropriately.”

ChatGPT is not even a year old–it was first released on November 30, 2022, by the tech company OpenAI–but there is nothing infantile about the intelligence and speed with which it can produce results on virtually any topic. The concern it has caused in academia, especially in higher education, stems mainly from the question of how to limit its use by students. The prevailing assumption among educators appears to be that students who turn to ChatGPT for their assignments are doing so deceitfully–essentially leaving all of the brain work to the computer in an effort to finish as quickly as possible. In a Time article entitled “The Creative Ways Teachers Are Using ChatGPT in the Classroom,” published in August, which highlights the opinions of educators across the country on this issue, an Ohio high school principal was quoted as saying, “The majority of the teachers are panicked because they see [ChatGPT] as a cheating tool, a tool for kids to plagiarize.”

However, some professors are not so eager to conclude that using ChatGPT is akin to cheating, or that a student should be vilified for choosing to do so. Some would argue that how a student uses an AI tool matters more than whether they use it at all–a sentiment that echoes the very response I received from ChatGPT when I asked whether it poses a threat to education. Others, like Professor Jordan Stein, think the problem does not originate with the invention of ChatGPT at all. The temptation to cheat in an academic setting long predates the technology, so he offers a solution that seeks to nip cheating in the bud.

Stein, a professor in the English department at Fordham University’s Lincoln Center campus, recently penned an article for The Chronicle of Higher Education that takes a strongly anti-policing stance on student use of ChatGPT and other forms of AI. In the article, entitled “Instead of Policing Students, We Need to Abolish Cheating,” he argues just that, propelled by the belief that college students are driven to use AI, and to cheat in general, by external and internal pressures that complicate their already busy lives. The article carries a pronounced tone of empathy for students throughout, reflecting his conviction that professors should not be so quick to villainize students for turning to ChatGPT, but should instead look for the “human context,” as he puts it, that pushes them to do so. “Policing,” Stein writes in the Chronicle article, “constructs punishment rather than moral accountability.”

When I met with Professor Stein virtually to discuss his article, I was unsurprised to find myself speaking with someone of a very relaxed and compassionate demeanor. I figured that a person who urges professors to consider the many possible reasons a student might cheat, and implores them to move beyond the typical assumption of “because they don’t care,” would present himself as such. Stein was contemplative, thoughtful and ultimately very articulate in responding to my questions, and what he had to say neatly condensed the thorough points he makes in the article.

In his article, Stein provides a list of strategies professors can use to create a classroom environment that would, ideally, decrease the incentive to cheat. If the desire to use AI tools like ChatGPT arises from economic, technical, or familial pressures outside the classroom–which Stein specifically outlines in the article–then the least that can be done, he believes, is to integrate teaching techniques that alleviate students’ stress about class. One of these strategies is known as “pink time,” and, according to the official website promoting the concept, students on pink time “skip class, do whatever [they want], and grade [themselves]” (pinktime.org). An educator who relies on traditional teaching methods might scoff at the freedom pink time offers, and students might assume no educator would ever take the technique seriously–but the big idea is that it lets students learn in a way that expands their creativity and lowers the stakes of making mistakes. ChatGPT is designed to answer objective, technical questions. It cannot summarize a student’s subjective, personal experiences.

Stein himself has used this technique, and noted in his article that it stood out from the others in its ability to help “class dynamics to [gel] better.” When I asked him how he specifically wove the concept into his curriculum, he told me he “picked a two-month window” during which his students could use the 75 minutes allotted for his class to do whatever they wanted–outside of class. Stein’s students, he said, “were intrigued by [their pink time].” Accustomed to the rigid structure that student life commands, many of them felt directionless and uncomfortable. “I will admit to you, it was a little bit concerning to me that people really hadn’t thought about [having nothing to do during class] before,” Stein recalled.

Indeed, his students’ discomfort with the concept was striking to Stein, and it actually conveyed to him the efficacy of pink time. “I don’t know if people liked it, but they thought it was a useful experiment,” he remembered. To make the experiment meaningful, Stein assigned his students a write-up, “which was a chance to reflect on [their pink time]…and learn from it.” Naturally, this reflection could be written by the student, and only the student. Even if they attempted to use ChatGPT, it would be impossible for it to generate a response about their personal experiences on pink time. This is just one of many ways educators can fulfill a duty Stein believes is imperative for students’ success: providing them with opportunities to make “mistakes that won’t have great consequences.”

Another professor of English at Fordham who has written on the subject of AI is Leonard Cassuto. He, too, recently published an article in The Chronicle of Higher Education, entitled “Artificial Intelligence: A Graduate-Student User’s Guide.” In it, Professor Cassuto advises faculty in higher education on how to move forward in the age of AI, and offers graduate students guidance on how to use it in an honorable way.

A statement that appears early in Cassuto’s article–“there’s already a tendency among faculty members to criminalize the use of AI”–echoes the sentiments of Professor Stein. Cassuto argues that many educators wrongly assume dishonorable intentions when students use AI to help with their work. He remarks that “the internet has long relied on [AI]. You use it when you do a Google search, for instance. Predictive text…is another instance of how we’ve long been splashing in the shallow end of the AI pool.” Because AI, in its various forms, really is nothing new to students, Cassuto argues, “the important thing [going forward] is how [AI is used].”

Of course, how a student decides to use AI is entirely up to them. After speaking to a couple of English students on campus, I gathered that sentiments about the use of ChatGPT on college campuses are somewhat polarized, especially with regard to how strict professors should be in tackling the issue. Leah Garritt (FCRH ’25) believes that “professors should absolutely prohibit the use of ChatGPT, or at least strictly monitor its use.” Garritt explained that she is “usually of the opinion that students should get to do what they want in class,” but when it comes to college, she proclaims that “the use of ChatGPT or other AI technologies to complete classwork is something that I personally will never condone…since [students] are paying to attend this university.” Garritt’s opinion stems from her belief that “it promotes unfairness, as it leads to students [putting] in far less effort than their peers.”

Another English student, Maeve Hamill (FCRH ’25), voices an opinion essentially the same as Professor Cassuto’s. She views the use of AI as the responsibility of the student–meaning that, if used appropriately, it can be a helpful tool. “ChatGPT is great for generating ideas and brainstorming…but I don’t think it should write your paper for you,” Hamill says. She drew on personal experience to form this belief, remembering how she “recently witnessed someone using [ChatGPT] in one of my classes to write their entire essay,” and how it made her “feel angry knowing that they might get an A without having written a single word.”

Nearly a year after its release, the tension and stigma surrounding the academic use of ChatGPT might be enough to guilt a student out of ever using it, even when their intentions are not malicious. Like any technology, ChatGPT simply becomes a reflection of the morals of the person using it. If used ethically and in a way that aids, rather than replaces, a student’s own work, it could come to be considered an essential tool in the years ahead. Yet there will always be students who put academic integrity on the line to make their lives easier, whether out of desperation or not. A teacher cannot control what a student does outside of class, but adopting new and more creative approaches in the classroom might be the most effective way of combating deceitful use of AI. ChatGPT is likely not going anywhere anytime soon, but at least the technology is quite aware of the ethical dilemma it poses. The last question I typed into my computer asked ChatGPT to offer some final advice for those on campus, and as before, the answer was fairly evenhanded: “Educators and students should ensure that AI is used to complement traditional educational methods and not as a replacement for critical thinking, problem-solving, and human interaction in the learning process.”

This article was written as a part of ENGL 3014 — Creative Nonfiction Writing, taught by John Hanc.

Works Cited

Cassuto, Leonard. “Artificial Intelligence: A Graduate-Student User’s Guide.” The Chronicle of Higher Education, 25 July 2023, www.chronicle.com/article/artificial-intelligence-a-graduate-student-users-guide. Accessed 26 Oct. 2023.

Stein, Jordan Alexander. “Instead of Policing Students, We Need to Abolish Cheating.” The Chronicle of Higher Education, 7 Sept. 2023, www.chronicle.com/article/instead-of-policing-students-we-need-to-abolish-cheating. Accessed 27 Oct. 2023.

“The Inspiration: Pink-Time.” Pink Time, www.pinktime.org/the-foundation. Accessed 5 Nov. 2023.

