
What Does ChatGPT Mean for the Future of Academic Writing? Students, Faculty Weigh In

In November 2022, OpenAI released ChatGPT, an artificial intelligence (AI) tool programmed to answer virtually any question in a matter of seconds. The chatbot is able to respond in a natural, conversational manner, attracting over 100 million active users only two months after its launch. As the new technology grows in popularity for its information capabilities, it has also raised new questions about how students might utilize it in academic settings.

GPT stands for Generative Pre-trained Transformer. A transformer model is able to process large bodies of data and carry out natural language processing tasks. But what makes ChatGPT unique is its ability to carry out entire conversations in an almost human-like way. ChatGPT has circulated through academic circles, dinner-time conversations, and student study sessions. In addition, ChatGPT has a wide range of expertise — the technology made headlines for passing the bar exam, medical licensing exams, and most recently, a Wharton MBA exam.

Both faculty and students had positive impressions of the application, but also earnest suggestions on how to effectively integrate ChatGPT without compromising student learning. 

In an interview with The Phoenix, Adam Karakra ’24 mentioned that he first learned about the app from his friends and was urged to enter a few prompts.

“I asked [ChatGPT] to do ridiculous things like write me a story about George Bush and Elon Musk. And it did. I even put some past homework questions in and [ChatGPT] solved those pretty accurately too. I was pretty mind blown,” Karakra said.

Swarthmore professors were equally struck by ChatGPT’s abilities. After hearing about ChatGPT through social media, Professor Sam Handlin of the political science department marveled at how accessible and powerful the application has become. Handlin also explained his plans to integrate the application into his teaching.

“I first heard about it over social media. So I asked ChatGPT this question but told it to answer in the form of a Shakespearean sonnet, or in the form of a pirate, and I was just blown away by what this thing was doing,” Handlin said. 

Professor Handlin is not the only professor interested in the intersection between ChatGPT and learning. Visiting Professor of Economics Kara Dimitruk also hopes to embrace ChatGPT rather than shy away from it for fear of students misusing the application. 

“I started entering prompts related to the writing assignments I was developing to see what answers [ChatGPT] would give. And then I had the idea that rather than being concerned about students using [ChatGPT] to write answers, I could try to include it in the classroom. Here’s our prompt, based on reading that we’ve done in the class, and here’s what this AI output is. Do we think this is good? What are some issues with it?” Dimitruk suggested. 

Professor Peter Schmidt of the English Literature Department shared similar opinions and asked his students to evaluate controversial essays generated by the technology. 

“When I asked [ChatGPT] to write a defense of slavery from Jefferson’s point of view, it refused to do so on ethical grounds. When I asked it to summarize Jefferson’s criticisms of slavery, based on what it does to both white and Black people in the institution, [ChatGPT] made it sound like [Jefferson] was 100% against slavery. Only in the third essay [ChatGPT] admitted that Jefferson’s ideas were contradictory but it couldn’t explain the contradictions very well,” said Schmidt. 

The essays ChatGPT wrote about Jefferson were not the only errors the program made. In an assignment Professor Schmidt discussed with a different class, ChatGPT made up quotations and got entire plot lines wrong. 

“If [ChatGPT] makes basic mistakes, like getting some of the plot of a book wrong and unsuccessfully distinguishing between what’s accurate and what’s inaccurate, isn’t that going to create more problems rather than fewer? Especially if it can pass as true to people?” Schmidt asked. 

As with any technology, ChatGPT carries potential dangers with long-run implications. Professor Schmidt further explained how the possibility of misinformation is especially threatening in an era of digital credulity.

“I mean, we’re already in an age of disinformation and people don’t know how to be skeptical about things. What they want is, you know, the answer. So, it’s easier to fool them. That’s a scary situation.”

Professor Handlin further described how ChatGPT often fabricates information due to its inability to perform in-depth analysis. 

“It can’t really call specific facts out of the Internet. It’s just associating text strings. So it ends up being kind of a bit of a BS generator at times. And if you try to push it to get specific on things, it just can’t go there.”

Professor Schmidt detailed two potential avenues of response to ChatGPT: either teachers can warn against it entirely, or they can try to integrate it into lesson plans.

“I would say teachers are thinking in two directions. One is to be paranoid and say that this is just going to generate a bunch of students [who] will use [ChatGPT] to write an essay or … use this as a good first draft, and then they’ll work from there. And we can’t prevent that … The other way, we teachers are thinking, is this: how can we use this in class? We need to adapt to this technology, rather than try to stop it?”

The discussion of ChatGPT at Swarthmore is far from over. In the coming weeks, Swarthmore faculty plan to host a meeting on how the application will affect learning and academic discourse within the community. 
