Campus Discussions: How Swarthmore is Thinking About AI

April 30, 2026
Phoenix Photo/Devin Gibson

Generative artificial intelligence (AI) has arrived on Swarthmore’s campus, and with it comes the question of whether, when, and how students should use it. The Phoenix’s December poll of the student body shows that around one in four students use AI for academic purposes on a weekly basis, and 15% of student respondents reported using AI daily for their coursework. The remaining 55% reported using AI once or less per month. Exactly how these chatbots are being integrated into the classroom experience, however, varies with the personal approaches of students and faculty.

AI platforms such as ChatGPT have been accessible to the public for under four years (ChatGPT launched in November 2022) but have rapidly become widespread. A campus survey conducted in May 2025 by Olivia Medeiros-Sakimoto ’25 found that 50% of respondents had first used AI during their time in college. Medeiros-Sakimoto anticipated this figure would shift by the following spring, since the graduating class of 2025 was the last to experience a year at Swarthmore without a saturated market of AI tools.

Head of Digital Scholarship and Visiting Associate Professor of English Literature Amanda Licastro has witnessed this rise in AI usage and led campus-wide discussions on the implications of the shift. Over the past four years, she and Bob Rehak, associate professor and chair of film and media studies, have convened the Critical AI Inquiry Group, a gathering of community members that discusses the impact of AI on higher education. Licastro noted that the group is co-sponsored by the libraries and the Teaching and Learning Commons and is open to faculty, staff, and students.

“I do think it’s a very emotionally charged and polarizing topic,” Licastro said in an interview with The Phoenix. “I think we have everyone here on campus, from those who are coming from a position of refusal or resistance to folks who want to embrace [AI] and understand it, and integrate it into their courses.” According to Medeiros-Sakimoto’s data last year, 27% of students agreed with the statement “I have used generative AI in my studies” while 25% strongly disagreed.

The campus community seems to be in a phase of evaluating these technologies before adopting them fully, Licastro observed. She told The Phoenix that, based on her experience leading workshops on other campuses, Swarthmore is thinking more about critical engagement with AI than many peer institutions. “We are more of an AI-resistant campus than most, but not in an ignorant way. I think people are deeply engaged with the topic and are thinking critically [about both the computational and ethical considerations], and that leads them to the conclusion that we should be using it minimally.” 

The libraries have been leading campus conversations, and the college provides access through the campus IP address to a range of AI tools, including NotebookLM, LibreChat, and Google Gemini.

However, how helpful students find AI tools to be and how often they use them varies.

Sindia Michael ’29 feels that conversations among peers about whether or not to use generative AI often become polarized. She uses AI tools on a roughly weekly basis, and her own view is nuanced: “I was very much against AI for the longest while, but then I’ve seen here it’s helped people a lot.” That said, she believes that the manner in which AI is used shapes how useful it is, and she worries about AI becoming a shortcut that limits student learning.

Michael occasionally uses AI to probe deeply into specific questions in order to add to her understanding of concepts she encounters in her coursework. She also finds it useful for synthesizing large numbers of articles and for pointing her to areas of existing research she might have missed in her own review of scholarship on a traditional search engine.

As she’s become more familiar with the technology and the student opinions on AI, she’s become interested in why and how the technology should be used. This inspired her to conduct a study in her Psychology Research Design and Analysis class on whether incentivizing AI use for academic reading impacts curiosity and engagement with the text. 

Reflecting on conversations with her peers, she said, “People talk about it like, ‘Oh, I would use [ChatGPT] for this, do this, and do that.’” But Michael is curious “when it comes to actually taking a step back and talking about, ‘Hey, do you actually need to use it?’”

Her question is echoed by Damian René ’27, a computer science major with minors in cognitive science and film studies who is currently taking the course “Human-AI Interaction.” “There was this massive push when AI first started about ‘Oh, we need to be more productive.’ ‘We need all of these tools that streamline everything.’ But it’s almost like there’s the question of ‘Why do we need that?’” René said, reflecting on a class discussion in an interview with The Phoenix.

With the introduction of any new technology comes the question of what skillsets are lost by delegating to it tasks humans once did. Many faculty members have expressed concerns about cognitive off-loading: that by using AI to do the “heavy lifting” in academic tasks, students no longer work their “brain muscle.” Worries over off-loading recently arose in a Night Owls discussion with Chloé Bakalar, the AI ethics lead at OpenAI, the company that launched ChatGPT. Bakalar warned against an overreliance on AI that could result in off-loading the learning process, and she stressed the importance of asking oneself, “why am I using AI to do this?”

René asserted that there is a difference between using AI to generate assignments outright and using it to support learning and productivity. He said that AI helps him test code, get a first-glance overview of concepts, and work through bugs. “I’ve used it to enjoy the things that I’m doing more,” he added.

René also noted that AI has allowed him to go more in-depth in his work: “I’m able to expand on software products that I’m building and to build full apps for class projects, rather than just having slideshows or websites. I can actually go into the weeds of trying to design a full system, rather than being limited by how much code I can write one day.”

That being said, he recognized the importance of not using AI to shortcut the learning process, or to become too focused on efficiency. “It’s nice to be able to be a little inefficient sometimes and enjoy the process of doing something rather than being like, ‘How can I optimize this to the maximum?’ Because at the end of the day you should want to enjoy the things that you’re doing.”

The concerns surrounding student AI use – of sacrificing learning for efficiency, and cognitive off-loading – are shared across disciplines at Swarthmore. However, whether the use of AI for an academic task constitutes off-loading seems to depend on the context and aim of the assignment itself. Licastro noted that banning the usage of AI in one class does not imply that it should be banned in another, and that each professor should work to set clear standards for what is allowed in their own courses.

Yi Hsuan Huang, a visiting assistant professor of political science who teaches “Artificial Intelligence, Ethics, and Politics,” told The Phoenix in an interview that she has seen AI use rise among students since the beginning of her teaching career. As a result, she has designed her curriculum to limit students’ ability to off-load assigned readings and the early stages of writing papers to AI. When it comes to lower-level tasks, “A lot of the tedious work and the repetitive work is actually helping to build the muscle to do the higher-order work,” she said. “It seems that there is this trend of shortcutting this process.”

One way Huang has adapted her curriculum is to have students complete writing assignments in class, where she can be sure they are not off-loading to AI. She also has students draft longer papers in class for the same reason, then asks them to use AI on the same essay and compare the products. Huang found that “anyone who has the basic knowledge to write the first draft, will realize or find that AI is actually lacking to kind of compose the specific voice that they want to convey.”

While it might be tempting to lean on AI to plan papers more efficiently, Huang intentionally disincentivizes this in her syllabus. On the other hand, she said, “If you want to use AI to play as a critique against you, if you want to brainstorm with it, if you want to add to [the paper] with it, that’s fine.”

Meanwhile, in the computer science department, Professor Sukrit Venkatagiri is teaching the course “Human-AI Interaction.” In the syllabus, Venkatagiri wrote that using AI tools “responsibly and thoughtfully is a way to increase your learning; but using them haphazardly may actually undermine your learning.” His AI policy centers on having students recognize and cite the external human labor inherent in AI content, explain how AI added to the quality and creativity of their work, and fact-check and cite any AI-sourced claims.

Venkatagiri wrote to The Phoenix that, in his course, “learning how to use AI responsibly for programming is one of the core learning goals, and I don’t stop students from using it. But we also have conversations about how it impacts our own ability to think and reason.” Elsewhere in the computer science department, courses’ AI policies tend to be stricter.

A Fall ’25 Phoenix faculty poll found that, when it comes to classroom restrictions on AI, 47% of faculty responded that they were limiting but permitting AI, and 33% said their approach was to ban it entirely.

When asked whether he felt that AI could or should be banned entirely, Venkatagiri’s response was “No and no.” Rather than an all-or-nothing approach, what seems to be happening across campus is a series of intentional discussions about how to approach the technology.

As of now, campus-wide guidelines recognize that: “Generative AI is not the first technology to present both opportunities and risks to academia. Similar to the advent of the Internet, smartphones, and social media, it too is a tool that can enable or hinder the mission of the college.” Both Michael and Licastro hope that going forward, there will be open and trusting conversations between students and faculty on how to consciously approach AI in the classroom context.

Editor’s Note: Damian René ’27 is The Phoenix’s Web Manager.
