Office Hours: Faculty on the Implications of Generative AI for Higher Education

November 20, 2025

As part of our regular Opinions series, “Office Hours,” we aim to feature a range of faculty voices on higher education and specific questions relating to Swarthmore College. We gather responses by emailing the entire Swarthmore faculty at least four days prior to publication. Each contribution is edited for clarity and syntax only. We believe that students, staff, and other faculty can greatly benefit from reading professors’ diverse perspectives, which many in the community may never have considered. In our fourth edition of this column, we asked professors to share their thoughts on the following questions:

What are your thoughts on the increasing prevalence of generative artificial intelligence (AI) and its implications for higher education and the liberal arts? Should there be a college-wide approach regarding the use of these tools by students and faculty at Swarthmore? How have you, in your own teaching and research, navigated issues concerning generative AI? 

Syon Bhanot, Associate Professor of Economics


Generative AI is here, and I think we cannot pretend it is not. It is being widely used by students and faculty. I think we have to adapt to the reality of AI, rather than think about ways to keep it out. My main approach has been to AI-proof my courses to the extent possible by changing my assessments of students. I do, however, openly encourage students to use AI when I feel it can significantly improve their learning experience (for example, for help with coding using software packages like Stata or LaTeX). I also think the college should carefully consider strategic ways to use AI to improve efficiency and reduce the extent to which faculty and staff are overworked. I think this could improve morale by reducing burnout and giving people the time to do substantive work that actually fulfills them (and not so much busywork).

Sibelan Forrester, Sarah W. Lippincott Professor of Modern and Classical Languages

Leaving aside questions of resource consumption and What It Does to Your Brain, I want to comment in particular on AI in translation and interpretation — areas in which we’ve been promised a future of seamless communication like that from the Babel fish in Douglas Adams’ “Hitchhiker’s Guide.” On the one hand, a paper English-Whatever dictionary or a booklet of useful phrases aimed at tourists can count as AI: you didn’t know the word, and when you looked it up you used the work someone else had done. Google Translate or whatever it is that Meta uses on Facebook can give you a fair idea of what someone has written or posted. It’s surely a lot of fun to bargain with someone in a market using your phone or to try communicating with someone in the youth hostel — if that works.

Plus, there are some very formulaic kinds of written or spoken discourse where what words mean is pretty predictable. I’d say that an article in most natural sciences might be amenable to AI translation — except that at present most natural scientists have been publishing in English anyway. (Worth thinking about that: were you expecting all your colleagues elsewhere to do the work for you, or were you just glad it happened that way instead of needing to pay someone to put your research into Mandarin?) In some areas of culture, the precise meaning of an original is less important than its sound and rhythm — I heard about a friend earning a bunch by rendering the songs of The Little Mermaid into Croatian.

Then: one big issue with AI is that it feeds on the vast mass of online material — the English-language internet is the biggest in the world, the Russian-language internet is (or was recently) the second-biggest, and an ordinary text in those languages might be translated reasonably well. Yet what if we need a translation from Estonian? (I can tell you: it renders every third-person singular pronoun as “he.”) What if a refugee from South America doesn’t speak much Spanish and can only describe what they have fled in a language that the AI hasn’t “eaten” yet?

The other big issue I see is that any figurative use of language will trip up the large language model (LLM). It can’t grasp puns (unless it’s “eaten” someone’s printed explanation), and it can’t penetrate the significance of fiction or poetry (unless it’s “eaten” someone else’s translation, in which case it’s just practicing plagiarism). Facebook regularly shows me its translations of Russian poems posted there, and a lot of its efforts are just awful — although supported by grazing on the second-largest internet in the world, remember. Not even counting the cases where it hallucinates: ask ChatGPT to write your own short biography and see what it picks up from people whose bios showed up on the same page with yours. These concerns don’t apply just to literature or the humanities but to any discipline where both imagination and accuracy are important: anthropology, history, political science, psychology…

Even if AI doesn’t plunge into slop as quickly as some of the pundits are predicting (yes, I subscribe to WIRED magazine), it will take much more than the current versions to move beyond these limitations. YOU can do things that it is not able to do.

Emily Gasser ’07, Associate Professor of Linguistics

I have no use for synthetic text extruding machines. And let’s be clear, that’s what they are: they don’t think, they don’t meaningfully “know” things, they can’t analyze or imagine. They simply string together likely sequences of words and phrases based on the materials they’ve ingested, without any insight or critical eye. The term, coined by Professor Emily Bender of the University of Washington, who has written extensively on commercial generative AI products and the LLMs they’re based on, is the most apt description I’ve heard, with an honorable mention to “spicy auto-complete.”

That lack of thought and knowing makes them useless for scholarship. Any response an LLM makes or text it composes must be scrupulously checked on every level, from basic facts to citations to “analyses.” In order to trust an LLM’s output, you must be knowledgeable enough about the material yourself to verify it, in which case you don’t need to ask in the first place. If you don’t know enough to verify, then you can’t trust the answer — maybe it’s given you something true, or maybe it’s done the equivalent of telling you to put glue in your pizza sauce. Care to roll the dice? You’ll get better results from a search engine, which, while being progressively enshittified by the insertion of AI results, still presents sources that you can evaluate and decide to trust or not. LLMs just say “Trust me.” I don’t. And I’m not putting my name to analysis or writing that I didn’t produce — that’s straightforward plagiarism.

Sure, it can write your term paper. In that case, why are you here? You’re not at college to memorize facts; you can always look those up. You’re here to learn to think critically and creatively, to hone your skills in analysis and argumentation, to cast a critical eye on a body of knowledge and a way of approaching it. If you outsource that to an LLM, then you’re paying Swarthmore ungodly sums of money for what, the privilege of eating at Sharples every night? A credential that you’re unable to make good on? The mental work that goes into putting your thoughts onto paper, revising your drafts, poring over sources and debating them with classmates is where learning happens. That’s the value in college, and what will serve you in the “real world.” Having sat through the class does little; having engaged with it is what matters. LLMs rob you of that.

There are now dozens of cases of lawyers using LLMs to write their case materials and ending up with nonexistent citations and incorrect facts, with real legal consequences. People have been hospitalized after eating mushrooms that AI apps assured them were safe. AI children’s toys have given instructions for how to find knives and light fires. Cases of AI-fueled psychosis have spiked, and at least eight deaths have been linked to ChatGPT alone. Far from being an objective provider of facts, LLMs reproduce and magnify the biases of their training materials. The data centers used to run AI models churn out greenhouse gases, use huge amounts of water, drive up electricity costs, and pollute nearby neighborhoods, with consequences for public health. A recent study showed that developers using AI actually took longer to complete their work. I have no interest in supporting any of that. But hey, it can also write me a shitty essay in two seconds flat! Thanks, but I’ll pass.

Sam Handlin ’00, Associate Professor of Political Science

While I have experimented enthusiastically with generative AI in my classes, my current view is that the increasing sophistication and ubiquity of these tools represent a major threat to higher education and the liberal arts.

Banning AI usage has always felt like a dead end. I want my students to engage with and understand the tools available to them in the world. Moreover, a complete ban is pragmatically impossible, since AI is now built into Google search, Microsoft Office, macOS, and other commonly used software. However, it is increasingly clear to me that we need to draw some sharp lines and defend them. Numerous recent academic studies have found that using AI for tasks like writing and reasoning leads to reduced cognitive function. Put simply, when we offload complex mental tasks to AI, we don’t work our “brain muscle” to the degree we otherwise would. If I allow students to run wild with AI, they may leave my class less cognitively capable than when they entered. This seems bad!

This burgeoning body of research underscores the need to develop a college-level policy that limits reliance on AI for complex academic tasks. Many institutions, including Swarthmore, have resisted taking this step, partly out of respect for the freedom of professors but also because doing so may be difficult. In my view, the time has come to develop a college-wide policy that includes outright bans on AI usage that involves significant cognitive offload, such as the use of AI for composing academic prose or outlining writing assignments and the use of programs like NotebookLM to understand and analyze texts or groups of texts.

We also need to confront the effects of AI usage on the K-12 student pipeline and the implications for Swarthmore. What happens when kids are leaning heavily on AI — to compose their essays, do their math homework, and engage in other forms of complex thinking — throughout their entire K-12 education? Fewer students will be prepared for the intense academic work and deep thinking expected at Swarthmore. Troublingly, these impacts may also be disproportionate across socioeconomic strata. Finally, variation in AI policies and enforcement may make high school grades an even less meaningful predictor of academic readiness than they already are. In sum, I fear that “Swarthmore-caliber” students will be fewer in number, more likely to come from privilege, and harder to identify without standardized testing.

Emad Masroor, Visiting Assistant Professor of Engineering

I believe that the wide availability of generative “artificial intelligence” is an impediment to student learning. This is not to say that generative AI tools are not useful, but simply to say that, on balance, they are harmful in an educational context and likely to lead to a serious deterioration of students’ ability to write well, think critically, and read deeply. By short-circuiting the difficult process of learning, these AI tools give us the illusion of knowledge while in actuality offering only a simulacrum of the real thing. After all, if you only know how to do something with the help of a chatbot, do you really know how to do it? And, perhaps more relevant for students entering the job market, why would anyone employ you for a “skill” that anyone else with an internet connection could just as well claim to have?

The path from ignorance to knowledge is not an easy one. It is challenging, and struggling against that challenge is, pretty much, the entire point of the educational enterprise. Forgive me for having a Luddite opinion here, but I think it is quite possible that some technology is actually bad for society, and that innovation can be regressive instead of progressive. A mass plagiarism machine that can, without students having to lift a finger, do their homework, compose their essays, write their code, make their presentations, summarize their readings, and even answer interview questions in real time is, in fact, just as bad as it sounds.

While it is true that these tools are proliferating in many professions — leading some to contend that colleges must “prepare their students for the AI age” — I believe that faculty at a liberal arts college should exercise discernment in their desire to keep abreast of this latest fad. To students, I would say that no matter how much the AI maximalists would like to tell you otherwise, there will never be any substitute for thinking, reading, and writing. These three activities are essential to the formation of young people and have been the bedrock of education in civilized societies for thousands of years. To the extent that a new tool promises to alleviate the burden of having to think, to read, or to write, that tool offers us only a devil’s bargain that will leave us poorer of mind and spirit and will rob our students of a true education.

Donna Jo Napoli, Professor of Linguistics and Social Justice 

AI has ruined the internet. It puts up blocks as you try to find information. Thank heavens it hasn’t destroyed scholar.google.com … yet.

Look, AI is useful when you don’t want to think your way through something.

So it’s useful for plenty of things.

But most of us in the Swarthmore classroom are fascinated by thinking our way through things.

So if you’re tempted to use it as a replacement for thought in your classes, why are you taking those classes?  

Take classes on topics you love, topics that intrigue you. Prowl your way through them. Then what you learn belongs to you forever, for it helps to shape who you are.

Federica Zoe Ricci, Assistant Professor of Statistics

Generative AI tools can definitely be helpful to us as scholars, teachers, and learners: for example, they can help us finish dull tasks faster (e.g., changing the format of an assignment from LaTeX to HTML), point us to resources (e.g., books or articles), or polish our emails. But systematically relying on AI because it feels “comfortable” prevents us from gaining skills and confidence in our abilities. It also deprives us of opportunities to exercise our social muscles, which are key to our professional success and, even more importantly, to our happiness as human beings. I often wonder whether generative AI has had a net positive or negative impact on higher education: for the moment, I suspect that the damage its use can cause outweighs its potential benefits. What I am quite strongly convinced of is that the successful student in our times possesses one AI-related skill: they know how to interact with generative AI without compromising the meaning and value of their learning experience. As a college, we must learn how to help students develop this skill. In my teaching, generative AI has affected how I choose assessment policies. I feel obligated to give greater weight to in-class examinations, for which students need to prepare to face a problem by thinking independently and without relying on AI — which is part of what they are in college for.

Warren Snead, Assistant Professor of Political Science 

The prevalence of generative AI is deeply troubling in the social sciences and humanities (I won’t speak to fields outside my own). Early scientific research shows that writing with AI significantly reduces human brain activity compared to writing without AI. There may be a misconception that the point of assigning essays is for students to produce an end result — a paper on a topic of a particular length. Ultimately, that is secondary. The value of writing assignments is that they push students to think and to struggle to articulate ideas clearly. Writing is supposed to be difficult. Much like physical exercise, it is the process, not the outcome, that is most important. Reliance on ChatGPT, in any form, reduces the amount of thinking students do. While there are many reasons to attend college, a fairly important one is to do some thinking. I am tremendously grateful that these “tools” were not available when I was in college or graduate school. I believe departments should be empowered to set their own AI policies; top-down mandates like Ohio State’s may inadvertently prove catastrophic to the mission of higher education in the United States.
