Editor’s Note: What follows is a report on the state of artificial intelligence policy at Swarthmore College, Emporia State University, and Oberlin College, published in collaboration with The Collegiate Journalism Network.
Emporia State University
Emporia State has taken a number of steps to address AI use on campus, particularly with regard to its use as a tool for teaching and learning.
Last year, ESU introduced a state-mandated AI policy based on guidelines from the Kansas Board of Regents, which governs the state’s six public universities, and the state’s Office of Information and Technology Services. The policy sets standards for ethical and responsible AI use, requiring that AI be used in ways that maintain academic integrity and do not compromise data privacy or security.
ESU’s AI Task Force has been largely responsible for making recommendations about ESU’s AI policy and maintaining AI-related resources, tasks it completed earlier this year, according to Amy Sage Webb Baza, head of the task force. The task force also spearheaded several AI trainings.
Students have access to a number of AI resources in Canvas through the Canvas Catalogue of Instruction AI module, which includes AI training and a resource repository with guidance and support for using AI. Faculty-specific AI guidance is located in the Faculty Canvas Catalogue of Instruction.
A change to ESU’s AI policy was also approved earlier this semester, Webb Baza said. The new policy, which takes effect in the Summer 2026 semester, will require faculty to address in their syllabi how AI may be used in their courses.
Oberlin College
Oberlin College has undergone a “Year of AI Exploration.” The initiative, supported by the President’s Office and divisional deans, has been framed by the College as a “campus-wide examination of artificial intelligence: the possibilities, the challenges, and our commitment to be at the forefront of understanding.”
Since the fall, the college has held numerous panels regarding AI, launched a Critical AI Studies Minor, collected data on AI usage among students, and granted faculty and staff access to premium versions of ChatGPT and Gemini. Oberlin’s Honor Code Charter Revision Committee proposed changes surrounding acceptable AI use.
While the college has largely embraced AI this year, students and faculty have had more mixed feelings. At the end of the fall semester, many professors expressed uncertainty about how to handle AI, citing a lack of clear direction or guidelines. While some have reverted to time-tested methods, such as blue book exams, others have actively embraced the new technology in the classroom, encouraging students to use it in assignments or for guidance.
In the fall, the Luddite Club, a student organization, became one of the most vocal groups on campus against AI. The club’s leaders wrote a letter, published in The Oberlin Review, on a 70-year-old typewriter to President Carmen Twillie Ambar, imploring the president and the college to take a stand against AI use. Despite this, the college has continued forward with its AI endeavors, including granting AI micro grants to faculty on April 9.
Swarthmore College
As of April 2026, Swarthmore College has not imposed a school-wide policy on the use of generative AI in academic work. The college’s Information Technology Services (ITS) webpage, however, now displays a set of guidelines addressing privacy, purchasing, and accountability questions related to AI technology. The guidelines forbid providing “non-public, confidential, proprietary, or otherwise sensitive information to any AI tool,” unless the college has a specific contract with the service’s provider governing the use of that data. They also stipulate that all purchases of AI tools using college funding must be approved for security by ITS. Provost Rich Wicentowski recently told The Phoenix that the college had convened a committee of faculty and staff members to guide future changes in AI policy.
In 2022, Swarthmore was among fifteen institutions to participate in the National Humanities Council’s Responsible Artificial Intelligence Curriculum Design Project, a program aimed at developing coursework around ethical engagement with AI technology. Courses including Artificial Intelligence, Ethics, and Politics in the political science department and Special Topics: Human-AI Interaction in the computer science department have since been offered to students.
Numerical data from Phoenix polling of students and faculty in 2025 reflects divided opinion on AI. 55% of students reported using AI for academic work once a month or less, while 28% reported weekly use, and 15% reported use at least once daily. Roughly half of faculty responded that they would permit but limit AI in their classes, while a third favored an outright ban. A Phoenix survey of Swarthmore faculty in late 2025 went further in revealing varied opinions about the benefits and dangers of generative AI, the magnitude of its impact, and the degree to which coursework should incorporate AI. Several faculty members expressed appreciation for the ability to develop AI policies at the personal or departmental level and warned against the pitfalls of a college-wide AI mandate.
In an interview with The Phoenix published this week, President Val Smith asserted the heightened importance of the liberal arts in the age of generative AI:
“This is, I think, an opportunity to speak powerfully about the importance of the arts and humanities at a time when we have seen both a striking emphasis in terms of technology but also some concern about the extent to which AI might be displacing human talent in that area. So I want to be making the case for the full range of disciplines and ways of knowing that the liberal arts embraces and why it’s important for us to educate students to have that kind of broad habit of mind, where they can develop ethical values, and creativity, collaboration, communication; a set of skills.”
