When Lisa Meeden wrote her Swarthmore College admissions essay, it took her less than a minute. More accurately, it took an artificial intelligence (AI) tool 30 seconds to invent a fictitious student and their experiences, including a service trip to Guatemala and membership in several made-up student clubs, and to write an original essay. Meeden, not an applicant but a Swarthmore computer science professor who studies developmental robotics, was testing how skilled AI is at applying to college, a pressing question in the first application cycle since the release and spread of ChatGPT. Meeden was not impressed with the results.
“Right now, I think if someone turned in a purely ChatGPT-written essay, it wouldn’t be a very good essay and it wouldn’t be impressive,” Meeden said. “If I was an admissions officer, it wouldn’t make me say, ‘I want to bring that person in.’”
Though the essay was impersonal and broad, AI in admissions continues to improve, and not just on the applicant’s side. AI-powered processing services promise to lighten the workload of college admissions teams by reading transcripts and helping transfer students understand credit conversions. Sia, an AI program developed by the software and technology company OneOrigin Inc., promises to process at least 80% of transcript data in seconds with no training or intervention.
“There’s no other system out there in the world today that actually does what we are doing,” said Abhinand Chincholi, president and CEO of OneOrigin, in an interview with The Phoenix. “Every engine or anyone who says that they’re processing transcripts, they require a lot of training and the implementation cycle is crazy. … [Our] engine is so robust, and we’re proud. We have built something that’s so incredible. Today there are large companies who are actually behind us for partnerships because it will take ages for them to build this kind of system.”
Although OneOrigin started with admissions and transfer data, its broader focus is on creating tools for education because, as Chincholi put it, “If you want to change the perspective in the world, you start with education.” Currently, OneOrigin is developing a “brain” that can help tutor students, create and grade assignments, and assist in essay writing. Despite these developments, Chincholi stresses the importance of continued human intervention and of not relying solely on AI for decisions.
“We always love our systems,” Chincholi said. “Can they make decisions? Of course, they can make decisions because we have built the system like that. But do we force it to make decisions? No, not yet. Because we still want control to be with humans. We are not trying to replace people in the university today. We are trying to help them make their job more efficient.”
Swarthmore’s admissions department does not use AI, apart from Lucretia Bott, a customer service chatbot. Admissions has no guidelines for or against applicants using AI, but it warns against using AI as the sole essay writer, viewing purely AI-written essays as plagiarized.
“Students who use AI need to recognize the limitations of current AI engines as sophisticated language models that use statistics to determine how they ‘speak’ or ‘write,’ and not as ‘brains’ that are using critical thinking to make decisions or provide students with a human level of guidance,” said Isthier Chaudhury, director of Swarthmore admissions.
Partially in response to the spread of AI, Swarthmore is also launching an optional video essay as an application component, allowing students to offer unscripted thoughts or experiences. According to Chaudhury, while reviewing applications, the admissions team continues to look for the “core qualities that produce graduates who have the skills and curiosity to perform work that cannot be automated.”
Meeden views AI as having more of an impact on the classroom than on the admissions process. She has seen fellow professors both embrace it and ardently discourage it. In her experience, AI is skilled at editing essays and at writing the smaller functions students assemble into programs, but it lacks the sophistication to piece those functions together or to develop critical connections. Regardless, Meeden sees it as important for students to learn on their own before using AI.
“I think the whole point of a student coming to Swarthmore is that they want to learn how to do these things themselves,” Meeden said. “I think partnering with AI eventually on the job to accomplish a task could be a good option in the future, but as a student learning how to think about these things, it’s really important to do the work and do it yourself and really understand it yourself. You can’t be a good partner until you understand how to do it on your own first.”
During the Spring 2024 semester, Meeden will co-teach an Ethics and AI class with Associate Professor of Philosophy Krista Thomason, sponsored by a Responsible AI grant from the National Humanities Center. The course will cover issues affected by AI, such as the environment and justice, combining technological and humanities perspectives.
“I think [the ethics of AI] was always just more of a thought experiment before,” Meeden said. “People knew that AI was getting better and it was this distant future we had to worry about, but now it’s right here right now. I think there’s more urgency to study it now because of that.”
Despite fears of AI taking over the world, Meeden stresses the importance of remaining calm and taking preventative action.
“There’s a lot of hysteria and concern,” Meeden said. “It’s still very early on and we still have time to regulate AI and I think that that would be an important step. Our government tends to move very slowly and it’s complicated to enact laws, but I think that’s probably the best approach moving forward. We have to put in place guardrails. What is it we’re willing to let AI do and what is it we’re not willing to let AI do?”
In an earlier interview with Higher Ed Dive, Chincholi stressed the danger of allowing AI to make decisions. In his interview with The Phoenix, he shifted to a more open view of AI-powered decision-making, suggesting that new uses will come as the technology improves. In fact, the role of AI in admissions could be growing.
“I [talked about] the dangers of using AI, but I think it was over-exaggerated,” Chincholi said. “It’s just [that] AI is not there yet to completely take over and start making decisions. But we are not afraid or skeptical about allowing AI to make decisions. It’s just [that] when you’re talking about students, their career is on the line. We always want an extra set of human eyes to actually validate it.”