Amna Nawaz:
This year’s senior classes at colleges across the country are the first to have spent nearly their entire college careers in the era of generative AI, a type of artificial intelligence that can create new content such as text and images.
As the technology improves, its output becomes harder to distinguish from human work, and it is shaking academia to its core, raising some very big questions.
Special correspondent Fred de Sam Lazaro has the story for our Rethinking College series.
Megan Fritts, assistant professor of philosophy, University of Arkansas, Little Rock: And the principle of humanity says: treat all people as ends in themselves, never merely as means.
Fred de Sam Lazaro:
About two years ago, Megan Fritts, a philosophy professor at the University of Arkansas at Little Rock, began noticing something unusual in her students’ writing.
Megan Fritts:
You suddenly get an essay or a test answer, some kind of assignment from a student whose normal writing you know, and you get back something that sounds like an official business document or a piece of technical writing, writing that sounds very polished, but very impersonal.
Fred de Sam Lazaro:
Impersonal because it was probably not written by a person. This was the beginning of a turning point for higher education, as generative AI had swept not only her campus, but college campuses across the country.
A survey last year found that 86 percent of students now use AI tools like ChatGPT, Claude AI, and Google Gemini for schoolwork. The reason generative AI has spread so quickly on college campuses is not difficult to understand. It has turned assignments that once took hours, even days, of writing and revision into something that can be done in minutes.
For example, I can ask ChatGPT to write a 1,000-word essay on the topic "Is it OK to lie?" Drawing on a huge amount of data, it instantly predicts and generates text on the topic.
Fritts says the impact has been deeply disruptive.
Megan Fritts:
If I'm reading ChatGPT's writing instead of my students', I have lost the single best tool I have for seeing whether I am being effective in my capacity as an instructor.
Brian Berry, Vice Provost, University of Arkansas, Little Rock: We really need a framework where people can use these things and innovate while minimizing risk.
Fred de Sam Lazaro:
University policymakers have been doing their utmost to keep up.
Brian Berry:
I think the realization over the last year and a half is that technology is outpacing our ability to detect it.
Fred de Sam Lazaro:
Vice Provost for Research Brian Berry leads one of UA Little Rock's committees charged with creating clear campus-wide policies on AI.
Brian Berry:
I think it really comes down to helping students understand what’s at risk, that if they use AI correctly, it’s literally the most powerful tool they’ve ever had the chance to use, and it will make huge differences. But if they use it incorrectly, it can short-circuit their learning process.
Fred de Sam Lazaro:
The university is finalizing a policy that will allow professors to determine what AI uses are acceptable in their classrooms, as long as they clearly spell it out in their syllabus.
But for Fritts, who has a strict no-AI policy, identifying AI-generated work has been complicated and time-consuming.
Megan Fritts:
So Phrasly is one of the programs that I use. If I suspect AI use, the first thing I do is run detection software. I actually use eight different detection tools.
Fred de Sam Lazaro:
If her suspicion is confirmed, she meets with the student.
Megan Fritts:
And if they can talk about what they wrote about, that’s fine, but often they can’t.
Fred de Sam Lazaro:
It sounds like it’s annoying and a lot more work for professors like you.
Megan Fritts:
It certainly cuts into my life quite a bit. If anything, it has made teaching sometimes seem like police work.
Fred de Sam Lazaro:
And these detection methods are not foolproof. Students online say they're caught in the middle.
Woman:
I have been falsely accused by my university of using AI to write a paper.
Woman:
My final paper was flagged as 60 percent AI.
Ashley Dunn, recent graduate, Louisiana State University:
We may be about to find out if I’m being unfairly expelled from college for…
Fred de Sam Lazaro:
Ashley Dunn was a senior at Louisiana State University when she was accused of using AI to write a short essay for a British literature class, after a detection tool flagged her work last year.
Ashley Dunn:
And I thought: Am I going to fail this class? Do I get a zero? Every university takes plagiarism and things like that very seriously. So I just went crazy.
Fred de Sam Lazaro:
After communicating with her professor, Dunn says, she ultimately received an A on the assignment. But the reaction to her TikTok video suggests this is a widespread problem.
Ashley Dunn:
A lot of people ended up commenting on my video saying that they had experienced the same thing, but they weren’t really that lucky and ended up getting zeros or failing the class.
Some people recently made videos about, oh, my professor said my essay was AI because I used an em dash. But that's just a normal way of writing, especially at the university level.
Lori Kendall, professor at Ohio State University:
You are being asked to go out and venture into gen AI.
Fred de Sam Lazaro:
Not all schools are anti-AI. Some are even looking for ways to embrace it.
Lori Kendall teaches entrepreneurship at The Ohio State University Fisher College of Business.
Lori Kendall:
When Gen AI came out, me and all the other instructors were like, oh great, what now? Do we allow AI? Do we not allow AI? And the reality is, you know what, they’re going to use it anyway.
Fred de Sam Lazaro:
She now encourages her students to use AI to critically examine their original work and as a learning tool.
Rachel Gervais, student at Ohio State University:
A lot of people may use AI just to complete assignments, to plagiarize, but I like to use AI only to better understand the material.
Fred de Sam Lazaro:
Rachel Gervais is a freshman majoring in air transportation.
Rachel Gervais:
I will often use AI to generate questions on a topic, so that I can not only get a better understanding of the actual material, but also test myself and see what else I might need to focus on.
Lori Kendall:
If you don’t use AI or the next technology to be more effective, you won’t be competitive in the job market. The labor market is changing under your feet.
Ravi Bellamkonda, Executive Vice President and Provost, The Ohio State University: As Chief Academic Officer, I have the authority to decide on academic integrity issues, honor codes and violations.
Fred de Sam Lazaro:
Ravi Bellamkonda is executive vice president and provost at The Ohio State University. He says one alleged violation that landed on his desk last year, a student accused of using AI, stuck with him. It was a case of cheating, he says, but it got him thinking.
Ravi Bellamkonda:
What if there is a technology that can indeed allow our students to produce very high quality work? Shouldn’t we investigate this a little further?
Fred de Sam Lazaro:
Bellamkonda led Ohio State’s new AI fluency initiative, which requires all students from all academic disciplines to learn and use AI tools.
Ravi Bellamkonda:
The trick, as with any human interaction with technology, is to figure out what we can extract from the technology and where we should add value. Ohio State wants to be at the forefront of creating those rules.
Fred de Sam Lazaro:
This has led to experiments across disciplines, such as music professor Tina Tallon’s AI and music class, which explores innovative applications of the technology.
Tina Tallon, professor at Ohio State University:
I always start the lesson by asking them to think about a challenge in their field. At that point, we’re not even talking about AI. I just want them to identify something that they’ve run into or that their students or their colleagues have.
Fred de Sam Lazaro:
A member of her class, tuba instructor and doctoral candidate Will Resch (ph), uses AI to analyze the airflow in his instrument over thousands of repetitions. The data helps students play the perfect note.
Another, Natalia Morano-Britrago (ph), is a music graduate student who studies how babies acquire musical knowledge. She used to spend hours sifting through subjects' home recordings, listening for moments when parents or caregivers sang or hummed around the child. Now AI does that for her.
Tina Tallon:
If we critically examine the tools we engage with and are actively involved in their development, I think we can do some pretty incredible things.
Fred de Sam Lazaro:
But, inevitably, these tools are also upending academia and the jobs students hope one day to hold.
Ravi Bellamkonda:
How do we move through a transformative moment like this, with the disruptions it will cause, and yet in a way that ultimately adds something to us as a society, that improves our lot as human beings?
Fred de Sam Lazaro:
A question without a clear answer, he says, but one that students should help address.
For the “PBS News Hour” I’m Fred de Sam Lazaro in Columbus, Ohio.
