Back-to-school for higher education sees students and professors grappling with AI in academia

America’s colleges have an AI dilemma, but students are embracing it.

September 12, 2023, 1:06 PM

As millions of students return to school this fall, ABC News spoke with students and professors learning to navigate the influence of generative artificial intelligence.

ChatGPT, which launched in November 2022, is described on its website as an AI-powered language model "capable of generating human-like text based on context and past conversations."

At the University of California, Davis, senior Andrew Yu found himself using AI to help outline an eight-page paper for his poetry class. It needed to be in the style of an academic memorandum, which Yu had never written before -- so he turned to ChatGPT to help him visualize the project.

"I think it's kind of ironic or it's a really funny thing because I'm an English major," Yu told ABC News.

Yu says he is careful to use ChatGPT to template and structure his assignments, but not to go beyond that. "I feel like it's not authentic to me in the way that I write, so I just use it as a skeleton, like an outline," he said.

"Sometimes we get a little stuck and need extra help," said Eneesa Abdullah-Hudson, a senior at Morgan State University, a historically Black college in Baltimore. "So just having this tool here to help guide us, so we can add our feedback with it, it's definitely helpful."

PHOTO: Eneesa Abdullah-Hudson is a senior at Morgan State University in Baltimore, Maryland. (Courtesy Eneesa Abdullah-Hudson)

Among U.S. adults ages 18 to 29 who have heard of ChatGPT, only 41% have used it, according to Pew Research Center data. Now, professors across the country are learning alongside their students about the risks and rewards of generative AI in education.

Dr. Joel Davis, a professor at the University of Florida's Warrington College of Business, said AI innovations like ChatGPT could serve as a tool for core language arts subjects.

"There's a natural progression," Davis, who researches the integration of generative AI solutions, told ABC News. "New tools like the calculator, like Grammarly and editing tools that came out a number of years ago that made all of our writing better, including mine, right? Those are things that are just going to keep on coming. And, we can't stop them from coming, but it's up to us to decide how to integrate them appropriately."

Despite using AI himself, Yu says he is cautious and concerned about the quality of AI responses.

"It's definitely like a risk," Yu said, adding "I feel like it can be beneficial. You just have to use it responsibly and make sure that the majority of your wording is you, and none of it is by the AI."

Does AI promote or detect cheating?

Most schools are embracing the novelty of generative AI use by teachers and students, but some faculty members are already dealing with it interfering with assignments in their classrooms.

Davis has demoed AI's abilities on his own exams.

"It does fairly well on my exams," he said. "It can recite those answers pretty well, but I don't view that as a ChatGPT problem; that's my issue."

Furman University Professor Darren Hick had a hunch that one of his students used AI to write a final paper last December.

Hick said the student sat in his Greenville, South Carolina, office, hyperventilating, before confessing to using ChatGPT.

"We're not ready for it [AI chatbot cheating]," Hick, who gave this student a failing grade, told ABC News. "We're not prepared to deal with this. It's just harder to catch. That's always been my concern."

Furman's current definition of plagiarism is "copying word for word from another source without proper attribution." Instances in which AI use is detected could instead be considered "inappropriate collaboration or cheating," a school spokesperson said.

An OpenAI spokesperson said the company behind ChatGPT has always called for "transparency" around generative AI use. Davis, at the University of Florida, told ABC News that assessments need to change if educators are worried about students using chatbots to cheat.

But Jessica Zimny, a junior at Midwestern State University in Wichita Falls, Texas, told ABC News she earned perfect scores on her political science discussion posts until her account was flagged for suspected AI use.

"I noticed that when I logged in, it said that I had a zero for that assignment, and to the right of it was a note stating that Turnitin detected AI use on my assignment," Zimny said.

PHOTO: Jessica Zimny stands on a bridge above a pond at Olbrich Botanical Gardens in Madison, Wisconsin, on July 31, 2023. (Courtesy Jessica Zimny)

Turnitin, a popular tool schools use to check student assignments for plagiarism, also searches text for signs it was generated by AI. When its detector flags possible AI use, the company recommends that teachers talk with their students, which it says will usually "resolve" the issue one way or another.

Even though Zimny claimed she didn't use AI, she said her professor failed her because the program flagged her assignment.

"It's just really frustrating," Zimny said. "I just hate the fact that there are actually people out there that do use AI and do cheat to where you have to get to the point where there have to be detectors that are made that can falsely accuse people who aren't in the wrong."

ABC News reached out to Midwestern State for comment on Zimny's failing grade. Interim Provost Dr. Marcy Brown Marsden said she was unable to speak directly to a student's academic record or appeals due to FERPA regulations. A policy posted in the school's public directory states that if Turnitin.com detects an assignment was completed using AI, the student will receive a zero for that assignment.

However, Turnitin leaves the final decision-making to the instructor.

"Teachers should be using similarity reports and AI reports as resources, not deciders," Annie Chechitelli, chief product officer at Turnitin, told ABC News in a statement.

Aside from plagiarism concerns, there is also worry about the security of data entered by students and teachers into generative AI tools.

"It is indeed entirely possible, and indeed likely, that bad actors will use open source large language models - which students may well use – to obtain sensitive personal data, for the purposes of targeting advertising, blackmail, and so forth," "Rebooting AI" author Gary Marcus told ABC News.

New York City Public Schools placed ChatGPT on its list of restricted websites, though it is still accessible if schools request it. However, in a statement published by Chalkbeat, New York City Public Schools Chancellor David C. Banks said, "The knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial."

Is AI avoidable?

Generative AI has become a new frontier for educators and students to navigate. Some universities say there's no one-size-fits-all approach, while others have strict guidelines to combat unethical use.

Carnegie Mellon University's (CMU) Academic Integrity Policy prohibits "unauthorized assistance," which would include generative AI tools unless explicitly permitted by the instructor, according to a recent letter published by the school's leaders. Even so, most colleges, CMU among them, are still embracing generative AI use by teachers and students.

"I think for probably the first time, nine months ago when OpenAI put out the ChatGPT, it [AI] sort of allowed an average person to kind of touch and feel it, to explore it, to try it," AI Scholar and CMU Professor Rayid Ghani told ABC News. "It wasn't something we could touch and feel and do. We weren't using it ourselves."

PHOTO: Carnegie Mellon University (CMU) Professor Rayid Ghani is the co-lead of the Responsible AI Initiative at CMU. (Carnegie Mellon University's Heinz College)

Abdullah-Hudson said she uses ChatGPT to check her work, calling it "just like an extra helping tool."

And OpenAI invites teachers to use its technology. The company's Teaching with AI page suggests prompts to help teachers come up with lesson plans. Abdullah-Hudson, who has embraced AI, believes it's here to stay.

"Living in the world today, there's no way around it," Abdullah-Hudson said. "So there's no way to avoid the AI, might as well learn to use it instead of being afraid of it."