I’m famous: AI and teaching edition

As an academic, it’s always especially cool to make it into the Chronicle of Higher Education. I got lots of my thoughts into the latest article on how professors are responding to LLMs.

Steven Greene, a political-science professor at North Carolina State University, explicitly encouraged AI use in his senior seminar this past spring. But his focus there was to have his students, who already understood the content, use generative AI (he prefers one called Claude) to improve the writing in their papers. He sees that as akin to asking a classmate to read over a final draft.

Even still, Greene wishes his students used AI more effectively by creating better prompts that would allow for more sophisticated feedback. “There was definitely less truly bad writing in the final seminar papers I graded,” he notes. “But over all, it struck me that most students massively failed to fully take advantage of AI to improve their papers.” 

Greene, a tenured professor who teaches two courses per semester, which he says gives him the opportunity to experiment, also doesn’t worry about assigning take-home exams. “My prompt literally said, ‘You can use the AI to help your writing, but in large part because of that, I am looking to see not just your knowledge of political parties, but your knowledge of … spring 2024 Steve Greene’s political-parties class.’”

He has continued to use take-home finals in an introductory class. He figures that if about three out of 20 students used AI, “that’s a cost I’m willing to pay at this point.” Still, he is careful, he says, to design questions that he thinks AI would do a mediocre job answering, at best. He asked them to describe, for example, what important concept for understanding how government and politics works is widely misunderstood by the American public. They also had to cite research, explain how democracy would work better if people understood the concept, and consider what they might have gotten wrong about the argument.

Like other AI users, Greene is bothered by how little discussion is taking place among the faculty and administration. “I get where that’s coming from, because there is such a strong tradition and culture of faculty autonomy,” he says. “And it’s like: ‘We’re not going to tell you how to teach your class. We’re not going to tell you how to use this tool.’ But that’s not enough. A lot of people are like: ‘I don’t get it. I need help. I want to understand how I need to evolve and adapt in response to this tool.’”

For the record, I actually massively worry about take-home finals, but, for now, I have been able to modify them enough that I am reasonably confident my students, not the LLMs, are doing the thinking. Also, the assignment described was a paper assignment, not a take-home final.

How we deal with AI in the classroom is very much a work in progress. One thing I am quite confident of, though, is that burying our heads in the sand gets us nowhere.

About Steve Greene
Professor of Political Science at NC State http://faculty.chass.ncsu.edu/shgreene

One Response to I’m famous: AI and teaching edition

  1. Tez says:

    An oral examination of each student is one (very labour intensive!) way to avoid all this. But… AI is “cheap” labour! So how long before I can hand Claude (or whomever) all my lecture notes, along with some extra documentation of the kinds of things I expect the student to know, and then have them individually examine each student? Perhaps sending me a report of what they found the student did or didn’t know and letting me decide the grade, or in (hopefully rare) cases letting me follow up more directly. Such “personalized examinations” I suspect would be much fairer and potentially less stressful than standard methods.
