ChatGPT is now a routine part of how many students read, write, and study, which means the old rules for grading no longer match what is actually happening in class. If teachers keep pretending every essay is written alone at a keyboard, grades will say more about who hides their tools than who understands the work. A better path lies in reshaping assignments, feedback, and policies so grades capture thinking, not just polished text.
From “catching cheaters” to grading thinking
The first shift many teachers are making is mental: moving from hunting for misconduct to designing work that reveals how students think. When educators ask what makes an assignment "AI-resistant," they are really asking how to design assessment tasks that cannot be outsourced to a tool. That means grading the process, the choices, and the reasoning, not only the final product on the screen.
Several assessment guides now urge instructors to rethink familiar tasks that a chatbot can complete easily, such as the classic take‑home essay. Rather than assuming every typed answer is original, teachers are encouraged to require planning notes, drafts, and reflections that show the steps behind the work. In this model, a grade reflects how a student approaches a problem, not just how well a language model can finish a paragraph.
Designing AI‑resistant tasks without going backward
Some schools respond to ChatGPT by banning devices and returning to blue books, a move one guide jokingly calls "Going Medieval." The appeal is easy to see: pen‑and‑paper tests feel safer because they happen in a controlled room. Yet even those traditional assessments can miss deeper skills if they only reward memorized facts. The more promising trend is to treat in‑class writing and oral work as one piece of a larger picture of learning.
Another response focuses on what students can do that AI cannot, such as drawing on their own lives. Advice on AI‑resistant tasks often starts with requiring personal reflections or experiences, because a chatbot has no childhood, no neighborhood, and no feelings about a book or a lab. When assignments ask students to connect content to their own lived experiences, the work becomes much harder to fake and much more interesting to grade.
Making expectations about AI use explicit
Even the best task design fails if students are guessing about the rules. Guidance on ChatGPT in class stresses that teachers need to set explicit expectations for each assignment, including whether AI tools are banned, allowed for brainstorming, or permitted for full drafts. Clear rubrics that separate "no AI," "AI as helper," and "AI as partner" are a basic fairness issue, because students should not be punished for honest use that was never discussed.
Students themselves are asking to be part of these decisions. One account of working with 200 teenagers reports that students need a seat at the table when schools set AI rules, and describes one student who moved from fear to thoughtful use after being invited into that conversation. When I grade, I want to know not only whether AI appears in the work, but whether its use matched a shared, transparent agreement.
Leaning on authentic, collaborative assessment
AI tools produce fluent text, but they still struggle with messy, live human interaction. For that reason, some colleges now highlight AI‑resistant assignment types such as live debates and mock trials, where students must respond in real time. These activities foreground listening, argument, and improvisation, and they give teachers rich evidence to grade that no chatbot can generate on the spot.
Group work is part of this trend. One library guide argues that collaborative activities and discussions can limit copy‑and‑paste answers, noting that while students might consult AI, they still have to negotiate roles and ideas when creating podcasts or videos. When I grade a podcast script or a group‑built video timeline, I am really grading how students pull together sources, voices, and visuals for a real audience.
Rewriting the essay assignment for the AI era
Few tasks are more exposed to AI misuse than the take‑home essay. One philosophy instructor admits that students in philosophy classes often write on well‑known figures, but those prompts are especially vulnerable to AI misuse. A simple fix is to ask students to apply those thinkers to local issues, personal dilemmas, or niche case studies that will not appear in a generic chatbot answer.
Teachers are also learning that AI is not always as strong a writer as it looks. One report notes that when a chatbot is asked to write an essay about a book, it often misquotes the reading or invents passages. When I grade literary analysis now, I pay close attention to how students handle quotes and page numbers, because accurate evidence has become a strong signal that a human actually read the text.