A veteran teacher explains how to use AI in the classroom the right way

When ChatGPT launched in 2022, Jen Roberts had been teaching middle and high school students for more than 26 years and was running on fumes. The pandemic had pushed many educators into burnout, but where others saw artificial intelligence as a threat—a technology that facilitated student cheating—Roberts saw a tool to help her survive.

An English teacher at Point Loma High School in San Diego, Roberts is a pioneer of educational technology. She has taught with one-to-one laptops since 2008, years before most schools adopted them. When generative AI emerged, she was quick to test whether it could make feedback faster and grading fairer.

Scientific American spoke to Roberts about how she guards against the misuse of AI and why she believes the technology can help teachers battle their own biases.

[An edited transcript of the interview follows.]

Many teachers see AI primarily as a cheating tool for students. You saw it differently. How did you start using it?

I’ve found it’s very effective for feedback. When a student takes an [Advanced Placement (AP) English Language and Composition] test, [the free-response section of] their test is scored by two people. And if those scores disagree, there’s a third score. I thought: What if AI is the second scorer? If I grade it and have the AI grade it, I see if we agree. If we disagree, the AI and I have a little chat about which one of us is right.

In my time-constrained world, my comment might be brief or terse. What the AI comes up with is usually spot-on. I like to say that AI doesn’t really save me time; it just lets me do more with the time I have. When I’m using AI-suggested scores and feedback, my students get their writing back in days instead of weeks. That means they’re revising more and revising better.
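
[Editors’ note: For readers curious what such a double-scoring check could look like in practice, here is a minimal sketch using Anthropic’s Python SDK. The model name, the rubric placeholder and the 1-to-6 scale are illustrative assumptions, not a description of Roberts’s actual setup.]

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RUBRIC = "..."  # paste the scoring rubric here

def ai_score(essay_text: str) -> int:
    # Ask the model for a holistic 1-6 score and return it as an integer.
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model name
        max_tokens=5,
        messages=[{
            "role": "user",
            "content": (
                "Score this essay from 1 to 6 using the rubric below. "
                "Reply with the number only.\n\n"
                f"Rubric:\n{RUBRIC}\n\nEssay:\n{essay_text}"
            ),
        }],
    )
    return int(response.content[0].text.strip())

# Compare the teacher's score with the AI's second read.
teacher_score = 4  # the teacher's own score for this essay
second_score = ai_score(open("essay.txt").read())  # de-identified text only
if teacher_score != second_score:
    print(f"Scores disagree (teacher {teacher_score}, AI {second_score}); take another look.")

[As Roberts notes later in the interview, any student work sent to an AI service should first be stripped of personally identifying information.]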

I also use education-specific tools. MagicSchool provides a student-facing option where I can add my rubric and assignment description and then give students a link. I’ve seen students put their work into that four, five, six times in a single period. It’s immediate feedback that I can’t provide to 36 students simultaneously.

How do you guard against students using AI to write for them?

Nothing is a magic bullet. It’s a combination of tools and strategies that psychologically convince my students I will know if they use AI inappropriately. I require them to do all writing in a Google Doc where I can see version history. I use Chrome extensions to examine the writing—I can watch a video playback of their writing process. I also use the old-school method: you’re going to bring your writing to your writing group. Students are cavalier about turning in AI writing to me. But if they have to bring it to a writing group, read it aloud to peers and explain what they wrote and why, they won’t do that.

I show them ways to use AI responsibly. You can’t use it to write for you, but you can use it for feedback, sentence frames, outlines and organizing thoughts. If I show ethical use cases, they’re less likely to use it unethically. I do an activity where I give them three paragraphs and ask which one is AI. They all immediately know. I say, “You could tell, so I can tell.”

A lot of the hype around AI in education focuses on customized lesson plans. Is that the reality?

AI lesson plans are generally crap. I don’t use AI often for lesson planning. I use it specifically to build materials. There’s a Chrome extension called Brisk that lets me take something students are reading, design learning objectives for it and create an interactive tutor to show students how much they understand.

Also, I can take a page that’s a wall of text, give it to [Anthropic’s AI assistant] Claude and say, “Help me rewrite this. Improve the clarity, use color-coding, emojis.” Now I have a page that’s beautiful and easy to understand, with colored boxes around important parts. When students understand what they’re supposed to do because directions are clear, that’s really helpful.

In what ways does AI help with the cognitive burden of teaching?

Lots of ways. I often need to come up with a writing prompt. Am I capable of that? Absolutely. Am I capable at 4:15 P.M. on a Thursday afternoon when I’m really tired? Maybe not. I’ll tell the AI what we’ve been studying and ask for suggestions. It’ll spit out five or six options, and we’ll pick the one that works.

Another example: I was doing an activity with a long reading that I wanted to break into smaller sections. I didn’t want to spend 45 minutes rereading it to create sensible breaks. I gave Claude the PDF, and it took only five minutes for [the AI] to help me reorganize the material. I also asked for 40 vocabulary words students might struggle with, organized in the order they appeared in the article. That is support I would never have had time to provide manually.
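
[Editors’ note: Here is a rough sketch of the reading-reorganization step Roberts describes, again using Anthropic’s Python SDK, with the pypdf library standing in for the PDF handling. The file name, model name and prompt wording are assumptions for illustration.]

from pypdf import PdfReader
import anthropic

# Extract the reading's text from the PDF (file name is a placeholder).
reader = PdfReader("reading.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model name
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Break this reading into short, sensibly titled sections, then "
            "list 40 vocabulary words students might struggle with, in the "
            "order they appear.\n\n" + text
        ),
    }],
)
print(response.content[0].text)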

What warnings would you give teachers who are starting to use AI?

Do not require or suggest students use ChatGPT or Claude. Those tools are not [compliant with] COPPA [the Children’s Online Privacy Protection Act] and FERPA [the Family Educational Rights and Privacy Act]—federal laws covering children’s privacy and educational privacy rights. It’s better to have students use tools within MagicSchool or Brisk that are compliant and that allow teachers to monitor conversations.

Second, do not provide personally identifying information about students to AI. Instead of giving the whole IEP (Individualized Education Program), take the one goal you’re supporting and say, “How could I support a student with this goal?” You get the same help without providing student information.

Can you talk more about AI-assisted grading?

According to a University of Michigan study, students whose last names fall at the end of the alphabet received significantly lower grades and worse feedback, probably because teachers get tired. I think of AI as my balance check. When I get to the student whose last name starts with Z, and [they] annoyed me today, am I giving them a fair grade? Often the AI says, “No, you should be giving them a higher grade.” I look at the work again and am like, “It’s right.” If AI can mitigate that [issue], that’s good for my students. I see it as a fairness issue, making sure students get consistent scoring.

Every time I tell teachers about [the University of Michigan] study, heads nod. We shift how we grade over a single grading session—firm at first, loosened up by the 10th essay, tired and grouchy at the end. We’re human. For all the concerns about AI bias, I have more concerns about human bias.

A version of this article appeared in the March 2026 issue of Scientific American as “Jen Roberts.”
