Month: February 2025

“You Don’t Just Get AI”: A Tufts Alum on Learning How to Learn With It

An interview with Sam Kent Saint Pierre ‘24, Biochemistry, by Mehek Vora. When Sam graduated from Tufts in Spring 2024 with a degree in Biochemistry, they left with more than just academic knowledge: they left with an understanding that using AI well isn’t something that…

When Machado Meets Machine: Exploring AI in the Language Classroom

An interview with Dr. Ester Rincon Calero, Senior Lecturer in Romance Studies. In a world where artificial intelligence is most often associated with STEM fields, it’s refreshing to talk about its often overlooked role in the humanities.

Not All AI Wins Make Headlines, and That’s Okay!

An Interview with Dr. Meera Gatlin, Assistant Teaching Professor, Department of Infectious Disease and Global Health, and Clinical Assistant Professor, Public Health and Community Medicine, at Tufts Cummings School of Veterinary Medicine

By Mehek Vora

When we think about AI in classrooms, we might picture AI-generated slideshows or AI-edited essays. But what does it look like when you bring generative AI into a veterinary public health classroom? According to Dr. Meera Gatlin, Assistant Teaching Professor in the Department of Infectious Disease and Global Health and Clinical Assistant Professor in Public Health and Community Medicine, it looks a lot like playful experimentation, pedagogical curiosity, and a whole lot of trial and error.

Dr. Gatlin first tried ChatGPT in her kitchen, just to make a packing list and see what it could do. Soon she was using it for emails and SMART goal planning. It got her wondering: if this works for me, could it help my students too?

In the DVM/MPH program at Tufts, Dr. Gatlin teaches a small cohort of students. Her “Integration” class, a small seminar blending public health and veterinary medicine, became the testing ground for introducing AI into her teaching.

One memorable assignment focused on disease eradication. Instead of pairing up with classmates, students were asked to “partner with AI,” using ChatGPT to research topics and then critique the results. The exercise sparked dynamic discussion. “It made for a really interesting conversation.”

Other classroom experiments included summarizing research papers with Claude, identifying public health events to illustrate the limits of AI’s training cutoff, and mind-mapping federal research work.

Despite these small wins, things shifted when Dr. Gatlin tried AI in a larger, lecture-based class. “Two-thirds of my students had no interest in using it,” she noted. “It was honestly kind of shocking.”

This stark resistance raises important questions:

Why were students, especially those in graduate or clinical programs, so hesitant to adopt AI?

Was it a lack of training or trust in the tool?

Were they unsure how to use it meaningfully in such a specialized field?

Interestingly, the tide may be turning. “My first-years this year used AI in their undergraduate programs. When I asked them to research a public health topic, they casually said, ‘Oh yeah, we asked ChatGPT about it.’ That never happened before.”

This shift prompts more questions for Dr. Gatlin:

Is familiarity with AI in earlier education shaping comfort and curiosity?

Will incoming students expect AI integration in all their courses, including medicine and veterinary science?

Dr. Gatlin sees the generational transition as pivotal. “The first cohort I introduced AI to had already been in vet school when it launched. Now, students arrive having used it already, so what does that mean for me?”

For Dr. Gatlin, one key message stands out: AI is just another learning tool coming our way.

“It’s not this all-knowing force,” she says. “It deserves the same kind of evaluation and critical thinking as any other resource, whether a book, a peer-reviewed journal, or a website.”

She puts it plainly: “It’s just another resource. If we’re going to manipulate it in a way that’s useful for us, then we need to do our due diligence, like we would with any other source.”

She emphasizes the importance of transparency and integrity, and admits to worrying that she won’t always know when students are using AI, even though her class policy openly encourages its use along with disclosure and citation.

Though her classroom experiments with AI have paused, Dr. Gatlin remains hopeful about clinical applications. From reading pathology slides to drafting client-facing communications, AI has promising use cases in diagnostics and workflow efficiency. But these applications call for caution.

Dr. Gatlin’s story is less about triumph or failure and more about experimentation, context, and growth. Her experience reminds us that AI isn’t a one-size-fits-all solution. It’s a tool that sometimes works, sometimes doesn’t, and is always evolving.

As AI continues to shape higher education, stories like Dr. Gatlin’s are essential. They show us that successful integration isn’t about flashy tools. It’s about thoughtful teaching, honest reflection, and meeting students where they are.

Think Critically, Not Just Quickly – Using AI Without Losing Learning 

An Interview with Jennifer Ferguson from Tufts’ Tisch Library. As a librarian, she views AI as an extension of a long-standing challenge: how do we teach people to evaluate information in an age where algorithms filter what we see and we don’t always know where the data is coming from?

AI at the Extremes: Beyond Utopian Aspirations and Dystopian Fears

An Interview with Dr. Jamee Elder, Assistant Professor of Philosophy. “It seemed very natural to think about my own use of AI at the same time that I’m teaching my students about AI.”