Author: Mehek K. Vora

If It’s That Bad at Tic-Tac-Toe: Reflecting on how we may be victims of the WALL-E Theorem

An interview with Jack Davis ’25, by Mehek Vora. When Jack Davis first discovered ChatGPT, it wasn’t in a lab or during a lecture—it was just a whisper in the back row of a Tufts computer science class. “Someone behind me said, ‘you can tell…

“You Don’t Just Get AI”: A Tufts Alum on Learning How to Learn With It

An interview with Sam Kent Saint Pierre ’24, Biochemistry, by Mehek Vora. When Sam graduated from Tufts in Spring 2024 with a degree in Biochemistry, they left with more than just academic knowledge; they left with an understanding that using AI well isn’t something that…

When Machado Meets Machine: Exploring AI in the Language Classroom

An interview with Dr. Ester Rincón Calero, Senior Lecturer, Romance Studies

By Mehek Vora 

In a world where artificial intelligence is most often associated with STEM fields, it’s refreshing to talk about its often overlooked role in the humanities. This article features Dr. Ester Rincón Calero, Senior Lecturer in Romance Studies at Tufts, who is not only unafraid of AI in her classroom but eager to explore it, challenge it, and reflect on what it means for teaching and learning languages today.

What sparked her journey with AI? For Dr. Rincón Calero, it was a “double motivation.” First, a personal love of technology. She says, “I try to keep up with all the new technology I can find (although it is getting to be impossible!).” But the second reason came from a very real classroom dilemma: how to address students using translators or generative AI to write essays. These tools, while tempting, were affecting how deeply students were actually learning the language.

Her first hands-on experience came through a workshop on using AI in language courses. From there, she began integrating AI in small but meaningful ways. “I find it very useful to break the ‘blank page’ syndrome,” she shares. For example, while creating two brand-new courses recently, she asked ChatGPT for sample syllabi. Not to follow them, but to see what not to do. “It helped me identify areas where I really had to add my personal touch.”

Last semester, Dr. Rincón Calero experimented with AI in two very different courses. In one, she used RumiDocs, an Academic Integrity & Artificial Intelligence platform piloted by Tufts EdTech. She used it to curb the reliance on generative AI by visualizing students’ writing processes. 

On the other hand, it’s in her literature course where things got really creative. With more freedom, Dr. Rincón Calero made AI a central part of the course design. “I used AI to generate ideas for possible activities and creative assignments,” she explains. While the outputs were sometimes vague, the process served as an excellent brainstorm partner.

In that class, RumiDocs was repurposed as a digital reflective journal. Students could request grammar and vocabulary feedback from AI but had to correct the errors themselves. Why? “Most of our language learning happens when we correct the mistakes we make using the language,” she notes. “If you take away that correction, you miss a lot of learning opportunities.” It was a point she reiterated throughout the process of integrating AI into language learning.

Perhaps the most striking example of AI use in her class was a creative poetry assignment. Students could either write their own lyrics, adapt a poem into music, or use Suno, an AI music-generating tool, to create musical versions of a poem they were learning. The class even debated what Spanish poet Antonio Machado would think of a bachata remix of his work. “My students held very different opinions than me,” she laughs, “but it made for a very lively discussion.” 

Dr. Rincón Calero is quick to point out both the benefits and limits of AI. “It can make what is considered busy work easier,” she says, “so we can dedicate our energy and unique skills to do more creative things.” Still, she emphasizes the importance of learning foundational skills first. “To use AI in a beneficial way requires a level of critical thinking that can only be acquired by learning first to do things completely on your own.”

She adds a healthy warning: “We must always be in control of the tool, not the tool in control of us.”

Her main message to the Tufts community? Don’t fear AI. “AI is not our enemy, and even if it is, it is better to get to know your enemy as well as you can.”

She encourages both students and faculty to explore AI not as a shortcut, but as a collaborative tool that still requires human thought, context, and creativity. “Guiding students by example may be the best way to start,” she offers. And with characteristic honesty, she adds a confession: “I tried to generate a full syllabus for my class using AI so I didn’t have to work so hard. It did not work! But the process helped me see how much I could improve it if I added my own time, skills, and effort. And that was rewarding.”

Not All AI Wins Make Headlines, and That’s Okay!

An Interview with Dr. Meera Gatlin, Assistant Teaching Professor at Tufts Cummings School of Veterinary Medicine. What does it look like when you bring generative AI into a veterinary public health classroom? According to Dr. Meera Gatlin, it looks a lot like playful experimentation, pedagogical curiosity, and a whole lot of trial and error.

Think Critically, Not Just Quickly – Using AI Without Losing Learning

An Interview with Jennifer Ferguson of Tufts Tisch Library. As a librarian, she views AI as an extension of a long-standing challenge: how do we teach people to evaluate information in an age where algorithms filter what we see and we don’t always know where the data is coming from?

AI at the Extremes: Beyond Utopian Aspirations and Dystopian Fears

An Interview with Dr. Jamee Elder, Assistant Professor of Philosophy

“Technology, by its nature, is disruptive. It changes things. It gives us new opportunities but also makes certain skills and knowledge less valuable than they once were.”

Shannon Vallor, a leading technology ethicist, coined the term “acute technosocial opacity” to describe how difficult it is to understand the full impact of new technologies on society. And isn’t that exactly how AI feels today—both fascinating and unknowable, full of promise yet riddled with uncertainty?

Jamee Elder, an assistant professor in the philosophy department, specializes in the philosophy of science and technology ethics. Through her research in the philosophy of astrophysics, she has developed a deep-rooted interest in technology’s evolving role in society. AI, in particular, has become an integral part of her teaching and personal exploration. Her popular course Philosophy of Technology explores the impact of emerging AI technologies like Stockfish, IBM Watson, and ChatGPT, their ability to perform human tasks, and the ethical questions they raise in society.

Jamee began to engage more deeply with AI as she designed courses on digital technology, society, and ethics. “It seemed very natural to think about my own use of AI at the same time that I’m teaching my students about AI,” she explained.

In developing her Philosophy of Technology course, she actively engaged AI as a thought partner. Using tools like ChatGPT, she sought feedback on syllabus design, revised course descriptions, and even brainstormed class activities. For example, she asked ChatGPT to play the role of a university colleague, providing constructive feedback on inclusivity, accessibility, and universal design for learning. AI, in this way, became a tool for refining pedagogy—not replacing intellectual engagement, but enhancing the conversation around it.

Rather than just treating AI as part of her syllabus policy, Jamee integrates it directly into her students’ learning experiences. Students get to experience:

  • Debates with AI: staging philosophical arguments with AI and questioning its consistency, biases, and ability to reason.
  • 20 Questions with an AI: probing ChatGPT’s ability to maintain coherent responses across a structured set of inquiries, pushing the boundaries of what it “knows” versus what it merely mimics.
  • AI and Consciousness Discussions: exploring the classic question—can AI ever be truly conscious, or is it just an advanced pattern generator?

By actively engaging with AI rather than merely discussing it in the abstract, students witness its strengths and limitations firsthand. The goal? To reflect on the technology, remain skeptical, and understand its limits by pushing against them.

Jamee urges a measured perspective on AI’s role in education. Instead of falling into utopian optimism or dystopian panic, she advocates for critical engagement. “We need to push back against the extremes,” she says. “AI isn’t inherently good or bad—it’s about how it’s applied. What are the opportunities? What are the risks? Who benefits? Who is left behind?”

One of the greatest concerns she raises is how AI tends to amplify existing biases. Who gains from AI-driven education? Who is further marginalized? These are not abstract questions—they have real-world consequences, from automated hiring decisions to biased facial recognition software. She advises students and faculty alike to think critically about AI’s societal impact by asking: Who is helped? Who is harmed?

What does AI mean to the future of education?

If AI is rapidly reshaping the job market and knowledge landscape, what skills will still matter? “It requires some deep soul-searching about what is actually most important to get out of education,” she suggests. As automation shifts the value of certain skills, adaptability, ethical reasoning, and critical thinking may become more essential than ever. After all, AI can generate answers—but can it question itself?

With her upcoming Philosophy of Artificial Intelligence course, Jamee plans to dive even deeper into these questions. The course will challenge students to explore not only the technical aspects of AI but also its broader ethical, social, and existential implications.

Jamee’s insights leave us with a challenge: to engage with AI not as passive users, but as critical thinkers. Technology is always disruptive, but that disruption can be harnessed for good—if we approach it with awareness and intentionality.