Author: Mehek K. Vora

Rethinking AI’s Role in Learning and Human Connection

An interview with Dr. Carie Cardamone, Senior Associate Director, Center for the Enhancement of Learning and Teaching. By Mehek Vora. In a world where AI conversations often focus on individual adoption stories, Dr. Carie Cardamone offers something different: a campus-wide perspective on how an entire…

If It’s That Bad at Tic-Tac-Toe: Reflecting on how we may be victims of the WALL-E Theorem

An interview with Jack Davis ’25. By Mehek Vora. When Jack Davis first discovered ChatGPT, it wasn’t in a lab or during a lecture; it was just a whisper in the back row of a Tufts computer science class. “Someone behind me said, ‘you can tell…

“You Don’t Just Get AI”: A Tufts Alum on Learning How to Learn With It

An interview with Sam Kent Saint Pierre ’24, Biochemistry

By Mehek Vora

When Sam graduated from Tufts in Spring 2024 with a degree in Biochemistry, they left with more than just academic knowledge: they left with an understanding that using AI well isn’t something that comes naturally to everyone. And that’s okay.

“It took me some time to figure out how to effectively use AI in my studies,” Sam admits.

Early on, the flashy promises of AI tools fell flat. During a molecular biology assignment, Sam had to review an AI-generated summary of a major paper on CRISPR-Cas. The summary, generated by Bard, sounded polished but lacked depth.

“While the summary wasn’t entirely wrong, it was incomplete… much of it just being vague, not engaging with the main findings or the specific details of the research.”

It left Sam frustrated and confused. Was this all AI could do? Was it actually helpful? The answer wasn’t a no, but it wasn’t a yes either. Sam stopped using AI for a while because they were unsure how to make it work.

But like any tool, AI’s value depends on how you use it.

When Sam began exploring more active study strategies, they decided to revisit AI, this time from a different perspective.

“I began using ChatGPT to generate questions based on my notes, particularly questions aligned with various levels of Bloom’s Taxonomy.”

Instead of asking AI for answers, Sam asked it for questions: questions that made them think more, not less. One AI-generated prompt stood out:

“Assess the impact of pH on the ionization states of amino acids in a biological system. How do these changes in ionization states influence protein folding and activity?”

That question challenged Sam to engage deeply with content from their biochemistry course, pushing them to consider real-world biological contexts.

It wasn’t about skipping the hard work; it was about getting the tool to help you get there.
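
For readers who want to try Sam’s move themselves, here is a minimal sketch of the same idea in code. Sam worked in the ChatGPT interface, so everything below, the model name, prompt wording, and helper function, is an illustrative assumption rather than a detail from the interview.

# Minimal sketch of Sam’s approach: asking a model for Bloom’s-Taxonomy-aligned
# study questions instead of answers. Illustrative assumptions throughout.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def study_questions(notes: str, level: str) -> str:
    """Ask the model for three questions pitched at one Bloom’s Taxonomy level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works here
        messages=[
            {"role": "system",
             "content": "You write challenging study questions. Never give answers."},
            {"role": "user",
             "content": (f"From these course notes, write three questions at the "
                         f"'{level}' level of Bloom's Taxonomy:\n\n{notes}")},
        ],
    )
    return response.choices[0].message.content

notes = "Amino acid side chains ionize as pH changes; ionization affects protein folding."
print(study_questions(notes, "Evaluate"))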

Even once Sam found their rhythm, it wasn’t all smooth sailing. AI could still be wrong, confidently wrong, like the time it used an incorrect formula to calculate DNA’s superhelical density.
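
(For reference, the textbook definition is compact: superhelical density σ = ΔLk/Lk₀ = (Lk − Lk₀)/Lk₀, a molecule’s change in linking number relative to its relaxed value. Formulas this terse are exactly where a confident chatbot can go subtly wrong.)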

But instead of turning away again, Sam adapted. They learned to use AI with a critical eye, because using AI well isn’t just about knowing what to ask; it’s also about knowing when to double-check the answer.

When asked what they want the Tufts community to understand most about AI, Sam was clear: “AI is a powerful tool, but it should not be used blindly… users must verify the information and ensure it aligns with what they’ve learned to avoid encoding misconceptions.”

And they’re right. The ability to use AI thoughtfully isn’t a magical skill some students have and others don’t. It’s something everyone can learn through trial, error, and intentional practice.

Sam’s story is a reminder that struggling with AI at first doesn’t mean it’s too complex or superficial. It just means you’re at the beginning of the learning curve. And like any good lesson at Tufts, it’s one worth sticking with.

When Machado Meets Machine: Exploring AI in the Language Classroom

An interview with Dr. Ester Rincon Calero, Senior Lecturer, Romance Studies. In a world where artificial intelligence is most often associated with STEM fields, it’s refreshing to talk about its often overlooked role in the humanities.

Not All AI Wins Make Headlines, and That’s Okay!

An interview with Dr. Meera Gatlin, Assistant Teaching Professor at Tufts Cummings School of Veterinary Medicine. What does it look like when you bring generative AI into a veterinary public health classroom? According to Dr. Gatlin, it looks a lot like playful experimentation, pedagogical curiosity, and a whole lot of trial and error.

Think Critically, Not Just Quickly – Using AI Without Losing Learning

An Interview with Jennifer Ferguson, Head of User Experience & Student Success at Tufts Tisch Library

By Mehek Vora 

A librarian, educator, and former private equity research analyst, Jennifer Ferguson has been at the forefront of AI literacy, teaching students and faculty how to critically engage with these ever-evolving tools. Jennifer’s journey with AI isn’t one of mere curiosity; it is intertwined with her professional experience. Before stepping into academia, she worked as a research analyst in private equity, analyzing tech startups and emerging technologies. She has watched AI grow from niche applications in the early 2000s to the all-encompassing, algorithm-driven world we live in today, and she recognizes that AI isn’t a new revolution but the next step in a long evolution.

As a librarian, she views AI as an extension of a long-standing challenge: how do we teach people to evaluate information in an age where algorithms filter what we see and we don’t always know where the data is coming from?

Teaching AI Literacy at the Library

Jennifer and her team at Tisch Library are teaching students not only how to use AI for research but also how to critically evaluate its limitations and ethical implications. Through a mix of synchronous and asynchronous learning modules, students engage with technical topics like algorithmic bias and AI hallucinations to understand where the information they see is coming from. This helps break the stereotype that ‘artificial intelligence’ means robots. As she emphasized in her interview, it’s important that all students understand:

  • “AI is a predictive algorithm working across enormous data sets. Large language models aren’t sources of absolute truth—they’re just really good at predicting what sounds right. AI isn’t thoughtfully crafting responses; it’s playing an advanced game of autocomplete, guessing the next best word based on the data it has.”
  • “This becomes a real problem in research if students trust the tool, only to realize later that the article they’re looking for was never published—it was just a convincing hallucination. The issue isn’t just misinformation; it’s the illusion of credibility. AI can sound confident, but a polished answer isn’t always a correct one—and that’s exactly why individuals need to approach it with a healthy dose of skepticism and ability to critically evaluate what they are working on.”
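
Jennifer’s “advanced game of autocomplete” framing is easy to make concrete. The toy model below is a deliberately tiny, hypothetical illustration, not anything from the interview: it predicts each next word purely from counts of what followed it before. Real LLMs predict from vastly more data, but neither consults a store of truth.

# Toy “autocomplete”: predict each next word from counts of what followed it
# in a tiny corpus. Hypothetical illustration only.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

next_word = defaultdict(Counter)          # word -> counts of following words
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def autocomplete(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        if word not in next_word:
            break
        word = next_word[word].most_common(1)[0][0]   # greediest guess wins
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))   # prints "the cat sat on the" -- fluent, not "true"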

Can AI streamline time-consuming tasks at Tisch Library?

Jennifer and her team have been exploring ways to integrate AI into library workflows to free up more time for deeper, more meaningful work.

Using Microsoft Copilot agents, the library team trains AI with their own data, ensuring that it understands their unique needs and materials. This approach allows them to automate the process of generating descriptions for library resources—a massive undertaking given that their website houses over 500 different items. Jennifer tested the system on a small dataset of 27 items, and the results were promising—the AI produced high-quality descriptions with impressive accuracy. By leveraging AI as a partner rather than a replacement, Jennifer and her team are proving that technology, when used thoughtfully, can empower libraries to serve students and faculty more efficiently than ever before.
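
The article doesn’t show how the Copilot agents themselves are configured, so here is a vendor-neutral sketch of the pilot-then-review pattern Jennifer describes; the function names and the stub standing in for the real model call are hypothetical.

# Vendor-neutral sketch: generate draft descriptions for a small pilot
# (27 items, as in the article), keep a human review flag, then scale up.
# `describe` stands in for the real Copilot agent call, which isn’t shown.
from typing import Callable

def pilot_descriptions(items: list[dict],
                       describe: Callable[[dict], str],
                       pilot_size: int = 27) -> list[dict]:
    drafts = []
    for item in items[:pilot_size]:
        drafts.append({
            "id": item["id"],
            "title": item["title"],
            "draft": describe(item),   # AI-generated first draft
            "approved": False,         # flipped only after a librarian reviews it
        })
    return drafts

# Stub in place of a real model call, just to make the sketch runnable:
items = [{"id": i, "title": f"Database {i}"} for i in range(500)]
stub_describe = lambda item: f"A research database covering {item['title']}."
print(pilot_descriptions(items, stub_describe)[0])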

What are we losing when we let AI do too much thinking for us?

Using AI to produce polished work without actually engaging with the material isn’t just a matter of plagiarism or academic dishonesty. The deeper issue is metacognition, the ability to think about one’s own learning. If AI is doing the heavy lifting, are students truly understanding concepts, or just submitting well-structured AI-generated answers? We increasingly see education used for credentialization: pursuing a credential rather than saying, “I need to learn how to think.”

That’s the risk AI poses: it can quietly replace essential cognitive skills, and students may not even realize what they’re losing (you don’t know what you don’t know). The challenge for educators, then, is not just how to incorporate AI into learning, but how to ensure that learning still happens. Jennifer offers a simple but powerful analogy:

“I already know how to read a map. But if I had never learned and started relying only on GPS, my brain wouldn’t develop that spatial awareness. I wouldn’t even know that I was missing it.”

Jennifer doesn’t advocate for banning AI; far from it. Instead, she urges a shift toward AI literacy, with a focus on understanding the power imbalance: users engage with AI models without knowing what data they were trained on or what that data leaves out. It is important to be critical of AI-generated information because, if you’re not an expert in a topic, you may not even recognize what’s missing. AI tools already shape our search engines, academic databases, and even job applications. Ignoring them isn’t the answer; using them wisely is.

“Everyone is already using AI—it’s embedded in everything. Ignoring it won’t make it go away. But we can teach people how to make good choices about it.”

As AI continues to evolve, the real challenge isn’t whether students and educators will use it—it’s whether they’ll use it thoughtfully, critically, and ethically. 

Jennifer leaves us with the question: “What are we losing when we let AI do too much thinking for us?” 

AI at the Extremes: Beyond Utopian Aspirations and Dystopian Fears

An interview with Dr. Jamee Elder, Assistant Professor of Philosophy. “It seemed very natural to think about my own use of AI at the same time that I’m teaching my students about AI.”