S1 E4 How is AI Helping (or Hindering) Learning? 

Teaching@Tufts: The Podcast

Episode 4 of CELT’s Teaching@Tufts: The Podcast takes on a question many instructors are asking: what is AI actually doing to student learning? Hosts Heather Dwyer and Carie Cardamone draw on a rapidly growing body of research to explore when AI functions like a helpful tutor and when it acts more like a robot lifting weights for students, short-circuiting the mental “lifting” that learning requires. Guided by two fundamental ideas from the learning sciences, productive struggle and social learning, they offer concrete guidance for integrating AI into courses in ways that support cognition, protect human connection, and help students cultivate AI habits that support learning. 

Strategies for Students 

  1. Think First, Then Prompt: Always start with your own thoughts or ideas before using AI. Don’t hand AI your assignment and start there; engage your own mind first so the learning is yours. 
  2. Ask AI for Alternative Perspectives: Direct AI to give you counterarguments, critiques, or suggestions that lead you in new directions rather than just asking “is this a good essay?” 

Strategies for Instructors 

  1. Protect Human Interactions: Keep peer interactions, office hours, and other social interactions that foster learning central to your course. 
  2. Build in Explicit Reflection: For example, you might make analyzing AI’s responses part of the assignment and ask students how interacting with AI shaped their learning or experience. 
  3. Design for Process, Not Just Product: The research shows outcomes depend heavily on context and implementation. As you design your assignments and guide student AI use, consider how it impacts student cognition and social interactions. 

Resources & References 

CELT’s Website on AI Resources 

Music Attribution: Inspiring and Energetic by Universfield – License: Attribution 4.0

[00:00:00] *Music*   
Heather Dwyer [00:00:11] Welcome to Teaching@Tufts: The Podcast, brought to you by Tufts University’s Center for the Enhancement of Learning and Teaching. This season, we’re exploring what the learning sciences can tell us about engaging today’s students, how their cognitive, emotional and social environments are changing, and what instructors can do to meet them where they are. I’m Heather Dwyer, and I’m here with my co-host, Carie Cardamone. Today’s episode is about a topic that’s on everyone’s mind: Artificial Intelligence, or AI. In particular, today we’re asking what impact AI is actually having on learning, and how instructors can respond. We’re really starting to see a wave of research emerge on this topic, and in today’s episode we’ll explore some of the work that suggests how AI is shaping, and sometimes short-circuiting, student learning. Carie’s going to talk with us about that today, because she’s done a bunch of research on this. Thank you for speaking with our audience today about AI and its impact on learning. Carie, can you start by summarizing what you see as the impact of AI on learning?  
Carie Cardamone [00:01:15] Thank you, Heather. I’m excited to have this discussion here, because I’ve been talking to dozens of faculty and students, maybe even hundreds, across Tufts about what they see as the impact of AI, and it’s really helpful to look at the unfolding literature. There have been, I don’t know, 50 to 100 studies that have come out in the last year, of varying quality and size. One of the ways I like to frame this in my mind, to differentiate what we’re interested in here, which is learning as opposed to productivity in a work environment, is thinking about the difference between using a forklift in a factory and having a robot lift weights for you. As a productivity tool, AI is a wonderful forklift. It automates; it can move faster and carry heavier loads than you as a human can. But think about why you lift weights: you’re lifting weights to build muscles, so having a robot lift them for you defeats the entire purpose of going to the gym. In the same way, having AI, quote, unquote, do the learning for you defeats the entire point of the learning activities, because you’re not actually moving the muscles that gain the learning.  
Heather Dwyer [00:02:28] That makes sense. I’ve definitely heard the brain referred to as a muscle in other contexts, so I think that’s a helpful analogy for considering how AI might enhance or interrupt the exercise of that muscle. Can you tell us a little about the themes from the learning sciences that show us the impact of AI on learning?  
Carie Cardamone [00:02:55] Yeah, I find it’s helpful to organize the studies in my own mind around themes from the learning sciences. Let’s do two today, and we can weave in studies under each. The first is productive struggle, or the cognitive load that is happening when we are learning. The second, which I think is especially important in the college classroom ecosystem, is social learning and belonging and the role that plays in learning in higher education.   
Heather Dwyer [00:03:27] Okay, great, yeah, so let’s start with that first one you mentioned, which has to do with productive struggle and cognitive load.  
Carie Cardamone [00:03:33] Yeah, this digs down to the broader result across the learning sciences that learning happens through effortful mental work. So it’s not: is a student using AI, or aren’t they? It’s really how they’re using it. Are students relying on AI and bypassing the productive cognitive load that leads to learning? When that happens, they actually undermine the learning. But it’s also possible to use AI to support that cognitive engagement, and then it has the potential to enhance learning. The negative studies are often the ones we grapple with alongside faculty, and I’d love to talk about one study that has been flooding the popular press. There are only, I think, 40 participants at the beginning and maybe a dozen by the end of it. But what’s interesting and unique about the study is that they actually put EEG sensors on students’ heads while the students were creating essays. As the cognitive engagement of the students increased, as they did more work on their own, the researchers could see a more distributed network of activity in their brains. So when students write an essay on their own, their brains light up; when they write with more support, such as using Google searches in this case, a bit less; and even less when they’re using AI to actually create the essays. The other thing I find interesting, even in this very small sample: by the third session with the students, the researchers could show a concrete difference between students who used AI from the very beginning of the essay-writing task, who often didn’t even remember what the essay was about because they just produced it with AI without thinking about it much in the moment, and those who used LLMs later, who showed more engagement and more learning. They started with their own brains, and then they came to AI second.  
Heather Dwyer [00:05:37] Yeah, that’s really striking. And I’m remembering that you said, you know, take this with a grain of salt. It’s one study. It’s a relatively small sample size. So how seriously would you suggest we take this study? Does it align with other studies? What do you think?  
Carie Cardamone [00:05:50] Yeah, that’s a good question. Each of these studies has very different conditions, right? That one is artificial, outside the classroom. But there are other studies we can point to that looked at, in some cases, hundreds of students, and I’ll link to them in the show notes. Gerlich, Fan et al., and Bastani et al. all have different studies, and they show in various ways that students can perform better at a given task, create better essays, solve harder problems, and so on, using AI. But they also show negative correlations between that AI use and, in different cases, critical thinking, self-regulated learning, and in particular the ability to perform the task independently in the future. So I think the EEG study is just one among many that paints this picture, really supported by the learning sciences: it’s not whether or not you’re using AI, but in what ways you’re cognitively engaging with the task.  
Heather Dwyer [00:06:47] So what I’m taking from these various studies you’ve described is that the output can be a lot stronger with the use of AI, but the actual learning that occurs in the students is eroded by it. Are there any examples you know of where AI actually supports students in the learning, and not just in producing stronger writing or output, whatever they’re creating?  
Carie Cardamone [00:07:12] Yeah, and I think that really gets to the weightlifting analogy, right? If they’re using AI, are they still lifting the weights themselves? To extend the analogy, can the robot be a coach in the weight room while the students still lift the weights themselves? There are studies that show this. Wang and Fan did a meta-analysis of around 50 studies, and they showed that across the studies, really, the devil was in the details. They saw, in many cases, better performance, which we’ve already described, and better perceptions of learning, but also, in some cases, increased higher-order thinking skills. It really depended on how AI was used, and they emphasized in their analysis of the literature that it’s important to keep AI’s role as that of a tutor or a learning partner.  
Heather Dwyer [00:07:59] So it sounds like the takeaway from this series of studies, at least for the faculty listening, is that if the AI interactions students engage in are those that mimic tutoring, if they’re designed to support the effortful work of learning, such as helping students pace the information, prompting reflection, and adapting to student needs, AI can really support productive struggle. But when AI use is focused on getting a task done or removing all difficulty from performing a task, it’s going to harm learning. Is that fair?  
Carie Cardamone [00:08:38] Yeah, that’s fair. Kestin et al.’s study was done in a large lecture class, and it was titled “AI tutoring outperforms active learning,” so it got all this press again. I think the key was that they were comparing students in a lecture classroom, where active learning and lecture were going on but all the students were moving at the same pace and doing the same things, with students at home interacting with a private AI tutor. In this case, the AI’s role was well defined. It was structured by design to be in this tutor and support role, and it was designed to facilitate active engagement with the topic, to manage cognitive load, and to promote a growth mindset. And they saw increased learning: students did better on tests afterwards when they had the interaction with the AI tutor versus just having active learning in a classroom. What they saw is that the unique affordances of the AI tutor were providing timely, individualized feedback to each student interacting with it, and the ability to self-pace, which you can’t do in a big classroom with a bunch of other students.  
Heather Dwyer [00:09:47] So, engaging with a tutor as you’re describing, even an AI tutor, it feels like a social act, and I remember that one of the themes you mentioned early on has to do with social learning and belonging. So, let’s dig into that one a little bit. What can you tell us about what you found in the literature right now that has to do with the impact on social learning?  
Carie Cardamone [00:10:09] That’s a good question. Particularly in the college classroom, going back to Vygotsky, we know that learning with others, not just learning independently, is critical to developing knowledge and skills. I’ll highlight a couple of studies showing that when AI erodes those social interactions, it harms not only learning but students’ overall sense of well-being and belonging. One recent study I like, it’s small and it’s just on computer science students, is Hou et al.’s study, “All Roads Lead to ChatGPT.” In semi-structured interviews with 17 students across different institutions, the students described all these extensive ways they were interacting with AI, and the researchers found that AI was even mediating students’ interactions with peers: you ask a friend a question and the friend goes to ChatGPT before they respond. And when students redirect their questions to AI, or don’t even engage with their peers because they have this ready buddy next to them, this automated system, they’re eroding their social support system of peers. They reported feelings of isolation and even a lack of motivation to engage in the work of learning.  
Heather Dwyer [00:11:31] That’s really interesting, and that study you just described is a relatively small one. Are there other studies that look at a bigger sample but reach similar conclusions?   
Carie Cardamone [00:11:41] Yeah, in contrast, Crawford et al. did a study, “When artificial intelligence substitutes humans in higher education,” that looked at almost 400 students’ use of AI. They found students who were using AI extensively, and those interactions were replacing interactions with librarians, faculty, and student advisors, not just their peers in class. And why were they doing it? Because AI was responsive and adaptable. They felt AI was helping their learning, and many even reported feeling socially supported by their interactions with AI. But for those students who found this important social support in AI, that also correlated with having fewer interactions with their peers, their family, their faculty, all of these support systems, and ultimately it correlated with feelings of loneliness. So precisely those who felt socially supported by AI were the students who felt most disconnected and lonely. The students who felt socially supported by other people had more interactions with other humans at the institution, reported higher grade performance, and reported more of a sense of belonging. They wanted to stay there.   
Heather Dwyer [00:12:56] It sounds like the students who used AI as a social replacement, in that case, are the ones who end up actually feeling more isolated. Is it possible that AI can actually support social learning?  
Carie Cardamone [00:13:10] Yeah, in a classroom environment it really depends on how you structure the activities. This is just a little study by Liu et al., but they looked at using a generative-AI-enhanced collaborative whiteboard in a design project space. What I thought was interesting was that they found significantly heightened behavioral, cognitive, and emotional engagement in the projects, and they also saw a positive impact on higher-order thinking skills and creativity. I’m not saying the study is perfect; it’s small. They had two sections of a course, and they measured things through students’ interactions with the whiteboard and then their surveys. But what I thought was helpful here is that the students’ interactions with AI were designed in a specific, structured way, with the motivation of scaffolding complex thinking. And not only that, they engaged with it in teams. They weren’t individually interacting with the AI; the team, these teams of two to four students, interacted with the AI, and it was primed to give the groups relevant examples and data, point students in new directions to increase divergent thinking, which was part of the point of the project, and help them distill complex information into key points. To me, it was the scaffolded use of AI in this particular assignment, and the group-oriented interactions with it, that showed a positive social support.   
Heather Dwyer [00:14:30] Okay, so what does this all mean for faculty who might be listening to this conversation? What might we want to think about as instructors guiding students on what to do independently if they wish to use AI? A lot of the time students spend in the learning process is not directly in our classrooms, it’s on their own, and I’m guessing that’s most often when they turn to AI. So what guidance can instructors give students about how to best use AI when they’re doing so on their own?  
Carie Cardamone [00:14:57] Yeah, I think this is important, especially because we’re trying to guide students in syllabus statements and assignments on when and how they should use AI, and it’s not that effective to just say, don’t use it ever. To be very specific, I’ll pick two tips. First, think first, then prompt. Always start with your own thoughts or ideas before chatting with AI. Don’t just give AI your assignment and start brainstorming there. Starting with your own thinking is critical to engaging your own mind and gaining the learning; AI can then point you in new directions and help you get over stumbling blocks, but start with your initial thoughts. Second, direct AI to give you alternative perspectives and feedback in specific ways. You can ask it for counterarguments, critiques, or suggestions that might lead you in a new direction, as opposed to just asking what a good college essay would look like. You’re interacting with it more like a human, right? You’re learning with it.  
Heather Dwyer [00:15:52] I appreciate this even just for myself, not as a student but in my day-to-day work leaning on AI as a tool: the reminder to attempt whatever task I’m working on first, rather than going directly to a tool and taking its first response. And then, in terms of integrating AI into course design, what would you recommend instructors keep in mind to make sure AI is supporting rather than eroding learning?   
Carie Cardamone [00:16:22] That’s a great question. Obviously we’re keeping in mind that tutor role we’ve just been talking about in students’ own interactions, but specifically in a classroom environment, there are two things I would keep in mind. First, protect the human interactions: keep your peer reviews, office hours, and interactions with other human supports central to the course, and only then add in AI within that structure. Primarily focus on the human-to-human connections. Second, when you’re using AI, or scaffolding any use of AI, build in explicit reflection on the process. Can students analyze AI’s responses? Make that part of the assignment. How does interacting with AI shape their learning or their experience?   
Heather Dwyer [00:17:02] So, I have one more question before we close. The research coming out on AI and learning is constantly evolving; it’s very fast paced. The studies you’ve cited today are all really recent, and whatever conclusions we draw from the research will continue to evolve. So I’m wondering how durable you think these conclusions are. At this time next year, would we see something totally different, or do you think that’s unlikely?  
Carie Cardamone [00:17:25] Well, I think that’s why it’s helpful for me to structure this around what we know about how humans learn. What any one of these studies, especially a small classroom experiment, shows over and over again is that the devil’s in the details: how we’re using it, what we’re doing with it, how those interactions integrate with the course. That’s true of much of educational research, right? You can’t just plop something from one context into another and expect it to work in exactly the same way. So I think if we stick to those mental models, how is this use of AI impacting the student’s cognition, and how is it impacting their social interactions, that will be a robust framework. The details of individual studies, I think, will be all over the map.   
Heather Dwyer [00:18:09] Thanks, Carie. I really appreciate you sharing your expertise with us today, and for taking the time to comb through the literature and distill it into a few solid recommendations for faculty who might be listening.  
Carie Cardamone [00:18:21] Thank you. The conversation was fun. I was excited to talk about it.    
Heather Dwyer [00:18:25] Yeah, me too. For those of you listening, thank you so much for joining us today. There were a lot of resources referenced in today’s episode. They can all be found in the show notes, and until next time, keep teaching, keep learning, and don’t forget to take care of yourself, too.  
[00:18:36] *Outro Music* 

Transcribed by https://otter.ai

This podcast was brought to you by Tufts Center for the Enhancement of Learning and Teaching and is hosted by Teaching at Tufts, https://sites.tufts.edu/teaching/