Direct Replication (and a Registered Report) in Undergrad Experimental Psychology

tl;dr

In this post, I describe my evolving approach to teaching undergrads to be effective producers of research through direct replication and writing a Registered Report. There’s some hand-wringing and whinging about science drama and overwork, and a list of links to course materials that I’ve developed this semester.

Making a Difference

Back in about 2011, the shit hit the fan. Yes, it did. Reports of data fabrication + the consequences of p-hacking + extrasensory perception in the field of psychology conspired to rock the scientific ground on which I walked. It’s now 2018 and I’m still wending my way back, a process with highs and lows, some of which I described in a thread on Twitter.

One of the things I highlight in that thread is the sense of responsibility I feel to make a difference as I find my way back to stable scientific ground – to parlay what I’ve learned about doing solid research into pedagogical benefits for the college students I have the pleasure to work with at Tufts. Seems the best place to make a difference is in our experimental psychology course, required for all psychology majors.

Experimental Psychology

Experimental psychology is – as I say in my syllabus – “designed to teach the basics of psychological research, both how to conduct your own research, but also how to be an effective consumer of research conducted by others.” We spend a lot of time talking about things like reliability and validity, what it takes to make frequency, association, and causal claims, and how to conduct, analyze, and interpret empirical research.

Over the last couple of years, I’ve embraced the idea that one of the best ways to inculcate scientific skills is to mentor students in direct replication research (Frank & Saxe, 2012). That is, to help students learn how to figure out whether using the very same method from a published journal article yields the same results in a new sample of participants.

My approach to mentoring direct replication in this course has been evolving over the past few semesters. When I introduced direct replication for the first time in the Spring of 2017, the entire class replicated a single experiment that sought to determine whether taking notes longhand or with a laptop impacts the characteristics of the notes and academic performance. I preregistered the study and programmed everything up in Qualtrics, and students each collected data following a standardized protocol. They then analyzed the data and wrote up their own empirical report. I felt pretty triumphant about having emphasized the importance of transparency and replicability, and about the fact that their empirical work that semester focused on a topic that was directly relevant to them as college students (and, thus, hopefully naturally interesting). At the same time, I worried that maybe writing just one empirical report wasn’t enough. I also had the nagging feeling that maybe there wasn’t enough room for students to bring their creativity to the table. (Here’s the project on the Open Science Framework, which still awaits my posting of the summary report, data, and code – gah.)

So, in the Spring of 2018, we did two replication experiments in the class. For the first, the students served as participants in a conceptual replication of the laptop/longhand experiment. This was a course project that wasn’t meant to contribute generalizable knowledge, but it gave them the opportunity to experience the research first-hand, and then analyze the data and write up an empirical report pretty quickly. That enabled us to move to a second project, this one a replication of a study testing whether standing improves selective attention relative to sitting. Again, I preregistered and programmed things up in Qualtrics, and then the students recruited participants and collected the data. (Here’s our preregistration.) I liked that students had the chance to write two empirical reports, but again it felt like maybe there was too much that was decided for them; that nagging feeling that they needed some room to flex creative muscles persisted. Plus, between a *bunch* (like, a shit ton) of other changes I introduced that semester (new textbook, pre-class reading quizzes, a whole new specs-based evaluation plan) and developing a new graduate seminar at the same time, I ended the semester completely exhausted (Too.Many.Things.) and deflated (a subset of students in experimental psych kinda hated the course – sigh). How ever would I sustain this?

I wouldn’t.

Phase III: Set Some Damn Priorities

At the start of the current semester, Fall 2018, I took stock of course goals (scientific inquiry and critical thinking, communication, ethics, and professional development) and personal priorities (to live a full life that involves some fun and doesn’t involve working 60 or more hours per week). I listened to friends, feedback from students, and to my own inner voice. These collective voices led me to set the following priorities this semester:

  • Priority #1: Keep it simple, stupid.
  • Priority #2: Give students more opportunity to be creative.
  • Priority #3: Maintain a focus on instilling transparent practices.

Priority #1: Keep it simple, stupid.

One experiment with a simple two-groups between-subjects design. One empirical report. That’s it as far as student research.
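For readers who haven’t worked with this design before, here’s a minimal sketch of what the analysis of a simple two-groups between-subjects experiment typically looks like – an independent-samples (Welch’s) t-test comparing the two conditions. The data, group labels, and numbers below are all simulated for illustration; this is not the actual course analysis or data.

```python
# Hypothetical sketch: analyzing a two-groups between-subjects design
# (e.g., one group stands, the other sits, and we compare task scores).
# All data here are simulated; group names are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sitting = rng.normal(loc=50, scale=10, size=30)   # simulated scores, group 1
standing = rng.normal(loc=50, scale=10, size=30)  # simulated scores, group 2

# Welch's t-test: between-subjects comparison, no equal-variance assumption
t, p = stats.ttest_ind(standing, sitting, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```

In a Registered Report workflow, this analysis plan would be specified (and preregistered) before any data are collected, so the Stage 2 write-up simply runs the script against the real data.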

And on the course design side, same everything – same textbook, same use of pre-lecture reading quizzes, same specs-based approach to evaluation (well, tweaked a little).

K.I.S.S. mwah

Priority #2: Opportunity for feasible creativity.

I wanted students to have the chance to figure things out for themselves, but in the context of studies that are feasible. Thus, I paged through the table of contents for the past few years of the journal Psychological Science. (Psychological Science has a preregistered direct replication submission option!) I identified five articles in which the authors had published at least one experiment with random assignment to one of two conditions, using methods that were feasible in the context of the course and that had some hope of being of broad interest. Students worked in groups to read one of those five articles and presented it to the rest of their lab section. They all then voted on which one of those experiments their lab section would conduct this semester. With this new approach, students got to pick the study they’ll do (from among feasible options), and they’re the ones who will get to figure out how to make it happen (with support).

Priority #3: Transparency, baby.

The one empirical report referenced as part of Priority #1 will be a Registered Report. Students will first write a Stage 1 manuscript in which they introduce the study they’re going to do, making it clear why the work is worth doing, and relay the method they plan to use for their direct replication. Once that plan is finalized in light of feedback, they’ll preregister their study, obtain institutional review board approval, and collect the data. They’ll then analyze the data and write a Stage 2 manuscript, now a full report that includes the results and discussion. It’s a bloody brilliant way to do science.

Final Thoughts and Some Materials

So, that’s where I’m at in my evolution as a professor hoping to encourage effective research among early career psychology researchers. Students just learned in lab this week which project their section will conduct this semester and are now working on planning their study and writing a draft of the Stage 1 manuscript, due in a couple of weeks. I’m liking the way things are shaping up so far and look forward to learning in the coming weeks which aspects of the plan work well, and which require some adjustments.

If you’re interested in learning more about this class and our replication project this semester, here are some of the materials I’ve created.

Reference

Frank, M. C., & Saxe, R. (2012). Teaching replication. Perspectives on Psychological Science, 7(6), 600–604.