Interrogating our assumptions about assessments

By Carie Cardamone, Tufts Center for the Enhancement of Learning & Teaching, PhD Astrophysics

Assessments are how instructors measure learning. In any graded, credit-bearing course, our process of assessment embeds the assumption that we can objectively measure what a student has learned. This leads many of us to focus narrowly on assessment as an evaluation tool, putting us in the position of trying to preserve the accuracy, validity, and reliability of assessments that may not measure learning as well as we hope. When we focus on, for example, “fixing” grade inflation or (mis)using tools to detect cheating, we can negatively impact students’ learning, trust, sense of belonging in the discipline, and ultimately their future opportunities. We forget that assessment can be a tool FOR learning and for increasing equity.

The process of grading can also interrupt student learning by centering the extrinsic motivator of “points,” unintentionally guiding students away from behaviors that lead to learning. Penalizing mistakes without giving students the opportunity to revise their work or show improvement diminishes the possibility of leveraging mistakes and failures in the process of learning. And depending on the types and timing of assessments, evidence of continuous improvement over time may not be reflected in a student’s grade.

The first step in creating equitable and informative assessments is to deepen our awareness of the potential implicit biases and assumptions underlying our assessment practices. Below, I offer several reflective questions to help interrogate our assumptions. Because they are broad and may feel abstract, for each question I also offer my own approach, drawn from my teaching journey, to make it more concrete.

What kinds of evidence of learning do we privilege in our disciplines when assessing students? 

We often gravitate toward particular assessment practices within a given discipline, whether multiple-choice exams, papers, problem sets, critiques, or something else. These practices develop for a variety of reasons: many are inherited, some come from our own experience, some are logistical, some serve efficiency, and others are philosophical. They also reflect our assumptions about how one should measure student learning in our disciplines. Looked at more closely, students’ performance on these assessments may or may not be an accurate or valid measure of their achievement of the learning objectives. For example, a learning objective might focus on a student’s ability to use certain knowledge in a future course or apply a concept to a new situation, yet be measured with a multiple-choice exam focused on factual recall. Some students may have extensive experience taking tests of factual recall, while others are more comfortable writing, explaining verbally, or using artistic expression. When we privilege some forms of assessment over others, we don’t always know whether we are assessing student learning equitably and accurately.

How do I begin to approach this question? When I first started teaching physics, every problem set and quiz looked exactly the same: a set of example problems to solve. Then I started experimenting beyond this constricting format by giving students opportunities to make predictions about how physical laws might play out in the real world, draw concept maps, write their own questions, correct their own thinking after a mistake, and demonstrate their knowledge through writing, drawing, posters, and presentations. In using these different kinds of assessments, I started seeing deeper evidence of the physics that students understood.

What are we trying to assess and why?

Before getting too deep into analyzing our approaches to assessment and grading, it is useful to critically examine our goals for our students and why we prioritize particular bodies of knowledge or skills. Equitable assessment is strengthened when we understand students’ goals in addition to our own, and doing so can shift the power dynamics in a classroom. What range of outcomes are students hoping to gain from the course? What skills or knowledge are they hoping to develop? What are their concerns and aspirations? Asking these questions at the beginning of a course, and throughout it, can prompt a mutual commitment to learning and a respect for the diversity of hoped-for outcomes, while engaging students in becoming self-directed learners. This approach also helps us check ourselves for biases inherent within our disciplines about what is required and important. Success for one student might not look like success for another, and students almost never begin in the same place.

How do I begin to approach this question? As an astronomer, I believe that students take my courses to gain an appreciation for the beauty and scale of the universe and to build quantitative and problem-solving skills that are broadly applicable to their lives. When teaching, however, I must ask my students what they are most excited about learning, and provide examples and opportunities for them to apply the skills that I assume will transfer to their future lives.

What counts as a valid way of measuring student learning?

What we choose to grade, and how we go about assigning those grades, can unintentionally lead to inequities for students based on their prior academic experiences and their experiences of the classroom environment. Centering assessment on the assumption that all students begin at the same place has cumulative, inequitable consequences. Remaining conscious of this potential disparity in our assessment design can help us support all students in learning successfully throughout the course, rather than measuring students’ prior skills and knowledge. Differences in expectations between disciplines and instructors can compound these disparities when students don’t understand what is expected of them. Several examples illustrate this point.

  • Averaging grades over the semester might privilege prior knowledge and skills over a student’s ultimate performance, resulting in a relatively lower final grade for a student whose performance steadily increased over the semester but who had a steeper initial learning curve. (Exam scores of 60, 75, and 90, for instance, average to 75, even though the student finishes the course performing at 90.) If a student can demonstrate mastery of the course’s learning outcomes at the end of the semester, are there opportunities for this to be reflected in their grade?
  • Late penalties may disproportionately impact students who have fewer resources, who balance family commitments or work, who have less academic support and preparation, or who have no control over the circumstances that cause an assignment to be late. We can mitigate this impact by building in flexibility, such as dropping the lowest homework grade or allowing deadline extensions.
  • Grading a student’s “participation” is particularly vulnerable to bias toward our own notions of desired behaviors. Without intentionally making space to include non-dominant modes of interaction and behavior as valid ways of participating or engaging, we risk rewarding students for “acting white” (Feldman, 2018). Traditional classroom participation policies center the instructor in maintaining control, and they maintain racist, sexist, and classist norms by privileging those who speak first, those who speak loudest, and those who constrain the expression of emotion (O’Brien, 2004). We can help by providing various modalities and platforms through which a student can demonstrate participation, and by clarifying (to students and to ourselves) how participation is relevant to their learning.

How do I begin to approach this question? When I started teaching, all my assessments asked students to solve physics problems and were graded the same way: points for identifying the correct equation, finding the right answer, and remembering those units! A grade was simply the weighted average of these scores over the semester, with attendance points thrown in. Yet this process focused students on earning each point, and away from the physics! The highest grades in the class also consistently went to those who had taken the most physics in high school. Over time, I have tried building in more flexibility by allowing student choice of assignment topic and format, dropping the lowest grades, and providing opportunities to revise or resubmit work. However, I still wrestle with how to assign grades that value students’ progress over time and the knowledge with which they leave the course, without being biased by each student’s background and environment.

How do our individual identities influence our assumptions around assessment and grading? 

Our disciplinary conventions, academic experiences, and policies around teaching come from a positionality that has been ingrained in us and that predisposes us to certain biases that are difficult to uncover. Positionality is the social and political context that creates our identity in terms of race, class, gender, sexuality, and ability status, among other dimensions. It also describes how our identity influences, and potentially biases, our understanding of and outlook on the world – including our teaching practices. When we design assessments and decide how we grade, we are building on these philosophical assumptions.

How do I begin to approach this question? As a scientist, it took me a while to understand that true objectivity is not possible. As individuals, we interpret reality based on our own experience, and this interpretation colors how we make sense of the world. For example, as a native English speaker, I never found that interpreting long, wordy prompts or writing out short answers on exams presented a challenge due to language proficiency. As an instructor, I need to think about the different positionalities of my students. To understand their perspectives, I encourage them to ask questions that help me see when my instructions aren’t transparent, I consider how much time they might need to complete a task, and I provide regular opportunities for anonymous feedback on their experience of the course environment.

So, given these four questions, how do we center equity in our assessment and grading practices?

It’s not that an individual assessment ‘is’ or ‘is not’ equitable. Rather, the ways in which assessments are used can promote equity or lead to inequities. There is no simple list of tips that will achieve equitable assessment in every course, because equity requires ensuring that each student receives what they need to be successful. By focusing on the essential learning of our students, we can begin to incorporate more authentic forms of assessment that connect to the real world and that build on students’ intrinsic motivation by giving them choice in topic, format, or even in the evaluation of the assessment.

In what ways are you rethinking your assessment practices? Share your stories by emailing us at celt@tufts.edu.

Selected References

Cardamone, Carie N., and Bethany Cobb Kung. “Paired Dialog on Equity—Six Goals for an Introductory Astronomy Course.” College Teaching, 2022. https://doi.org/10.1080/87567555.2022.2076648.

Feldman, Joe. Grading for equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin Press, 2018.

Montenegro, Erick, and Natasha A. Jankowski. “Equity and Assessment: Moving towards Culturally Responsive Assessment” (Occasional Paper No. 29). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA), 2017.

O’Brien, Eileen. “‘I Could Hear You If You Would Just Calm Down’: Challenging Eurocentric Classroom Norms through Passionate Discussions of Racial Oppression.” Counterpoints 273 (2004): 68–86.


This is part 5 of a series exploring how many of our assumptions about learning were challenged, and in many cases transformed, during the pandemic. In this post we reflect on the assumptions that our assessments are an accurate measure of learning, that our disciplinary norms around assessment are effective, and that our assessments are equitable. To read other articles in the series, start with the introduction: Have We Been Transformed & How

See also Reconsidering Academic Writing from a Culturally-Informed Perspective