I tend to favor assessments that fit within the mid-to-upper levels of Bloom’s Taxonomy: analysis, synthesis, and evaluation (University of Central Florida, n.d.). Most often this results in summative assessments that privilege the creation of an artifact. This is due to my belief that “assessment, if not done with equity in mind, privileges and validates certain types of learning and evidence of learning over others, can hinder the validation of multiple means of demonstration, and can reinforce within students false notions that they do not belong in . . . education” (Montenegro & Jankowski, 2017, p. 5). However, this belief and its impact on my practice have made traditional data collection difficult for me. While I understand and can calculate the statistical concepts – such as mean, median, mode, range, and standard deviation – used in education for data collection and analysis, I have struggled to connect them to my actual practice.
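Those concepts are straightforward to compute once scores are in hand; a minimal sketch using Python's standard library, with a hypothetical list of assessment scores invented for illustration:

```python
from statistics import mean, median, mode, stdev

# Hypothetical assessment scores (out of 10) for one class
scores = [7, 8, 8, 5, 9, 6, 8, 10, 7, 6]

print("mean:", mean(scores))                 # central tendency (average)
print("median:", median(scores))             # middle value when sorted
print("mode:", mode(scores))                 # most frequent score
print("range:", max(scores) - min(scores))   # spread between extremes
print("std dev:", round(stdev(scores), 2))   # sample standard deviation
```

The difficulty, of course, is not the arithmetic but deciding what a single number like `scores` should represent when the artifacts being graded are a poem, an article, and an oral explanation.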
After all, it is much easier to quantify results from a behaviorist-inspired test with true/false or multiple choice questions (Watson, 1905). Either the learner knew the answer or they got it wrong. In contrast, by privileging assessments that allow for multiple ways of knowing and demonstrating knowledge, I am incorporating culturally relevant pedagogy into my practice (Ladson-Billings, 1995). This shows my students I value them, but it makes it difficult for me to quantify the subsequent data. How do I compare a student who wrote a poem with a student who wrote an article? When I call a student to my desk and ask them to explain their thinking behind a written answer, should that oral response not count as a demonstration of knowledge? As a result, my data usually ends up being a loose amalgamation of grades, informal conversations, and turned-in assignments that I view as markers of individual progression, rather than something systematic that lets me look at a class as a whole.
Therefore, the idea of sorting through data based on a particular lens has been helpful to me. If I use a specific lens – age, gender, ethnicity, race, language, etc. – I can gather whole class information to guide my designs.
In a recent assessment I intentionally considered gender in my design and as a source of data. The assessment itself evaluates two areas: the ability of students to apply Claim, Evidence, and Reasoning (CER) to writing in drama, and their ability to analyze a play from a dramaturge’s perspective. By examining student responses through the lens of gender – and applying a few mathematical concepts – I can ask myself the following questions:
- Which gender of students more effectively fulfilled the requirements of the CER prompt?
- Do I notice similar gender trends in writing in my Drama class as I do in my English classes?
- When I leave feedback on their writing, does my feedback privilege one gender of students over another?
- When students analyze the play, what themes are they picking up on, and is there a difference across genders?
- What does this reveal about how students analyze plays?
- Do students of certain genders favor one dramaturgical aspect over another?
- Do boys favor tech and girls acting?
- Did students across genders do well expressing themselves this way, or should I evaluate the structure and format of this assessment?
These questions allow me not only to quantify information in a way I might previously have thought difficult in a humanities course, but also to evaluate my teaching. If one group of students is performing better on this assessment than another, I need to investigate how I am teaching as well as how I am evaluating my students. I may need to make changes so all of my students can learn to the best of their ability. Paying attention to the data and using it to inform my instruction is one way to do this.
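This kind of whole-class comparison through a lens can be sketched in a few lines; the sketch below groups hypothetical CER rubric scores by gender and compares group averages (all labels and scores are invented for illustration, not real student data):

```python
from statistics import mean

# Hypothetical CER rubric scores (out of 10), each tagged by gender
responses = [
    ("girl", 8), ("boy", 6), ("girl", 9), ("boy", 7),
    ("boy", 5), ("girl", 7), ("nonbinary", 8), ("boy", 8),
]

# Group the scores by the chosen lens (here, gender)
by_gender = {}
for gender, score in responses:
    by_gender.setdefault(gender, []).append(score)

# Compare group sizes and averages to spot any group that is underperforming
for gender, group_scores in sorted(by_gender.items()):
    print(f"{gender}: n={len(group_scores)}, mean={mean(group_scores):.2f}")
```

The same grouping step works with any lens – age, language, ethnicity – simply by changing the tag attached to each response; the point is that a gap between group means is a prompt to reexamine my instruction and my assessment design, not a verdict on the students.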
References
Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465-491. https://doi.org/10.2307/1163320
Montenegro, E., & Jankowski, N. A. (2017). Equity and assessment: Moving towards culturally responsive assessment (Occasional Paper No. 29). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
University of Central Florida. (n.d.). Bloom’s Taxonomy. https://fctl.ucf.edu/teaching-resources/course-design/blooms-taxonomy
Watson, J. B. (1905). Contributions to the study of the behavior of lower organisms. Psychological Bulletin, 2(4), 144-145.