Last month, all faculty received a Faculty Course Outcomes Report from Institutional Effectiveness for each of their spring 2021 sections. This report listed the number of students who passed, withdrew, or did not pass, along with comparable data for all sections combined. The goal of this data sharing is to prompt individual reflection and department conversations around student success in line with GCC’s Focus 2024 Strategic Plan goal to increase successful course completion.
Before we can use our data, though, we need to understand it. We turn now to a brief data lesson from Mary Anne Duggan, educational psychologist and statistics instructor.
Getting to know the data
It’s important to understand the source of these data and what the numbers represent. The numbers in the “Your [Course] Data” section are straightforward percentages of students’ grade categories for that course. Perhaps the easiest interpretation of the data is the “Success” percentage, or the percent of your students earning a grade of A, B, C, or P in the course.
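For readers who like to see the arithmetic, here is a minimal sketch of how a "Success" percentage is computed, using a hypothetical list of final grades for one section (the grade list is invented for illustration):

```python
# Hypothetical final grades for one section
grades = ["A", "B", "C", "P", "W", "F", "B", "C"]

# A, B, C, and P count as successful completion
passing = {"A", "B", "C", "P"}

success_rate = sum(g in passing for g in grades) / len(grades)
print(f"Success: {success_rate:.0%}")  # 6 of 8 students -> 75%
```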
The Comparison to “All [Course] Sections Regardless of Course Format” is less straightforward. First, your individual course data are included in the total course data, which muddies the comparison between your course percentage and the total. (This is especially true when there are only a few course sections.) In addition, the standard deviations are not provided in this report, which makes it difficult to meaningfully compare your score to the course average. Finally, your section is being compared to all sections regardless of teaching modality, course length, and other factors, which can make for an apples-to-oranges comparison.
For these and other statistical reasons too lengthy to describe here, this type of “norm-referenced” analysis is not advised. Instead, setting a criterion-referenced goal might be a more appropriate use for these data. In a criterion-referenced goal, a desired score or level is set. In this case, instructors can look at their own individual course success percentages and decide if they can be improved.
For example, let’s say a class of 24 students has a success percentage of 75%. An initial goal could be set to increase that percentage to 79%. (This increase represents one additional student in a class of 24 making it across the finish line.) Criterion-referenced goals should be both reachable and reasonable. So what steps can be taken to meet such a goal, and what can we do at this point in our semester? Here are three things we can do, among many.
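The goal arithmetic above can be checked in a couple of lines (the class size and starting percentage are the hypothetical values from the example):

```python
class_size = 24
successes = round(0.75 * class_size)   # 18 students earning A, B, C, or P
goal = (successes + 1) / class_size    # one additional student succeeding
print(f"Goal: {goal:.0%}")             # 19 of 24 -> 79%
```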
What to do with the data
1. Gather information from students. Consider regular “temperature checks” throughout the semester to see how students are doing. The semester midpoint is coming up and is a perfect time to ask students to reflect on their progress and ask what we can do to help them continue to progress.
2. Consider a “post-mortem.” Can we identify any reasons for withdrawals or unsuccessful completions in last semester’s courses that we actually have some control over? If so, how can we adjust strategies we already use to address those reasons, and what new strategies can we adopt? Strategies might include more frequent or more personalized messaging to students, or drawing on these messaging strategies (though researched in eCourse settings, they could be used in face-to-face classes as well).
3. Consider developing a “pre-mortem” for courses starting in January. Asking students, via a short survey, what we need to know about them can help develop connections. Then, when we check in on them at various points in the semester, we can tailor our check-in to their particular situation based on what they have shared with us.
Be on the lookout for future posts on how faculty are discussing their class completion data.
What have you planned as a result of seeing the class completion data? Share it in the comments or email one of us with your ideas for possible sharing in future posts.
Co-authored by Mary Anne Duggan, Julie Morrison, and Beth Eyres