Goodwin and Hubbell begin this section by discussing the serious issue of grade inflation in schools (Goodwin & Hubbell, 2013, pp. 44-45). Grade inflation creates a situation where students receive grades that may not reflect their knowledge or skills. However, perhaps the grades do reflect what was learned, and the bar for that learning was simply lowered!
Janet Pilcher describes employee performance as a continuum in which about 90% of employees operate in and around solid to high-performing behaviors; for the other 10%, there are several outcomes: they modify their behavior and do better, they leave the job, or they are let go (Pilcher, 2018, pp. 28-29). Organizations that apply good leadership approaches see only up to 3% of their employees land in the latter two outcomes. Pilcher argues that with good coaching we can do the same for our students, but only if we measure them against high expectations to begin with.
In the last section, we tackled making effective performance-based rubrics. This section will discuss how those rubrics and other formative assessments should guide students toward the high expectations set for them.
In my AP CSA course, I set out to create a lesson and assignment about String objects and their methods. Students were given a basic task: take the email addresses of everyone in class and use String methods to remove everything except each person's full name. Our data, in this example, used email addresses formatted in an easy pattern, firstname.lastname@email.com (e.g., john.doe@gmail.com). As with the exercise I mentioned on the last touchstone page, students needed more work on using return methods, so this would be a requirement. I also told students that, while we would be using the emails from our class roster, I wanted to be able to provide ANY email address to their program and have it return the expected results.
This expectation was crucial because otherwise students would simply remove the unwanted characters with hand-crafted substring calls. Several students did this anyway, which also meant they did not use a return method, and it led to an initial poor grade based on performance. A running theme in our CS classes is that programming is a tool to automate the boring stuff. Sure, they figured out how to manually remove the unwanted characters, but what if this were a spreadsheet of 1,000+ records? We are in the business of being efficient! Students were also sloppy with their naming conventions. Variables and methods can be named anything, so long as the names are syntactically correct and used consistently.
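To make the requirement concrete, here is a minimal sketch of what a general-purpose solution might look like in Java, using indexOf and substring so it works for any firstname.lastname@domain address. The class and method names are my own illustration, not actual student code.

```java
public class EmailParser {
    // Returns the full name hidden in an email address of the form
    // firstname.lastname@domain, e.g. "john.doe@gmail.com" -> "john doe".
    public static String fullNameFrom(String email) {
        int dot = email.indexOf(".");   // boundary between first and last name
        int at  = email.indexOf("@");   // start of the domain we discard
        String first = email.substring(0, dot);
        String last  = email.substring(dot + 1, at);
        return first + " " + last;      // the required return method
    }

    public static void main(String[] args) {
        System.out.println(fullNameFrom("john.doe@gmail.com")); // john doe
    }
}
```

Because the indexes are computed rather than hard-coded, the same method handles a class roster or a spreadsheet of 1,000+ records without any manual editing.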
So with these aspects in mind, I developed the rubric to the right for the assignment.
I used my rubric to define a higher expectation for my students. I knew they were capable of using the substring method, but I expected more. It should be noted that before this particular assignment, I was not leveraging effective rubrics, if I was using rubrics at all. This assignment made for a good experiment in how students react to a well-crafted rubric.
Happily, I can report that the students rose to the challenge. Half of my students showed increased interest in completing the assignment, finished sooner than on previous assignments, and reached out for additional help with more specific, targeted questions about their code. I noted earlier that several students took the easier substring-only approach; with this rubric, I was quickly and easily able to point out how that did not meet the performance expectations.
As far as the 'experiment' was concerned, it was evident that students were willing and able to reach the expectations placed before them.