Report on Academic Performance for Academic Year 2008-2009
By Dr. Angus Munro (Vice President for Academic Affairs)

The following is the executive summary of the report. The figures and tables cited below appear in the full document, although three are also reproduced here.
An overview of instructor involvement indicates that, of the 74 instructors who taught the 331 undergraduate and/or graduate classes (covering 124 courses) at UC over the three terms of 2008-2009, a ‘typical’ instructor taught for two terms, during which they taught 2.50 courses (3.50 classes; the figures are medians). The relatively small contribution of individual instructors (Tables 2 and 3; Figure 1) in part reflects attrition due to poor performance, as indicated by student and peer evaluations, but is also the result of other factors: for example, a merely periodic need for a particular instructor’s expertise, or changes in individuals’ circumstances (including leaving the country).
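
As an illustration of how the ‘typical instructor’ medians above can be derived, the following minimal sketch computes the median numbers of terms, courses, and classes per instructor. It is written in Python; the record layout is hypothetical, as the report does not specify how the teaching data were stored.

    from statistics import median
    from collections import defaultdict

    # Hypothetical teaching records: (instructor, term, course, class_id).
    records = [
        ("Smith", "I",   "MATH101", "MATH101-A"),
        ("Smith", "I",   "MATH101", "MATH101-B"),
        ("Smith", "II",  "MATH201", "MATH201-A"),
        ("Jones", "II",  "HIST110", "HIST110-A"),
        ("Jones", "III", "HIST110", "HIST110-B"),
    ]

    terms, courses, classes = defaultdict(set), defaultdict(set), defaultdict(set)
    for instructor, term, course, class_id in records:
        terms[instructor].add(term)        # distinct terms taught
        courses[instructor].add(course)    # distinct courses taught
        classes[instructor].add(class_id)  # distinct classes taught

    print("median terms per instructor:", median(len(s) for s in terms.values()))
    print("median courses per instructor:", median(len(s) for s in courses.values()))
    print("median classes per instructor:", median(len(s) for s in classes.values()))
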
The introduction of a new grading system in Term II had an impact on the distribution of grades for both undergraduates and graduates in that term (Munro, 2009) and in Term III (Figures 2, 3), as well as on the class Grade Point Average (cGPA: Table 4). In addition, differences in cGPA emerged between sessions in Term II, with morning and afternoon students achieving higher scores than weekend students; scores for those attending evening classes were intermediate (Figure 4, Table 5). Further analysis indicated that, in Term II, this was mainly associated with differences in the performance of Foundation Year students across sessions (Table 6). However, the difference was maintained in Term III (Figure 5), when Foundation Year courses were not offered.
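
The report does not state the cGPA formula explicitly; the sketch below assumes the conventional definition (the mean of the grade points awarded in a class) and a hypothetical grade-point scale, which may differ from the University’s actual scale.

    # Hypothetical grade-point scale; the University's actual scale may differ.
    GRADE_POINTS = {"A": 4.0, "B+": 3.5, "B": 3.0, "C+": 2.5, "C": 2.0, "F": 0.0}

    def cgpa(grades):
        """Class Grade Point Average: mean grade points over a class's students."""
        points = [GRADE_POINTS[g] for g in grades]
        return sum(points) / len(points)

    # e.g. comparing a morning-session class with a weekend-session class
    print(cgpa(["A", "B+", "B", "A"]))    # 3.625
    print(cgpa(["B", "C+", "C", "B+"]))   # 2.75
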

The instructor of each class was evaluated by the students. In general, results were comparable between terms (Figures 6 and 7). Overall, poor evaluations were confined to a small proportion of instructors who taught relatively few classes (Figure 8), and there was no obvious trend for an instructor’s evaluations to improve over subsequent terms (Figure 9). Whilst large class sizes tended to be associated with relatively low student evaluations of the instructors involved (especially in Term I), there was no discernible negative effect on cGPA (Figures 10 and 11). Also, contrary to what might be expected, an overall analysis indicated that students’ evaluation of a class was not related to its cGPA in any of the terms (Table 8 and Figure 13); for Foundation Year classes, there was a negative correlation rather than the anticipated positive one (Figure 14). Overall, the best correlation with student evaluations was with the proportion of an undergraduate class achieving at least a B+ grade in Term II and especially Term III; no such correlation was apparent in Term I (Table 8 and Figure 12). There were no clear correlations between evaluations and any measure of performance for the smaller number of (much smaller) graduate classes (Table 8).
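
The report does not describe its statistical methods; one plausible approach to the relationships above is a Pearson correlation between each class’s mean student evaluation score and a performance measure such as cGPA or the proportion of students achieving at least B+. A minimal sketch, with invented illustrative data:

    from statistics import correlation  # Pearson's r; requires Python 3.10+

    # Hypothetical per-class data: mean evaluation (1-5 scale), cGPA,
    # and the proportion of the class achieving at least a B+ grade.
    evaluation = [4.2, 3.8, 4.5, 3.1, 4.0]
    cgpa       = [3.1, 2.9, 3.4, 2.7, 3.0]
    prop_bplus = [0.40, 0.25, 0.55, 0.10, 0.35]

    print("evaluation vs cGPA:", correlation(evaluation, cgpa))
    print("evaluation vs proportion >= B+:", correlation(evaluation, prop_bplus))
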
Instructors were also evaluated by one or more full-time faculty. Average instructor performance showed a trend towards improvement in Term II relative to Term I (Figure 15). Overall, these peer evaluations showed a weak positive correlation with the weighted-mean score of the student evaluations for the same instructor (Figure 16). Comparisons of the different sessions in Terms I and II failed to identify any instructor-related contribution to the observed inter-sessional difference in cGPA, based on the limited data available.
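
The weighting behind the ‘weighted-mean score’ is not stated in the report; a reasonable assumption is that each class’s mean evaluation is weighted by its enrolment, as in this sketch:

    def weighted_mean_evaluation(scores, class_sizes):
        """Weighted-mean student evaluation for one instructor, weighting each
        class's mean score by class size (the weighting here is an assumption)."""
        return sum(s * n for s, n in zip(scores, class_sizes)) / sum(class_sizes)

    # e.g. an instructor with three classes of different sizes
    print(weighted_mean_evaluation([4.5, 3.8, 4.0], [30, 12, 45]))  # about 4.14
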

A profile of each of the six Colleges was compiled, based on the number of courses and classes offered (Tables 10 and 11), the overall rankings of individual instructors based on student evaluations (Figure 17), and the relationship between student evaluations and both mean cGPAs and peer evaluations (Figures 18-23). Whilst there were no clear-cut differences between Colleges, there was some evidence for at least two ‘leagues’: one comprising Law, Management and Social Sciences, the other comprising Arts and Humanities, Education, and Science and Technology.

It is concluded that there is evidence for an improvement in student evaluations over the previous year. The emerging differences between sessions in undergraduate cGPAs cannot easily be attributed to differences in the quality of teaching: instead, they may reflect other pressures upon students in the evening and weekend sessions, or the selective effects of the Scholarship Exams. The differences between Colleges, most especially the low rating of Science and Technology, are consistent with findings at other universities overseas.
Various proposals are made regarding the future implementation of student and peer evaluations, and how they can best be used to support the University’s further development whilst maintaining its academic integrity.