David Gehring and Hannah Nicholson, University of Nottingham
Abstract
This study investigated the use of participation marks (broadly understood) in assessment by HEIs in the UK. By means of two surveys – one for institutions in the East Midlands, one for the rest of the UK – staff members were asked about the perceived and real benefits and challenges of using such marks, and whether using these marks in their modules affected student perceptions and choice. Results varied: some surveys returned relatively negative views, others were strongly positive, and others still fell somewhere in between. A focus group was also conducted to ascertain the views of students at an institution where participation marks have been used for several years. Although views among staff varied to some extent, the students’ perspectives were comparatively uniform in their positivity. Some general conclusions are drawn from the evidence provided in the surveys and subsequent interviews.
Rationale & Methodology
The rationale for this study was clear: meaningful and consistent student engagement in seminars has long been a challenge faced by educators. Although instructors often have a small handful of particularly active students in any given seminar group, getting the whole class to participate and engage with the material (primary sources, historiography, etc.) can be elusive. Students’ attendance in seminars, even where strictly compulsory, can also be affected by low levels of engagement. When students are rewarded for their seminar preparation and participation by way of concrete marks that contribute to their overall module mark, however, attendance and participation levels can improve significantly. The use of such marks in UK universities is at present sporadic and documented largely anecdotally, but HEIs in the USA and elsewhere have assessed student participation in a variety of ways for decades. Seminar participation marks can take a variety of forms: in one instance, a general and impressionistic mark may be assigned for overall participation and engagement over the course of a module; in another, a more documentable approach may be taken in the form of a portfolio of short written and/or reflective works produced by students over the course of the module. In short, this study sought to address a widespread issue, known to nearly all instructors, by investigating how and to what extent UK HEIs have used participation marks to increase not only student attendance but also their preparation, participation, and engagement both inside and outside the classroom.
The study was conducted initially by way of surveys emailed to departments, schools, and faculties of History or, where History was combined with other disciplines, to the parent department/school/faculty. Where possible, directors of teaching and heads of department were emailed directly. One survey was designed for HEIs in the East Midlands, where the authors of this study work, while a simplified version was provided for the rest of the UK. The former asked eight questions, ranging from a simple yes/no on the current use of participation marks to more detailed questions addressing problems of verifiable assessment and external moderation. The latter was limited to four questions, eight being thought too onerous to inflict on HEIs outside the region; these questions covered perceived benefits, the opinions of staff members, and student reactions (as perceived by staff). Ninety-seven HEIs were sent these surveys, and forty-one returned completed forms. In addition, one external examiner for an institution using participation marks was interviewed because of the repeated concerns noted in the surveys regarding verifiability, consistency, and moderation. Finally, a focus group of undergraduates was interviewed at an institution where such marks have long been employed to significant effect; some of these students volunteered (and so were self-selecting) while others needed more ‘encouragement’ from staff to participate. By means of these three data sources, it was hoped that a reasonably rounded picture would emerge, along with suggestions for increasing student attendance and engagement.
Staff Survey Responses
Of the forty-one HEIs to return surveys, eighteen used participation marks in one variety or another. Of the remaining twenty-three, four responses represented a ‘hard no’, in that there was no indication that such marks were even a topic of conversation, while the other nineteen reflected a ‘soft no’: these institutions had clearly thought about and/or weighed the use of participation marks in previous years but had not (yet) decided to implement them. A few of these nineteen ‘soft no’ institutions asked to be informed of the results of this study, reflecting an appetite to address the problem of student attendance and engagement.
Staff in History departments/schools/faculties where participation was not assessed either informally or formally (formatively or summatively) were sometimes candid in voicing their concerns and opposition but offered comparatively little by way of perceived benefit. The most frequent issues of contention were the following:
• By what criteria should participation be measured?
• How can fairness and consistency be ensured across staff and students?
• How can we avoid rewarding extroverts and shaming introverts?
• How can we deal with student appeals or extenuating circumstances?
• How transparent is the process, and what would the external examiner say?
• Is there a tendency for participation marks simply to be mark-inflating?
• How labour intensive is all the record keeping and logging?
• At some institutions, university policy prohibits such use.
Some staff suggested that using marks to assess student participation could act as an incentive to increase attendance and could enhance employability. Many of these ‘soft no’ surveys indicated that their staff assessed students’ seminar presentations, which offered discrete periods of assessment according to generally accepted rubrics, though one survey noted that its university had recently dropped the use of assessed presentations. In general, the lasting impression from the ‘no’ surveys is one of varied opinion, with some staff adamantly opposed, others advocating strongly, and a final group curious to know the practices at other institutions.
The eighteen surveys indicating that staff did use participation marks in one form or another were far more informative, for, as might be expected, these History staff had previously thought through the issues at stake and experienced the impact (positive and negative) of using such marks. The perceived challenges noted in the ‘yes’ surveys were similar to those in the ‘soft no’ surveys (e.g. verifiability, fairness, administration, inflation, student anxiety). The benefits, however, included both the expected and the unexpected.
The perceived and observed benefits began with the basic point that assessing participation offers students an incentive not only to attend class but also to engage actively in discussions and the learning process. At one institution in particular, assessment of seminar participation led to a substantial reduction in the proportion of students who never attended class at all (from 10% to less than 5%), suggesting that this mechanism offered an incentive even to those students least likely to attend; at the other end of the spectrum, the same institution reported that, while fewer than 10% of students had attended every class before the introduction of participation marks, 30-40% did so afterwards. Students at these institutions also had a greater incentive to prepare regularly and thoroughly by reading the assigned materials or preparing reflective notes. Slightly more unexpected, longer-term benefits (for the students themselves) noted in these surveys included improved communication skills, greater confidence among one’s peers, and the development of skills relevant to future employability. Another key benefit, corroborated in the educational literature, is that a diverse range of assessments is vital because of the various ways in which students learn.
At the ‘yes’ institutions staff had the opportunity to ‘opt in’ to using participation marks; in other words, these departments/schools/faculties did not force all modules to deploy this method of assessment across the curriculum. It also became clear that, while some HEIs used such marks across all three or four years of the degree programme, others used participation marks only for first-year modules, which, falling outside external examination, are largely immune to the pressures of verifiability. The ‘yes’ surveys indicated that staff at these universities were generally supportive of and happy with using participation marks, though some surveys noted lingering anxiety about rewarding attendance, the view that participation marks were necessary but probably best used as a formative rather than summative exercise, and the observation that colleagues outside History were sometimes suspicious of the practice yet at other times exceeded History in its use.
Staff perceptions of reactions from students at ‘yes’ institutions emphasize, overall, that using participation marks has had the desired effect of increasing student engagement, and that using such marks in some but not all modules has not significantly affected student choice; i.e., students do not necessarily prefer modules with participation marks any more or less than those modules without. Some students informed staff that, while they may initially dislike being compelled to take an active part in their learning process, they have come to realize its value. The views of students themselves are considered below.
An External’s View
The views of an external examiner for a ‘yes’ institution, one which has used participation marks in various ways for many years, are illuminating and may assuage concerns among staff at ‘no’ institutions. The examiner served a four-year term and offered a candid analysis.
At this particular ‘yes’ institution, seminar participation usually accounted for 3-4% of the overall module mark (one outlying module allocated 10%), and an assessed seminar presentation was also included; across the department/school/faculty, staff had the flexibility to choose whether to adopt participation marks, since no blanket policy forced a staff member’s hand either way. The examiner could not discern a difference in overall marks between modules that assessed participation and those that did not, which may help to allay concerns about the potential for mark inflation. The examiner did, however, notice that in some cases a student with especially good seminar participation marks might be nudged slightly upward if their other assessment marks were on the cusp of a higher classification (e.g. if essay and exam marks were 69, a solid participation mark could bump the overall module mark to 70). In the examiner’s view, such a bump was in fact a good thing because it demonstrated that a student’s active engagement in the classroom could pay off in concrete ways. The external also mentioned that they were not aware of any resistance or complaints from students regarding participation marks, though they noted that any case reaching them would have had to be rather serious. If there were complaints, the external examiner suspected that these were low-level individual grumblings; nothing untoward arose at any of the (roughly fifteen) exam board meetings the examiner attended.
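To illustrate the arithmetic with hypothetical figures (the interview did not report the exact weighting behind this example): under a 4% weighting, a student whose other assessments average 69 and whose participation mark is 95 would receive (0.96 × 69) + (0.04 × 95) = 66.24 + 3.80 ≈ 70, lifting the module into the higher classification, whereas a participation mark of 80 would yield (0.96 × 69) + (0.04 × 80) ≈ 69.4, leaving the overall mark at 69. At such a small weighting, in other words, only a genuinely strong participation mark moves a cusp case upward.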
It may be inferred, therefore, that students generally trusted their instructors’ assessment of their seminar participation. The examiner similarly trusted the staff at this ‘yes’ institution because of the rigorous procedures and high standards maintained throughout the assessments. Indeed, although the examiner admitted that they could see no verifiable ‘paper trail’ for participation assessments, they were in no way suspicious of staff at the institution: mutual respect and trust between the examiner and staff members were key. On one issue, though, the examiner did seem to confirm a concern among those wary of using participation marks: workload. Although a paper trail for assessing student participation did not exist in the traditional way, the assessment of seminar presentations and other written assignments did add to the amount of paperwork with which staff needed to grapple. Added workload and paperwork, it may be observed, would come with any new or different form of assessment.
Students’ Views
Staff and examiner perceptions of students’ views can be helpful, but what did the students themselves think? The focus group of eight History students, some of them joint honours students, some in year 2 and some in year 3, spoke in a surprisingly open manner in front of the authors of this study. Over the course of a ninety-minute discussion, the students were asked a range of questions, from general topics (e.g. What do you think of seminar participation being used as part of formal assessments?) to more focused issues (e.g. Are you more or less likely to choose modules which use this kind of assessment?). Students’ reactions – both written and oral – varied only to a limited extent; indeed, significantly more unanimity emerged from the students than expected.
Seminar participation marks at this university were calculated from a ‘Seminar Engagement Form’ (also known as a ‘log’), on which students recorded their notes and reflections on their own presentation to their seminar group (given once per semester), their summative reflections on their own engagement in seminars across the semester, and, on a weekly basis, their preparatory readings for, and contributions to, seminars. Students found this final component the most time-consuming but also the most beneficial. The particular and general elements of these completed forms enabled staff, at the end of the semester, to see how students viewed their own seminar preparation and participation, how students could trace their own development, and how honest students were with themselves. The students estimated that, apart from their usual seminar preparation and reading time, these forms took about fourteen hours to complete over the course of a twelve-week semester. Though initially daunted by the length and comprehensive nature of the form, nearly all of the students liked the log because it forced them to stay on top of their reading each week, thereby adding disciplined structure to their schedules, and because it kept a record of what they had read and said in class, which in turn was helpful when preparing for other forms of assessment (especially essays). Also, because the log was a written document maintained over the course of the semester, a student who missed a seminar due to illness, for example, could still record what they had read in preparation and how their reflections on that week’s topic connected with previous weeks and topics. The form, according to these students, allowed for more (and more helpful) feedback than essays or other methods of assessment could provide; and, again according to the students, when it came time to complete student evaluations of the modules, the form helped them think critically about the weekly development and overall structure of the module as a whole.
The students in this focus group noted uniformly that, although they had been apprehensive about speaking in seminars during their first year (for fear of being labelled ‘stupid’ by their peers), their views changed as their confidence built over the course of that first year and developed further in the second. Nor were the students’ reflections limited to their academic development. When asked for their general reactions, one student noted that the assessment of their seminar participation ‘opens up so many doors’ on a personal level, another mentioned that it ‘helps you to grow as a person’, while a third mused that ‘what was once a weakness for me [their confidence to speak in class] can now become a strength’.
More particularly, the students admitted that first- and second-year modules incorporating the assessment of seminar participation via logs did force them to speak more in seminars than they otherwise would have, but by third year these students needed no such enforcement mechanism because their preparation and contributions were seen as beneficial in themselves, in both the short and longer terms. Getting ‘recognition’ for their efforts, as the students put it, was icing on the cake; here the students drew a clear distinction between the form as recognition of their own initiative and efforts and the form as an incentive or spur forcing them to do something they did not want to do.
The fact that some but not all modules incorporated this form of assessment did not seem to bother these students or affect their choices. Some students in the focus group noted that they prepared for, and contributed to, seminars as fully as possible whether their participation was formally assessed or not; indeed, some students at this university had been known to use the log form for their own personal study in modules that did not require its submission for assessment. Other students, however, acknowledged that their preparation and participation dropped for seminars without such assessment; these students also admitted that some modules without participation assessment had less engaged seminars, with fewer students well prepared. Thus, the students’ impression mirrored what many instructors have long found frustrating. A fairly strong point made by the students in the focus group was that this form of assessment did not significantly affect their choice of modules. A few reported that they might be more likely to choose a module if it incorporated seminar participation marks (i.e. it was a perceived benefit that would make the module more engaging), but the subject matter of the module remained the top criterion. Some students in the focus group knew of other students (not interviewed) who actively avoided modules with participation marks, though there seemed to be little overall correlation between class sizes for modules which did or did not use this form of assessment.
Beyond the ‘Seminar Engagement Form’ as a formal assessment in its own right, the students saw how it affected their overall performance in the module and at university. One student, for example, described how their first-year average of 40% jumped to 60% in the second year, crediting this significant rise to this form of assessment and to their own recognition of the importance of thorough preparation. The overall level of satisfaction with History as a course of study was evident in one student’s remark, delivered with striking honesty, ‘I feel comfortable in my degree’, and in another’s emphasis that it is not enough in History simply to be good at writing essays. The third-year students in the focus group were particularly enthusiastic about how their personal growth and confidence in front of their peers could help them when presenting to outside bodies and after university. One observed that other universities do not seem to offer such variety in assessment, so their experience in front of, and in collaboration with, their peers could be a real selling point on the job market. Others admitted that they had ‘grown as people, not just [as] academics’; one said that this form of assessment had ‘made me a better person’, and another that their communication skills had improved so dramatically that relationships within their family had improved.
When asked about potential or real disadvantages of this form of assessment, the students in this focus group were similarly honest, and they emphasized that the benefits outweighed the costs. They noted that less honest classmates had the potential to record their seminar preparation and participation inaccurately by claiming to have read material they had not actually read. They also noted the potential for a student to receive a low mark based upon the completed written form at the end of the semester despite frequent spoken contributions in the classroom. Finally, the importance of the form, and of seminar participation more generally, was not entirely clear to them during the first year, because the form carried comparatively little weight and because it had not been explained as fully as it was in the second year. Thus, a perceived disadvantage was that this form of assessment was not used to greater effect earlier in their university careers, when it could have instilled better time management and a sense of active engagement in their learning process.
Summary & Conclusions
The primary concerns of staff at ‘no’ institutions, as indicated in the surveys returned for this study, involved the criteria by which participation can or should be measured, the transparency of the marking process, the potential to reward extroverts while punishing introverts, the complications caused by appeals or extenuating circumstances, and the increased workload caused by (yet more) paperwork. The reactions and perceptions of staff at ‘yes’ institutions, as indicated by their surveys, suggest firmly that the use of participation marks as a formal tool of assessment brought the benefits of greater student attendance and participation in seminars; and, while these staff sometimes acknowledged the potential problems raised above, the benefits outweighed the costs. This characterization of the cost-benefit balance was mirrored in the views of the external examiner interviewed for this study, who emphasized their trust in the professionalism of their colleagues at a different university, especially given the rigour those colleagues displayed in other forms of assessment.
The most illuminating findings, however, come directly from the students who participated in the focus group. These students endorsed the log as the written record of the assessment because it provided clarity of purpose and encouraged weekly discipline in their studies; as a written record, the form also provided transparency for marking purposes, both internal and external. The students also noted that students who might remain quiet in the classroom – especially during the second year – could readily demonstrate their preparation and intellectual engagement with the material on the written form; in the privacy of their own study, such students could rest assured that they would not be punished for speaking less in class than their more extroverted classmates. The authors of this study were careful not to ask the focus group whether they thought students would be inclined to complain about, or appeal, their marks for seminar participation, but in sometimes frank, sometimes inadvertent ways the students admitted that they could not really complain too loudly about a mark because the logs were fairly self-evident. Moreover, the students suggested that they trusted their instructors’ judgement, and they took comfort in the fact that, if they missed a seminar or two through illness, they could still complete the log’s weekly reflections to demonstrate their continued engagement despite their absence. The concern of staff regarding increased paperwork was, to some extent, confirmed by the students in the focus group, who noted that the ‘Seminar Engagement Form’ can be a little daunting at first and sometimes tedious during the semester. Overall, though, these students found that the completed document at the end of the semester offered a good overall view of the module as well as a detailed account of each week’s readings and discussions. Interestingly, informal discussion with two members of staff at that institution confirmed that the use of seminar participation marks did increase the time required to assess, but that the overall effect of incorporating such marks was significantly more positive than negative.
This study, broad in its initial canvassing of staff perceptions yet more limited in its incorporation of students’ views, suggests that the very real and justified concerns of many staff regarding the use of seminar participation marks are largely allayed by the experience of other staff who have used and assessed such marks, whether internally or externally. Similarly, the focus group of students confirmed staff perceptions of the advantages both inside and outside the classroom. The key to the transparent and accepted use of seminar participation marks, for both staff and students, lies in a clear written form or log. The staff in History at the university which hosted the focus group seem to have found a way forward in fostering active and engaged students and citizens, while maintaining a flexible policy that does not force all modules, all staff, or all students to adopt such a form of assessment. Such may be the significance of this study.