
The halfbaked academic rubric

I have to admit that I’m not a fan of rubrics – preferring non-standardised forms of assessment – perhaps because most of the rubrics I have come across have been pretty terrible. So I thought I’d have a go at developing a better one.

I am aiming for this halfbaked traditional academic rubric to be a generic rubric that could be applied to any traditional academic assignment produced in English. By this I mean any undergraduate assignment that requires a predominantly text-based response, in English, and is designed to assess an individual working on their own (i.e. it does not cater for group assignments or for multimedia projects or presentations).

I wonder if this is even possible – and would love your feedback. Is this a crazy idea? How would you improve this rubric?

A good rubric should meet the following criteria (is this a rubric for rubrics?!):

  • The rubric should have a number of discrete criteria which are orthogonal (i.e. mutually exclusive)
  • Each criterion should be relevant to the assignment set
  • Each criterion should have a number of ‘grade’ descriptors
    • These descriptors should be unambiguous – they should make it clear what you need to do to achieve each grade for that criterion
    • The descriptors for each grade should not overlap with any of the other descriptors (also orthogonal)
    • There should be a clear progression from the lowest to the highest grade for each criterion
  • Each ‘grade’ descriptor should have an associated mark /score
    • The lowest ‘grade’ should always have a score of 0

This rubric has three core sets of criteria, related to:

  • the extent to which the assignment brief has been met (The brief)
  • the quality of communication (Academic writing)
  • the depth of academic expertise (Academic understanding)

The first of these three sets of criteria feels different to the other two, because if you don’t address The brief (i.e. you don’t do what the assignment asks you to do) then you shouldn’t be able to get a high grade, even if you have done well on the other criteria. To reflect this, the rubric uses the mark from The brief as a multiplier – the mark you get on The brief is multiplied by the sum of the marks that you get on the other criteria.

Rubric for The brief

All of the other criteria also have five ‘grade’ descriptors, which are scored from 0 to 4.

Rubric for Academic writing

As Academic writing has three criteria, the maximum possible score is 12.

Rubric for Academic understanding

As Academic understanding has four criteria, the maximum possible score is 16. This is intended to reflect that Academic understanding is more important than Academic writing.
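
To keep track of the structure so far, here is a small Python sketch (the layout and names are my own, purely for illustration):

    # Each criterion has five descriptors scored 0 to 4, so a set's
    # maximum score is 4 x (number of criteria in that set).
    criteria_counts = {"Academic writing": 3, "Academic understanding": 4}
    max_scores = {name: 4 * n for name, n in criteria_counts.items()}
    print(max_scores)  # {'Academic writing': 12, 'Academic understanding': 16}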

Given that the rubric explains what a student has to do to achieve particular scores, it is critical that students engage with the rubric as part of the assignment development process. To that end, this rubric includes an additional element: as part of the assignment, each student has to submit a copy of the rubric showing the ‘grade’ descriptors that s/he thinks the assignment aligns with. When the tutor marks the assignment, s/he has an additional criterion to use – Student’s assessment.

Student's self-assessment

The overall mark is made up of the tutor’s scores for Academic writing (max. 12) plus Academic understanding (max. 16) plus Student’s assessment (max. 4), all multiplied by their score for The brief (max. 1).

mark = (Academic writing + Academic understanding + Student’s assessment) × The brief

That gives a maximum mark of 32 – which isn’t a very helpful number (who marks out of 32?).
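
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python (the function and variable names are my own and purely illustrative, and I am assuming that the score for The brief is expressed as a multiplier between 0 and 1):

    def overall_mark(writing, understanding, self_assessment, the_brief):
        # writing: sum of the three Academic writing scores (0-4 each, max 12)
        # understanding: sum of the four Academic understanding scores (0-4 each, max 16)
        # self_assessment: the Student's assessment score (0-4)
        # the_brief: the multiplier for The brief (assumed to lie between 0 and 1)
        assert 0 <= writing <= 12 and 0 <= understanding <= 16
        assert 0 <= self_assessment <= 4 and 0 <= the_brief <= 1
        return (writing + understanding + self_assessment) * the_brief

    # A strong assignment that fully addresses the brief:
    print(overall_mark(10, 14, 3, 1.0))  # 27.0
    # The same scores where the brief has only been half addressed:
    print(overall_mark(10, 14, 3, 0.5))  # 13.5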

I could change this in several ways:

  • Have six grade descriptors for each criterion (scored 0 to 5) – resulting in a maximum possible score of 40.
  • Allocate different marks to some grade descriptors (e.g. doubling the marks for each of the Academic understanding criteria).
  • Do a scaling calculation: if I wanted a mark out of n, I would divide the mark achieved (m) by 32 and then multiply by n.

Mark out of n = (Mark achieved ÷ 32) × n
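
Continuing the sketch above, the scaling is a one-liner in Python (again assuming a raw mark out of 32; the default of n = 100 is just for illustration):

    def scaled_mark(mark, n=100):
        # Rescale a raw mark out of 32 to a mark out of n
        return (mark / 32) * n

    print(scaled_mark(27.0))      # 84.375 (out of 100)
    print(scaled_mark(27.0, 20))  # 16.875 (out of 20)

The fractional results suggest I would also need a rounding rule, which adds yet another arbitrary choice.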

Do you have a better solution?

Tell me what you think about this halfbaked traditional academic rubric and how it might be improved by adding a comment below …

2 thoughts on “The halfbaked academic rubric”

  1. Karen

    Some thoughts from an Indigenous academic in Australia:

    I've generated and marked assessments from mainstream enabling courses (Indigenous enabling courses as well) through to third-year and Masters courses, and I personally think the format of the actual assessment task requires an individual rubric that aligns with the objectives of the course and what students are to have achieved across the semester.

    The way rubrics are categorically sectioned into "major" and "minor" inaccuracies that require a 'leap' from one label to another, and are often open to the interpretation of the marker (even after marking moderation meetings!), has been a significant issue in my experience. Who determines what a "major" error is or a "minor" error? It also very much depends on the level of study: a first-year student can put substantial effort into attempting accurate referencing, make substantial errors, and be classified as having "minor errors". Fourth-year students, however, can be classified as committing "minor" errors that are literally a missing comma, a misplaced full stop, or incorrect use of italics. The majority of markers in Australia are sessional academics, with limited time-frames to mark an assessment and poor incentives to mark as a consistent and collaborative team member within courses.

    When teaching and marking on Indigenous courses, I've had to embed the cultural/social/historical capital within the rubric as a guiding expectation and to address cultural clash. The social and cultural capital brought into a classroom is rarely factored into any rubric in mainstream tertiary education. Those students who have already attained the necessary cultural capital (Standard Australian English) are set up for success without any effort, while those without this cultural capital are set up to fail for not being able to 'read' and 'interpret' the hidden curriculum.

    A straightforward essay will meet the criterion much more easily than a broad spectrum of choice and original opinion supported with theory. What one marker deems a satisfactorily critical answer on a topic may be far less critical to another. For example, when I set out a criterion for Aboriginal issues, a non-Aboriginal student may produce a critical analysis according to theory, while the Aboriginal student may provide a seriously critical analysis on a much deeper level, grounded in theory but intertwined with a set of cultural and historical capital that cannot be captured in a rubric. The latter will always receive fewer marks, even though they have a higher level of understanding and knowledge than the one who has met the criterion from a basic level of cultural capital, because that is all the rubric caters to. A non-Indigenous marker with limited cultural/social/historical capital may mark the latter much differently to an Indigenous marker with substantial cultural capital.

    I have set up my marking rubrics from a Bloom's Taxonomy Framework perspective, treating each section (fail, pass, credit, distinction, high distinction) as a spectrum rather than a binary of fail/achieved. My students often show me a range of skills and critical thinking that rubrics are unable to capture (and validate), other than straightforward academic literacy and referencing.

    Today's marking rubrics are not keeping up with the innovative and open-ended problem-solving we require from our graduates, and are actually excluding diverse solutions and critical analysis from a different standpoint. Considering that the majority of tertiary educators lack diverse backgrounds, inequitable practices will continue regardless of an attempt at equality through standardised courses and rubrics that dictate who fits within a categorical achievement scale.

    Karen Wallace

    1. PeterT

      Hi Karen, and thank you for raising so many thought-provoking issues.

      I'm going to try to respond - somewhat tentatively - to the related issues, which I'm going to number in case other folk want to jump in and join the discussion of particular points.

      1. Do you need an individual rubric for each assignment?
      Hmm. I'm really not sure about this one. What you say sounds right. However, for all the assignments in the undergraduate courses I have worked on, the same generic issues are assessed, albeit in response to different foci and tasks. Bearing in mind that my focus is on text-based assignments, they all include a focus on Academic writing and on Academic understanding. Isn't it clearer for students, therefore, to have a consistent set of criteria that address those issues? There are a couple of ways in which I think a generic rubric can be aligned with different assessment tasks:

      • Have a criterion that specifically focusses on the assessment task - The brief - and which has more weight (e.g. by being a multiplier)
      • Provide examples that illustrate the 'grade' descriptors and are specific to the particular assessment. This is critical - you need to use the rubric in your teaching (assessment as learning), not just have it as something that the markers use (assessment of learning). So I would expect there to be teaching activities that involve unpacking the rubric in relation to each assignment. (This links with your point about cultural capital (see 3 below) and about the definition of fuzzy terms such as 'minor' and 'major' errors (see 2 below).) And of course I have included an element of students self-assessing and getting marks for doing so in a way that aligns with the tutor's assessment. (Does this introduce some bias, given that we know females tend to be more self-critical than males and so may mark themselves more harshly?)

      2. Leaps and fuzzy descriptors
      Your comment about 'leaps' within the descriptors of a criterion is an important one - and perhaps I need to change my criteria for rubrics so that they reflect that there should be a smooth progression between grade descriptors.
      I think it is inevitable (and probably desirable) that some fuzziness exists in grade descriptors for academic assignments - and some scope for academic judgement. Here too I think the best way to address this issue is in how you use the rubric in your teaching - perhaps by getting students to come up with examples that illustrate each grade descriptor for that particular assignment. In essence this is the same moderation issue as you have with multiple markers - so the solution is the same: comparison and discussion to establish a shared understanding of how to apply the grade descriptors. There is, as you suggest, a genuine problem where markers do not engage in adequate moderation of assignments.

      3. Cultural capital
      This is a critical point - and one where I would agree that many assignments (and rubrics) are inequitable. One suggestion is that you should couch your rubrics (and the assignment brief itself) in terms that are familiar to the students. I wonder, though, how students develop academic cultural capital if you do that. So my suggested approach (which may well be flawed - so do push back) is to use the rubrics in your teaching - to explicitly engage in teaching activities that help students to understand the rubric. Doing this will necessarily involve making a bridge between students' existing understandings and experiences and unfamiliar academic terminology and ways of thinking. This is difficult - in order to do it we need to understand cultural differences and how to make bridges (without undervaluing other cultural perspectives). There is a danger here that we see this only as a process of the student 'taking on' the dominant academic culture - but it needs to be a two- (or multi-) way process, with the tutor/marker needing to understand the student perspective in order to fully grasp the depth of knowledge and understanding that is being displayed in the assignment. I definitely don't have an easy answer to this one ...

      4. Aligning rubrics with Bloom's Taxonomy
      I was trying to address this in the Academic understanding criteria - which at least partly align with Bloom's Taxonomy (Engagement - Understanding/Synthesis - Criticality). I'm thinking aloud here - but I don't think that these things are necessarily achieved sequentially (you can be critical even if you aren't great at synthesis, and vice versa). I also think we need to support students in seeing what progression looks like within each of these competencies. Again, intuitively what you are saying sounds right, and yet I don't think it works in practice???

      5. Rubrics don't work
      That is how I am interpreting your final paragraph - not least because I tend to agree! We are so concerned with standardisation - in an era when we are told that innovation and creativity (i.e. difference) are key. The real problem here is with the purpose of assignments - the very fact that we have to award a grade undermines their educational value (though if we can withhold the grade until after the students have engaged with the written feedback - and that feedback is focussed on how they might have done even better - then that helps to overcome that problem).

      I'm wondering to what extent my responses in any way really address the issues Karen has raised. I'd love to hear your views - reply below ...

