
The halfbaked academic rubric

I have to admit that I’m not a fan of rubrics – I prefer non-standardised forms of assessment – perhaps because most of the rubrics I have come across have been pretty terrible. So I thought I’d have a go at developing a better one.

I am aiming for this halfbaked traditional academic rubric to be a generic rubric that could be applied to any traditional academic assignment produced in English. By this I mean any undergraduate assignment that requires a predominantly text-based response, in English, and is designed to assess an individual working on their own (i.e. it does not cater for group assignments or for multimedia projects or presentations).

I wonder if this is even possible – and would love your feedback. Is this a crazy idea? How would you improve this rubric?

A good rubric should meet the following criteria (is this a rubric for rubrics?!):

  • The rubric should have a number of discrete criteria which are orthogonal (i.e. mutually exclusive)
  • Each criterion should be relevant to the assignment set
  • Each criterion should have a number of ‘grade’ descriptors
    • These descriptors should be unambiguous – they should make it clear what you need to do to achieve each grade for that criterion
    • The descriptors for each grade should not overlap with any of the other descriptors (also orthogonal)
    • There should be a clear progression from the lowest to the highest grade for each criterion
  • Each ‘grade’ descriptor should have an associated mark/score
    • The lowest ‘grade’ should always have a score of 0

This rubric has three core sets of criteria, related to:

  • the extent to which the assignment brief has been met (The brief)
  • the quality of communication (Academic writing)
  • the depth of academic expertise (Academic understanding)

The first of these three sets of criteria feels different to the other two, because if you don’t address The brief (i.e. you don’t do what the assignment asks you to do) then you shouldn’t be able to get a high grade, even if you have done well on the other criteria. To reflect this, the rubric uses the mark from The brief as a multiplier – the mark you get on The brief is multiplied by the sum of the marks that you get on the other criteria.

Rubric for The brief

All of the other criteria also have five ‘grade’ descriptors, which are scored from 0 to 4.

Rubric for Academic writing

As Academic writing has three criteria, the maximum possible score is 12.

Rubric for Academic understanding

As Academic understanding has four criteria, the maximum possible score is 16. This is intended to reflect that Academic understanding carries greater weight than Academic writing.
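
To make the arithmetic behind those maxima explicit, here is a minimal sketch in Python (the names and data structure are illustrative, not part of the rubric itself), showing how the maxima of 12 and 16 follow from five descriptors scored 0 to 4:

    # Hypothetical encoding of the scoring structure described above.
    # Every criterion has five 'grade' descriptors scored 0, 1, 2, 3 or 4.
    MAX_PER_CRITERION = 4

    CRITERIA_PER_SET = {
        "Academic writing": 3,        # three criteria -> max 12
        "Academic understanding": 4,  # four criteria  -> max 16
    }

    for name, count in CRITERIA_PER_SET.items():
        print(f"{name}: maximum score {count * MAX_PER_CRITERION}")
    # Academic writing: maximum score 12
    # Academic understanding: maximum score 16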

Given that the rubric explains what a student has to do to achieve particular scores, it is critical that students engage with the rubric as part of the assignment development process. To that end, this rubric includes an additional element. As part of the assignment each student has to submit a copy of the rubric showing the ‘grade’ descriptors that s/he thinks the assignment aligns with. When the tutor marks the assignment, s/he has an additional criterion to use – Student’s assessment.

Student's self-assessment

The overall mark is made up of the tutor’s scores for Academic writing (max. 12) plus Academic understanding (max. 16) plus Student’s assessment (max. 4), all multiplied by their score for The brief (max. 1).

mark = (Academic writing + Academic understanding + Self-assessment) x The brief

That gives a maximum mark of 32 – which isn’t a very helpful number (who marks out of 32?).
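
Expressed as code, the combination rule looks something like this minimal sketch (the function name and example scores are illustrative only; the exact intermediate values available for The brief depend on its descriptors):

    def overall_mark(brief, writing, understanding, self_assessment):
        """Combine the tutor's scores, using The brief as a multiplier.

        brief           : 0 to 1   (score on The brief; max. 1)
        writing         : 0 to 12  (Academic writing: 3 criteria x 4)
        understanding   : 0 to 16  (Academic understanding: 4 criteria x 4)
        self_assessment : 0 to 4   (Student's self-assessment)
        """
        return (writing + understanding + self_assessment) * brief

    print(overall_mark(brief=1.0, writing=12, understanding=16, self_assessment=4))  # 32.0 (the maximum)
    print(overall_mark(brief=0.5, writing=12, understanding=16, self_assessment=4))  # 16.0 (a weak brief halves everything)
    print(overall_mark(brief=0.0, writing=12, understanding=16, self_assessment=4))  # 0.0  (miss the brief, get nothing)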

I could change this in several ways:

  • Have six grade descriptors for each criterion – resulting in a maximum possible score of 40.
  • Allocate different marks to some grade descriptors (e.g. doubling the marks for each of the Academic understanding criteria)
  • Do a scaling calculation: if I wanted a mark out of n, I would divide the mark achieved (m) by 32 and then multiply by n.

Mark out of n = (Mark achieved ÷ 32) x n
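
That third option is just a linear rescale. A minimal sketch, assuming for illustration that n = 100 (i.e. a percentage):

    def scaled_mark(mark_achieved, n=100, max_mark=32):
        # Rescale a mark out of max_mark (32 here) to a mark out of n.
        return (mark_achieved / max_mark) * n

    print(scaled_mark(24))        # 75.0 (24 out of 32, as a percentage)
    print(scaled_mark(24, n=20))  # 15.0 (the same mark, out of 20)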

Do you have a better solution?

Tell me what you think about this halfbaked traditional academic rubric and how it might be improved by adding a comment below …

8 thoughts on “The halfbaked academic rubric”

  1. Karen

    Some thoughts from an Indigenous academic in Australia:

    I've generated and marked everything from mainstream enabling courses (and Indigenous enabling courses) through to third-year and Masters courses, and I personally think the format of the actual assessment task requires an individual rubric that aligns with the objectives of the course and with what students are to have achieved across the semester.

    The way rubrics are categorically sectioned into "major" and "minor" inaccuracies that require a 'leap' from one label to another, and are often open to the interpretation of the marker (even after marking moderation meetings!), has been a significant issue in my experience. Who determines what a "major" error is or a "minor" error? It also very much depends on the level of study; a first-year student can put substantial effort into getting their referencing accurate, make substantial errors, and be classified as having "minor errors". Fourth-year students, however, can be classified as committing "minor" errors that are literally a missing comma, a misplaced full stop, or the use of italics. The majority of markers in Australia are sessional academics, with limited time-frames to mark an assessment and poor incentives to mark as consistent and collaborative team members within courses.

    When teaching and marking on Indigenous courses, I've had to embed the cultural/social/historical capital within the rubric as a guiding expectation and to address cultural clash. The social and cultural capital brought into a classroom is rarely factored into any rubric in mainstream tertiary education. Those students who have already attained the necessary cultural capital (Standard Australian English) are set up for success without any effort, while those without this cultural capital are set up to fail for not being able to 'read' and 'interpret' the hidden curriculum.

    A straightforward essay will meet the criterion much more easily than a broad spectrum of choice and original opinion supported with theory. What one marker deems a satisfactory critical answer on a topic may be far less critical to another. For example, when I set out a criterion for Aboriginal issues, a non-Aboriginal student may produce a critical analysis according to theory, while the Aboriginal student may provide a seriously critical analysis at a much deeper level, grounded in theory but intertwined with a set of cultural and historical capital that cannot be captured in a rubric. The latter will always receive fewer marks, even though they have a higher level of understanding and knowledge than the one who has met the criterion from a basic level of cultural capital, because that is all the rubric caters to. A non-Indigenous marker with limited cultural/social/historical capital may mark the latter very differently to an Indigenous marker with substantial cultural capital.

    I have set up my marking rubrics from a Bloom's Taxonomy framework perspective, with each section (fail, pass, credit, distinction, high distinction) treated as a spectrum rather than a binary of fail/achieved. My students often show me a range of skills and critical thinking that rubrics are unable to capture (and validate), other than straightforward academic literacy and referencing.

    Today's marking rubrics are not keeping up with the innovative and open-ended problem solving we require from our graduates, and are actually excluding diverse solutions and critical analysis from a different standpoint. Considering that the majority of tertiary educators lack diverse backgrounds, inequitable practices will continue regardless of any attempt at equality through standardised courses and rubrics that dictate who fits within a categorical achievement scale.

    Karen Wallace

    1. PeterT

      Hi Karen, and thank you for raising so many thought-provoking issues.

      I'm going to try to respond - somewhat tentatively - to related issues which I'm going to number in case other folk want to jump in and join the discussion of some points.

      1. Do you need an individual rubric for each assignment?
      Hmm. I'm really not sure about this one. What you say sounds right. However, for all the assignments in the undergraduate courses I have worked on, the same generic issues are assessed, albeit in response to different foci and tasks. Bearing in mind that my focus is on text-based assignments, they all include a focus on Academic writing and on Academic understanding. Isn't it clearer for students, therefore, to have a consistent set of criteria that address those issues? There are a couple of ways in which I think a generic rubric can be aligned with different assessment tasks:

      • Have a criterion that specifically focusses on the assessment task - The brief - and which has more weight (e.g. by being a multiplier)
      • Provide examples that illustrate the 'grade' descriptors and are specific to the particular assessment - this is critical - you need to use the rubric in your teaching (assessment as learning), not just have it as something that the markers use (assessment of learning). So I would expect there to be teaching activities that involved unpacking the rubric in relation to each assignment (this links with your point about cultural capital (see 3 below) and about the definition of fuzzy terms such as 'minor' and 'major' errors, I think (see 2 below)) - and of course I have included an element of students self-assessing and getting marks for doing so in a way that aligns with the tutor's assessment (does this introduce some bias, as we know that females tend to be more self-critical than males so may tend to mark themselves more harshly?).

      2. Leaps and fuzzy descriptors
      Your comment about 'leaps' within the descriptors of a criterion is an important one - and perhaps I need to change my criteria for rubrics so that they reflect that there should be a smooth progression between grade descriptors.
      I think it is inevitable (and probably desirable) that some fuzziness exists in grade descriptors for academic assignments - and some scope for academic judgement. Here too I think the best way to address this issue is in how you use the rubric in your teaching - perhaps by getting students to come up with examples that illustrate each grade descriptor for that particular assignment. In essence this is the same moderation issue as you have if you have multiple markers - so the solution is the same - comparison/discussion to establish a shared understanding of how to apply the grade descriptors. There is, as you suggest, a genuine problem where markers do not engage in adequate moderation of assignments.

      3. Cultural capital
      This is a critical point - and one where I would agree that many assignments (and rubrics) are inequitable. One suggestion is that you should couch your rubrics (and the assignment brief itself) in terms that are familiar to the students. I wonder, though, how students develop academic cultural capital if you do that. So my suggested approach (which may well be flawed - so do push back) is to use the rubrics in your teaching - to explicitly engage in teaching activities that help students to understand the rubric. Doing this will necessarily involve making a bridge between students' existing understandings and experiences and unfamiliar academic terminology and ways of thinking. This is difficult - in order to do it we need to understand cultural differences and how to make bridges (without undervaluing other cultural perspectives). There is a danger here that we see this only as a process of the student 'taking on' the dominant academic culture - but it needs to be a two (multi) way process, with the tutor/marker needing to understand the student perspective in order to fully grasp the depth of knowledge and understanding that is being displayed in the assignment. I definitely don't have an easy answer to this one ...

      4. Aligning rubrics with Bloom's Taxonomy
      I was trying to address this in the Academic understanding criteria - which at least partly align with Bloom's Taxonomy (Engagement - Understanding/Synthesis - Criticality). I'm thinking aloud here - but I don't think that these things are necessarily achieved sequentially (you can be critical even if you aren't great at synthesis, and vice versa). I also think we need to support students in seeing what progression looks like within each of these competencies. Again, intuitively what you are saying sounds right, and yet I don't think it works in practice???

      5. Rubrics don't work
      That is how I am interpreting your final paragraph - not least because I tend to agree! We are so concerned with standardisation - in an era when we are told that innovation and creativity (i.e. difference) are key. The real problem here is with the purpose of assignments - the very fact that we have to award a grade undermines their educational value (though if we can withhold the grade until after the students have engaged with the written feedback - and that feedback is focussed on how they might have done even better - then that helps to overcome that problem).

      I'm wondering to what extent my responses in any way really address the issues Karen has raised. I'd love to hear your views - reply below ...

  2. Jason Zagami

    Hi Peter,
    a difficulty I see with a generic rubric is that there is often an expectation that the learning outcomes of a course are explicitly stated, usually as dot points, and that assessment criteria relate specifically to these statements (and no other). Hence none of your criteria would be acceptable. A general principle that assessment should only be on what has been explicitly taught (as defined by a course's learning outcomes) limits assessing skills such as APA referencing and higher-order thinking skills, unless these are specifically framed as, or are part of, course learning outcomes. This is a problem, as these skills are undoubtedly useful outcomes, and so such generic skills are commonly collected as graduate outcomes with efforts made to ensure that they are addressed over a program of study; but to have every course focusing on these, as your generic rubric seems to suggest, would go too far the other way. A student, having mastered such generic skills, could effectively pass or do well in any course, irrespective of their attainment of the specific course learning outcomes. I prefer a course to have well-defined learning outcomes, with a considered process of assessing the degree to which students have developed these outcomes. Scaled criteria provide a guide to students and assessors of this process, but are only a guide; where there are not five clear differentiations they should be abandoned rather than forced to work - sometimes it may just be can or cannot. Where rubrics have problems is when they are artificially and incorrectly applied. They are a useful tool but not infallible, and in the end it comes down to the professional assessment of the assessor of the degree to which a student has achieved these learning outcomes.

    1. PeterT

      Thanks Jason this is really helpful in challenging my thinking.

      I was hoping that 'The brief' criterion dealt with the issue of the specific learning outcomes for the course - in that the brief should make it clear what students have to focus on for that assignment (which should be linked to the intended learning outcomes). So if your focus is on (shall we say) eSafety, then the brief should clearly focus on eSafety. Having said that, I think that the learning outcomes for some courses might benefit from being less fuzzy!

      By making the mark for 'The brief' a multiplier - you multiply it by the sum of the marks on all the other criteria - you prevent someone doing well on an assignment unless they have fully addressed the brief (which should mean fully addressing the learning outcomes for that aspect of the course). This is critical - and I think it gets around the problem of someone who has mastered 'generic skills' doing well on all assignments.

      This approach does mean that 'The brief' - the requirements for the focus of the assignment - does need to be really carefully designed.

      When I was developing this rubric I did have a specific course in mind - an introductory course which does explicitly focus on the things I have called 'Academic literacy'. I take the point that you shouldn't assess what you haven't taught. However, I guess I am working on the assumption that you should design your assessments first (linked to the intended learning outcomes) and then design the course to enable students to do well in the assignments. This kind of reflects my view that it doesn't matter what the curriculum says, people will do what is being assessed. So if the assessment focusses on 'Academic understanding' the students will focus on that too, and so should you. Again this does mean that the brief has to be really clearly articulated cos it carries so much weight.

      I also think you are right that you can't apply all of the criteria in every assessment. So, for example, on the first assignment on the course I am planning for it wouldn't be possible for students to demonstrate the higher levels of engagement with the literature (and some of the other criteria). My approach to that was that I would grey out the bits of the rubric that didn't apply for particular assignments - so that we can build up to the full rubric gradually, but students always have sight of what the full rubric will be so know what the intended end point for the course/programme is.

      I guess I suspect that when we are marking traditional text based assignments that we always take into account aspects of Academic Literacy and Academic Understanding (even if they are not explicitly stated in the rubric). I remember a workshop in which all the participants had to mark the same essay. Half the group failed it and half gave it a high grade. The 'essay' in question was written as a set of notes (so NOT an essay). The people who failed it did so because it failed on some assumed academic literacy criteria. The people who gave it a high grade did so because it did really well on academic understanding and The brief (they ignored any academic literacy component).

      Your last point - about the limitations of rubrics - I both totally buy (I'm not a fan of standardised assessments!) and have a problem with. If we are making an academic judgement we are using some criteria - it seems inappropriate to be using criteria that we have not made explicit to the students, cos that is like telling them to jump through a hoop without showing them where the hoop is. Of course even my rubric leaves scope for judgement (e.g. what does minor or major mean?). My assumption here is that I will discuss the criteria with the students and we will co-construct shared understandings of what each of the descriptions means - so that we make similar academic judgements.

      Do those responses adequately address your concerns?

      I am kind of torn between having unique rubrics for each assignment (which would overcome all of your concerns) and my approach (which I think helps avoid unnecessary complexity for students - it gives them a more coherent and consistent view of how to succeed). Hmm ...

      1. Jason Zagami

        Hi Peter,

        I understand your desire to have a consistent approach, but unless courses are teaching the same thing in the same way, I suspect there will always be significant variation. The criteria you have detailed as generic are certainly fine for your course, where they address your course outcomes, and likely several education courses with similar outcomes, but how would they apply to a course on quantum physics or classical dance? (extreme examples used to highlight the point)

        I find it easier to be upfront with students: there will always be marking bias, and the criteria and rubrics are provided to give some insight into what I consider important in measuring their learning and what they should focus on demonstrating. This guides the focus of their study towards the course learning outcomes, and somewhat supports the notion of assessment for learning, but acknowledges the humans in the process. While I try very hard to be unbiased in assessing, more than most, I still have some topics and higher-order thinking skills that influence me more than others. To take your example of academic literacy: what would gain students high marks from some of my peers - a critical discourse analysis approach - would fall flat with me, while an analysis of the systems involved would resonate favourably. Likewise, I have a high threshold for grammatical and citation errors, while others penalise the lack of an Oxford comma.

        These many influences cannot be practically encapsulated in criteria/rubrics, and even explicit content-related criteria will be subject to interpretation and the understanding level of markers. Automated marking systems have highlighted the challenges of systematised assessment: it can work if everything is tightly defined and we spend a great portion of learning time on understanding the terminology, definitions, and standardised processes involved, so that students and assessors have the same perspective, but that in itself is problematic in higher education. I don't really want all my students thinking and addressing tasks as I do. Sure, they need to know enough of how I think to do well when I assess them, and will have to do it all again with the next course lecturer, but that is a strength of higher education, not a weakness. The focus of assessment for myself, though, is not on measuring student achievement but on how it supports their learning - and I fully recognise that others have very different views on the purpose of assessment.

        As to students co-constructing assessment, I spent many years exploring this with students in schools and unfortunately have come to the conclusion that it is unproductive. Unless the course is on assessment, the idea that students could gain sufficient understanding of the course outcomes and assessment techniques to authentically co-develop assessment anywhere near as effectively as someone who has spent decades doing so and has a detailed understanding of course content is rather fanciful. Indeed, few enough academics are effective at it. The key element, though, is authenticity - being able to override the decisions of the teacher/lecturer. Otherwise, it is just a pretence. I now prefer to provide students with choices of assessment approaches, each carefully crafted, but I no longer believe it appropriate to involve students in the design process. In tertiary settings, this is generally impractical in any event, with assessment tasks and criteria needing to be defined and published before courses commence.

        Thanks for the opportunity to discuss such things and I make no judgement on how others choose to do things. There are many paths, though I guess that itself is a judgement 🙂

        1. PeterT

          You raise some really important points Jason - for me particularly around avoiding too much standardisation which undermines creativity (my paraphrase).
          I totally agree that the main focus of marking of assignments should be to provide formative feedback - to that end my plan is that written feedback will be released to students well before the grade - wonder how well that will be received!

          I recall doing a small-scale research project some 25+ years ago which set out to explore what impact (if any) the mode of production of assignments had on the grades awarded - comparing blue/black biro/fountain pen, dot matrix vs daisywheel printer - and what emerged was that the biggest issue was the wide divergence of grades between markers (irrespective of the mode of production).

          I'm not intending to co-develop the assignments with students - but am planning to engage with them in developing a shared understanding of what the rubric descriptors mean in practice. Though one of the things I learnt whilst working at the OU was that the people who learnt the most were the people who developed the courses (i.e. the staff). At one point I planned to develop a course that involved the students actually developing the materials (within a framework provided by me). It never saw the light of day - though I think the shell is still on one of my websites, so I might try to revive it! :O)

          Thanks again Jason - lots to think about.

  3. Darrall Thompson

    Hi Peter,
    Thanks for prompting the great discussion and putting your generic rubric ‘out there’… even though I know from our Zoom engagements you are no fan of rubrics (hmmm …. the origin of red pen marking).

    I have two big problems with your approach (… actually with most dominant measurement-based education systems on planet earth):

    1. It focuses students on ‘getting marks’ thus encouraging surface and strategic approaches that tick the boxes of the high-mark-allocated columns in the rubric… your suggested use of the rubric in class and student self-assessment against these very specific descriptors may also lead to a mechanistic engagement with the topic if poorly handled. (see

    2. It foregrounds the fulfilment of a ‘brief’… thus again encouraging extrinsic motivations to comply with the usual dominant institutional and parental requirement to pass a subject and get credit points, a degree and a job… rather than ‘the brief’ being a deep engagement with inspiring, challenging, interesting, engaging, exciting, important, innovative, valuable, creative, collaborative, research-based, well-designed, thought-provoking learning activities from which the written outcome glows with passion and the love and joy of learning and expression… …. Yes your rubric would give me 0 for flowery language !... (unrealistic philosophical note: … maybe the writing could be in Worimi language or Mandarin… are we English just purveying another colonial conformity hurdle… decimating diversity ?).

    Anyway Peter, I’m backing up these two big problems with your approach with a journal article and a story:

    * Assessment needs to focus students away from marks and aim to develop and nurture all five elements of the CAPRI (CAPability Reflections & Inspirations) framework discussed in this anti-marks journal article:
    Thompson, DG 2016, 'Marks should not be the focus of assessment— but how can change be achieved?', Journal of Learning Analytics, vol. 3, no. 2, pp. 193-212
    A pdf is downloadable free from the Journal of Learning Analytics Vol 3 No 2 (2016) ‘Multimodal and 21st century skills learning analytics and datasets.’ at:
    https://learning-analytics.info/journals/index.php/JLA/article/view/4888

    ** Last week I completed work on a UTS Social Impact Grant with a primary school that is 60% indigenous. Whilst the project-based learning activities developed did have a written component in English it combined local language and stories with elders, engagement with literature, creative making, digital animation and field trips. Here is a comment from a parent at the end of this activity:

    “For *** it was the story of the ***, the excursion to the *** and working on descriptive language that completely opened up his love of literacy and story. From that exact time, he started reading more and began loving to write... he was so fascinated by the opportunities of story writing and the beauty of adjectives that he started writing something in a little notebook whenever he could, it’s ignited something for him.”
    Kindergarten parent ****.

    Marking with capabilities (not marks) as the focus - capabilities that span subject boundaries - can only be achieved with marking software that gives visual, progressive feedback over time across a well-researched framework that is inclusive of the range of attributes young people will need as we head into an uncertain future. (https://howardgardner.com/five-minds-for-the-future/)

    1. PeterT

      Hi Darrall and thanks for sharing those thoughts (backed up with examples/a reference).

      Summative assessment is by definition about extrinsic rewards.
      On the first course I worked on at the Open University (OU) we told students that if they really bought the argument we were making they wouldn't submit the final assignment. We offered the alternative - which all the students took - of critiquing the course as the final assignment.

      I like to think that you could design a brief that allowed students to follow their passion and be creative - but agree that this is seldom the case where there is a pre-defined curriculum (except perhaps in the creative arts?).

      Re English - that reflected the context of the university I currently work in. When working in the East End of London with children who (initially) didn't speak any English it was clear that allowing them to communicate in their home language was the most effective way to support their learning (even after they were able to communicate in English). This links back to the point about validity - does the assessment assess what it claims to assess (e.g. a science assessment that requires you to respond in English may be assessing your English competence rather than your science competence).

      I think that your CAPRI/Review approach is a step in the right direction. However, you are still setting up pre-defined criteria that work is marked against. Ideally I think that the learners ought to be deciding upon the success criteria (at the outset of their work) - as in the PoL model (search for PoL in the blog).

      As always we are having to mediate our views on what we should do with the pragmatics of what it is possible (at present) to do within the current systemic constraints.

