
Introducing nano-credentials

We are familiar with micro-credentials - things like digital badges. One of the claimed advantages of micro-credentials is that they enable you to assess competences (knowledge, skills and dispositions) that cannot easily be assessed or captured using traditional metrics (e.g. exams, essays). Assessing competences (e.g. leadership, resilience) often involves looking at what people do - at their practice, at their ability to apply 'knowledge' in particular contexts. This creates a problem, which nano-credentials will help to overcome.

The problem is how to provide credible assessments of performance. Unless you can video record the performance, so that the assessment of it can be moderated, how do you know that an assessment by one person at one point in time is valid and reliable?

One person making one claim isn't very credible

Video recording and then playing back the video in order to assess performance creates its own problems, not least the time and effort involved. This makes it impractical.

Portfolios attempt to overcome this problem. However, they involve assessing artifacts that evidence practice, rather than the practice itself. They are also very labour-intensive to produce and time-consuming to assess. This makes them impractical.

Nano-credentials potentially solve the problem. A nano-credential is simply a claim that someone has seen you demonstrate that you have met a criterion (e.g. I just saw you demonstrate that you are resilient). With the aid of digital technology this sort of approach could be as simple as selecting the recipient's name and clicking a button to award them a claim - much more practical than videoing the practice or documenting other evidence.

Having multiple nano-credentials, particularly if they have been awarded by different people observing your practice on different occasions, provides a compelling evidence base. They indicate that lots of different people have seen you meet the criterion on lots of different occasions - so we can be pretty confident that you can do whatever it is you have been assessed against.

Multiple people on multiple occasions seeing you demonstrate competence is compelling

Once a threshold number of claims has been made, a digital badge could be awarded, e.g. 10 nano-credentials awarded by at least three different people on at least five different occasions results in a micro-credential for that competence.

Nano-credentials lead to micro-credentials
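To make that rule concrete, here is a minimal sketch in Python of how a system might check whether someone's nano-credentials add up to a micro-credential. The Claim record and its field names are hypothetical illustrations rather than a proposed standard, and the thresholds are just the example figures above.

from dataclasses import dataclass
from datetime import date

# A hypothetical nano-credential: one person claiming they saw the
# recipient meet a criterion on a particular occasion.
@dataclass(frozen=True)
class Claim:
    awarder: str      # who made the claim
    recipient: str    # who demonstrated the competence
    criterion: str    # e.g. "resilience"
    occasion: date    # when the demonstration was observed

def earns_micro_credential(claims: list[Claim], criterion: str) -> bool:
    """Example threshold: at least 10 nano-credentials for the criterion,
    from at least 3 different awarders, on at least 5 different occasions."""
    relevant = [c for c in claims if c.criterion == criterion]
    awarders = {c.awarder for c in relevant}
    occasions = {c.occasion for c in relevant}
    return len(relevant) >= 10 and len(awarders) >= 3 and len(occasions) >= 5

However the thresholds are set, the point is that once each claim records who made it and when, turning a pile of nano-credentials into a micro-credential is trivial to automate.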

Thus, nano-credentials could feed into micro-credentials, which could feed into and enrich your profile, transcript, and CV.

Nano-credentials potentially have the power to transform not only how but also what we assess. Given that assessment drives practice in schools, nano-credentials could be the lever that enables schools to focus on the competences that young people actually need in order to live fulfilling lives and make a positive contribution to our universal well-being.

 

2 thoughts on “Introducing nano-credentials”

  1. Crispin Weston

    Hi Peter,

    I agree with you that providing evidence of a judgement is too time-consuming. It turns education into a laborious process of creating audit trails, and assessment into an even more laborious process of re-assessing multiple times what has already been assessed at least once.

    I also agree that reliability can be built by corroborating judgements made when "lots of different people have seen you meet the criterion on lots of different occasions". But I doubt that in itself, this will be enough.

    My answer to your question "how do you know that an assessment by one person at one point in time is valid and reliable?" is that you have to form an evidence-based view of the reliability of that person's judgement. You need to assess the assessor. And that is done by looking at the consistency of that assessor's judgements across a much wider sample than assessments of one student against one competency.

    You also need to have very clear definitions of the competency - badges awarded for "teamwork" or "grit" are too woolly, too liable to be interpreted differently by different people. This is the problem that criterion referencing ran into in the first National Curriculum (1988-1994). Definitions need to be expressed in terms of exemplar performances. That process will quickly reveal that some performances are at a higher standard than others (I favour the word "capability" over "competency", which implies a Boolean judgment: true/false), and that some performances address subtly different capabilities - so we need to develop complex models of the geography of capability that we want students to develop. With such mappings, we can corroborate assessments made against different but related capabilities, and judgements made by teachers against those made by formal tests and inferences made by computers in the course of data-driven digital activities. Once we start corroborating data, we want to suck in as much data as possible from as many different sources as possible.

    To me, the whole idea of badges was nice but superficial. It is not the badge itself that counts - it's the data analytics system that has to underpin it (but at the moment, almost certainly doesn't).

    Crispin.

    1. PeterT

      Hi Crispin. Worryingly I agree with most of what you have said! ;O)

      The point about assessing the assessors is a good one - and aligns with part of the argument made by Nguyen (2021) in a fascinating (and scary) article about transparency and surveillance (see https://doi.org/10.1111/phpr.12823).

      On the issue of clear definitions - effectively on having unambiguous rubrics - I am very torn. On the one hand it seems totally obvious that everyone needs to have the same understanding of what success looks like. On the other hand I feel that much of peer evaluation - which is the foundation of promotion in academia (and to a lesser extent in other fields) - is anything but standardised. On LinkedIn I have 70+ endorsements for 'educational technology' - the folk who gave me those have very different interpretations of what the endorsement means, yet I think that they still carry some weight. I'm not convinced that standardisation - in the sense of us all using the same measure of success - is achievable or even necessary (if you have enough people who are deemed to have expertise all saying that they've seen you achieve x). Ho hum - not sure how to square that circle.

      I do agree that most badges at present are superficial. I guess my hope is that nano-credentials will help to make them less so.

