High-stakes summative assessment drives practice in schools (because these are the metrics against which schools and teachers are judged). Traditional exams, the predominant form of summative assessment, cannot capture evidence about many 'learning outcomes' that are seen as critical - such as collaboration, real problem solving, creativity, or persistence. AI, in the form of data mining, may offer a solution.
This first struck me when working on the Schome Park Programme, which used a wiki, forum and island in Teen Second Life™ to give hundreds of 13- to 65-year-olds a radically different experience of what education could be like.
Members of Schome Park engaged in a variety of activities. For example, they organised events, such as philosophy discussions, which they advertised in the wiki. Other members of the community could sign up to attend an event in the wiki, and then attend (in the form of their avatar) at the agreed location within Schome Park (the island in Teen Second Life™) at the appropriate time. A data trail existed in the wiki, forum and/or Teen Second Life™ of who advertised sessions, who signed up to attend sessions, who actually turned up for them, who stayed for the whole session, and often what was said (via text chat) within the session. AI could mine that data to draw conclusions about people's knowledge, skills and attributes. For example:
- Person X ran a series of sessions on topic Y; the number of people attending each session increased over time [suggesting that Person X was good at running sessions on topic Y]
- Person A frequently signed up to attend sessions, but often didn't turn up for them [suggesting Person A may not have been good at forward planning]
- Person B always turned up for the sessions that she had signed up for - and stayed for the whole of each session [suggesting that Person B was good at forward planning and was interested in the sessions' content]
- Person C created lots of artefacts, such as T-shirts. Lots of people not only bought these T-shirts, but often wore them [suggesting that Person C was good at creating T-shirts]
- Person D created lots of artefacts, such as T-shirts. Lots of people bought these T-shirts, but hardly ever wore them [suggesting Person D was better at marketing than at T-shirt design]
- Person E was often asked by other people to help them debug their code [suggesting that Person E was good at coding]
- Person F often joined in discussions in the forum. Her contributions generally built constructively on what other people had said, and she often helped to defuse arguments [suggesting that Person F had good communication and moderation skills]
These examples are somewhat simplistic, but illustrate the sorts of ways in which inferences could be drawn from the data trail.
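To make the idea concrete, here is a minimal sketch of how one such inference might be automated. The record format, names and threshold are all invented for illustration - the Schome Park data actually lived across the wiki, forum and Teen Second Life™ logs and would need considerable cleaning first.

```python
from collections import defaultdict

# Hypothetical event records: (person, session_id, signed_up, turned_up, stayed_to_end).
# All names and values are illustrative, not real Schome Park data.
records = [
    ("Person A", "philosophy-1", True, False, False),
    ("Person A", "philosophy-2", True, True, False),
    ("Person A", "ethics-1", True, False, False),
    ("Person B", "philosophy-1", True, True, True),
    ("Person B", "ethics-1", True, True, True),
]

def follow_through(records):
    """Rate at which each person attends the sessions they signed up for -
    a crude proxy for the 'forward planning' inference in the examples above."""
    signed = defaultdict(int)
    attended = defaultdict(int)
    for person, _session, signed_up, turned_up, _stayed in records:
        if signed_up:
            signed[person] += 1
            if turned_up:
                attended[person] += 1
    return {p: attended[p] / signed[p] for p in signed}

rates = follow_through(records)
# Person A attended 1 of 3 sessions signed up for; Person B attended 2 of 2.
```

Even this toy version shows where the ethical questions bite: the inference from a low follow-through rate to 'poor forward planning' is an interpretive leap that the code silently encodes.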
Whilst this was not happening in the Schome Park Programme, it would be relatively easy for the developer of an immersive world (such as Minecraft) to capture data which might reveal information about such things as: perseverance (how long did you carry on trying to get across the ravine in the training world?); creativity (how did you solve a range of problems that you encountered in Minecraft?); communication and collaboration (through analysis of how you engage with other people in-world); and so forth.
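A perseverance signal of the kind described could be derived from an in-world event log. The sketch below assumes a simple log of attempt outcomes; the event names and log format are invented, not part of any real Minecraft API.

```python
# Hypothetical event log from an immersive world: (player, challenge, outcome).
# "ravine" echoes the ravine-crossing example above; all entries are made up.
log = [
    ("alex", "ravine", "fail"),
    ("alex", "ravine", "fail"),
    ("alex", "ravine", "fail"),
    ("alex", "ravine", "success"),
    ("sam", "ravine", "fail"),
    ("sam", "ravine", "quit"),
]

def attempts_before_success(log, player, challenge):
    """Count how many attempts a player made at a challenge, and whether
    they eventually succeeded - a crude perseverance measure."""
    attempts = 0
    for p, c, outcome in log:
        if p == player and c == challenge:
            attempts += 1
            if outcome == "success":
                return attempts, True
    return attempts, False

# alex kept trying until succeeding; sam stopped after two attempts.
```

As with the session example, the raw counts are easy to compute; deciding that a high attempt count means 'perseverance' rather than, say, frustration or a badly designed challenge is the contestable part.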
Using AI to assess learning, based on trails left behind when working digitally (think of Google's range of online tools such as Gmail, Search, Maps, YouTube, and Docs), has massive potential. When measured against the characteristics of effective summative assessment, such an approach might perform very well; it could:
- measure things we think are important in a credible way
- be practical to implement, scalable and integral to the learning process rather than an additional activity that learners (or teachers) have to carry out
- be focussed on successes rather than failures
- provide just-in-time formative feedback
- be criterion referenced
- provide alternative routes to success for each learner
- provide concise reports of the outcomes of the assessments
However, the use of AI in this way raises significant ethical issues about use of personal data and the potential for a lack of transparency both about what data is being collected and used, and also about how conclusions are being drawn.
As educators we need to be addressing these issues, because AI is coming to assessment, and some of the big players may be more interested in profit than ethics.