Expanding the unit of analysis of learning


Like many educators, I have been involved in a lot of discussion about generative AI, teaching and assessment. Many conversations get bogged down in desires to ChatGPT-proof assessment, to detect GenAI in student work, or to have students acknowledge how they have used GenAI in their assignments. I have a lot to say on all of this (mostly, that we are having the wrong conversations), and I will try to work through much of it in other posts. For now, I want to focus on a fundamental issue that was already here before the launch of ChatGPT for public use in November 2022:

Students were already using things other than their brains to do their assessments.

[Image: a reflective globe showing an unclear night scene with blue and yellow lights. Photo by Joshua Fuller on Unsplash]

Students were getting significant help from technologies and other people, and taking credit for this collaborative work as their own, walking around as if their grade point averages were all of their own doing.

Sometimes, the unfairness of this rises to the surface, mostly in group work assignments, when a particular student is awarded a grade that does not seem to represent their effort, knowledge, ability, the quality of their work, or their “individual contribution”. The obvious unfairness of group work leads many to resent or avoid it, even though most professional work, and significant proportions of personal activity, involve working collectively.

Students were already using things other than their brains to do their assessments.

In a recent email conversation, on the back of a TEQSA Assessment Experts Forum, Simon Buckingham Shum argued that the challenges and opportunities of GenAI lead to a situation where we need to move from a focus on individual students to collective intelligence networks that include AI. Importantly, this means more than assessing the products created by students with AI; it also means appreciating the ways in which the thinking, knowledge and ability of students become expanded through increasingly sophisticated integrations (this is my interpretation of Simon’s words).

I like this very much, but I want to go further and say that it is not AI that has led us here but a fundamental aspect of how we learn and know things. We always, inevitably, recruit, make use of, and become entangled with things other than our brains (including our bodies, other people, material objects and environments, and technologies). I could happily talk about the cultural, political, economic, biological and environmental elements that shape the ways these interrelate, but, for now, let’s just go with the more tangible people and things.

In a 2018 paper for the Networked Learning Conference, Clara O’Shea and I wrote about the limitations of isolating students from the people and resources they interact with while learning, in order to measure performance under controlled conditions. In the abstract, we wrote the following.

“Most university graduates will need to be effective networked learners, using social and material resources to adapt to changing and complex workplace settings and, increasingly, digital networks. If we accept that assessment is an important driver of learning, then it follows that assessments in which students are able to make use of available resources and networks, may afford a more appropriate preparation for future employment, particularly in light of an increasing need to adapt to technological change.”

We were talking expansively, not just about digital technology but about a more generalised idea of learners always being scaffolded by environments that they, themselves, partially configure. Our view was that learning, work and performance are always collaborative, even in apparently individual assignments. (As a side note, I think an appreciation of this collaborative aspect would be necessary for anything labelled “authentic assessment”, but more on that in another post.)

This relates to another conversation, this time with Jason Lodge, about how (in my view) self-regulation is always co-regulation (see Allal, 2016), in the sense that we always use what is around us, including other people, to regulate ourselves. And we only have partial control over this process, which means that self-regulation is contingent on longer-term or historical networks and conditions, as well as on the constrained, relational agency we have in negotiation with the people and things around us. We can’t do whatever we want because we are tied to our families, peers, teachers, regulators, material and cultural environments, and so forth.

Our traditional, individualist focus on learning is not a good fit for this kind of thinking. To take all of this into consideration in designing and running assessment that can meaningfully engage with widely-available generative artificial intelligence technologies, we need theoretical lenses that can explain the complexity of human-technology relations. Two potentially valuable lenses are distributed cognition and sociomaterialism.

In distributed cognition, thinking is done, not inside a person’s brain, but in the combination of brain, body and world. I do not use my computer to help me think; the thinking is done by me-and-my-computer. I particularly like John Sutton’s (2010) “third wave” conception of distributed cognition, in which people recruit cognitive resources to complement what is already present in their extended cognitive systems. Sociomaterialism is a label for a wide range of theoretical approaches that see people and things as holistic assemblages, inseparably entangled in activity (Fenwick, 2015). We do not treat learners and technologies as independent of each other, because they do not exist independently of these entangled situations. Similarly, we are not concerned with general statements about students or technologies but with actual practices that unfold in actual situations.

Looking at actual practices through these lenses can support more nuanced understandings of the impact of technology. We can see past ideas of technological or human determinism, because agency is always relational (change is shaped by combinations of people and technologies in particular contexts) (Fawns, 2022). We can see past panic and complacency to more subtle concerns (a somewhat unpredictable combination of numerous, smaller benefits and disruptions is more likely than a small, predictable set of major ones). We can see that cognition is distributed in complex ways across multiple entities, rather than simply “offloaded” to a particular technology, so that simple descriptions or acknowledgements of which bits were done by a student and which by AI cannot capture anything but the most naïve practices. We can more easily see past linear causal effects (GenAI / Google / the printing press will make us stupid), binary ideas of things being good or bad, and blanket statements about groups of people (students will cheat, young people understand technology better than older people, etc.).

We need a different, more expansive unit of analysis for learning and knowledge than the individual student

Understanding individual student contributions to what is actually collaborative work (with AI, other students, other resources) was always philosophically problematic, in my view, and is fast becoming intractable in the face of widely-available generative artificial intelligence technologies. We need a different, more expansive unit of analysis for learning and knowledge than the individual student, while not losing sight of all of the important things we know about individual learning. And for that, we need a theoretical lens like those I mention above.

But it’s one thing for researchers to take up more complex perspectives. The bigger challenge is how to use such lenses to influence policy and practice, particularly when they are in tension with entrenched structures and practices (individual grades and rankings; fixed ideas of ability; pre-determined, individual learning outcomes; etc.). Concrete approaches and examples, informed by more complex perspectives, seem like an important ingredient of this. Another is finding out more about what students are actually doing when they study (without being intrusive or using surveillance, which suggests to me that we need to foreground trust and conversation). How do students work with people and things in the process of learning and working towards assessment? How do they develop their capacity to do so in ways that suit them and their situations? And how can we support that development?

Further reading and resources
