Bloomin’ GenAI: Program-level negotiation of unit-level learning outcomes and assessment parameters

[Image: purple and pink flowers on a vine against a dark background.]

The first part of that title is as gratuitous as the second part is boring. Sorry about that.

I was chatting to colleagues yesterday about learning outcomes, Bloom’s taxonomy and GenAI. And before you say “I’m so bored of Bloom”… yes. I hear you. But it’s a way of thinking about learning outcomes that is accessible to a lot of educators and it helps me with my story, here.

As I said in that chat, I think that, in general, we want to separate unit-level learning outcomes from the parameters within which they are demonstrated. A unit on anatomy should have learning outcomes that relate to knowledge of anatomy. Even where we bring in something broad like communication, collaboration or information literacy as a unit-level learning outcome, it should be focused on how that broad idea relates to the discipline and subject.

More general capacities that are important but not subject-specific, like the capacity to use GenAI responsibly and effectively, should, I think, be brought in at a higher level (e.g. at the level of the program). Program-level outcomes, capacities and attributes then serve as guideposts for thinking about the parameters within which unit-level learning outcomes should be demonstrated, and about what kinds of unit-level learning activities might support the achievement of program-level aims. These unit-level assessment parameters would then specify, amongst other things, the extent to which the learning outcome needs to be demonstrated individually and/or collaboratively. When and where is it important for a student to demonstrate an outcome without support from technology, friends or family, and where is it acceptable, or indeed important, for them to demonstrate an outcome with or through support from, and collaboration with, these other entities?

The same is true for Bloom’s verbs: OK, we want the student to analyse or evaluate information, but must they do that independently of people, technology and resources, or can (or should) they incorporate some of these things into their analytic processes?

Crucially, collaboration (by which I mean working with people and things to do something) is not a simple construct. Consider the difference between asking ChatGPT to write an essay and handing that in, versus writing an essay, asking ChatGPT to critique it, modifying it accordingly and then handing it in. Consider the difference between asking a student to talk through a concept with an assessor who can offer prompts, or while they are able to refer to a document (e.g. in a PhD viva), versus forbidding them from referring to any resources. Going back to Bloom for a moment, when considering different uses of people and things in our thinking and operating in the world, where is the line between an activity counting and not counting as analysis, or evaluation, or even remembering? For a discussion of how using things to help you remember still counts as remembering, see my paper on Remembering in the Wild. Or check out my blog post on Expanding the unit of analysis of learning for arguments that all forms of learning and knowing are collective at a fundamental level.

This is where setting parameters, guidance, restrictions, security measures, and appropriate scaffolding and learning activities becomes very challenging. The temptation is to construct a framework of collaboration, as many have done (e.g. categorising different ways of using GenAI in assessment). But whatever categories you use will be insufficient for understanding what is happening with respect to learning and the demonstration of knowledge, because that always depends on the context and situation, and on the view of what constitutes legitimate learning and legitimate demonstrations of that learning. Separating out the unit learning outcomes so that they don’t specify epistemological legitimacy, and then dealing with legitimacy within the parameters and criteria for assessment (which should be linked to program-level outcomes and attributes), opens up space for conversations, prioritisation and design negotiation at program level about these kinds of value propositions and epistemological concerns.

I may have made this sound much simpler than it is. Clearly, there are huge challenges to this program-level negotiation of unit-level assessment parameters. But, for now, I want to articulate this point about separation of learning outcomes and assessment parameters and hear about what the HE community thinks. Then, it’s on to the practicalities of how to approach the negotiation.

References

Fawns, T. (2022). Remembering in the wild: Recontextualising and reconciling studies of media and memory. Memory, Mind & Media, 1, E11. https://doi.org/10.1017/mem.2022.5