HE as a collegial, collaborative sector: thoughts from 2 AI events


Last week, I went to Sydney, where I was part of two events about AI in education, both exemplary for the constructive and collegial attitude of the delegates. One event – the Learning and Teaching Leaders roundtable on AI and assessment reform – was for leaders to work through strategic approaches in response to AI. The other – the 2024 AI in Higher Education Symposium – was for educators to share their AI-related teaching and assessment practices. The combination of these events, one with a top-down focus and the other with a bottom-up focus, offered insights into the range of work that is needed, both to assure our institutional and sector credibility, and to help staff and students keep finding ways of making education meaningful to their present and future contexts.


The first event was a Learning and Teaching leaders roundtable, run by Danny Liu (USyd), me (Monash), Michael Cowling (CQU), Trish McCluskey (Deakin), Jason Lodge (UQ) and Helen Neil (TEQSA). Something like 150 leaders from 40 HE institutions workshopped ideas (a mix of principles and practical action points for strategic plans) to inform their RFIs, which are due to TEQSA in June 2024. These RFIs should take into account the TEQSA Assessment Reform Guiding Principles that some of us were involved in drafting.

It was clear that most of what institutions are proposing will require considerable resourcing. It costs money, staff time and energy, for example, to pilot programmatic approaches to curriculum and assessment, to design and implement staff development, and to promote a cultural and, perhaps, even philosophical shift in how staff, students, employers and the wider public understand HE learning and qualifications.

There is no spare money in the system, which means that existing money needs to be reprioritised. This, in turn, points to a need to know the costs of different aspects of running an institution, including assessment, administration, research, technology, and so on. Assessment, in particular, might be a place to save money while actually improving the quality of education. I am picking on the creeping tendency, over recent years, towards overassessment: an excessive number of summative assessment points.

Here, in my opinion, is one of the many places in which our response to AI really involves doing things that we should have been doing anyway. There is a lot of angst about whether our qualifications mean what we say they mean, given the possibility that AI is doing the work and the thinking on behalf of students. And yet it was already worth questioning the extent to which our assessments were meaningful reflections of student knowledge, given the bad maths we like to do in making up grades. As Sadler noted, it does not make mathematical sense to add up marks from two assessments that test different kinds of knowledge. Medical education assessment gurus (e.g. van der Vleuten et al., 2012) have also pointed out that the aggregation of assessments requires expert judgement, not just simple quantitative attempts at the objectification of quality. Can we save money on designing, marking, moderating and monitoring a bunch of 5% assessments as part of the move towards more holistic, thoughtful, programmatic approaches?
One of Sadler’s key points was that we don’t want to give, or take away, marks for things that don’t count as learning achievements (e.g. attendance, lateness, or performativity). This is more than just an argument for concentrating on more substantive and thoughtful assessment. It points us towards thinking about what constitutes a learning achievement these days, and how such achievement relates to the purposes and values of higher education. It is worth coming back to that point after mentioning a few things about the second event: the educator symposium.

In this second event, hosted by Danny Liu, Michael Cowling, Russell Butson (Otago University) and me, something like 2000 people signed up, 400 or so in the room and the rest online. I’m not sure how many showed up, but the appetite was much bigger than we had anticipated, and the responses and feedback suggested that a space for practical sharing was much needed. In particular, I want to highlight the modelling of vulnerability by the participants, who did not try to pretend that their AI teaching and assessment practices were perfect. Rather, they talked about what excited them, what they were unsure of, what they had messed up, and what they might try next time.

I see this kind of modelling of vulnerability by educators as a key move in the response to AI for a few reasons:

  • It’s more honest in a period in which uncertainty is the only certainty. It also helps students feel that teachers know what it’s like for them in these messy times.
  • It recognises and normalises vulnerability as an inevitable aspect of a move towards assessing process, and towards the honest acknowledgement, by students, teachers, researchers and professionals, of the use of AI. Crucially, it gives students permission to also be open about their messy, imperfect, trial-and-error approaches to using AI in their learning.
  • It is part of the give and take of working with students as collaborators in our educational approaches. Teachers acknowledging that they don’t know everything they need to know in order to effectively teach the students in front of them is the crux of partnership approaches.

Here, in open and collegial discussions about practice, we were able to have constructive conversations about what counts as learning and achievement. As we might expect, it looks different to different people, and there is a lot of work to do to move people out of their philosophical or epistemological trenches so that we can ask some hard questions about learning, knowing, doing and relating. The assurance of our qualifications, for example, probably won’t look like it used to, just with extra measures bolted on to contain AI. Rather, the notion of what it means to have studied for a degree may need to be rethought, and that suggests to me a broad and multifaceted range of work at many different levels of each institution and beyond. This is one of the reasons that I think we need to improve, quickly, at cross-institutional collaboration.

The open attitude towards sharing unfinished ideas at both of these events was refreshing. Colleagues from different institutions were able to put aside competitive instincts, talk openly about challenges and proposed ways forward, and ask probing questions or offer advice to others. This is what we need in Higher Education – not to work in competitive silos, hoping to outdo each other in adopting or containing AI, but to collaborate and break down unnecessary borders in order to strengthen our real and perceived coherence and value as a sector. The weakening, or failure, of any HE institution weakens us all at a time when the relevance of HE is questioned in the media and public discourse, and when a growing range of alternative forms of education and self-learning is casting doubt on our expensive and slow-changing offerings. Conversely, the strengthening, or success, of any HE institution strengthens us all: public belief in the value of HE is bolstered, and we all gain examples from which to learn and stronger partners with whom to collaborate.
