Exploring the impacts of AI on Knowledge, Universities & Labs
AI is disrupting the university monopoly. We can't "hire our way" out of this transition, and trying to control emergence is a losing battle. Are sandboxes our best option to prototype a future University operating model?
We've all seen it. The rise of Generative AI. It began with the images and the chatbots. You remember - those funny images of people with multiple fingers, and the chatbot hallucinations that shocked us.
Fast forward a year or two, though, and we're seeing not just dodgy images and endless slop. We're seeing extremely rapid advancement, with AI increasingly crammed into our whole information system, everyday digital software, and physical products. It has reached a level of sophistication able to fool people regularly - deepfakes are everywhere. The ability to trust videos, websites and news - even real-time phone calls (thanks, voice synth) - is decreasing.
So what does this mean for our society? Our workplaces? Our initiatives? Do we resist AI? Do we learn about it? Embrace it? Critique and influence it?
All of this had been buzzing around in my mind lately when an article from The Conversation popped up in my feed.

It felt like an interesting provocation. The idea that we're seeing the fundamental disruption of one core framing of Universities (the cradle of knowledge development and knowledge transfer), and everything that surrounds that, is fairly compelling - as is its likely impact on the institution's fundamental operating model.
As I've mentioned, I'm currently employed in University Land, and I'm seeing this unfold in real time - students, academics, professional staff and executive leaders all grappling with what this means as AI accelerates.
Responses currently seem to fall into a few camps:
- control - attempting to slow down or throttle usage through regulation or 'detection' tools
- drift - people utilising AI in the most basic ways, such as enhanced search and information retrieval, or rewriting documents in a more professional tone
- pioneer - people embracing or evangelising AI with little consideration of implications, ethics or safety
- deep research - the people who really know their stuff, often operating exploratory research projects to inform the future of AI, rather than its application to institutions or people's lives

Responding
We can wait for institutional responses - training programs developed and rolled out - or bury our heads in the sand...
I guess I've always favoured the middle path. So at the turn of the year, I spotted a program called Superesque, which set out to coach advanced AI application through hands-on learning with a cohort.
I figured that for a couple of months' investment in professional development, it was worthwhile to better understand what's happening - and from that, its implications for me, for workforce transitions in the University, and for the broader societal innovation space. As Gareth from our cohort wrote about the "hiring talent fallacy":
[Finance asks] "Why not replace the legacy managers with 'AI-native' talent who get it?"
It's a seductive idea. It's also a strategic death sentence. Here is why you cannot hire your way out of this problem..."
Read his article here if you're interested in his response.
This post is not a "this is what I learnt" - I may or may not do one of those in the future. This is a reflection on implications and trajectory ahead.
As Gareth indicates - we can't "hire our way" out of this. We can't wait for a new workforce to arrive; we have to evolve the one we have - avoiding "the frozen middle" trap of organisational change.
So, if we have to learn our way through it, where do we do that? We need spaces to experiment. We need connected, collective learning and sensemaking.

A Brief Field Note from a Sandbox
As well as Superesque, last year I started applying some slightly more advanced uses of AI in an initiative with a forward-thinking partner.
We're building an evidence-based strategy to inform an initiative focused on systems change and activating a network of people around the world, rooted in relational convening. They couldn't wait around to build the evidence base and a deep contextual analysis of multiple sources - so instead we built a process that works a bit like a Human-AI sandwich:
- Human: Curation and selection (Quality Control).
- AI: Synthesis and pattern matching (The Engine).
- Human: Crafting the narrative/strategy (The Context).
This wasn't about replacing anyone's role; it was about augmenting research capacity, and translating and mobilising existing knowledge. Through this we built access to what academic research is known for - rigour and peer-reviewed evidence - augmented with near real-time pattern spotting and translation capacity that wouldn't otherwise have been possible within the budget.
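For the technically curious, here's a minimal sketch of what that three-layer flow could look like in code. It's purely illustrative, not the actual tooling behind the project: every name here (`Source`, `curate_sources`, `run_llm`) is hypothetical, and `run_llm` is simply a stand-in for whichever model or API you'd plug in.

```python
from dataclasses import dataclass


@dataclass
class Source:
    """A single piece of evidence a human has reviewed (hypothetical structure)."""
    title: str
    citation: str
    excerpt: str   # the human-selected passage, not the whole document
    include: bool  # human decision: does this meet the quality bar?


def run_llm(prompt: str) -> str:
    """Stand-in for whichever model or API is actually used - an assumption,
    not a real library call. Replace with your own integration in practice."""
    return "(model output would appear here)"


def curate_sources(candidates: list[Source]) -> list[Source]:
    """Layer 1 - Human (Quality Control): only sources a person has approved go forward."""
    return [s for s in candidates if s.include]


def synthesise(sources: list[Source], question: str) -> str:
    """Layer 2 - AI (The Engine): pattern-matching across the curated evidence only."""
    evidence = "\n\n".join(f"[{s.citation}] {s.excerpt}" for s in sources)
    prompt = (
        f"Question: {question}\n\n"
        "Using ONLY the excerpts below, identify recurring patterns and tensions, "
        "and cite the bracketed references for every claim.\n\n"
        f"{evidence}"
    )
    return run_llm(prompt)


def draft_strategy_notes(synthesis: str) -> str:
    """Layer 3 - Human (The Context): the synthesis is handed back as raw material
    for a person to verify and rework into the partner's own narrative."""
    return (
        "--- AI synthesis for human review (check every citation before use) ---\n"
        + synthesis
    )
```

The shape is the point: the model only ever sees what a human has already selected, and its output only reaches the partner after a human has verified it and reworked it into their language.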
As I reflect on this initial sandbox, I'm well aware of the allure of speed, and of bright, shiny tools. But it never felt like that - we weren't out there using the latest shiny tools to make it "an AI project". We were looking at how we could best support a partner to achieve their goals: to synthesise the evidence, maintain rigour, and introduce new ideas to their network in a way that didn't feel "jargon-y".
It didn't feel like we were handing power over to AI; it felt like a new collaborative mode.

The Fork in the Road: Three Futures
If we accept that AI isn't such an ethical black hole that we refuse to engage with it at all (which seems to be the position most organisations have reached at this stage), then we need to look at how to institutionalise it wisely. To that end, I see three paths ahead.
Scenario 1: Control
Organisational attitudes here could quite easily move between inertia (wait and see), consultant-based advice (experts paid to guess), regulation (locking down tools as a risk) and/or trying to policy-write their way into safe and ethical use. The challenge is that analytical approaches like these only really work when the landscape of change is relatively stable or the pace of change is slow. Neither is currently true.
The risk: The "frozen middle" freezes harder. We become irrelevant because we are too slow.
The opportunity: Limit risk. Make better mistakes.
Scenario 2: The Drift
The "middle ground" scenario - staff using tools individually primarily for perceived efficiency or ease. This largely means we get faster at BAU. We treat AI as a tool (search on steroids, drafting documents), not a transformation.
The risk: we miss the chance to shift value creation. BAU is already declining. Other organisations move smarter and faster, and the University sector is hollowed out by a thousand cuts.
The opportunity: AI augmented workflows may be more efficient or effective.
Scenario 3: The Sandbox / Networked Sandboxes
Develop institutional Sandboxes (such as Living Labs) where three main things can happen:
- Existing staff and students can build deep capability and experience with these new technologies in safe-to-fail environments, working on real-world challenges
- Institutional change (policies, programs, new value creation, new services etc.) can be informed from within the organisational context
- New relationships and partnerships can form, demonstrating real-world application and building understanding in community and industry through experiences that explore opportunities and risks equally
An advanced version would be a coordinated portfolio of multiple sandboxes looking at different angles (e.g. one on institutional systems for student enrolment, one on implications for research processes, one on knowledge translation and new educational courses), connected through a learning and collaboration layer (an enabling relational infrastructure).
The risk: investing significant resources into something still emergent can be wasteful if it's not well architected, and ensuring effective coordination and prioritisation may be difficult.
The opportunity: new forms of value creation won't invent themselves, and institutional transformation will rarely come from analysis of a moving target or from policy making. Seeding a portfolio of Trojan Mice might be a better path.
Of course, there are more permutations than the scenarios I've presented here. Multiple futures are always possible, and they aren't equally distributed. I offer these as the possible futures I see from where we stand today.

Conclusion
If we want Scenario 3, we can't just wish for it or expect it to be created. It's not the norm.
A designated sandbox (like a University Living Lab) could test protocols, ethics, and workflows. It could create new services. Seed new and meaningful collaborative partnerships. And more.
What's clear is that larger institutions have greater resources to engage with this transformation, but also more vested interest and institutional inertia bound up in BAU. Arguably this puts smaller universities, or even private and community sector entities, at an advantage: they can move fast and nimbly whilst retaining tight ethical protocols and lighter governance.
A Sandbox isn't just about playing with tech; it's about building skills and experience, and prototyping aspects of the university's future operating model.
One likely implication of AI is the need to rapidly shift to an operating model based on different principles - human experiences, place-based integration, and potentially a considered adoption of AI-augmented activities that will require advanced capabilities.
[Edit: I realise that in the title I included Labs - I was intending to look more at social innovation labs, but I ended up looking at the role of Living Labs in supporting this transition. I might write a future piece on the potential of AI assistants and agents specifically for social innovation and living labs, as that's an area of active experimentation through Superesque.]
Time to keep exploring.

