AI is powerful, but why is transformation so hard? With University of Exeter
Beyond hype: how AI collides with real organisations, messy systems and difficult decisions
Shaken Not Burned
How the world really works – so you can decide what to do next
AI is becoming one of those topics where the scale of the claims can make it surprisingly difficult to work out what’s going on.
We are told it will transform business, unlock extraordinary productivity gains, reshape jobs, and even help solve major global challenges like climate change. At the same time, there are growing concerns about energy demand, governance failures, bias, job losses, and the sheer speed at which these systems are developing.
The dominant narrative tends to swing between utopian optimism and existential fear, often without spending enough time on a more practical question: what actually happens when AI is introduced into real organisations, real systems, and real decision-making? This is the focus of this week’s episode, which is the first in our AI series.
Rather than debating whether AI is inherently good or bad, Felicia talks to Professor Saeema Ahmed-Kristensen, director of DIGIT Lab, led by the University of Exeter, about something much more grounded: why so many digital and AI transformation efforts struggle in practice and what that reveals about the limits of technology alone.
She makes a distinction between problems that are well-defined and those that are not. AI is particularly powerful when objectives are clear, data is available, and success can be measured relatively easily and, in those contexts, it can offer extraordinary value. But many of the most pressing challenges organisations face are different.
Sustainability, climate strategy, major organisational change, and social systems are messy, politically embedded, and filled with trade-offs. They are often what researchers describe as wicked problems: issues where there is no single right answer, where choices create consequences elsewhere, and where uncertainty is part of the challenge itself.
That distinction matters because it shifts the conversation. It suggests that AI may be extremely useful in supporting parts of decision-making, but it does not remove the need for human judgment. In fact, in many cases, it may make governance, accountability, and strategic clarity even more important.
A recurring issue is that many organisations are adopting AI not because they fully understand where it creates value, but because they fear being left behind. That pattern should feel familiar, because we’ve seen it play out before in telecoms, digital transformation and sustainability: implementation pressure can outpace strategic clarity in periods of rapid change.
And we know that creates real risk. Leadership, organisational readiness, governance and skills all become critical. AI can accelerate processes, but if organisations do not understand where it fits, what decisions should remain human-led, or how long-term capability is maintained, then speed may simply accelerate poor decision-making.
This raises a deeper question that often gets overlooked: if AI increasingly takes over routine and early-stage tasks, what happens to the development of human expertise, institutional memory, and judgment? Greater efficiency may come at the cost of some of the very capabilities organisations need to remain resilient.
For those focused on sustainability, this is particularly important. AI may improve modelling, optimisation, and analysis, but sustainability challenges are not purely technical problems. They involve governance, trade-offs, politics, and accountability. AI may support decision-making, but it cannot determine acceptable trade-offs or replace human responsibility.
AI is powerful, but power is not wisdom. Better tools do not automatically create better outcomes. What they do is make it even more important to understand what kind of organisations, systems and governance structures are capable of using them responsibly. As this new Shaken Not Burned AI arc begins, that feels like the right place to start.
Further reading:
AI adoption is no longer the challenge. Execution is – Lenovo/TechRadar
The GenAI Divide: State of AI in Business 2025 – Project Nanda, MIT Media Lab
Dilemmas in a General Theory of Planning – Rittel & Webber
What is a wicked problem? – Stony Brook University, NY
Organisational institutionalisation of responsible innovation – Owen et al.
The Sciences of the Artificial – Herbert Simon
Generative AI changed everything. Fully synthetic audiences didn’t – Research Live
Prediction Machines: The Simple Economics of Artificial Intelligence – Agrawal et al.
If you enjoyed this episode, subscribe to our newsletter and follow us on LinkedIn, TikTok and Instagram – and why not spread the word with your friends and colleagues?