Orbit Podcast
Orbit 54
Skin in the game: Professor Neil Lawrence on vulnerability, accountability and why the next generation will thrive.
From oil rigs to Amazon's machine learning division and now Cambridge's DeepMind chair, Professor Neil Lawrence brings a refreshingly grounded perspective to AI. Lawrence introduces his "atomic human" concept, arguing that it's not our capabilities but our vulnerabilities and limitations that make us irreplaceable. Drawing on experiences from building systems at Amazon to watching his coding assistant try to claim authorship, he illuminates why accountability requires skin in the game and why machines can never truly stand behind decisions the way humans must. His mechanical engineering background shines through in vivid analogies that make complex ideas tangible and even delightful.
The conversation builds to a genuinely uplifting conclusion about the next generation. Lawrence dismisses the disempowering AGI narrative sold by tech incumbents protecting their turf, arguing instead that today's young people see the world as it is and are excited to shape it. His insistence that "people aren't stupid"—from public dialogues to business customers consistently asking for improvements in healthcare and education—makes the case for staying connected to customers and trusting the next generation to steer technology toward what we care about most. It's a perfect note to end the year on: pragmatic, human-centred, and genuinely hopeful.
Episode Transcript
David Toms
Welcome to Orbit, the Hg podcast series where we talk to successful leaders of technology businesses and hear how they've built some of the most successful software companies in the world. Today's conversation is one I've been looking forward to for a while. My guest is Professor Neil Lawrence, the DeepMind Professor of Machine Learning at the University of Cambridge.
Professor Lawrence has dedicated his career to looking at implementations of AI in the real world. He's a former director of machine learning at Amazon and author of the book The Atomic Human, which explores what it is to be human and how that fits into an AI-led world. Is there something that can't be taken away?
What is it that makes us human? Is it our capabilities or our vulnerabilities? Welcome, Neil. You've had quite a fascinating career path, from oil rigs to university to Amazon and back to academia again.
Can you walk us through the journey and what drew you into the AI world in the first place?
Neil Lawrence
Oh, it's a good question. And I think just passion for technology and solutions. I started out as a mechanical engineer. I loved cars, but then I sort of found that, you know, maybe the era of Brunel was the right time to be a mechanical engineer.
Um, and although I had a passion for it, a lot of the questions had moved to different spaces. And I suppose while I was on an oil rig, actually, I read about neural networks and became really interested in them as a technology that could solve a set of problems where I just felt there was a gap in our existing set of solutions, and that's what triggered me to sort of return to university and do a PhD in machine learning.
David Toms
And you went on, obviously, later in your career to write this book, The Atomic Human, which explores the concept of being a human versus being a computer and how much we can slice away from being a human. What was it that led you to write that book?
Neil Lawrence
Yeah, I think it's a weird, um, journey in a particular sense, in that I started out my career as an engineer. I wouldn't even say as a scientist or a technologist, as an engineer, and slowly, because at the time people weren't very interested in deploying machine learning. It feels very odd, but, you know, this is 28 years ago. Uh, I sort of became more of a scientist trying to develop the technologies.
And I do remember, around about 2010, just around that year, I realized, oh, wow, this is really going to work. You know, whether it's the techniques I'm working on or those other people are working on. And now a lot of it becomes about how we deploy and how they integrate with people in a way that, you know, functions for society.
But there was a particular experience with care.data, where government was going to digitize. I think this was under the coalition government in the UK. They were going to digitize health records through GPs, and it was very clear that this was going to present an enormous opportunity for health.
And what happened, I think, is that it was so badly handled by people who I assumed sort of understood the possibilities, but also the problems that might occur when deploying these technologies, that there was this sudden realization: oh my goodness, people actually haven't thought deeply about this. And, uh, I guess at that point there wasn't a sort of intentional moment where I thought, well, I must fix this, you know, but I think I became more and more interested.
I spoke to a friend who, um, was a barrister, uh, at Doughty Street Chambers and had been involved in digital law. We started having conversations about digital rights, and I got more and more deeply into the philosophy of it, but also the socio-technical nature of deployment. And in the end, I mean, it was about ten years between first worrying about these things and finishing the book. But that's kind of what led to the book, and I think it was as much a journey about me trying to understand why I felt how I felt, and why I was very disturbed by what a minority of my colleagues were saying about intelligence and AI, these weird things that, up until about ten years ago, hadn't been much spoken about. So in some sense, the book is a journey for me that took me somewhere away from mechanical engineering and oil rigs. But mechanical engineering is still a primary love, right?
David Toms
You use the word deploy a lot in your previous answer. What is it that you're seeing in actual real deployments of models that is either concerning you or surprising you, positive or negative?
Neil Lawrence
I think, well, the major challenge we have with this wave of technology is the speed and scale with which we can deploy, which is both an exciting thing and something I talk about in the book. This transformation: when I first graduated, I spent a year at Microsoft Research, where I was keen to deploy technology into product, and it was explained to me that these products had a sort of two-year life cycle, where you had to get on the feature list two years before release, and we're talking Microsoft Office and whatever else.
And it seems strange now because, you know, probably less than ten years later, a colleague, Joaquin, who's now at OpenAI in recruiting, ironically, but he used to lead Facebook's machine learning team. I went to visit him and you could just see Facebook were just deploying. You know, they were in a position where pretty much any trusted engineer could edit code, I'm trying to remember the days, let's say on a Friday or a Monday, and that would deploy on a Wednesday. There was a big lever where they would redeploy the site every week. And that makes an enormous difference because it means, you know, at that time they were just coming up to a billion users.
So maybe the first primary thing is the blast radius, you know, the scale at which some of these companies could do things. That becomes a problem, even if that company is well intentioned, because there's no possible way they can understand all the downstream consequences.
David Toms
So with those very fast deployment cycles of code, I mean, clearly that's the whole software industry's move towards, you know, monthly, weekly, daily deployment. What do you think the impact of AI is on that? Does AI mean we just go to continuous?
Neil Lawrence
Well, so stepping back to the technologies that were being deployed then. A very big interest of mine then, and at Amazon, became that they were suspending the rules of software engineering to deploy machine learning components.
Now, what do I mean by that? There was certainly a wave of strong resistance to machine learning components in these software deployments, because machine learning components do this fundamental thing, which is they breach the traditional, uh, code and data separation. Security in software systems is fundamentally dependent on separating code and data, like early models of a computer.
And, you know, Turing was thinking about it. The amazing thing that a computer does is that the first Bletchley Park machines were sort of directly hardwired to complete a task. But then what programming is about is that you put the wiring in software, in memory, but then it becomes very important that that programming piece, the sort of virtual wiring of the machine, is separated from the data that you might want to put in.
So in the case of Bletchley Park, where they were breaking German encrypted messages, the data would be the transcript, the encrypted German code from Hitler's high command for the Lorenz cipher. And the decoding would be the wiring, how they were wiring up the machine; they were doing various counts on the thing to try and understand how to unpick the code.
And those two things, you know, for them, it didn't matter whether they were that separate or not. The whole thing was top secret, the whole thing was locked away, and people weren't going to manipulate it. But once you expose that machine to the real world, the data is now coming in from some sort of financial service or whatever, and the code is, you know, the code that is moving that data around or processing it in some way.
But the code can also delete things. So if the data can affect the code, then you get ransomware attacks, right? Now, the irony of machine learning is this has been known in software security for years, and machine learning fundamentally is all about code that is driven by data. I mean, that's a great way of summarising it.
What does machine learning do? Well, instead of programming the computer directly, you use data to teach the machine, and then you provide new data and the machine gives responses. So it's a very high-level breach of this code-data separation. And that was something that fascinated me: how do we deal with that and deploy it?
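A minimal sketch may make the contrast concrete. It is purely illustrative rather than anything discussed in the episode, and the names are invented: in classic software the logic is fixed by whoever wrote it, while in a learned component the behaviour is derived from the training data, so whoever shapes the data partly shapes the program.

```python
# Illustrative only: how a learned component breaches code/data separation.

def rule_based_filter(message: str) -> bool:
    """Classic software: the logic is fixed by the programmer.
    Data flowing through cannot change what this function does."""
    return "win a prize" in message.lower()

def train_keyword_filter(labelled_messages):
    """A toy 'learned' filter: its blocked-word list (i.e. its behaviour)
    is derived entirely from the training data it is shown."""
    spam_words = set()
    for text, is_spam in labelled_messages:
        if is_spam:
            spam_words.update(text.lower().split())
    return lambda message: any(w in spam_words for w in message.lower().split())

# Whoever supplies the training data now shapes the program's behaviour:
training = [("win a free prize now", True), ("meeting at noon", False)]
learned_filter = train_keyword_filter(training)
print(learned_filter("claim your prize"))  # True: 'prize' came from data, not code
```

The data has, in effect, written part of the program, which is exactly why the traditional security assumption of keeping the two separate no longer holds.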
And initially software engineering had sort of recognized this and said, whoa, you're not deploying that in our production systems; these need to be secure, we have standards, people. But then what happened is, well, there were examples of it being deployed, particularly in Facebook, where it was giving a big lift.
You know, 'People You May Know', I think, was the big advantage they got. They were deploying decision trees very early on, a bloke called Lars, whose surname I can't remember, uh, was giving them an enormous lift on 'here's some people you may know', which was how they were expanding the network quickly.
And they were seeing that what machine learning deployed performed much better. So at that point, you know, Facebook assimilated that very quickly. But by the time I'd gone to Amazon, which would be maybe 5 or 6 years later, even companies like Amazon had assimilated this idea that you could integrate machine learning and it could have a significant effect on your business. But they actually hadn't integrated any practices around that machine learning to understand how it might have a detrimental effect: you deploy the component today and it might be giving you an improvement, but unless you monitor that improvement, you know, in six months' time it might be having a negative effect. Now, at the time when I joined Amazon, we were doing a lot around trying to educate teams on how to do that, how to monitor deployment in production and these sorts of things.
I mean, of course, quite different from today's generative AI solutions in some ways, but actually the fundamentals are very similar in other ways.
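The monitoring Lawrence describes can be sketched in a few lines. This is a hypothetical illustration, not Amazon's tooling or anything named in the episode: track a deployed model's rolling metric against its launch baseline, and flag a human when it drifts.

```python
# Hypothetical sketch: monitoring a deployed ML component so that an
# improvement at launch doesn't quietly become a regression months later.
from collections import deque

class DeploymentMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 1000,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy     # accuracy measured at launch
        self.outcomes = deque(maxlen=window)  # rolling record of hits/misses
        self.tolerance = tolerance            # allowed drop before alerting

    def record(self, predicted, actual) -> None:
        self.outcomes.append(predicted == actual)

    def degraded(self) -> bool:
        """True once rolling accuracy falls meaningfully below baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        live = sum(self.outcomes) / len(self.outcomes)
        return live < self.baseline - self.tolerance

monitor = DeploymentMonitor(baseline_accuracy=0.92)
# In production, record(...) is called as ground truth arrives, and a human
# (the accountable party) is paged whenever degraded() returns True.
```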
David Toms
I guess some of what we're bumping up against here is that the reality of actually using these tools, um, is somewhat harder than people necessarily expect at the start.
Neil Lawrence
Yeah. I mean, I think it's a transformative, transformational, absolutely extraordinary technology. Um, I mean, maybe one way of putting it: it's like putting a jet engine on, well, I have an MGB GT. I told you, I still like mechanical engineering. We'll put a jet engine on it.
I've had it since I was 19. Yeah. You try putting a gas turbine engine on that and see what happens. Right. You know, bits will fly. I mean, bits fly off an MGB GT anyway, but you put a gas turbine engine on it and you're genuinely looking at blowing the thing up. I'm often thinking about mechanical systems, analogies like that.
You know, I'm terrible for YouTube videos, watching someone fix a crashed BMW M3. My God, that engine, you know, a modern one. The way those pieces, the twin turbos, are integrated and everything else. I mean, it looked like the alien when they took it out. It's almost unrecognizable from the engine in my MGB GT.
And that is, in effect, something like a gas turbine engine integrated with a regular engine. But to do that well, you're talking about quite a lot of systemic change in the way things are happening. And that's just in a car engine. Computer systems are way more complex and way less visual than the sort of simplicity of an engine driving a gearbox, driving a set of rear wheels, as they still do in a BMW.
So that challenge, which is sort of broadly of the form, a teenager wants to speed up their car, so they're strapping a turbocharger on the side, um, is really orders of magnitude more complex in terms of how you integrate it. And it can be done well, just as we see modern cars can be built well. But if naively done, it can be done extremely badly.
David Toms
And I guess that sort of brings us on to where AI goes. And you've been quite open in what seems to be a fairly provocative view, which is that you don't think AI leads to some form of singular superintelligence in the end. Can you explain sort of why that view came about, what you're getting at?
Neil Lawrence
Um, it's funny that it's now become a provocative view because it's so obviously not a provocative view.
I mean, the provocative view is to believe that we're so simple that you can just replicate us and put us in a computer. It's a provocative view that… I can only imagine, because I read a lot of these books late, but very few of my colleagues believe in that view. That's the first thing. If you are from the community, pretty much no one thinks that, right? The few people that think that are vocal about it and they're often quoted.
So it seems to be a view that technologists have, but I don't see it pervasive in the community at all. In fact, I see much more nuanced views from my colleagues. So I've become more provocative about expressing the view, principally because it seems quite hard to get traction in hearing these more nuanced views. And it's very complex, right, because you're trying to be provocative about expressing nuance, which is, there's a strong tension there. But to me, it's self-evidently obvious to almost every human being that a piece of silicon processing things is never going to be the same as another human being or as another human intelligence.
I think people struggle to express why that is so. And, I must say, these are extremely intelligent people, but there's a difference between intelligence and wisdom in some sense. Extremely intelligent people convincingly say that this is going to happen, and I think it's very hard not to be drawn into that and think, well, they must know something I don't. But they don't. In fact, what they have is a lack of Socratic wisdom. They don't understand the limitations of their own knowledge. They have a very simplistic model of how intelligence works and how humans are. And as a result, they think that this can be synthesized. And the book The Atomic Human is a weird sort of gymnastic attempt to use reductionist arguments to explain why these reductionist opinions are so stupid.
I mean, I should say unwise, or some such. I mean, they are stupid, though. But, you know, these are very clever people, so it seems odd to say that they're stupid, but they are stupid, you know? I mean, or naive, or just lacking in any kind of fundamental common sense. And that's a characteristic that we've had across great scientists since the dawn of history.
It's just sad that society hasn't learned that, and seems to be repeating the mistakes of slavishly following these very simplistic ideas. When we all know instinctively that the world is a lot more nuanced than that.
David Toms
And one of the concepts you talk about as being very hard, or impossible, for a computer to take on is accountability.
Neil Lawrence
Accountability is a great one. And it's really interesting because it's fundamentally a function of our fragility. So the book has this concept of the atomic human, which is, you know, the piece of the human that can't be sliced away by the computer. Every time the computer does something, it's taking something from us.
Is there a piece that's left? And I love that question, first of all. But I think everyone's entitled to their own answer. I'm not trying to own the future in the way that the people who are projecting superintelligence are. I'm trying to say no, the future is distributed amongst us. So you can have your own answer to that.
But here's mine, and I think it might be related to some of your answers, and it would be interesting to see if you agree. And mine is that the difference between us and the machine is that we have limitations and vulnerabilities.
The two primary examples I give: one is a communication bandwidth limitation. It takes us a lot of time to communicate ideas. You know, inside our heads we're firing neurons off and signals are traveling around our bodies at some fraction of light speed. But then when we communicate with each other, we're using sound waves, which travel a million times slower. So this gives us a sort of embodied intelligence, an isolated intelligence or locked-in intelligence, which is very different from the computers, because two computers can communicate sort of 300 million times faster than we can communicate.
That's an enormous difference, and we're already feeling the effects of that with social media. So that's a very technical, um, assessment, which is rigorous and sort of explains why our intelligence is so different, because it's going through this channel. So that's piece one.
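The 300-million figure is easy to sanity-check with rough, order-of-magnitude numbers. The figures below are assumptions consistent with the argument, not measurements from the episode.

```python
# Back-of-envelope check on the communication bandwidth gap.
# Both figures are rough assumptions for illustration.
human_bits_per_min = 2_000        # spoken language, roughly
machine_bits_per_min = 600e9      # a ~10 Gbit/s link sustained for a minute
ratio = machine_bits_per_min / human_bits_per_min
print(f"machines communicate ~{ratio:,.0f}x faster")  # ~300,000,000x
```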
But the second piece is, you know, I think once people get that first piece, they're like, oh my goodness. But then the second piece is perhaps even more important that we are vulnerable and fragile and we will die and we survive as a species because we bring forward the next generation. And a lot of our efforts as a species are dedicated to educating that generation.
Now you combine these two things together, and it means that if we choose to take some portion of our precious short lives and we dedicate it to an idea or a cause or a company or whatever, it means something, because we have a limited attention currency versus the machine, and that attention currency isn't going to last forever.
And if we make mistakes, people will call us on them. They'll say, well, you said that before. Well, you'd hope so. It doesn't seem to work anymore on social media, but you'd hope that we can be held to account for those errors. And the machine can never be held to account in that way. So it doesn't mean that we can't improve our decision making.
It doesn't mean that we can't be enhanced by working with the machine, or that the machine isn't extremely valuable, or in many cases better than us at the quality of a particular decision. But it means that the machine can't stand up and be responsible, be accountable, as you say. I mean, we talk a lot about agents, but I think in a lot of philosophical definitions of agency, it can't have agency, because agency requires exposure to social vulnerabilities that are shared, shared vulnerabilities.
And so much of our human intelligence and our societal intelligence and our cultural intelligence is dedicated to that. And the sort of idea that there's going to be, like, um, a computer replacement of that, like Mother from Alien or something like that, that just takes everything over. I mean, it's simply absurd.
Certainly for anyone who's looked at our society in anything beyond a superficial way, it's just absurd.
David Toms
And when we were talking earlier, you talked about how a coding tool you use was actually trying to take accountability from you, trying to take authorship from you.
Neil Lawrence
Yeah, it's been really interesting because of course, we have this amazing technology.
And, you know, everything I say about AGI nonsense doesn't take away how transformative what we've got is, because what we've got at the moment, and it will continue to evolve, it's been evolving for some time. I don't mean evolving like the way the alien evolves, just to be clear. I mean, it will continue to evolve technologically and have advances.
Um, it is a technology that can emulate human intelligence. So in the book, I refer to it as a human analog machine. And what do I mean by that? Well, an analog computer is an entity, like there used to be one called the MONIAC that modeled the economy, and it used water to model the flow of money: the flow of water in the machine is analogous to the flow of money in the economy.
And that's what an analog computer does. So when we're talking analog versus digital, we're talking about these machines where some piece of the machine used to be analogous to something in the world that we cared about. And very much with the LLMs, ignore the world model and all this other stuff, with these LLMs, large language models, generative AI, we've built something that certainly has a sort of analogous sense of the way that we look at the world.
There are states within the machine that are analogous to states in human heads. If you were to talk about a concept such as beauty or obscenity, which is a sort of complex and difficult-to-define human concept, Justice Potter Stewart, you know, said, I don't know how to define obscenity, but I know it when I see it.
So many of the things in human life are 'I know it when I see it'. We've built machines that can reconstruct that. Now, what's interesting is when you work with them. You know, what they do is they feed it a lot of software, a lot more software than a regular human would look at, so they become quite good at translating between human thoughts and code.
When I work with Cursor, although a lot of people are saying Claude Code's better now, that's a problem, the technology is all moving so quickly. I always make sure we plan together the code it's going to write, because otherwise it sort of goes off on one of these weird sort of fantasies of what it should create.
So I make a plan at the beginning. What's interesting is that it would write at the top of the plan that the author was 'AI Assistants'. No, you are not the author, because, you know, sure, you're writing most of the lines of code, but you can't have authorship because you're not the person who's standing this code up in society.
You know, that's me. I'm the one who's saying, this is what I've done. And, you know, if I missed an embarrassing bit you did, that's kind of my fault, it's not the machine's. I can't, you know, say Claude Code or Cursor isn't any good. Um, and that's sort of incredibly important. For most of my research code it doesn't matter that much.
But, you know, if we're making legal decisions, if we're making medical decisions, if we're making financial decisions, well, ironically, the GDPR already encodes what it sees as consequential decisions. Much-maligned legislation, but I think there's a lot of good in it. It's the implementation that's problematic.
I think, you know, instead of General Data Protection Regulation, GDPR should stand for Good Data Practice Rules. There's a lot in there that has already been characterized, because that law goes back to the 1980s, in terms of what we think of as important decisions that humans should stand up for. I don't think that they came at it the same way I come at it, in terms of, well, that's because of the societal stake.
But I think the intent of the law exactly captures the importance of having a human stand behind a decision.
David Toms
And we were talking again earlier about the scale of change. You used the word transformational a few times earlier. There's a lot going on in this space.
Is it possible to predict where things go? Um, if so, where do you think they go? And if not, how do you deal with the lack of predictability?
Neil Lawrence
Well, it's a fool's game to predict what's going to happen, but it's not a fool's game to predict what's not going to happen. So when I think about the long-term future, I think we can continue to say, I mean, we can say a few things. You know, some people say, oh, well, there'll be no jobs. I mean, I just find that absurd, because humans, we have a limited attention span, right? And that's not going to change. So that's one of the things we can say: oh, that's not going to change. And, you know, we don't expect human population to grow that much. In fact, we perhaps hope it won't grow that much; we may be more worried about decline. So there's a limited amount of attention on the planet.
So in some sense we can think of that attention as the gold of the attention economy, which is starting to drive things. And the second piece is that those humans will not stop being interested in other humans, people-watching. I mean, you see so much of the attention economy is captured by filming people - or cats, apparently - and feeding it to people.
So, you know, people will continue to want to create that. And it seems important to other humans that humans are involved in that creative process. It is, you know, a very, very interesting thing. And then I think there's a third thing, which is, of course, you know, companies will not stop trying to corner that market.
So you have these sort of three fundamentals that just aren't going to change. How that manifests in the real world, there's a lot of uncertainty around that. But one of the things - and another quote one can think about, um, I mangle the quote, but it's sort of of the form - uh, Turing, uh, said, I think, "we can't see very far, but there's plenty of things right in front of us for us to already do".
And that is one way that the innovation will happen: companies and businesses that have these things in front of them will just get on with doing them and transforming in that way, piece by piece. I think one of the reasons I'm keen on dispelling the AGI myth is I don't want them to believe that, oh, we don't have to worry about tomorrow because, you know, AGI is going to do everything.
And then that's disastrous. And I think one of the very disempowering things people are hearing is from, uh, tech CEOs, who are eminently unqualified to talk about this, let's be very frank, because, you know, it's in their business interests to own this space. And, you know, they are ripe for disruption, and they're trying to protect their business.
But the idea that they're selling people is: this is far too complex for you to understand, you know, this technology. Which is incredibly disempowering for a small business that is trying to say, how do I deal with these things in front of me, to be told, well, you know, what you're doing today is irrelevant because we're going to create superintelligence and this will all be meaningless.
No, these tools have to be deployed to support people in addressing those things in front of them. And, you know, to the extent that I'm trying to make a contribution, it's trying to say to people, here are some things to think about as you address those things. I don't want to tell you how to do your job. You know how to do your job.
You know how to do your job better than me. You know what those things are. But here are some features of what's going on that might help if you think about them: choose how to prioritize your to-do list, and prioritizing your to-do list will enable you to move forward with these new technologies in a sustainable - I think that was a word Matthew Brockman was using today in a talk - diffusive way.
Um, that's the way that what I think of as brownfield innovation will assimilate. Now, alongside that, absolutely, there's going to be a whole load of new business models and business ideas and things I can't even imagine, which will mostly be produced by people who today are 11 or 12 years old. Because these will be the AI natives that emerge into the ecosystem sort of around 2035 and then blow our, I don't know if they'll blow our minds. I think we'll come out and go, oh yeah, obviously in retrospect, that's the right way to do it. But it's just stuff we can't see because, you know, our context is so different from what their context will be. So that's another piece of the puzzle. And that's going to have an enormous effect as well.
Um, a disruptive effect. But you don't need to worry too much about that to empower people to address the challenges in front of them and allow them to move forward in a sustainable and productive way.
David Toms
So I guess getting down to specifics, what kind of things do you think people do need to look at? You know, organizational structure, customer relationships? How do you think the business needs to adapt?
Neil Lawrence
Yeah. Well, one thing I'm quite excited about is how these disruptions could bring some positive outcomes, fingers crossed, let's hope, for businesses that, and I think this is the sensible way to run a business, are in good contact with their customers.
That's one of the things I really liked about Amazon, which I think is losing it, by the way, I think it's totally losing this. But certainly when I was there, on the consumer side, the customer obsession, the focus, um, it was paramount. The reason it sort of is losing it is because AWS doesn't have it. It's very hard for a cloud computing business. I mean, in their beginning they had it, absolutely. But, you know, it's very difficult in business-to-business to have it at quite the same level that Amazon had it, which was a deep understanding that their customer was someone sat in front of their computer, and that was everyone in the world. To the extent that, and I loved this idea, when we were interviewing candidates, we were reminded that the first and most important priority is a good candidate experience, because every single candidate is an Amazon customer.
That's a beautiful thought, right? And it was absolutely critical in any conversation I had about how to deploy a product, because that connection with the customer was so strong. It's absolutely not there in AWS to the same extent, because in AWS, the customer is often the, you know, I don't know, the CFO or the CIO of a business that is buying the cloud en masse and doesn't have direct experience of using that cloud.
So that becomes a bit more problematic for them, because those CFOs, they want a tick list of features. They don't necessarily know whether those features are working, but it means that they get driven towards more feature-list creation to feature-match Azure. And it's sort of super interesting, because doing that was totally against one of Amazon's leadership principles, which is don't focus on competitors, focus on customers. But it became a priority of AWS to focus on the competitors, Google and Azure, because that's the way things happen in a business. But for smaller businesses that have made that error and are out of contact with their customers and how they feel about things, um, I think that they're in trouble, because how you adapt and deploy this technology needs to be informed by that contact.
What's happened is businesses have separated and decomposed how they operate for the purposes of efficiency. You know, we'll have a legal department, we'll have a financial department, and that seems very sensible, because we don't want to bother, say, the engineers with thinking about, you know, the legal implications. But when, all of a sudden, the technology might not be illegal but might be upsetting to customers, then engineers suddenly have to worry about ethical concerns.
So if you have engineers that understand, you know, the needs of their customers and how they'll react to certain deployments, they're more likely to integrate good features that deliver a good business. I mean, sorry, that's a long answer sort of saying, you know, just doing the fundamentals of a business right is important.
But of course, that's in tension with efficiency. And that sort of efficiency, um, drives you to decompose these things, and a lot of businesses have done that. Um, so how does that affect what one does as a CEO? Well, I think one of the challenges is you're actually talking very often about cultural shifts in an organisation.
And those cultural shifts become particularly challenging, because you end up with, yes, you know, leaders of large organisations are incredibly impressive people, but they're impressive people because they've learned, through coming up through the organisation, what things kind of work and what things don't.
So they have a good sense of smell for when they need to dive deep and apply their attention. They've got limited attention like everyone else. But in the era of AI and this very disruptive technology, they don't have a good sense of how that technology can go wrong. So they tend to be all in or all out. They're initially all in, because the CEO is saying the investors insist that we deploy AI, and then in every part of the business they're having to deploy it; or they're all out, because they did that last week and it blew up in their face.
And of course, the truth is they need to be somewhere in between the two, but they don't have a calibrated instinct. So the main challenge, then, is trying to support these leaders in their understanding of the smell of when AI has gone bad or when it's going good, by lots of techniques like reverse mentoring, or having them more closely engaged with an operational project where they sort of stand back and allow the project to operate properly, but they observe in such a way that allows them to see, you know, that the reason it goes wrong is because the data infrastructure in the business is appalling and they didn't even know. That's a very, very common problem, what I think was earlier today referred to as the scaffolding of the business. I often, and we're quite close to the Embankment, so I can particularly enjoy saying it now, think of it as the plumbing. You know, the reason why we built embankments and London sewers was because people invented flush toilets and started flushing things into the Thames, um, which actually made a cholera epidemic happen. Right.
You kind of want to build the sewers before you start running your AI, and that sort of infrastructure is often not the stuff that hits you. Well, it didn't hit the UK government until the stink of the Thames became so bad, in a very hot summer, that Parliament couldn't sit. So you don't want to wait till the smell of your business is that bad that, um, you suddenly have to restructure everything and start putting in that infrastructure.
But for many leaders, they're not told about problems with the plumbing because, you know, it's a bit too complex to sort of describe. So you need those leaders to have dived deep into the process and organization of the company to see how these things are failing, so they can be supportive around those failures, rather than, I think, the reaction that many, um, sort of program managers or project managers might fear: if I surface this problem, I'll get shouted at because the problem seems so silly.
But the problem is like leaky pipes and stuff like that, and all businesses have it. I mean, Amazon had it. If you're telling me your business doesn't have it, and yeah, Amazon has it, then I think there's something funny going on, right?
Of course, there are some businesses that are purely digital where it's less of a problem, but as soon as the business starts moving between the physical world and the digital world, these leaky-pipe problems are everywhere. And if you're not actively engaging with them, then, you know, you're not actively engaging with how your business is operating.
So it's sort of sad, because everyone thinks the way of dealing with this is, yeah, of course, you know, you want your engineers to engage and experiment with the tools and your processes to be more efficient. But they often don't, you know. Like I say, they install the flush toilets, but they haven't checked the plumbing, and they have no knowledge of how the plumbing works. And that's deeply problematic.
David Toms
It sounds like there's quite an important lesson in there: when you've got a technology paradigm shift like this, you need to go looking for the failures to learn from them.
Neil Lawrence
Well, yeah. I mean, again, so much of my experience, and I actually really enjoyed working there, comes from, you know, one of Amazon's leadership principles, which was dive deep. And, you know, you could see it misapplied. Sometimes someone would say something, and the way it would go wrong is the senior member of staff would start to sort of interrogate them, but start by asking a really stupid question that no one could then contradict, you know, and then you can't contradict the powerful person, so that would go wrong. But when it was well deployed, this notion of dive deep was: if you saw that anecdote and data were in tension, something's wrong.
It's not the case that one overwhelms the other; you need to find out why these things aren't tied together. And this is a very interesting thing around human thought. A lot of the way we express things is through narratives, but narratives can also misrepresent. So we've tried to make our businesses more quantitative, but quantitative representation can also misrepresent, in different ways.
So we used to do these really sort of anecdote reviews on the supply chain when something would go wrong, and that involves going right to the core. They would sometimes be raised by, you know, a senior vice president who tried to order something on the site and it wasn't available: what the hell's going on? And instead of going, oh, they're grumpy.
No, you had to find out. And you had to go in deep and understand exactly what went wrong and why. Amazon, like I say, in the consumer business, this is why they were able to stay in contact with customers so well. Because if you habitually did that, you might get a question-mark email from Bezos with a customer complaint representing one of these; it wouldn't just be an SVP. That kept requiring you to go deep and find out: where was the leak, where is the problem? Dive into the details and then come up with an explanation. And that was a really great way of finding out, I mean, how are we getting the efficiency? It's through abstraction, through moving the direct customer interface away from us so we can deploy at scale.
Right. That gives us efficiency. That's the difference between the sort of corner-shop service, where I, you know, get your lentils and give them to you directly after weighing them myself, and scaling. But that scaling and that efficiency is disconnecting us from how customers are feeling pain and how things are going wrong.
So that dive deep process is really important in terms of identifying when that's going wrong, how that's going wrong, and how you can monitor and prevent it going wrong in the future. And when I'm working with other businesses, which I do an incredible amount, I find it rare that they have such capabilities.
And I would argue that in some businesses it's very difficult. I think that's why AWS started to, well, I believe it's struggling; many other people, I'm sure, would argue differently. But it's because you don't have that direct connection, right? Because no longer is it like it was in the beginning of AWS: the developer who's at the coalface, who doesn't like AWS, they're not emailing Bezos anymore.
Right. It's much more about business-to-business purchasing and that abstraction. So you've become disconnected from your customer, you've lost that link. Now, I'm sure they'll be fine. But if that happens in other businesses, you know, ones that have less of a sort of market share, I think it can be deeply problematic. So this connection, to allow you to see when things are going wrong, I think is one of the most important things for businesses.
David Toms
I guess one of the challenges when the paradigm shift happens is that it's easy to be a little dismissive of the customers and say that they're only asking for a faster horse and cart.
Neil Lawrence
Yeah, it's so annoying. I mean, we just did a public dialog on Friday in Cambridge, and this is the third one we've done recently; we did one ten years ago. And we convened it with Hopkins Van Mill. Um, we have a whole platform where they engage with AI, with a chatbot. It's really well considered, in terms of how we try and hear from members of the public.
And absolutely consistently, the members of the public express far more mature and intelligent opinions about this technology than our politicians, CEOs of tech companies, me, everyone else. I learn in those meetings, where you sit off the table a bit and just listen to their conversations.
Of course, they're well convened. You know, we employ professionals to do that, and we're using the latest AI tools to support those professionals and learning how that works. People are not stupid, and we've got to stop saying they are. And this idea that the customer doesn't know what they want is kind of absurd, right?
It is true that you can think that, if you don't care about the customer deeply. And Amazon had some interesting techniques. It didn't care about profits this week; it cared about long-term free cash flow, this is the Bezos idea, caring about how much cash flow we're going to get in two years' time. Because, putting it cynically, there's no point in profit-gouging them today, because boy, will we be able to profit-gouge them far more in two years' time, is kind of the framing you get. Not that I'm saying Amazon profit-gouges, but, you know, it's a way of getting you out of this sort of simplistic thinking. It was a big problem on the platform: a lot of the sellers on Amazon platforms, they don't have that incentive because they don't have the brand.
Um, so there's this notion of thinking about customers in the long term. You have to, first of all, be caring about your long-term customer rather than just being in and out. You know, this actually relates very much to, um, this point of having skin in the game. One of the things being a business allows you to do is to cut off these social obligations.
So you get these fly-by-night businesses that can appear, profit-gouge, make a lot of money for an individual and disappear. And unfortunately, the reputation of that individual doesn't suffer, right? That's something that we already have in society. But if you care about your business long term, you know, presumably you don't want to do that.
You're trying to build your brand. Your brand becomes the representation within society of, you know, the entity that has that. You know, Amazon was very careful about their brand around customer care and price point. Um, so if you do those things and you think in a sophisticated way about your customers, rather than, um, you know, this extremely dismissive thing. Which, by the way, I think it's absolutely clear, and this is what comes out consistently from our public dialogs, when we look at how regular citizens feel about these technologies.
Sure, they're very happy using TikTok and whatever else, but they consistently ask for improvements in healthcare, education, social care, security. You know, the so-called wicked problems. And these technologies consistently fail to deliver in those areas. They don't work that well in supporting these things.
But people aren't stupid. They kind of know what they want; they just don't get it delivered. So I think respect for the customer, and awareness that, yes, of course, across those meetings occasionally someone might say something and you think, oh, that's not right. But you get those people together in a diverse group and you start hearing the diversity of opinions, and customers really do deeply understand what they want. But they don't understand what they want if you just corner one and point a finger at them.
They need to be convened in a way that respects their entirety and their diversity. And, you know, in giving an assessment, we often build these conversations around existing applications of the technology. In this case, in Cambridge on Friday, it was how these people would like the tools to be deployed to improve local government services.
And, you know, of course, these groups every so often would go off into a little bit of a rant about potholes or street closures or things that they were personally experiencing. And what I loved watching was how the moderator would just allow that. They didn't sort of step in and say, we're not talking about that. You know, they would just allow that to dissipate and then they would come back in with, okay, well, what could we do, you know, to pull that conversation back together.
So it's a skill to sort of understand, you know, people. To some extent, maybe you could argue that means people don't know what they want, but they do know what they want when they're correctly convened and respected and listened to.
David Toms
So when you talk about people knowing what they want, a question I get a lot from people isn't actually about themselves, it's about their children.
Anybody who's a parent will be able to relate to this: what does this AI world mean for my children? And as a parent yourself as well as a technologist, what's your view on what generative AI means for your kids?
Neil Lawrence
It's a great question, isn't it? And it's a real indicator of how broken the current debate is, that in a very real sense, that's what we're all worrying about, or many of us of our age are worrying about.
And yet that's not what the conversation is about. And if you think about the customer as your child, and you think about what world you want your child to be growing up in and seeing, then a lot of these things really come home to you and you really realize the importance of them. But one of the things I want to say is that I think we're doing them an incredible disservice at the moment with this sort of nonsense AGI debate, them being told that the technology is too powerful, that it'll all be about, you know, large tech companies owning and deploying these technologies.
Can you imagine what it would have felt like when we were, sort of, whatever, deciding which university courses to do at 18, or deciding what jobs to take at sort of 24, to have that weight of the world on us about what a difficult place this is going to be? We never had that. We never had that. I mean, certainly when I was growing up, you know, in the 1990s, the world has changed immeasurably since then.
The internet didn't exist. I mean, I don't know, are you telling me this is an order of magnitude more change? I doubt it, because the change is social as much as it's technical. It takes time for the change to assimilate. But we were just told to follow the things we were passionate about, and encouraged to explore our curiosity and do something that was useful, or whatever we were taught.
And to go out there in the world. And one of the things I've started noticing with my kids, and I hope with other people's, is that they don't carry this burden of, I think, the slight disappointment that our generation has: that we were sort of promised the internet in 1996 and it was going to do all these things.
And actually what we're seeing is a degradation of open society, a degradation of democracy, you know, the removal of things that, post-Cold War, made many of us feel we were on to some really bright future. Let's not burden the next generation with that, because guess what? They weren't alive then.
They didn't go through that period of promise and hope. So they're just seeing the world as it is today, and we need them to be as empowered as possible, because we're utterly reliant on them for the human future. You know, we need them to feel as confident and as empowered as possible, that they can steer this.
So let's stop this nonsense of telling them that we've created AGI. And let's talk about it how it really is: we've got an incredibly powerful new tool that is going to absolutely disrupt the way the world is, just as the way the world is now was disrupted from the way it was 30 years ago. And we need them to be as confident and as educated and informed as possible so that they can be part of steering that world, not just for themselves, because the downstream consequences are enormous.
It's not even just our grandchildren or their grandchildren that are affected by the decisions made today. You know, maybe look back from the future: what will you look back on as historically important? And, you know, the last thing you want is what people are engendering at the moment, a fearful generation. And I don't think they are, that's the other thing about it.
I don't think they are fearful. I think that they're excited about going into their world and doing their own thing. And I think the people who are imposing this fear on them are the existing interests, the existing incumbents, who should be fearful, because their businesses should be disrupted. But they should not be propagating that fear to the people who will do that disrupting and hopefully, fingers crossed, you know, take this technology and steer it in a way that, um, sort of embeds and improves the things we care about most in our society rather than diminishes them.
David Toms
First of all, that was a fantastically uplifting ending. Thank you very much for your time. We really appreciate you joining us today.
Neil Lawrence
Thank you, David.
The views and opinions expressed in this podcast are those of the contributor(s) and should not be taken to represent the views or positions of Hg or its affiliates. Statements contained in this podcast are based on current expectations or estimates and are subject to a number of risks and uncertainties. Actual results, performance, prospects or opportunities could differ materially from those expressed in or implied by these statements and you should not place any undue reliance on these statements.