Orbit Podcast
Orbit 46
Quick and dirty prototypes with Andrew Ng
In this episode of Orbit, Matthew Brockman, Managing Partner at Hg, sits down with renowned AI pioneer Andrew Ng to explore the rapidly evolving AI landscape. Andrew, who has led ground-breaking initiatives at Stanford AI Lab, Google Brain, and Coursera, shares fascinating insights on building in the AI era, including how the falling costs of prototyping (just $55,000 to create a working prototype) are democratising innovation and allowing companies to take more shots at goal with minimal risk.
The conversation delves into practical advice for software leaders navigating this technological revolution, with Andrew emphasising that the real value lies in the application layer rather than foundation models themselves. He challenges conventional thinking about AI "kill zones," advocates for creating innovation sandboxes within organisations, and offers a compelling perspective on why professionals across all domains, from lawyers to doctors to marketers, should learn coding to effectively harness AI's potential. Andrew's prediction of the rise of "10x professionals" who can masterfully direct AI to accomplish tasks presents both an exciting opportunity and an imperative for business leaders to embrace this transformative moment.
Episode Transcript
Matthew Brockman
Welcome to Orbit, the Hg podcast series where we talk to some of the most successful leaders of technology businesses and hear how they built some of the most successful software companies across the world. I'm Matthew Brockman. I'm Managing Partner of Hg and I'm delighted today to be joined by Andrew Ng. He's a very well-known figure in the world of data science: Director of the Stanford AI Lab, which led to him founding the Google Brain team and co-founding Coursera, where I think he's personally contributed to millions of people learning more about AI and continuing their education. He's the Managing Partner of AI Fund, a leading AI venture studio, which we'll hear about, exec chairman of Landing AI, and in 2023 he was named one of the hundred most influential people in AI.
So, Andrew, you have a really interesting background, obviously as an academic researcher, as a business leader, a business founder, an investor. At this point of really interesting development in data science and evolving intelligence, where would you say it's easiest? Would you rather be a researcher right now? Would you rather be a venture capitalist? Would you rather be a founder of a business? Which is the hardest? Where would you find it easiest and hardest to participate in this evolving world?
Andrew Ng
I find what's most fun to do at this moment in time is to be a builder. I think you could be a builder in academia or as a business leader. We're a venture studio, so we build companies. But at this moment in time, with the advancing AI technology, there are so many new applications that are now possible that no one on the planet could have built even one or two years ago. And I think it's amazing that today, frankly, there are probably high school kids that could do things that the best researchers in the world would have really struggled to do two years ago. And so the magnitude of opportunities just builds up. It's amazing.
Matthew Brockman
Obviously a lot of talk in the last year or so around, I guess, the pace of development and also the cost of models, the cost of application, the cost of products. Even a month ago we had the news of DeepSeek from China, the sense that it's so much cheaper now to access the capability. Do you think that continues? Do you see a reason why that slows? How do you think about the next one, two, three years, which feels like a decade in normal technology language? How do you see that evolving?
Andrew Ng
I think the cost of training models has been falling for several years now. And what happened with DeepSeek training a model for under $6 million, that was probably, you could argue, on trend, but what was surprising was that it was a Chinese company rather than an American company that did it this time. For most people, this may be how I would think about the AI stack. There's a semiconductor layer: Nvidia, AMD and others. On top of that we have the cloud companies. And then on top of that, the AI large model training companies, also called the foundation model companies, the OpenAIs and DeepSeeks, and a lot of the buzz and hype has been on these technology layers, and that's fine. Whenever there's new technology, the media and social media say, "Let's talk about this."
But there's an even more exciting layer in the stack, which is the application layer, built on top of the technology layers, because for all of this to work out, we need the applications to generate even more revenue so that they can afford to pay the technology layers, including the foundation model training companies. So I think the good news for most people watching this is that the fact that foundation models are so expensive to train is not your problem, because others have spent billions of dollars training these models. You can now access these models for dollars or cents and build amazing applications on top of them in a really capital-efficient way. I hope the economics works out for the teams spending billions of dollars on this stuff. I'm cautiously optimistic it will, but for most people, this creates great opportunities to build new applications really inexpensively.
Matthew Brockman
One of the pieces of feedback, I guess, we've had coming back from some of our investee companies is, "The models are still evolving at such pace. How do I build an application that I know is going to ultimately work for my customer or serve a customer need, as opposed to trying to work with something more static?" And I think they're hearing that from their own customers. Like, "If I build on an application that you guys have supplied to me, how do I work with the latest model or the latest technology?" Do you have a perspective on how people should approach that if you're running a SaaS company or a classic sort of application software company?
Andrew Ng
I would say I think it's kind of business fundamentals. I think it's worthwhile taking the technology that's available today and using that to serve customers, and then of course it'll get better. Just like if you build your applications on top of the cloud: the cloud prices keep coming down, storage is cheaper, compute is better, and then we serve customers even better on top of this improving technology infrastructure. So I think there are a lot of opportunities to take today's technology and just go build valuable things and serve customers. And then as the technology gets even better, we have the option to switch to newer generations to make it even better.
Now, there is one thing that has been hyped up more than I think it should be, which is the so-called kill zone of the foundation model companies. The hype goes that foundation models are so powerful that if you build a startup on top of them, maybe one day the foundation model will be so powerful it just wipes out your business. I don't want to say there's no kill zone. There is a very small kill zone. The example that everyone talks about was when OpenAI's models got better, it got Jasper, a marketing company built on top of an earlier model, into trouble. And then arguably the chat-with-PDF applications also got into trouble when the foundation model companies incorporated uploading PDFs natively. So you can now upload a PDF file to most of the chat interfaces on the internet, and so the chat-with-PDF things went away. But that actually wasn't the foundation model itself; that was a feature bundled into the product.
So I don't want to say that there's no kill zone, but I think it's been vastly hyped up relative to the actual magnitude of the reality. And I wish it were that simple, that some foundation model company just trains a bigger model and it does everything. It is just not. So the kill zone is much smaller than the hype makes it out to be. And I would say just go build valuable stuff, and as the technology gets better, there's a very good chance that your life will get better too. I don't want to say there's zero chance that a foundation model will wipe out some business built on top of it, but it's much smaller than most people think.
Matthew Brockman
We've had this debate over many years, which has always been, if we look at an application software company, there's always, "Could Microsoft kill you?" or, "Could one of the big providers, either in enterprise application software or in cloud infrastructure, supplant you?" But ultimately the nuances of application in a particular industry are so deep that it's actually quite hard to supplant if you're serving a very horizontal set of use cases. It feels like the technology will continue in that way, even with the capability of the intelligence these models have.
Andrew Ng
Yeah. In fact, at AI Fund, we do a lot of applications in vertical SaaS, in some of these industries that I think are really exciting but the rest of the world for some reason thinks are unexciting. So when we go really deep into tariff compliance for cross-border trade, with all that specialized data, or when we go really deep into shipping route optimization, figuring out all the issues there, I don't see any short-term realistic path for a foundation model to just figure out how to do that. There's so much vertical knowledge, so I'm just not worried. And when we build these complex agentic workflows on top of foundation models, I don't want to pretend that the technology is infinitely defensible; I'm sure someone will someday figure out how to copy it. But I wish AI were so good that it could easily take over these businesses. AI just isn't that good yet today.
Matthew Brockman
I'd love to pick up on that theme. So we have a lot of investment companies and a lot of people who work with Hg and either invested with us or in our ecosystem that have... The software is quite deterministic. It's processing a particular workflow that needs to be right, you know, my taxes or a particular sort of medical process and so on. And obviously the models are ultimately probabilistic. They're giving you an outcome based on a series of inputs. How do you think about workflows like the ones you just talked about where there's an element of it needs to be deterministic, it needs to be right for the end customer, and how that plays through over time or how you think that might mature in the next few years? Because it is a question we get a lot.
Andrew Ng
So LLMs are probabilistic, but we have a lot of mechanisms to make them more reliable and make them more deterministic. Because they're probabilistic, I don't think they'll ever be mathematically deterministic, but they could be more reliable than, say, a human in a similar context, maybe because humans are also slightly stochastic and sometimes make slightly different decisions. But there are some design patterns. Before taking a high-stakes action, we often have the AI system run a confirmation flow. So for example, before we charge someone's credit card and ship them an expensive product in the physical mail, just pop up a window to say, "Do you really, really want me to charge your credit card and ship you this thing?" and have them click yes before you do it. So that confirmation flow takes out some errors.
We build these agentic workflows, which means we don't have an LLM just spit out an answer and go with it. We have the AI generate the answer, then we say, "AI, double-check your answer, maybe apply guardrails, see if it's really right." And it turns out that if you have the AI system double-check its answer a couple of times, and every double check cuts the chance of an error to a tenth, it goes from 90% to 99% reliability with one round of double-checking. That's a made-up number, but it really becomes much more reliable. There are still some really high-stakes applications, like certain medical diagnosis ones, where I'd just be really cautious. I don't think we're there yet. But there are a lot of commercial applications where, so long as the risk of an error is not completely catastrophic, it's merely somewhat expensive, I'm seeing more and more cases where AI is getting reliable enough to be acceptable for those types of use cases.
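As a rough sketch of the two patterns Andrew describes, the error arithmetic and the confirmation flow, here is a minimal Python example. The numbers and function names are illustrative only, and the calculation assumes each verification pass independently catches the same fraction of remaining errors:

```python
# Residual error after k verification passes, each catching 90% of the
# errors that remain (the made-up numbers from the conversation above):
base_error, catch_rate = 0.10, 0.90

for k in range(3):
    residual = base_error * (1 - catch_rate) ** k
    print(f"{k} double check(s): reliability = {1 - residual:.3f}")
# 0 double check(s): reliability = 0.900
# 1 double check(s): reliability = 0.990
# 2 double check(s): reliability = 0.999


def confirm(action: str) -> bool:
    """Confirmation flow: a human must approve before a high-stakes action runs."""
    reply = input(f"Do you really, really want me to {action}? [y/N] ")
    return reply.strip().lower() == "y"


if confirm("charge your credit card and ship you this thing"):
    print("Proceeding...")  # the high-stakes action runs only after explicit approval
```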
Matthew Brockman
I mean, it's certainly one of the themes we felt is important in our investment portfolio, and it sounds consistent with what you're saying, which is there's traditionally been an application layer and almost a human layer: the person entering the data and making decisions on a financial recommendation or a healthcare recommendation. It's the extension of the application layer into that human process: how much of the work can be done essentially by the machine in that context, and therefore how much of the value is in the technology versus in the user, for want of a simpler expression.
Andrew Ng
So one of the very robust defenses, or one of the often-used defenses, for random or unreliable results is human in the loop: to have a human verification at the last step. As I build more and more applications, one repeated theme is, as we look at how deterministic versus stochastic humans are, I see so many businesses really surprised to discover that the humans are much more random than they ever thought. And so the bar for AI to be less random than the human is probably lower than many businesses thought.
One example: a few years ago, I was in a factory that inspects plastic widgets. I spoke to two inspectors that had been working the job for many, many years, thought of as very seasoned, reliable inspectors, but no one had ever checked that the two of them actually agreed on their judgments. And it turns out they did not. We found this out because we found that no matter what an AI did, one of them would disagree with our AI and say, "No, this AI is stupid." The two of them didn't even agree with each other, which is why at least one of them would always criticize the AI output.
I find that when you measure how reliable AI is, especially for problems with subjectivity, even medical diagnosis, frankly, humans just don't agree that often, so the bar may be lower than was widely appreciated before. This is also why, when robots started welding cars, one of the huge benefits wasn't that robots welded cars better than humans; it was that they were much more systematic than humans, who welded cars in slightly different ways every day. And that systematic processing was a big benefit. So I think we still have work to do. AI isn't ready to take over a lot of tasks, it's ready to take over some, but I think the bar may be lower than is widely appreciated.
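One simple way to see this effect in practice is to measure inter-rater agreement between humans before comparing either of them to an AI. A minimal sketch, with made-up labels standing in for the two inspectors' judgments (the data and helper function are hypothetical, not from the factory described above):

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items.
    1.0 means perfect agreement; 0.0 means no better than chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two "seasoned" inspectors labelling the same ten widgets (hypothetical data):
inspector_1 = ["ok", "ok", "defect", "ok", "defect", "ok", "ok", "defect", "ok", "ok"]
inspector_2 = ["ok", "defect", "ok", "ok", "defect", "ok", "defect", "ok", "ok", "ok"]
print(f"kappa = {cohens_kappa(inspector_1, inspector_2):.2f}")  # ~0.05: barely above chance
```

With numbers like these, an AI only needs to beat a surprisingly low human baseline.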
Matthew Brockman
Do you have a point of view on what the job might look like for an average lawyer, attorney or medical professional in a few years' time? Maybe in some of the businesses you're invested in or creating, do you see how the end user's role might change once these are deployed?
Andrew Ng
You know, I have some very specific and maybe controversial advice, which is: if you're a lawyer, learn to code; if you're a doctor, learn to code; if you're a salesperson, learn to code; if you're a recruiter, learn to code. And here's why. Look into the future. One of the most important skills for any knowledge worker especially is the ability to tell a computer exactly what you want so that it will do it for you. Computers have become incredibly powerful. And just as today, I can't imagine hiring a marketer that doesn't know how to use web search. I mean, Google was a new thing a long time ago now, but you just can't hire someone like that. Would you hire a lawyer that doesn't know how to search the internet? I would not. We're rapidly approaching a phase where already I really hesitate to hire a marketer that doesn't know how to use AI in a really deep way. And as AI capabilities continue to expand, people that understand at a deep level how computers work and can tell them to do exactly what they want will be way more powerful than people that don't.
I had an interesting experience working with a collaborator at DeepLearning.AI on generating some pictures for art. I was teaching a class that needed some artwork in the background. Long story, but I learned an interesting lesson, because the collaborator I was working with, Tommy Nelson, had taken art history when he was a student, and so he knew the language of artwork. He could go to an AI picture generation tool, like Midjourney, and tell it, "Use this genre and this theme, this artist's palette," whereas I don't know the language of art. And so I would say, "Please make some pretty pictures of robots for me." And he could get AI to generate much better artwork than I ever could, and I think the same will be true for all of these roles, be it marketers, recruiters, or doctors. Someone that really understands how computers work can use the right language to command them to give you the outcome you want.
So to me, coding is less about writing code specifically. It's about understanding enough of how computers work to have a step-by-step problem-solving approach and to use precise language to instruct the computer to do what you want, and that will be really important for all knowledge roles.
Matthew Brockman
That's a great example. We have a similar experience in some of our legal software companies, where essentially you're trying to use the capability to automate workflow, and it's that combination between understanding legal process and legal knowledge. "What should I actually be thinking about and asking?," and then complementing that with the capability of the model. You need both in one place to actually make it effective. You can't really just prompt to kind of, "Oh, could you draft me a legal document that does X?" It's a much more complex workflow, but once you can capture that, that's where the productivity comes from.
Andrew Ng
Yeah. In fact, I think, you know, reasoning models, which have been making the rounds, are a bigger deal than most people realize. One of the challenges is you do need to prompt them differently. I think most people have not yet figured out how to use reasoning models, but I personally use reasoning models all the time, and I feel like when you can give very precise instructions to tell a really smart model what to do, it just does incredible amounts of work for you. But I think that's a skill. In software engineering we often think about the 10x engineer, which is the engineer that supposedly has 10x the impact of the average engineer. We don't talk a lot about the 10x marketer or the 10x lawyer or the 10x doctor, maybe. And I think for some professions, the laws of physics make it very hard for someone to be 10x.
So if you're a supermarket checkout clerk, no matter how athletic you are, you just can't move groceries through the checkout that fast. Or even doctors: because the human body heals at a certain rate, I don't know that any surgeon can have 10x the cure rate of another one. But I think that as more work becomes AI-empowered, we will see more 10x marketers, 10x lawyers, 10x financial analysts. And what this means is, for an individual, if they learn these skills, learn coding or learn how computers work and learn to bend computers to their will, make them do what they want, we'll see a lot more 10x professionals, and this will be exciting for the individuals that embrace those opportunities.
Matthew Brockman
Does that 10x opportunity inform where you are investing your own time and money? As you think about verticals or sub-sectors where you can build applications and agents, are you thinking about that as priority areas where you have the maximum chance of having a return and an impact?
Andrew Ng
I know there's a standard rule of thumb that some people use, that you should invest in areas where there's a 10x opportunity for efficiency gains or something. I don't know. It feels to me like a simplified description of investment criteria. We definitely look at it, but I don't think it's necessary or sufficient to decide to go into a sector. I'm actually curious what you think.
Matthew Brockman
We try to diversify as much as we can within an overall theme: white-collar jobs that employ a lot of people in expensive places, North America and Europe, and how we can automate those workflows. And essentially, if we can invest in those (we have about 55 companies in our portfolio, so about 55 exposure points to individual companies, and then obviously different products within those), each of them benefits from the same macro theme: more and more automation, more and more workflow. It was SaaS at a certain point, and it still is SaaS in many applications, bizarrely, and it's still a trend which is moving in many places. Now you've got this next wave of intelligence really playing through. And then it's really cross-sharing: "How do I get as much collaboration across people within the portfolio or within the applications, so we can learn from each other?" And I guess that's a real theme at the moment when we think about what's happening in the world of data science and AI.
Andrew Ng
That's actually really clever, and I think it's similar to what I'm seeing: a lot of cross-sharing, where very surprising lessons from one application are applicable to another. And I think one thing I've seen as well is that technology evolves so quickly that having a community to work with means we all do a better job staying on top of where the tech is, what we can and cannot do, what is just marketing hype from some company and what to actually believe in and try to use ourselves. That cross-sharing community seems to be very powerful.
Matthew Brockman
Certainly, and this is obviously the reason we do these kinds of podcasts: to try and share a bit of intelligence across some of these applications as they build. Do you have a view of what a software company will look like in the future, in terms of how many developers or what kind of developers? You've talked a lot about how this might work in the application layer, but as you're building software, do you have a perspective on what an application business of the future might look like, versus one that might conventionally be a SaaS company in people's heads today?
Andrew Ng
So I think software development is completely transforming already because of Gen AI. I know that one answer for this moment in time is to use developer tools like Windsurf and Cursor and GitHub Copilot or whatever. I actually am pretty confident the current generation of tools is not the end of the story; there will be dramatic improvement over that. And I'm already seeing developers that use AI as a coding assistant being way more productive than people that don't. I still write a lot of code, and frankly, I would hate to ever have to code again without AI assistance. And so actually, one thing I worry about: I know of a number of large companies where, maybe for some reason, the CIOs have banned employees from using these new AI-assisted developer tools. And boy, I feel like they've got to be operating much slower than the people that use AI assistance.
So I hope that those CIOs will figure out the right policy to give them the reassurances they need. I think there are valid concerns around AI use, copyright, security and so on, but it's making a big difference to developer productivity already.
Matthew Brockman
I mean, we're in the initial stages, if I'm completely honest with you. But we've found enormous potential from developer productivity, as you say: the resources required to deliver a certain set of products can be 60%, 70%, 80% more productive. It's also, frankly, made us think pretty hard about offshoring, which has been a trend over the years, using obviously lower-cost labor centers to produce basic levels of products and so on. That becomes a very different proposition, frankly, and we're rapidly changing how we think about that, for example. So we're massive advocates of deploying this in the application layers that sit in a bunch of our companies.
Andrew Ng
Let me add two observations to that. It turns out, if you look at the software engineering process, I want to break software work into three buckets. At AI Fund we build companies, so we often build quick and dirty prototypes; then at some point you have to build robust, enterprise-grade software; and then sometimes you have to maintain legacy software. Of these three buckets of work, I find that for the second and third, building scalable software and maintaining legacy software, AI boosts productivity significantly. Let me make up a number, like 50% or something, which is huge. But the really transformational part is building quick and dirty prototypes, where rather than a 30%, 40%, 50% boost in productivity, it feels like a 10x boost. And I think it's because quick and dirty prototypes, for validating innovation and invention, have fewer connections to legacy software infrastructure. If you just run them internally, inside your firewall, you don't need to worry as much about security. So there are all sorts of technical reasons why building quick and dirty prototypes is so dramatically faster with AI-assisted coding.
So what we see at AI Fund is that when the cost of prototyping falls dramatically, economics says, "Well, we probably want to buy a lot more of it," and so I see a lot of teams, including large corporations with smart innovation teams, saying, "You know what, let's build 20 prototypes, and if the price of that is to fail 18 times out of 20, it could be a very cost-efficient way to invent new things." So I find that the rapidly falling cost of prototyping has created a new mechanism, a new way, for companies to invent. And this is also why people sometimes lament that so many things are in POC that never make it into production. To me, if the POC took an engineer half a day, it's fine. It's fine if a lot of them just fail. What's the big deal?
Matthew Brockman
It's like a sketch on a page, right? I could create the full artwork, but actually if I can just do the sketch, I can make a very quick decision on whether I want to follow through and go to all that trouble.
Andrew Ng
Oh yeah, that's a great way of putting it. And so, frankly, I still do a lot of coding on my weekends. My Saturday routine is to disappear at a coffee shop. And frankly, a lot of my prototypes will never see the light of day; I never tell anyone about them because they frankly just don't work. But a small fraction of them I actually take to my team and say, "Hey guys, check out this prototype. Let's see if this is scalable," and that makes it all worthwhile. And it's not just me; I think this is a different philosophy of building and inventing.
Matthew Brockman
Sorry, one last one maybe, if I may. Do you come up against the regulatory side, data access? Some of these themes are, I guess, probably slightly more prevalent in Europe; they're certainly talked about a lot between Europe and the US. You can experiment, but to actually get deployment, to get real adoption, to get scaled, does that raise its head? Is it something that you have to think about, work on, or that is part of your investment process?
Andrew Ng
I think eventually these things do become an issue, but often... Maybe one thing that's changed is AI allows us to process unstructured data, meaning text, images, audio, video, much better than ever before. So what that means is that setting up the data infrastructure to get your unstructured data to where you want it to go for AI to process is now more valuable than it used to be. Previously, for example, we used to have a lot of images stuck in some blob storage in the cloud. It just sat there because we had no use for it; we couldn't process it. But AI now lets us process it. So I'm seeing a lot of rejigging of data infrastructures to let data go where it should go in the organization, subject to security and privacy, to do the AI processing. So that's been important.
And then I think regulatory and safety issues are important, but large corporations grind to a halt if every innovation needs five C-suite executives to say yes. So I find that if you create sandboxes, a protected environment where, say, no PII leaks externally and there's no damage from shipping bad products to customers, you can let innovators just run like crazy within that sandbox to prove something out before you figure out how to take it outside the sandbox. Then you don't have innovators needing to run around asking for permission. Frankly, I know that sometimes you need to set up processes where you need to ask for permission, but I just have a strong visceral negative reaction to that. Every time you need to ask someone for permission, you're slowing someone down. It's not always a bad thing, but I really tend to ask, "Where in my organization does anyone need to ask anyone else for permission, and can I remove that step?"
You can let innovators in your organization have sandboxes, so it's safe for them to experiment because they can't create downsides for the organization, but let them just run like crazy and prototype and prove things out without needing to ask too many people for permission. You get a lot more of those sparks of ideas. I think even in light of data privacy regulations, which obviously I take seriously, hopefully there are ways to set up sandboxes to let innovators run fast without needing to ask a lot of people for permission.
Matthew Brockman
In this context of innovation, I think there's also trying to manage how you think about ROI in an uncertain environment. As you say, it's not like you can just say, "Well, I'm going to launch that product and it'll make that amount of revenue." There's a degree to which it's just an uncertain circumstance you have to be more comfortable with, I guess, at a time like this, even if you're in a very established business.
Andrew Ng
Oh, yeah. And one piece of good news is it's much more capital-efficient than it ever used to be to build prototypes. So at AI Fund, we budget $55,000 US to get to a working prototype, or something like that.
Matthew Brockman
Is that right?
Andrew Ng
And it's not nothing, but you know, $55,000: if an organization can sustain that loss, you take a bet, fine, you lose $55,000, you can sustain that magnitude of loss. You can just take a lot of shots at goal.
Matthew Brockman
And would that be a prototype for a specific application in an industry? So that might be, as you say, here's a product that would start to help me think about, you know, the example you gave around tariff calculations? Is that the kind of level you're thinking about, so it's a $50,000 sort of entry ticket to at least get a product spec'd and running?
Andrew Ng
Yeah, we try to get to a working prototype, and then if it still looks good, at AI Fund we then invest, say, a million dollars behind it. We tend to be very frugal. Maybe not everyone can do things with our capital efficiency, but I think it is just cheaper than most people realize to build prototypes.
Matthew Brockman
Any words of advice? If I'm a CEO of a SaaS application software company today, what would you suggest I should be doing? What would you suggest I should be worried about? How should I spend my time, I guess both on learning about technology, but maybe on organizational change, you know, the classic innovator's dilemma: what got me here, will it get me there? Any thoughts on that?
Andrew Ng
This change feels pretty disruptive. I feel like there'll be a lot of top-down and bottom-up innovation needed to navigate what the future end state will be like, with generative AI being more pervasive and used, too, by many developers. I think at this moment in time, the end state for a lot of businesses may not be clear, which means that leaders are in this interesting situation of needing to execute change management toward an uncertain endpoint. We've all maybe seen the standard MBA change management playbook: articulate a clear vision, blah, blah. But because the technology is evolving so rapidly, many teams are executing change management toward an uncertain endpoint, which is different. So I think the tactics are things like encouraging your workforce to learn Gen AI in a deep way. Don't just learn prompting; really learn to master this and make a computer do what you want. And then empower a lot of bottom-up innovation, which often means... I'm already seeing non-software professionals that learned to code being more productive than people in the same role that don't know how to code at all.
So my marketers that know how to code are sometimes more productive than marketers that don't know how to code. And so I think: provide training, enable bottom-up innovation, and combine those top-down and bottom-up views of where the business may be going, even as we continue to navigate through this uncertainty. But it's a very exciting time to be building. And the other thing is, I think, just go and build. It's so cheap to build things, and there are so many ideas worth building that no one's even gotten around to yet. So the other piece of it is just go build, build, build. There's so much stuff we could be doing.
Matthew Brockman
Well, listen, it's been wonderful to speak, Andrew. Thank you so much for your time today. I appreciate you're a super busy man, so thank you for the time and for the insights. Thank you.
Andrew Ng
Thank you. It is a real pleasure to be here. Thank you.