This episode features Caster Eunice, CEO and cofounder of Applied Intuition, a $15 billion under-the-radar AI company that adds intelligence to physical machines like cars, tractors, planes, and mining equipment. Unlike other AI CEOs who build in public, Caster has remained deliberately quiet for nearly a decade while building relationships with 18 of the top 20 automakers and major construction, mining, and defense companies. The conversation explores his contrarian approach to company building, including staying quiet until recently joining Twitter, his philosophy of 'radical pragmatism,' and the belief that emotions should be removed from rational decision-making. Caster argues that physical AI will have more immediate impact than software AI, particularly in industries facing labor shortages where the average farmer is in their late 50s and dangerous jobs like mining and trucking desperately need automation.
He presents an optimistic view of AI's future, comparing it to the Industrial Revolution - acknowledging downsides while emphasizing the potential for reducing human suffering through better access to healthcare, mobility, and basic services. The discussion also covers his unique management philosophy of encouraging dissent to avoid groupthink, his extensive reading habits focused on old books rather than current content, and his belief that many Silicon Valley CEOs lack taste due to narrow life experiences.
Key Takeaways
[1:19]
Applied Intuition is a $15 billion AI company that has been quietly building for nearly a decade, working with 18 of the top 20 automakers plus major construction, mining, and trucking companies. They add AI to physical machines like cars, tractors, planes, submarines, and mining rigs - essentially being 'Waymo or Tesla but without the hardware.' The company has never spent any of the capital they've raised and is profitable with over 1,000 engineers.
[4:35]
Caster believes AI will bring about an abundance revolution similar to the Industrial Revolution, dramatically reducing human suffering. He gives the example that 80% of German towns in WWII lacked electricity, yet today we take such infrastructure for granted. AI will democratize access to services like personalized coaching, mobility for disabled people, and healthcare in remote areas like Rwanda where people live two hours from the nearest hospital.
[9:50]
The core root of AI anxiety is misunderstanding. Caster advises people afraid of AI to actually learn about the technology - watch YouTube videos showing AI struggling to understand a cup held upside down or recognize that nunchuck-wielding robots are pre-programmed demonstrations costing $15 million, not autonomous beings. 'Get to know it, then actively make the technology be used for good.'
Spend time learning about AI technology if you're anxious about it
To see limitations and reduce fear through understanding
Read old books rather than new ones
Time has filtered out noise, so you get more signal per book
Identify areas where you're ignorant and find the best book in that space
To build broader knowledge base and become more well-rounded
Derive company values from analyzing why you're successful, not from abstract ideals
More authentic and actionable values that reflect actual success factors
Take notes and follow up consistently
Half of business success is in the follow-up and maintenance
Encourage everyone in your organization to speak up regardless of seniority
To surface the best ideas and avoid groupthink
Remove emotions from decision-making processes
Emotions are based on past experiences not optimized for current business decisions
Treat your first startup as practice with zero expectations of success
Reduces pressure and allows focus on learning the craft of being a founder
Books Mentioned
House of Huawei
by Eva Dou
Company read this to understand how Chinese companies like Huawei operate as extensions of the state rather than profit-driven businesses
The Vibe Coding
Whole company is currently reading this new book, though it goes against his usual heuristic of only reading old books
The Emperor of All Maladies
by Siddhartha Mukherjee
Cancer book that Caster says changes the way he thinks - the ultimate test of good material
Made in America
by Sam Walton
Called 'unbelievable book' written by Walton on his deathbed
My American Journey
by Colin Powell
Described as 'very good' though not on his website
Guns, Germs, and Steel
by Jared Diamond
Top of his list for connecting dots from cave people to modern Silicon Valley
Collapse
by Jared Diamond
Also recommended by same author as Guns, Germs, and Steel
SPQR
by Mary Beard
Roman history book he picked up because he realized he didn't know much about Roman history
The Autobiography of Malcolm X
by Malcolm X
Believes reading this will make you a better founder, even though connection isn't directly obvious
High Output Management
by Andy Grove
Mentioned as a classic that everyone already knows about
People Mentioned
Marc Andreessen
Quote tweeted Caster's first tweet saying 'this is the best AI CEO nobody knows' and encouraged him to go online
Elon Musk
Replied to Caster's first tweet
Elad Gil
Famed investor who describes Applied Intuition as 'the most successful, most quiet company in AI'
Naval Ravikant
Investor and friend who says 'fame itself is like a tool and it's powerful'
Peter Ludwig
Caster's cofounder at Applied Intuition
Sam Altman
Was president of Y Combinator when Caster was COO
Joe Montana
NFL quarterback who is an investor in Applied Intuition
Charlie Munger
Quoted as saying 'I've never met anybody very successful who doesn't read all the time'
Steve Jobs
Referenced for the quote about great artists stealing, which Caster interprets as being humble and learning from everything
Notable Quotes
"This is the best AI CEO nobody knows"
— Marc Andreessen
Quote tweet of Caster's first Twitter post
"Our best work is done alone and quietly"
— Caster Eunice
Core company value at Applied Intuition
"The core root of fear is misunderstanding"
— Caster Eunice
Explaining why people are anxious about AI
"Every minute you're writing something for public consumption, you're not focusing your very limited time that you have on your customers and your product"
— Caster Eunice
Philosophy behind staying quiet and not building in public
"Get to know it, then actively make the technology be used for good"
— Caster Eunice
Advice for people anxious about AI
"These industries need autonomy, and it couldn't come soon enough"
— Caster Eunice
About farming, mining, and construction facing labor shortages
"Half of the work is follow-up"
— Caster Eunice
Company value about the importance of maintenance and execution
"Truth is what stands the test of time"
— Gandhi
Referenced by Caster when discussing finding the best ideas in company culture
"I've never met anybody very successful who doesn't read all the time"
— Charlie Munger
Used to support the importance of reading for founders
"One death is a tragedy, a million is a statistic"
— Stalin
Referenced when discussing the 30,000+ annual car accident deaths in the US
Other Resources
Y Combinator
accelerator
Caster was COO and saw hundreds of startups up close
Tesla FSD
product
Example of L2++ autonomous driving technology that will become ubiquitous
Waymo
company
Represents the high-sensor, high-compute approach to autonomous driving
Roomba
product
Example of robots already around us that we don't think of as robots
CarPlay/Android Auto
technology
Analogy for how full autonomy will become standard in cars
Berkshire Hathaway
company
Inspiration for Applied Intuition's quiet, long-term focused approach
Stripe Press
publisher
Published book about maintenance that aligns with Applied Intuition's philosophy
Full Transcript
You decided to join Twitter recently. You put out your first tweet. Marc Andreessen quote tweeted it and said, this is the best AI CEO nobody knows. Our best work is done alone and quietly. Every minute you're writing something for public consumption, you're not focusing your very limited time that you have on your customers and your product. You're building a lot of the future that we're gonna be living in. What does the next couple years look like? Us solving some of these impossible problems like cancer are directly gonna be related to this AI boom. Net suffering in humanity overall should go down significantly. A thread that has emerged on this podcast is that AI is coming just in time to save us. The real impact of AI in the next five to ten years really is gonna be in farming, mining, construction. These industries, they need autonomy, and it couldn't come soon enough. If you look at farmers, the average age of a farmer is in their late fifties. What does that mean in ten years from now? There's a lot of anxiety about what AI is gonna do to the world. The core root of fear is misunderstanding. If you at home are very anxious about AI, the best thing that you can do is spend time to understand, and you will quickly see the limitations. Get to know it, then actively make the technology be used for good. Today, my guest is Caster Eunice, cofounder and CEO of Applied Intuition. You have probably never heard of Caster or Applied Intuition. This is the most important under the radar AI company and CEO that I've ever come across. It's a $15,000,000,000 company that has been growing quietly over the last decade. What they do is they add AI to vehicles, like cars, tractors, planes, submarines, mining rigs, and a lot more. 18 out of the top 20 automakers are customers, as well as the biggest global construction, mining, and trucking companies, also the Department of Defense. They're basically Waymo or Tesla, but without the hardware. 
Caster himself was born on a farm in Pakistan, grew up in Detroit, started his career as an engineer at GM, and then at Bosch. He then went on to start a couple companies before starting Applied Intuition. I love everything about this episode, and I am so excited to bring it to you. Don't forget to check out Lenny's productfast.com for an incredible set of deals available exclusively to Lenny's newsletter subscribers. Let's get into it after a short word from our wonderful sponsors. This episode is brought to you by Omni. Many product teams today are in the process of debating how to ship AI analytics. The hard part is obvious. Having an LLM guess at SQL in production is a huge mess and just a bad idea. Omni takes a different approach. They have a semantic layer built in so that when you embed their analytics, the AI actually knows your business definitions, not just your raw tables. You can test queries, validate the reasoning, and lock down permissions before anything hits production. If you want AI analytics in your product without building the whole stack from scratch, check out omni.co/lenny for a free three week trial. Companies like Perplexity, DBT, and Buzzfeed use Omni to ship analytics their customers can trust. That's omni.co/lenny. My podcast guests and I love talking about craft and taste and agency and product market fit. You know what we don't love talking about? SOC 2. That's where Vanta comes in. Vanta helps companies of all sizes get compliant fast and stay that way with industry leading AI, automation, and continuous monitoring. Whether you're a startup tackling your first SOC 2 or ISO 27001 or an enterprise managing vendor risk, Vanta's trust management platform makes it quicker, easier, and more scalable. Vanta also helps you complete security questionnaires up to five times faster so that you can win bigger deals sooner. The result? 
According to a recent IDC study, Vanta customers save over $500,000 a year and are three times more productive. Establishing trust isn't optional. Vanta makes it automatic. Get $1,000 off at vanta.com/lenny. Caster, thank you so much for being here. Welcome to the podcast. Thanks for having me. You're basically building a lot of the future that we're gonna be living in, and people may not even realize this. And there's kind of two sides to this. On the one side, let me ask you this question. If things go really well, what does the next couple years look like for people with the emergence of AI, with physical AI? What's a vision of the future? Let me take the broader AI point and then the more specific one on physical AI. Macro, I think about this like the Industrial Revolution. Right? So if you're sitting, let's say, in the late eighteen hundreds, there's a lot of, you know, we can focus on a lot of bad things that happened because of the Industrial Revolution, right? You have child labor and you have monopolies emerging and you have abuse, you know, wars end up happening. But there's also, it's an almost unimaginable present without the values and without the kind of benefits we got out of the Industrial Revolution, which is broader access to health care, like, we've never seen before. Like, access to goods, material goods, things we take for granted like heating and cooling your home. There's this great YouTube channel that focuses on POW letters from Germans who are seeing America in the early forties, and they're writing letters back to Germany about what they're seeing as they're basically, you know, prisoners of war. And they're kind of blown away that the towns that they roll by in these trains as they're going to their POW camps are all lit up, or that there's cars everywhere. 80% of German towns in World War II did not have electricity. 
And that's kind of a mind bending thing, because we just assume all this stuff, all this technology is, you know, equally distributed. So the positive version is these things that, let's say, folks who are wealthy or folks who have access to technology have, these things everybody has access to. The fact, like, simply having somebody who's a coach to you, and having that coach very specific to you, not a generic, you know, ChatGPT that's giving fairly generic answers. That is a very powerful thing. I think us solving some of these impossible problems like cancer is directly gonna be related to this AI boom. So I think net suffering in humanity, just like the Industrial Revolution, overall, should go down, and should go down significantly. And I'm a fundamental optimist in that view that technology will bring that positivity. In physical AI specifically, again, you know, when you have things like, you have your own car and you have the ability to use your limbs and you have your senses and you can drive, you take these kind of things for granted. You jump in your car and you go to the store. For somebody who maybe is disabled, or somebody who doesn't have the money to afford a vehicle, access to mobility that's nearly free or is free is a big deal. And that, like, simple example of making self driving cars free for everybody and how that would change the planet, you live in Rwanda and you are two hours from the nearest hospital, that matters in a very, very true way. And so I think a lot of, let's say, the negativity around AI comes from people who, frankly speaking, are living in a very, very good existence. And when you live on the other edge of society, yes, and I'm not like some naive person who thinks that there's no downsides of technology, we can discuss that. But I just see there's a lot more positive. So when you ask that question, what's the next, forget three to five years. 
What's the next twenty years? These things that we take for granted that are bad, suddenly are not there. And I think certain diseases, certain inaccessibility to basic, you know, services, suddenly start going away. One last example of that is, take just the fact that you can message people basically for free. You know, for people old enough, like, this is not the norm. We came from Pakistan. We couldn't even communicate back to Pakistan because long distance, you know, was so expensive, and so it was handwritten letters. Today, you can contact anybody on the planet basically for free. There's obvious downsides to that. But there's a lot of upsides to that, which is being in touch with people that you care about and you love, basically for free. And so I think AI has the ability to bring this abundance to many, many more people, at a near free, you know, free cost. On the flip side of this, as you pointed out, there's a lot of anxiety about what AI is gonna do to the world, to jobs, robots. There are these videos coming out of China with these robots with nunchucks, like the stock market. I feel, you know what, I feel the nunchuck union is up in arms with that. How dare they? Yeah. But it's, you know, it's scary. And, you know, the market's reacting more and more, just like, oh, wow, these companies are maybe not gonna survive long term. Again, being at the center of this and building a lot of the stuff that'll get us there, how do you envision the next couple years playing out? Like, are you optimistic? What keeps you optimistic? Any advice to people to help them kind of stay, you know, calm through this period? So there are two separate things: anxiety around the technical shift, and then public investors reacting to specific stocks. We have to separate those things. So let's talk about them separately. 
On the first one, you know, the core root of fear is misunderstanding. And I think if you at home are very anxious about the impact of AI in some variant on your own job, the best thing that you can do is spend time to understand it, and you will quickly see the limitations. There's some great videos on YouTube, which are, like, you know, trying to get Gemini to understand what a cup is by just holding it upside down, and it, like, really struggles. Then they do it with, you know, ChatGPT. So it's like, if the revolution is coming, you know, the AI overlords have to first understand, like, the top and bottom of a cup. And so you realize, you can see the video of nunchuck wielding humanoids, which are preprogrammed, and it cost $15,000,000 to do that video. Yeah. That is true. It's not fake. I'm not implying it's fake, but it's also not what your brain kind of fills in the gaps with. You see nunchuck robots, and you just feel like, well, you know, these are sentient beings that are going at their own volition, rather than a bunch of motors that have been programmed to do a certain thing. If you really wanna be impressed, you go to a car factory, and we've been doing that for twenty five years. We have very, very advanced robots moving extremely fast to build things. And why don't we have anxiety about the car factory, but we have anxiety about the nunchuck robots? Because we understand the gap of, you know, a welding robot. You say, okay, that's a robot, it's been programmed to make this weld. But we, just as an individual human being living in, you know, the world, don't know how that robot was made to do that nunchuck thing. And so you substitute that with anxiety and fear. And so I would really implore you to, you know, kind of learn more about the technology, and you start seeing the edges. 
Now, does that take away from the most fundamental thing that you're getting at, the string that you're pulling at, which is, like, is society going to be fundamentally harmed? And is this, you know, net net bad for society? I think in any technical shift, the emergence of WhatsApp, this is an example, there are people who are damaged by that, literally companies that go away, but also humans who are damaged by the advent of that technology. And so I think as members of society and as leaders in society, we can kind of move that funnel in whichever way. You know, technology first, remove the word AI. AI is such a, like, emotional word, because it's wrapped in these things you don't know. And so that fear then kind of deforms. So let's just say technology. You know? So I think it's up to us to recognize this technology can be used for good and technology can be used for bad. And I think that's where really the focus is. So get to know it, and then actively make the technology be used for good, as a participant, whether you're a founder or all the way to an individual, you know, employee citizen of a large company. Then on the second part of the question about, you know, public investors and stuff, I don't have any, you know, particular research on this, but this is my guess at what's actually happened. Beyond being an engineer, which is my core identity, for lack of a better word, I also did an MBA at Harvard. I didn't come from a very, you know, wealthy upbringing. And this was the first time, when I went to Harvard, I saw, like, you know, that world, that world of people having, like, private jets and stuff. It was a really eye opening experience for me. But the real world I was exposed to was high finance and how high finance works. 
And you might think, as I did, from far away, that folks at hedge funds or at large, you know, public equities funds are extremely nuanced and thoughtful, and they are, like, you know, on whiteboards with extremely deep and maybe even theoretical math to figure out should they buy or sell, you know, Figma. And that's not actually how it works. I mean, really, what these folks do, and in this specific case, I think what's happening is, they buy and sell stock. They are smart people, and they do work hard. It's not to take that away. But they don't have a fundamental edge that you would assume somebody who sits in a, you know, skyscraper in New York has. And by the way, that's why retail investors have become such an active and kind of significant part of the market. So those folks have gone to AI consultants and have gone to people who are literally developers at these firms. And they'll do something like, hey, why don't you build me this app in a week? And then, you know, this, like, consultancy will come back with an app which kinda looks like maybe a Figma or another, you know, some web app. And to that hedge fund manager, and then, you know, if the company was sitting there, they would say, no, no, this just looks like my app, but this is actually not my app. It's not as deep. It doesn't have all these things. There's the integrations with all these other systems. But for the public investor, they're like, yeah, but it only took, like, a few weeks or a month to build this. It took you 500 engineers a couple of years. This AI thing could be real, and the things I'm reading on X about, like, just vibe coding your way to replacing, you know, billion dollar companies, that might be the case, and the market immediately prices in that risk. And that's where that sell off comes from. 
That doesn't necessarily mean all of those, I mean, just within the last twenty four hours, I had a, I can't say his name, but a very, let's say, calibrated investor who said this is the time to buy, because these companies are not actually going away. And so I think those two, anxiety within society and the sell off, are two very different things. They're motivated by different things. They're part of the larger AI narrative, but I wouldn't conflate those two things. It's not that the hedge fund investor is like, I'm worried about society, sell ServiceNow. Like, it's different than that. At least that's my impression. This is the alpha we've been talking about. Time to buy. This is not investment advice. Well, that's really good advice. I think the real advice is to fight fear. And I feel that anxiety, especially when I go to Michigan, you know, outside of people in the Silicon Valley bubble. It's like, just try to learn a little bit about the technology that you're afraid of, and you'll start seeing some of the edges. I love your point about how self driving cars are essentially robots. We don't call them that, but they're robots. Absolutely. And you see a nunchuck wielding robot. A self driving car doing bad things could be very dangerous already. And so that's a really good reframe, if you just think of it as just another robot. And it's been really good for us. And by the way, the self driving thing as an example, you know, whichever way you slice the statistics that are available from self driving companies, they're supremely, supremely more safe than human drivers. And I do believe in twenty or thirty years, not that much longer, we'll look back, and it's kind of like we think about child labor. You know, post industrial revolution, that was a normal thing. You would send kids who are in middle school to go work. 
It happens in third world countries today. There isn't a lot of emotion behind it. It is not considered to be exploitative, because you have no choice. You know, and I think we'll look back in twenty five, thirty years where, like, people were just tired, under the influence, you know, extremely stressed, going through a traumatic life experience, and then they jump into a car. That is crazy. And just for everyone, everyone should really emotionally think about, just in the United States, over thirty thousand people will die in the next year from these accidents. It's like the old Stalin line, you know, one death is a tragedy, a million is a statistic. And we just let the statistic kind of go over our head, like, oh, it's 30,000 people. But if you ever have talked to a family of somebody who went through a tragedy like a car accident, it's unbelievable. And, you know, suddenly all the fear of AI robots goes away and you really see that human impact, and you realize, like, actually, us driving doesn't make sense, and it's not for any other reason than literally people die. I've become a huge, I have a Tesla, and I've just used self driving all the time now. Just like a few months ago, it got very good, and it used to be nerve wracking. And now it's like, wow, this is much better than me. And you're not doing driving as a job. Imagine if you're a commercial truck driver, or you work in a mine, you know, there, a little bit of intelligence, a helping hand in that very dangerous task, it's incredible. And I think there's something about the human brain where, you know, when you bring up that reality of, like, self driving trucks, immediately people are like, well, what about the trucking jobs? 
Now, needless to say, we don't have enough people who wanna do that job. So leave that fact to the side. The fact that you really focus on is the fact that people die from trucking accidents. Like, we can't, you know, throw out the baby with the bathwater. And so I implore everybody, you know, who thinks about AI broadly, and physical AI specifically, to always recognize that your monkey brain is programmed, because of thousands of years, you know, hundreds of thousands of years of living out in the wild, of being in the cave, that when you hear the rustle in the bush, you think it's a snake, because that's how our ancestors were programmed. So now when something new enters our psyche, your view is, well, if mines became autonomous, wouldn't that lose jobs? It's like, those are awful jobs that people die in, and the best evidence is that people don't wanna work in them. Like, that's the best evidence. Like, nobody's clamoring to go work in a mine in a remote area. And so intelligence can help make that reality much, much better. People are seeing AI advance in all these different ways on the software side. They see all these models being released. It's driving 100% of people's code now. What's really cool about you is you see the hardware side of this. And I think one of the biggest changes to our lives will probably be robots walking around doing things for us. Do you have a sense of just how close we are to just robots around us day to day? So the framing here matters again: I would think about this on a spectrum. So there are robots around us, like Roombas. You know? Like, they clean your carpet while you're sleeping. There's robots around you when you make a coffee. That's an automated machine that is taking an input and doing a bunch of things based on, you know, what you need. 
So what you're really talking about is how fast can you go up that spectrum to where you have a robot that can take on lots of tasks with little guidance. And the way that I would think about this is, let's say this podcast is happening not in 2026, but 2006. And you're asking me the same question about mobile. And you say, well, mobile is coming. This is, remember, pre iPhone, which comes out in '07. Everyone has got those flip phones. So we have some mobile. It's not like a completely, you know, so we have some robots around us already, but it's like, okay, when are we gonna, and you asked me in 2006, when are we gonna get that Star Trek phone that can do everything? And I think at that time, because I don't even know the iPhone is coming a year later, I would say, well, Lenny, I don't know. Maybe it's one to five years. And it's only five years later that Uber, WhatsApp, Instagram, Snapchat are all products, and they're being consumed by many, many, many millions of people. So what happens when you think about sitting in 2006, and why can't your brain figure out that Instagram is coming? Instagram is very hard to even conceive without phones that have an app store, have cameras on both sides, are generally available, that lots of people have. And the fact that people are comfortable being on social networks. In 2006, that's still an early thing. This is pre Twitter, and Facebook is not that big. And Myspace is, but it's not the same type of private kind of community. And so the point I'm making is, I think it can come pretty fast, but the way and the form factor it will come in is hard to pick, just like it's hard to figure out Instagram's gonna happen, because the intelligence in that particular type of hardware, which will be generally available, that's a keyword, generally available, is really gonna impact the use cases. 
So I think, like, the most obvious use cases that will come early are going to be use cases where you get the most amount of bang for buck. And the bang for buck is a car that drives itself, or a mining robot, which is a mining vehicle which is now intelligent. And the reason is all that, you know, let's say, engineering required to make this giant, you know, machine that moves dirt has already been done. It's been done over the last, you know, fifty, sixty years. So then you're just inputting a little bit of intelligence into it and leveraging everything else that the, you know, companies and kind of people have developed. So, I mean, and I'm not just, you know, pitching my own book. I mean, we're a physical AI company. I continue to believe that, I think our brain emotionally loves the humanoid concept because we're monkeys. But, actually, just, like, more pragmatically, it's actually just putting intelligence into things that already exist all around us. And then once that happens, the new applications will emerge, which I think we'll start seeing in five to seven years. So let's just move forward five to seven years, and let's see what reality exists, and then maybe we can try to jump into the future from there. I think, generally speaking, every single car company on the planet right now is working on a product that's like a Tesla FSD product. Every single car company, without exception. Many, many companies are working on versions of that that will become fully autonomous with a cheap sensor suite. So the fundamental difference, just to simplify it all, the Tesla approach versus the Waymo approach, just to really keep it simple, is the Waymo approach is lots of sensors and lots of compute and maps. And the Tesla version is very few sensors, no maps, or, you know, no high fidelity maps. I'm just generalizing here. 
And cheaper compute, for lack of a better word. The Tesla version of the product, what the industry calls an L2++ product, is going to be available everywhere because it's literally cheaper and doesn't require HD maps. The Waymo product functions better in a geographically constrained area. Fast forward five years and both of these technologies, L2++ and L4, will be much more ubiquitous, not only in the Bay Area or in parts of China but globally; there are companies working on this all over the world. Now, I don't know if you remember, but nav systems used to be a big deal in cars. You would pay thousands of dollars; nav systems were the thing everybody wanted. We're at that moment for L2++ systems, where people are willing to pay thousands of dollars for a semi-automated vehicle. That won't last long. You're already seeing it happen in China, where downward pricing pressure will push that autonomous product, for lack of a better word, close to free. So fast forward five to seven years and every car has some level of autonomy. You have to mentally live in that reality: everybody who buys a car just gets FSD with it. Now you start seeing a different world, because the average person isn't wondering whether self-driving is coming; they use it all the time. It's what happened with nav systems: CarPlay and Android Auto emerged, and it felt very natural. People said, oh, I have my phone, I just plug it in. It didn't feel like a big revolution, but the CarPlay and Android Auto revolution was actually huge. It brought free navigation and free applications to your car, and it's fairly ubiquitous. So I think the next thing that happens in five to seven years is that full autonomy becomes the thing everyone expects.
And with all of that, you will see a clear decrease in injuries and deaths, because you have some intelligence helping you out. Again, I'm using consumer vehicle analogies so people can understand, but it's the same in construction, the same in mining, the same in defense. In every one of these verticals there are big physical machines that humans interact with, and teaming up with those machines is the future. The productivity unlock from looking at a machine not as a sentient being but almost as a physical agent of something you're trying to accomplish opens up things that are very hard to even imagine. I love things like Motebook, and I love the OpenClaw revolution that's happening, for lack of a better word. But that's still such a small part of society. My barometer of impact is this: go to the Detroit airport, sit at a gate, look around, and ask how many people here are using OpenClaw. You might be the only person who even knows what that is. Everybody else is living their lives, and to them the impact of AI is going to be in the physical world. I see you've got some there too, yeah, there you go: OpenClaw. Perfect. I'm improvising on the spot here, so I got out some lobster claws. I think the real impact of AI in the next five to ten years is going to be in farming, in mining, in construction, in self-driving trucks. That's where you're going to have real impact.
Though I love the stuff that's happening on these platforms, it's still confined to, frankly, developers and a very, very small part of society. I wasn't planning to spend so much time here, but this is extremely interesting, and I think it's important for people to hear from folks like you about where things are heading, because, as I said, everyone's wondering: what is happening? What is going to be my future? The jobs piece is really interesting. A thread that has emerged on this podcast recently is that people are afraid AI will take their jobs, but in reality AI is coming just in time to save us, because populations are declining, people are aging, and we need something to help us. I know this is something Marc has talked about, and you're really close to him. Help us feel better about how AI isn't going to take our jobs and is actually going to save us. Yeah, honestly speaking, these industries need autonomy, and it couldn't come soon enough, frankly. People are not fighting for those trucking jobs. If you look at farmers, the average age of a farmer is in their late fifties, 58 or so. What does that mean ten years from now? It means many of those farmers will be retiring, if they're not already retired. In twenty years we have an even bigger problem. And, by the way, every vertical is like that. This is my hypothesis, but when people say McDonald's can't hire, or the local mine can't hire, and ask where all the people went, I think the trade-off just isn't worth it anymore. In the nineteen-eighties and nineteen-nineties, a long-haul trucking job was a sacrifice the family had to make.
The father not being there for days and weeks on end. Today, that same working-class family can make a different decision and say, you know what, I'll drive for Uber or DoorDash, because I can turn the app off and pick up my kid, and I prioritize that. That's where I think this intelligence revolution in the real world is going to fill in the gaps, rather than an entire industry suddenly disappearing because it's automated. I don't believe in that future, mainly because the reality of replacing an entire industry with robots is still too complex. One day it will happen, but not anytime soon, and the entire society will be different by that point. Again, use the Industrial Revolution as a good model. To the earlier question, if I'm somebody who is not in the AI ecosystem and I have this anxiety, how would I deal with it? Reading history books is a great way to understand how society deals with this, and there's a lot of literature. The Industrial Revolution didn't happen at the dawn of Christianity, when few people were writing or reading. In the last hundred and fifty years, lots of people were writing and lots of people were reading, and you can read both the people who were hurt by the Industrial Revolution and the people who benefited from it. Writ large, it's a very positive experience. That doesn't mean there are no downsides; we should mitigate the downsides. But the thing we can't do, whether as America or as the global population as a whole, is give in to this impetus to say we've got to pump the brakes, and don't say AI, say technology, pump the brakes on technology.
The issue then is that the American economy ends up stuttering, and that hits the lowest end of the labor market far harder than anybody else. So in the attempt to help the most marginalized people, we actually hurt them the most. The statistics between Europe and America have been pretty explicit: over the last decade, the American economy has grown at a much higher pace, and that growth hasn't come from Detroit, Michigan. It has come from Mountain View, from Sunnyvale, from the Bay Area, which is another way of saying it has come from new frontier technologies. So putting the brakes on frontier technologies because we're afraid of unintended consequences will have real, intended consequences for the very people we're trying to help the most. And the reality is very fundamental: in a future that does not take care of the average worker and the average person in America, we'll have much bigger problems. We need a solution that takes that into account, but that solution isn't just pump the brakes, AI is bad, or frontier technology is bad, or technology is bad, or whatever thing you don't like. That will have really, really bad consequences. One of the reasons we don't pump the brakes is fear of China and competition with China, the nunchuck robots being a recent example of "oh, shit." And you have a kind of contrarian take on how much of a threat China is and how they're approaching things. The summary version is this: we recently read a book as a company, House of Huawei, which is a really great, interesting book. And Huawei is a really amazing company for the reason that it makes great technology.
But of the couple hundred thousand people who work at Huawei, about a quarter are members of the Communist Party. And Huawei's goal is not to grow profits or serve shareholders. It's a private company, but it's really an extension of the state; literally, the name Huawei means China's ambition. Imagine if you had a company called MAGA, a quarter of the company belonged to a certain political party, and they said our goal isn't to make profits, our goal is just expansion. It's not even a company anymore; it's something else. Right? So I think we, speaking specifically of Americans, think about China incorrectly: we impart our understanding of markets and companies onto China. We think Huawei, since they make phones, must be just like Apple. No, no, it's not like Apple at all. So the first thing I would implore everybody who thinks about China, especially with anxiety, to understand is that you're not comparing companies to companies. This is not apples to apples; it's very, very different. Instead of thinking OpenAI is competing against DeepSeek, say OpenAI is competing against the Chinese government. Instead of Apple competing against Huawei, Apple is competing against the Chinese government. And you could even remove the word Chinese; government is the best word to define what this organization is, but it's not a for-profit, privately owned, independent group of people working together to bring products to market. So that's the first very important thing: you cannot treat China like another America or another Europe or another anything. Number two is that if your goal isn't to make profits, you can do incredible research, and it can be extremely compelling. But as we've seen, if the system is not sustainable, that's also not a company, and that's not sustainable.
Let me give a very stark example. Chinese EVs are lauded as an exceptionally interesting product, right? And you constantly get this stream of, I would say, fairly shallow analysis that says, look how good China is and look how bad Munich, Detroit, Tokyo, and the other epicenters of global automotive are. Well, there is a Chinese-EV-like company in America. It's called Rivian. It makes great products, but it loses a lot of money making them, and therefore the company is not very highly valued. If you listed the top 50 or top 100 companies in the Bay Area, I'm not sure Rivian would make that list. And it's not that the products are bad, or that the people at Rivian are incompetent or not working hard; it's just that the EV business in automotive is a tough business. So how can we hold these realities together? We say, look how amazing these Chinese EV companies are and look how bad the home team is, but the home team is being assessed as a business. It has to make profits, and because it doesn't, it gets hammered by public investors. The other thing is not even a company. Now, if we make it apples to apples and say America just has to build great EVs, Tesla and everybody else combined, and we don't care about profits, I think America would field some very good products, wow products. So the comparisons are really, really off, and that creates a misunderstanding. Then there's maybe the most philosophical question: can China succeed, and does that mean America has to fail, or vice versa? If you believe in open and free markets, you believe everybody can succeed in those markets, and that's been proven for over a hundred years. What we're experiencing right now is the question of how China plays in that ecosystem, because I said open and free markets, and those are not open and free markets.
But that doesn't necessarily mean you have to have an antagonistic relationship. It certainly doesn't mean China is incompetent, and it certainly doesn't mean it doesn't warrant our attention and focus. But it's also not a one-to-one comparison, and I think we should be very careful about implying that it is. And by the way, that five-minute explanation is never going to reach the average person sitting at an airport in Detroit, Michigan, waiting for their flight. All they consume is "China bad." It's not like that. It's not that simple; it's way more nuanced. So, you decided to join Twitter recently and put out your first tweet, which was basically, hello, I'm going to start tweeting. That tweet got something like 2,000,000 views. Elon replied to you. Marc Andreessen quote-tweeted it and said, this is the best AI CEO nobody knows, follow for the free alpha. Elad Gil, the famed investor, describes you as the most successful, most quiet company in AI.
And to me this is really interesting, because most founders are told: build in public, build a following, be loud, get out there, talk all the time about what you're doing. You did the opposite. You stayed under the radar, stayed quiet, and built, built, built, and then decided later, okay, now it's time to talk about our story. I think this counter-narrative is really interesting, and it will let a lot of founders feel they don't have to do this. What was your philosophy of staying quiet and then starting to talk? Yeah, it's a great point. Number one, it was intentional, and if it were up to me, we would do it forever. We're very much inspired by folks more like Berkshire Hathaway and less like a Silicon Valley darling. I'll tell you why I changed my views, but before some founders take that advice immediately, without really thinking about it: I can do this because I'm known in the ecosystem. I know these folks personally, so I don't need a public brand to get Elad to remember me and think about me. With my first two companies, before I came to YC, I was a lot less known. All of our company values can be reduced to two words: radical pragmatism. So before you take the advice, make sure it applies to your situation. Naval, who's one of our investors and a friend, says fame itself is a tool, and it's powerful. If you don't have a network and you can build a following, that's a fantastic way to recruit people to your company, recruit investors to your mission, and of course reach customers. But for us, that wasn't a hard requirement ten-plus years ago.
The other thing is the old saying about life: you do things and then you rationalize the things you did. Fundamentally, my cofounder Peter Lutterwee and I don't get a lot of emotional satisfaction out of doing very public things. If I were to play armchair psychologist and get to the root of why, beyond the rational view, the rational view is: focus on your customers, focus on the product. Every minute you're doing a podcast, every minute you're writing an X post, every minute you're writing something for public consumption, you're not spending your very limited time on your customers and your product, and ultimately that's the only thing that's going to yield results. But the reality of the situation today, in 2026, is that even for a company like ours that's known, or somebody like me who's known in the ecosystem, you still want to get the broader message out, and that's what I talk a little bit about on X. So it is definitely contrarian, but it's not contrarian for contrarian's sake; it plays to a bit of our own psychology. And to finish that thought: I'm an immigrant. I came to the US from Pakistan as a kid. I have a little bit of a weird name, and I grew up in Detroit and Warren, Michigan, specifically, for all those at home. When you feel you're a little bit on the edge of society, not quite in the mainstream, and this resonates with some people and not with others, you become very skeptical of the mainstream, because you've been on the outside for so long. I think you can trace a lot of founders' psychology to this feeling of being an outcast, actually.
And then you find yourself in a situation where you're, say, the COO of YC, and the narrative of "I'm an outsider" falls apart: I don't know if there's anything more inside than being the YC COO. Right? So over my career that reconciliation has had to happen. When I talked to Marc Andreessen, who really pushed me to go online, or Elad, or whoever, their view was: leave your baggage and your trauma in the background and think more pragmatically. The pragmatic thing here is that whether I like doing these types of things or not, it helps get the message out. And the message can be something small and specific, like what's happening in physical AI as machines become intelligent, or much larger, like what's happening to society through this fundamental change we're going through. I've had the rare privilege, or experience, of seeing the full economic spectrum. I've really seen the extreme ends of both sides, and I truly mean that. So somebody like Marc, who is close to our company, says, those are ideas worth getting out beyond just promoting your company, and that I can get behind: the debate and discussion about ideas and what's happening to our society because of these technical changes. And so here I am. Amazing. Okay, there are a few threads I want to follow there. One is that, as you said, you were COO at Y Combinator, and you saw a lot of startups up close. This is your third startup of your own. Something I hear you talk about is that successful companies almost always show traction very early.
A lot of founders say, no, just keep fighting, and maybe you'll be the next Notion, four years in, we'll figure it out. What's your experience there, and what's your advice to founders who aren't seeing traction early? Nuance. If I were starting another company, I'd call it Nuance. Right. I think what you're saying is correct: I continue to believe that good companies tend to have traction fairly early and then sustain it for a decade plus. To the founders who are toiling: say you're about two years into your company and you're having a tough time raising money and building that first product that consumers or businesses really love, whether measured by retention or dollars. Two years is the difficult point. The heuristic I would use is: if the information I'm getting from the market is not pointing me toward a more and more specific path, I would consider resetting. What I mean by reset, and this is wearing my YC hat, having seen hundreds and thousands of companies, is that oftentimes the cofounding team, literally the foundation the house is built on, is not correct. Imagine you built a house, and every time you set down a cup of water it slides off the table and falls on the ground. Do you keep adjusting the table? Maybe the foundation is wrong and the whole house is off-kilter. And that foundation might not only be your cofounders; it could be the market you're in, or the phase of life you're in and the amount of effort you're willing to put into the thing to make it successful. There are a bunch of reasons a company can fail, and you have to be able to say, I don't know which one it is; I'm just going to have to hard reset here.
One thing I tell founders, and Applied is creating a founder class of its own: people who work at Applied Intuition are now starting their own companies. We have a thousand-plus engineers, and over time they're starting their own firms. I say to all of them: the first time you do a startup, just assume the first three years are a zero. Rid yourself of the expectation that it's going to be successful. You're a craftsperson. If this were a woodworking podcast and you said the first table you built was wobbly, you wouldn't say, well, go work at Crate and Barrel. You'd say, that's your first table; keep at it. Being a founder is its own muscle, and you want to exercise that muscle. But a lot of founders, especially early in their founding career, put such incredible pressure on themselves to be great out of the gate that they miss the thing you actually get in that first round, which is learning and building that muscle for the second and third time. I don't think it's random that my third company is the most successful one; you see that more often than not. There are funds that focus almost exclusively on multi-time founders for exactly this reason. What I love about that advice is that the best ideas often come when you have low expectations. You're just playing around, tinkering. You're not thinking, I'm going to build the next Google; you're just having fun. That's how I found the world I'm in right now, this path, and OpenClaw is a good example of that. And I think the reason that advice is so difficult is that if you hear it while you're in the proverbial war...
You're like, what the hell are these people talking about, having fun? This is hard. So you have to hold these conflicting views in your head: it's deeply, deeply important and you should give it your all, and it's also not that important. That's a really hard thing to reconcile and keep in balance. And the way you approached this company, staying quiet like that, I think helps a lot. Absolutely. Even at YC, when I became COO, I told Sam, Sam Altman was the president, let's not announce this for a year, because if the partners don't want me to be COO, it's not a success, and I don't want the pressure of public scrutiny: why were you COO for only six months, or something like that. You have to be very honest with yourself as a founder and as a human being that those things matter. What people think about you matters, and having the spotlight on you affects you. I always say it's very easy to pivot before you raise money and before you have employees; nobody cares. The moment you raise money, and more importantly the moment you hire employees, those employees have joined a very specific mission. You walk into the office, there are ten of them, and you say, guys, turns out this is wrong; we're going on a different mission. Imagine if this were war: what the hell, we were attacking that hill, and now you're saying that hill isn't important? How do you know the next hill is important? As a leader, you lose a lot of credibility. And it's not only the superficial matter of being a credible leader; there's a practical dimension: when you're very, very public, the startup becomes your identity.
And then suddenly you're having to reconcile the fact that that thing is not correct. So we have these core values in the company, and early on we had this line: our best work is done alone and quietly. I deeply believe that. So, founders, I would think of it that way. But it's for pragmatic reasons; it's not because it's cool to be under the radar. It just allows you to work in a bit more peace. I love these core values you've shared so far. The last one, the best work is done alone and quietly, I'm so on board with. Radical pragmatism is the other one you shared earlier. Are there a couple more? These are gems. Yeah, those are what I'd call the meta values; we also have very specific operating principles. And this is about as tactical as advice I can give to founders: come up with your values when you're getting a little bit of traction, early enough that they're real. And the way you come up with the values is not to ask, what values should we have, like philosophers. No. You should figure out why you are being successful: literally write down the five to ten reasons you are succeeding, and those become your values. So we did that. Our first one was speed above everything; it was about us being fast. The second is never disappoint the customer. Then technical mastery, high output matters, all the way down to ones that are not obvious, like laugh a lot. That's been a core value from the beginning of the company's history.
When you're working on intense things, you need the ability to stay grounded and keep perspective. Laughter and humor are also a way to give subtle feedback in a slightly softer register: instead of "this sucks," you can say "it's not the best." You're really creating the framework in which people learn how to behave with each other inside the company. Today the values serve us almost as guiding principles. Every week Peter and I meet all the new team members, and we're almost always talking about the values in some level of depth. Another value: half of the work is follow-up. Just taking notes and following up. That is the business; it's not more complex than that. Laugh a lot is my new favorite company value. Sounds like a wonderful place to work. And on that last piece, there's this book that just came out from Stripe Press about maintenance, and how valuable and underappreciated the maintenance part of work is. Absolutely. I think if there's a takeaway from my philosophy here, and we started the conversation around why not be promotional, I'm careful using that word because it has all these negative connotations, it's that there are costs to everything. If you can focus on the craft, on making the product really, really good, and on really listening to your customers, you have a much higher likelihood of success, and then you can always go and scale. Part of that is the thing you're talking about: maintenance. Another version of it: my roots are in automotive engineering, and automotive engineering is actually an exercise in quality.
That's really what it is: you're building very complex machines at scale. People talk about rockets being really complex, but you only have to send up a rocket, even at the highest cadence, once every couple of days. You're making a car every thirty seconds, you have to make it extremely cheap, and it's globally competitive. So you really get into the nuance and minutiae of how a factory runs. And a factory is about safety and maintenance. There aren't a lot of complex things; when you break down what being operationally strong means, it's keeping an eye on a handful of things and making sure you do them really, really well. I'm one of those believers in the adage that a man who cannot command himself is not fit to command others, and the maintenance aspect is part of that. If you maintain yourself and your own work, you maintain your team, you maintain the company, and the products almost come out of that system. A lot of founders don't think about their company as a system, almost as a machine, but I would implore you to do that, because then you focus on the craft of building the machine, keeping it hygienic, keeping it well tuned. You'll meet people who really love cars and obsess about their maintenance. As somebody who details my own cars, they'll detail underneath the driver's seat; nobody's going to look at that, but it's the same ethos of really caring about the craft. And frankly, since you have a limited amount of time, it's hard to really care about X and also keep your company hygienic.
And there are different reasons at different points in your company's life to do different things, but that's a bit of the ethos. I love how it keeps coming back to just staying quiet and working on the thing, not talking about it. Your last point makes me think of The Score Takes Care of Itself. Classic. Yeah. Joe Montana is actually one of our investors. In our Series D announcement, the post was "the valuation takes care of itself." We very much fall into that category. Sometimes people will come to our office and say, oh, it's such a clean office, you guys must have this giant cleaning staff. Actually, we clean our own office. Just like in Japanese schools — as I mentioned, I lived in Japan — the students clean their own schools. We have a cleaning zen every week, and everyone cleans the area around them. There's something important in that ethos of not getting so wrapped up in your own narrative of "I'm a Stanford software engineer and I do AI." Clean up your desk. Some basic things like that. I don't know what the larger philosophy is, but it's one we drive towards. And our claim to fame, which is kind of a crazy reality, is that we've never spent any money we've ever raised in the history of the company. It almost sounds made up, given the company is almost 10 years old with a thousand-plus engineers. So we're a functioning business without using the capital we raised, and I think it's somehow connected to us cleaning the office. I don't know how. Saving all those cleaning costs. It all makes sense. Yeah. Yeah.
We still have people who clean, but our employees are also aware of their surroundings. And I think there's a direct line between "be quiet and alone," "clean your desk," and well-written software. I don't know what that thing is, but it all falls in the same arc. I know you also have a no-shoe policy for that same reason, to keep things clean. Yeah. Also influenced by Japan. I worked there, and we had a similar office setup. The other way to think about this — and again, I'm just trying to impart everything I know to founders, because that information is so limited and everyone's kind of making it up, frankly speaking. Alpha. Yeah, exactly. I would implore you as a founder to really try to take the best of Japan, the best of Germany, the best of China, the best of Detroit, the best of Silicon Valley. Sometimes people take that Steve Jobs line, "great artists steal," and deform it. What he's really talking about, the less magnanimous version, is: be humble, learn from everything around you, and be well rounded as a leader. Reading should be part of that. There's a Charlie Munger line where he says he's never met anybody very successful who doesn't read all the time, and I very much fall into that category as well. So unpack why that is: why does reading a physical book make you a better founder? Ask the question in the most direct way. My ethos of reading is: read old books. Don't read anything new. Read old books, because time has filtered out a lot of the noise, so you get a lot of signal.
In your life, a thousand books is the best case; realistically you're going to read maybe 50 to 100, which is already a lot for the average person. So you're just not going to read many — don't read low-quality content. There are true pillars of human ideas out there. You consume those ideas, and then it's up to you to interpret how they reflect upon the business you're leading or the technology you're developing. I absolutely believe reading a book like The Autobiography of Malcolm X will make you a better founder. And again, it's like the whole arc from cleaning zen to clean code: it's not directly one-to-one related. We always want these very simple if-then statements. But being a well-rounded founder who understands the society and history around you somehow makes you build a better product. I don't know how or why, but I think it's absolutely true. I do see a connection there. And people like Charlie Munger, who is obviously not an AI founder, also believe that, so there's some pattern there. It's interesting — it's a metaphor for LLMs. You feed in all this data, and somehow they become almost conscious. How does that happen? No one fully knows. It's so interesting how similar you are to Marc Andreessen in your way of thinking and the way you consume content. We're both bald. There's a thread here. There are important ingredients to being really successful. Yeah. I mean, Marc — we're fortunate enough to choose our investors, and that's a true privilege. I didn't have that in my first company. We spent years and didn't raise a dollar. So I certainly appreciate it.
But if I've ever had a mentor, Marc would fall into that category. We knew each other before Applied, and we debated and talked a lot. And I think Marc is also like that: he really consumes content outside of this little industry we're in, and I think it makes him a better investor. Yeah. We'll point people to your website. You have a list of the books you recommend and love, and it's very long and very much not what you often see. I can't help but ask: are there a few books that have most influenced your thinking, most influenced your life? Yeah. I've been very thoughtful about that list. The reason I use books like The Autobiography of Malcolm X as an example is that I know it's not at the top of everyone's list. If I say High Output Management, classic Andy Grove, you guys know that. It's part of the book here. Yeah, exactly. It's partly the theatrics of entertaining you, but also giving you new information as a listener. The books I'm currently reading are a random slate. I'm reading the vibe-coding book that just came out; our whole company's reading that. It's a new book, so it goes against the grain of my heuristic. And The Emperor of All Maladies, the cancer book — fantastic. I'm almost done with it, and I think it changes the way I think. That's the ultimate test: when a piece of material changes your existing framing on life, it's good. In the LLM case, this is somehow related, in the sense that diverse data makes your understanding of the world richer and more nuanced, and therefore better. But yeah.
So I'm always inspired to give the wackier examples rather than the obvious ones. But the obvious ones, if I weren't being theatrical: I think Sam Walton's Made in America is an unbelievable book. It's very, very good. He wrote it on his deathbed. My American Journey is also very good — Colin Powell's book. It's not on my website, but it's really good. I'm somebody who tries to connect the dots from us being cave people to now living in Silicon Valley working as venture-backed AI company founders. So books like Guns, Germs, and Steel are at the top of that list — fantastic book — or Collapse, same author. My point to founders is: you can still go to physical bookstores; read the stuff that is both old and well regarded. When I'm trying to find the next book to read — I remember the way I picked up SPQR, the book on Roman history, was realizing I didn't actually know a lot about Roman history beyond the high-level stuff. So think about all the ideas in the universe, from philosophy to history to Jainism to the rise of Japan as a feudal state: areas you don't really know, and then just find the best book in that space. You start filling in the blanks. That's kind of the way I grok the ecosystem: what don't I know anything about? Let me go find the best piece of material in that. And yeah, it works well. I like that. So Marc's philosophy is this barbell strategy of only today's news, like X, and books from ten-plus years ago. I love that yours is just one end of the barbell. Just only... Yeah.
You know, Marc was a heavy influence in me getting on X. He's propagated that view. The thing I really do agree with him on is this: as our company becomes a larger, more influential, and more impactful company in society, it is my responsibility as a cofounder and the CEO to at least propagate my ideas, first to our AI founder community, then to the larger technology leadership, and then to the world writ large. So that's part of it. And in that way, think about VCs: not long ago, you would never even know their names. Yeah. They were like PE guys or hedge fund managers — you can't even think of names. They're just blobs with ominous-sounding names, Obsidian Corporation or something like that. It's only a16z and a couple of other folks, John Doerr, who really created the idea of: hey, I'm going to be the individual investor, I'm going to propagate a certain set of ideas, and that's going to create gravity within Silicon Valley and influence founders to make certain types of companies. And of course, they invest in those. On the thread of reading to find areas you disagree with and haven't thought about: I know one of your approaches to management, maybe even a value, is to encourage your leaders to listen to naysayers, to not create this positive reinforcement cycle. Talk about why that's so important and how you operationalize it. So imagine I'm not the founder and Peter's not the cofounder — it's just a generic company. The ideal situation with a generic company is one where you can put in lots of different competing ideas, and the culture is one where you will shake those ideas out. There's no emotion in it.
Whoever brings the idea, the best idea wins. So why can't companies do that? Frankly speaking, a lot of times it's the founders. Founders are told by popular media, and by the way human beings experience life and our tribal outlook, that you have to have this hard view, and that if everyone isn't following you, maybe you're a weak leader or something like that. We just don't believe in that philosophy. A more tactical way of saying it: we take inputs from the environment — our customers specifically, our employees, our competitors, our investors, and what's happening in society writ large — and that impacts our strategy. I think it's one of the reasons we've been very, very successful. We're not so arrogant as to think we have all the answers just because we had the ambition to start a company. And that permeates into a very specific culture. It's also not being contrarian for contrarian's sake. One view I do have is that emotions are generally not helpful in making rational decisions; they're almost the opposite. Passion and leadership are supposed to be magnanimous or emotional, and we just don't believe that. That's a bit more of our Midwest roots showing, for Peter and me. So it's not that we say, hey, disagree with everything in the room. What we specifically say is: speak up. Speak up.
Everyone has to speak up, because that one person, with their one experience from working at Zoox or Waymo or Tesla or a Chinese company, whatever it is, has one idea in their head when the debate is happening about what we should do in, say, space — literally the space of space — something we may not know much about. That one person's one idea: they have to feel comfortable sharing it, even if they're the most junior person, or they didn't get their way in the last debate, or whatever anxiety they might have. They have to share that opinion: guys, this is actually the right idea, or this is the wrong idea. If you can create that environment, the best idea wins. Gandhi has this line: truth is what stands the test of time. "Truth seeking" has become a bit of a meme in the Bay Area as a culture, but it kind of is like that. We're trying to find the best idea. Maybe truth is the wrong word; maybe it's the best idea. Find the best idea, and then go full bore behind it. Maybe use a counterfactual: why do companies fail when they have great talent and seemingly all the same components an Applied Intuition has? Because maybe the best ideas are not being surfaced, and certainly maybe they're not being adopted. Or, more often than not, when I think about companies that have failed despite being very successful: they have momentum going in a specific direction, and that wall of sound overwhelms any new sound that's emerging — which is, hey, the market's changing. You just can't hear it, because there's all this momentum going in a particular direction. A good example: I had front-row seats for this when I worked at Google.
It was the era when Facebook was emerging. People don't remember Google in the late aughts and early teens. Google wasn't just a company; it was the apex predator of Silicon Valley. At Apple, the MacBook Air had just come out; Steve Jobs had it heading in the right direction, but it was nothing like Apple is today. AWS was still a young thing. Nvidia was teetering on the edge of bankruptcy. Microsoft was run by Ballmer. Twitter was a small thing. But Google was already this larger-than-life number one company. Everybody wanted to work at Google; there were not many companies with that stature. And then in the periphery, this little company Facebook starts emerging. Google, which has the best engineers on the planet and is making a billion in cash flow a month, tries to fight this little company. Facebook at that time maybe had a thousand people, and Google was 15x, 20x the size with a lot of cash flow. Why couldn't Google fight Facebook? Because Google is not Facebook. It's like that Confucian saying: how does a gorilla learn how to fly? By not being a gorilla. The way Google would have won the social media wars was by being a social media company, and it just fundamentally isn't one. This happens in companies all the time: you're going in one direction with momentum, consciously or unconsciously, because that's where all the employees are, that's what the culture is, that's what they work on. Then something changes in the market, and you just can't move there. And I think that can also happen, surprisingly, at really small companies, where founders have a view and that view is the view it's going to be.
And that view can be just 10 degrees off from what was the correct path, and the whole company is led astray. They were in the right market, they might even have been solving the right problem, but they were just a little off. So be so scared of failing, so scared of losing, that you humble yourself and listen when other people say, hey, we're five degrees off course here. Okay, let's fix the course. Once that becomes your culture, it's really hard to lose, because everybody isn't about fulfilling a preset path; they're about finding how to win. This is exactly what I wanted to ask about. Everyone listening to this either thinks, oh yeah, we're very open minded, we'll absolutely listen to everyone's opinions and decide rationally on the right path — which in practice never happens. Right. Or they think, we know we're not good at this; we're just too nice to each other. How do you do this at a company that isn't good at it? Does it have to be the CEO, top down, in your experience? Does it have to be part of the culture? How do you operationalize it at a company that's not like yours? You know, the middle way is typically the right way. And it's hard to find the middle way, because these are conflicting ideas. The goalposts you just set: one side is charging ahead regardless; the other side is being almost perpetually unsure. Once you do have that debate, you have to confidently walk down the chosen path. And again, this is conflicting: I just said be humble enough to listen to what's going on. But once the decision is made, be decisive. In our values, that first value, the speed value — the specific wording is "move fast, move safe."
That's specifically the wording. We assess our managers on adherence to those values — we literally compensate and promote against them — so they're not just abstract values. The behavior we're actually looking at under speed is decisiveness. So we're setting up a system that holds these conflicting things: one is be open, the other is make decisions quickly. You have to hold those in tension. This is why you as the cofounder or founder get paid the big bucks. You have to know when to bluff, when to hold them, and when to fold them, as they say. At some point, and that point sometimes comes faster than you expect, you will not get any more information, and you have to make a decision. So you're walking this very, very thin line. Your point about emotions was extremely interesting, and I want to make sure people don't take away the wrong lesson. What I've found really helpful, which I think is aligned with what you were suggesting, is taking emotions out of it. The way I've used this in my work: when you have to make a hard decision, pretend nobody's feelings would be hurt and emotions are not involved. What would you do if nobody cared, if everyone were totally fine with it? That tells you the right thing to do. Then it becomes: okay, how do I help people feel okay about this? How do I deal with the downsides of this path? Yeah. I think that's the obvious version of it. Maybe another version of an emotion is this: you as the leader, or as the engineer who is getting direction, already have some preset view that "this is my idea."
That's an emotional construct, around ownership and the feeling of ownership. So yeah, I really fall into that camp. Maybe most fundamentally: what is an emotion? An emotion is a set of reactions whose framework has been imparted in your brain through life experiences. And those life experiences may not have been optimized for making a decision in a product review. So the more you can pull that away, the better. A good heuristic: the same decision made by multiple people in the company should get the same result. You're removing a bit of that filter — you can almost think of emotion as a filter. I like to have the raw image come through, the raw decision come through, so we can consistently classify it again and again. Not to get too abstract, but I don't know if that makes sense. Yeah, it makes sense. I have one last question, but first, there's an interesting trend I've noticed with people talking about AGI. The missing piece I've been hearing more and more is that emotions may be what create consciousness. Michael Pollan has a book out about consciousness, and his take is that it's not just more intelligence; it's actually emotions that led to consciousness. I think it's underestimating how complex human thought is to say it's just inputs and outputs, or the association, for lack of a better word, of ideas, facts, words, letters. It's not just associations, and creativity is a bit beyond that as well. Like the old saying: technical mastery is mastering the complex, and I think computers do that really well, while creativity is mastering the simple.
I'm sure I'm going to eat my words on this when AI is the best artist in like three years. It will be. It will be beautiful. Yeah. And this again goes back to my philosophy: consume broad inputs, but then try to remove that filter, see things as honestly as you possibly can, and create a culture in the company that is similar, one that doesn't put any weight on who the idea came from or where it came from. But then ultimately, as a leader, you decide. And by the way, you have to be right. That's the other thing I don't think we emphasize enough. Founders love to take credit for things — it's just human nature, everybody does. But the reality is you have to be right. It's not enough to just start a company. It's not enough to have this vision of the world. You have to be right, and the evidence is whether the company is a sustainable standalone business. Because we're talking specifically about venture-backed AI companies in Silicon Valley — I should have said this at the beginning: all of my advice is specifically for that narrow group, founders of venture-backed AI companies in the Bay Area. Speaking of that, I've been wanting to get to this last question because it's an interesting, spicy take of yours, and I know you have to run after this. You have this view that a lot of CEOs in Silicon Valley don't actually have great taste. I'm excited to hear your experience there and what you mean. Yeah. I also want to be careful not to imply that I do. Okay. Okay. I fall into that group. I think it is true, for a couple of reasons — both taste in the most artistic sense and in the most specific sense of running a company, like what the HR policy should be for point X.
A lot of it is that they're just not exposed to a lot of interesting, good things. And that's been a theme in this whole conversation: just get more and more exposure. It's very unfortunate when I meet somebody — and I'm not thinking of anyone in particular, so if this is you and you're one of my friends, I apologize, I'm not talking about you — who grew up in Cupertino, went to Berkeley, started a company straight out of school, and that's all they've done for twenty years. They've never even been an employee. Here's why I think that's so important. I spent over a decade working in truly large organizations, more than 100,000 employees, a General Motors or a Bosch. And when you're in the bowels of those organizations, you learn how bad it is to be an employee: the bureaucracy above you, leadership that doesn't know what's going on in the industry, your antiquated tools, all of it. Why that's so important to experience is that when you become a leader, you're making policies and creating culture, and you have to keep that in mind. A bunch of founders just never had, frankly, the fortune of being at the bottom of the totem pole. And that's just one version of it. It doesn't obviously resemble consuming the photos of Cartier-Bresson or Picasso or whoever it might be, but there's something similar about it. You can sometimes meet founders — and maybe a good heuristic here is that some founders would be good at lots and lots of things, not just being the founder of an AI company in the Bay Area — and there's something about taste there.
Because what you're really talking about is understanding humans and understanding life, and then being able to discern with some judgment what is good and what's not. That's really what we're talking about. So if your life experience is very narrow, you could still be good, and you might still have the ability to discern what's good and what's not. But if you've backpacked for a few years around the world, I somehow believe you're going to be a better founder. There's no peer-reviewed research I can point to that says that. I think that's what I'm getting at: there is some developing of taste. Yeah. Well, I feel like we've helped people build their taste, feed their model with more insights and different perspectives in this conversation. I feel like we could chat for hours, but I know you've got to run. Yeah. I'm not sure if there are any real takeaways other than... Okay, zero. We really went everywhere. I'm sorry if you had a particular line of questions you wanted to go down. We went in all the perfect directions. Okay. Good. Good. Qasar, thank you so much for doing this. Thank you so much for being here. Final question: for folks finding you online, how can listeners be useful to you? That's a great question. I love to hear about books I don't know. Some of my favorite books have been randomly recommended to me, so I'll take that. Of course, I consume research as well, so if there's something particularly novel going on — obviously we as a company and I as an individual will consume all the mainstream stuff, but things that are a bit off the beaten path, we're always looking for.
And if you have a particular opinion about our specific domain, physical AI, and how AI is going to impact mines, farms, construction sites, robotaxis, all of that, I'm always interested to hear new opinions on that, or even old opinions with a different viewpoint. So yeah, if you see me online, I'm always open to that feedback. And you're on Twitter now. There you go. Yeah, exactly. Follow me there. That's the call to action, actually. Channeling my inner Marc. There you go. Qasar, thank you so much for doing this and for being here. Yeah, thanks for having me. It was a lot of fun. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.