# The State of AI with Marc & Ben

## Episode metadata
- Episode title: The State of AI with Marc & Ben
- Show: The a16z Show
- Owner / Host: Andreessen Horowitz
- Episode publish date: 2024-06-14
- Episode AI description: Ben and Marc dive into how small AI startups can compete against tech giants, revealing that data isn't as valuable a commodity as once thought. They also compare the current AI boom to the internet's explosive growth. The duo discusses the intricacies of creating AI models and highlights innovation in training techniques. They explore AI's transformative role in travel and healthcare, emphasizing unique user experiences and the ease of health diagnostics. The conversation critiques traditional healthcare financing while advocating for transparency in data usage.
- Duration: 01:13:53
- Episode URL: [Open in Snipd](https://share.snipd.com/episode/8ef10934-c249-436c-91cd-bf76cedfe077)
- Show URL: [Open in Snipd](https://share.snipd.com/show/e4874dcd-789e-493a-858b-a9ec77e81cad)
- Export date: 2026-02-11T20:06:35
## Snips
### [The Different Types of Startup Models](https://share.snipd.com/snip/e01bf38a-5150-4af7-b480-8c187a6fccd7)
🎧 03:39 - 07:00 (03:20)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/da58feef-19f2-437e-92be-09c204a30dcd"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Assume foundation models will improve significantly.
- Build startups that benefit from these improvements, not those threatened by them.
#### 💬 Quote
> you want to assume that the big foundation models coming out of the big AI companies are going to get a lot better. So you want to assume they're going to get like a hundred times better. And as a startup founder, you want to then think, okay, if the current foundation models get a hundred times better, is my reaction, oh, that's great for me and for my startup because I'm much better off as a result, or is your reaction the opposite?
> — Marc Andreessen
Marc Andreessen on how startups should think about competing with foundation model companies
#### 📚 Transcript
**Marc Andreessen:** let me start with one point, Ben, and then we'll jump right to you. So, Sam Altman recently gave an interview, I think maybe Lex Fridman or one of the podcasts, and he actually said something I thought was quite helpful. Let's see, Ben, if you agree with it. He said something along the lines of, you want to assume that the big foundation models coming out of the big AI companies are going to get a lot better. So you want to assume they're going to get like 100 times better. And as a startup founder, you want to then think, okay, if the current foundation models get 100 times better, is my reaction, oh, that's great for me and for my startup because I'm much better off as a result? Or is your reaction the opposite? Is it, oh, shit, I'm in real trouble? So let me just stop right there, Ben, and see what you think of that as general advice.
**Ben Horowitz:** Well, I think generally that's right, but there's some nuances to it, right? So I think that from Sam's perspective, he was probably discouraging people from building foundation models, which I don't know that I would entirely agree with, in that a lot of the startups building foundation models are doing very well. And there's many reasons for that. One is there are architectural differences, which lead to how smart a model is. There's how fast a model is. There's how good a model is in a domain. And that goes for not just text models, but, you know, image models as well. There are different domains, different kinds of images that respond to prompts differently. If you ask Midjourney and Ideogram the same question, they react very differently, you know, depending on the use cases that they're tuned for. And then there's this whole field of distillation where, you know, Sam can go build the biggest, smartest model in the world, and then you can walk up as a startup and kind of do a distilled version of it and get a model very, very smart at a lot less cost. So there are things that, yes, the big company models are going to get better, kind of way better at what they are. So you need to deal with that. So if you're trying to go head to head, full frontal assault, you probably have a real problem just because they have so much money. But if you're doing something that's different enough or a different domain and so forth, for example, at Databricks, they've got a foundation model, but they're using it in a very specific way in conjunction with their kind of leading data platform. Okay, now if you're an enterprise and you need a model that knows all the nuances of how your enterprise data model works and what things mean and needs access control and needs to use your specific data and domain knowledge and so forth, then it doesn't really hurt them if Sam's model gets way better. Similarly, Eleven Labs with their voice model has kind of embedded into everybody. Everybody uses it as part of kind of the AI stack. And so it's got kind of a developer hook into it. And then, you know, they're going very, very fast at what they do and really being very focused in their area. So there are things that I would say are extremely promising that are kind of ostensibly, but not really, competing with OpenAI or Google or Microsoft. So I think it sounds a little more coarse-grained than I would interpret it if I was building a startup. Right. Let's dig into this a little bit more.
**Marc Andreessen:** So let's
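A minimal sketch of the distillation Ben mentions: a small "student" model is trained to match a big "teacher" model's output distribution, recovering much of its quality at far lower cost. This is a generic, hypothetical PyTorch-style illustration (toy models, invented numbers), not any company's actual pipeline:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

teacher = torch.nn.Linear(128, 1000)  # stand-in for a large foundation model
student = torch.nn.Linear(128, 1000)  # stand-in for a much cheaper student model
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

for _ in range(100):          # distillation loop over (unlabeled) inputs
    x = torch.randn(32, 128)  # stand-in for a batch of prompts
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher is only queried, never updated
    loss = distillation_loss(student(x), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```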
---
### [The Future of Artificial Human Intelligence](https://share.snipd.com/snip/07b4bd86-9116-4554-a011-abac09d3d88a)
🎧 07:00 - 10:12 (03:12)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/4ffcacb1-405c-48e9-8fbb-29221aded72c"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Current top language models are quite similar in performance.
- 100x improvement may not be noticeable for average users.
#### 💬 Quote
> I think if you look at the very top models, you know, Claude and OpenAI and Mistral and Llama, the only people who I feel like really can tell the difference amongst those models are the people who study them.
> — Ben Horowitz
Ben Horowitz on current language model performance
#### 📚 Transcript
**Marc Andreessen:** start with the question of do we think the big models, the God models, are going to get 100 times better? I
**Ben Horowitz:** kind of think so. And then I'm not sure. So if you think about the language models, let's do those, because those are probably the ones that people are most familiar with. I think if you look at the very top models, you know, Claude and OpenAI and Mistral and Llama, the only people who I feel like really can tell the difference as users amongst those models are the people who study them. You know, like they're getting pretty close. So, you know, you would expect if we're talking 100x better that one of them might be separating from the others a lot more. But the improvement, so 100x better in what way? Like for the normal person using it in a normal way, like asking questions and finding out stuff? Well, let's say some combination of
**Marc Andreessen:** just like breadth of knowledge and capability.
**Ben Horowitz:** Yeah, like I think in some of them
**Marc Andreessen:** are, yeah. Right, but then also just combined with like sophistication of the answers, you know, sophistication of the output, the quality of the output, and, you know, lack of hallucination, factual grounding. Well,
**Ben Horowitz:** that I think is for sure going to get a hundred times better. Yeah, I mean, they're on a path for that. The things that are sort of against that, right: the alignment problem, where, okay, yeah, they're getting smarter, but they're not allowed to say what they know. And then that alignment also kind of makes them dumber in other ways. And so you do have that thing. The other kind of question that's come up lately, which is kind of, do we need a breakthrough to go from what we have now, which I would categorize as artificial human intelligence, as opposed to artificial general intelligence, meaning it's kind of the artificial version of us. We've structured the world in a certain way using our language and our ideas and our stuff. And it's learned that very well. Amazing. And it can do kind of a lot of the stuff that we can do. But are we then the asymptote, or do you need a breakthrough to get to some kind of higher intelligence, more general intelligence? And I think if we're the asymptote, then in some ways it won't get 100 times better because it's already like pretty good relative to us. But yeah, like it'll know more things, it'll hallucinate less. On all those dimensions, it'll be 100 times better, I think. You
**Marc Andreessen:** know, there's this graph floating around. I forget exactly what the axes are, but it basically shows the improvement across the different models. To your point, it shows an asymptote against the current tests that people are using that's sort of like at or slightly above human levels, which is what you would think if you're being trained on entirely human data. Now, the counterargument on that is, are the tests just too simple, right? It's a little bit like the question people have around the SAT, which is, if you have a lot of people getting 800s, you know, on both math and verbal on the SAT, is the scale too constrained? Do you need a test that can actually test for Einstein?
**Ben Horowitz:** Right, right, right. It's memorized the tests that we have and it's great. Right. But
**Marc Andreessen:** you can imagine SAT that like really can detect gradations of people who have like ultra high IQs who are ultra good at math or something. You could imagine tests for AI, you know, you can imagine tests that test for reasoning above human levels, one assumes. Yeah,
**Ben Horowitz:** well, maybe the
**Marc Andreessen:** AI needs to
**Ben Horowitz:** write the test. Yeah,
---
### [The Limits of Artificial Intelligence](https://share.snipd.com/snip/4b0756b2-f2f8-4ced-b52c-dceb73a54c38)
🎧 10:12 - 13:46 (03:33)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/f1cb0eab-f1c3-46e8-a670-5a3f915bbd20"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Internet data represents average human intelligence, limiting default AI responses.
- Prompting can access latent knowledge, like prompting for secure code yields better results.
#### 💬 Quote
> If you say, write me secure code to do that, it will actually write better code with fewer security holes, which is very interesting, right? Because it's accessing a different corpus of training data, which is secure code.
> — Marc Andreessen
Marc Andreessen on accessing different parts of the training data with different prompts
#### 📚 Transcript
**Marc Andreessen:** and there's a related question that comes up a lot. It's an argument we've been having internally, which is also where I'll start to take some sort of more provocative and probably more bullish, or as you would put it, sort of science fiction predictions on some of this stuff. So there's this question that comes up, which is, okay, you take an LLM, you train it on the internet. What is the internet data? What is the internet data corpus? It's an average of everything, right? It's a representation of sort of human activity. Representation of human activity is going to kind of, you know, because of the sort of distribution of intelligence in the population, you know, most of it's somewhere in the middle. And so the data set on average sort of represents the average human.
**Ben Horowitz:** You're teaching it to be very average, yeah. Yeah,
**Marc Andreessen:** you're teaching it to be very average. It's just because most of the content created on the internet is created by average people. And so kind of the content on average, you know, as a whole on average is average. And so therefore the answers are average, right? You're going to get back an answer that sort of represents the kind of thing that an average 100 IQ person would say. You know, kind of by definition, the average human is 100 IQ. IQ is indexed to 100 at the center of the bell curve. And so by definition, you're kind of getting back the average. I actually argue that may be the case for the default prompt today. Like you just ask the thing, does the earth revolve around the sun or something? You get like the average answer to that. And maybe that's fine. This gets to the point as well. Okay. The average data might be of an average person, but the data set also contains all of the things written and thought by all the really smart people. All that stuff is in there, right? And all the current people who are like that, their stuff is in there. And so then it's sort of like a prompting question, which is like, how do you prompt it in order to basically navigate to a different part of what they call the latent space, to navigate to a different part of the data set that basically is like the super genius part. And, you know, the way these things work is if you craft the prompt in a different way, it actually leads it down a different path inside the data set, gives you a different kind of answer. And here's another example of this. If you ask it, write code to do X, write code to sort a list or, you know, whatever, render an image, it will give you average code to do that. If you say, write me secure code to do that, it will actually write better code with fewer security holes, which is very interesting, right? Because it's accessing a different corpus of training data, which is secure code. And if you ask it to write an image generation thing the way John Carmack would write it, you get a much better result because it's tapping into the part of the latent space represented by John Carmack's code, who's the best graphics programmer in the world. And so you can imagine prompt crafting in many different domains such that you're kind of unlocking the latent super genius, even if that's not the default answer.
**Ben Horowitz:** Yeah, no. So I think that's correct. I think there's still a potential limit to its smartness. So we had this conversation in the firm the other day where there's the world, which is very complex. And intelligence kind of is, you know, how well can you describe, represent the world? But our current iteration of artificial intelligence consists of humans structuring the world and then feeding that structure that we've come up with into the AI. And so the AI kind of is good at predicting how humans have structured the world, as opposed to how the world actually is, which is, you know, something more probably complicated, maybe irreducible or what have you. So do we just get to a limit where, like, it can be really smart, but its limit is going to be the smartest humans as opposed to smarter than the smartest humans? And then kind of related, is it going to be able to figure out brand new things, you know, new laws of physics and so forth? Now, of course, there are like one in three billion humans that can do that or whatever. That's a very rare kind of intelligence. So it still makes the AIs extremely useful, but they play a different role if they're kind of artificial humans than if they're, like, artificial, you know, super mega humans. Yeah.
**Marc Andreessen:** So
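Marc's prompting point can be made concrete with a small sketch. The `complete` function below is a hypothetical stand-in for any chat-completion API; the prompts are invented examples of steering toward different regions of the training distribution:

```python
def complete(prompt: str) -> str:
    # Stand-in: replace with a real call to whatever LLM API you use.
    return f"<model response to: {prompt!r}>"

# Default prompt: tends to pull from the "average" part of the data set.
baseline = complete("Write code to parse a URL query string.")

# Naming the quality you want steers toward better training data.
secure = complete(
    "Write secure code to parse a URL query string. "
    "Validate all inputs and avoid injection bugs."
)

# Naming an expert steers toward that expert's region of the latent space.
expert = complete(
    "Write code to parse a URL query string the way an elite "
    "systems programmer would write it."
)
```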
---
### [The Benefits of Supervised Learning](https://share.snipd.com/snip/a300dfe2-a43f-4dfa-b6e8-ad0a4b855ea0)
🎧 15:31 - 15:40 (00:08)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/ccd60fcc-613d-4932-9ef5-dd1bb9a472ab"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Overtraining, or training models on the same data for longer with more compute cycles, has been shown to improve performance.
- This challenges the previous assumption about diminishing returns from more training.
- Meta and others are reporting positive results from overtraining.
- This suggests that more compute, rather than more data, might be the key to improvement at this stage.
#### 💬 Quote
> We don't necessarily need more data at this point to make these things better. We maybe just need more compute cycles. We just trained it a hundred times more and it may just get actually a lot better.
One expert's opinion about overtraining models.
#### 📚 Transcript
**Marc Andreessen:** exactly. Like what one guy in the space basically told me, basically, he's like, yeah, we don't necessarily need more data at this point to make things better. We maybe just need more compute cycles. We just train it a hundred times
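A toy sketch of the overtraining idea summarized above: keep spending compute cycles on the same fixed dataset instead of collecting new data. PyTorch-style, with invented shapes and epoch counts, not anyone's published recipe:

```python
import torch

model = torch.nn.Linear(64, 10)  # stand-in for a real language model
data = [(torch.randn(8, 64), torch.randint(0, 10, (8,))) for _ in range(100)]
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# A "compute-optimal" recipe might stop after a few passes over the data.
# Overtraining just keeps going: 100x the compute cycles, zero new data.
for epoch in range(100):
    for x, y in data:
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```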
---
### [The Tradeoff Between Synthetic Data and Artificial Intelligence](https://share.snipd.com/snip/2811bf37-6716-4092-b379-174694563fd9)
🎧 15:40 - 17:12 (01:32)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/6689ade8-f601-470f-9720-1368e5d880b8"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Supervised learning significantly boosts AI models.
- Self-improvement loops, where AIs perform chain-of-thought reasoning and retrain on the answers, are starting to show results.
- LLMs might be better at validating code than writing it, suggesting different parts of the neural network handle different tasks.
- AIs could leverage their strengths in one area (like validation) to improve their weaknesses (like code generation).
#### 💬 Quote
> It's not an it. What it is is it's this giant latent space, it's this giant neural network. And the theory would be there are totally different parts of the neural network for writing code and validating code.
> — Marc Andreessen
Marc Andreessen on how different parts of neural networks develop different skill levels.
#### 📚 Transcript
**Ben Horowitz:** it may just get actually a lot better. So on data labeling, it turns out that supervised learning ends up being a huge boost to these things. Yeah. So we've got that. We've
**Marc Andreessen:** got all of the kind of, you know, let's say rumors and reports of various kinds of self-improvement loops, you know, that are kind of underway. And most of the sort of super advanced practitioners in the field think that there's now some form of self-improvement loop that works, which basically is, you basically get an AI to do what's called chain of thought. You get it to basically go step-by-step to solve a problem. You get it to the point where it knows how to do that. And then you basically retrain the AI on the answers. And so you're kind of basically doing a sort of a forklift upgrade across cycles of the reasoning capability. And so a lot of the experts think that sort of thing is starting to work now. And then there's still a raging debate about synthetic data, but there's quite a few people who are actually quite bullish on that. Yeah. And then there's even this trade-off. There's this kind of dynamic where like LLMs might be okay at writing code, but they might be really good at validating code. You know, they might actually be better at validating code than they are at writing it. That would be a big help. Yeah, well, but that also means like AIs may be able to self-improve. Yeah, their own code. Yeah, yeah. They can validate their own code. And we have this anthropomorphic bias: how can it be better at validating code than writing code? But it's not an it. What it is, is it's this giant latent space. It's this giant neural network. And the theory would be there are totally different parts of the neural network for writing code and validating code. And there's no consistency requirement whatsoever that the network would be equally good at both of those things. And so if it's better at one of those things, right, the thing that it's good at might be able to make the thing that it's bad at better and better. Right, right, right, right, right. Sure, sure. Right, sort of a self-improvement thing. And so then on top of that, there's all the other things
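A skeleton of the loop Marc describes: have the model reason step by step, keep only the answers that pass validation, and retrain on those traces. Everything here is a hypothetical stand-in, sketched in Python:

```python
from typing import Callable, List, Tuple

def generate_chain_of_thought(model: Callable[[str], str], problem: str) -> str:
    # Step-by-step reasoning plus a final answer, as one text blob.
    return model(f"Solve this step by step, then state the answer: {problem}")

def validate(model: Callable[[str], str], solution: str) -> bool:
    # Marc's asymmetry: the model may be better at checking than writing,
    # e.g. for code, by proposing and running tests against the output.
    return model(f"Is this solution correct? Answer yes or no.\n{solution}").strip() == "yes"

def self_improve(model, problems: List[str], finetune):
    accepted: List[Tuple[str, str]] = []
    for problem in problems:
        solution = generate_chain_of_thought(model, problem)
        if validate(model, solution):
            accepted.append((problem, solution))
    # The "forklift upgrade": validated reasoning traces become training
    # data for the next cycle of the model.
    return finetune(model, accepted)
```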
---
### [The Process Flow in a Law Firm](https://share.snipd.com/snip/84336da3-471b-455c-8921-17e076f0ad91)
🎧 18:52 - 23:13 (04:20)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/6da8c111-a422-4a01-bf70-9242f3893703"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Evaluate your AI app idea based on potential pricing.
- High value justifies complex development, low value suggests a simple wrapper.
#### 💬 Quote
> The test for whether your idea is good is how much can you charge for it? Can you charge the value or are you just charging the amount of work it's going to take the customer to put their own wrapper on top of OpenAI?
> — Ben Horowitz
Ben Horowitz on determining if an AI app idea has potential
#### 📚 Transcript
**Marc Andreessen:** this also goes to, you know, a lot of what entrepreneurs are afraid of. I'll give you an example. So a lot of entrepreneurs, here's this thing they're trying to figure out, which is, okay, I really think I know how to build a SaaS app that harnesses an LLM to do really good marketing collateral. Let's just make it a very simple thing, so I build a whole system for that. Will it just turn out to be that the big models in six months will be even better at making marketing collateral just from a simple prompt, such that my apparently sophisticated system is just irrelevant because the big model just does it? Yeah. Let's talk about that, like apps. Another way you can think about it is that the criticism of a lot of current AI app companies is they're, quote unquote, you know, GPT wrappers. They're sort of thin layers of wrapper around the core model, which means the core model could commoditize them or displace them. But the counter argument, of course, is it's a little bit like calling all, you know, old software apps database wrappers, you know, wrappers around a database. It turns out like actually wrappers around a database is like most modern software. And a lot of that actually turned out to be really valuable. And it turns out there's a lot of things to build around the core engine. So, yeah. So, Ben, how do we think about that when we run into companies thinking about building apps?
**Ben Horowitz:** Yeah. You know, it's a very tricky question because there's also this correctness gap, right? So, you know, why do we have co-pilots? Where are the pilots? Right? Where are the AI pilots? There are no AI pilots. There are only AI co-pilots. There's a human in the loop on absolutely everything. And that really kind of comes down to this: you know, you can't trust the AI to be correct in drawing a picture or writing a program or, you know, even like writing a court brief without making up citations. You know, all these things kind of require a human, and it kind of turns out to be like fairly dangerous not to have one. And then I think that so what's happening a lot with the application layer is people saying, well, to make it really useful, I need to turn this co-pilot into a pilot. And can I do that? And so that's an interesting and hard problem. And then there's a question of, is that better done at the model level or at some layer on top that, you know, kind of teases the correct answer out of the model, you know, by doing things like using code validation or what have you? Or is that just something that the models will be able to do? I think that's one open question. And then, you know, as you get into kind of domains and, you know, potentially wrappers on things, I think there's a different dimension than what the models are good at, which is what is the process flow. Which is kind of, on the database analogy, there is like the part of the task in a law firm that's writing the brief, but there's 50 other tasks and things that have to be integrated into the way a company works, like the process flow, the orchestration of it. And maybe there are, you know, a lot of these things, like if you're doing video production, there's many tools, or music even, right? Like, okay, who's going to write the lyrics, which AI will write the lyrics and which AI will figure out the music. And then like, how does that all come together and how do we integrate it and so forth? And those things tend to, you know, just require a real understanding of the end user and so forth in a way. And that's typically been how applications have been different than platforms in the past. There's real knowledge about how the customer using it wants to function that doesn't have anything to do with the kind of intelligence, or is just different than what the platform is designed to do. And to get that out of the platform for a kind of company or a person turns out to be really, really hard. And so those things, I think, are likely to work, you know, especially if the process is very complex. And it's something, it's funny, as a firm, we're a little more hardcore technology oriented, and we've always struggled with those in terms of, oh, this is like some process application for plumbers to figure out this. And we're like, well, where's the technology? But a lot of it is how do you encode some level of domain expertise and kind of how things work in the actual world back into the software. I
**Marc Andreessen:** often
---
### [AI Deflates the Cost of Building a Startup](https://share.snipd.com/snip/3655baeb-c48b-4a0f-afba-eb6acff716c0)
🎧 26:51 - 28:21 (01:29)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/8b6b6542-8bef-4541-bfde-3cabf57b35b6"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- AI investments appear paradoxical: massive funding for foundation models, yet lower app development costs.
- This raises questions about how tech investment structures will adapt.
#### 💬 Quote
> if AI deflates the cost of building a startup, how will the structure of tech investment change?
> — Kaiser
Listener question from Kaiser on the impact of AI lowering startup costs
#### 📚 Transcript
**Marc Andreessen:** So that actually takes us to the next question. And this is a two-in-one question. So Michael asks, and I'll say these are diametrically opposed, which is why I paired them. So Michael asks, why are VCs making huge investments in generative AI startups when it's clear these startups won't be profitable anytime soon? Which is a loaded question, but we'll take it. And then Kaiser asks, if AI deflates the cost of building a startup, how will the structure of tech investment change? And of course, Ben, this goes to exactly what you just said. So it's basically the questions are diametrically opposed because if you squint out of your left eye, right, what you see is basically the amount of money being invested in the foundation model companies kind of going up to the right at a furious pace. You know, these companies are raising hundreds of millions, billions, tens of billions of dollars. And it's just like, oh, my God, look at these sort of capital, you know, sort of, I don't know, infernos, you know, that hopefully will result in value at the end of the process. But my God, look at how much money is being invested in these things. If you squint through your right eye, you know, you think, wow, now all of a sudden it's like much easier to build software. It's much easier to have a software company. It's much easier to like have a small number of programmers writing complex software because they've got all these AI co-pilots and all these automated, you know, software development capabilities that are coming online. And so on the other side, the cost of building an AI application startup might, you know, crash. And it might just be that like, you know, the Salesforce, the AI salesforce.com might cost, you know, a 10th or a hundredth or a thousandth the amount of money that it took to build the old database-driven salesforce.com. And so, yeah, so what do we think of that dichotomy, which is you can actually look out of either eye and see either costs going to the moon for startup funding or costs actually going to zero.
---
### [The Jevons Paradox](https://share.snipd.com/snip/9944e08c-b90b-4edc-a158-da4b0614ef7e)
🎧 28:21 - 32:02 (03:41)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/8b981a41-93e9-4812-a248-5129cb0b6888"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Decreasing software costs may not decrease overall investment due to increased demand (Jevons Paradox).
- Like CGI in Hollywood, better software increases expectations, potentially raising development costs.
#### 💬 Quote
> The paradox here would be the cost of developing any given piece of software falls, but the reaction to that is a massive surge of demand for software capabilities.
> — Marc Andreessen
Marc Andreessen on how the Jevons paradox could affect AI software development costs
#### 📚 Transcript
**Marc Andreessen:** Well,
**Ben Horowitz:** so it is interesting. I mean, we actually have companies in both camps, right? Like, I think probably the companies that have gotten to profitability the fastest, maybe in the history of the firm, have been AI companies. There have been, you know, AI companies in the portfolio where the revenue grows so fast that it actually kind of runs out ahead of the cost. And then there are, like, you know, people who are in the foundation model race who are raising hundreds of millions, even billions of dollars to kind of keep pace and so forth. They also are kind of generating revenue at a fast rate. The headcount in all of them is small. So I would say, you know, where AI money goes, and even, you know, like if you look at OpenAI, which is the big spender in startup world, which, you know, we are also investors in, is, you know, headcount-wise, they're pretty small against their revenue. Like it is not a big company headcount. Like if you look at the revenue level and how fast they've gotten there, it's pretty small. Now, the total expenses are ginormous, but they're going into the model creation. So it's an interesting thing. I'm not entirely sure how to think about it, but I think if you're not building a foundation model, it will make you more efficient and probably get to profitability quicker. Right. So
**Marc Andreessen:** the counter, and this is a very bullish counter argument, but the counter argument to that would be basically that falling costs for like building new software companies are a mirage. And the reason for that is this thing in economics called the Jevons paradox, which I'm going to read from Wikipedia. So the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used, right? Reducing the amount of that resource necessary for any one use. But the falling cost induces increases in demand, right? Enough elasticity that the resource use overall is increased rather than reduced.
**Ben Horowitz:** Yeah, that's certainly possible.
**Marc Andreessen:** Right. And so you see versions of this, for example, you build a new freeway, and it actually makes traffic jams worse. Right. Because basically what happens is, oh, it's great. Now there's more roads. Now we can have more people live here, we can have more people, you know, we can make these companies bigger, and now there's more traffic than ever. And now the traffic's even worse. Or the classic example is, during the Industrial Revolution, coal consumption. As the price of coal dropped, people used so much more coal that the overall consumption actually increased. People were getting a lot more power, but the result was the use of a lot more coal. That's the paradox. And so the paradox here would be, yes, the cost of developing any given piece of software falls, but the reaction to that is a surge of demand for software capabilities. And so the result of that actually is, although it looks like the price of starting software companies is going to fall, what's actually going to happen is it's going to rise, for the high quality reason that you're going to be able to do so much more. Yeah. Right. With software, the products are going to be so much better and the roadmap is going to be so amazing of the things you can do. And the customers are going to be so happy with it that they're going to want more and more and more. Yeah. And by the way, another example of the Jevons paradox playing out in a related industry is in Hollywood. You know, CGI in theory should have reduced the price of making movies; in reality it has increased it, because audience expectations went up. And now you go to a Hollywood movie and it's wall-to-wall CGI. And so, you know, movies are more expensive to make than ever. And so the result in Hollywood is at least much more, let's say, visually elaborate movies, whether they're better or not is another question, but like much more visually elaborate, compelling, kind of visually stunning movies through CGI. The version here would be much more software. Yeah. Like radically better software to the end user, which causes end users to want a lot more software, which causes actually the price of development to rise.
**Ben Horowitz:** You know,
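The Jevons dynamic is easy to see with invented numbers (illustration only, not a forecast): the per-app cost falls 10x, induced demand rises 30x, and total spending triples.

```python
# Jevons paradox, with made-up numbers: efficiency gains cut unit cost,
# but induced demand grows faster, so total resource use rises.
cost_per_app_before, apps_demanded_before = 1_000_000, 100
cost_per_app_after = cost_per_app_before / 10    # AI makes each app 10x cheaper
apps_demanded_after = apps_demanded_before * 30  # cheaper software, far more demand

print(cost_per_app_before * apps_demanded_before)  # 100,000,000 total spend before
print(cost_per_app_after * apps_demanded_after)    # 300,000,000 total spend after
```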
---
### [The Overrated Value of Data](https://share.snipd.com/snip/8ace5cf7-d3f7-49a0-94ac-87dec05ff6e9)
🎧 38:42 - 43:38 (04:55)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/af28cd52-646d-45fe-b4c8-0c4a90033463"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Proprietary data is often overvalued as a competitive advantage for AI companies.
- Abundant internet data often surpasses the value of specific company data.
#### 💬 Quote
> There's no large marketplace for data. In fact, what there are is there are very small markets for data.
> — Marc Andreessen
Marc Andreessen arguing that proprietary data is overvalued
#### 📚 Transcript
**Marc Andreessen:** Okay, good. All right, so let's go to the next topic. So on the topic of data, so Major Tom asks, as these AI models allow for us to copy existing app functionality at minimal cost, proprietary data seems to be the most important moat. How do you think that will affect proprietary data value? What other moats do you think companies can focus on building in this new environment? And then Jeff Weishaupt asks, how should companies protect sensitive data, trade secrets, proprietary data, individual privacy in the brave new world of AI? So let me start with a provocative statement, Ben, see if you agree with it, which is, you know, you sort of hear a lot this sort of statement or cliche, which is like, data is the new oil. And so it's like, OK, you know, data is the key input to training AI, making all this stuff work. And so, you know, therefore, you know, data is basically the new, the new resource. It's the limiting resource. It's the super valuable thing. And so, you know, whoever has the best data is going to win. And you see that directly in how you train AIs. And then, you know, you also have like a lot of companies, of course, that are now trying to figure out what to do with AI. And a very common thing you'll hear from companies is, well, we have proprietary data, right? So I'm a, you know, I'm a hospital chain or I'm a, you know, whatever, any kind of business, insurance company or whatever. And I've got all this proprietary data that I can apply, you know, that I'll be able to build things with my proprietary data with AI that won't just, you know, be something that anybody will be able to have. Let me argue that basically, in like almost every case like that, it's not true. It's basically what the internet kids would call cope. It's simply not true. And the reason it's just not true is because the amount of data available on the internet and just generally in the environment is just a million times greater. And so while it may not, you know, while it may not be true that I have your specific medical information, I have so much medical information off the internet for so many people in so many different scenarios that it just swamps the value of, quote, your data. You know, it's just overwhelming. And so your proprietary data as, you know, Company X will be a little bit useful on the margin, but it's not actually going to move the needle, and it's not really going to be a barrier to entry in most cases. And then let me cite as proof for my belief that this is mostly cope: there has never been, nor is there now, any sort of, basically, any level of sort of rich or sophisticated marketplace for data. There's no large marketplace for data. In fact, what there are is there are very small markets for data. So there are these businesses called data brokers that will sell you large numbers of information about users on the internet or something. And they're just small businesses. They're just not large. It just turns out like information on lots of people is just not very valuable. And so if the data actually had value, you know, it would have a market price and you would see it transacting, and you actually very specifically don't see that, which is sort of, you know, quantitative proof that the data actually is not nearly as valuable as people think it is.
**Ben Horowitz:** Where I agree, so I agree that the data, like just, here's a bunch of data and I can sell it without doing anything to the data, is like massively overrated. I definitely agree with that. And like maybe I can imagine some exceptions, like some, you know, special population genomic databases or something that were very hard to acquire and that are useful in some way. That's, you know, that's not just like living on the Internet or something like that. I could imagine where that's super highly structured, very general purpose and not widely available. But for most data in companies, it's not like that, in that it tends to be either widely available or not general purpose. It's kind of specific. Having said that, right, like companies have made great use of data. For example, a company that you're familiar with, Meta, uses its data to kind of great ends itself, feeding it into its own AI systems, its products in incredible ways. And I think that, you know, us, Andreessen Horowitz, actually, you know, so we just raised $7.2 billion. And it's not a huge deal, but we took our data and we put it into an AI system. And our LPs were able, there's a million questions investors have about everything we've done, our track record, every company we've invested in and so forth. And for any of those questions, they could just ask the AI. They could wake up at three o'clock in the morning and go, do I really want to trust these guys? And go in and ask the AI a question. And boom, they'd get an answer back instantly. They didn't have to wait for us and so forth. So we really kind of improved our investor relations product tremendously through use of our data. And I think that almost every company can improve its competitiveness through use of its own data. But the idea that it's collected some data that it can go like sell or that is oil or what have you, that's, yeah, that's probably not true, I would say. And,
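The LP question-answering system Ben describes follows a common pattern: retrieval-augmented generation over a firm's own documents. A generic, hypothetical sketch (toy similarity search; the `embed` and `llm` functions are stand-ins; this is not a16z's actual system):

```python
from typing import Callable, List, Tuple

Embedding = List[float]

def build_index(docs: List[str], embed: Callable[[str], Embedding]) -> List[Tuple[str, Embedding]]:
    # Embed every internal document once, up front.
    return [(doc, embed(doc)) for doc in docs]

def top_k(index: List[Tuple[str, Embedding]], query_vec: Embedding, k: int = 3) -> List[str]:
    # Dot-product similarity; assumes embeddings are normalized.
    def score(vec: Embedding) -> float:
        return sum(a * b for a, b in zip(vec, query_vec))
    return [doc for doc, vec in sorted(index, key=lambda pair: -score(pair[1]))[:k]]

def answer(question: str, index, embed, llm: Callable[[str], str]) -> str:
    # Ground the model in the firm's own data before it answers.
    context = "\n\n".join(top_k(index, embed(question)))
    return llm(f"Using only the context below, answer the question.\n\n{context}\n\nQ: {question}")
```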
---
### [Why Your Own Code Base Isn't a Data Moat](https://share.snipd.com/snip/03c050d0-dbb3-45c9-99ca-553948714607)
🎧 43:38 - 44:20 (00:42)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/05cd1a92-8c85-4bf3-b8ae-f3d8dbb0a7c3"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Companies should leverage their data to improve internal products and competitiveness.
- Selling data or treating it as a primary asset is generally not advisable.
#### 💬 Quote
> almost every company can improve its competitiveness through use of its own data. But the idea that it's collected some data that it can go like sell, or that is oil, or what have you. That's, yeah, that's probably not true, I would say.
> — Ben Horowitz
Ben Horowitz on how companies should use their data
#### 📚 Transcript
**Ben Horowitz:** you know, it's kind of interesting because a lot of the data that you would think would be the most valuable would be like your own code base, right? Your software that you've written. Much of that lives in GitHub. Nobody is actually, I don't know of any company, we work with, you know, whatever, a thousand software companies, and do we know any that's like building their own programming model on their own code? And would that be a good idea? Probably not, just because there's so much code out there that the systems have been trained on. So that's not so much of an advantage. So I think it's a very specific kind of data that would have value. Well, let's make it actionable then. If I'm running
---
### [The Rise of Network Effects in AI](https://share.snipd.com/snip/c6e59982-55a8-494b-b512-26ac166a2423)
🎧 53:27 - 57:40 (04:12)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/64900f08-25cc-4fba-b4c7-1155db8fa078"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- The AI boom resembles the PC/microprocessor boom more than the internet boom.
- AI is a new kind of computer (probabilistic), while the internet connected existing ones (deterministic).
#### 💬 Quote
> The internet was a network, whereas AI is a computer.
> — Marc Andreessen
Marc Andreessen explaining the difference between the internet and AI
#### 📚 Transcript
**Marc Andreessen:** asks, what are the strongest common themes between the current state of AI and Web 1.0? And so let me start there. Let me give you a theory, Ben, and see what you think. So I get this question, you know, because of my role, and Ben, you were with me at Netscape, you know, we get this question a lot because of our role early on with the internet. And so there's, you know, the internet boom was like a major, major event in technology, and it's still within a lot of, you know, people's memories. And so, you know, people like to reason from analogy. So it's like, okay, the AI boom must be like the internet boom. Starting an AI company must be like starting an internet company. And so, you know, what is this like? And we actually got a bunch of questions like that, you know, that are kind of analogy questions like that. I actually think, you know, and then Ben, you know, you and I were there for the internet boom. So we, you know, we lived through that and the bust and the boom and the bust. So I actually think that the analogy doesn't really work for the most part. It works in certain ways, but it doesn't really work for the most part. And the reason is because the internet was a network, whereas AI is a computer.
**Ben Horowitz:** Yep. Okay,
**Marc Andreessen:** yeah. So, some people understand what we're saying. More like the PC
**Ben Horowitz:** boom. Or
**Marc Andreessen:** the PC boom, or even I would say the microprocessor, like my best analogy is to the microprocessor. Yeah. Or even to like the original computers, like back to the mainframe era. And the reason is because, yeah, look, what the internet did was, the internet, you know, obviously a network, but the network connected together many existing computers. And then, of course, people built many other new kinds of computers to connect to the internet. But fundamentally, the internet was a network. And that's important because most of the sort of industry dynamics, competitive dynamics, startup dynamics around the internet had to do with basically either building networks or building applications that run on top of networks. And, you know, the internet generation of startups was very consumed by network effects. And, you know, all these positive feedback loops that you get when you connect a lot of people together. And, you know, things like, you know, so-called Metcalfe's Law, which is sort of, the value of a network, you know, expands as you add more people to it. And then, you know, there were all these fights, you know, all the social networks or whatever fighting to try to get network effects and try to steal each other's users because of the network effects. And so it was kind of, you know, dominated by network effects, which is what you expect from a network business. AI, like there are some network effects in AI that we can talk about, but it's more like a microprocessor. It's more like a chip. It's more like a computer in that it's a system that basically, right, data comes in, data gets processed, data comes out, things happen. That's a computer. It's an information processing system. It's a computer. It's a new kind of computer. You know, we like to say the sort of computers up until now have been what are called von Neumann machines, which is to say they're deterministic computers, which is they're like, you know, hyper literal and they do exactly the same thing every time. And if they make a mistake, it's the programmer's fault. But they're very limited in their ability to interact with people and understand the world. You know, we think of AI and large language models as a new kind of computer, a probabilistic computer, a neural network based computer that, you know, by the way, is not very accurate and, you know, doesn't give you the same result every time and in fact might actually argue with you and tell you that it doesn't want to answer your question. Yeah,
**Ben Horowitz:** yeah. Which makes it very different in nature than the old computers. And it makes kind of composability, you know, the ability to build big things out of little things, more complex. Right.
**Marc Andreessen:** But the capabilities are new and different and valuable and important because it can understand language and images and, you know, all these things that you see when you use these things. All of that
**Ben Horowitz:** means we could never solve with deterministic computers we can now go after, right? Yeah,
**Marc Andreessen:** exactly. And so I think, Ben, I think the analogy and I think the lessons learned are much more likely to be drawn from the early days of the computer industry or from the early days of the microprocessor than the early days of the internet? Does that sound right? I
**Ben Horowitz:** think so. Yeah, I definitely think so. And that doesn't mean there's no like boom and bust and all that, because that's just the nature of technology. You know, people get too excited and then they get too depressed. So there'll be some of that, I'm sure. There'll be overbuild-outs, you know, potentially, eventually, of chips and power and that kind of thing. You know, we start with the shortage. But agreed. Like, I think networks are fundamentally different in the nature of how they evolved than computers. And kind of just the adoption curve and all those kinds of things will be different. Yeah.
**Marc Andreessen:** So
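For reference, the Metcalfe's Law Marc mentions is usually stated as: a network's value grows roughly with the number of possible connections among its users, i.e. quadratically in the user count.

$$V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \sim n^2$$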
---
### [The Computer Industry Today Is a Massive Pyramid](https://share.snipd.com/snip/b8c31151-f268-457b-b2e0-c53640a6edb9)
🎧 57:40 - 01:03:02 (05:21)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/f89a2c04-6de8-485d-a1ab-8f6889668e7e"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Past computing eras relied on complexity as a lock-in mechanism.
- AI's ease of use (talking to a person) changes this dynamic.
#### 💬 Quote
> Nobody ever got fired for buying IBM because you had people trained on them, people knew how to use the operating system. [...] It's an interesting question with AI because AI is the easiest computer to use by far, it speaks English. It's like talking to a person.
> — Ben Horowitz
Ben Horowitz discussing lock-in and ease of use in different computing eras
#### 📚 Transcript
**Marc Andreessen:** then this kind of goes to how I think the industry is going to unfold. And so this is kind of my best theory for kind of what happens from here. It's kind of this, you know, giant question of like, you know, is the industry going to be a few God models or, you know, a very large number of models of different sizes and so forth. So the computer, like famously, you know, the original computers, like the original IBM mainframes, you know, the big computers, you know, they were very, very large and expensive. And there were only a few of them. And the prevailing view, actually, for a long time was that's all there would ever be. And there was this famous statement by Thomas Watson, Sr., who was the creator of IBM, you know, which was the dominant company for the first, like, you know, 50 years of the computer industry. And he said, and I believe this is actually true, he said, I don't know that the world will ever need more than five computers. And I think the reason for that, it was literally, it was like the government's going to have two, and then there's like three big insurance companies, and then that's it. Who
**Ben Horowitz:** else would need to do all that math?
**Marc Andreessen:** Exactly. Who else would need to, who else needs to keep track of huge amounts of numbers? Who else needs that level of calculation capability? It's just not a relevant, you know, it's just not a relevant concept. And by the way, they were like big and expensive. And so who else can afford them? Right. And who else can afford all the headcount required to manage them and maintain them? I mean, and this is in the days, I mean, these things were big. These things were so big that you would have an entire building that got built around a computer. Right. And they'd have like, they famously had all these guys in white lab coats, literally like taking care of the computer because everything had to be kept super clean or the computer would stop working. And so, you know, it was this thing where, you know, today we have the idea of an AI God model, which is like a big foundation model; back then, you know, it was the idea of like a God mainframe. Like there would just be a few of these things. And by the way, if you watch old science fiction, it almost always has this sort of conceit. It's like, okay, there's a big supercomputer and it either is like doing the right thing or doing the wrong thing. And if it's doing the wrong thing, you know, that's often the plot of the, of the science fiction movies is you have to go in and try to figure out how to fix it or defeat it. And so it's sort of this, this idea of like a single top-down thing, of course, and that held for a long time, like that held for, you know, the first few decades. And then, you know, even when computers, computers started to get smaller. So then you had so-called minicomputers as the next phase. And so that was a computer that, you know, didn't cost $50 million. Instead, it cost, you know, $500,000. But even still, $500,000 is a lot of money. People aren't putting minicomputers in their homes. And so it's like midsize companies can buy minicomputers, but people can't. And then, of course, with the PC, they shrunk down to like $2,500. And then with the smartphone, they shrunk down to $500. And then, you know, sitting here today, obviously, you have computers of every shape, size, description, all the way down to, you know, computers that cost a penny. You know, you've got a computer in your thermostat that, you know, basically controls the temperature in the room. And it, you know, cost a penny. And it's probably some embedded ARM chip with firmware on it. And there's, you know, many billions of those all around the world. You buy a new car today, it has something. New cars today have something on the order of 200 computers in them. Maybe more at this point. And so, sitting here today, you just kind of assume that everything has a chip in it. You assume that everything, by the way, draws electricity or has a battery because it needs to power the chip. And then increasingly you assume that everything's on the internet because basically all computers are assumed to be on the internet or they will be. And so as a consequence, what you have is the computer industry today is this massive pyramid and you still have a small number of like these supercomputer clusters or these giant mainframes that are like the God model, you know, the God mainframes. And then you've got, you know, a larger number of minicomputers. You've got a larger number of PCs. You've got a much larger number of smartphones. And then you've got a giant number of embedded systems. 
And it turns out like the computer industry is all of those things. And, you know, what size of computer do you want is based on what exactly are you trying to do and who are you and what do you need? And so if that analogy holds, it basically means actually we are going to have AI models of every conceivable shape, size, description, capability, right, trained on lots of different kinds of data, running at very different kinds of scale, with different privacy policies, different security policies. You know, you're just going to have like enormous variability and variety, and it's going to be an entire ecosystem and not just a couple of companies. Yeah, let me see what you think of that.
**Ben Horowitz:** Well, I think that's right. And I also think that the other thing that's interesting about this era of computing, if you look at prior eras of computing from the mainframe to the smartphone, a huge source of lock-in was basically the difficulty of using them. So, you know, nobody ever got fired for buying IBM because, like, you know, you had people trained on them, you know, people knew how to use the operating system. Like, it was, you know, kind of like a safe choice due to the massive complexity of dealing with a computer. And then even with the smartphone, why is the Apple smartphone so dominant? What makes it so powerful as well? Because switching off of it is so expensive and complicated and so forth. It's an interesting question with AI, because AI is the easiest computer to use by far. It speaks English. It's like talking to a person. And so like, what is the lock-in there? And so are you completely free to use the size, price, choice, speed that you need for your particular task? Or are you locked into the God model? And, you know, I think it's still a bit of an open question, but it's pretty interesting in that that thing could be very different than prior generations.
**Marc Andreessen:** Yeah,
---
### [The Hype Cycle of New Technology](https://share.snipd.com/snip/7b47b7f6-1b20-4237-a48d-266730dac675)
🎧 01:03:02 - 01:06:56 (03:54)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/ceee568e-342c-4666-8304-8e3d75fbaf78"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Expect boom and bust cycles in AI, similar to previous tech advancements.
- Overbuilding in areas like chips and data centers is likely.
#### 💬 Quote
> I think a big one is probably just the boom bust nature of it that, like, you know, the demand, the interest in the internet, the recognition of what it could be was so high that money just kind of poured in in buckets.
> — Ben Horowitz
Ben Horowitz discussing boom and bust cycles in technology
#### 📚 Transcript
**Marc Andreessen:** yeah, that makes sense. And then just to complete the question, what would we say? So Ben, what would you say are lessons learned from the internet era that we lived through that would apply, that people
**Ben Horowitz:** should think about? I think a big one is probably just the boom bust nature of it that, like, you know, the demand, the interest in the internet, the recognition of what it could be was so high that money just kind of poured in in buckets. And, you know, and then the underlying thing, which in the internet age was the telecom infrastructure and fiber and so forth, got just unlimited funding, and unlimited fiber was built out. And then eventually we had a fiber glut and all the telecom companies went bankrupt, and that was great fun. But, you know, like we ended up in a good place. And I think that something like that's probably pretty likely to happen in AI, where like, you know, every company is going to get funded. We don't need that many AI companies. So a lot of them are going to bust. There are going to be huge, you know, huge investor losses. There will be an overbuild-out of chips for sure at some point. And then, you know, we're going to have too many chips and yeah, some chip companies will go bankrupt for sure. And then, you know, and I think probably the same thing with data centers and so forth, like we'll be behind, behind, behind, and then we'll overbuild at some point. So that'll all be very interesting. And that's kind of the story of every new technology. So Carlota Perez has done, you know, amazing work on this, where like that is just the nature of a new technology: you underbuild, then you overbuild. And, you know, and there's a hype cycle that funds the build out and a lot of money is lost, but we get the infrastructure. And that's awesome because that's when it really gets adopted and changes the world. I want to say, you know, with the internet, the other kind of big thing is the internet went through a couple of phases, right? Like it went through a very open phase, which was unbelievably great. It was probably one of the greatest boons to the economy. It, you know, it certainly created tremendous growth and power in America, both, you know, kind of economic power and soft cultural power and these kinds of things. And then, you know, it became closed with the next generation architecture, with, you know, kind of discovery on the Internet being owned entirely by Google and, you know, kind of other things, you know, being owned by other companies. And, you know, AI, I think, could go either way. So it could be very open, or, like, you know, with kind of misguided regulation, you know, we could actually force our way away from something that, you know, is open source, open weights, anybody can build it, we'll have a plethora of this technology, we'll, like, use all of American innovation to compete. Or we'll, you know, we'll cut it all off. We'll force it into the hands of the companies that kind of own the internet today. And, you know, and we'll put ourselves at a huge disadvantage, I think, competitively against China in particular, but everybody in the world. So I think that's something that definitely, you know, that we're involved with trying to make sure it doesn't happen, but it's a real possibility right now. Yeah.
**Marc Andreessen:** There's sort of an irony, which is that networks used to be all proprietary, and then they opened up.
**Ben Horowitz:** Yeah, yeah, yeah, right. LAN Manager, AppleTalk,
**Marc Andreessen:** NetBEUI, NetBIOS. Yeah, exactly. And so these were all the early proprietary networks from individual specific vendors, and then the internet appeared with kind of TCP/IP, and everything opened up. AI is trying to go the other way. I mean, the big companies are trying to take AI the other way. It started out as, like, open, just like, basically, just like the research. Everything was open source in AI, yeah. Right, right, right. And now they're trying to lock it down. So it's a fairly nefarious turn of events.
---
### [The Darkest Side of Capitalism](https://share.snipd.com/snip/21d8771f-0405-47cb-bbc4-d58e1588a7f8)
🎧 01:06:56 - 01:07:50 (00:53)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/850ea8d8-6f0f-474c-a74d-fd92a1556831"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Large tech companies are pushing for closed AI systems, contradicting the internet's open history.
- They claim it's for safety while simultaneously pursuing aggressive development.
#### 💬 Quote
> it is kind of the darkest side of capitalism when a company is so greedy, they're willing to destroy the country and maybe the world to just get a little extra profit. And they do it. The really nasty thing is they claim, oh, it's for safety.
> — Ben Horowitz
Ben Horowitz criticizing big tech companies' push for closed AI systems
#### 📚 Transcript
**Marc Andreessen:** Very nefarious.
**Ben Horowitz:** It's remarkable to me. I mean, it is kind of the darkest side of capitalism when a company is so greedy, they're willing to destroy the country and maybe the world to, like, just get a little extra profit. And they do it. Like, the really kind of nasty thing is they claim, oh, it's for safety. You know, we've created an alien that we can't control, but we're not going to stop working on it. We're going to keep building it as fast as we can, and we're going to buy every freaking GPU on the planet, but we need the government to come in and stop it from being open. This is literally the current position of Google and Microsoft right now. It's crazy.
**Marc Andreessen:** And we're not going to secure it, so we're going to make sure that Chinese spies can just steal our chip plans, take them out of the country, and we won't even realize it for six months.
**Ben Horowitz:** Yeah, it has nothing to do with security. It only has to do with monopoly.
---
### [The Cycle of Speculative Mania](https://share.snipd.com/snip/9a81c306-68d0-476b-a953-3a0655937897)
🎧 01:07:50 - 01:11:09 (03:19)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/5b655b2b-1310-4c11-a7df-ad7d2566ba7d"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Speculative bubbles accompany major technological advancements.
- This is due to the inherent uncertainty in identifying successful use cases early on.
#### 💬 Quote
> It's just incontrovertibly true. Basically, every significant technology advance in history was greeted by some kind of financial bubble, basically, since financial markets had existed.
> — Marc Andreessen
Marc Andreessen on the inevitability of speculative bubbles with new technologies
#### 📚 Transcript
**Marc Andreessen:** Yes. The other thing, you know, just, Ben, going back to your point on speculation: there's this critique that we hear a lot, right? Which is basically, you idiots, entrepreneurs, investors, you idiots, there's a speculative bubble with every new technology. Like, when are you people going to learn to not do that? Yeah. And there's an old joke that relates to this, which is: the four most dangerous words in investing are "this time is different." The 12 most dangerous words in investing are "the four most dangerous words in investing are 'this time is different.'" Right. Like, so, does history repeat? Does it not repeat? My sense of it: you referenced Carlota Perez's book, which I agree is good, although I don't think it works as well anymore. We can talk about that sometime, but it's a good, at least, background piece on this. It's just incontrovertibly true: basically every significant technology advance in history was greeted by some kind of financial bubble, basically since financial markets have existed. And by the way, this includes everything from radio and television to the railroads, lots and lots of prior examples. By the way, there was actually a so-called electronics boom-bust in the 60s; it was called the 'tronics boom. Every company had "tronics" in the name. And so, you know, there was that. There was, like, a laser boom-bust cycle. There were all these, like, boom-bust cycles. And so basically, any new technology that's what economists call a "general purpose technology," which is to say something that can be used in lots of different ways, inspires sort of a speculative mania. And, you know, look, the critique is like, okay, why do you need to have this speculative mania? Why do you need to have the cycle? Because, like, you know, some people invest in the things, they lose a lot of money, and then there's this bust cycle that, you know, causes everybody to get depressed, maybe delays the rollout. And it's, like, two things. Number one is, like, well, you just don't know. Like, if it's a general purpose technology, like AI is, and it's potentially useful in many ways, nobody actually knows up front what the successful use cases are going to be or what the successful companies are going to be. Like, you actually have to, you have to learn by doing.
**Ben Horowitz:** You're going to have some misses. That's venture capital.
**Marc Andreessen:** Yeah, exactly.
**Ben Horowitz:** Yeah.
**Marc Andreessen:** Exactly. So yeah, the true venture capital model kind of wires this in, right? We basically, in core venture capital, the kind that we do, we sort of assume that half the companies fail, half the projects fail. And, you know, if we or any of our...
**Ben Horowitz:** Fail completely, like, lose money.
**Marc Andreessen:** Lose money, exactly, yeah. And so, of course, if we or any of our competitors, you know, could figure out how to do the 50% that work without doing the 50% that don't work, we would do that. But, you know, here we sit, 60 years into the field, and, like, nobody's figured that out. So there is that unpredictability to it. And then the other kind of interesting way to think about this is, like, okay, what would it mean to have a society in which a new technology did not inspire speculation? It would mean having a society that basically is just, like, inherently, like, super pessimistic about both the prospects of the new technology, but also the prospects of entrepreneurship and, you know, people inventing new things and doing new things. And of course, there are many societies like that on planet Earth, you know, that just, like, fundamentally don't have the spirit of invention and adventure that, you know, a place like Silicon Valley does. And, you know, are they better off or worse off? And, you know, generally speaking, they're worse off. They're just, you know, less future-oriented, less focused on building things, less focused on figuring out how to get growth. And so I think, at least my sense is, there's a comes-with-the-territory thing. Like, we would all prefer to avoid the downside of a speculative boom-bust cycle, but, like, it seems to come with the territory every single time. And at least no society I'm aware of has ever figured out how to capture the good without also having the bad. Yeah.
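A listener's note on the "half the companies fail" math: the dynamic Marc describes is easy to sanity-check with a toy model. The sketch below is not from the episode; every outcome bucket, probability, and return multiple in it is a hypothetical assumption, chosen only to illustrate how a small tail of outliers can carry a fund in which half the checks go to zero.

```python
# A toy sketch (not from the episode): why a venture portfolio can work
# even when half the companies lose all of their money. Every bucket
# probability and return multiple below is a hypothetical assumption.
import random

random.seed(0)

def simulate_portfolio(n_companies=20, check_size=1.0):
    """Return the fund multiple under a made-up power-law outcome model."""
    total_returned = 0.0
    for _ in range(n_companies):
        r = random.random()
        if r < 0.50:        # half fail completely: the check goes to zero
            multiple = 0.0
        elif r < 0.80:      # modest outcomes roughly return the capital
            multiple = random.uniform(0.5, 2.0)
        elif r < 0.95:      # solid wins
            multiple = random.uniform(3.0, 10.0)
        else:               # rare outliers carry the whole portfolio
            multiple = random.uniform(20.0, 100.0)
        total_returned += check_size * multiple
    return total_returned / (n_companies * check_size)

# Average over many simulated funds.
funds = [simulate_portfolio() for _ in range(10_000)]
print(f"mean fund multiple: {sum(funds) / len(funds):.2f}x")
```

Under these made-up numbers the expected fund multiple works out to about 4.35x (0.5·0 + 0.3·1.25 + 0.15·6.5 + 0.05·60), and nearly all of it comes from the rare 5% outlier bucket, which is exactly the unpredictability Marc points to: nobody knows in advance which names land there.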
---
### [The 12 Most Dangerous Words in Investing](https://share.snipd.com/snip/b71279de-ddc9-492d-9079-f7847f71e0bb)
🎧 01:08:07 - 01:09:37 (01:29)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/c91617e3-9316-47a3-808e-cee2459f5c2d"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Every significant technological advancement in history has been accompanied by a financial bubble.
- This pattern holds true for general-purpose technologies like AI, radio, television, and railroads.
- These bubbles are inevitable because it's impossible to predict successful use cases or companies in advance.
- The "this time is different" mindset is dangerous in investing because this pattern repeats.
- Venture capital accounts for this by expecting a certain percentage of failures.
#### 💬 Quote
> Every significant technology advance in history was greeted by some kind of financial bubble [...] since financial markets had existed.
> — Marc Andreessen
Marc Andreessen on the historical trend of tech bubbles.
#### 📚 Transcript
**Marc Andreessen:** There's an old joke that relates to this, which is: the four most dangerous words in investing are "this time is different." The 12 most dangerous words in investing are "the four most dangerous words in investing are 'this time is different.'" Right. Like, so, does history repeat? Does it not repeat? My sense of it: you referenced Carlota Perez's book, which I agree is good, although I don't think it works as well anymore. We can talk about that sometime, but it's a good, at least, background piece on this. It's just incontrovertibly true: basically every significant technology advance in history was greeted by some kind of financial bubble, basically since financial markets have existed. And by the way, this includes everything from radio and television to the railroads, lots and lots of prior examples. By the way, there was actually a so-called electronics boom-bust in the 60s; it was called the 'tronics boom. Every company had "tronics" in the name. And so, you know, there was that. There was, like, a laser boom-bust cycle. There were all these, like, boom-bust cycles. And so basically, any new technology that's what economists call a "general purpose technology," which is to say something that can be used in lots of different ways, inspires sort of a speculative mania. And, you know, look, the critique is like, okay, why do you need to have this speculative mania? Why do you need to have the cycle? Because, like, you know, some people invest in the things, they lose a lot of money, and then there's this bust cycle that, you know, causes everybody to get depressed, maybe delays the rollout. And it's, like, two things. Number one is, like, well, you just don't know. Like, if it's a general purpose technology, like AI is, and it's potentially useful in many ways, nobody actually knows up front what the successful use cases are going to be or what the successful companies are going to be. Like, you actually have to, you have to learn...
---
### [The Power of Transferring Money](https://share.snipd.com/snip/9597c47a-1086-4f1c-b3fd-19e60dde2ad3)
🎧 01:11:09 - 01:13:45 (02:35)
<iframe
src="https://share.snipd.com/embed/obsidian-player/snip/3dcaf00d-c478-4090-9796-beb56a03574b"
width="100%"
height="100"
style="border: none; border-radius: 12px;"
sandbox="allow-scripts allow-same-origin allow-forms allow-popups allow-clipboard-write"
></iframe>
- Avoiding speculation entirely requires societal pessimism towards new ventures.
- Embracing speculation, despite risks, fosters innovation and progress.
#### 💬 Quote
> why would you be mad at, you know, young ambitious people trying to improve the world, getting funded, and some of that being misguided?
> — Ben Horowitz
Ben Horowitz on the benefits of speculation and funding new ventures
#### 📚 Transcript
**Ben Horowitz:** And, like, why would you? I mean, it's kind of like, you know, the whole Western United States was built off the gold rush, and, like, every kind of treatment in popular culture of the gold rush kind of focuses on the people who didn't make any money. But there were people who made a lot of money, you know, and found gold. And, you know, in the internet bubble, which, you know, was completely ridiculed by, you know, kind of every movie, if you go back and watch any movie between, like, 2001 and 2004, they're all, like, how only morons did a dot-com and this and that and the other. And there were all these funny documentaries and so forth. But, like, that's when Amazon got started. You know, that's when eBay got started. That's when Google got started. You know, these companies that were started in the bubble, in the kind of time of this great speculation, there was gold in those companies. And if you hit any one of those, like, you funded, you know, probably the next set of companies, you know, which included things like, you know, Facebook and X and, you know, Snap and all these things. And yeah, like, that's just the nature of it. I mean, like, that's what makes it exciting. And, you know, it's just an amazing kind of thing that, you know, look, the transfer of money from people who have excess money to people who are trying to do new things and make the world a better place is the greatest thing in the world. And if some of the people with excess money lose some of that excess money in trying to make the world a better place, like, why are you mad about that? Like, that's the thing that I could never see. Like, why would you be mad at, you know, young, ambitious people trying to improve the world, getting funded, and some of that being misguided? Like, why is that bad?
**Marc Andreessen:** Right, right. As compared to, yeah, especially as compared to everything else in the world and all the people who are not trying to do anything with their money.
**Ben Horowitz:** So you'd rather, like, we just buy, like, you know, lots of mansions and boats and jets?
**Marc Andreessen:** Right. Like, what are you talking about? Right. Exactly. Or donate money to ruinous causes. Yeah, ruinous causes. Such as ones that are on the news right now. Okay. So, all right. We're at an hour 20. We made it all the way through four questions. We're doing good. We're doing great. So let's call it here. Thank you, everybody, for joining us. And I believe we should do a part two of this, if not parts three through six, because we have a lot more questions to go. But thanks, everybody, for joining us today.
---
Created with [Snipd](https://www.snipd.com) | Highlight & Take Notes from Podcasts