It’s an “open field”

If you’ve been closely following the progress of OpenAI, the company run by Sam Altman whose neural networks can now write original text and create original images with astonishing ease and speed, you might just skip this piece.

If, on the other hand, you’ve only paid vague attention to the company’s progress and to the growing traction that other so-called “generative” AI companies are suddenly gaining, and you want to better understand why, you might benefit from this interview with James Currier, a five-time founder and now venture capitalist who co-founded the firm NFX five years ago with several of his serial-founder friends.

Currier falls into the camp of people who follow the progress closely – so closely that NFX has made many related investments in “generative technology,” as he describes it, and it’s getting more of the team’s attention every month. In fact, Currier thinks the buzz around this new wrinkle on AI isn’t so much hype as it is a realization that the wider startup world is suddenly facing a very big opportunity for the first time in a long time. “Every 14 years,” says Currier, “we get one of these Cambrian explosions. We had one around the internet in 1994. We had one around mobile phones in 2008. Now we’re having another one in 2022.”

In retrospect, this editor would have liked to ask better questions, but I’m learning here too. Excerpts from our chat follow, edited for length and clarity. You can listen to our longer conversation here.

TC: There’s a lot of confusion about generative AI, including how new it is or whether it’s just become the latest buzzword.

JC: I think what happened in the AI world in general was that we felt we could have deterministic AI, which would help us identify the truth of something. For example, is that a broken part on the production line? Is this an appropriate meeting to be having? That’s determining something using AI the same way a human determines something. That’s largely what AI has been for the last 10 to 15 years.

The other set of AI algorithms were these diffusion algorithms, which were more about looking at huge corpora of content and then generating something new from them, saying, “Here are 10,000 examples. Can we create the 10,001st example that is similar?”

These were pretty fragile, pretty brittle, until about a year and a half ago. [Now] the algorithms have gotten better. But more importantly, the corpora of content we’re looking at have gotten bigger because we simply have more processing power. So what happened is that these algorithms rode Moore’s Law – [with vastly improved] storage, bandwidth, speed of computation – and suddenly became capable of producing something that looks very much like what a human would produce. That means the face value of the text it writes and the face value of the drawing it draws look very similar to what a human would do. And all of that has happened in the last two years. So it’s not a new idea, but it has just hit that threshold. That’s why everyone looks at it and says, “Wow, that’s magic.”
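(To make the “here are 10,000 examples, can we create the 10,001st” idea concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library; the library, the GPT-2 model, and the prompt are this editor’s illustration, not something Currier names.)

    # A toy illustration of generating the "10,001st example": GPT-2 was
    # trained on a large corpus of web text, and given a prompt it produces
    # new text that is statistically similar to what it has seen.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "Once upon a time, in a village by the sea,",
        max_new_tokens=40,       # length of the new continuation
        num_return_sequences=1,  # how many candidate continuations to return
    )
    print(result[0]["generated_text"])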

So it was computing power that suddenly changed the game, and not a previously missing technological infrastructure?

It didn’t change suddenly; it changed gradually until the quality of what it generated got to where it was meaningful to us. So the answer is generally no, the algorithms were very similar. The diffusion algorithms themselves have gotten somewhat better. But really, it’s about the processing power. Then, about two years ago, [the powerful language model] GPT came out, which was a kind of compute you ran locally, and then GPT-3 came out, where [the AI company OpenAI] would do [the computation] for you in the cloud; because the data models were so much larger, they had to do it on their own servers. You can’t afford to do it [on your own]. And at that point, things really took off.

We know this because we invested in a company that makes generative AI-based games, including “AI Dungeon,” and I think the vast majority of all GPT-3 compute was coming from “AI Dungeon” at one point.
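(As a rough sketch of what “doing the computation for you in the cloud” looks like from a developer’s seat, here is an example call using OpenAI’s Python library as it existed around that time; the model name, prompt, and parameters are illustrative, not from the interview.)

    # The model is far too large to run on your own machine, so you send a
    # prompt to OpenAI's servers over an API and get the completion back.
    import openai

    openai.api_key = "sk-..."  # your API key

    response = openai.Completion.create(
        model="text-davinci-003",  # a hosted GPT-3-family model
        prompt="Write a one-sentence pitch for a fantasy adventure game.",
        max_tokens=60,
        temperature=0.8,
    )
    print(response.choices[0].text.strip())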

So does “AI Dungeon” require a smaller team than another game maker?

That’s one of the big pluses, absolutely. They don’t have to spend all that money to house all that data, and with a small group of people they can produce dozens of gaming experiences that all take advantage of it. [In fact] the idea is that you’re going to add generative AI to older games, so your non-player characters can actually say something more interesting than they do today, though you’ll get fundamentally different gameplay experiences from games built natively around AI than from adding AI to existing games.

So a big change is in the quality? Will this technology peak at some point?

No, it will always get better and better. It’s just that the incremental differences will be smaller over time, because it’s already getting pretty good.

But the other big change is that OpenAI wasn’t really open. They generated this amazing thing, but it was closed and very expensive. So groups like Stability AI and others got together and said, “Let’s just do open-source versions of this.” And with that, the cost has dropped 100x, just in the last two or three months.

These are not offshoots of OpenAI.

All of this generative tech will not be built solely on OpenAI’s GPT-3 model; that was just the first one. The open-source community has now replicated a lot of their work, and they’re probably six or eight months behind in terms of quality. But it will get there. And because the open-source versions cost a third, or a fifth, or a twentieth of what OpenAI costs, you’re going to see a lot of price competition, and you’re going to see a proliferation of these models that compete with OpenAI. You’re probably going to end up with five, or six, or eight of them, or maybe 100 of them.

Then, on top of these, unique AI models will get built. So you might have an AI model that’s really aimed at doing poetry, or AI models that really look at how you do visual images of dogs and dog hair, or one that’s really specialized in writing sales emails. You’re going to have a whole layer of these purpose-built, specialized AI models. Then above those, you’ll have all the generative tech, which is: how do you get people to use the product? How do you get people to pay for it? How do you get people to connect? How do you get people to share it? How do you create network effects?
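(For a sense of what the open-source alternatives Currier describes look like in practice, here is a minimal sketch that runs Stability AI’s openly released Stable Diffusion model on your own GPU via the Hugging Face diffusers library; the model ID and prompt are this editor’s illustration.)

    # Running an openly released image model locally instead of paying a
    # closed API per call; requires a CUDA-capable GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # an open-source image model
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    image = pipe("a watercolor painting of a dog with curly hair").images[0]
    image.save("dog.png")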

Who makes money here?

The application layer, where people are going to go after distribution and network effects, is where you’re going to make money.

What about the large companies that will be able to integrate this technology into their networks? Won’t it be very difficult for a company that doesn’t have that advantage to come out of nowhere and make money?

I think what you’re looking at is something like Twitch, which YouTube could have built into its model, but didn’t. And Twitch created a new platform and a valuable new piece of culture, and value for investors and founders, even though it was hard. So you’re going to have great founders who are going to use this technology to give themselves an edge. And it will create a seam in the market. And while the big guys are doing other things, they can build billion-dollar companies.

The New York Times recently published an article featuring a handful of creatives who said the generative AI applications they use in their respective fields are tools in a larger toolbox. Are people naive here? Are they at risk of being replaced by this technology? As you mentioned, the team working on “AI Dungeon” is smaller. It’s good for the business but potentially bad for the developers who might have been working on the game otherwise.

I think with most technologies there’s a kind of unease people have about, [for example,] robots replacing jobs in a car factory. When the internet came along, a lot of direct mailers felt threatened that companies would be able to sell directly and not use their print-advertising services. But [once] they embraced digital marketing or digital email communication, they probably experienced huge career bumps: increased productivity, increased speed and efficiency. The same thing happened with credit cards online. We didn’t feel comfortable putting credit cards online until maybe 2002. But those who adopted [that wave in] 2000 to 2003 did better.

I think that’s what’s happening now. The writers, designers, and architects who think ahead and embrace these tools to give themselves 2x, 3x, or 5x productivity are going to do incredibly well. I think the whole world is going to end up seeing a productivity increase over the next 10 years. It’s a huge opportunity for 90% of people to do more, to be more, to connect more.

Do you think it was a faux pas on the part of OpenAI not to [open source] what it was building, given what has grown up around it?

The leader ends up behaving differently from the followers. I don’t know, I’m not inside the company, I can’t really say. What I do know is there’s going to be a big ecosystem of AI models, and I don’t know how an AI model stays differentiated because they all asymptote towards the same quality and it just becomes a game of price. Seems to me the winners are Google Cloud and AWS because we’re all going to be building stuff like crazy.

OpenAI may end up going up or down [the stack]. Maybe they become something like an AWS themselves, or maybe they start making specialized AIs that they sell into certain verticals. I think everyone in this space will have the opportunity to do well if they navigate it properly; they’re just going to have to be smart about it.

NFX has a lot more on their site about generative AI that is worth reading, by the way; you can find that here.
