Building software with Generative AI

Nat Friedman, former CEO of GitHub and now a notable investor in AI infrastructure, sits down with Jasper CEO Dave Rogenmoser for a fireside chat. Nat speaks on his journey building Copilot at GitHub, and the challenge of going from 0 to 1, which requires you to answer two questions simultaneously: what's a new thing we can build that doesn't exist, and what's a thing people really want to use all the time that they may not yet know about? Dave also poses the question of "build vs. buy," and gathers Nat's take on all the recent news coming out of the AI space.

About this session

Dave Rogenmoser and Nat Friedman sit down to discuss building a product that people love, and what's coming in the AI space.
Nat Friedman
Former CEO
Dave Rogenmoser
Co-Founder & CEO

I would like to see, by a raise of hands, who flew more than five hours to get here. Keep your hand raised if you flew over six hours. Seven, eight, nine, ten, 11, 12, 13, 14, 15, 16, 17, 18. Eighteen hours is the longest flight. Where'd you fly from? India? From India. What's your name? Huntsman? Give it up for him. That's amazing, high score. So my name is Austin Distel. My story to get on this stage started seven years ago. I was running my marketing agency at UGA and I had hit a glass ceiling in my business. I was looking for mentorship, so I started researching. I got retargeted by some ads, I attended a webinar, and this guru gave me some good advice. Over the course of that hour, I decided I was going to use all my spring break money to buy this guy's course and just be a student of success. Over the course of three months, he helped me make a lot of breakthroughs. But what I learned most was about the character you should have in business: how to approach customers, how to build a company. And I'm really lucky that I bought that course, because the guy who taught it was none other than Dave Rogenmoser, CEO of Jasper. This was seven years ago. By some of your faces, I can tell you didn't know that he was a recovering guru, but this humble Kansas CEO has really taught me a lot about committing to your customers' needs and solving their pain points over and over again, in different ways over time. I actually ended up moving out to live with Dave. He offered his home in Annapolis, Maryland, shortly after his honeymoon with his wife. That's the kind of guy this guy is. I got on my feet and ended up working with Dave, Chris, and JP for the last seven years on multiple different companies. What's interesting about this is their level of commitment to the team, the core values, and building a strong foundation of family.
And so I knew that if I was going to commit to this path with these guys doing marketing stuff, I would end up, one, doing the best work of my life; they're challenging in the best way. Two, I'd be working with smart, hungry, and humble people. But lastly, I'd be working with people who will be friends for a lifetime. These are the core values this Kansas CEO will bring to the industry of AI. With that, I want to welcome to the stage Mr. Nat Friedman and Dave Rogenmoser for a fireside chat. You're going to hear from Dave's background as a multi-time SaaS CEO. Mr. Nat Friedman is the former CEO of GitHub and now a notable investor in AI infrastructure. This is going to be a really interesting conversation, so give them a warm round of applause.

Well, Nat, thanks for joining us. Yeah, thanks for the conference; it's been good for everybody. It has been good. It turned out better than I even hoped for, and a big part of that is because all of you came, so thank you for doing that. Now, you and I have had a few of these conversations. Usually it's just me texting you a question, or you texting me a question, and then a 30-minute call where we just talk about it. So what we thought was, basically, we want to invite you into one of our conversations. What are we talking about? I've prepared questions that I'm genuinely interested in and genuinely thinking about, and I've heard from a lot of you, even over the last eight hours, that these are the questions everybody is talking about, and I think Nat has really unique insight into them. It's about the rumors, it's about the weather balloons, what we're all wondering about. What are they? I don't know. But anyway, I'm excited to chat more about this.
And I think you've got a good perspective: as a CEO who has had to apply this in a company, as a technical person, and now as an investor who's seen the market really broadly. So thanks for joining us. Yeah, super happy to be here. I mean, it's amazing timing and it's a great space, so thanks to the folks at Jasper for putting all this together. I think it's been really cool that I keep running into people I know and people I've been meaning to meet, so you guys have really pulled a great group together. Appreciate that. So the first thing I wanted to ask you about was GitHub Copilot, an interesting story that people all want to hear: one of the very first commercial products that took the underlying core tech, said we're going to put it into a product, and gave it to end users in a delightful way. I'd love to hear how that went down. For people who don't know, tell us a bit about what the product is, what it looks like inside a company to take something from idea to a real, delightful, usable product, and the many lessons from that. Yeah, that's a great question. People love the story, so I'm happy to talk about it. Just to set the scene: it was 2021 and I was the CEO of GitHub. I had ended up in that position because I had started a company called Xamarin, which Microsoft purchased in 2016. Then in 2018 I went to Satya and said, I think we should go buy GitHub, and he kind of decided in a 20-minute conversation, like, let's do it, go get it done. That was a crazy, head-spinning experience. So we acquired GitHub and I ended up as the CEO. It was early 2021 and GPT-3 had come out, and, you know, I'm always attracted to new and exciting things, and GPT-3 blew my mind. I'd seen GPT-2, you know, the sort of sequence-to-sequence models, and we employed machine learning like every company did; every company had some amount of machine learning.
You were doing spam filtering or ranking or recommendations. But this idea of generative models that produce interesting outputs was still relatively new, and to me it was like magic. So I saw this, this was May of 2021, and I remember talking to my team and saying, I don't know what, but we're going to build something with this, some product at GitHub with GPT-3. So I reached out to the OpenAI folks. Satya, in his foresight and wisdom, had already invested in OpenAI, so we had some kind of existing channel open and I could ping them, and I knew Sam and Greg from prior lives. They said, yeah, we're really interested in doing something with code, let's explore it. Now, for people who look at Copilot today, if you don't know what the product is, it's basically a really powerful autocomplete for programmers that helps them write code; it completes a line or a block, and it's become very, very popular. But at the time, it was not obvious. It seems very obvious in retrospect, but it was not obvious, at least to me and a lot of the other smart people who were looking at this new material, this new substance that had been introduced into the world in the form of these large language models, what you could build out of it that would actually be something people love. And I think this is sort of really interesting: this is always the challenge of going from 0 to 1. You have to answer simultaneously the question of what's a new thing we can build that doesn't exist, and what's the thing people really want to use all the time that they may not even know they want to use all the time, and you have to kind of answer it from both sides. So initially I sat down with the leader of our research team, Oege de Moor, and an engineer named Alex Graveley, whom I'd worked with many times before.
And I said, let's write down a list of ideas for things this product could be. The first idea we wrote down was a Q&A chatbot for answering programming questions. Then we had all these other ideas: could it do code review, could it take an issue and generate a pull request out of it, really write large amounts of code all at once. And the fourth idea was large-scale code synthesis: write whole functions, that kind of thing. So we thought, let's just experiment, let's prototype each of these things and see how it goes, see what works and what feels good. We actually implemented a chatbot initially, and we thought we would test it on questions about Python programming. We used it internally; we had all kinds of employees using this. We built a data set of 400 questions about Python and the right answers. The models we were getting from OpenAI were improving week over week, so we thought, OK, we can benchmark these question databases we built against them and see if it's getting good enough. And what we found was really interesting. One of the things we discovered is that it was really easy to make a mind-blowing demo, and I think this is still true: a mind-blowing demo is easy to build, and if you just screenshot the right cherry-picked output, your product looks like a killer product and you love it. So you can get that wow really easily. But getting to sustained usage after that is actually quite hard, essentially because of reliability. The chatbot, with the models that were state of the art at the time, would give you the right answer like 30% of the time, and when it did, it was incredible. But then 70% of the time it was completely wrong.
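The kind of eval harness Nat describes, a fixed set of Q&A pairs scored against each new model snapshot week over week, might look roughly like the sketch below. This is an illustrative assumption, not GitHub's actual code; `ask_model` is a canned stand-in for a real model API call, and the scoring is a simple key-phrase match.

```python
def ask_model(question: str) -> str:
    # Stand-in: a real harness would call the model API here.
    canned = {
        "How do you reverse a list in Python?": "Use list.reverse() or lst[::-1].",
        "How do you open a file?": "Use the open() builtin.",
    }
    return canned.get(question, "I don't know.")

def score(benchmark: list[tuple[str, str]]) -> float:
    """Fraction of questions whose answer contains the expected key phrase."""
    hits = 0
    for question, expected_phrase in benchmark:
        answer = ask_model(question)
        if expected_phrase.lower() in answer.lower():
            hits += 1
    return hits / len(benchmark)

# A tiny stand-in for the 400-question Python benchmark from the talk.
benchmark = [
    ("How do you reverse a list in Python?", "list.reverse"),
    ("How do you open a file?", "open()"),
    ("How do you sort a dict by value?", "sorted"),
]
print(f"accuracy: {score(benchmark):.0%}")  # this toy model gets 2 of 3
```

Rerunning the same fixed benchmark against each new model drop is what lets you say "it's 30% right this week" instead of judging from cherry-picked demos.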
And, you know, if you hire me and you ask me a question and 70% of the time I give you the wrong answer, you're not usually thrilled about that experience. So we realized that the great demo, the cherry-picked slide or whatever, may be a necessary but is not a sufficient condition for product success in AI. So we discarded that idea and started playing around with this idea of code synthesis. We built a little tool where you could basically describe a whole function that you wanted in a docstring, and it would just generate that function. Then that weekend, it was a long weekend, I think we were in August or September by this point, I was working on a little hobby project at home. My daughter was three at the time, and I was trying to build a robot that could play tic-tac-toe with her, with a plotter and a camera. I was using OpenCV to do it and didn't really know how to use OpenCV. So I tried using this code synthesis tool, and again, the demos were incredible. I sent screenshots of it generating amazing functions that were perfect, but that only happened like one out of ten times, and the rest of the time it was this terrible experience. So I was like, we haven't got it yet. The next thing we tried was to add it into the editor, with a button you could push. Everyone thought this was a good idea: you have a button you can push in the editor, it opens a little sidebar, and it generates three or four options for blocks of code. Then you read them, and ideally you'd pick the right one, and the AI system would use your feedback from selecting the right thing to improve its suggestions in the future.
And that also turned out to be a really bad product, because you hit a button, you wait maybe two or three seconds, you get multiple options, and then you've got to read them. And reading code is really hard; it's a cognitive load. The most likely outcome was that none of the suggestions were good, or you couldn't tell which one was good, because you're like, I don't know which one is going to work, I've got to try it. So we're like, yes, this sort of sucks. Then Alex said, OK, coders know this little dropdown in the editor, the IntelliSense dropdown: you hit a dot and it completes the class members. He's like, maybe we can insert some GPT-3 results into that. So we tried that, and it was pretty interesting, but it wasn't perfect, because that little box would get populated by language server results, and then the GPT-3 results would populate a little later and reflow the box, and you were mixing things of different types in the same UI bucket. So it felt a little bit off. Then we realized, well, we need a different UI for doing autocomplete in an editor. And again, these things are so obvious in retrospect that you're like, how come you guys were such idiots, why didn't you figure this out immediately? We thought, what if we just do that gray text, like Gmail does in Gmail autocomplete? But the problem was the editors had no facilities for doing this; there's no gray-text primitive. So, good news: GitHub was owned by Microsoft, and the VS Code team were our partners. So I went to the VS Code team, on behalf of the folks doing Copilot, which wasn't called Copilot at the time, and I said, could you please add a gray-text feature so we can do gray text? And they said, no, fuck off.
Like, we're not going to do that, right? So this one engineer, Alex, was like, I'll just hack it in and try to make it sort of work. And he did, and we're like, ooh, this is really good. The way it worked was it would print out some gray text to complete the line, but if you hit a key, it would try to generate a whole block. So you would choose, essentially, whether you wanted to try to generate a block or not. Then someone on the team said, hey, what if we try to figure out how to decide automatically, based on your cursor position, whether you should complete a line or a block? OK, we'll use heuristics, we'll parse the code to do this. So we implemented that; I grabbed some people from GitHub who knew how to parse code and work with ASTs, and we implemented it in two languages. And then it worked. This was a very magic moment. This was the first time it worked, and we were four or five months into this exploration at this point. And how many people were working on this? Two? OK. So that was the first time that it worked, meaning that the median new user who tried it liked it and wanted to keep it. Was that all internal testing? All internal testing. The great thing about building a product for developers is that you have a bunch of developers on your payroll with a lot of opinions. Yeah, and they know it. I mean, I think that's true of Jasper, too: you know your audience well. And I think that's been an advantage for Jasper, that you understand your user without having to talk to them and ask them questions; you can kind of simulate them in your own brain. Developers have this advantage too: if I like it, maybe other developers will like it. So that was it. Four or five months in was the first moment the product really worked, where we had just the right set of details.
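A cursor-position heuristic of the kind Nat describes might look roughly like this toy version. It is not GitHub's actual logic (which parsed the code properly); it just illustrates the idea of deciding between line and block completion from where the cursor sits.

```python
def completion_mode(source: str, cursor_line: int) -> str:
    """Return 'block' or 'line' based on simple structural heuristics."""
    lines = source.splitlines()
    current = lines[cursor_line] if cursor_line < len(lines) else ""
    previous = lines[cursor_line - 1] if cursor_line > 0 else ""

    # An empty line right after a def/class/if header is likely the start
    # of a body, so suggesting a whole block makes sense there.
    if current.strip() == "" and previous.rstrip().endswith(":"):
        return "block"
    # Otherwise, with text already typed on the line, just finish the line.
    return "line"

src = "def add(a, b):\n\n    ret"
print(completion_mode(src, 1))  # blank line after the signature -> block
print(completion_mode(src, 2))  # partially typed statement -> line
```

The real system replaced heuristics like these with actual parsing in two languages, but the decision being made is the same: block completion at the start of a body, line completion mid-statement.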
Correct. So there was a greater than 50% chance that a new user really wanted to keep using it. And then it just exploded internally. We had a Slack channel, and we were getting the head-exploding emoji like every five minutes from new users; we got to 300 or 400 users. Then I looked at the usage data, and the really strong sign was that after 30 days, the retention was like 65%, meaning if you had ever used it once, there was a 65% chance you were still using it, for a product which has a very intrusive UI that is constantly showing stuff on your screen. I knew there'd be tire kicking, and this was a brand new product. So then the next thing that happened was the model was a little bit too stupid. It would occasionally generate Python 2 code in a Python 3 file, or do stuff that you didn't want it to do, basically. So we knew it had to get smarter, and I went to the OpenAI team, like, hey, we need a smarter model, how can you help us with this? And they're like, we're on it, we're on the case, we know how to do smarter models. Several months passed, and they brought us a model. It was much smarter, but it was also much bigger, and so slower. We looked at this model and we're like, I think this is too slow. And really, really smart people at OpenAI, people smarter than me, said, you know, Nat, we've got to try this; people will wait for the intelligence. So we tested it. And people will not wait for the intelligence, which is what we found out. What we measured was how much of your code it writes, and it wrote less than half as much of your code when the model was more than twice as slow, even though the code it was writing was much better. And so that's an interesting story for me, because...
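The two usage metrics mentioned here, 30-day retention and "how much of your code does it write," can be sketched in a few lines. These are back-of-the-envelope illustrations with invented sample data, not the actual telemetry definitions GitHub used.

```python
from datetime import date, timedelta

def day30_retention(first_use: dict[str, date], last_use: dict[str, date]) -> float:
    """Of everyone who tried the tool, the fraction still active 30+ days later."""
    retained = sum(
        1 for user, start in first_use.items()
        if last_use[user] - start >= timedelta(days=30)
    )
    return retained / len(first_use)

def fraction_of_code_written(accepted_chars: int, typed_chars: int) -> float:
    """Share of the user's new code that came from accepted suggestions."""
    return accepted_chars / (accepted_chars + typed_chars)

# Invented sample data: two of three users stayed past day 30.
first_use = {"ana": date(2021, 6, 1), "bo": date(2021, 6, 1), "cy": date(2021, 6, 5)}
last_use = {"ana": date(2021, 7, 15), "bo": date(2021, 6, 3), "cy": date(2021, 7, 20)}
print(f"retention: {day30_retention(first_use, last_use):.0%}")
print(f"code written by the model: {fraction_of_code_written(1200, 2800):.0%}")
```

The second metric is what made the latency trade-off visible: a smarter but 2x slower model drove the fraction-of-code-written number down by more than half, even though each individual suggestion was better.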
When we started building this product, we didn't really know what question we were trying to answer, and we didn't even know what that question was until we'd already found the product. It turns out, the way I think about it, that the question we were trying to answer was: how do you take a model which is sometimes very useful but often wrong, and turn it into something people enjoy using and get value out of? It's like, I've got to build a bookshelf out of mashed potatoes; what do I need to add to make that actually work? And the reason I think Copilot works is that in the default case, it requires no interaction from the user. You don't have to call upon it to do anything, you don't have to think too hard; only a small input of your energy is required. And then occasionally you get this jackpot moment. Sometimes you get little benefits, but occasionally you get this jackpot where it's like, wow, that saved me like 20 minutes of looking that up and figuring out how to do it, and maybe I wouldn't even have built that feature without Copilot. Psychologically, that arrives at random times. So I think Copilot is psychologically like a slot machine where you're putting quarters in regularly. It's not that expensive, and occasionally you hit this jackpot, this randomized psychological reward that hooks you, and you're waiting, you want the next jackpot, so you keep using it. And I think this is the other thing people sometimes forget about these products: they should in some way be fun, right? They should appeal to people's individual psychology and make you feel like kind of a badass, because you did it in some way. You prompted it. It's not just something that was produced for you; it's also fun.
So you didn't just offload onto the user the drudgery of validation. I think something happened there where people learned when to pay attention to the outputs and when to ignore them: I'm writing boilerplate now, so Copilot's probably good; this is super sensitive logic, I need to scrutinize this more carefully. And I think that's why that product worked. And do you think that the randomness and hitting the jackpot is better than it just being right all the time? Or is it just, thank goodness there's a mechanism around dopamine rushes that saves the day when it's only 70% right? Well, I mean, no, obviously being right all the time would be much better. But I do think there's something with these products where every time it's wrong, it's very easy to criticize it. And this is what happened. We launched Copilot, and this is happening now with Bing chat, which we should talk about. We launched Copilot, and because I'm an idiot, we only had enough capacity to let 10,000 or 20,000 people use it initially. So there was a big uproar online, people pointing to screenshots saying, look, it's wrong here, there's an error, so obviously the product is terrible. But the way you feel about it if you've used it a lot is, well, yeah, it does make mistakes, but it nets out that it works; you kind of learn it, and it's worth it. I was very frustrated by all the FUD, and I figured I needed more users, because if for every user who says it's wrong there's another user who says, I use it every day, then we can fight the FUD. So I went to the Azure team and said, I need more GPUs, and, I don't know how many, a couple hundred would be good, maybe 500. And they were like, OK, we have one block. It's an all-or-nothing block. It's 4,800.
And it's going to cost you, I remember, $25 or $30 million a year, and you have to decide by Friday, because we're going to allocate it to somebody else. I was like, oh, OK. So we took this whole block and opened Copilot up to like a million users. And then exactly that happened: people who had a chance to use it were able to sort of explain why it worked for them. But I don't know. On the truthfulness thing, obviously that's the big thing everyone's talking about, especially when it comes to search and chatbots. I do think it matters, but I think people don't always care about truthfulness as much as they say they do. Again, code is objective: your code works or it doesn't. But I think people like chatting with a bot that's nice to talk to, even if it's wrong sometimes. I mean, people talk to me, and I'm wrong sometimes. So when you think of the whole product, and just think about product success, I do think there's something to, like, is this fun, even if it's wrong sometimes? Someone mentioned to me the other day this idea of, what is the failure mode of a product? When it fails, is it funny? Is it terrible? You stub your toe, does it hurt? I have this dumb idea that we didn't implement, but my idea was, we need to make it feel non-threatening, so when it makes a mistake it should make a little noise, like oopsie, or bloop, or whatever. And we'd have weekly meetings with the team, and it's like, did you guys add the bloop? And they're like, no, we didn't add the bloop; stop bringing up the bloop. So anyway, I eventually dropped that idea. But yeah, I do think there's something about the personality of it that people like. I love hearing about the iteration. UI is a fickle thing, and it takes a lot of loops, a lot of testing, and a lot of trying.
I think what I found with Jasper, and it sounds like it's true here, is that you've got to attach to some familiar experience that people already know, and yet leave open-ended freedom to try this brand new thing that's never existed in the world. If you just start with the brand new thing, it won't get adopted. So you chose the Gmail autocomplete, stuck it in a space that was already working, and made it something they didn't have to remember to use; it's just constantly prompting you. It's interesting. It wasn't intuitive, it took a lot of cycles and a lot of testing, and now it's at the point where you're like, I don't get why this took you so long to figure out, because it seems so trivially obvious. Which is how it goes, I guess. But I also think Copilot is going to be obsolete at some point, if it's not already. The models are much better now, and I use ChatGPT and other models for coding in a totally different way than I use Copilot. Clearly you want an integrated product experience there, where, I don't know, your editor and your debugger and everything are really tied together. I'm doing some fun experiments in that direction to try to explore what that looks like. We don't have it yet, but on model quality, I think we're going to experience weird leapfrog effects in AI, where the models scale or we figure out a new trick and it completely changes the optimal workflows that are available. And the other thing I was thinking about, and we talked about this a little the other day, is that this is the first platform revolution that's occurring in the era of social media. The reason that matters is that when you have a good product now, it just explodes in popularity, because someone screenshots it and sends it to their friends.
And, you know, mobile did not occur in the social media era in the same way this revolution is happening, and it's so new; things are getting very big, very fast. And that means for incumbents, for people who have something that's working, you've really got to stay on the edge of innovation, because you can get leapfrogged. We've seen that a little bit in some of the generative AI image space, where with the DreamBooth-style stuff there was almost a baton being passed from new app to new app as people figured out better user interfaces. So yeah, I do think we're going to see major product leapfrogs even over the next year in each of these categories. Let's talk about capabilities. There have been a few really successful examples of products, created typically around marketing content, code creation, chatbots. And we talk about this a lot: people tend to laser in on that, like that is what the tool is. Well, we really don't think that's what it is, and there are so many other use cases out there. You spend a lot of time with startups, a lot of time thinking about this. What else should we all be thinking about that this is really going to impact in the near term, besides those three use cases? Yeah, I mean, if I'm honest, I think this is going to, like, rewrite civilization. Buckle up. I don't think there's a lot that it's not going to affect. A good way to think about this, if you don't want to go sci-fi, is: when have we last had a really important enabling technology that was horizontally applicable to lots of companies and ways of working? Don't even think about the consumer thing, just how companies work. You can think of the internet, obviously a very large one in this category.
You can think of the personal computer, or you can just pick some software thing like the relational database. It's very hard to predict, from the invention of the relational database, all the things that are downstream of it. Obviously every company gets better because they can all keep track of things better, and that's cool. And you have new categories of business-to-business apps that are introduced, like CRM, you know, SAP, ERP types of things. You get Salesforce, and it is a web CRUD interface on top of an Oracle database, right? You could say it's a downstream product of the relational database. And you could argue that Facebook is downstream of the relational database; essentially, it's a consumer UI to relational databases for people. I think this is similar, in that it's just going to go everywhere. There's going to be heavy proliferation. One of my predictions for this year is that we'll finally see some really good open source models; we haven't, really, yet. I mean, there are some decent models. Why do you think that is? It's a great question; I ask people that a lot. If you look at each open source model that's not that great, you can kind of answer the question about why it's not that great. And if you ask the people who made it, when they're a couple of beers in or whatever, they'll be like, oh, we made this mistake, you know? So I do think it's one of those cases where we're just new to building these models and best practices are not super well established. It's like, well, we were undertrained, or we made a mistake on the tokenizer, or we wanted to build a model that supported 50 languages and didn't put enough English in the training data set because it got squeezed out by the 49 other languages, lots of things like that. So I think it's just a contingent thing. Everyone says, oh, it's because they're so expensive.
But they're not, really. Like, I don't know, to train a big model is a few million bucks; it's not that expensive. So I think we'll see. Or maybe we'll get unlucky, and it's like you're climbing Everest and there are just bodies everywhere, and whoever's building those models right now is walking past the bodies of the former open source models that are not that good. Oh, the other big categorical reason is that the licenses on most of the open source models are quite restrictive. They say, you can't use this for commercial purposes, or, if you've ever had a mean thought about anyone, you shouldn't use this model, that kind of thing. So I think more permissive ones will come. Let's talk build versus buy. A lot of companies here already offer a different product or a different service, and they're asking, how do we augment our own offering and bring some of this in-house, and offer it to our customers, or maybe even just use it internally? The big question we wrestle with all the time at Jasper is, what do we build? What do we buy? It moves so fast. How do you weigh that, and what would be your advice about what to focus on? Yeah, well, I think build and buy is probably the right answer in this case. This is another unique thing about this platform revolution, if you go back to the relational database metaphor: the technology can benefit almost every company, and almost every company can incorporate it without dramatically upending their business. Unlike Google's situation, where I do think it's a very large potential change to their business, most companies can benefit from these models without having to reprice all their products. So I do think people should look at areas like, I don't know, customer support.
For example, there's a mid-sized company with about 10,000 employees. I know the founders, and they just happen to be really sharp guys who wanted to adopt this stuff early. Their customer support team decided to build a tool using GPT-3, which they fine-tuned themselves, an amazing team, to answer support emails. The way it worked is an email would come in, and a model would label it, categorize it, and decide, can we respond to this automatically or not? If it could, it would make a plan, retrieve some information, and write a response, but it would never send the response. So when the human agent came to the ticket, there would already be a machine-drafted reply there. And these folks, who are not in the customer support business, got to the point where 80% of those machine-drafted replies were sent out unedited. They were, like, automating themselves out of a job on the support team; that was huge. Now, it was not 80% of all tickets, only the ones it drafted replies to, but I think it was like 30% or 40% of all tickets, so it's quite a big stat. And what did that result in? Much lower support costs, but also a better experience for customers, who got their support tickets answered faster. I think there are a bunch of interesting ways like that in which your business can be better if you use this stuff. Eventually your incumbent support vendor will hopefully integrate this, and if they don't, you can go find a new one; it will show up in the applications eventually. But yeah, I've invested in a bunch of companies, and I use all these models for programming myself.
Right, you know, you and I were talking backstage about Python scripts, and I feel really nervous that some of the companies I've invested in might not be using these models to write code, because they're, like, so much less efficient. I don't know. Like, I write way more code, and it's much more successful, because I use large models to do it. And I'm not only talking about Copilot; I'm talking about using chatbots or other models. So yeah, I think it's a competitive advantage as an individual human being to learn how to integrate this swarm of little AI tools into your life. I use models, for example, to label my email. I just wrote a little Python script, and when emails come in it helps me decide what's important and what's not. And it's really good at it. So, yeah, if I didn't reply to your email, that might be the reason; it got routed away.

Yeah, the AIs are among us. A lot of announcements last week. What's your take on all of that? Google, Satya, Bing chat?

I mean, it's so exciting for me. You know, we went through this period of corporate blandness and politeness, and to be in this, like, wartime where previously very polite CEOs are pounding their chests, I think it's really exciting. That used to be the case back in the old days, and now it's sort of back. So I think that's fun. And I think Satya really called it. He's like, this is asymmetric war: we don't need the economics of search, Google needs the economics of search. It's like he can launch these balloons, and then Sundar has to launch an F-22 Raptor and take them out with an expensive Sidewinder missile. A balloon is like $100; I don't know how much a Sidewinder missile costs, but it's probably a lot. So I love it. It's a great spectacle. I mean, people are kind of criticizing Bing right now for making some errors.
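The little email-labeling script Nat mentions above might look something like this minimal sketch. It is entirely hypothetical: a real version would ask a language model to judge importance, but here the model call is replaced by a simple keyword heuristic (with made-up keywords) so the example is self-contained and runnable.

```python
# Hypothetical stand-in for a language-model importance judgment.
# Assumed keywords; a real script would send the subject/body to a model.
IMPORTANT_HINTS = ("urgent", "invoice", "contract", "interview")

def classify_email(subject: str) -> str:
    """Label a message 'important' or 'later' from its subject line."""
    lowered = subject.lower()
    return "important" if any(h in lowered for h in IMPORTANT_HINTS) else "later"

inbox = [
    "Urgent: contract renewal for Q3",
    "Weekly newsletter #82",
]
print([classify_email(s) for s in inbox])  # ['important', 'later']
```

The point of the anecdote is the pattern, not the classifier: a tiny script sits between the inbox and your attention, and the model (or stub) decides which bucket each message lands in.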
And I have to say, the screenshots are amazing, these conversations people are having with Bing chat, where Bing chat is getting really angry at the user and saying, you're a bad user. The best one I saw was someone asked Bing chat when Avatar 2 was showing, and it resulted in Bing chat trying to gaslight the user, saying it was still 2022, and then getting really angry at the user. I mean, it was very smart. So you could say, hey, maybe they should have fixed that before they shipped it, and you would not be wrong. But I think it's amazing that a company that big is so enthusiastic about this technology that they want to ship. And they're going to learn some stuff in the process. And I hope they don't learn too much from that mistake, because I do think this is how you explore the frontier. You have to have some stomach for errors and mistakes in order to do this.

I agree. And hopefully, just as a culture, we can be OK with that and be like, you know, it's not that high stakes for a chatbot to get mad at you about Avatar 2. If you don't see the movie, it's not that big a deal.

Yeah, exactly.

So who do you think is going to win out here, and what are we going to see? A shift in search that's going to impact all of us?

I don't know. I can tell you, for me, I try to pay attention to how I feel when I use products. And I would rather ask ChatGPT a lot of these questions. At first I thought, OK, all the queries that something like ChatGPT steals from search are low value, because they're factual questions and answers, and not, like, give me a mortgage, you know, the more valuable queries, these really high-dollar-value transactions. But then I was traveling, and I was like, oh, I want a hotel near, you know, wherever, and ChatGPT gives a great answer. I say I want a hotel with, like, you know, a good gym and a nice restaurant, centrally located, and it has answers for you.
And that is the type of question where the web is the worst, in a way; it's choked with SEO spam. And this was a multi-thousand-dollar transaction, the hotel room I stayed in for a week. So I think maybe I prefer this, in fact. So I think it's going to be really... I mean, Google was almost designed for this moment, if you think about it. What do they have? They have distribution, in the form of Google and Android and Chrome. They have huge amounts of data; the data assets are a dream. They have 40 million books scanned for no reason, you know. And Larry Page had the courage, when did Google Books start, like 2007, something like a long time ago, to say, we're going to spend a billion dollars scanning books. Why? You know, we'll need it later. Because they had this mission to build this AI and to organize the world's information. And then they have, obviously, Android, and they have trillions of photos that people are taking. And then they have hardware, they have TPUs. So you just can't count Google out. The only things really working against them are extreme internal dysfunction and the economics of search, right, the margins on search. Those are both pretty big deals. And then they invented all the techniques; that's the other thing. It's a true Bell Labs kind of moment. But I think Satya has the lead right now. And, I don't know, I also believe proliferation will occur. We will have good open source models. There will be a ChatGPT-quality model, maybe not fine-tuned quite as well, but something where the base pre-trained model is as good, open source and MIT-licensed, something unencumbered, this year, I think.

All right, last quick question here. You spend a lot of time talking to startups, and you've got some interesting insights.
You said backstage that maybe it's not quite how we all think, where there's just unlimited money going into any startup with AI in the name. What are you seeing out there that's really interesting?

Well, first of all, that's an interesting question. Basically, for a long time... We did Copilot, where we went through that sort of four-to-six-month process of trying to figure out how to take a language model and turn it into a developer product. And we did it; we shipped it. And I really expected in the next year that there would be five to ten more products like that, where people took language models and figured out how to make really cool products for, I don't know, for doctors, or for accountants, or for spreadsheets, or whatever. I didn't know what they'd be. And then, you know, there was Jasper, but there was kind of nothing else. And I was really shocked by that, because for me the GPT-3 moment had been this, I don't know, the clouds parted and light shone down. But now that's changed. ChatGPT changed that. So there was a long, kind of mysteriously quiet period, from my point of view, and now ChatGPT has really taken off. Last week I took my wife and daughter and my mother to Tanzania, and we went on safari, and our guide was a ChatGPT user. That was crazy. So I'm now starting to see all the kind of tinkering and new product ideas and innovation in the startup world that I thought would happen earlier. And for me there's a lesson there about how long it can sometimes take for technologies to diffuse.
It probably feels super fast to everyone who discovered it since ChatGPT. And then, yeah, on the fundraising side, I would say there's good money available early, at seed, maybe even a bit later. But the narrative's not matching the reality that I see, which is that even very good companies, the best companies, names that you all know and love, are sometimes taking months to raise rounds. The venture world is always strange, where, you know, you can get 20 no's and then you get one yes, and you've succeeded in raising your round, and to the outside world it looks like a super hot market where everyone's throwing money at this thing. Well, you got 20 no's first. So it's always a little tricky to know what's really happening, with those practiced demos that get shared on Twitter when, behind the scenes, you know, the AI is not even working that well. And remember, somebody dumped a bucket of ice water on every venture capitalist in the world roughly a year ago. So they all got cold feet; they're like, great, are we supposed to deploy money or not? And then AI is happening, and now you have this sort of social-media-driven hit phenomenon where you have people getting to huge numbers of users and revenue very quickly. So it's going to be a weird world, maybe a little barbell effect over the next year, from a fundraising point of view.

I think so. We're rewriting civilization. There might be a few dead bodies along the way, but it'll be good. Nat Friedman, everybody. Thanks.

GenAI Conference

Hosted by Jasper