Episode 158 - Investing in Artificial Intelligence with James Cham

 


How can Christians thoughtfully approach the topic of artificial intelligence?

How should we use it? What are the ethical considerations? Is it good or dangerous?

These are relevant questions for Faith Driven Investors as AI continues to be a topic for conversation in both the business world and the broader culture. 

That's why we're excited to feature James Cham on the podcast.

James is an early-stage venture capitalist and a partner at Bloomberg Beta, a Silicon Valley-based firm that invests in the "new world of work." Conversations about Artificial Intelligence and Machine Learning are a big part of his day-to-day life.

In this crossover episode, he talks with the Faith Driven Entrepreneur hosts about wrestling through the ever-changing landscape of technology with wisdom, discernment, and a redemptive vision of the world.

All opinions expressed on this podcast, including the team and guests, are solely their opinions. Host and guests may maintain positions in the companies and securities discussed. This podcast is for informational purposes only and should not be relied upon as specific investment advice for any individual or organization.


Episode Transcript

Transcription is done by AI software. While technology is an incredible tool to automate this process, there will be misspellings and typos. Please keep that in mind as you work through it.

Henry Kaestner: Welcome back to the Faith Driven Entrepreneur podcast. William, we're here with our guest, James Cham, in the house, a fellow Californian. But you are broadcasting in from Washington, D.C., where you're going to talk about the subject matter that we've brought you on to talk about. And this is building, William, on some work we've done recently about artificial intelligence. Two nights ago we did an event for Inklings. Inklings, as a group, we get together every month, month and a half out here. James Cham has been at a bunch of those with folks talking about Christians in Web3. What does it look like for Christians in blockchain? What does it look like for Christians in artificial intelligence? Obviously with ChatGPT there's a lot of talk about it. But William, correct me if I'm wrong, the precipitating event, the origination, the genesis for today's podcast comes from a panel where you heard James speak at the Praxis event. Tell us more about that.

William Norvell: Yeah. So Praxis, another great friend of the movement, as everyone knows, had their Redemptive Imagination Summit, as they call it, up in Napa, California, a few weeks ago. And you know, this year there was a shocking lack of cryptocurrency panels and a shocking increase in AI panels. If I could set people up to stick around for the next 40 minutes: I leaned over (I was actually sitting next to our other fearless leader, Justin Forman) and said, We're going to get this on the podcast fast. This was the most in-the-moment, forward-thinking conversation on not only where AI was going to go, but what should we be thinking about as believers? What shouldn't we be thinking about? What should we be thinking about with our children? What should we be scared of with our children? Some of our children may be different: some should lean in to learning about this now, and some maybe should be held back from that first season. So I think it spanned just an incredible view of what God would have for this space, how we should thoughtfully think about it, and how, of course, we shouldn't be scared of a revolution that's coming. And thoughtful Christians need to be a part of it, or we're going to lose the battle by abandoning the playing field. So with that, James, welcome.

James Cham: It's good to see you guys.

Henry Kaestner: So, James, I want to ask you: what is motivating the urgency and excitement around AI? Why is everybody talking about it now? We'll talk more about why Christians should care, but why is this all the rage now? Why are there so many panels, and why is everybody talking about AI? You've been in this space for a long time. What makes everybody focus on it now?

James Cham: You know, the great AI demos have always been with us since the sixties. There were great demonstrations in which it'll do this thing or that thing, and even as recently as five years ago or two years ago, I'd show some amazing demo of some amazing product that will do that magical thing. And then someone would say to me, Hey, can I use this? Can I try this? I'd say, Well, give me a few moments. I need to prep the data. I need to do this and that. I need to change this. Give me a couple of weeks and you'll be able to actually try this out. And it might or might not work. What's different about now is that in the last year, and then in the last six months, there have been a series of investment bets that actually paid off, such that now we're at a point where everyone and anyone is able to use the most advanced large language models in a way that used to be cloistered inside Google or cloistered inside Facebook, where only the smartest people in the world were able to play with it and fiddle with it. And now, in part because of a series of both business decisions and technical advancements, this is available to anyone. And that suddenly means that things that were theoretical questions became real opportunities. And that has implications not just for business, but also for the way that we think about faith.

Henry Kaestner: So what are some of those large language models that have now made their way out into the public? Just give an overview. I sense that a number of our listeners know what they are. Some of them might not. What is that? How is that accessible now for everybody?

James Cham: So right now it's most accessible either through a set of services from Google or Microsoft, or from OpenAI. And let me just take a step back to describe one version of the history of AI, which is: what is AI? AI is a dream. AI is a dream that computers will be able to do things that humans are able to do in terms of thinking. And what's interesting about that dream is that along the way, since basically the sixties, we've started to be able to do things with computers that would be difficult for humans. And each step along the way we've said, Oh, this is not quite AI, this is not quite AI. You know, the fact that you can beat someone in chess, that's impressive, but it's not quite AI. And then we're at this point now where these large language models are flexible enough and open-ended enough that for the first time, a lot of even the best practitioners are saying to themselves, Oh my goodness, this might mean that we're close to true artificial intelligence. And this only happens because of a series of slightly crazy technical miracles. And let me just describe a few of them. Right? So one of the first ones is this idea that we're able to represent ideas in super high dimensional space: you can take some idea and, through a set of statistical techniques that don't really matter right now, you're able to represent it as math, right? And that idea, that you can represent it as math and then do math against it, suddenly meant that you can manipulate ideas in interesting ways. So that's the first miracle. The other miracle that has been surprising to everyone is that you're able to take basically all of the Web, compress it down into these models, and then do a whole series of queries and chats against them. And the surprising thing is that that will actually end up creating coherent results.
So that's the first thing you saw, but that didn't really work that well. The thing that worked really well was when you asked the models to explain their thinking step by step. Suddenly it meant that you'd ask it some physics problem and it wouldn't answer correctly, and then you'd say to it, Okay, answer this physics problem and describe it step by step. And suddenly, for the first time, it would start answering things much more precisely. And this was shocking and confusing to a bunch of people in the industry, because all of their historical work around thinking about natural language processing and how people talk was thrown out the window by just saying, we're going to process a bunch of text and compress it down. Are we getting too technical? I feel like I'm taking you down the wrong path.
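
The step-by-step trick James describes is now commonly called chain-of-thought prompting. As a rough sketch (the helper names and prompt wording below are our own illustration, not from any particular API), the entire change is in how the prompt is phrased:

```python
# Chain-of-thought prompting: the same question, phrased two ways.
# The only difference is the instruction to reason step by step,
# which is what improved accuracy on math and physics questions.

def plain_prompt(question: str) -> str:
    """The question as-is; models often answer this less reliably."""
    return question

def step_by_step_prompt(question: str) -> str:
    """Ask the model to show its reasoning before the final answer."""
    return (
        f"{question}\n\n"
        "Describe your reasoning step by step, then give the final answer."
    )

question = "A ball is dropped from 20 meters. How long until it lands?"
print(step_by_step_prompt(question))
```

Either string would then be sent to whichever model you use; the technique itself is model-agnostic.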

William Norvell: Yeah, well, I love it. I think we needed to go there. And now, yeah, let's take a step back up. Right. And so I think one of the things that was really intriguing is you talked about how at no point in human history, for the most part, has a technology this advanced been able to be utilized by so many people so quickly. Right. And how that's going to have an impact. I mean, you can just go on Twitter or LinkedIn. There are a hundred AI companies born a day that can build you a PowerPoint, build your website. People are taking this technology and building on it, you know, like developer tools, right? It's amazing. And so I'm curious as you look out, and I'm sure it'll change in the interim, but let's bring it into this conversation. You're an investor for a living, right? You invest in venture capital. So how do you think about the space from the Bloomberg Beta perspective, in terms of now, soon, and later? What do you feel like the most impressive uses are now? What do you think's coming soon? And where do you think, hey, people are getting ahead of themselves, it's still a ways out before that type of technology is going to exist?

James Cham: Mm hmm. Okay. So the thing that is now is that most machine learning used to be built for very purpose-built reasons. Right? In a lot of cases you just had to do a lot of work to get it trained to answer a question exactly right. And now, for the first time, it's extraordinarily flexible, and that's disorienting to most people. That flexibility means that you can now ask relatively open-ended questions and discover what the model knows or doesn't know. And then, in terms of short term, what's available: I think there are a lot of startups that are making a lot of money very quickly because they found a specific niche, right? They figured out, oh, I can create text for this thing or that thing, and I can sell it for 20 bucks and it costs me two bucks to generate. Those sorts of opportunities are right here, right now. There's also a whole set of opportunities around wrapping these models with agency, where they can actually then affect the world. Right. Basically now you've got these models going around and browsing a website or touching this or that. That's clearly the next thing. Right after having ChatGPT, the next natural thing everyone wants, right, is the ability to have these things actually do something. And that part, it's a little less clear, right? It's a little less clear what it's good at and bad at. And that's the thing that's happening right now in real time: out in Silicon Valley right now, you've got 50 different people, 50 different teams, trying to figure out exactly when this model works and when it doesn't, when it tries to click on a website or browse this or solve this problem. But that interaction with first the digital world and then the rest of the world, that's clearly the thing that's just out there. And, to be honest, a little scary.
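
"Wrapping a model with agency" can be sketched as a simple loop: observe, let the model pick an action, execute it, feed the result back in. Both functions below are hypothetical stand-ins (a real system would call an LLM and perform real side effects); the shape of the loop, including the hard iteration cap, is the point.

```python
# A minimal agent loop. `choose_action` and `execute` are placeholders:
# in a real agent, the first is an LLM call and the second performs
# actual side effects (browsing, clicking, calling a booking API).

def choose_action(observation: str) -> str:
    # A real agent would ask a large language model to pick the action.
    return "done" if "booked" in observation else "book_flight"

def execute(action: str) -> str:
    # A real agent would act on the world here.
    return "booked flight to Dallas" if action == "book_flight" else "idle"

observation = "user wants a flight to Dallas"
for _ in range(5):  # always cap the loop; agents need hard limits
    action = choose_action(observation)
    if action == "done":
        break
    observation = execute(action)
```

The open question James raises, when the model's chosen action works and when it doesn't, lives entirely inside that `choose_action` step.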

William Norvell: In this would be thinking about a practical example that may not be a sexy example. This would be like me go to ChatGPT saying, Hey, I need to take a flight to Dallas. Right? I've got an off site in Dallas coming up in a few weeks and I need to book an Airbnb, booked dinner reservations on Monday and Tuesday, booked my flights and send the itinerary to so-and-so. And is that a query that's going to be possible here in the near future? Words like, Oh yeah, that be taken care of.

James Cham: That's the sort of thing that feels like it's right around the corner, and right now works very well in demos and almost works well enough that people are going to roll it out. And so this is the other part of it, which is that it is an information diffusion of knowledge story, where we live in a world where everything is so connected that people are able to play all over the world and thus try these technologies out and see what works and doesn't work, and then tweet about it in the afternoon. And that cycle of innovating and trying things is both really exciting and terrifying, right? Because it does mean that there's much more room for mischief now than there was before. And our old illusions of being able to control the technology are sort of thrown out the window.

Henry Kaestner: Tell us more about that. Tell us more about some of the things where you're like, oh, my goodness, this could happen. I mean, some of this stuff, I've got to tell you, as a dad or just a human being, I get really worried about what this means for the adult entertainment business and what this ends up being in terms of taking people down really, really bad places. But what are some of the other things that you look at and say, wow, with the emergence of this, this is the type of mischief that can happen? Or maybe just riff on that a little bit.

James Cham: Yeah, I mean, I think here are a couple of interesting angles. In part because these models are so flexible, and because, as programs, they are so consistent, they can be much more polite on a consistent basis, or much more persuasive on a consistent basis, than, say, I will be if I haven't had enough coffee in the morning. And so that creates a full set of interesting opportunities. And in part because they're cheaper to run than, let's say, talking to me, you can end up replacing humans in conversation in all sorts of interesting, both good and bad, ways. Now, what's interesting, though, is that one way to think about it is from the point of view of a consumer, in which the consumer maybe gets fooled into having a relationship with what is essentially just a big bunch of numbers. Right? So that's one side. But the flip side, which I think is much more relevant to entrepreneurs and to people building stuff, is that the actual danger of these models and of AI right now is it gives you a chance to pretend that you're not responsible. You could build a system that does great mischief, and then you will say, Oh no, it wasn't me, it was the model that did it, right? And I think that that sort of avoidance of responsibility is actually going to be the big, big temptation for entrepreneurs. And that is the place where I think Christians have a lot of wisdom, right, where Christians will be able to say a lot of smart things about the responsibility we have when we create something, both for good and for evil.

William Norvell: Yeah, it's fascinating. It reminds me... So my wife and I went back and started watching the show called Person of Interest. I don't know if you've ever heard of it. Jim Caviezel was the star, who, of course, played Jesus in The Passion as well. And it's ten years ago. But the whole concept of the show is this brilliant engineer built an artificial intelligence that, of course, national security wanted to buy. Right? Because it could predict national security threats, and it would pop out who's relevant as a threat to national security. And the show takes a turn because it turns out there are irrelevant numbers as well. The system also finds people that are going to get mugged on the street, and that's where the show goes off. But in the context of that, they talk deeply about the decisions he made while building the system. For instance, one of the big decisions he made (sorry if I'm ruining a ten-year-old show) is that it erased its memory every night, and how that had to be true, or else it would learn at a speed where eventually he couldn't control it anymore. Right. But they go so deep into these questions. And then there's a competing AI, of course, that comes along that's evil, and that's a fun story. I think you talked a little bit about national security last time I heard you talk, and I was curious for you to take that direction and say, you know, as an example, you could build something great that can be good, that can be used for very much not good things.

James Cham: I mean, I think so. These models, right, are going to be extremely flexible, extremely persistent. And they will do things that we will consider thinking. They will have ideas in ways that are going to be similar to consciousness. And why does that happen, or how does that happen? We're not totally sure. But one way to think about it is that they're able to find structure and analogies that our brains sort of implicitly find, and they're now doing that explicitly through math. And so in that case, they might be able to do things that look creative, right? And they might also be able to do things that end up becoming very concerning, because as machines, they're much more persistent than we will be. So, for example, take the ability to hack into some system. I might know 15 techniques to hack into some system. I might get bored because I kind of want to read Twitter, or there's a basketball game going on; I get distracted. But these models will be very persistent, and they will go through every single possible security hole and find every single possible vulnerability and take advantage of it in a way that no human will. And those sorts of little angles and opportunities I think are quite scary. And I think there's a good reason why governments all over the world are concerned about this. But I'd say that the hard part there, and my thinking has evolved on this: I used to think that the solution might be some newfangled regulation. I now think that in some ways the right solutions are very, very old principles around what you are responsible for. If you benefit, are you responsible for the upside and downside of something? Who's liable? All those questions. And some of these are super old and super straightforward. So that's one piece. And I think the other piece that's interesting is that a lot of regulators and a lot of people are pursuing AI because they're hopeful and optimistic and utopian in their thinking. And I think it's very, very clear from all of history, and certainly in the Bible, right, that tools are flawed, and in part especially tools created in our image, because we are flawed, right, because of original sin. And I think realizing that these models will never be perfect, that these models will always be tragically flawed, the same way that we are, that's going to be an important truth and something that we'll always have to think about. And that's a wisdom and a perspective that we as Christians can provide that I think is unique and helpful.

William Norvell: So as you think about a believer investing in the space, coming to the space, and talking to entrepreneurs who may be thinking about building in the space... I mean, these models, I assume, are always going to be a reflection of us, right? I mean, since you're in DC, you know, there was a political example, right, where it's like ChatGPT basically won't say any nice things about Trump but will say nice things about Biden, if you ask it to tell you their great qualities. And, well, there are people that built this thing, right? There is bias built in, right, from whoever built it. And there could be bias built in the other way, where somebody wouldn't say anything nice about Biden either. Right. But I'm just curious where and how a Christian should think about this. Do you need to build something new? Does it need to be on its own? Can you influence from inside a large organization that's already working on this? What is the posture to bring a biblical worldview to it?

James Cham: Yeah, there are a couple of angles built into that. One of them is the sense that as these models get bigger, they do seem like they get smarter and more interesting. But the other part is that there's a bit of a power law, right? To build the next version, sometimes it's going to be 10 to 100x more expensive. And so the math around it does end up meaning that it's critical that Christians are in some of the biggest companies in the world, to influence and think through what this actually might mean. So I think that's one part. The second part is that, you notice, as you talked about ChatGPT, and I think this is a little bit of a marketing thing, there is a temptation to treat it as if it is the God model, right? As in, Oh, you know, if only we can influence our great god ChatGPT, then our lives would be better. And I think that's the other temptation, right? That's the temptation of treating it like an idol. So there's one temptation, which is to say, let's ignore it and run away from it like a [...]. And then there's the other temptation, which is to say, you know what, actually this thing is God, right? Rather than a sub-creation made by humans. And the moment we treat it like God, then we have a whole set of other problems.

William Norvell: Okay, so let's go a layer deeper into that. You know, I remember we had Frank Chin on a while ago, and I remember he talked about some of that. It's like, hey, some things humans weren't made to do; it's part of the toil. I think we were talking about autonomous driving at the time, and he said, you know, look, all the truck drivers are upset, but think about that job. That's not a human flourishing job. You're away from your family, you're driving all day, it's stressful on your body. This is a good thing for humanity, actually. Now we need to help those people find new jobs and retrain them. But that job in and of itself, Frank was arguing, is not a human flourishing job. And so I'm curious, from an artificial intelligence standpoint, are there certain tasks like that? People are scared, of course, and we can talk about the doomsday Terminator 2 scenario at some point, but is this actually a thing we should embrace because it's going to make us more human? Is there an argument that we get more humanity out of this?

James Cham: Hmm. You know, this goes back in some ways to Praxis, which is that there are a set of decisions that we can make about the kinds of businesses that we want to support and the kinds of businesses that could flourish. So that's the first part: it is a decision made by people around what kinds of businesses we're going to allow. But the second part is, I will admit that I have an essentially tragic view of work and of humanity, which is that it's flawed. We're all post-fall. We try the best we can. I actually honor truck drivers. I think it's a great job. I think it's really important. I think it fulfills a bunch of important things for people. And, you know, I compare this to my ancestors, who were either toiling in some rice field or sometimes maybe running money from one place to another, right? And so I feel like our question of what is a good, fulfilling job is so contextual, right? It is so much based on our current conception. I don't know. My great-grandfather, you know, was away for nine months out of the year in order to go from one place to another. Right? And that was still seen as a good job. So that's one piece: jobs are going to be tough, and it's super contextual. And then the other part that I'd say is that it is also important to be very, very cold-hearted as we think about these models right now and the economic moment we sit in. Which is to say, if you think about the last big policy decision the United States made around globalization, the promise that we made to citizens was: globalize, and things will be cheaper, and by the way, you will benefit from the fact that things are cheaper, and we will take care of you.
And to be honest, both Republicans and Democrats have not done that, in part because there was a moment where workers had a chance to influence a whole set of decisions, and in part because maybe you trusted the Democrats a little too much, or maybe you trusted your labor union leaders a little too much. That promise was not fulfilled. And we are actually right now in a very, very similar moment, where everyone smells the benefit and all the great things that these models can do. And at the same time, there's a great bargain to be made with all the folks who are working in a bunch of jobs that might be displaced. And I think that bargain is an entirely political thing that needs to be cold-hearted rather than utopian or blinded by my various startup dreams.

William Norvell: So let's get a practical question in here. If you were in X job, you would be wildly scared of this technology taking over your job, because that's something all the headlines, of course, always come out with: technology is always going to take all the jobs, right? Just different versions of it. I'm curious, from your perspective... So let's talk about kids in college. Henry's got two boys in college and one coming up soon. What should or shouldn't they study? Right? Yeah.

James Cham: White collar jobs that require people to be polite actually are at great risk. You know, there was a time ten, 20 years ago when everyone said, oh, the truck driver is going to be in trouble, or the farmer is going to be in trouble. That's not really the risk here. The real risk is to folks like me, playing with spreadsheets and emails and trying to be polite to people and talk to them and persuade them in some consistent way. And so there's a whole set of white collar jobs that are going to be different. So I'm giving a presentation in a few minutes to some congressional staffers, and I have this one slide of this huge floor of an insurance building. It comes out of the movie The Apartment, from the sixties, and you had desks of people who would have a little hand calculator, crank some number out, take the slip, pass it over to another desk. And what's interesting about that work, which filled entire floors of buildings, is that that's basically a spreadsheet. Those hundreds of people on the floor were replaced by a single spreadsheet. And that's, on the one hand, terrifying, right? It means great dislocation. But on the other hand, it's also true that we've been okay. If you look at life from the fifties on to the nineties, it turned out that those dislocations ended up being okay and being managed. But that's, I think, entirely a political question and less a fact-of-the-world question.

Henry Kaestner: So I want to take this in a slightly different direction. I just am fascinated by this, and it's less around how we invest, and it's less maybe around some of the innovations that come from entrepreneurs in the business, which is so much of our audience. It's more about the one thing that unites most listeners of the podcast, and that is our belief that there is a truth, that there is absolute truth. It's not relative. You can point back to God's Word as immutable. And, you know, as you get ready to talk to these congressional staffers, I mean, this is a nation under God. I wonder if there's an opportunity for there to be this kind of operating... well, an operating system on which the brain of all AI sits, such that every answer that comes out of a query I might have of ChatGPT has some sort of biblical foundation to it. For instance, we invest in faith driven entrepreneurs; Faith Driven Entrepreneur is the common element of this podcast, and there's this belief that the thing that unites us all is to realize that there were real mistakes made in Second Chronicles, where the good kings of Judah didn't seek God out, and there are real problems with sin in this area, and pride, and there's the wisdom that comes from Proverbs and Psalms, etc. Can any of that be coded into this operating system, into ChatGPT, so the answers that come out are actually informed by two or three thousand years of truth? Now somebody might say, well, gosh, that's too myopic, it's just Christian, or, when we say one nation under God, it wasn't just a Christian God, whatever. But 99% of Americans believe in God. It's only 1% that are really atheistic. And the general concept that there is a God, and that America is unique and is one nation under God, is still believed by the majority of the people in the marketplace. Is it possible that there could be one type of truth that is kind of coded into all of these things, so we can then say, Well, it actually can't go that far off the rails, because the root code of all of this comes from Scripture?

James Cham: I think that's partly a commercial question, right? I think it's important not to confuse ChatGPT, and all the really impressive work that OpenAI is doing in building their own model, with all the models that are potentially available. Right. It's possible that we live in a world where OpenAI ends up being the only people who are able to build advanced models. But if that happened, it would only really happen because of regulation. Because the truth is, right now there are enough people chasing them, enough people building their own versions of models using the same set of techniques. And so my guess is that we'll live in a world where there are a number of providers of very big models, and then you have the ability to either fine-tune a model or wrap around it questions like, Oh, here's your answer; now, how does this reflect various biblical values? And then we answer that question about how it reflects various biblical values. That could be done, right? The other weird miracle around these large language models is that they're very good at self-reflection. You can say to it, hey, answer this question, and then you can ask it, Hey, is this answer biblical? And then you could ask again, Hey, was this answer really biblical? And it will actually come up with a better answer, right? So those sorts of things you can do, and they aren't necessarily bounded to one specific model.
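
The self-reflection loop James describes (answer, critique against a standard, revise) can be sketched in a few lines. `ask_model` here is a hypothetical placeholder so the sketch is self-contained; a real version would call whichever LLM API you use.

```python
# Generate, critique, revise: the "is this answer biblical?" loop.
# `ask_model` is a stand-in for a real LLM call.

def ask_model(prompt: str) -> str:
    # Placeholder so the sketch runs; a real version calls an LLM API.
    return f"[model response to: {prompt.splitlines()[0]}]"

def reflective_answer(question: str, standard: str, rounds: int = 2) -> str:
    answer = ask_model(question)
    for _ in range(rounds):
        critique = ask_model(
            f"Question: {question}\nAnswer: {answer}\n"
            f"Does this answer reflect {standard}? Note any gaps."
        )
        answer = ask_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer

result = reflective_answer("What is a just wage?", "biblical values")
```

Note that nothing in the loop is tied to one provider's model: the "standard" is just text, which is James's point about wrapping rather than rebuilding.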

Henry Kaestner: That's interesting. So there's the web, and I don't know as much about this as I speak to it, but there's the web, and then there's the dark web. And I wonder if there's, like, the ChatGPT that's informed by the world's great religions, of which there are commonalities, so that any answer, any query, any type of pontification or reflection or theory or opinion that's expressed by that ChatGPT is undergirded by the world's great religions. And then anything that would be sinister, not based on these commonly accepted things, would be part of, like, the darkGPT. And people would kind of know, Hey, this is the origination, and this AI is part of an overall code base that has this kind of underbelly. And so it could never, ever convince me to kill somebody, or encourage me to lie, to bear false witness. It could never encourage me to do any type of activity that would have to do with adultery or whatever, things like that.

James Cham: So let me make one really important distinction, which is that ChatGPT is an application that is built by OpenAI. It's, like, one very specific application that's built on top of a very big model that OpenAI spent a bunch of money on in order to get it to work, right? But there'll be other people who spend a bunch of money to build models. And also, the other slightly crazy thing about these models is, you know, what are they really? Are they really answering the question, or are they just trying to imitate what other people have done, what other people have written? And so in some ways, you know, they really are just trying the best they can to generate text that matches something you tell it to do. And I think folks should try this. One of the crazy things you can do is you can say, hey, pretend that you are a Christian who lives in San Diego, who really likes baseball, and then answer the question in this way. Or you can say, pretend that you are a Christian who really likes socialism, and answer the question this way, include references to the Bible. And so these models will try the best they can in order to fulfill the sort of setup that you gave it. And so in some ways, that sort of thing is available right now, and it's more a question of commercial adoption and sort of the economics of it than it is a question of whether it's technically doable.

Henry Kaestner: So is it right to encourage and challenge the listeners of this podcast to think about how they might innovate on top of OpenAI with a biblical strain? So that a consumer, you know, parents of three children, say, I'm actually going to go ahead and pay a subscription so that when my kids have questions about life's big problems, or about history or something like that, it's done through a biblical worldview, and AI becomes part of their teacher. And when they ask these questions that I can't answer at home, about evolution or any type of chemistry or anything like that, I can have the screen so that everything they ever ask of this thing is informed by a biblical worldview, because that's actually programmed into the system. Kind of like Covenant Eyes or any one of a number of different types of pay-for services that screen out negative stuff. This could actually be a service that does a positive screen.

James Cham: I mean, you could do that right now. And, like, the hard part with it is you could basically sprinkle that in the beginning of any query to ChatGPT, and it'll do a pretty good job. And then whether you wanted to make that a separate application or a separate business, that's a good question. But you can literally go to ChatGPT right now and say, pretend that you are a very thoughtful sort of evangelist who takes the Bible seriously and, you know, just went to Africa and is based in Silicon Valley, you know, and then answer this question, and it'll do an okay job. And then what's even crazier is, like, you can provide lots and lots of answers that you know that person gave, and use those as examples that it can then use to generate, you know, possible future answers.
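The two techniques James mentions, setting up a persona ("pretend that you are…") and supplying lots of example answers that person gave, map onto a chat-style prompt: a system message for the persona, few-shot question-and-answer pairs, then the real question. This is a hedged sketch; the function name and the message format are assumptions modeled on common chat-completion APIs, not anything specific to OpenAI's products.

```python
def build_persona_messages(persona, examples, question):
    """Build a chat-style message list: a system message that sets the
    persona, few-shot (question, answer) examples, then the real question."""
    # The persona goes in the system message, so it frames every answer.
    messages = [{"role": "system",
                 "content": f"Pretend that you are {persona}."}]
    # Each known answer from the real person becomes a few-shot example.
    for example_q, example_a in examples:
        messages.append({"role": "user", "content": example_q})
        messages.append({"role": "assistant", "content": example_a})
    # Finally, the question you actually want answered in that voice.
    messages.append({"role": "user", "content": question})
    return messages
```

A list like this can be passed to most hosted chat models; the examples bias the model toward answering in the persona's voice, which is the "sprinkle that in the beginning of any query" idea in practice.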

William Norvell: I'm going to test that out. I know ChatGPT's data only goes to 2021, but we've got enough podcasts out there. I'm going to see if I can train something to answer questions like Henry or Rusty. I think we should try that.

Henry Kaestner: Okay. Well, that's absolutely a thought. Based on what James just said and the interplay that we've had thus far, there are infinitely better questions that I could have asked along the way, where ChatGPT is like, okay, James Cham just said this on the podcast, what should I ask as a follow-up? I'm sure ChatGPT right now could come up with 100 better questions than the ones that I asked.

James Cham: And I think about the opportunity there. So on the one hand, like, that's going to be magical, and it'll seem so different from anything we've ever seen before. But in other ways, it's not different at all. Remember that your relationship with the Henry Bot in some ways is very similar to, like, your relationship to Billy Graham or David Letterman, which is to say it is a relationship not with the actual person but with an image of the person, right? And those are the things that in sociology would be called parasocial relationships. And they can be very, very helpful, and they do great good, until they become idolatrous. And then that sort of temptation is as old as the Bible, right? The chance for us to take something that is a parasocial relationship, a relationship with a king who doesn't really know us, and worship him, or our ability to sort of, like, have a relationship with some mountain and then worship the mountain, right? That temptation is going to be the thing that we're going to struggle with a lot more, in new and interesting ways.

William Norvell: Yeah, that's good, because that's real. You know, we just did a tribute podcast to Tim Keller, and I feel like, you know, he's had a big influence on my life, and I've probably shaken his hand twice, right? But think of the number of times I've answered questions with, you know, here's kind of what Tim Keller would say about that, right? Even though I don't really have a personal relationship with him. Like, yeah, that's been around for a while. You read enough books by someone and you watch enough of them. I mean, I've listened to a hundred sermons from him and read 20 of his books. Like, I can kind of say, here's what he would say to that.

James Cham: That's right. And then the interesting thing for us as Christians is to have a distinction between those parasocial relationships and the relationship with someone who seems like he would be distant, and seems like he would be all-powerful and all-knowing, and yet actually has a relationship with us, right? Because our relationship with Tim Keller might be parasocial, but our relationship with Jesus is actually social, right? We actually have a relationship with him and an understanding. That distinction, I think, gives us room to think about these questions in a way that ends up being a little bit easier than it is for non-Christians, because we've got a model, not a machine learning model, but an example of what it's like to have an actual relationship with someone, versus having that sort of parasocial relationship, which is important and, again, beneficial, but very different from having a relationship with Jesus.

William Norvell: James, you know we've got a lot of founders listening, and I'm one of them, so I'm curious to hear your answer here. What should we be folding into our businesses? How should we be taking some of these tools, if at all? Right, I assume the answer is yes to some degree. Like, what should I be doing? Say I'm going to pitch you in 12 months, right? What if I haven't done X? Are you going to be like, wow, man, you've got to get with the times? And is that statement even true? Does everyone need to be using some of these tools in their business? Is that advantageous to building a better, growing business?

James Cham: You know, there was some point in maybe 1997 when people started building web applications, and they didn't have names for it yet, right? They didn't really know how to describe it. Everyone sort of thought Yahoo! was the dominant thing, and maybe we thought we'd all buy Oracle applications that were served through some web server. And we're sort of at that point, the point where it's clear something's working, but exactly how it's going to work, or exactly what's going to be the dominant business model, all of those questions are unclear. And so if I'm you, I'm trying to figure that out right now. There's a little bit of a race right now to figure out what these models are really, really good at and what's going to be the thing that turns out to be really, really valuable. So if we use our web example from, like, the late nineties, early 2000s: you talk to some senior executive at Yahoo!, which was the dominant company at that point, and if you told them the most important asset you have is shared address books, they'd be like, that's dumb, address books are just a feature. And if you told them that the part that would search through the web turned out to be the most valuable part, they'd be like, that's dumb, that's just a feature of our portal. But of course, it turned out that Google, which indexed the web and then became, like, the first stop for everyone else, turned out to be incredibly valuable. And that shared contact list is basically social networking, right? It turned out that that was the really, really valuable thing. And so we're still at a point where we don't really know what's going to be the durably valuable thing. And so we're all playing around right now. And so if I'm an entrepreneur, I'm at least devoting, you know, sort of a few hours a day just trying to figure out the outlines of what's possible right now.

William Norvell: That's good. That's good

Henry Kaestner: Wow, that's awesome. So that is a great takeaway: spending some concerted time thinking through this, reading up on it, because it does feel like this is a marketplace-changing type of event, and it's a big deal. There are going to be opportunities for great innovation, for inclusion in your business. And we need to be able to have answers about how this impacts our lives. So many of us are parents, and we need to be thinking through this. And as we do that, I think there are going to be some great innovations, and, Lord willing, we'll figure out what this means for translating the Bible and contextualizing the Bible into different languages, an infinite number of great applications. Because it feels like this is a technology, like so many others, that could be used for good or for bad. And so Christ followers need to not bury their heads in the sand and say, wow, this scares me, so we're going to go away and be real conservative and, you know, go back and live on farms and not use phones. We actually need to lean into this and get involved and get engaged and be serious, because everybody else is.

James Cham: Yeah, I think there are a few angles on that. So one is, you know, the biggest bargain in the world right now is 20 bucks a month to subscribe to ChatGPT Plus. Like, I make no money, I have no financial interest in OpenAI, but I would do that. And you know how you said this is the time to read about it? This is probably the time to read about it and try it. Because the weird thing about now is that you literally can go on right now and get the capabilities that used to only be available to the smartest, smallest set of people inside Google, right? You suddenly can play with these things and figure out what's possible faster than they can. So that's one piece. I think the question that you asked about kids is just so, so important. And in some ways, we are really, really lucky, because we've already experienced what the phone transition looks like, right? And there's a way in which we as parents can have a proper amount of skepticism about what works and doesn't work. Like, if you were to go back to 2010 and sort of think about the phone and how your kids should think about the phone, you might have thought, well, this is so different from the web, it's so different. But now we actually have specific examples in our heads about what happens when these things become super, super accessible, and what it means for folks to go astray, and what it means for someone to be totally consumed by something, right? And all those examples we can think about because they happened to us, right? And then we can be wiser about how we end up thinking about this for our kids.

William Norvell: It's so good. You've been here before, so you know we're going to close. We would love to hear where God's speaking to you today, whether that's about AI or not. There are a lot of other fun things he talks to us about. Where in his Word are you today? And, you know, what's he telling you from his scripture, and what's coming alive to you in a new way?

James Cham: You know, the thing that has stuck with me for the last few years is this idea of old men dreaming dreams of revival. And I think that continues to animate me and excite me. I think that we're at a time of great uncertainty, both from a technology point of view, but also from, let's be honest, our position in the world as Americans. And I think it's at those times of uncertainty that there's the best chance of revival, because suddenly we can't rely on all the normal answers, right? The things that have worked for us since, at least, like, I don't know, the mid-eighties, they're not working anymore, right? And so this means that there's a whole set of folks who are open to God working in surprising ways. And I find that really encouraging. And so that's the thing that I pray for, and I sort of try to dream these dreams and hope that young men and women do amazing things. And that's probably what excites me the most right now.

William Norvell: Amen.

Henry Kaestner: Amen, indeed. Fascinating.

William Norvell: Wait, I've got something we're going to finish with. I've got something to finish with. So I just went to ChatGPT while you're here, and I told it to pretend that it's Henry Kaestner, the host of the Faith Driven Entrepreneur podcast, co-founder of bandwidth.com, and elder of a PCA church: I'd like to ask you some business questions. What are the three most important things to do as you build your business? According to Henry Kaestner AI: One, faith integration. Kaestner is a strong advocate for faith driven entrepreneurship. He believes one's faith should not be separated from the work, but rather integrated into every aspect of it. Two, people and culture. Employees are the backbone of any business, and Kaestner emphasizes the importance of treating employees well, fostering a positive company culture, and investing in their growth. Happy, motivated employees are more likely to deliver their best work. Three, sustainable and ethical business practices. Kaestner encourages entrepreneurs to focus on long-term, sustainable growth over short-term gains, and to make business decisions that are ethical and in line with their faith and their values. Wow. What do we think?

James Cham: You're tearing up? You're tearing up.

Henry Kaestner: Well, I've got this stupid thing I'm putting around my eyes, and fortunately this isn't a video podcast, but yes, I am tearing up. But it doesn't have to do with that. It's fascinating and scary, because what it doesn't do is point to a relationship with a living God. It talks generally about faith as if that's just kind of a character attribute, without a relationship with a living God who, as sinful as I am, died for me. And our response, in joy and gratitude for the gift of life now and eternal, is that I can then return to the altar with all that I am, as the aroma of Christ, to be a blessing to others, balancing the joy and the gratitude with the faithfulness and the obedience, something that multidimensional and spiritual, the aroma of Christ. So as a human being, I would answer those questions differently. Now, all of those things it can glean from different topics on the web, and yet it feels like it looked at it through an academic exercise versus a spiritual, heart-transformation one.

James Cham: So I would not anchor myself too much on the idea that that won't change. And I did a bad job of explaining this earlier, but these models are just based on what people have said, right? These models are just trying the best they can to guess what the next word would be, based on the first set of words that you gave it. And then what's interesting is that as the models are provided with more words, more data, or certain types of words and certain kinds of data, they will say something different. And so I bet you that if it recorded everything that Henry said on a daily basis, and you asked that question again, it would come up with a different answer. And so I think what's going to be uncomfortable for us is how good these things are at feeling human, right? And that's going to be both a great salve for us, it'll make life a lot better, and it'll also be a great temptation.

Henry Kaestner: Well, yeah. Okay.

William Norvell: Fascinating.

Henry Kaestner: That is fascinating. James, thank you, brother. Thank you for your friendship, for your partnership, and thank you for spending time. May the Lord bless you as you get ready to make this presentation in Washington. We're excited to see how God will work through you, and may He lead all of his people to be participants in this new technology and lean into it, and may He protect us all. In Jesus' name, Amen.