Episode Transcript
[00:00:14] Speaker A: Welcome to Examining, a technology focused podcast that dives deep.
Good afternoon, Mr. Hands. How's it going today?
[00:00:34] Speaker B: It's going well. How are you?
[00:00:36] Speaker A: I'm doing great. Well, we're still doing a podcast, but we're no longer doing EdTech Examined, so we're rebranding the podcast. Do you want to tell folks what we're rebranding it to?
[00:00:56] Speaker B: So we wanted to be in line with what we had before. We're building off of the EdTech Examined aspect, instead of putting it in the past tense. And I guess the other problem is that all the good domain names are hoarded, so we had to stick with something that was available. So it's going to be examining.ca, and we'll do a deep dive into something tech related, or productivity. It seems like that was kind of.
[00:01:33] Speaker A: It could be anything, really. I mean, we'll probably stick with the tech-related stuff. This will be a tech-focused podcast where we're given the artistic license, so to speak, to examine a range of things.
[00:01:48] Speaker B: Yeah, we'll see what comes.
[00:01:51] Speaker A: Education, we'll stick with that where it's relevant.
[00:01:54] Speaker B: Yeah, absolutely. Part of the other thing too, just so everybody knows: we found, especially over the summertime, that the news cycle for education slows to slim to none. So again, it's in the interest of providing relevant information that's useful to a wide range of audiences.
Hopefully it'll be beneficial to educators as well as students, but also just people in the workplace.
[00:02:31] Speaker A: Yeah. And I think it'll be helpful for us coming up with content, because sometimes there are tech-related things we'd like to talk about that don't have an education angle.
Based on how we positioned the original podcast, we either had to cut those or find a way to fit them into education, and sometimes that just wasn't feasible.
So we didn't want to work within those constraints. We're not going to do a variety show; I think we're going to stick to shorter episodes where we go into one thing, rather than having as many segments, and maybe talk about it at length. One of the things you and I both noted is that the aspects of the podcast we've liked the most have been our discussions on particular segments, and on more than one occasion we said, wow, that could have been its own episode. So we're going to run with that, which means it'll be a lot more simplified and hopefully more accessible to people, if there's just one thing we're talking about.
[00:03:38] Speaker B: Yeah, absolutely.
[00:03:40] Speaker A: Okay.
So we're going to do an examining of the current state of artificial intelligence, namely the generative AI stuff that's happening. I'm sure we'll return to this, but we're just doing a general update on generative AI, and we'll comment on that. So did you want to kick us off with, I guess, our colleague, our friend Amy Webb, who has written for Harvard Business Review?
[00:04:21] Speaker B: Yeah. So she did a pretty long read. Her Harvard Business Review piece is titled "How to Prepare for a GenAI Future You Can't Predict."
And for those who aren't familiar with Amy Webb, she's an adjunct professor at NYU Stern School of Business, but her day job is as a futurist. She has a consulting firm, does forecasting, and works with a wide range of probably fairly high-profile clients, from the sounds of it. I think her article is a good foundational piece for us to jump from.
Especially at the start of the piece she wrote for HBR, I liked how she talked about how these various consulting clients, like banks, are looking at how they could take this generative AI technology and almost see it as a silver bullet for replacing staff.
And I don't know if that's necessarily the case; that's what she's highlighting, that a lot of these businesses are looking at it from that aspect. But really, we don't know where the technology is going to go. Right before we started this, I was showing Eric: we took some images of him, and I took some of my own just off the Internet, and put them through Adobe Photoshop to show the generative fill. It does a remarkably decent job in about a minute. It just doesn't have that quality when it comes to the hands, it seems like.
[00:06:16] Speaker A: Yeah, the hands are always a little bit weird.
[00:06:20] Speaker B: So there's a lot of that that happens. But I think in the workplace, if what they're trying to do is maximize profitability, while also trying to get people back into the office and dealing with all sorts of other issues, seeing generative AI as the solution to that workforce problem, that's not the case. There are some things it could maybe streamline; some of the things she talked about are fraud detection or customer service, where you could augment it. But to see it as a silver bullet, I don't think that can be the case. And certainly OpenAI has been in the news for a number of reasons, probably the most pronounced being that ChatGPT got a million users in five days, and it took only two months to get to 100 million. Right. So this is top of mind for everybody.
[00:07:24] Speaker A: So we can return to other aspects of the article, but I want to ask you some questions about this. There are a couple of things that come to mind about productivity and AI.
And automation, people being automated, because companies seek to make more profit with fewer expenses. That's essentially what this comes down to.
[00:07:47] Speaker B: Totally.
[00:07:48] Speaker A: Okay, so let's switch tracks from the banks. I don't quite understand how it's going to be implemented there. I'm not saying it won't be; it's just fuzzy for me. So I want to go back to a previous conversation we had, which was Microsoft's Office 365 Copilot.
[00:08:03] Speaker B: Yep.
[00:08:04] Speaker A: And the idea there is that they've invested all this money in OpenAI, right? And so they're using that to help automate, let's call it the creative production of workplace documents, particularly drafting them out, and doing things like in their demo: take the meeting notes from this document and build out an outline in PowerPoint, in this template, to present to the executive team. So it takes what you've already done manually and helps you summarize it. Create an executive summary of this report at the top. Those are little things that take a long time but don't necessarily lose value when they're done with an AI. There's still a lot of human creativity there; it's kind of augmenting, right? That's really cool. But the cost of that was like $30 per month per user.
[00:09:06] Speaker B: Yeah. Unless you had the top-tier kind of enterprise solutions.
[00:09:14] Speaker A: In fact, and I could be mistaken, I was just listening to another podcast, Windows Weekly, and my understanding was that it's still $30 a month per user, regardless of the tier.
[00:09:24] Speaker B: Oh, really?
[00:09:24] Speaker A: Okay, so it's really expensive. Now, there are two ways to think about this from an automation standpoint. It's about a dollar a day, right? So let's say someone's being paid $30 an hour, and it saves them 10 minutes a day; that's more than enough to break even. You know what I mean? So this calculus is kind of being done.
And so you could say, well, I'll give somebody this for $30 a month, and I'm going to have this person do what two people normally did, or something like that. There I can see a reduction through automation, but it seems like there's some tension there.
And this is my next point: there's a cost to implementing this thing and reducing the human workforce for productivity.
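The dollar-a-day calculus Speaker A walks through can be sketched in a few lines. This is a back-of-the-envelope illustration, not anything from the episode's sources; the function name and the 30-day month (which matches the "dollar a day" framing) are assumptions.

```python
# Break-even sketch for a per-seat AI add-on. The $30/seat/month price and
# $30/hour wage are the figures quoted on the show; the 30-day month is an
# assumption that makes the seat cost "a dollar a day".

def breakeven_minutes_per_day(seat_cost_per_month, hourly_wage, days_per_month=30):
    """Minutes of labor the tool must save per day to pay for itself."""
    cost_per_day = seat_cost_per_month / days_per_month  # "a dollar a day"
    wage_per_minute = hourly_wage / 60                   # $0.50/min at $30/hr
    return cost_per_day / wage_per_minute

print(breakeven_minutes_per_day(30, 30))  # 2.0
```

At these numbers, saving about two minutes a day breaks even, which is why ten minutes a day clears the bar comfortably.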
[00:10:18] Speaker B: Right.
[00:10:19] Speaker A: When I'm reading a lot of the articles on AI, there's this assumption that the cost will go down over time, and that's people's fear, because then it'll really ramp up automation. Say they get the cost of their data centers down, so they can start charging $10 a month per user or something. My understanding is that there's one vision of the world where it's going to take over and eliminate a lot of jobs, and another vision where it's going to augment what people already do and just change the way they work. It's the middle that I don't quite understand, because running these AIs is really expensive. I don't know what the profit margin is for Microsoft or Google.
I mean, my understanding was that it was costing OpenAI a hundred thousand or more a month just to keep this thing running on Microsoft Azure.
[00:11:15] Speaker B: Yeah, well, from the early reports, the articles that were coming out, it was costing OpenAI $700,000 a day to run.
[00:11:26] Speaker A: Okay. So arguably there's a little bit of novelty bias: a bunch of people jumping in and mucking around with it, and once the novelty wears off, they'll dissipate. But my question is: given the cost of running this, and given the very specific ways in which it can help you, because there are many ways in which it doesn't,
is it really, at least in the medium term, a fear for people who are concerned about automation? And I'm not saying Amy Webb thinks this. Given the energy required, the money required to run this, if every business rolled this out at scale, I don't even know if our electrical grid would be sufficient. That's what I'm asking. It's kind of like bitcoin mining, in a way, right? The concern with bitcoin mining is that it uses up all the energy of a country, Australia or Liechtenstein or whatever it is. And I'm thinking, well, if this keeps going the way it's going, it's going to be a lot like that, because think of all the queries, the data centers. I'm not so much worried about the environmental impact; I'm just saying it uses a lot of energy and it's expensive.
So again, it seems like it's going to run into a scaling problem really quickly. But maybe that's just my bias, and I'm misunderstanding.
[00:12:51] Speaker B: And I think, Eric, you bring up some really good points. In some of the coverage there are fears that OpenAI might go bankrupt. If you're spending $700,000 a day and you still haven't figured out your use case, right now they're just lucky and fortunate that Microsoft gave them $10 billion to control 49% of it, and it was a billion before that. So they're basically spending all this money to experiment with it. And as you saw around August 28th, I came across this one Business Insider article: 75% of businesses were looking at banning ChatGPT and other kinds of large language models. And so if that was the case.
[00:13:45] Speaker A: So what's the rationale for that?
[00:13:46] Speaker B: The rationale for it was privacy.
Right. They didn't want, like, their business.
[00:13:51] Speaker A: Secrets being typed into the chat.
[00:13:53] Speaker B: Right. And so then on August 28th, OpenAI announced that they're going to have an enterprise version of it. Apparently it costs $20,000 to install it on site, and then from there, for every thousand inquiries, they're being charged $0.50.
[00:14:13] Speaker A: Ultimately there's a cost to having it; I've heard this before. There's a very high cost to having it installed on-prem, meaning that the secrets you're typing into it are somewhat contained, but the queries still have to go back to the server, and that has a data center cost. So it's very similar to Amazon Web Services: you're charged based on usage. And so again, it brings up the point, and I think you've addressed this: you have to save X many dollars just to break even on this thing.
And so outside of automating front-end customer service, and reducing a handful of graphic designers who are just going to run generative fills to make, you know, Eric muscle-man pictures, which is what we were doing before, you see what I mean? Does the cost outweigh the benefit,
when, at least at this point, you can really only apply this effectively to a very narrow range of things?
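The metered pricing Speaker B quoted can be put into a quick cost sketch. The $20,000 install fee and $0.50 per thousand inquiries are the figures from the conversation; the company size and query volume below are made-up assumptions for illustration.

```python
# Sketch of the usage-based enterprise pricing described on the show:
# a one-time on-site install fee plus a charge per thousand queries.

def first_year_cost(total_queries, install_fee=20_000.0, per_thousand_queries=0.50):
    """Estimated first-year spend: install fee once, plus metered usage."""
    usage = (total_queries / 1_000) * per_thousand_queries
    return install_fee + usage

# Hypothetical firm: 500 staff, ~40 queries per person per workday, 250 workdays.
queries = 500 * 40 * 250  # 5,000,000 queries in a year
print(first_year_cost(queries))  # 22500.0
```

Under these assumptions the metered usage is only $2,500 of the $22,500 total; the fixed install fee dominates, which is part of the break-even tension being discussed.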
[00:15:28] Speaker B: Well, and that's where a lot of it comes back to Sam Altman, one of the co-founders of OpenAI. He's being criticized for this, because he was involved with Y Combinator, and if you look at his talks, the advice was always that you should figure out your use case and your pathway to sustainability, and so on. And here, just because he had the reputation and the network, he was able to raise a bunch of money without any kind of use case, without any problem to solve. They're basically taking a bunch of problems and trying to see what sticks. So far, maybe Microsoft's use case of using it for mundane workplace activity, whether it's sending emails or work documents or what have you, maybe that's the use case, to the point where Microsoft has given them $10 billion. They've integrated it into Bing AI, so now Bing has become relevant, and they've announced they're rolling it out as a service as part of Office 365.
So obviously, as we've talked about in the past, the AI wars are on. All these companies are trying to figure it out. And I don't know, it might get to the point where, I mean, does anybody really talk about cryptocurrency anymore? AI has become the new cryptocurrency. Right.
[00:17:01] Speaker A: So my understanding, and I'll have to find the citation for this, was that despite all of the AI integration into Bing, which is Microsoft's chat competitor, they haven't actually made any gains in terms of search engine market share.
[00:17:20] Speaker B: Oh, really? That's interesting. I thought that they would have gone up by now.
[00:17:26] Speaker A: I can find it as we're talking about other stuff; I wasn't prepared to talk about that. But I'm pretty sure that Paul Thurrott talked about Bing and provided some data.
I should say that it's not really gaining market share.
[00:17:54] Speaker B: Yeah, it might not be gaining market share, but I think what's happened, from some of the articles I've read and the news coverage, is that Google's dominance in search isn't what it used to be. And I think we could probably all agree on that.
[00:18:10] Speaker A: Sure. So people may use AI for things they would otherwise have to search manually; I get it. But I think what's happening is that, until AI gets better, these AIs like Bing Chat will use live web links, but they're really only giving a very surface-level consensus of articles. I've really tried to push it to give me different sources, especially when searching for tech topics. You ask, can you provide articles on the following, and it really doesn't move past a handful of types of sources. And some of the sources it gives me, I've never heard of the blog. I know there's an article on what I'm searching for from The Verge or Engadget, which are much more reputable publications, and it just will not give me anything from there.
So maybe we're starting to see some of the limits of what a large language model can bring back regarding search in a live context. Does that make sense?
[00:19:24] Speaker B: That totally makes sense. From an academic standpoint, I would say it's akin to providing Wikipedia entries; it's at such a superficial level. I don't even think it's a good starting point. But hopefully with time, I'm still optimistic that AI will get better, and I'm still impressed with what it's doing so far. Look at when we played with that generative fill; it's in the beta version of Photoshop right now, but it's still cool, and I can see some of the possibilities. But at the end of the day, this is where you bring up good points about the feasibility, the scalability, the specificity.
[00:20:11] Speaker A: That's what I'm getting at, right? The generative fill you brought up works really well, and that's a really great use case that anybody can use. But look at a Microsoft, what they've done with their investment. It was on the First Ring Daily podcast, episode 1495, and Paul Thurrott talks about this with his co-host Brad Sams; it's just a very short daily podcast they do. Bing has had very little growth in usage, even after $11 billion in investment. So I think Copilot's really cool, and that's kind of the Microsoft Office equivalent of generative fill.
[00:20:50] Speaker B: Right.
[00:20:51] Speaker A: And I'm not down on AI at all. I'm just saying that $11 billion, even for Microsoft, is a lot of money, and they're not seeing a lot of growth from it. And I wonder if it's because they're just saying, we're going to try to roll it into everything.
They're taking this very broad approach, versus, say, a company that steps back and thinks about how it's going to be used a little more thoughtfully. Apple's famous for this; they're very rarely first to market with a product. And I do understand the concern around Google, but is Google going to take a step back and then implement exactly what Microsoft has done, with Bard, once they catch up, and then it's going to be level again? Some of these things are a lot more replicable than perhaps we realize. Even Meta has LLaMA, its large language model. Everybody's going to have a large language model. So I guess what I'm saying is that it's useful, but it's also like a rising tide lifting all boats, right? It's table stakes versus a firm and permanent advantage. I can see it in Office; I don't see it in Microsoft winning search. That's a little bit different from what I originally brought up, but that's, I guess, what I'm thinking.
[00:22:02] Speaker B: Yeah, and again, I think these are all valid points, and this is why they're still figuring it out. With OpenAI, and that's why I mentioned it, this is unprecedented growth; we've never seen a company get that many users in such a short period of time. Though I've even seen articles saying that the number of users signing up for OpenAI's ChatGPT has declined, or is kind of stagnant now.
But at the end of the day, you could see the application. Whether it's novelty or not, you could see, this is pretty amazing; in a minute it can generate stuff. And I talked about how these large language models work. Without getting too technical, it comes down to the way they're built.
They tokenize everything. They're taking our prompts, the information and instructions we're feeding into the system, converting it into tokens, into data, and running through multiple passes of, what is the next word that would come out? Right.
[00:23:23] Speaker A: And so it's a predictive analysis, like a probability problem, basically.
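The tokenize-then-predict mechanism the hosts are describing can be illustrated with a toy model. This is only a stand-in sketch: real LLMs learn probabilities over vast corpora with neural networks, whereas here a tiny hand-built bigram table plays the role of the learned model, and the corpus and function names are invented for the example.

```python
# Toy illustration of next-word prediction: split text into tokens, count
# which token follows which, then repeatedly emit the most probable next
# token given the previous one. Real LLMs do this with learned, contextual
# probabilities rather than simple bigram counts.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word after that".split()

# "Training": tally which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, steps):
    out = [start]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break  # no observed continuation for this token
        out.append(options.most_common(1)[0][0])  # greedy: most likely next token
    return " ".join(out)

print(generate("the", 3))  # the next word and
```

The greedy "pick the most likely next token" step is the probability problem Speaker A names; sampling from the distribution instead of always taking the top choice is what gives real models their variety, and their hallucinations.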
[00:23:27] Speaker B: Exactly.
And it's amazing what it's been able to do, but this is where those hallucinations come in. Certainly I would think it's going to get better over time, to the point where there's this whole idea of prompt engineering. The first article I came across where they talked about somebody making $335,000 a year doing it was in Bloomberg.
And that's just to come up with the instructions for getting this tool to come back with the right output.
[00:24:01] Speaker A: So I think that's going to be a flash in the pan. I don't think there are going to be very many of those prompt engineers in the long run. That's like the person working with the beta version; eventually those kinds of complex prompts and that constant change won't be necessary, so they have to cash in while it's useful, basically.
[00:24:23] Speaker B: Totally. But again, if we take your idea of stepping back, and especially since we're coming from an academic background: what is one of the reasons you would take a bunch of courses unrelated to your field of study? It's to get that well-balanced approach, the critical thinking, the critical analysis, the research skills. So I'm of the belief, and this is where, as much as people think it's going to replicate and maybe even replace everybody, I would describe it as
basically coding meets philosophy.
So now, if you understand how the technology works, how these large language models work, how they're trying to predict the next word or what have you, and if you can come up with really well-thought-out questions, provide it context, and be very specific, you can get that type of output out of it. You need to work with it, right? But this is where, imagine somebody who's a history major.
Somebody who's a history major, I think, would have huge advantages over other people, and a lot of those people with liberal arts backgrounds would probably do better than, say, a business or finance grad. You'd be able to look at the whole backdrop of history, of what's happened, and use that to ground how you're going to draft that prompt.
[00:26:14] Speaker A: Yeah, I agree with that, and I think we're in agreement. It was more just a question I have, because I'm starting to see not only hallucinations from the AI, but perhaps some hallucinations from the more panicked folks about how this is going to unfold. To your point, though, about specifics and prompting: that Amy Webb article has an interesting point about specificity. I'll quote here. She says, once a workforce understands how to delegate correctly, it will act as a force multiplier within organizations.
Individual teams could be more ambitious in growing the company's top line, ideating and stimulating new revenue streams, finding and acquiring new customers, and seeking out various improvements to the company's overall operations. And then she gives an example. This portends a future that demands a different approach to upskilling. She says most workers won't need to learn how to code or how to write basic prompts, as we often hear at conferences.
Rather, they'll need to learn how to leverage multimodal AI to do more and better work. Just look at Excel, which is used by 750 million knowledge workers every day. The software includes more than 500 functions, but the vast majority of people only use a few dozen, because they don't fully understand how to match the enormous number of features Excel offers to their daily cognitive tasks. Now imagine a future in which AI, a far more complicated and convoluted software, is ubiquitous. How much utility will be left on the table simply because business leaders approach upskilling too narrowly?
[00:28:09] Speaker B: Absolutely.
I mean, I'll tell you, earlier this year there was a conference here in Calgary called Inventures. One of the keynote speakers was Michio Kaku, the American theoretical physicist.
And one thing that really struck me when he was talking about AI, the hype, and the fears and so on:
being a theoretical physicist, he called chatbots just glorified tape recorders. And it kind of makes sense, because what it's doing is it's gone out there, taken all this information throughout human history, whatever is available on the Internet, and it's giving us a rendition of what's happened.
Right. Some things might be good, some things might be terrible, but it comes down to this: it's taking information that's already there.
And actually, another article that was just released on August 30th, the headline was "Large language models aren't people. Let's stop testing them as if they were." I thought that was a really good point as well. Without getting into the full article: why are we saying, okay, this large language model is beating all these tests, law school admissions and all that? Isn't that going to be obvious when you're taking all of the information that's publicly available on the Internet and feeding questions through it? Imagine if you had that access in your brain.
[00:29:49] Speaker A: Yeah. And I mean, I think that
these AIs and these models are going to be able to beat a test, because the test is a baseline for what you have to know.
You can't have a world, though, where someone's in an emergency room and the emergency medicine specialist, a doctor, says, you know what, I need to go home first and search this on ChatGPT, and then I'll come back and help you. That doesn't work; that doesn't scale. And that person's not going to have access to prompts and a computer. I mean, if you put a prompt into these chatbots, the response is pretty quick,
but that's not fast enough for someone who's dying.
The tests are a minimum amount of understanding that you have to have memorized, to be able to go through processes in your mind, and I think sometimes people forget that. This idea that we're just going to run back to our machine and sit there crafting and recrafting prompts until it gives us the perfect thing, that's not helpful, and in fact it's going to use more time than it's worth. It's different if you're tasked with doing something creative in a knowledge-worker situation and you need to translate part of what you've created to another format, and it helps you get started. That's very different from situations where people can't be at a computer. Until ChatGPT is a neural network that learns, inside a robot that can follow us around, it's not going to be helpful in a medical setting. It's not going to be helpful in an emergency response setting.
A lawyer who's in a deposition with a client isn't going to be sitting there prompting ChatGPT. I mean, like you said, these comparisons are ridiculous.
[00:31:48] Speaker B: Yeah, but you know what, one thing it could do, and this is where again we have to look at the use case and try to envision how it could work: let's say in the medical setting. Imagine if you feed it a bunch of images, tens of thousands, maybe even millions of different scans, for detecting, I don't know, a tumor or some sort of disease. Sure, when you feed it enough information, it might be able to do a similar job to what a doctor could. But again, it's just like, and I think we've talked about this.
[00:32:30] Speaker A: In the past, yeah. They already had AI imaging for cancer screening, even before ChatGPT.
[00:32:36] Speaker B: Yeah, exactly. But look at a practical example: there are airplanes flying every day, and most of them are running on autopilot.
You still have two pilots sitting in the cockpit.
You have a bunch of analog controls in there as well. Why do you have that? That's why I think Microsoft just killed it with that Copilot terminology. You're augmenting. I mean, are you going to go.
[00:33:05] Speaker A: I agree with their approach. I think they've sold it in a way that's realistic, and now you may have fewer people, because a smaller number of people using Copilot can do what a whole army did. I agree with that. I guess what I'm saying is that what you're describing is realistic, but some of the articles we've looked at are almost apocalyptic.
[00:33:28] Speaker B: Yeah, well, with any kind of technology you've got to look at the unintended consequences, and you've got to think about whether there's a way it could be used or abused; if there is, it probably will be. So you've got to take preventative measures. In our case, and we've chatted about this, look at Google: it has basically turfed a bunch of its high-profile AI ethicists.
Microsoft has eliminated its entire ethics team. So I don't know how well that bodes. And if you saw, there was a Rolling Stone cover story just recently with a bunch of female programmers, highly regarded, some of whom were the ones that got fired by Google. They've been saying this for years about some of the issues with these algorithms, right? The bias that's in there, and so on. A lot of that has to do with the people building these algorithms in the first place. Everybody has bias, and you have to be cognizant of that.
[00:34:33] Speaker A: Well, the bias question is the thing I find the most interesting, and also the idea of institutional memory.
So let me start with the latter.
The airplane pilot analogy that you give is a really good one, because in a world where you automate things to the point where there's no fail-safe person to intervene, even without AI, automation in general has posed unintended consequences that are a problem. So, for instance.
[00:35:06] Speaker B: Look at.
[00:35:09] Speaker A: The lack of human attention in agriculture, and automating large agribusiness.
You could make the case that that has led to less quality control and more outbreaks of salmonella and E. coli. There's a documentary on Netflix, for instance, called Poisoned, that's really interesting. It talks about how the reason E. coli outbreaks happen in leafy greens isn't because leafy greens are prone to getting E. coli. It's because the irrigation system happens to be shared with a feedlot X many miles away, and this is huge automated processing. So if something gets contaminated and everything gets watered, it just gets picked up. There's not a lot of human attention to the location.
You could argue that the inspections for food are inadequate. You can't see bacteria visibly, so a visual inspection is meaningless. And so we've automated to a degree where these unintended consequences happen and these things slip in. And I think the fear with AI is warranted when we start to eliminate what is really valuable experience and expertise. One of the things I find interesting, and this is on a separate note but related, Chris: I heard this on Windows Weekly, another podcast that I like.
They were talking about the Microsoft events, and a lot of the tech company events that have gone to streaming.
Apple's doing all streaming now, because they did that during the pandemic for their product announcements. Microsoft, I think, is having a Surface slash AI event, but it's going to be in person.
One of the things that was mentioned was that a lot of the videographers who were focused on streaming during the pandemic are just no longer at the company.
So the skill is gone. And I think about that: what if you automate something, and then those people get laid off, or they quit, or they retire, or they die, and then there's no one to fix the problem? We've seen this already in the banking industry. Much of the banking industry's software was written in COBOL, and a lot of the original programmers are dead. Nobody knows how to do COBOL, or there are very few people, and the software has to be translated from COBOL to something more modern, like Java or JavaScript or Python. And because there's such a shortage of people who know the language, you can make serious money.
It's always in demand, right? And so I think one of the concerns is about eliminating institutional memory, which has a long-term benefit, in favor of very short-term quarterly goals. And that's what companies tend to do. And that's a very valid fear about AI. On the bias front, though, I also wonder: let's say all companies started generating documents and mission statements and budget analyses, and they start doing all this stuff with AI. And this is what I don't understand about the technology.
By everybody feeding it data over time, does it kind of flatten out the creative output? You know what I mean? If everybody's using similar tools to automate the creation of knowledge, isn't it going to lead to just a bunch of really generic stuff, where all the companies and competitors are doing the same thing?
[00:38:46] Speaker B: Well, I think that's where, if you recall, prior to Google launching Google Bard, they were saying that if you use AI-generated content, it's actually going to punish you from a search engine optimization standpoint. Now, I don't know how they test that. And I guess that's another matter: it's getting very hard to detect and distinguish between what is AI-generated and what is human-generated. But with all of this, if everybody becomes so reliant on it, you might get to a point where it's just very low-quality content that's being produced.
[00:39:29] Speaker A: You see what I'm saying? Right now the language model is based mostly on human-created content. So if everybody's using the AI to create content, and then the model that it's working off of becomes AI-generated content, doesn't that make it more generic over time? Doesn't it change the content base that it's working from?
[00:39:51] Speaker B: Yeah, yeah, exactly. It's a bit of a, like.
[00:39:54] Speaker A: I don't know how it works; maybe that's not true. But if everybody starts writing their articles with AI, and then AI starts using the AI-written articles to generate more AI-written articles, isn't this going to become just worse content? It doesn't seem like there's enough variety in there.
Because you can kind of tell with some of the responses that ChatGPT gives. Sometimes you can tell, okay, this was a bot; it has a style, right? But if that style is what the model is working from, isn't it just going to use that style forever?
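The feedback loop being described here can be sketched as a toy simulation: fit a distribution to a "corpus," generate the next corpus by sampling from the fit, then refit and repeat. This stdlib-only Python sketch is only an illustration of the statistical intuition, not a claim about how any real language model is trained.

```python
# Toy illustration of the "AI trained on AI output" loop: fit a model
# to the corpus, generate a new corpus from the fit, refit, repeat.
import random
import statistics

def one_generation(data, n=200, rng=random):
    """Fit a mean and spread to the corpus, then 'generate' the next
    corpus by sampling from that fitted model."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n)]

if __name__ == "__main__":
    rng = random.Random(0)
    corpus = [rng.gauss(0.0, 1.0) for _ in range(200)]  # "human-written" start
    print("generation 0 spread:", round(statistics.stdev(corpus), 3))
    for _ in range(100):
        corpus = one_generation(corpus, rng=rng)
    # The spread of each synthetic corpus drifts, and over many generations
    # it tends to shrink -- the statistical analogue of everything starting
    # to sound the same.
    print("generation 100 spread:", round(statistics.stdev(corpus), 3))
```

Each generation can only reproduce what the previous fit captured, so variation that the fit misses is lost for good; that one-way loss is the worry being voiced in this exchange.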
[00:40:28] Speaker B: Yeah, well, and that's where maybe there's opportunity, right? Again, with these technologies, Eric, you touch on creativity, and at the end of it, with the topic we're talking about, there's a bit of novelty, but then there is some utility, and in certain aspects I think it could be used really powerfully. Look at how, in one minute, we were able to take some really low-resolution, pixelated images and it could generate something. If you had something in your mind that you wanted to showcase, and most of us are visual people, you can use these tools. And it is a tool, just like any other tool that you would have in your toolbox.
If you can help show somebody what that end state could look like much faster, as opposed to me hiring a graphic designer. Or even from a writing perspective, what would you have had to do in the past? You probably would have had to hire a ghostwriter.
Ghostwriters would be expensive. The only other thing that maybe you could do is run a seance or something, right? But at the end of it now, isn't this awesome? The output might be terrible, it might be awesome. But it gets you that first baseline start; it could be an outline. And if people are just copying and pasting it over and thinking that that's it, you're probably not going to get the best results in whatever your vocation is. But if you see it as something that can get you to a certain point, you can use it, tweak it, massage it, and that's where some of that originality might come into play, right? And it's funny, because even this one article that I came across, they're putting these large language models to the test of whether they are creative.
But again, if you take all of the content, all the works of art... there are a bunch of authors, for instance, like Steven Spielberg, where OpenAI has taken their information and used it as part of the large language model, right? And so at some point, if you had all of that, obviously you would be more creative. But at some point it's going to get watered down, and then how do you... I remember one time I went to a concert with one of my clients. It was Coldplay, and we were having dinner right before going to the stadium, and he just made a comment. It's like, you would think that in all these hundreds of years of human beings putting together songs, you would have every conceivable type of song and beat and so on, right? But somehow they keep turning out new songs, inspired by whatever's in the past. And that's where I say that history, that element, does help, and I don't think that's ever going to change. Probably one of the most underrated skills that we as humans have is the ability to tell stories. Storytelling: how you can connect with somebody.
I look at a guy like Steve Jobs. Years have passed now since he died, and he was considered one of the best, and he had his own quirks as well, as a human being. But he was able to make those connections and tell a story in such a way that that company, which was on the brink of bankruptcy, has become the most valuable company in the world.
And what does it come down to, really? I think it's the storytelling, right?
[00:44:43] Speaker A: And so I think the fear with AI is that in the interim, at least in the short term, companies see dollar signs based on quarterly increases, and so they forget about those things that are really important. And if they forget about storytelling, and there's no one around in the company anymore to remind them that storytelling is useful, i.e. that institutional knowledge, I think that's the fear. That being said, you do raise a really good point about the other side of the argument, which is that it can inspire creativity. And the music aspect is a really interesting one. I had read years ago, and I don't remember where, that there was an analysis done of the timbre of music, and it found that the variation in music beats and the range of notes had been greatly reduced. You actually saw much more variety in rock and roll, the old-school term, from the '50s, '60s, and '70s. Then the timbre really started to change, and not necessarily for the better; it became more generic going into the 1990s and the early 2000s, except for a few notable bands that were really different, like Pearl Jam and Nirvana. And the concern was that, based not on Auto-Tune, because it hadn't come out yet, but just on the refinement of production and marketing, all music was starting to sound the same. But it could go the opposite route, where an artist wants to be different, and they're using AI somehow to prompt something.
Give me a beat. Okay.
Make it really offbeat on this measure, or something like that. And then that inspires them. So I can see it doing the opposite, where it makes people more creative. It inspires them to do little things that they might not otherwise have done. And I think if they're feeding creative stuff back into the system, the system will give a greater variety of things to be inspired by. I think it's more of a concern that there may be a large majority of people who don't use it that way, or that for certain purposes it doesn't matter.
Well, maybe for industry reports it doesn't matter so much; generic and formulaic is better. I mean, scientific articles are like that. I wouldn't say scientific articles are particularly exciting to read. They follow a very formulaic approach for good reason: it's easier to skim. But they can be dry.
But I've had colleagues say, though, that academic writing doesn't have to be like that. And if you look at academic writing from the 19th century and early 20th century, the papers are far more informal and much more interesting to read.
And they'll have much more of a.
[00:47:31] Speaker B: Much.
[00:47:33] Speaker A: A greater sense of humility in what they're claiming. So what I'm trying to say is that AI isn't the sole culprit of the creativity being taken out of certain sectors. It just has the potential to go either way. It's like a knife's edge, right?
[00:47:53] Speaker B: Yeah, well, and this is where, if you look at it from a business standpoint, we went through this crypto phase.
Now crypto has pretty much faded, and we've transitioned into AI. AI everything is just the buzz. But imagine if we look back and reflect on this and apply it to what is realistic. It wouldn't be that sexy if we took this AI, these LLMs, back to the unsexy aspects. And what I'm getting to, and this is where I think Microsoft probably is going to be okay:
They've taken the mundane, unsexy aspects of our day-to-day work life, and they're trying to make those a little bit more efficient and bearable. So something like what you talked about earlier: who likes to take meeting notes? Nobody likes to take.
[00:48:55] Speaker A: And you don't learn any valuable skills from it. So automation there does make sense. Yeah, I totally get it.
[00:49:00] Speaker B: Yeah. That's a perfectly good application: automating the meeting notes, doing the action items, taking your agendas. Even prior to Microsoft Copilot, which they're going to be rolling out soon for all their subscribers, you had tools like Otter, for instance, where you could have everything recorded. Then imagine you take that same transcript, throw it into ChatGPT, have it summarize everything, and send out the minutes to everybody. It's a great application. Same with Excel, doing analysis and all that. Who likes to do any of that? From our perspective, even a lot of that data collection and data analysis, those are things you could somewhat automate and then go and take a look at. Certainly, I think all these technologies have an impact.
Certain professions are going to be affected. And really, I think what's happened is that there's this pressure to increase your profit margin, and that just goes back to capitalism. Every quarter has to be better than the last. And so how do you do that? Either you increase revenues or you decrease costs. So how do you decrease costs? You fire a bunch of people, you lay off a bunch of people. Right.
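The meeting-minutes workflow being described (record, transcribe, summarize, distribute) can be outlined in a few lines. This is a hypothetical Python sketch: the function names and prompt wording are illustrative assumptions, and the LLM call is left as a pluggable stub rather than tied to any specific vendor API.

```python
# Hypothetical sketch of the minutes workflow: transcript in, summary out.
# The model call is a stub; names and prompt text are illustrative only.

def build_minutes_prompt(transcript: str) -> list:
    """Package a raw meeting transcript into a chat-style prompt."""
    return [
        {"role": "system",
         "content": ("Summarize this meeting transcript into minutes: "
                     "list decisions, action items with owners, and open "
                     "questions.")},
        {"role": "user", "content": transcript},
    ]

def summarize(transcript: str, complete=None) -> str:
    """Run the prompt through a completion function. `complete` would be
    a real LLM call in practice; with no function supplied, the raw
    transcript is passed through unchanged."""
    messages = build_minutes_prompt(transcript)
    if complete is None:
        return "\n".join(m["content"] for m in messages if m["role"] == "user")
    return complete(messages)

if __name__ == "__main__":
    raw = ("Eric: I'll draft the outline by Friday.\n"
           "Chris: I'll book the room for Tuesday.")
    # With a real `complete` function plugged in, this would return
    # structured minutes; here it just echoes the transcript.
    print(summarize(raw))
```

Keeping the completion function pluggable mirrors the point made here: the tool (Otter, ChatGPT, Copilot) can change while the mundane workflow being automated stays the same.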
[00:50:24] Speaker A: So maybe we can do an Examining on capitalism, and whether quarter-over-quarter growth is actually necessary for healthy capitalism, or whether that's just a bastardization of it. I may lean towards the latter; I don't know that it has to be the case. But you make a really good point about that. I think about Nvidia, which just had its earnings recently. I'm probably going to get the numbers wrong because I don't have the article in front of me, but I think its profit was up over a hundred percent, and revenue was up 200 or 300 percent or something, a ridiculously huge number. So AI is now its number one business, and gaming is second, just because graphics processing cards happen to be really good for AI work.
Nothing that Nvidia actually planned.
And so there's a winner and a loser based on luck.
[00:51:27] Speaker B: Yeah, and again, who knows, right? Imagine if it gets to a point where it's taking too much energy, with these data centers and water and electricity, and it could fail. If that's the case, Nvidia could go down the tubes too, right? Nobody knows. But right now they've become a trillion-dollar company on the forecast that they're going to continue to grow. And that's what I mean by quarter by quarter.
These companies have to keep growing. And if you don't, then you've got to cut down costs somehow.
[00:52:02] Speaker A: Yeah, that would be an interesting discussion with an economist that maybe we should consider, and this is totally apart from AI now. I agree with you about the quarter-by-quarter mentality for companies, and that if they can't grow, then they cut costs.
I don't understand where it's written that infinite growth actually has to be the way that companies operate. For instance, look at the big utility companies. Nobody thinks about these companies, yet they're essential to running modern life. Fortis, for example: it has growth. I wouldn't say it's fantastic; it grows a little bit, and it might plateau. But the company, hydroelectric, natural gas... energy and utilities and water are pretty much the most essential publicly traded companies on the stock market. And they can be punished a little bit for lack of growth, but typically they're more in a dividend game. They're companies that can't not exist, and so they pay their investors a dividend, and that's how they grow. It seems strange to me that companies are punished for the inevitability that they might plateau and saturate the market. You know what I mean? Apple today is a different company than it was 20 years ago. It is not the same company.
They used to make one range of products, and now they make a ridiculous number of products, too many, I would argue. It's going to plateau, it's going to peak, but that doesn't mean they should cut costs and stop making good-quality computers. That's bonkers. Where is it written that there must be infinite growth? That seems like a strange stock market expectation. And I sometimes wonder if the stock market is conflated with the idea of a market economy. That's a totally separate discussion, but I can't help bringing it up, because I guess I question that. I'm not questioning what you're saying; I'm questioning that logic that you and I have both heard.
[00:54:22] Speaker B: And that's just the human nature, psychology aspect, right? But look at Apple: their new event is coming up in September, and they're under intense pressure for the iPhone to do well. The iPhone 15 is going to come out.
[00:54:39] Speaker A: I'll probably get one because I need a new phone. So I'm looking forward to it.
[00:54:43] Speaker B: Yeah, but it's getting to a point where, do you need a new phone every freaking year? Right? Like, yeah, I don't.
[00:54:50] Speaker A: Which is why I will buy one from them again, because my previous one was so good and lasted me so long. That's like saying, oh, Toyota didn't sell me a car every year; the damn things last too long, so they're going to go bankrupt. Well, no, they have a hardcore customer base because they're so good.
[00:55:08] Speaker B: Yeah, yeah, exactly. But isn't it funny? The market expectation, when the iPhone is already the number one selling product in North America anyway, and it has a huge margin. If they're expecting everybody to buy another phone, if that's what the investor analysts are thinking, I don't see it. How many people are going to go spend that, whether you do it on a plan or not? You're basically just amortizing it over the course of your payment plan or whatever.
[00:55:44] Speaker A: Well, I'm sorry, keep going.
[00:55:47] Speaker B: Yeah, but to spend $1,500 or $1,800 on a phone, it's just not feasible for people, especially the way the economy is going right now.
[00:55:56] Speaker A: So anyway, that does tie into AI and this growth idea. And Apple reports... do they report the actual number of phones that they sell, or is it profit per user now?
[00:56:12] Speaker B: I'm not exactly sure.
[00:56:13] Speaker A: I think it's profit per user. And the reason is because they know they're going to peak; there are only so many hardware devices they can sell. So they're making money off the services, and that's providing a huge portion of the growth. But I think about this with AI, right? At what point... companies always do this.
We actually have a decline in the number of users using our product, and that looks bad. Even though we're making more money than ever, people will punish our stock price, because they don't understand that that's irrelevant to how much profit we make. So then they have to do some magic where they don't report the number of active users anymore, but they report revenue per user.
[00:56:55] Speaker B: Yeah.
[00:56:55] Speaker A: And so they constantly have to manage this. Even if the amount of money they make is going up every year, just because they own all the infrastructure, the stock price can still go down if they report the wrong number. And that's why I think this seems ridiculous.
[00:57:12] Speaker B: Yeah, well, again, we might have to do an Examining on that. But getting back to maybe wrapping up this AI discussion: I would say that it is a tool, just like any other tool, right now.
I did a talk on this a couple of weeks ago.
No, there isn't a link in the show notes; it was a private one, for the 500 Global Accelerator Program. But one of the things that I mentioned: if that Business Insider report is correct, in terms of 75% of businesses not using it or banning it,
I personally don't know if that's the best idea, because when you ban something, the employees are probably going to install it on their phones and use it anyway. Streisand effect, right? They have to come up with a better policy, or have that open discussion about how you would use this. Obviously you shouldn't feed a bunch of confidential intellectual property into the system, but for some of those mundane things, I think it's perfect right now. And my point to them was, if all these huge companies are banning it, as startups you don't have to abide by those rules. You can maybe take that Facebook mentality of move fast and break things and make yourself more efficient. You can use it to automate the meeting notes, those action items. You can use it for day-to-day business communication, your writing efficiency. I'm teaching creativity in the workplace this upcoming semester, and I'm going to allow my students to use it for various applications. It's funny: in the past, I've had students generate 50 ideas on one issue that they're dealing with. Now, with generative AI, they should be able to do that, no problem, and it's going to be a matter of picking and choosing which ideas are actually feasible.
It might be able to help them not only brainstorm those ideas, but maybe help them create an outline, some plans for how to implement this. Summarizing, extracting information.
You could use it as kind of a motivator, or an assistant, like how I talked about earlier: having a ghostwriter, or even a designer, or whatever purpose you're looking for. So it certainly can augment certain disciplines. Imagine, let's say, the field of law. They say lawyers are going to be completely replaced.
Sure, that LLM for law will probably be able to figure out a lot of that stuff. But are you going to trust an LLM that hallucinates, or are you going to trust the lawyer who reviews it, and then charges you and takes on the liability of saying that this is the correct advice?
The same goes for accounting or medicine. It's basically going to shift the workflow, most likely, if they can sustain the actual operations, because right now they're just bleeding money. Microsoft has basically financed OpenAI for the next however many years.
Who knows? They've got to figure out that use case and that path to profitability pretty quickly, or otherwise it could be one of the biggest failures in human history. And who knows, maybe that radio analogy might be a good one. This is all recorded content, so how do you distinguish yourself from everybody else if they're all using this mediocre, watered-down stuff, like what you were talking about? We might actually screw ourselves more in terms of humanity.
[01:01:18] Speaker A: Well, I think that people are individuals; everybody's unique. You can draw characteristics between individuals, but by and large, we learn a lot when there's a bunch of diversity, and by diversity, I mean diversity of ideas. A bunch of different ideas come together, and then you combine them, and I think that's valuable. This idea that we can replace that with any kind of large language model is a little bit ridiculous, because the model is just based on pre-existing conditions.
And so, by definition, the pre-existing conditions become generic, or more generic. Right now, all of these models have the benefit of all of the creative works of human history.
Well, what would the large language model look like if all it was working from was stuff generated by other large language models?
[01:02:18] Speaker B: Exactly.
[01:02:19] Speaker A: It would probably be really boring, right? And so I think it's self-defeating.
I agree with you, though. When I look at this stuff, just to take it back to a use case: I'm not trying to be down on it. I find it really fascinating, and I feel like Amy Webb and a few others are the only people who really understand, just like you said to wrap this up, that there are specific use cases where this is really interesting.
But a lot of people are talking about this in broad senses, and the companies are thinking about rolling this out in a broad way, and I just think that's a terrible idea. It's just not going to work.
And what I would like to see, what I would love to be able to have, and I would pay for a couple of months:
I would love to try Office 365 Copilot. I would love to see how it can prompt me to create an outline and take a different approach. It's almost like having, as we talked about with use cases, a Socratic opponent, right?
And if you don't have a person that you can bounce ideas off of, well, this is maybe a decent substitute for that. So: I wrote this introduction, how do I translate that into a presentation?
Those are the things I'm excited by.
But only insofar as they get me started. I certainly don't want it to take over the driver's seat, because then it's not really mine.
[01:03:44] Speaker B: Yeah, yeah, for sure. Right.
[01:03:46] Speaker A: And that's where academic integrity and universities come in. That's what I mean about the concern, because there is a percentage of users, we can assume, who are going to cut and paste.
[01:03:57] Speaker B: Yeah.
Well, I'm sure there are some. It's funny, because I look at this past semester. Even though I had a policy in place, with certain parameters where they would have to document and reflect on how the output from generative AI came out, at the end I feel like there was maybe a handful of people that might have just copied and pasted it. But I was also very clear throughout. We would go and tear apart the output of what would be generated. And at best, because it doesn't have the context, it doesn't know all the things that I'm discussing in day-to-day business life, you're looking at maybe a C-plus or B-minus, and maybe some people are okay with that. But I could tell right away: they were very similar, generic. It didn't pick up on a lot of the.
[01:04:53] Speaker A: It has a certain tone, doesn't it?
[01:04:55] Speaker B: Yeah, yeah, exactly. And I think you could get better with that. It might have just been, let's throw this in, copy and paste it, and we're done with it. But ultimately, you as an individual, and this was one of the things that I instilled, you're responsible. You're responsible for the actions that you take, and I have to assess the deliverable that you're presenting to me. And the same goes for the workplace. Imagine you had to work on some major report, and you hand in this report, and it's pretty generic and blah. Maybe it even has a bunch of errors, because the AI is making up a bunch of fake information.
Well, you're on the hook for that. So you'd better know your stuff, and you've got to have that domain expertise, right? And this is where a lot of this is hype, I'd say, to some extent. And that's why, like that Slate article that we didn't really touch on: Meta has AI, Google has AI, Microsoft does AI, Amazon has a plan.
And I think that's a good approach: they're looking at how they are actually going to apply this in Amazon Web Services. Or look at Apple; you mentioned this earlier.
They haven't even unveiled what their plans are for AI. Certainly they're going to do something, right? There have even been suspicions in the past that they were creating their own search engine competitor to Google. Now it's probably changed. But this is where seeing what everybody else is doing matters. Especially when I'm looking at business and teaching business, there is this concept of first-mover advantage.
I don't buy that first-mover advantage. If you look at it, everybody that's been first to market hasn't necessarily succeeded, right? Sony was probably one of the first to market with an MP3 player, but who dominated it was Apple, with the iPod. Or search engines: there was a plethora of search engines back in the day.
Ask Jeeves, Excite, WebCrawler, AltaVista. AltaVista, that was my go-to. And at the end of it, it was Google that dominated search. So I would actually argue that there's a last-mover advantage: you see what everybody else is doing, you see how they failed, and you go. And again, Apple is the most valuable company in the world, and they have looked at what everybody else does and done it better. I actually think this Vision Pro is maybe a little bit of a departure from their typical strategy and implementation. But who knows, maybe they're getting to the point where it's kind of getting stagnant, or it's like.
[01:07:40] Speaker A: Something, it's either first mover advantage or it's the Innovator's Dilemma that Clayton Christensen talked about. You know, I mean, he talked about how Nortel had network switches and stuff like this. I don't really understand their business, so I don't claim to know. But their technology was primarily traditional network switching. And it was Cisco that came out with, you know, their technology, and they were tiny, you know, by comparison. And it was just better.
[01:08:09] Speaker B: It was.
[01:08:09] Speaker A: It wasn't better at scale when it started, but it was in the long run. And so, you know, a lot of companies, they start out at kind of the low end of the market. You know, hydraulics and heavy machinery. I mean, it was only little things like Bobcats. You know, they hadn't figured out how to scale hydraulics to large industrial machinery.
And at first, big diggers were still cable and pulleys and stuff for the longest time. And it took a long time. But the technology was better. Yeah, it just hadn't scaled yet. And I think that there's a combination between that and not being a first mover. I mean, so you could be the first mover with a technology like hydraulics, but it may be someone else who sees the hydraulics stuff and is able to scale it when you're not, even though you invented it. And then they dominate everything. Right. So it doesn't work.
[01:09:08] Speaker B: You're.
[01:09:09] Speaker A: I agree with you. These markets don't unfold the way people think.
[01:09:13] Speaker B: Yeah. And that's where, like, I mean, again, you know, touching on history, one of the things that we say in society is patience is a virtue. Right. Sure, we want to get things out as quickly as possible, I'm guilty of this, but, you know, sometimes just being patient. I mean, I say this to students all the time when I'm teaching business communication: does that email need to be responded to, you know, the second it comes into your inbox?
[01:09:40] Speaker A: No, but we were talking about this earlier with me because there's always that pressure to do that. And it's like, well, maybe if I left it for a couple of days, I would have a better response because I could think about what I wanted to say.
[01:09:50] Speaker B: Yeah, exactly. Right. But again, that's where that patience comes in. Sometimes, I mean, I've seen situations where, by not responding, it's already resolved itself. There's no need for a response after a couple of days, just because the time has passed. Right. And so, you know, I think with that, again, this is a nice kickoff for our first Examining, you know, deep dive into AI.
But yeah.
Hopefully the listeners find this, you know, discussion of interest. And, you know, feel free, again, I guess, if there's other topics or things, we'll see what comes up in, you know, our review. I mean, we're always looking at the current trends. There are certain things that are of interest to us. And so, especially, you know, I'm really into technology, design, other things. And, you know, same goes for Eric in terms of just looking at productivity and other aspects. So, you know, feel free to, I guess, drop us a line as well.
[01:10:57] Speaker A: That's right. Well, it's been a good episode, so I guess we'll leave it there.
[01:11:04] Speaker B: Yeah, for sure. Always a pleasure, Chris.