Episode Transcript
[00:00:08] Speaker A: Welcome to Examining A, a technology-focused podcast that dives deep.
I'm Eric Christiansen.
[00:00:16] Speaker B: And I'm Chris Hans.
[00:00:22] Speaker A: So welcome to another episode of Examining A, the technology-focused podcast that dives deep. Good afternoon, Chris. How are you doing today?
[00:00:33] Speaker B: Good, how are you?
[00:00:35] Speaker A: I'm well. We have a couple of topics that we're going to discuss today in our new and improved format that we tried for the first time last episode, our kind of condensed format. So I think the first thing that we were going to start with today is Anthropic and their proposal to pay $1.5 billion to settle the author class action lawsuit that they're embroiled in. Did you want to kick us off with just kind of what's happening?
[00:01:15] Speaker B: Yeah, for sure. And maybe to give people some background. So Anthropic, which is the company that created Claude. So the large...
[00:01:23] Speaker A: So an AI chatbot competitor to ChatGPT.
[00:01:26] Speaker B: Yeah, exactly. So to train their model, they went and took a bunch of books off the Internet through this one website that is a known source of works taken without the authors' permission, and they fed that into their large language model to train it.
And right now, I forget exactly how many books, something like 465,000 works that they've identified so far. They're being given until September 15th to put together a full list of books. But they took all of these books, let's say roughly half a million, put them in there without authorization, and they decided to settle at $1.5 billion. So according to the articles and the news that's been released, that works out to about a $3,000-per-book settlement.
[00:02:37] Speaker A: Right.
[00:02:38] Speaker B: And you know, the interesting thing, which I didn't realize until you brought it up, is that the judge has thrown out this settlement.
[00:02:48] Speaker A: Yeah, like you said, they're trying to settle with the authors in this class action suit, who are unhappy with the way that they've kind of not paid attention to proper copyright. I think the sticking point wasn't even just that they scraped this well-known repository, but that they also held onto that content in their internal library.
So it wasn't something that they trained on and purged. It was data that was then retained, which I think makes it even worse, right? And we're going to tie it into something else, but I think we kind of expect this from these AI chatbot companies because they need a certain amount of data to work with. One of the reasons Anthropic is in more hot water, though, I mean, perhaps ChatGPT, Gemini, they've all done this to some extent, is that Anthropic has tried to position themselves to some degree as not using their own users' inputs for training data, and they've tried to build this gentler, more humane kind of AI company brand, which is what it is. So I think the copyright violation looks a little more glaring in contrast to the image they've tried to create for themselves as a company. That's kind of what I see. But like you said, this judge, I'm just going to get the name.
Judge Alsup expressed concerns about the claims process and the potential for future litigation. So basically, by rushing to settle for $1.5 billion, which, like you said, doesn't really add up to a lot per book, they basically wipe the slate clean to continue this behavior moving forward. That was the concern.
[00:04:51] Speaker B: Yeah, and that's a big issue because this would set a precedent. And so now all of these companies, OpenAI, Meta, Midjourney, all of them, they can just pay out 3,000 bucks a book for taking pirated books or content. That's it.
[00:05:08] Speaker A: And I think with this one, the judge's concern was also about how they notified the members of the class action lawsuit. So it's a little bit different. It's not like they were taken to court by McGraw Hill or some unified publisher; it's a class action suit. So I think there were some concerns about how people were notified, like, was everybody properly informed about the settlement? I mean, there are a lot of people in this class action lawsuit, right?
And so as I understand it, it was kind of struck down. I think the language from the judge was that it's put on hold until there's more information and more justification for the process.
So I think it was this: Maria A. Pallante, president and CEO of the Association of American Publishers, said in a statement that the court demonstrated a lack of understanding of how the publishing industry works.
The court seems to be envisioning a claims process that would be unworkable, and sees a world with collateral litigation between authors and publishers for years to come. The deal would resolve the authors' class action over the AI company's downloading of millions of pirated books, one of the first settlements of a copyright dispute against AI leaders, a group that also includes OpenAI, Meta, and Midjourney. Like you said, attorneys say the proposed $3,000-per-book settlement establishes a potential benchmark for the tech giants to resolve similar claims. So, kind of exactly what you're mentioning: it's going to set a precedent for how copyright is perhaps litigated, or settled, in the future.
[00:06:56] Speaker B: Yeah, well, and I mean, we've talked about this in the past, right? When there are these huge companies like Meta or Google and they get these settlements, it's weird because the market actually rewards those companies. As soon as they've settled for hundreds of millions, or let's say even a billion dollars, the stock goes up, because you've just given free rein now to these companies. How we've described it in the past is that it's like you've slapped them on the wrist, it's like a parking ticket, and they're still making millions upon millions. So yeah, of course the shareholders are ecstatic about this.
[00:07:38] Speaker A: I would like to point out too that there are other copyright complaints.
I don't know, I'm not an expert in copyright by any means. I know Creative Commons, open copyright, very well because of my research, but copyright law in the United States is not my area of expertise, so I'm going to kind of muddle my way through this. But not all the copyright claims against the AI companies are the same, either. This one is particularly egregious because these are copyrighted works that were obtained from a less-than-ideal repository. They weren't bought in an agreement with the publishers to scrape for training. They could have even tried to make a deal with all the publishers: hey, we want to use this for our training model, let's work out how many millions of dollars per publisher as a one-time thing, we won't hold any information in our repository, we just want it to feed our model, and so on. They didn't do that.
But this is a little bit different than say other copyright complaints. So for instance, I believe, I don't know if it's still going on, but the New York Times had a complaint against OpenAI.
They claimed they were able to get ChatGPT to replicate, I don't know, four or five paragraphs verbatim of an article that was paywalled. And they used that as evidence to say that OpenAI had scraped their paywalled content.
Now, the issue I took with that, though, was that, okay, maybe that's true, or maybe a user uploaded it. I don't know how it got into their system, right? Maybe a person with a paid subscription downloaded the PDF and uploaded it. Who knows how it was trained, right? But one of the issues is that they were claiming that it would take users away from the New York Times.
The idea being, oh, people will go to ChatGPT to get the New York Times, they won't pay for the New York Times. And I'm like, I don't know how many prompts they had to use to make it do that. So to me, that's not a realistic copyright complaint, because it's not a very clear argument that ChatGPT is a competitor, which is what they were saying. And then of course there are things on the open web that they scrape that are not behind a paywall. They're copyrighted, but as long as they're not reproducing and redistributing those works, and the machine is just reading them, it's hard, I think, to make the case that that's copyright infringement as well.
I only bring that up to say that copyright is complicated and there are a lot of different types of accusations being made against the AI companies. And I think this one, the one with Anthropic, is the more serious.
[00:10:17] Speaker B: Yeah, yeah, exactly.
So, but again, you know, it would create a precedent. And to your point, Eric, I don't know, let me think out loud here.
The common sense type of approach would be going and asking for permission from the publishers of these works and coming to an agreement. But no, of course not.
Let's just go and steal it, and who cares about the outcomes? Let's just take all this information, we'll deal with it later, because guess what, by then we'll have raised hundreds of millions of dollars, and we can hire lawyers and throw money at the whole situation and make it go away for $3,000 a book.
[00:11:08] Speaker A: So, yeah, I mean, it's hard to imagine that this wasn't perhaps even considered as part of the business model beforehand.
[00:11:17] Speaker B: Yeah, exactly.
[00:11:18] Speaker A: It kind of ties into another thing that Claude is doing. So like I said, Claude, or Anthropic, is famous for claiming that they don't use their users' data to train the model further.
Now I have a question for you, because I haven't looked in a while, but in ChatGPT, do you have the option to say, don't use this data to train the model?
[00:11:44] Speaker B: You do have the option.
I don't know if I trust them because of that.
[00:11:49] Speaker A: No, it's not about trust. It was more like, do they claim that you can say no?
[00:11:53] Speaker B: Yeah, they do have the option where you can go and say...
[00:11:57] Speaker A: You can do it, but it's on by default.
[00:11:59] Speaker B: Yeah, it's on by default. You have to go and opt out of it. And if you go into the invisible mode, the temporary chats, it'll retain it for 30 days.
[00:12:17] Speaker A: Okay.
[00:12:17] Speaker B: And, but yeah, then that doesn't get fed into the model.
[00:12:22] Speaker A: Okay, yeah, so you can do that with other products, but I think Claude was more famous for just not doing it by default. But what's interesting with Claude, and this isn't related to copyright, but it kind of shows the cracks in their armor and their brand, so to speak.
Starting, let's see, when was this article written? Starting on August 29th, so a little while ago, like a week ago, Anthropic is changing its data handling policy for Claude AI. Users have to either opt in to allow their data to be used for training future AI models, or opt out to keep their data private. And if no choice is made by September 28, 2025, access to Claude will be revoked.
Yeah, so they're now going in the opposite direction, where they're going to kind of redo some of the tiers of their plans, and they're going to start using user data to train the models. And in Tom's Guide it says, why Anthropic is making this change: the company says that user data is key to making Claude better at vibe and traditional coding, reasoning, and staying safe. Anthropic also stresses it doesn't sell data to third parties, which I understand to be true.
And uses automated filters to scrub sensitive content.
But in this article it says, still, the change shifts the balance from automatic privacy to automatic data sharing unless you say otherwise, which is very similar to how telemetry and user data are collected in, like, Chrome and Microsoft Edge. I think the only privacy-focused browsers left are Safari and Firefox.
The other ones scrape your data by default. I mean, it's quite a laborious process to go through Chrome and Edge and turn everything off.
Yeah, I have, because I'm one of those guys. But it just shows an interesting shift, because these companies are really hungry for more data to train their models.
[00:14:51] Speaker B: Yeah, let's see. This is where, I mean, again, everybody kind of bashes on Apple right now, and there's been talk about Apple potentially buying Perplexity. And for the most part, if you take a look, most of the acquisitions that Apple has ever done haven't gone that well.
I mean maybe one of the more successful ones has been the Beats acquisition.
[00:15:16] Speaker A: But I would argue that Siri was successful because it's a ubiquitous product, even if it's not the most advanced. It's very good at doing very specific voice prompts that people always do like timers and reading your messages and stuff like that. Like it is a good brand.
[00:15:31] Speaker B: Yeah, no, so there's, there's been some good acquisitions, but yeah, there's some people now they're saying that they should just go and buy Perplexity. They have the cash to do it, they'll give them the run room. But honestly, I mean you have some of the brightest people there.
You have a company that has access to I don't know how many billion people have iPhones and other Apple devices. So you have the user base.
They already have a privacy first type of mentality and that's part of their brand.
I still think Apple's been well known for this: they are usually the last one to every type of product, and then they just make their experience way better than everybody else's. So I wouldn't count them out. But I'm sure that they've had other issues, like with the tariffs, so they're fighting fires on that side. And in the meantime, they're also losing a lot of their key people to Meta.
If you've been following, Meta has almost been throwing out money like these are NBA, LeBron James type players. They're hiring all the top AI researchers and experts in the field.
[00:16:50] Speaker A: You know, it's funny thinking about all of these AI platforms, these AI models. I think it's becoming clear that the rate of improvement is not going to be what they originally thought.
And it seems to me, as you know, I'm a longtime computer enthusiast. I like to code.
I'm not an AI neural network guy. It's just not my training.
But my sense from reading about neural networks and about how these datasets work is that it almost requires exponentially, and I really do mean exponentially, larger data sets to make these leaps, exponentially more training data every single time you release a new model, which means we're going to start hitting a wall, or already have started hitting a wall. I think GPT-5 is a huge improvement, personally, over 4o. It doesn't seem as sycophantic; it's a lot more business, here's what you wanted. I kind of prefer its tone. I do feel it to be an improvement, but I think people are disappointed, not because it's not an improvement, but because they were expecting this quantum leap. Five is kind of a nice number, not a round number, but a nice number. It's not a 4.1 or a 4.2; you jump to 5, you expect it to be, I don't know, a lot better than a 4.3. And it's not.
The naming of these things is somewhat arbitrary.
It also makes me think back to something we talked about some time ago, which is that I wonder if there is just an inherent limitation to how these large language models are built, that we just really don't get. And so it's hard to make them better, because our understanding of them is a little bit limited. I have some notes from an article I read a while back from Stephen Wolfram, whose Wolfram Alpha is an amazing search engine, by the way. I think people should use it, and I think it's started to incorporate AI. But he kind of talks about these neural networks.
The note I have is that Wolfram points out that many neural network design choices, like adding token value and token position embeddings, so kind of how these probabilistic words are calculated, are based more on what works through experimentation than on an understanding of why they work.
He highlights that while neural nets often function effectively even when they're only roughly right, we lack a deeper, engineering-level grasp of their inner workings. The laws on which these models are built are not explicit, so it's nearly impossible to make them transparent. We can promise that it's going to be more private, we can promise to do this or that, but because we don't really know how it works in an explicit sense, we know the input and we can see the output, but there's a bunch of stuff in the middle where we're missing knowledge, it's very, very difficult to make them not only reliable but also accountable.
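[Editor's note: to make the "token value plus token position embedding" idea a bit more concrete, here is a minimal sketch. This is not Wolfram's code or any particular model's implementation, just a generic transformer-style illustration in Python of how each token's value vector and position vector are simply added together before the network sees them.]

```python
import numpy as np

# Minimal sketch of the "token value + token position" embedding idea discussed above:
# each token ID gets a learned value vector, each position gets a position vector,
# and the two are added. Sizes and random values are illustrative only.

vocab_size, max_len, d_model = 1000, 16, 32
rng = np.random.default_rng(0)

token_embedding = rng.normal(size=(vocab_size, d_model))    # one vector per token ID
position_embedding = rng.normal(size=(max_len, d_model))    # one vector per position

def embed(token_ids):
    """Turn a sequence of token IDs into the input vectors the network actually sees."""
    positions = np.arange(len(token_ids))
    return token_embedding[token_ids] + position_embedding[positions]

x = embed(np.array([5, 42, 7]))   # hypothetical 3-token input
print(x.shape)                    # (3, 32): one d_model-sized vector per token
```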
[00:20:13] Speaker B: Well, even, I mean, a couple of things that I've come across over the last couple of days. There were a couple of papers that have come out about hallucinations. One was from OpenAI, and then there was this one from this company called Asana.
But at the end of it, they were looking at some of the direct implications, and what's happening in terms of suppressing the hallucinations in a controlled way without changing the model. And effectively, these large language models are sophisticated algorithms that compress vast amounts of information, let's say the Internet, and then recreate it on demand. But that compression is necessarily lossy, so it means that rare facts cannot always be correctly retrieved.
And in this paper, they actually go and provide some concrete tools, based on information geometry, to detect some of these. Which kind of makes sense, right? If you think about it logically, if you want something on demand like this, inherently there might be some issues: you're pushing it because you want something to come out, and that doesn't necessarily mean it's going to be great. I mean, remember there was that 60 Minutes episode about Google's large language model, where they had all their experts, and they trained it in English, and then out of nowhere, somehow the large language model understood Hindi. And when they were asked how, they said, I have no idea how that happened.
[00:21:49] Speaker A: Yeah, so maybe it was able to see enough connections and examples to deduce one language from another. I mean, that's possible, but like you said, it's a black box.
Stephen Wolfram has this concept of computational irreducibility.
So it suggests that many natural processes just can't be simplified. They are what they are and you have to run every single step to see what the outcome is going to be.
And this kind of challenges the neural network's ability to truly model or understand complex, irreducible phenomena. It's not a rules-based system. In fact, that's what he said: in the early days, maybe not as much now, people were getting all these ridiculous answers for things like fractions from ChatGPT. It was arguing that one half is greater than two thirds, that kind of thing. And that's just wrong, but because the system isn't based on principles that you'd find in nature, like actual rules, it's this kind of probabilistic outcome, and we don't know what happens in between.
There's just inherent flaws.
And I think that this is going to be an issue, because I don't think that these models can get much better, or maybe they can get better if they're reduced. So maybe an interesting way to talk about it, and I can reference this in our show notes, is the Windows Weekly podcast. A while back, they were talking about how in Windows 11, if you open up Paint or the photo editor, there's a prompt in Windows, I don't mean a prompt like in ChatGPT, that says you might want to download this model so you can do these AI photo editing tasks or something. So there are little models that are embedded all over Windows.
Right? There's a model for this, there's a model for that. So maybe the future is that the chat interface, or voice, or whatever interface the person sees isn't really the model. It just acts as an orchestrator to point you to the smaller models that do things really well.
Because it seems like these big unified models, these general models, are where things seem to fail.
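[Editor's note: just to picture that orchestrator idea, here is a minimal sketch. The specialist models and the keyword routing are entirely hypothetical placeholders, not how Windows or any shipping product wires this up; it only illustrates the pattern of a front end dispatching requests to small, specialized models.]

```python
# Minimal sketch of the "orchestrator over small specialized models" idea.
# The specialists and the keyword routing are hypothetical placeholders.

def photo_model(request: str) -> str:
    return f"[photo model] handling: {request}"

def code_model(request: str) -> str:
    return f"[code model] handling: {request}"

def general_model(request: str) -> str:
    return f"[general model] handling: {request}"

def orchestrate(request: str) -> str:
    """The interface the user sees: route the request to a specialist, don't answer it yourself."""
    text = request.lower()
    if "photo" in text or "image" in text:
        return photo_model(request)
    if "code" in text or "function" in text:
        return code_model(request)
    return general_model(request)

print(orchestrate("Remove the background from this photo"))
print(orchestrate("Write a function that sorts a list"))
```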
[00:24:26] Speaker B: Well, and part of it, even in those papers that I talked about, one of the things, and I think about it with even the word hallucinate, I mean, basically what's happening is that these large language models are bluffing their way through.
Right?
[00:24:41] Speaker A: I mean, maybe that should be the term: bluffing, very convincingly.
[00:24:44] Speaker B: Right? But what they were proposing instead is that maybe you allow the large language model to actually say, hey, I don't know, and you penalize incorrect answers more heavily. If you could train it that way and teach the models to say "I don't know" when they're uncertain, maybe that would change the whole thing. Who knows? But again, these companies have a vested interest, and like you said, I think everybody was expecting these leaps and bounds that didn't really transpire. I even saw something about the amount of money OpenAI is going through: they've increased their revenue, but the costs they're incurring have actually increased even more. So at the end of it, they're going to burn through billions of dollars. I forget how many, but it was quite significant.
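[Editor's note: as a rough illustration of that "penalize wrong answers more than abstentions" idea, here is a small sketch. The specific weights are made up for illustration and are not from the OpenAI paper; they simply show how, under an asymmetric scoring rule, a model that says "I don't know" when uncertain comes out ahead of one that always guesses.]

```python
# Sketch of an asymmetric scoring rule: wrong answers cost more than abstaining.
# The weights are made-up illustrations, not values from any paper.

CORRECT, ABSTAIN, WRONG = 1.0, 0.0, -4.0   # guessing only pays off if you're likely right

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score of answering (being right with probability p_correct) vs. abstaining."""
    if abstain:
        return ABSTAIN
    return p_correct * CORRECT + (1 - p_correct) * WRONG

for p in (0.9, 0.5, 0.1):
    better = "answer" if expected_score(p, False) > expected_score(p, True) else "say 'I don't know'"
    print(f"confidence {p:.0%}: better to {better}")
# With these weights, guessing only beats abstaining above 80% confidence (p - 4(1 - p) > 0).
```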
[00:25:42] Speaker A: I wonder if a solution to this could be something that we could borrow from finance. So when you look at a lot of analyst picks on stocks and stuff, for instance, often there's kind of a confidence score.
So they'll say, is this overvalued? Is this undervalued, Bullish, bearish on the stock? And then how bullish or bearish they are. Kind of depends on how far the outlook is.
But often, I don't know if it's Morningstar or Trade Central or one of those companies, but they'll often have a confidence score. The confidence score is high if they've reviewed the financials and the data a lot, but sometimes the confidence score is low, especially if it's a newer company and there aren't a lot of financials that are publicly available. So they kind of say, well, this is our view, but keep in mind that this is our confidence in it. Another good example is AllSides, which is a media bias platform. You can put in a media company and they'll rank whether it's left- or right-leaning politically, and that's based off a checklist. They have internal audits that they do as snapshots, they don't do them all the time, they have user ratings, and then they compile that and give you kind of a confidence score: how confident are you that the Wall Street Journal is to the right of center?
And so I wonder if that would be something that would resolve this, because I think the issue isn't that things have to be perfect, but that it's very difficult for the average person to discern how confident the system is. If they get an answer from ChatGPT that's, say, 95% confident with a bunch of sources, versus something that's 50% confident...
...they may think twice, or maybe they would prompt it differently or ask the question differently. If we're trying to get people to just take a step back and think before they believe it, those things can go a long way.
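[Editor's note: one cheap way an interface could approximate that kind of confidence score, and this is only a hedged sketch of the general self-consistency idea, not something ChatGPT or any of these products actually exposes, is to sample the same question several times and report how strongly the answers agree.]

```python
from collections import Counter

# Sketch of a self-consistency style confidence score: ask the model the same
# question several times and report how often the sampled answers agree.
# `ask_model` is a hypothetical stand-in for whatever chat API you actually call.

def ask_model(question: str) -> str:
    raise NotImplementedError("replace with a real API call")

def answer_with_confidence(question: str, samples: int = 5):
    answers = [ask_model(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples

# e.g. ("Paris", 0.8) -> the UI could show "confidence: 80%" next to the answer
```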
But they haven't hired me or asked me to implement this from a UX perspective at this point. So, you know what, I think this falls under some of the problems that we're having with these companies. Their practices definitely fall under Cory Doctorow's enshittification model: once they lock you into a platform, then they make things worse to extract and extract, though it seems to be happening a lot faster than I anticipated.
There's a related story that you found, from Futurism.com, that's called "Sam Altman says he's suddenly worried about the dead Internet theory coming true." So what is the dead Internet theory?
[00:28:29] Speaker B: So the dead Internet theory, my layperson explanation, is: imagine a web where bots talk to bots, algorithms talk to algorithms, and we're left scrolling through an artificial loop of auto-generated junk.
And so I found this kind of funny, and even the subtitle here was, "we're all trying to find the guy who did this." The irony is, Sam Altman is the guy holding the keys to the very system fueling all of this. And the danger isn't really that the AI exists. The danger is that we forget what it means to be human in this loop. So how does that work?
[00:29:10] Speaker A: How would we forget what it means to be human?
[00:29:12] Speaker B: Well, if everything's automated, then nothing's authentic. And like we talked about last episode, who knows, we might get to a point where it's all a WALL-E type of situation. And even for them, it's a problem too, because imagine if all the content is being generated by AI: their training models are going to have issues as well.
And so it's kind of like the analogy that we've given in the past: it's like taking a photocopy of a photocopy of a photocopy, and at some point it gets all pixelated and distorted, and it's not as crisp as the first one. Right? And so, again, I've even seen recently that there are a bunch of articles from journalists that have been pulled because they figured out that they were AI generated.
So.
[00:30:04] Speaker A: Oh really?
[00:30:05] Speaker B: Yeah. The Verge, I think, actually pulled a bunch. I think there are a few different publishers.
So.
But yeah, so what would happen?
[00:30:19] Speaker A: So in a world where the majority of the Internet is AI-generated content, is it going to be this thing where AI generates content that AI reads, that nobody reads anyway because they're getting AI to summarize it? Is it dead because nobody's invested in it, because there's no humanity? I guess that's what he's saying, because there are no people involved.
[00:30:48] Speaker B: No people involved. Right.
[00:30:51] Speaker A: And so to me, how does that work? And maybe you've already said it, I think you mentioned this, but I got pulled into one aspect of what you're talking about, so if I missed a piece, please forgive me. I think what you're getting at is that it kind of poisons the well in terms of their training model. Because, I mean, I've never really used ChatGPT to write things for me. I put things through it; if it's a really sensitive communication and I really want to make sure it comes across well, then I may get it to edit heavily. But I often like to just hammer it out myself, at least to start.
I never get it to write blog posts for me or do any of that because I like to write. But if the AI model is now training on AI data, does that wreck the model? Because isn't that what we call model degeneration? We talked about this a while back.
[00:31:52] Speaker B: Yeah, exactly. And then I guess what's worse is at some point, basically, maybe nothing on the Internet is actually done by humans anymore.
And I don't know, maybe that's not necessarily a bad thing. I mean, we know that Jony Ive has been hired by OpenAI to go and create some sort of product. And some of the renderings that I've seen, just people creating Photoshop versions or whatever, who knows, it looks kind of like the Humane AI Pin. So it's going to be another device. It might fail again.
But yeah, the way that we're interfacing with the Internet may change. It's already changing, right? I mean, we've talked about this over several episodes over the last year. Why would I go and use Google when I can use one of these large language models to find things?
[00:32:46] Speaker A: You know, see, I'm still not there. I'm still using search.
[00:32:50] Speaker B: Yeah, well, maybe you're not.
[00:32:53] Speaker A: I'm finding that the AI models are not quite there. So I've been doing this, right? I've been doing web searches: go find this for me.
And you know, sometimes ChatGPT comes back with some great sources. But you know, I find that I have, I just have better sources if I put the search into Google News.
[00:33:11] Speaker B: Well, maybe for articles, yeah, that could be the case. But this is a known thing now: the traffic going to Google has declined.
[00:33:23] Speaker A: Yeah, right.
[00:33:23] Speaker B: And so, to that end, Google has now just gone all in on AI. I mean, they're the ones that created the transformer model, this whole architecture, in the first place. But fundamentally, if it's basic, simple things that I don't really want to put too much thought into, instead of sifting through hundreds of pages of results from Google, I'd rather just have it give me one result.
And it's going to impact Google's business model, because now they're not going to be able to charge as much. But it's interesting: you now have the option, say for your blog posts or what have you, on your website's side of things, to allow it to be crawled by these large language models or not. So that's your choice.
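[Editor's note: the usual mechanism for that choice is the site's robots.txt file. As a hedged sketch, the crawler names below are the publicly documented ones as far as I know, but they do change, so verify them against each vendor's current documentation, here is a small Python snippet that generates a robots.txt opting a site out of some AI training crawlers.]

```python
from pathlib import Path

# Sketch: generate a robots.txt that opts a site out of some known AI training crawlers.
# User-agent tokens change over time; treat these as examples and verify them against
# each company's current documentation before relying on them.

AI_CRAWLERS = [
    "GPTBot",           # OpenAI's training crawler
    "ClaudeBot",        # Anthropic's crawler
    "Google-Extended",  # opt-out token for Google's AI training (not regular Search)
    "CCBot",            # Common Crawl, whose data many models train on
]

rules = "".join(f"User-agent: {bot}\nDisallow: /\n\n" for bot in AI_CRAWLERS)
Path("robots.txt").write_text(rules, encoding="utf-8")
print(rules)
```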
[00:34:14] Speaker A: But Google just had their quarterly revenues and didn't they grow considerably?
[00:34:23] Speaker B: Oh, did they?
[00:34:25] Speaker A: I mean, it's not the most they've ever grown, but I didn't plan to talk about this today, so I don't have a great source. My understanding was that, well, I know their stock price went up after the antitrust decision came down, and perhaps that's something else we can talk about.
But it's not like they had declining revenues or profits. I mean, they were doing great. So traffic is down to some degree, but then is that from people using Gemini?
I think the jury is still out on who will win the AI race, right? I mean, I don't know that that's clear. I'm trying to think back to an analogous time in history.
Was it obvious that the iPhone was going to dominate?
[00:35:14] Speaker B: No.
[00:35:15] Speaker A: Was it obvious that the IBM PC and Windows were going to undercut Apple, who came out with the graphical user interface first? I mean, there are all these things we would say "of course" about now. Totally. But "of course OpenAI is the leader"?
I don't know if that's true. I still say that their models are better, but I have to say, just backing up. I haven't actually worked with Gemini and I'm starting to wonder maybe.
I've been looking at Claude, I've been looking at OpenAI and how it ties into Copilot. I think Copilot has made some huge advancements.
I like the way Microsoft has implemented it. But I have to admit that I actually haven't spent a lot of time with Gemini, especially the latest version. And you know, it ties into Google Docs and Drive, which I think is a great product and maybe that's something to take more seriously.
[00:36:14] Speaker B: I don't know, just looking quickly at the news, you're right. About a month ago, they actually exceeded the market's expectations.
Yeah, but I don't know if the growth is there or not; whatever the market was expecting, they exceeded it.
[00:36:29] Speaker A: So what is. Is there a Gemini Pro?
[00:36:33] Speaker B: I'm not sure, I haven't played with it. I think so. Well, I don't know about Pro, but there is one that you can get with Google Workspace, where you can go and get into it and get access.
[00:36:48] Speaker A: Okay.
[00:36:48] Speaker B: Yeah.
[00:36:49] Speaker A: Well, maybe that's something I should check out. Maybe I should switch to Gemini and report back.
[00:36:53] Speaker B: Yeah, I mean, it doesn't hurt to try out all of them.
For the most part, what I've seen is that certain models, certain tools, are better at some things than others. So, for instance, Claude, from what I've heard, is better at the coding side of things as opposed to some of the other ones. But maybe with GPT-5, OpenAI is neck and neck, who knows? I'd have to go and look. I know that Ethan Mollick sometimes looks at the performance of these models, and overall they're not that far apart.
And even the other thing that's interesting too, which we maybe have not talked about in a while, the effectiveness of the models has increased over the last couple of years.
Also, the other big thing is that the amount of energy these models are using has apparently declined.
[00:37:52] Speaker A: Yeah. Considerably.
[00:37:53] Speaker B: Yeah, considerably.
[00:37:55] Speaker A: Yeah. And because they're getting more efficient.
Interesting.
I think maybe we should. If it's okay with you, I'd like to throw in kind of a tip. I didn't plan to do this, but I'm thinking now I'd like to switch away from GPT, or maybe we both do it, and go play with Gemini Pro for a bit, just to see, because I feel like I need to broaden my horizons. One more thing before we end today: I wasn't planning on providing a tip, even though it's something we do, but if possible, I'd like to direct our listeners, if they're interested in the history of technology and code, but also how it ties into AI, to a really interesting podcast episode I listened to.
I don't always plug other podcast episodes, but Lex Fridman, who does some really interesting interviews, interviewed kind of a personal hero of mine, David Plummer. I don't know if you know who David Plummer is, but he was one of the early Microsoft guys; he was hired into Microsoft and worked on Windows 95, the early days. David Plummer is actually from Regina, Saskatchewan, but he's kind of a rock star in his field.
And one of the things that's really interesting is that he's a C guy, an old-school programmer, but he's doing this project that requires him to do a lot of Python. So he's already a programmer, he already has this foundational knowledge, but he's not super in-depth with Python. He's like, I can read Python and understand the syntax just by understanding another language expertly, but writing it whole cloth is a different story.
And so he was asked if he's used these AI tools, and he, as an expert programmer, says he's been vibe coding his way through this whole project in Python and then editing it and putting it together. That's been his use case for teaching himself this other language, already coming from a strong base as a professional programmer. I remember you were talking about vibe coding before, and it was really interesting.
I just thought that was a really interesting use case: you're already good at something, but you need to learn a different syntax or a different language. Maybe this applies to more than just vibe coding.
So you totally vibe code your way through it, but you use that as, like, a teacher.
Anyways, that's my tidbit for the week that I thought of.
Is there anything else that you wanted to throw in before we end here today?
[00:40:34] Speaker B: I don't think so. I think we've covered enough. Hopefully it's not too much AI kind of stuff, but yeah, we're trying.
[00:40:42] Speaker A: Well, next week we're going to be talking about the Apple event, so I think.
[00:40:46] Speaker B: Yeah, exactly.
[00:40:46] Speaker A: There'll be an AI break.
[00:40:48] Speaker B: Yeah, for sure.
[00:40:50] Speaker A: Okay, well that sounds great. And that's probably a good place to end for today.
[00:40:54] Speaker B: All right, awesome. Until next time.
[00:40:56] Speaker A: Take care. Ciao.