Episode Transcript
[00:00:08] Speaker A: Welcome to Examining, a technology focused podcast that dives deep.
I'm Eric Christiansen.
[00:00:16] Speaker B: And I'm Chris Hans.
[00:00:23] Speaker A: And welcome to another episode of the Existence Examining podcast, the technology focused podcast that dives deep. Good afternoon, Mr. Hans. How are you today?
[00:00:34] Speaker B: I'm pretty good. I didn't know we were doing the formalities, Mr. Christiansen.
[00:00:39] Speaker A: Professor Christiansen.
[00:00:41] Speaker B: I'm just. Okay.
My apologies.
[00:00:46] Speaker A: So we have kind of a large slate of articles
[00:00:53] Speaker B: today, although I, I don't know how long it's actually going to take.
I mean, I don't think we need to go do a super huge deep dive. But yeah, there's kind of, there's, there's
[00:01:05] Speaker A: somewhat of a trend here which is kind of, I think it's all about kind of like walking things back. We're talking about Microsoft walking back changes to Windows. We're talking about, you know, kind of walking back some of the hype around AI.
That's how I'm reading into this Anthropic article. Yeah, kind of walking back some of the tech layoffs, at least from the AI side, because of course OpenAI is going to expand its workforce. We're going to talk about that.
And then Nvidia has to walk back the brutal backlash to DLSS 5, and I'll explain what DLSS upscaling is. And then we're going to talk about the advantage of 50-year-olds, so we're going to walk back the ageist comments that we've made in the past. No, we don't make ageist comments; I think we're actually quite fair. And then we're going to have a discussion on the state of ChatGPT versus Claude, based on, I guess, my experience. This is kind of our tip; our tips have been intermittent lately, but that's okay.
We have tips when we have them, we'll put it that way.
But so, if you don't mind, I'll kick it off with this article. I'm a subscriber. Paul Thurrott is one of my favorite tech journalists.
It's Thurrott.com for those listening. So it's T H U R R O T T dot com.
His website is free, but he has a bunch of articles for premium subscribers. You know, I'm not a huge journalistic subscriber. I still get Wired magazine mailed to me.
That's the only one. I like getting mail.
I don't know why. I don't like emails, but I like physical mail. In fact, I've been unsubscribing from all sorts of email-related stuff. Or I dump a million articles into AI and say, you know, prioritize this for me, I'll never be able to read all of these, there's just too much information. But I like getting physical mail. The only thing that I subscribe to journalistically other than that is Paul Thurrott. It's an annual subscription. He has a bunch of premium articles, and he's covered Microsoft as a specialty, particularly Windows and the consumer side, for like 35 years.
You know, between him and Mary Jo Foley, who, I don't know where she works now, she might actually be working at Microsoft, I'm not sure, those are the two real Microsoft tech news experts.
And we've talked about Microsoft a lot in the last few years because, with all the money they've invested in AI, they've had huge changes, and it really started under Steve Ballmer. I don't think Steve Ballmer gets enough credit, because he's really the person who put Microsoft on the cloud provider track. It was not Satya Nadella. I mean, Satya Nadella has of course continued that and done a good job expanding data centers and Microsoft Azure.
But Steve Ballmer was the one who pivoted the company, and so it's changed a lot. And I think what's interesting is that, according to Paul Thurrott, it's not even really a tech company anymore. He describes it as more of a data center money management company that happens to make tech products at this point. But on the consumer side, we all think of Microsoft as Windows and Office, and that's not the dominant part of their business anymore. So it's interesting that that's how we think of Microsoft, but that's actually not where they make most of their money.
[00:04:48] Speaker B: Yeah, yeah.
[00:04:49] Speaker A: And I think Apple's moving in that direction. I think Google has moved in that direction long ago.
And so we're going to talk a little bit about a premium article. I can link to it for people, but you're going to hit a paywall. If you like what I'm talking about, though, his subscription cost is not high. It's like, I don't know, 50, 60 bucks US a year or something like that.
And he did a really interesting article called Trust but Verify.
So the Windows side of things has not gone well in recent years. Windows 10 really introduced what Cory Doctorow called enshittification: this idea of how platforms get worse over time and kind of squeeze their customers, their business customers.
Windows 10, for instance, was the first to have non-opt-out telemetry data, which is a privacy concern. And all these things about Windows 10 that were a problem have of course continued with Windows 11. And Windows 11 was what, 2021? So it's not super new anymore.
But one thing that's interesting is that there was a post that came out from Microsoft by Pavan Davuluri. I think he is one of the heads, if not the head, of Windows. And what they've signaled is that over the next year they're really going to refocus Windows around reliability.
And this is an interesting thing for Microsoft. The reason I bring this up, and why it's important, is that, yeah, you and I are Mac users at home. But if you're talking about institutions, very few institutions are rolling out Linux or Mac. It's Windows, right? It's Dell computers on lease, mass deployment. Windows is designed for institutional configurability. So when Windows has a problem, it's a big deal, because billions of people use Windows. So they're going to, apparently, improve reliability.
And Paul Thurrott's reaction is not rejection, but cautious skepticism slash criticism, because Microsoft has a long history of making Windows worse through unwanted changes.
So the list he provides: like I said, forced telemetry basically remains untouched; the bundled junk, recommendations and suggestions; monthly updates that are increasingly chaotic and cause a lot of reboots; forced Microsoft accounts for sign-ins. It's really tough now to sign into Windows without creating a Microsoft account.
Now, there are advantages to creating a Microsoft account, and I don't think it's going to dramatically violate your privacy to have one. In fact, if you do have a Microsoft account with Windows 10, we've talked about that, you'll actually get a free year of extended support, because then your computer settings, not your data but your settings, are backed up, and that's how they're able to maintain that security, right?
Edge promotion everywhere. You can set your default browser to whatever you want, but if you click on a news article in the Microsoft News feed, it always opens Edge. It's always trying to force you to adopt Edge as its browser of choice. Which is dangerous for Microsoft, of course, because they had antitrust action way back, partly as a result of Internet Explorer trying to be dominant, and they were considered a monopoly.
Strict hardware requirements for Windows 11 that are somewhat arbitrary. And I would agree with that, though there are also security reasons why they have those hardware requirements, so they're not totally arbitrary. And things like OneDrive constantly nagging you to back up your files. So these are consistent, persistent UX problems with Windows, and Microsoft has acknowledged some of them. They're apparently signaling that they're going to do this philosophical reset where Windows gets smoother and some of these things get fixed. Now, I don't expect they're going to roll everything back. For instance, I don't think they're going to change things like the hardware requirements for Windows 11.
I'd be amazed if they make it easier to sign in without a Microsoft account, because of course they're a services company, so forcing accounts works in their best interest. So maybe they'll keep that, but they won't try to force Microsoft Office 365 down your throat all the time.
And of course, there's all the Copilot stuff.
And the problem that Microsoft is having with Copilot is that there's like 15 things that all have the name Copilot, but they're all different.
So Copilot in image generation, and Copilot 365, and Copilot on the web: Microsoft has branded everything the same, but they all have different functionality. That was a huge mistake on their part, because Copilot is good if you're using the Copilot that's paired with ChatGPT, but not everything called Copilot is ChatGPT, and it's a mess. So Paul basically says there's a huge trust issue with Microsoft, because they've neglected the consumer side of their business for so long while really trying to squeeze as much revenue out of each user as possible.
And so what do you think about this?
They've done things like this in the past, but I don't remember a recent time where they've really said they're going to double down on performance and smoothness and stuff like that. Usually Microsoft is famous for not communicating with customers.
[00:10:28] Speaker B: No, that's true.
I don't think they really own up or say that they're gonna go and take care of the issues. Other than, I suppose, when we went from Windows 8 and it just rolled over to 8.1 and then 10.
Right. I mean that was something where they were gonna try to completely redesign Windows
[00:10:51] Speaker A: altogether and of course I would normally. Sorry, keep going.
[00:10:55] Speaker B: No worries. Yeah. One thing I just kind of came to mind, you know, from the security updates perspective. One thing that I found again, you know, I'm on a MacBook right now and then also my phone. But this was the first time ever Apple has done this where it actually updated in the background on its own.
Their security for the os.
[00:11:19] Speaker A: I feel like that's happened before, but it's rare. It usually requires a reboot.
[00:11:23] Speaker B: Yeah, yeah, exactly. So yeah, I don't know. Maybe it's something that they're going to start just pushing going forward. But it makes sense. I mean, there are security issues when you don't update things. And that's why, Microsoft-wise, they were encouraging people to go from 10 to 11, so that they can make sure it is secure and nobody has any kind of issues that way.
[00:11:54] Speaker A: Yeah. And one thing that also occurred to me, I don't know if this is true, but I wonder if Microsoft is feeling a little bit more of a pinch from its competitors on the consumer side.
I don't know how much this is actually true.
You know, I would say that if we were talking about, you know, Mac OS maybe five or six years ago, we can make this case.
I think, though, that regrettably macOS suffers from many of the same problems that Windows does, in terms of nagging you, not as bad, to subscribe to its services, iCloud backups. Not nearly as bad as OneDrive and stuff. But because they have a larger services business, they are prioritizing that more. Though I will say, if you turn that stuff off and say no, it typically doesn't ask you again, and it doesn't automatically back things up the way OneDrive does without my consent.
Microsoft is still a step ahead on that.
But, you know, there's the introduction of Liquid Glass, which doesn't bother me, but I don't love it. And Apple's had other user interface and design blunders. When iOS 7 came out, everything went super flat; skeuomorphism was also a problem. So this is like the early days of a new design language. I think it'll settle out over time and it'll be fine, but it's different and people have to learn it. So I don't know that they're feeling the pinch from Apple, maybe a little bit. I do wonder if they're feeling the pinch from Linux.
I don't know if we talked about this a while back, but after Windows 10's end of support, some Linux distributions saw the most downloads ever, right? Because people have this perfectly good hardware. It doesn't have to be as old as my old 2013 Mac; that one actually is quite old, 13 years old now, so it's kind of on its way out.
But, you know, computers from 2017, maybe 2018, either just missed the hardware requirement, or they do meet the requirements for Windows 11 but they're on the early generations of the processors that support those security features. Like, I forget what they're called off the top of my head. So maybe they're a little sluggish.
On that hardware, performance on Linux is far superior to the other operating systems. And so I wonder sometimes if.
Because open source tools, maybe not just operating systems, but like LibreOffice. Do I like it as much as Office? No, but it's a lot better than it used to be.
These things are not as difficult to install anymore. They're a lot more compatible than they used to be.
And in a world where most people are using, you know, web email, web apps, there's not as much that you leave behind, you know, going to another operating system.
I mean, obviously, like if I, you know, there's some things that would be difficult for me to totally translate to Linux, but it would be much more doable today than it was, let's say, 10 years ago. And I wonder sometimes if they're actually losing users to this.
[00:15:24] Speaker B: I mean, it's hard to say. I saw something just recently, further to our last episode where we looked at what the market share is for laptops. But still, Windows is the dominant one. I mean, Lenovo was, I think, number one, and then it was HP, Dell, then you had, you know, Apple, but then there's a whole slew of other Windows-based computers. So like you say, from the professional workplace standpoint, whatever type of organization, there's a high likelihood you're going to be using Windows.
[00:16:08] Speaker A: Yeah, yeah. So I mean it's just interesting that they're focusing so much on the consumer side. We'll see if they actually come through with it. I think it's an interesting thing to keep an eye on.
There are you know, some advantages to running Windows but it has gotten more and more frustrating over the years and I'd be curious to see if they come through on, you know, making it easier to unbundle kind of crapware from their operating system and stuff like that.
[00:16:39] Speaker B: Yeah, yeah, absolutely.
[00:16:41] Speaker A: Did you want to talk about this Anthropic article called, what is it, what 81,000 people want from AI?
[00:16:48] Speaker B: Yeah, so this was a study that Anthropic did about what people are actually valuing from AI. They interviewed over 80,000 people from 159 different countries and 70 languages, and a lot of the key themes coming out of it were that people wanted more control over their daily life.
Less administrative work, more time, better learning, more financial room.
So, you know, it does kind of support them with their creativity, gives access to expertise that they may not otherwise have. It's quite an interesting study. If you go and take a look at the website, and I'd encourage everybody to look at it, there were all sorts of quotes from different people. Some of the examples that I took away: for instance, there was a software engineer in the US who used AI to cut a 173-day process down to three days.
And then there was another one that I found interesting: there was a mute worker in Ukraine who used it to build a text-to-speech bot.
And another one that I liked: there was a freelancer in the United States who used Claude to help connect the dots after nine years of misdiagnosis of health-related issues.
And so yeah, it was just really interesting kind of study that they've done.
I mean, there are trade-offs, obviously. People are concerned, or worried; there's this kind of caution that people are describing, whether about losing the ability to go and think, or, if they've used it for emotional support, that there's going to be too much dependence on that.
Some people brought up how if it does save them time for work, that maybe some of that extra time is just going to be filled with more work.
So, you know, one of the interesting things with this report is that people are not judging AI in terms of speed alone. They are judging whether their life is getting better in terms of their work. And I think that's really important, especially if you are going to go and try to build something AI-related, or if you're looking to implement or buy AI, or you're setting policies for AI. These are some of the things that you should be considering.
[00:19:45] Speaker A: I see. Yeah. One of the things that struck me about this is that there seems to be a trend where people are really using these tools.
I like it. It's less spectacle, more practical. People see it as kind of infrastructural, right? They're seeing it more like the Internet, as a foundational tool to really connect dots, see bigger pictures that are really tough to do manually, or solve bottlenecks. And that's certainly been my experience. When I tell people I'm a heavy AI user, I think they have this assumption that I'm getting it to go and do everything for me, which is not the case. Because there's lots of things, like writing, which I consider thinking. And you and I have discussed this before.
I like to do my own writing because it's my own voice. I don't want it to be perfect. That's, that's not me. Right.
Not being perfect is part of the brand. But, you know, I gather tons of news stories, and I struggle with threading disparate ideas together. Same thing when we're doing these podcasts: what is connected through all of these stories that we found, to make a show title? Do you have some ideas for me? These are bottlenecks. These are looking at big pictures and giving me some options to work with. And sometimes it's totally wrong, but other times it comes up with really interesting stuff. And so the idea of the misdiagnosis, well, that's a perfect example.
And specialization is valuable, but it also creates a problem, because then it means everybody has an opinion. And if you're a specialist, your specialization is your hammer, and everything looks like a nail that fits that hammer. So how do you draw connections between different disciplines and different opinions in a less biased way? AI is not totally unbiased; that's obviously based on how it's trained. But I think there's evidence that there's less bias in some cases than when people do it.
[00:21:52] Speaker B: Yeah, well. And again, this is where I think this study has helped: people want to go and make sure that the AI is useful for them, it's accessible, it's
helping with their actual human needs, and not just some kind of impressive demos that they're trying to put together. And so, like you say, those bottlenecks: if there are things that you're kind of stuck with, where going and doing, let's say, a Google search is how we would have tried to find or solve some sort of problem, this is something where, and again, like you say, it could be wrong. But so can Google, generally speaking.
[00:22:32] Speaker A: I mean, so can articles you find. Everything can be wrong, so.
[00:22:35] Speaker B: Yeah, yeah, exactly. But usually it's not bad. I mean, I find it is quite useful when you're trying to go and figure out even just the most minor things and not have to again waste time combing through, like Google searches.
[00:22:53] Speaker A: Yeah. One of the things that I've found really interesting about AI is this incredible focus on what it does get wrong. It's unbelievable. It's like, oh yeah, eat one small rock per day, or whatever Google Bard, their AI, had said about keeping up your vitamins or something like that.
And, you know, there's.
If you look for problems, you'll find them.
You know, if I, if I go through my house with a magnifying glass, I will find cracks in drywall and I will find dings.
I'll find these problems. I will, I will find them. People are really good at finding problems, and that's what they're fixated on.
But that's not really the point. Right.
I would say that the AI overwhelmingly gives me more things that are correct than wrong.
That doesn't mean I don't verify things to make sure that they're right. That's a literacy problem, and I don't think AI really changes that. But it can certainly spark inspiration to look in other places. You know, one of the funny things that I've noticed is that people have said, I just don't see how it could be useful for me; these are the kinds of things I do, and I don't know how AI would help me. And I'm like, well, why don't you ask the AI? Explain what you do and ask: is there anything you could do that would be beneficial?
It's kind of a super meta question, right? But I think where people find value in these tools is by asking the tools. Go to the tool and say: I'm not an AI user, this is not my area of expertise, this is what I do, and I don't know how you'd really be able to help me.
Give me some ideas and ask some follow up questions. It'll actually, it'll do that.
You don't know how to prompt it to do these things? Well, why don't you ask it how it should be prompted. You know, it's actually funny: you can ask the tool about the tool, and get information for the next chat that you start. And if you have memory turned on and tell it to remember things, it starts to build this lexicon. I think it's a different way of thinking about information seeking.
[00:24:56] Speaker B: Well, I mean, you bring up a good point. One thing that I've now observed, just going and doing some observations and this research, is that a lot of the people who have used AI are not using these large language models how they're meant to be used, in terms of developing the prompts and so on. They're almost treating it like Google search and doing these little basic prompts. And I think that suggestion you made: if you don't know how to prompt something, then depending on whatever your end task or use case is, you could go and develop a better prompt just by asking the platform.
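[Editor's aside: the "ask the platform for a better prompt" idea described here can be sketched in a few lines of Python. This is only an illustrative sketch, not anything from the episode: the wrapper wording, the example task, and the model name are assumptions; the API call uses the OpenAI Python SDK's standard chat-completions interface.]

```python
# Sketch of meta-prompting: ask the model to draft the prompt for you.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is an example.

def build_meta_prompt(task_description: str) -> str:
    """Wrap a plain-language description of your work in a request for
    the model to draft a better prompt and to ask follow-up questions."""
    return (
        "I'm not an experienced AI user, and this is not my area of "
        "expertise. Here is what I do:\n"
        f"{task_description}\n\n"
        "Write a detailed prompt I could paste into a chatbot to get "
        "useful help with this work, and list any follow-up questions "
        "you would need answered first."
    )

def ask_for_prompt(task_description: str, model: str = "gpt-4o-mini") -> str:
    """Send the meta-prompt and return the model's suggested prompt."""
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": build_meta_prompt(task_description)}],
    )
    return response.choices[0].message.content
```

The same wrapper text works verbatim when pasted into a chat window; the API call just automates it.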
[00:25:40] Speaker A: And the platforms work better if you include custom instructions. Both Claude and ChatGPT, which are my primary ones, allow for that.
And I have a few basic things, you know, like "be concise." I had not updated the OpenAI instructions for a long time. But I do have memory turned on, and I save a lot of the chats that are not confidential; for confidential things I'll use temporary chats.
I archive a lot of stuff just so it keeps that memory.
But I actually said, based on your memory of me and the things I've been asking, do you have recommendations for how I would update the custom instructions? Here's what I currently have. And it was: yeah, actually, here's a bunch of things that you ask me to do all the time that you may want to put in the instructions. You don't have to.
That's really helpful. Right. So, I mean, it knows what I'm asking. I don't. I can't keep track of what I've asked it. It can.
And so in that way, it is kind of more of a conversational, like a friend, but with a much better memory. Right.
[00:26:43] Speaker B: So, yeah, for sure.
[00:26:45] Speaker A: It doesn't surprise me that people are using it for saving time and connecting the dots. I mean, that's certainly what I've found, you know, oh, here's 50 articles I've collected in Pinboard. I'll never get a chance to read all of these. You know, based on what you know about me, where should I prioritize my time?
Here's your top five. These other ones are iffy.
It could be wrong.
You know, a lot of the time it's right on the money.
You're right. I'm not interested in that.
This is a much better suggestion than what I would have picked. So I think maybe people are embarrassed to admit that it maybe knows things about you that you don't know about yourself. Not because it's thinking, but because it remembers what you've asked over and over again. It has a better memory, right?
[00:27:33] Speaker B: Yeah. I haven't used it for emotional support yet, so I can't say anything on that note.
[00:27:38] Speaker A: But, you know, Pink has a great video on that. It's like brutal questions that you can ask ChatGPT, like: where am I holding back, and where am I lying to myself? And he has all these prompts you can use, and it's just brutal, the answers it gives you. I tried it.
It's pretty interesting.
[00:27:58] Speaker B: This might be a good segue to the. The next article.
[00:28:02] Speaker A: Yeah. I mean, you have looked at this perhaps more closely than I have, but again, it's kind of walking back. We've had all these tech layoffs.
I don't think so much because of AI automation. I think it's because there was huge hiring during the pandemic, and they overhired, and now they're slowly letting people go. I think that's more likely.
I don't think there's a huge productivity gain yet from AI. But Engadget and others, and the Financial Times, are reporting that OpenAI plans to nearly double its workforce. They plan to grow the workforce from about 4,500 to about 8,000 by the end of this year, with hiring across product, engineering, research, sales, and technical ambassadorship.
That sounds like something you and I could do, helping customers adopt its tools.
They say they're expanding their office space in San Francisco.
I think they're trying to make more inroads into enterprise, and that's something that Anthropic has done really well, probably better. Though they have that big government contract stateside, so I guess they've won on that front.
The Financial Times said that more than 90% of their 900 million monthly active users (that's so many monthly active users) are still on the free account.
[00:29:32] Speaker B: Yeah.
[00:29:32] Speaker A: So you know, less than 10% pay for ChatGPT Plus.
[00:29:37] Speaker B: Well, in terms of that, I guess that was one thing: they're rolling out the ads to the free accounts so that they can offset that.
[00:29:46] Speaker A: I mean, and the lower-end account. I think there's a more basic-level account now. Is that right?
[00:29:51] Speaker B: I. I'm not sure. I haven't looked at the actual account.
[00:29:54] Speaker A: I'll look it up while you're talking.
[00:29:56] Speaker B: But, you know, I think the big thing here, it's just funny, like you say: there are all these companies doing massive layoffs. I mean, we talked about it, I believe it was last episode or the one before, like Block laying off half of its workforce. There are all these companies laying off thousands of people, and they're citing the reason as AI and increasing productivity and blah, blah, blah. But at the end of it, what I find kind of interesting and somewhat ironic is that the AI companies, like OpenAI, are actually increasing the number of people they need. And so again, it could be because they overhired, and I think a lot of it was maybe during the pandemic.
They probably misanticipated the amount of growth or demand that they were going to have. And so they hired all these people, and then, especially trying to get people back into the office, the amount of productivity or the output wasn't
[00:31:02] Speaker A: there or desks available.
[00:31:05] Speaker B: Yeah, well and that's, that's a good point too.
[00:31:07] Speaker A: I mean, I think the Government of Canada did that, right? Because they had, of course, remote work, and then you had to go back to the office, but there wasn't enough office space for people to return to. So they had to show up, and there was nowhere for them to work. And you can't just bring in your own device and plug it in if it's a secure network. It doesn't really work like that.
So people were just standing around doing nothing, which is not productive at all.
It's kind of incredible, actually.
[00:31:29] Speaker B: So.
But yeah, again, I think if you look at the, the messages that even though you have this technology, this, you know, AI is a tool just like any other tool, and you do need people to go and, you know, execute and put it into practice. So.
[00:31:49] Speaker A: Yeah, yeah, it's, it's really interesting.
Yeah, I don't think.
Well, again, like I said, with a lot of the layoffs, they can say it's AI and they can kind of blame it on that, but I don't know that that's true. That's the reason Microsoft and Amazon give for laying off lots of people, but I'm not so sure that's the reason.
[00:32:22] Speaker B: Yeah, there was a lot of hiring.
[00:32:24] Speaker A: I don't think so. I'm not saying that AI won't automate things in the long run. I just don't believe it's had such a productivity gain yet that they can lay off 10% of their workforce. I think it's more that
there was a lot of hiring done during the pandemic, maybe overhiring, and then those things are kind of coming home to roost. And of course they can blame it on AI, or they can try to spin it as a productivity gain even though they're losing all those people, because AI is here now. There's a bunch of reasons why they may claim that, like Block laying off many of its people.
You know, it could also signal problems in the economy that nobody wants to talk about too. Right, so.
[00:33:02] Speaker B: Well, exactly right. Like it's, it's easier to go and blame AI as opposed to just talking about, I mean, if your share price is going down, this is an easy way to fix it if you get rid of a whole bunch of people. Because now there's hard costs associated with those.
[00:33:19] Speaker A: Exactly.
[00:33:19] Speaker B: You know, the expenses aren't there. I think the other thing, although I still think from an accounting standpoint, I. It doesn't really make sense to, to me, but, you know, it's, it's a matter of going and investing in the capital infrastructure for AI and so they're kind of, you know, allocating the operational money toward these capital.
[00:33:39] Speaker A: Yeah, they can't afford to keep all these people if they're spending a hundred billion dollars on data centers, right?
[00:33:44] Speaker B: Yeah. And so actually, on that note too, you know, I can't remember if we mentioned this last week, but I was having a discussion with one of my PhD classmates about this, and we've talked about this before, and this is a little bit of a tangent, but you know, the spending is getting into many billions, hundreds of billions, it might even be close to a trillion dollars. You know, I think it's
[00:34:07] Speaker A: $900 billion, I think, is what the big companies have committed to spending on AI infrastructure over the next few years. Some ridiculously large amount of money.
[00:34:14] Speaker B: Like, it's crazy, right? Now, look at what's happened in the past, right? And a lot of it, like you're saying, even gets walked back or what have you. But you know, I recall back in the day, IBM would have their mainframe computers and we would use terminals, and then somehow the personal computer got created and we moved away from the mainframes. Then we've kind of been switching back with this cloud infrastructure, where you can offload some of the, you know, files and other things off to the cloud. But imagine if these small language models, and this is what we've been kind of thinking is going to happen, if your personal device or these smaller devices can go and do that, what's going to happen with all these data centers?
And I don't know, I mean this is like, is it the best approach that we're taking?
[00:35:17] Speaker A: Yeah, so I'm kind of of two minds about that. So there's kind of two things happening. So for people listening, depending on what model you use, you know, there's a thinking reasoning model versus an instant model. So if you go into ChatGPT, and I'll use that as an example since that's the most popular platform by far, I think it still has like 60 to 80% market share of all these chatbots. Right.
There's a lot more energy used by the high-end models, like Claude Opus versus Sonnet. Opus uses much more, so they don't make a lot of money on this stuff. So there's a financial incentive. And, you know, a lot of the models are really good if you're uploading documentation and doing complex things, analysis of text. It doesn't have to be data analysis, but, well, text is data, it's qualitative. Right.
If you're not doing that, you don't always need those models, which is why they're trying to promote the default being automatic. It'll automatically route you to a model, and overwhelmingly I think it's going to send you to the cheaper model because they're going to lose less money on it. But with regards to the data center build-out, I think there are two pieces to it. So on one hand, they're going to encourage people to fall back to smaller models in the cloud, or use models offline, which is what I've been working with a lot, just to see. For basic tasks, basic summarization, you don't need these frontier models. But at the same time, I think there's also going to be huge demand just from the number of users that are going to be using this over time. Right. Like even the percentage of people who regularly interact with AI. I don't remember the article, but we talked about it on a previous episode.
It's a pretty small percentage of people worldwide. Globally it's like 30%. Of course more in the United States and in the West, but even still, a huge percentage of people, over 50% in Canada or something like that, are not using this stuff at all. And so imagine when it becomes like everybody is using it. Even if people are using it for really simple things with the smallest models in the cloud, that's a lot of potential users coming down the pike. I think that's why they're building it out.
But they want to ping the servers in an efficient way and I think that's, that's tough to do.
So that's probably where they're making the bet. Also consider that by spending the money and owning the infrastructure, becoming the, you know, the AT&T, the utility company.
And you know, Paul Thurrott pointed this out, companies like Microsoft can kind of win by losing.
So Copilot, it's kind of a mess. There's nothing wrong with Copilot using the ChatGPT models. It works really well.
But they have lots of different copilots and it's not clear which ones use which models. And so they have a major communications branding problem.
And that's one of the reasons why they don't have the market share you'd think they would have. They've just been supplanted by Google. Right. They're just not top of mind. But if they own the infrastructure and the cloud data centers where they can host their competitors, they can kind of win by losing, right?
[00:38:41] Speaker B: Yeah, yeah, exactly.
[00:38:43] Speaker A: So that's how I see it.
We'll see if it pans out.
I think there's probably an enormous amount of pressure from these companies internally to stay relevant long term.
[00:38:58] Speaker B: Yeah, no, for sure.
So you're going to cover the next one here?
[00:39:04] Speaker A: Yeah. So there's on this trend that we're covering of kind of walking things back.
They haven't walked anything back yet, but this is certainly a backlash.
So Nvidia had a presentation.
I don't know if it's Nvidia or Nvidia. I've always called it Nvidia. But anyways, maybe it's Nvidia.
They have, you know, historically been a graphics card company, but it turns out graphics cards work really well for AI. So they've kind of pivoted their whole business to AI. So much so that they're really the only people in this whole Ponzi scheme of money moving around that actually make any money, because, well, I hate to say it, but, you know, that's true. There's this weird thing going around where Microsoft says, we'll buy a billion dollars a year of graphics cards, but then we get a billion dollars here, and suddenly a billion dollars goes out, one comes in, and they say we made 2 billion. Like, where do these numbers come from? It doesn't make a lot of sense. All this money is just circulating between the big companies.
But Nvidia is the only one where the money seems to be flowing in and staying there because they actually sell a real product, which is a physical graphics card. And they have a bunch of graphics cards that are not so much for gaming but for AI.
And so there were some rumors that they weren't going to do much on the gaming side, because even though they're dominant, they make the best graphics cards from a performance standpoint, their top-of-the-line cards can cost a couple of thousand bucks apiece.
And so they had kind of indicated earlier this year that they weren't going to do much on the consumer graphics front because they're not making as much money on it. The revenue's not there. Why not focus on AI?
But they did unveil DLSS 5. "Backlash to Nvidia's DLSS 5 generative AI glow-ups," as Ars Technica calls it. So DLSS is an upscaling technology.
So when you play a video game, let's say you want to play it at 4K. Some games will run natively at 4K, meaning the game actually renders at 4K.
That's very difficult to do, especially for the most demanding games. You might get native 4K in an older game, or one that's been upscaled or something like that. But a lot of these games run at, I don't know, 1440p or 1080p internally, and there's kind of a machine learning algorithm that upscales it to 4K, right? It's kind of like, I've been doing some photo touch-ups. I have some old photos that are kind of worn and tattered, one of those of my late cat.
And so I uploaded them to AI and asked it, you know, clean this photo up and upscale it to 4K. Super handy, right? So it's similar, and DLSS is Nvidia's technology for doing it. Of course, AMD, their competitor, who's a CPU and GPU maker, has something called FSR.
That's their version of it. But DLSS is really impressive.
But this DLSS 5 makes a much more aggressive use of generative AI to kind of alter the visual output of the game. So it's not just upping the resolution, it's actually making changes to the artistic vision, which is new. We haven't seen this before. And so there's this huge backlash, because it's changing kind of the look, the mood, and much of the artistic intent of some of these games. And again, people are starting to call this AI slop, and I don't think they're wrong in this case, even though I usually push back on that term.
So, for instance, if you look at the Ars Technica article, they have some images of what the upscaling graphics is doing.
So the image at the top of the screen is the one that's kind of gone viral, but there are others too. And so there's a Resident. What's the game called? I think it's Resident Evil Requiem. It's the new Resident Evil game. It's incredible that Resident Evil has lasted this long as a game franchise. I mean, that's been around since I was a little kid. But horror games, they continue to go. So there's a main character.
I don't play Resident Evil, so I don't know the character's name.
And there's a scene where she's lost her parents, or she's wandering the streets. And the left side, you know, shows what she's supposed to look like. And she's supposed to look a little haggard in the game, because she's been through some tough times. You know, it's a dark, kind of gritty mood and atmosphere in the game.
You know, she's not supposed to be wearing kind of any makeup or anything like that.
And then they run the game through DLSS 5 with this upscaling plus generative AI, and it not only upscales, it kind of makes her. You know how when you generate images of people with AI, they all look beautiful and perfect? It kind of does that with the game. So now her hair, which is, you know, blonde, now it's blonde with like highlights. Now she has eyeshadow. It looks like she's had a nose job because the bridge of her nose is thinner. Her lips are fuller. It looks like she's been given a beauty makeover. Right.
And the background is also interesting. So in the original screenshot, again, the new version is much clearer because they've upscaled the resolution. But the background before is supposed to be gritty, foggy, wet. It's much brighter and cleaner.
It doesn't have that feel at all.
The sky is brighter, the lighting is very different.
And so I think a lot of people are really concerned about this because they make a game and these are really handcrafted. I mean, people may use AI to help generate textures and do things, but there's a lot of hand artistry that goes into game development.
They roll out this tool, which is optional, you can turn it off, that, you know, kind of turns the characters into just generic AI-looking people.
[00:45:04] Speaker B: I mean, the one thing that I saw, and I. I didn't look into it much, but I just saw some people posting that are friends of mine that are gamers. And apparently there's, you know, a lot of these video games, they actually take like motion capture of the actual person and get a whole bunch of like 3D.
[00:45:20] Speaker A: And they probably signed off on that, to look like them, right?
[00:45:24] Speaker B: Yeah.
[00:45:24] Speaker A: Like they sign contracts for that.
[00:45:27] Speaker B: Yeah. And then I think it was. Oh, it was Harrison Ford. I think they did it.
[00:45:30] Speaker A: Oh my gosh. Yeah.
[00:45:32] Speaker B: And. And the person like after doing all of this, you know, the scans and everything of him and when they ran
[00:45:42] Speaker A: The Indiana Jones game, which I played, which was terrific. The game looks just like Harrison Ford, and whoever did his voice, because it wasn't him in the game, sounds just like him. They did a really good job.
[00:45:53] Speaker B: But with this DLSS5, it didn't look like him. And so, so that was yeah, so that was kind of interesting. Like, again, like, how you're saying that it's.
It's like changing the actual image. So.
Yeah, it's interesting anyways.
[00:46:11] Speaker A: Yeah, it's somewhat concerning.
I mean, in some ways they could be tweaked or something like that. And there are all these, like, memes coming out now.
But, you know, I'm looking at some of these. I think one of the examples was Starfield.
Yeah, it kind of makes everyone look like a K-pop star. That's kind of how I see it. I mean, everyone looks very pretty with DLSS 5 turned on.
And it's just. It's really funny. And some of the, you know, some of the results are well done where it really. It sticks to the vision of the artistry and others are not. Right.
And so I don't know what to say about it, but there's some terrific memes that are obviously jokes, but.
Yeah, I don't know it well.
[00:47:09] Speaker B: I think it goes back to, you know, what you mentioned, that we don't want our writing to be perfect. And here it's, you know, it's like, okay, well, the images don't need to be perfect either, especially for that artistic vision.
[00:47:28] Speaker A: Well, it's also just no fun, right? Like, it's no fun to have everybody look the same. It's no fun to have all writing sound the same. I've still, you know, I hate to say it, but, like, I don't know about you, but my experience, it's interesting watching the evolution of a platform like LinkedIn.
LinkedIn didn't use to have much of a feed at all. You could just see your network. It was very limited. It reminds me a lot of Facebook now, and it has that feel because it's algorithmically curated.
But it's just.
I can tell, like, a lot of the posts sound the same.
They have a very similar look and feel to them.
They have very similar intros, outros. So clearly people are using AI for social posts. I can tell when it's written by hand.
I'm confident that I can tell the difference. Now, in full transparency, for our social posts, I do get AI to help, but I've given the AI all sorts of examples of the styles. We have a style for our posts that we designed by hand, and so I feed those examples into it and then I go and heavily edit it after the fact. So I'm just doing it to get it going, because I'm not here trying to write War and Peace. I'm trying to get some communication out, and then I go and change it with my own style after. But it's following the guidelines that I've given it, which I think is very different.
And I, I've noticed that a lot, a lot of things are starting to sound generic. It kind of speaks to the idea of the dead Internet theory. If everything is AI generated, you know, there's no human connection anymore.
[00:49:12] Speaker B: Totally.
[00:49:13] Speaker A: So it's, it's unfortunate. I guess this is why people call this stuff AI slop. But it's, it's kind of funny.
Did you want to talk a little bit about your generation?
[00:49:25] Speaker B: My generation. You know, I think I read that even that word, slop, was the word of the year for 2025.
[00:49:36] Speaker A: Yeah, I don't know if I agree that it should be word of the year, but it's certainly funny.
[00:49:40] Speaker B: Yeah. But yeah, so this last article, this was about the hidden advantage of being over 50 in the age of AI. The author of this article talked about building his first website in 1995. And then he's been watching some young founders going and developing apps, and he thought, you know, maybe I've missed my moment or what have you. And.
But after starting to use the tools, you know, he reached a different conclusion. And it's like, yeah, the AI does help get things done more easily, but the edge comes when you can use your own judgment. And so he was talking about his actual practical experience, where he uses the AI to stress-test a business idea or find blind spots in a launch plan, or uses the AI to challenge his assumptions or flesh out an existing model. And so these prompts are only useful if you actually know the business well enough that you can push back on the output and, you know, critique it. And so this is where this technology, the AI wave, may actually reward experienced people more than people expect. And yeah, sure, the fast hands do help, you know, getting access to the tools and getting things outputted really fast, but it's really the clear thinking that matters. And so if you have that experience, whether it's some sort of failure you've had, customer context, mistakes that have been made over the years, those are things that take time. And so, yeah, the prompting is easy, but good judgment takes time, many years. And so this is where, and I agree with this, I think the people who are maybe in their 50s might actually be able to benefit more in this current state of affairs. It's not like you've just missed the window.
[00:52:04] Speaker A: Yeah, I think maybe the millennials, well, a certain percentage of the millennials, depending on when they're born and people who are from kind of Gen X may have an advantage here because like you said, because they might be the last generations that develop those experiences without the tools.
And I sometimes think about that with writing or literacy, like I can go through the AI and go, that's not very good. And this isn't going to come across very well as a communication.
But if you've grown up with those tools and you've never, and you've, you know, is it, are you going to be able to learn the same skills? I'm not saying that people won't be able to learn them, but I wonder if they'll be able to learn them in the same way.
[00:52:48] Speaker B: Well, and I look at it even, you know, here we're in Alberta, and over the years there have been many times, because of the boom and bust cycles, for people in the oil and gas sector. I've quite often seen over the last 20 years that there are people who are really experienced who can't even retire, because they have so much institutional, organizational knowledge, and we aren't investing as much in those skills, whether it's reservoir engineering or other types of roles. They've had the decades of experience, and so they can actually come back as consultants. And so again, I think this may be a really good opportunity if people can get comfortable with the uncomfortable, where this technology keeps changing at a rapid pace, but take the time and care to actually apply their judgment and, you know, discernment to what is coming out, test those ideas, and be able to apply them so that you're not making huge mistakes. Because again, I could imagine some of these young people who are, like, vibe coding something. It might look good for a demo.
But again, if you haven't had the, the experience, I don't know if they'll be able to scale.
And so again, that's where you'll probably need somebody at some point to help go and code and develop it so it's a robust platform.
[00:54:30] Speaker A: Yeah. Like, I don't think you can vibe code production quality stuff. Not yet.
[00:54:35] Speaker B: No, not yet.
[00:54:39] Speaker A: I can see this becoming.
Yeah. Like a problem.
People don't know the fundamentals.
Yeah, yeah. And knowing kind of where the limits of these tools are. Because there are always limits. Right. There are always edge cases, there are always things. And if everybody's using the same tools as you and you want to do something innovative, you're going to need to think.
[00:55:03] Speaker B: Yeah, totally.
[00:55:04] Speaker A: Yeah, hopefully. I hope that we do have that.
I think that's all we had for articles today.
I guess the tip slash final discussion, and this doesn't have to be a super long thing, was to talk about, I would say, the two leading platforms. I mean, I guess Gemini and Copilot could be considered leading AI platforms too, but for arbitrary reasons I'm going to deem the leading platforms Claude and OpenAI.
[00:55:36] Speaker B: Well, they're certainly getting the most media attention right now.
[00:55:39] Speaker A: Sure.
And so ChatGPT commands the biggest user base, but Claude's having a big moment, not just because of the controversy with the contract in the US government, but just, you know, people using it. And so I had actually the paid version of both of these. This is getting really expensive, but not really in the grand scheme of things. It's a drop in the bucket.
And there are certainly pros and cons. So you're mostly on ChatGPT, playing with it, which is probably good. You know, I had a question from somebody the other day. I guess the reason this came to mind is, like, which of the tools should I subscribe to?
I think the Ethan Mollick advice is good: I mean, you can switch between the tools, but unless you're hosting a podcast on technology and comparing them for a discussion point, I don't think you should have more than one.
You know, mastery of one will give you the skills that translate to another tool if you switch.
And there's. There's pros and cons to these.
I would say, and this is my experience after a couple of months of using both of them side by side, that if you're primarily doing a lot of data analysis, code, or analysis of large text documents, and you're not using imagery or as many connector apps, things like that,
Claude is probably your better bet. It has a much larger context window. You know, you can put in like 500-page documents and it'll do a much better job of analyzing them if you need it to do that.
And its writing style is much less AI-sounding. So as a writing tool, like if you work in communications or something like that, Claude is probably better. But you don't get any image generation.
There's far fewer connectors and apps that connect with it.
From a coding standpoint, it is better, I think, overall. I'm not a programmer by trade, but I do use it for hobbyist stuff, you know, add-ons, changing code in WordPress templates and things like that.
But you know, ChatGPT Codex is also very good. Depends on the level of code that you need to do.
One of the things I've noticed, though, that's kind of interesting from an organizational standpoint, is that from a chat management perspective, I think OpenAI has an advantage. One of the things I like about both of them is that you can create project folders.
And so for these tools you can kind of give overarching instructions to how it should address you, what kind of writing tone you want. But if you create a project folder, so I have one called Personal, for instance, I can actually make settings for each project folder. So then, and this is the same for Claude and ChatGPT.
And so I can put in a bunch of, you know, things about myself, personal interests. That's kind of what the instructions say: keep in mind, these are Eric's interests for personal projects; you know, work might have different ones. I have one called Notes, and the instructions for the Notes project folder are exactly how I want notes to be formatted and what I want it to do.
And so that's kind of cool. And you can do the same thing with Claude. The only thing I don't like about Claude though is that you can't hide chats in project folders. And I'm not sure that you can archive them either.
So eventually, being able to bulk delete them is kind of what I want in all these tools. But you can't.
You can pin things, but now I just have this infinite scrolling list of chats in Claude, and I can't really get rid of them, I can't archive them. If I put them in a project folder, they still stay in my recents. So for management of these chats and conversations, ChatGPT works a lot more like a notes app, where you can put things in folders and organize them and stuff like that. I don't know which is really better.
I find that these tools are better for certain things. Like web research, running Deep Research, they work well.
I do find that maybe ChatGPT is a little bit more thorough from a web research perspective.
It tends to format documents a little bit nicer. Like, I've sent you deep research from both, and ChatGPT tends to format it better. But if it's written in prose, then the Claude stuff sounds better. I think with a lot of these tools, it's six of one, half a dozen of the other.
I'm still not a huge fan of Gemini. I've been told that the models that Gemini are using are much better now, and I believe it. But I don't like the way it writes. I don't like the interface.
Copilot has a terrific interface, much nicer and cleaner, but I find the outputs to be lackluster. I think maybe it's a little bit more on rails.
I think if people are interested in focusing on these tools, having, you know, played with a lot of them, I guess my tip of the week is that those are kind of the two main platforms that offer this functionality. One of the cool things I'm interested in that Claude has over ChatGPT is this idea of Cowork, where you can actually give it access to aspects of your computer. Like, you can give it access to the whole computer if you want. I have not done that yet. I have a folder in my documents called Claude that I've given it access to, so I can put stuff in there and it can work in that folder. And I have given it access to Apple Notes, so it can search my notes, it can go amend notes, it can take things that I've told it to do and make a new note.
And so that's really handy. But yeah, Claude can actually run your device, not on mobile, but on a desktop computer. Yeah. So that's kind of cool. I've heard of people saying, like, find all the bloatware on my computer and delete it. There are all sorts of interesting things you can do with Claude. I don't know that there's a clear winner, though. I think it's just pros and cons. I think the thinking models for coding and data analysis and text and writing are a little bit in favor of Claude. ChatGPT is kind of a more well-rounded tool; you get a lot more. I think it's a better value: you get images, you get video creation, you get a bunch of stuff. Right. You don't get any of that with Claude. It's very basic.
[01:02:28] Speaker B: Although on the image creation now it feels like Gemini might be the better one.
Yeah, now it's that Nano Banana or whatever.
[01:02:40] Speaker A: Yeah.
[01:02:40] Speaker B: They've even removed that name. So now it's just images in Gemini, or what have you, that you look for.
But anyways, yeah, I mean, but I
[01:02:52] Speaker A: still get really impressive images from all of them. So it's kind of like, you know, do you want a Toyota Tercel or a Camry? Like, I don't know. I mean, yeah, they're fine.
Ninety percent, you know. Good enough most of the time.
Claude is more expensive. It's closer to $30 a month.
So I don't know. I don't know what's best.
One of the things I've been doing is working a lot more with these local models.
You can actually install them yourself. That was the next thing I was going to talk about. So Ollama is the desktop app. There's a graphical user interface, but you can also chat with the models in the terminal.
You don't have to use the terminal, though. And, you know, if you ask an AI, even a free one, it'll tell you how to install Ollama on any operating system. It gives you the terminal command walkthrough. So right now, the models I have installed in Ollama are: Llama 3.1, which is the open source model from Facebook.
Qwen 3, which is, I think, a Chinese model. And then I have Gemma 3, 4 billion parameters, and that's from Google. That's their offline model. So the B means how many billions of parameters, and the more parameters, the more RAM you need. A 30 billion parameter model needs a lot of RAM. So if you have an 8 gigabyte RAM computer, or even 16 gigs of RAM, it's going to be slow and it's going to be unreliable. So kind of 4 billion to 8 billion parameters is the sweet spot.
The lower the parameters, the faster it'll be, but the less robust. Right. So so far I like Llama the best, the 8 billion parameter model. But there's lots of them, you know, it's pretty cool.
I don't know if there's like a huge difference. Some of these do image recognition, so there's all sorts of really interesting stuff that you can do with the models.
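[Editor's note: as a rough sketch of that parameter-count-to-RAM intuition, here's a back-of-the-envelope estimate. This is our own arithmetic, not anything from Ollama's documentation: it assumes the weights dominate memory use at a given number of bytes per parameter (16-bit full precision versus a typical 4-bit quantized download), and real usage will be somewhat higher once the runtime and context cache are included.]

```python
# Back-of-the-envelope RAM estimate for running a local model.
# Assumes the weights dominate memory use; real usage adds overhead
# for the context (KV) cache and runtime, so treat these as lower bounds.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params_billions * bytes_per_param  # billions of params x bytes each = GB

for params in (4, 8, 30):
    fp16 = weight_memory_gb(params, 2.0)  # 16-bit weights
    q4 = weight_memory_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{params}B model: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

So even heavily quantized, a 30B model wants roughly 15 GB just for weights, which is why an 8 or 16 GB machine struggles with it, while 4B to 8B models fit comfortably.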
I will say that I really like that you can run these local models on mobile devices. Now there's a bunch of apps from the app stores, Android and iOS.
I don't know which one is better. Some of them are paid, some of them are not. The one that I have been using for free, just to test this stuff, is an app called Locally.
So Locally AI.
So, it's just an app, it's free on the iOS App Store. I think it might also be on other platforms, but these are a dime a dozen. They'll just download the model directly, they take it right from the repository, and then it's working on-device. You don't even have to be connected to the Internet to use these, which is pretty cool. And depending on the device you have, it'll actually recommend models that your device can handle.
[01:05:37] Speaker B: Okay, nice.
[01:05:39] Speaker A: But they are not anywhere near as good as the frontier models from Anthropic or OpenAI.
So, you know, they're fine for simple text revision, stuff like that. But your mileage will certainly vary.
[01:05:53] Speaker B: Yeah, absolutely.
[01:05:55] Speaker A: Yeah. But that's basically all I have to say.
I don't have a lot of, I suppose, insights, other than it's six of one, half a dozen of the other.
[01:06:08] Speaker B: Yeah, no, and again, I mean, I think you should try them out. I would still say it's good to, you know, be proficient in one and just focus on that. But as new features come out, you never know. And one thing I would encourage, that I think a lot of people don't realize, is the paid versions are much better.
[01:06:30] Speaker A: Yeah.
Yeah.
Especially because you get access to the reasoning models, and that's really where the advantages are.
But with that, I think that's more or less what we had to talk about today. So I guess we will end it there.
[01:06:46] Speaker B: All right, sounds good. Until next time.
[01:06:48] Speaker A: You bet. Take care, Sam.