Episode Transcript
[00:00:08] Speaker A: Welcome to Examining, a technology-focused podcast that dives deep. I'm Erik Christiansen.
[00:00:16] Speaker B: And I'm Kris Hans.
[00:00:26] Speaker A: And welcome to another episode of the Examining podcast. I am Erik Christiansen and I am here with my colleague Kris Hans once again. Good afternoon, Kris.
[00:00:36] Speaker B: How's it going?
[00:00:37] Speaker A: It's going really well. We're kind of getting our groove back. We're doing this more and more frequently, we've got a couple of good new episodes under our belts, and we're building momentum. So now we're into February 2025, and I think we're going to continue with some of the AI stuff. I also had a piece, or we have a piece, that I thought was interesting from Microsoft Research, and perhaps we can draw on our own experience there. But did you want to start with a general discussion? We have some sources we're going to include too, but there's been so much written about this that it's hard to point to just one article regarding DeepSeek, this AI model that came out of China that some people are saying is starting to disrupt the American market, or the American tech companies. So perhaps we can dive right into DeepSeek and explain to people what DeepSeek actually is.
[00:01:39] Speaker B: Yeah, and I thought it was interesting, because this is the first time that we've had a new competitor, especially from China. What DeepSeek is, it's a Chinese artificial intelligence lab. It was founded about a year ago, and they have released this open source AI model. And the latest version of DeepSeek appears to rival, and in many cases apparently even outperform, OpenAI's ChatGPT, including the GPT-4o model and its latest o1 reasoning model. So the fact that this DeepSeek chatbot could outperform OpenAI's ChatGPT as well as Meta's Llama and Anthropic's Claude Sonnet,
it's a bit unnerving to America, especially the AI experts. Especially given that they just started up, they've only had a limited time, and apparently they are using AI hardware that isn't state of the art. They're using mid-range Nvidia chips to develop theirs, and they have apparently spent a fraction of what anybody else has spent. According to a technical report that DeepSeek put out, developing their model cost them just 5.576 million U.S. dollars. So millions, not billions, as OpenAI has spent in the past.
[00:03:21] Speaker A: And I mean, it's plausible, though there are some oddities to the DeepSeek story, which you and I will talk about. So there's kind of two ways about it. If we take it at face value about how it was developed, and we believe that, then yes, it's doable.
However, it's possible that there's a little bit more to this story, based on what you and I have read. The original reporting, and the podcast This Week in Tech covered it really well, said that they originally used the Llama model. That's the large language model from Meta that's called open source, but it's not really open source, because nobody knows what's in it; it's just open to use. So Meta made it open to use as a base for training AI, but you can't see the source code, so it's not truly open source. And it's been reported that that's the foundation for DeepSeek. No surprises there, anybody could technically go and use that. But what's interesting is that because the US has put that trade restriction on US chips going to China, there are a few questions about how they did this. DeepSeek has a backer in China, I believe, some kind of billionaire patron, though I don't remember the group or the individual's name. So one possibility is that they used older Nvidia cards with the Llama model as a foundation and then trained DeepSeek using what they call distillation. That's a method that reduces the training costs and the computational power while maintaining the performance. As I understand it, you build your AI foundation and then you test it against an existing AI, so they would be able to test it against competitors like ChatGPT.
The other, more conspiratorial way of looking at it is that perhaps they got hold of black-market versions of the high-end Nvidia chips and spent a lot more money than they say. Which is possible, because we don't have any way of confirming it. So there are different stories floating around on how this was done. But the thing I think is interesting is this way of training the model, distillation, which is new to me. Someone spends a lot of money building an AI, then you build an AI that's not as good as a foundation, and you test it against your competitor's AI to make it better.
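(For the curious, here's a toy sketch of what a single distillation training step can look like in code. This is an illustrative PyTorch example, not DeepSeek's actual recipe; the tiny stand-in models, temperature, and loss weighting are all assumptions.)

```python
# Toy knowledge-distillation step in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher = torch.nn.Linear(16, 8)   # stand-in for a big, expensive model
student = torch.nn.Linear(16, 8)   # smaller, cheaper model being trained
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
T = 2.0  # softening temperature: higher T exposes more of the teacher's "dark knowledge"

x = torch.randn(32, 16)  # a batch of inputs (unlabeled data is fine for distillation)
with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher only provides targets, no gradients

student_logits = student(x)
# KL divergence between the softened distributions; the T*T factor keeps
# gradient magnitudes comparable across temperatures.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")
```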
And so some on the American side are arguing that this is not so great. I think it was David Sacks, who's kind of the AI and crypto czar in the Trump administration, saying that this was, I don't think he said fraud, but it's funny to me because, hey, OpenAI is trained on all sorts of open and probably copyrighted material.
And then when people make AIs using distillation by testing against theirs, they complain that they're stealing their information.
It's like the analogy on This Week in Tech, or maybe it was another podcast: you break into someone's house and steal their TV, but then someone steals it from you, and you report it to the police. Yeah, exactly. So, if I'm understanding this correctly, is that kind of how the distillation process works?
[00:07:29] Speaker B: Well, you know, this is where, I think with news media, I still am a little bit skeptical of the whole thing, because back in the day, when you had any kind of journalism work being done, you would fact-check. And here I don't believe the fact-checking has taken place, because DeepSeek doesn't have to report what happened, or where they spent the money, or how much it cost, or anything like that. But if they did go and use these mid-range Nvidia H800 chips, well, I actually read this one article with the CEO of Perplexity, Aravind Srinivas. And what he was saying is that necessity is the mother of invention, and because they had to figure out workarounds, they actually ended up building something a lot more efficient. Kind of makes sense, logically speaking.
[00:08:29] Speaker A: Yeah, I mean, we could say the same thing. It's not quite the same as an AI model, but just look at what we did for this podcast. We were in a pandemic. You and I had never recorded in a studio; we'd never set one up. So what are we going to do? Right? We have to use equipment that we can afford, that we can source. We have to use Zoom, software that's within our wheelhouse. Of course we've changed things and gotten better and more professional. But that's how you would compete as a scrappy startup. Not that we're trying to compete too hard against some professional podcast startup that has its own studio built somewhere, and they rent space and have all the gear. That's how you get around it. You say, well, we can't do it exactly the same, so how do we get as close as we can while cutting out all those other things? Right? So it's plausible that they just found a better way to train because they had to.
[00:09:31] Speaker B: Yeah, yeah, exactly. I mean, it is possible.
You know, the other thing, too, when we look at these large language models: maybe this isn't the best way, right? They are hallucination-prone. Everybody's been obsessing over them. But, and we've talked about this before, maybe you need to make smaller language models which are honed in a little bit more. And do we really need a know-it-all consultant that claims they can do everything and then delivers mediocre results at the end of it?
I think it isn't just about building these big models; they need to build smarter, faster, more efficient ones. So in some ways maybe this is a good thing. It's going to actually encourage innovation, because for a while there it looked like there were basically just three players. Beyond Meta, I still find Anthropic and OpenAI probably to be the strongest. I mean, even this past week, it's funny, because I have my subscription version, but I was using the free one just to show my students, and I only ran it a couple of times.
It was just one little exercise that we were going through and critiquing, and it stopped working for me. It just said that I'd used the free version too much.
This was ChatGPT I was using. So I used it twice; I had three classes that day, and for two of them it worked. I did just a couple of small prompts. The third one, it didn't work. So then I'm like, okay, let's flip to Copilot. And Copilot, even though it's based on ChatGPT's large language model, did not perform as well. So it looks like ChatGPT and Anthropic's Claude are still the stronger ones out there. But now you have this upstart with, apparently... I mean, I still am skeptical. I feel like it's almost like a shorting scheme on Nvidia, and the whole market kind of went into a free fall.
[00:12:01] Speaker A: But it is possible. Very briefly, I mean, it went into a free fall from an absolutely huge paper gain, from record-high stock prices. Right? So it's still way above water; if you invested in Nvidia four years ago, you're still way up. This isn't anywhere close. It's like a paper loss. There is still some skepticism, from what I've read, about whether there's involvement from the Chinese government supporting this, and whether it was as cheap as they say, like I said. But you mentioned spurring innovation, and what's interesting is that OpenAI then released a new model. Yeah, they released this o3-mini. Right.
And I thought that was quite interesting, because when you and I presented on this topic some time ago at our faculty retreat, what we said was: you've launched the Cadillac version of this model. It's very expensive to train, but like you said, it's not targeted.
So from a business perspective, you're trying to get it into different businesses, or you want to have it run locally for privacy reasons. These models don't scale very well. You can't just make them bigger and bigger and more complicated, and it's not even clear that that would make them smarter. Right? There has to be another way to do this. And so we said, you know, there are these concerns about the energy impact, and that's fair, but these smaller, more targeted language models are just more reasonable. Think about it: I need to do text summarization on things that I already have. Do you really need GPT-4o, or whatever it's called, to do that? Or can you do that locally with a small language model? I mean, I can do that with the Apple Intelligence stuff. I can do all the text summarization I need with a combo of ChatGPT and Claude, or whatever Notion is using within the Notion app.
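(A minimal sketch of that kind of local, small-model summarization, assuming the Hugging Face transformers library is installed. The model ID below is a small, publicly available distilled summarizer used purely as an illustrative stand-in, not what Apple or Notion actually use.)

```python
# Minimal sketch of small-model text summarization running locally.
# Assumes `pip install transformers torch`; sshleifer/distilbart-cnn-12-6
# is a small distilled summarizer standing in for whatever on-device
# model a product like Apple Intelligence actually ships.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "DeepSeek is a Chinese AI lab whose open-weight model reportedly rivals "
    "frontier systems while costing a fraction as much to train, raising "
    "questions about whether ever-larger general-purpose models are needed "
    "for everyday tasks like summarizing text you already have."
)
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```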
These Copilot+ PCs from Microsoft do AI on device, because they're using Qualcomm chips with some sort of small language model running on device, without Internet. So this is doable. And it's interesting to me that OpenAI responded with this other model, and responded so quickly. Which tells me: did they have this in the works, and were they already thinking the same thing? Or did they not want to release stuff that's going to cannibalize their existing market?
[00:14:43] Speaker B: Yeah, I mean it's probably a bit of both.
Who knows what they have in the pipeline. Although another senior executive just left, so pretty much everybody other than Sam Altman is gone. Who knows? That kind of speaks volumes as well. But at the end of it, like we kind of saw, I don't know if you need these huge models. We're going to have to come up with better architecture, something that is more specific and doesn't cost as much to train and doesn't use as much energy. So maybe the implications of DeepSeek making a cost-effective AI model will be good from a societal innovation standpoint.
[00:15:33] Speaker A: There's an article that I sent you on Medium, and, I mean, you don't need to read through it, but it basically talks about OpenAI, DeepSeek's R1, and o3-mini, and it compares o3-mini to o1, the reasoning model. In many cases o3-mini scores as well as the highest-end models.
And this is a really interesting article; I can put it in the show notes. I'm a member of Medium, I don't know if you are, so I wasn't sure if you could see it, but it's by Austin Starks. They explain what o3-mini is, and they say that it's a new and improved large reasoning model. So it's a little bit different than GPT-4o, right? These o1 and o3 models are different. Unlike a traditional large language model, which responds instantly, reasoning models are designed to think about the answer before coming up with a response. So it's kind of a smaller data set, I guess, and it's testing against itself. It takes a little bit longer to get the answer, but it doesn't have to reach out to the whole corpus of knowledge. That's how I understand it.
He goes through and compares the responses from o3-mini and GPT-4o, and talks about the pricing for input tokens. So GPT-4o is $2.50 per 1 million input tokens and $1.25 per 1 million cached input tokens. I don't exactly understand how this works, but it's basically the price per unit, right?
And then o3-mini is $1.10 per 1 million input tokens and 55 cents per 1 million cached input tokens. So the cost, from an electricity and data center point of view, is way less.
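(To make those per-token prices concrete, here's a quick back-of-the-envelope calculation in Python using the numbers above; the monthly request volume and the cached share are made up for illustration.)

```python
# Back-of-the-envelope cost comparison using the per-million-token prices
# quoted above. The request sizes below are made-up illustrations.
PRICES = {                      # USD per 1M input tokens: (regular, cached)
    "gpt-4o":  (2.50, 1.25),
    "o3-mini": (1.10, 0.55),
}

def input_cost(model, tokens, cached_tokens=0):
    """Monthly input-token cost in USD for a given model."""
    regular, cached = PRICES[model]
    fresh = tokens - cached_tokens
    return (fresh * regular + cached_tokens * cached) / 1_000_000

# Say an app sends 10M input tokens a month, 40% of it cacheable prompt prefix.
for model in PRICES:
    cost = input_cost(model, 10_000_000, cached_tokens=4_000_000)
    print(f"{model}: ${cost:.2f}/month in input tokens")
# gpt-4o: $20.00/month, o3-mini: $8.80/month -> roughly 2.3x cheaper
```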
And it scored pretty close to o1-preview, if not the same or better, on PhD-level science questions and things like that when they ran it against those benchmarks. Yeah. So it's not dramatically different.
[00:17:54] Speaker B: Even that other link that you sent, where o3-mini was going, you know, that animation.
[00:18:01] Speaker A: Oh, comparing to DeepSeek. So we can explain this to people who can't see it. There's a, there's an octagon. Or is it an octagon?
[00:18:08] Speaker B: 2, 4, 6.
[00:18:09] Speaker A: Hexagon.
[00:18:09] Speaker B: Hexagon.
[00:18:11] Speaker A: I know octa means eight, so I just had to count it. So there's a spinning hexagon, and this is from Flavio Adamo, a tweet I came across. He says: write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically. And so it's a spinning hexagon with a ball inside, but the DeepSeek one is just all over the place, like it's defying gravity. And the o3-mini one is way better. But you know, that kind of makes me somewhat even more skeptical of what OpenAI is up to.
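(For anyone curious what that prompt actually asks for, here's a minimal sketch of one possible solution. It assumes pygame is installed; the physics constants are arbitrary, and the moving walls' own velocity is ignored on impact, so it's a simplification of "realistically".)

```python
# Minimal sketch: ball bouncing inside a spinning hexagon (assumes pygame).
import math
import pygame

W, H = 640, 640
CENTER = pygame.Vector2(W / 2, H / 2)
RADIUS = 250          # circumradius of the hexagon, px
GRAVITY = 900.0       # px/s^2
RESTITUTION = 0.9     # energy kept in the bounce's normal direction
FRICTION = 0.99       # tangential damping applied on each bounce
SPIN = 0.8            # hexagon angular velocity, rad/s

def hexagon(angle):
    """Return the six vertices of the hexagon rotated by `angle`."""
    return [CENTER + pygame.Vector2(math.cos(angle + i * math.pi / 3),
                                    math.sin(angle + i * math.pi / 3)) * RADIUS
            for i in range(6)]

def main():
    pygame.init()
    screen = pygame.display.set_mode((W, H))
    clock = pygame.time.Clock()
    pos, vel = pygame.Vector2(CENTER), pygame.Vector2(150, 0)
    ball_r, angle, running = 12, 0.0, True
    while running:
        dt = clock.tick(60) / 1000.0
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        angle += SPIN * dt
        vel.y += GRAVITY * dt
        pos += vel * dt
        verts = hexagon(angle)
        # Collide the ball with each edge of the rotating hexagon.
        for i in range(6):
            a, b = verts[i], verts[(i + 1) % 6]
            edge = b - a
            normal = pygame.Vector2(-edge.y, edge.x).normalize()
            if normal.dot(CENTER - a) < 0:   # make the normal point inward
                normal = -normal
            dist = normal.dot(pos - a)       # signed distance from the wall
            if dist < ball_r and vel.dot(normal) < 0:
                pos += (ball_r - dist) * normal          # push out of the wall
                vn = vel.dot(normal) * normal            # normal component
                vt = vel - vn                            # tangential component
                vel = -RESTITUTION * vn + FRICTION * vt  # bounce with losses
        screen.fill((20, 20, 30))
        pygame.draw.polygon(screen, (200, 200, 220), verts, width=3)
        pygame.draw.circle(screen, (240, 120, 80), pos, ball_r)
        pygame.display.flip()
    pygame.quit()

if __name__ == "__main__":
    main()
```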
Yeah.
[00:18:51] Speaker B: So yeah, it's interesting times for sure. And I think, going forward, especially if people are going to report on this, there needs to be some more in-depth investigation. I don't even know if these companies are going to publish some of this stuff, but we need to know what kind of data they trained the language model on in the first place. Right now everybody's just speculating. We don't know: is it Llama, is it ChatGPT? Nobody knows. And then, I'm not a scientist, but you would probably go and test it and see whether it was successful or not, how many failures.
There should be a way to go and measure the hallucination rate.
[00:19:53] Speaker A: Right.
[00:19:53] Speaker B: You know, what kind of projects could be handled by these language models? Like you just mentioned, now there's this reasoning one, and it makes sense to me. If I want a higher-quality output, do I want it instantaneously, in a second? Or would we rather have it take the information, whatever prompt you have, think about it, process it, and then spit it out after maybe a couple of minutes? I think that would be worth it. But again, and we predicted this when we talked about it at the faculty retreat, there are going to be smaller language models, because I'm not convinced about these big ones. Take, let's say, the 50 billion documents that ChatGPT's original model, GPT-3, was trained on. For the business communication course that I'm teaching, do I really need a bunch of information in the language model, like Stephen King's novels, history, philosophy, all sorts of other things, when I'm just trying to respond to an email?
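(Kris's point a moment ago about measuring hallucination rates can be made concrete with even a toy harness. Everything below is hypothetical, including the dummy ask_model stand-in; a real evaluation would call the model under test and use a better match than substring comparison.)

```python
# Toy sketch of measuring a "hallucination rate" against a labelled QA set.
def ask_model(question: str) -> str:
    # Hypothetical stand-in for the model under test; replace with a real API call.
    return "Ottawa" if "Canada" in question else "Paris"

def hallucination_rate(qa_pairs) -> float:
    """Fraction of answers that miss the reference (crude substring match)."""
    misses = sum(
        1 for question, reference in qa_pairs
        if reference.lower() not in ask_model(question).lower()
    )
    return misses / len(qa_pairs)

qa = [
    ("What is the capital of Canada?", "Ottawa"),
    ("What is the capital of Australia?", "Canberra"),
]
print(hallucination_rate(qa))  # 0.5 with the dummy model above
```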
[00:21:06] Speaker A: Probably not. Well, and you're quite right. In fact, what I'm most interested to see, and I don't understand the new models well enough yet, is this: I want to investigate a little further into the different models and the size of their training data, and look at some articles that have done heavier side-by-side comparisons in a form that's accessible for me to digest.
But what I want to know is: (a) what's the difference in quality of answers between a huge model versus a smaller one, and for what tasks? And (b), and maybe this can't be answered yet, at what point do these larger and larger models get worse as they integrate less human-generated content? I think that remains to be seen.
I think perhaps the innovation, and this is my guess now, is tuning those language models to the task at hand, like you said. Right? Microsoft just had their earnings report, and they're spending something like $19 billion a quarter on the infrastructure for this stuff. It's bonkers. There's a lot of growth, but I don't think it's seeing a lot of profit. So right now they're building out the infrastructure, but more targeted models with more targeted tasks probably make more business sense.
[00:22:39] Speaker B: Yeah, well, and then I also think about what we talked about at the faculty retreat, about having the smaller language models locally. Like we chatted about offline, I updated my iPhone to the latest OS, and now they've made Apple Intelligence on by default for everything. My phone was running so slow. And they've even changed the UI for email: at the top, probably about a centimeter of your screen is just taken up with icons for, say, shopping-type emails and such. You can click between shopping, or people, or all mail, but it's literally taking up about a centimeter of my screen.
I don't know if that's necessary. I don't know who designed this thing.
[00:23:36] Speaker A: They're betting on it being heavily used. I guess they're hoping that the inconvenience to you will be offset by driving more sales of AI-specific phones. Though I have read that there is not a huge demand for AI; it's not driving more iPhone sales.
[00:23:56] Speaker B: Oh, you know, in fact, like we chatted about offline, when I updated it, I did it overnight, and then I got up in the morning and it was taking so long and prompting me with stuff, and I had to get to work. So I just turned it off. I turned off the whole thing. And I'm thinking, okay, here's somebody who actually likes tech, and I'm getting alienated. What about all those people that have issues with it, or who struggle with tech changes?
[00:24:28] Speaker A: You can turn it off, to an extent. In the Mail app, if you're in your inbox, go to the three dots for the options in the top right, and you want to change it back to List View.
[00:24:40] Speaker B: Oh, okay. There you go. There we go. Okay, that's so much better. Yeah, see, I didn't even have time to investigate that.
[00:24:46] Speaker A: I just looked at it before we got on the podcast, because you were telling me about it, and I don't use the Mail app. I use Proton, and I can't integrate Proton Mail with the iOS mail client, so I only use Mac Mail; that's why I haven't used the iOS app for so long. But that's the one way to hide it. Though I still think it takes up more of the header than it used to.
[00:25:09] Speaker B: But I mean, it would be great if Apple Intelligence, and obviously, like we talked about last episode, they know it's a problem, Siri is becoming dumber. But out of all the companies, Apple is the only one that doesn't go and take our data and use it for advertising and selling.
[00:25:31] Speaker A: Not yet.
[00:25:32] Speaker B: Well, not yet. Maybe they will. But for them, if it could be on device and learn from how I write and get better, yeah, I'm all game for that. But that isn't what's happening right now. And it was funny, because even beyond this, let's set aside my skepticism about which Nvidia chips were used and the money that was spent to train this thing. On the flip side, the other thing that I find really interesting, and that most people, especially Westerners, don't think about: look at the hiatus we had with TikTok. From the Chinese government's standpoint, they're basically creating a bunch of different apps, and in some ways this DeepSeek might be like a Trojan horse. Let's say TikTok gets shut down. And look into the actual privacy policy of DeepSeek. You can run it online through a web interface, and you can even download it onto your computer from Hugging Face and such. But imagine you download the app because it became the number one app on your phone. Under the privacy policy they have, it will go through everything on your phone, text, voice, everything. It'll scrape it. And they probably want this; you'd probably want a phone where you can siphon off everybody's data and spy on it. I mean, our social media apps are not allowed in China, but all their stuff is allowed here. And beyond TikTok and this DeepSeek, they have four or five other apps that are very popular on the Western side, especially with the younger generation. So from a national security standpoint, I think people need to be a little bit more aware and skeptical. My suggestion to students this past week was: be careful. What I would do is probably use it in incognito mode on the web, and be careful with what you feed into it. And even then, who knows.
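(On the "download it onto your computer using Hugging Face" point: here's a minimal sketch of running one of the smaller distilled DeepSeek models locally instead of through the app. The model ID is one of the published R1 distills on Hugging Face, but whether it fits your hardware is an assumption; treat this as illustrative.)

```python
# Minimal sketch of running a distilled DeepSeek model locally, with no app
# installed and nothing leaving your machine. Assumes `pip install
# transformers torch` and enough RAM for the ~1.5B-parameter distill below.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
)

prompt = "Explain model distillation in one short paragraph."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```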
[00:27:56] Speaker A: The OpenAI app is more just like a web wrapper.
I don't know that for sure, though. Does it ask for a lot of permissions?
[00:28:03] Speaker B: I mean, I've tried it on my computer, and you would have to enable things, like attaching files to it. And you could do that in the website as well.
[00:28:13] Speaker A: Right, but on a phone, does it scrape anything?
[00:28:16] Speaker B: I don't think so. Not unless you wanted it to, like if you go and give it access to your files and other things like that.
[00:28:26] Speaker A: Yeah, I've only ever uploaded one thing at a time. I don't think it has access to everything; maybe it does. I usually use the web interface. I find it better; I don't like the apps. The apps actually look a little bit different, which is confusing to me. I'm used to the web interface, so I like to stick with that.
On a related note to AI, because we talked about this last time: those AI agents. You asked me if I would trust them. So, just for people who don't remember, these AI agents are these operators, kind of AI operators that would act on your behalf. I don't know why I can't speak today. And so there was a New York Times article called "How Helpful Is Operator, OpenAI's New AI Agent?"
And they kind of offer a glimpse into, hypothetically, what these agents could do for you. So they subscribed to ChatGPT Pro, which is $200 US a month, which is very expensive. Operator can browse the web, fill out forms, and execute user-directed actions, though it requires frequent human intervention. So it requires a lot of hand-holding; it's not totally autonomous. And they break down what the agent did well and what it didn't.
So it successfully ordered items they requested from Amazon on their behalf. It booked a Valentine's Day reservation and scheduled a haircut. It ordered food for a colleague via DoorDash and successfully had it delivered to the correct address it was given. It responded to all LinkedIn messages, though the author said it mistakenly registered the user for a webinar, which I thought was pretty funny. And then it made a dollar twenty from online surveys.
And then the failures: it was blocked from certain websites, the New York Times being one, plus Reddit and YouTube.
It couldn't do the CAPTCHAs, which I thought was really interesting.
So even the AI agent, with all the new AI technology, still struggled with CAPTCHAs. And it refused to engage in gambling activities.
So I guess you can't have it go and gamble online for you, because that would be cheating.
And it required constant confirmation, which made it inefficient. It kept coming back and asking for things.
So their assessment so far: they're not super useful, and they're not going to replace people anytime soon. But I was also thinking about a couple of problems with this. First of all, having it go and answer your LinkedIn messages and order DoorDash. Okay, let's say this becomes $10 a month for all the AI stuff in the future, and everybody has access to a certain number of credits or tokens for an AI agent. Again, I tell it: go respond to my LinkedIn messages. What if those LinkedIn messages were created by the other person's agent? I mean, where is this energy going? Is it just telling agents to go interact with other agents? There's an article right now on the cover of either the latest issue or one of the most recent issues of The Atlantic, I think, and it talks about how lonely society is and how much time people spend alone. And I'm thinking: what could be more lonely than sending a robot to go talk to your friends' robots?
It just seems incredibly silly. And it breaks a lot of security protocols; a lot of websites might not like this. What if you tell it to go deal with customer support? This might be a nightmare for companies, right? There might be a huge cost somewhere else, with companies just being flooded with AI agents.
[00:32:25] Speaker B: Well, it goes back to like, imagine back in the day, it's like, I don't know, we need to go and make a lunch reservation or something. I'll get my people to contact your people. But now it's not even people, it's AI agents.
[00:32:39] Speaker A: Yeah.
I don't know. I found it to be, again, targeted for a certain thing, right? Like, go run statistics to find out how many impressions our company's advertisements got. There are things that are useful for gathering information, like: over the next week, gather all the information about trade between the US and Canada, something like a better Google Alerts. I get it, there are some uses here. But I don't have the trust to allow some robot to order from Amazon. I mean, for all I know, I turn around one day and I have a $2,000 Amazon bill because it made some minor error and ordered a million of something.
[00:33:31] Speaker B: Well, yeah, exactly. I don't know, it just feels like, especially with the way that they're shipping these products... look at what you just mentioned: in response to DeepSeek, OpenAI just releases another one. Maybe they were working on it, or maybe it's something that they just whipped together and threw out there, just to show that they're working on things. And if this was hardware, would you go and launch it without testing the thing?
[00:34:07] Speaker A: Well, you make a very good point. There's not a lot of testing going on, and there's not a lot of thought, at least I don't think, being put into the ramifications of launching these products, the speed at which they're being launched, and the haphazard nature of how they're being launched. As much as I really love using AI for a lot of tasks, and I've found it to be enormously useful, I would like to see more thoughtful iteration. Remember the dot-com bubble, between 1999 and 2001? Everybody was launching a website; everybody was throwing whatever they could at the wall. Every dot-com was going public, and then it burst. Now, this is a little bit different, because of course there's only a handful of players, since it's so expensive to get into. But I can see a scenario where these companies are just throwing everything at the wall and then something bottoms out. It doesn't mean the AI goes away or doesn't become useful, but that correction is almost what's necessary to clean out the cruft.
[00:35:13] Speaker B: Well, and it seems like it's already happening, because they just throw something out there, like this AI agent thing. I wonder how long they actually worked on it. And would you go and release it out into the wild,
start charging people $200 a month? And what value does it have?
[00:35:33] Speaker A: I think it has value. Like, I can see it for purchasing on your behalf. Right? I don't see it having a lot of value in tests like this; the New York Times did a bunch of one-off tests, and a bunch of very different tasks isn't going to save me a lot of time. But an agent that does something that I always do, over and over and over again, may save some time.
That would be the only way I could see it being useful. So if you always know that you're going to order these things from Amazon, toilet paper, whatever, once a month, or you're a business where you are always procuring from Costco and having it delivered.
The cost is the cost either way. So if you have ChatGPT and it's worth it for you to have it act on your behalf and save enough time on those routine things, I can see it. But at $200 a month per user, that's the only way I could see it scaling, because, again, I'm not making $200 an hour in my day job, so it's not going to save me enough time to make it worth it.
[00:36:45] Speaker B: Again, this is where I don't know if the use case holds up. I don't know who came up with this experiment, this idea of purchasing things, but do we need an AI agent to go and buy stuff for us on Amazon? Or do I just use the subscription functionality within their website and subscribe? For instance, I just got an email today: my Nespresso pods, the ones that I usually get, are out. So now I have to either substitute them or cancel the order; otherwise it was already on autopilot. I get it every however many weeks, maybe once a month, and I set it up that one time. This is where I don't know if they're really testing out the use case. And you mentioned the dot-com bubble; about 10 years after that, we had the app bubble, where everybody and their dog was building an app. Do you need an app? There are other things you can do; you can have a website. Right? Not every business needs an app. And so again with this: maybe if you were doing some specific task, like you were saying, for work, and you built it for that specific task.
[00:38:05] Speaker A: Exactly.
[00:38:06] Speaker B: Then maybe. But honestly, I don't think that they have thought this through, with $200 a month for that subscription. First off, it's going to turn off a bunch of people right at the onset. Unless, like you say, maybe for lawyers, if you're going and doing certain contracts or whatever, and that's basically less than one hour of your time, and it actually did go and perform some of your tasks. But then, are you going to go and take on that liability?
[00:38:42] Speaker A: Yeah, this is a little bit different, what's rolling out, than how Satya Nadella described agents. We can talk about that in more depth another time, because I'd need to go back and read what he said about it and listen to some of his keynotes from the Build conference. But I got the impression that it was a competitor, like we said last time, to the software-as-a-service model: it would go out and answer questions for you, almost like a copilot. It could gather information that would be very laborious and time-consuming to go through and gather yourself, especially if you had a bunch of institutional knowledge at a company like Microsoft, for instance. It can scan all the SharePoint sites and drives, pull all the market research together, and present you a portfolio. That might actually be really compelling. But that's a very, very specific use case. The New York Times, I'm glad that they tried it, but none of these things, even in totality, are worth $200 a month. Right?
Perhaps our last thing that we can talk about today, and I just brought it up as kind of an interesting productivity thing.
I came across this article in a tweet. A tweet? An X?
What is it? Why do you say tweet?
[00:40:05] Speaker B: I still call it a tweet. I don't know. I don't know if it's ever going to go away.
[00:40:08] Speaker A: Is It a Z, an X, a Z. I don't, I don't know what it's supposed to be.
But this is an article I found because I follow Greg McKeown, who wrote the book Essentialism, and he talked about it. So I went back and found the original, and it was a paper published by Microsoft Research.
It made the case for breaks between meetings, which I found really interesting. It said back-to-back virtual meetings, a hallmark of remote and hybrid work,
cause stress and fatigue, making it harder to focus and engage. In fact, now that I'm reading this, I'm thinking maybe we covered this on a previous podcast. It's possible. Did we talk about this?
[00:40:47] Speaker B: I think possibly.
[00:40:48] Speaker A: Well, we should bring it up again, because I'm in a lot of meetings lately, so perhaps this is worthwhile. Their Human Factors Lab did brainwave monitoring of participants across consecutive meetings: one session with no breaks, another with a 10-minute meditation break between meetings. And they found breaks prevent stress buildup and breaks improve engagement. Engagement went down going from meeting to meeting with no break, and the transitions between meetings, they argue, cause a lot of stress. So the idea is that it's a win-win for everybody, right? A break not only reduces your own stress, it likely lowers everyone's stress and improves the engagement between the individuals in the meeting. And it was really interesting how they did this: they used Microsoft Outlook to sort this out, all their own products, Teams, all this stuff. And they came to this really simple conclusion that there have to be intentional breaks away from screens, and alternatives to email and to constant responses on Slack and Teams and things like this.
And so the idea of having techniques like whiteboards and breakout rooms is actually really important. I came across that, and I think we've talked about this before, but it struck a chord for me because I've been in lots of meetings lately where I've tried to schedule breaks but haven't been able to, and I do notice a difference.
[00:42:24] Speaker B: Yeah, no, for sure. I try, if possible, not to even have meetings in the first place, but sometimes they're inevitable. Again, I think the break... even during the pandemic, one thing I would quite often do, and maybe this study was specifically about remote video conferencing meetings, but I find being on the screen, you actually feel way more tired. And I've heard some tips: for instance, if it is a video conferencing meeting, one thing that you should do is disable the ability to see yourself, because when you see yourself, there is some psychological cognitive load that happens just from seeing yourself on screen. It might be subtle, but that's definitely one thing. But yeah, during the pandemic, I tried my best. If somebody wanted to have a meeting, I'm like, hey, you know what? Let's do it, but let's do it on the phone. Because I don't want to be in front of my computer, and at least I can go for a walk or do something at the same time.
[00:43:42] Speaker A: I'm a big fan of walking meetings. I think it's really helpful. I think people generate better ideas, too.
[00:43:47] Speaker B: Yeah, totally.
[00:43:48] Speaker A: Like, if you and I wanted to meet and we were on campus, I would say let's just walk around the campus. Assuming it's not minus 30 like it is right now.
[00:43:55] Speaker B: Yeah, I don't know about this minus 30. I was gonna shovel this morning, but minus 37 with the windchill? I don't think so.
[00:44:05] Speaker A: Yeah, I did it. I don't recommend it.
[00:44:07] Speaker B: Yeah. And plus, it's gonna snow again later, so I'm like, I'll just do it one time later.
[00:44:13] Speaker A: I agree.
Well, that's all we had to talk about today, I think. I think that's a wrap. Kris, where can people contact you?
[00:44:22] Speaker B: Yeah. So you can find more about me and my contact information on my website. It's Kris with a K: K-R-I-S-H-A-N-S dot C-A, krishans.ca.
[00:44:34] Speaker A: And I'm Erik Christiansen. You can find everything about me at Erik with a K, so E-R-I-K, and Christiansen, C-H-R-I-S-T-I-A-N-S-E-N, dot net. And to learn more about this podcast, you can visit examining.ca.
[00:44:55] Speaker B: Okay, awesome, awesome.
[00:44:56] Speaker A: Take care, Kris.
[00:44:57] Speaker B: Yeah, you too.
[00:44:58] Speaker A: Bye.