Episode Transcript
[00:00:08] Speaker A: Welcome to Examining, a technology focused podcast that dives deep.
I'm Eric Christiansen.
[00:00:16] Speaker B: And I'm Chris Hans.
[00:00:24] Speaker A: So our first article of the day is from Fast Company.
So it's called "AI Coding Tools Could Bring Us the One-Employee Unicorn." So I'm assuming here, Chris, that they're talking about us being so productive with AI tools that unicorn, which usually refers to a startup that gets a valuation, or is it evaluation, or a funding round of a billion dollars, something like that?
[00:00:53] Speaker B: It's. Well, yeah, I mean, it depends, I guess usually it's substantiated based on how much money they raise. But yeah, it would be based on your valuation.
[00:01:04] Speaker A: Right. So with the first. We're gonna get the first $1 billion solopreneur startups, basically.
[00:01:12] Speaker B: Yeah, yeah.
[00:01:14] Speaker A: And so the article basically explores how AI assistants, and it's interesting that they start off with GitHub Copilot, which is the coding copilot, and then talk about the traditional chatbots like ChatGPT, can accomplish tasks that used to require an entire team by automating some of the more repetitive stuff. So their argument is that the repetitive coding, the debugging, even making suggestions, like if you're a developer building an app, all the time spent trying to figure out the architecture, the overall structure, can be brainstormed and done a lot faster with AI itself. What do you think about this? Do you think that's true?
[00:02:02] Speaker B: Well, I mean, I've been saying this for a while now. With the no-code and low-code type of tools that were available, for years I've been saying there's going to be a one-person, $1 billion company. Now, with the AI tools, I think it's even more possible.
You don't need a tech founder anymore. You can go and, you know, develop things that are a lot more complex. Like I, I recall, I think we might have even chatted about this like last year.
But you know, remember back in the day, if we were having a meal and you wanted to create an app to figure out what the bill is, splitting the bill and the tip and all that kind of stuff. Well, literally in a few minutes, you could use something like ChatGPT and it developed the full-on app. I mean, back in the day that would have been a startup venture, and that code, you could literally go and upload it to the iOS store or the Google Android store.
So, yeah, I mean, now, especially when you've got something like GitHub Copilot where you have the whole repository of code. I mean, again, I would always double-check the code and make sure that things are working. But definitely certain aspects, like even debugging, where if you miss even one character it can break the code, it's a really good application that way. And then even beyond that, if you're thinking about marketing whatever your venture is and coming up with a drip campaign or social media posts or what have you, all of those things you would have had to hire people for before. Now you can use these AI tools to deliver some of that.
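The bill-splitting app described above really is only a few lines of logic, which is why a chatbot can produce it in minutes. Here is a minimal sketch in Python; the function and field names are illustrative, not taken from any actual generated app:

```python
import math

def split_bill(total, tip_percent, num_people):
    """Split a restaurant bill plus tip evenly among diners."""
    if num_people < 1:
        raise ValueError("need at least one person")
    tip = total * tip_percent / 100
    grand_total = total + tip
    # Round each share up to the nearest cent so the total is always covered.
    per_person = math.ceil(grand_total / num_people * 100) / 100
    return {"tip": round(tip, 2),
            "grand_total": round(grand_total, 2),
            "per_person": per_person}

# Example: an $80 bill with a 15% tip, split four ways.
print(split_bill(80.00, 15, 4))
# → {'tip': 12.0, 'grand_total': 92.0, 'per_person': 23.0}
```

The point the hosts are making is not that this code is hard, but that packaging it as a polished mobile app used to be a project, and now the whole thing can be generated on demand.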
[00:04:08] Speaker A: It's interesting, I saw something that kind of ties into it. I don't have it with me, but it was a tweet or some sort of social media post by Chamath, and I can't pronounce his last name. Can you pronounce it?
[00:04:20] Speaker B: Oh, yeah.
[00:04:26] Speaker A: I think that's what it is. He's Canadian American, I think he was born in Canada or came to Canada first. So he's a venture capitalist, an interesting guy. I like some of the things Chamath has to say. Really interesting. But he listed subjects that people should study in school, so this is maybe useful to our education listeners. And he used to really push the engineering roles, you know, coding and stuff. And he has similar feelings about not only AI but no code. So somebody who is very logical and understands the logic can code without having to memorize the syntax, because the syntax is the problem, not necessarily the concepts, sometimes. Right? It's like you might have design skills, but the problem isn't your design understanding, it's that using Adobe is frustrating. Sometimes we call that unethical design if it's a user interface. But with code it's not necessarily unethical, it's just the way you talk to the computer. But he says that people shouldn't study those anymore because he felt that in the next few years or two years, and maybe his timelines are a bit.
What's the word? Optimistic, I guess. He felt that engineers would be in supervisory roles at best, if AI is doing most of the heavy lifting.
And so he argues that people should go study philosophy, psychology and stuff like that.
[00:05:55] Speaker B: Yeah, no, for sure. I mean, isn't this the craziest thing if you think about it? For how many years have we been pushing, hey, everybody should learn how to code. There were even not-for-profits teaching people to code and all this. And again, don't get me wrong, I think it's great if you learn to code and if that's something that you're into, but this has changed things. And now, if your workflow is using pre-existing repositories of code, and you've got to figure out, from a creativity standpoint, or even like you're talking about with the liberal arts, where you have to think critically or use philosophy or what have you, what is the best output that people need.
That's probably the thing that's going to be more valuable. And even then, from a creativity standpoint, there are a lot of things where I think anybody should probably bounce ideas off of these LLMs. One thing I will say, though, and this might be a little off topic, but I've been thinking more and more about how these big tech companies, and specifically people like Sam Altman, keep pushing the narrative of AGI, artificial general intelligence, and about how the technology actually works. If you dig into it from a critical-analysis, critical-thinking aspect, this technology doesn't think. It doesn't have human characteristics. It's a bunch of algorithms, and the transformer model can produce things that seem very convincing and look like thought. Even these new cognitive models actually say, yeah, it's thinking, but it really isn't thinking, it's processing. Right? And I would argue there's a difference. But if you dig into why they keep pushing that narrative, imagine, and I think it's just a matter of time, a few years from now something goes haywire, something happens.
What are they going to do? I could see this right now: Sam Altman is going to say, oh, it wasn't us, it was ChatGPT, it just developed a personality of its own, and blah, blah, blah. And if you think logically, the US is, I'd say, lawsuit-happy, you can sue somebody for anything. So this is their way of slowly introducing the idea that this thing isn't just an algorithm, that it actually has human qualities and it has faults too. And then they'll blame the technology for some of these aspects.
So anyways, that's a little bit of a side note, but definitely, it goes into the next article. Beyond having these one-person companies and being a lot more nimble, Microsoft has now been saying that AI colleagues are on the way, and specifically, all of the large language model companies are now talking about these AI agents.
And this is where we've talked about this before too, and I think we both agree on this: unlike with the Industrial Revolution, with this new artificial intelligence boom I don't see the jobs being created. It seems like there's going to be more job losses than anything else.
And it feels like Microsoft and other companies are pushing that the workflow is going to be one person managing AI agents that go and do work on your behalf, just processing things and so on. So again, I think the workplace is in for a reckoning right now.
[00:10:36] Speaker A: Yeah, that certainly could be the case.
I always think of things we can include after we've put together our show notes. Here's another thing people might want to take a look at, and I've just put it in our show notes.
There used to be a really great podcast, and this ties into what you're saying, Chris, called This Week in Google. But the TWiT podcast network has changed that podcast to be about intelligent machines and the Internet, so it's now called the Intelligent Machines podcast. It's really good, it's interesting. They do an interview with a big name and then they have the rest of their episode, and in March, I think March 12, 2025, they did an interview with Ray Kurzweil, who wrote the book The Singularity Is Near. And he wrote that book like 20 years ago, 2005 or something like that. And Ray's been right about a lot of stuff about when this was going to happen and when it was going to take off. Somebody actually went through his predictions at one point and figured out that he had like 85, 86% accuracy.
This kind of ties into the other article from Fast Company called "Microsoft Thinks AI Colleagues Are Coming Soon."
So, I mean, this is kind of to your point, or our point, that people will play a more supervisory role and they're going to have these AIs that act more like colleagues. The article doesn't really say whether the personality of the AIs is going to get better.
[00:12:09] Speaker B: Yeah, yeah, exactly. That's why I was saying, like, you're basically going to be a person who manages your AI agents or what have you.
And maybe, who knows, I think the expectation, it feels like, is that one person will be doing the equivalent of like 10 people just by using these AI agents or algorithms or what have you.
[00:12:31] Speaker A: Yeah, 10x. I don't know that we can 10x our productivity with AI. I'm not convinced about that.
[00:12:38] Speaker B: That's what they always, you know, they want to 10x everything.
So this next article, the one on politeness, I thought this was funny. And it's funny because, again, it's Sam Altman. The article, the TechCrunch one, talks about how he said you should stop being polite when you're entering your prompts. So don't put in please and thank you, and be a little bit less verbose.
And with that, I mean, he didn't quantify it, he didn't say how much it's costing, but I wouldn't be surprised. Again, as we've discussed before, the large language model does not understand English. It takes whatever content you're putting in, converts it to tokens, then tries to figure out what words should follow based on the transformer model, and develops an output. But I personally think it still probably isn't a bad thing to put in the please and thank you, because if you're looking at your overall habits just as human beings, if you start being less polite, it might pervade other parts of your life.
And I don't know. I mean, I don't think there's any science to this. I've tried it with and without, and I find sometimes when you put in the please and thank you, it actually is a little bit of a better output.
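The tokenization step described above can be illustrated with a toy sketch. Real LLMs use subword tokenizers (byte-pair encoding and the like), not this word-level toy, but the principle behind the cost argument is the same: every extra word in a prompt costs extra tokens to process.

```python
# Toy word-level "tokenizer": assigns each new word the next free integer id.
# This is a simplification for illustration, not how production tokenizers work.
def toy_tokenize(text, vocab):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # grow the vocabulary on the fly
        ids.append(vocab[word])
    return ids

vocab = {}
plain = toy_tokenize("summarize this report", vocab)
polite = toy_tokenize("please summarize this report thank you", vocab)
print(len(plain), len(polite))  # prints "3 6": politeness costs three extra tokens
```

Multiplied across hundreds of millions of prompts, those extra tokens are the compute cost the hosts are speculating about.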
[00:14:15] Speaker A: There was another article I saw recently, by kind of an anti-AI individual, and they're advocating not to use please and thank you.
For whatever reason, and it wasn't a very good one. And I've always wondered, like you: if we normalize being rude to the AI and you get in that habit, is that going to impact how we treat each other?
[00:14:43] Speaker B: Yeah, I mean, again, I'm not an expert on sociology or psychology or anything, but in terms of human behavior, I would not be surprised, right? We're creatures of habit, and if you start being rude to one thing, you might start being rude to people in general.
[00:15:04] Speaker A: Right?
Yeah. So I've always wondered if it's better just to be polite.
Anyways, interesting idea. I still use the polite things and I'm fine with that, I think.
[00:15:19] Speaker B: Yeah, for sure.
[00:15:20] Speaker A: We kind of have a few articles on kind of AI and creativity.
So we have another one here; Fast Company seems to be ruling the day.
They have an article called "Adobe Introduces Created Without Generative AI Label." Now this is interesting because there's already a label that exists out there called Not By AI. I don't know if you've seen this one; I should send it to you. Have you seen it?
[00:15:50] Speaker B: No, I haven't.
[00:15:52] Speaker A: I can send it to you. Basically, it's an open source icon pack, so you can put it on your website and say it's not by AI, it's 100% human.
So this is more of a formalized kind of approach, I suppose. But it's interesting.
What's the purpose of this? Is this just more for assets, or for work that people deliberately did not use Adobe's AI to help with?
[00:16:23] Speaker B: Yeah, and I think what Adobe is trying to do is just distinguish between human versus AI-generated, and their rationale is to preserve transparency, protect creative authenticity, and give customers more clarity about where the content came from and what they're consuming. There have recently been certain trends, like everybody creating their own action figures, and there's that one cartoon style that OpenAI obviously didn't get permission to use, but they just said, to hell with it, they ask for forgiveness instead of permission. And so for the creatives, especially artistic people, this is one way that you can preserve that copyright and demonstrate ethical design.
So this could be, I mean, I kind of like this open source thing that you talked about, the Not By AI. But even just recently, I don't know if you saw this, there's a video, and we've talked about this concept before, where somebody uploads an image into one of these AI image generators, then keeps feeding the output back in, image after image. The idea is like taking a photocopy of a photocopy of a photocopy, and it totally changes the person. From the first image to the hundredth, the person is totally different: the skin tone, the body shape, it's all changed. It's crazy what comes out of it.
So again, yeah, I think this is probably a good thing, just to demonstrate that something is authentic, that it isn't AI-generated.
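The "photocopy of a photocopy" effect described above can be modeled very simply: treat each AI re-generation as a lossy step that adds a small random error to some attribute of the image, and the errors compound instead of cancelling out. This is a toy simulation, not an actual image pipeline; the attribute name and the 5% error strength are made up for illustration.

```python
import random

def regenerate(value, strength=0.05, rng=None):
    """Model one lossy re-generation as a small random multiplicative error."""
    rng = rng or random
    return value * (1 + rng.uniform(-strength, strength))

rng = random.Random(42)  # fixed seed for reproducibility
attribute = 0.50  # e.g. skin tone of the original image, normalized 0..1
history = [attribute]
for _ in range(100):  # one hundred generations, as in the video described
    attribute = regenerate(attribute, rng=rng)
    history.append(attribute)

drift = abs(history[-1] - history[0])
print(f"after 100 generations the attribute drifted by {drift:.3f}")
```

Each individual step barely changes anything, but because nothing anchors the value back to the original, the random walk drifts arbitrarily far, which is exactly what the video shows happening to the person in the image.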
[00:18:33] Speaker A: Well, yeah, and that's good. It's interesting to me. I think this is something that's in an existing app, Adobe Fresco. I'm actually not familiar with that one. Yeah, the painting and drawing app. So this is something that's built in.
Interesting. I guess the benefit of this label is that it's endorsed by Adobe. So you can only put it on if you haven't used AI. I mean, how does it know? I guess it's not available if you used its AI generator.
[00:19:05] Speaker B: Well, probably. Yeah. I mean, I would.
[00:19:08] Speaker A: I guess that's the advantage over the open source one, is my point. Because there's no check.
Yeah.
[00:19:14] Speaker B: Like in this case, I think because you're using that specific app and there's no way to use AI in that process, they can actually confirm it. I mean, who knows.
I remember years ago, and even just now it's happening, right? All the misinformation and the videos and images that come out of there. But blockchain technology could be something you could use too, to confirm whether content is legit and look at the actual source, the core source.
So.
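The provenance idea mentioned above rests on one primitive: a chain of hashes where each record's hash covers the previous record, so any tampering with history changes every later hash. Here is a minimal sketch in Python using only the standard library; a real provenance system is far more involved (signatures, distributed ledgers, and so on), and the record contents are invented for illustration.

```python
import hashlib

def add_block(chain, content):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + content).encode()).hexdigest()
    chain.append({"content": content, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256((prev_hash + block["content"]).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_block(chain, "original artwork, no generative AI used")
add_block(chain, "color-corrected by the artist")
print(verify(chain))   # prints True: the recorded history is intact
chain[0]["content"] = "made with AI"
print(verify(chain))   # prints False: tampering with any record is detectable
```

This is why the hosts suggest it as a way to confirm the "core source" of an image: you cannot quietly rewrite an earlier step without invalidating everything recorded after it.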
[00:19:51] Speaker A: Yeah, and blockchain might work better on something that's not a payment system, because that would scale better. The reason blockchain hasn't worked on the payment side, at least as I understand it, is that it's trillions of payments and everybody has to have this ledger. It doesn't really work.
[00:20:10] Speaker B: Yeah, yeah, exactly.
[00:20:13] Speaker A: Okay, well, that's interesting. I do appreciate that Adobe tries to provide that transparency with a tool. So that's a positive.
There was this really interesting article that you found from Harvard Business Review on how people are really using generative AI. This kind of reminds me of our earlier discussion today about startups, the unicorn solopreneur, and 10xing through productivity. I'm using it for productivity things as much as I can, and I'm trying to learn about it, but it's interesting to know how people are using these tools on average.
And so this article tries to cut through some of the hype. Rather than attempting grand, transformative projects, most people are using it for what they call incremental efficiency: drafting emails, summarizing documents, creating outlines, and brainstorming ideas.
And even if I look at my usage overall, most of it probably falls into those buckets.
It also talks about how it's adopted in institutions, like in an organizational sense, so culture, digital literacy and the buy in, kind of depending on where you work.
So kind of a nuanced look at how it's integrated quietly into the workplace, probably mostly with Microsoft via copilot is my guess, but interesting.
[00:21:56] Speaker B: Yeah. And again, we'll include the link, but definitely take a look at the charts they have in there that summarize the use cases from 2024 to 2025. I found it interesting: in 2024, the number one use case was generating ideas. If you look at 2025, there's more of these personal and professional support things, so the number one thing this year was therapy and companionship. Now that's become number one.
Number two is organizing my life, which is a totally new use case. Finding purpose is number three. So the table is really interesting for how people are using the technology. And I've seen some people, even influencers and so on, and it kind of makes sense if you think about it, Eric: if you upload a bunch of information, you could conceivably use the large language model, and now especially with the memory retention and other things, it could become your life coach, or for your business, your executive coach, or people use it for therapy or what have you.
[00:23:19] Speaker A: So I have a question about that. I suppose it depends on how these things are coded. In my experience, and I don't know what your experience with AI is, I've uploaded screenshots of a website page that I've made, for example. I have a portfolio, a design site that's separate from my main site, and I'm redoing some of the structure, and I'm asking, should I center the logo and title to align with the centered headings at the top of every page, or should I have it left-aligned to match the navigation? I'm brainstorming ideas and thinking about information hierarchy, and I uploaded, here's my thoughts, here's some screenshots, what would be the best way? It's really mealy-mouthed. It never really provided me with a solution, and it was like, your insights are terrific, clearly you've really thought this through. I don't need someone to tell me that I thought this through. I understand that. But I have options here: what do you think would be best? And sometimes I can get AI to do that, I guess, if you really force it. But a lot of the time it's more placating.
[00:24:22] Speaker B: Yeah, well but that's again like Eric.
[00:24:24] Speaker A: Maybe real therapists do that, I don't know, but I think they're supposed to tell you if you're wrong. Right? Like, no, you got this wrong, actually, this would be better. That's what I would want from a real manager. I wouldn't want the AI to steer me in a direction where all my wacky ideas are accepted.
[00:24:41] Speaker B: Well, and that's the whole problem. That's a perfect example you've given, right? Having that domain expertise and that knowledge is where you're going to be able to call BS on whatever it's generating in terms of the output. If you have that subject matter expertise, you'll be able to point that out. And if you don't, and this goes back to the earlier part about generating code, I mean, yeah, it's great. I still think it's coming in terms of having these one-person unicorns. I don't think it's there yet, but you could probably get into the millions or even maybe hundreds of millions, depending on the idea. But if you don't know the code, you aren't going to be able to fix it. And there could be some things that.
[00:25:43] Speaker A: That maybe get too complicated to fix, like if you never understood the fundamentals. So this is where I'm thinking about learning code. People say engineers will be supervisory at best, and I guess I don't see it.
I mean, even if AI, as I understand it, can code with direction as well as the best programmers now, not just average ones, if you don't know how to tell it what to do, it may just go off and do something preemptively. And I understand it's going to get better.
But you can't just say, I need a million-dollar company, I need an app that does this, and take a nap, and then you wake up and it's done. And if you don't understand the fundamentals, you're not going to be able to direct it. So there's value in learning the concepts of code. Again, what it takes away is the necessity to memorize all of these technical things. It's like how a language might be easier to learn if we could just memorize the words more easily but already understood the sentence structure. Right? I use Adobe all the time. Adobe Photoshop could be way simpler. I've used Canva, I've used design tools that are better when I need to do something simple. I mean, Adobe is so complicated that the barrier is using the software, not being a designer.
I know lots of people who have great design taste, and they don't want to be a designer because they're afraid of the product they'd have to use, the standard one. And I think it can be similar with code, and it might be similar with using some of the AI interfaces too. Right? So I can see it for translating code. Let's say I don't understand this, I don't do programming in Ruby on Rails, I need this in JavaScript, translate it for me, and then I can see something in a language that I understand, and then I direct it. But there's still a ton of human intervention there. It's possible that it will just be able to go and do all this agentic stuff, but I do get the impression that that takes a lot longer than people think. And by a lot longer I'm not saying a hundred years, probably more like five or ten.
But that's not happening right now. I feel like in the current moment there's a bit of hype.
[00:28:10] Speaker B: Yeah, absolutely. And I mean, on the flip side, here's kind of my counterpoint to that.
Let's say, for instance, I was just chatting with some people who are developers, at a conference and so on. Anyways, I gave them the example of WordPress. WordPress powers about 25% of the.
[00:28:33] Speaker A: Even more sites.
[00:28:35] Speaker B: Maybe even more. Yeah, whether it's a quarter or 30%, it's a huge portion. And then they started telling me, well, yeah, but it's vulnerable to attacks because of PHP and blah, blah, blah. And I'm like, hey, look, I know how to code, and I'll be the first to admit it with the WordPress installations I've seen. We've done it with clients and stuff, and sometimes those installations are like 300 or 400 megabytes.
Does it need to be that big for just a simple website? Probably not. But things have changed. There are two things. One is high-speed bandwidth: we have access to high-speed Internet.
[00:29:16] Speaker A: Yeah, it doesn't matter anymore.
[00:29:17] Speaker B: It's fast, it doesn't matter. Then the other thing is the processing, the computational power. So with those two things together, it doesn't matter anymore, as long as the interface works for whoever your customers or end users are. They don't care what the code looks like. And then I gave them another counterpoint: look at a company like Airbnb. At one point, if you had the iOS app, the Android app, and the website, they didn't communicate with one another. So if you booked a place on the iOS app, it might not refresh right across the others, so there was a bit of a lag.
And so what they did was, once they raised that money and got that billions-of-dollars valuation, they kept things going but rebuilt everything from scratch.
Right. And so that's a nice problem to have when you are making a lot of money. And actually another thing I should probably.
[00:30:29] Speaker A: That's easier to do with AI, is my point. Not to interrupt, but you make a really good point. There's all this focus on startups and unicorns, and I'm thinking, big companies that already have a huge foothold also have access to this. Before we move on, there are a couple of things I want to mention, two aspects: creativity, and then disruption for creativity. There's this idea that people say, oh, AI does everything better, I guess I just won't do things that are creative anymore. I just don't see that happening. Even if the AI is better, people are still going to pick up a paintbrush or code, because otherwise they're going to go crazy and there's going to be a revolution. Maybe demand will go down for some things, but I don't see a disappearance, because that would be anathema to the human condition. The second is that big companies have access to this stuff too. So startups, yeah, I guess it levels things. But it's like the baseball stadium problem: if you stand up in a baseball stadium, and I've mentioned this analogy before, you have a better view. If everybody stands up, it's the same, and the big companies can stand up and they're already taller than you. So WordPress could say, oh yeah, by the way, we have great coders, maybe they reduce their workforce, I have no idea, but they're still going to be coding this stuff, at least within GitHub Copilot. And then they're like, oh yeah, all those vulnerabilities.
Well, yeah, the AI is helping us solve all of them now with PHP, or we translated the base code really easily because we have this tool, so now it's JavaScript-based or built on a different standard. There's nothing wrong with PHP, by the way, it's pretty performant, it's just been around for a long time. So maybe they can make templates faster, or find efficiencies in the code to make the installs smaller for countries that don't have very good bandwidth. So the startups have an advantage, but then WordPress has all this stuff too.
[00:32:32] Speaker B: Yeah, exactly.
[00:32:33] Speaker A: It's not like they're in the stone age and everybody else has found AI and they haven't paid attention yet. These people are hyper-aware. Microsoft, you know, they're deploying it faster than anybody, and they're one of the biggest companies in the world.
[00:32:49] Speaker B: Yeah, totally. I've never seen a huge company like this move so fast. But to that end, even going back to the creativity side, I look at this.
So Microsoft owns LinkedIn. I mean, have you noticed? Because you can now use the embedded AI to generate LinkedIn posts.
Most of these posts look very similar.
[00:33:13] Speaker A: Yeah, you can tell. I mean there is a pattern.
[00:33:18] Speaker B: Yeah. And so like I, I don't know, I mean sure you can use it for ideas and stuff but like this is where I, if you want to go, if ever that pattern, like one of the telltale signs. I mean I, I've used EM dashes before but I just look at it like everybody's using an EM dash, the emojis. Like I was one of the first ones to tell people like I find social media posts, you know, like they say a picture is worth a thousand words so you should use emojis. Now I look at every friggin social media post has emojis in it. And so if everybody is doing one thing, go the other way yourself apart. Right. And so I, and I don't know what that looks like but I mean I've always. One of my sayings that I say is sick when other zag. And so you got to do the complete opposite of what everybody else is doing. Like, I mean I personally like that whole trend that happened with the.
What is it called?
The. The trend with this whole action figure thing. Like I didn't do it. I mean I. I'm like this is. It looks cool or whatever but like everybody's doing it. Like. And I just one thing. I don't know, maybe I'm stupid this way too Eric. Like, but I'm thinking about like the computational power and like I don't. What is this going to accomplish? Like it just seems kind of weird and stuff. And then they were all looking the same too.
[00:34:45] Speaker A: That falls under number six on the "how people are using gen AI" diagram, which we'll link to. And number six on the list is fun and nonsense.
So I think that says it right there.
[00:34:59] Speaker B: Yeah, yeah, exactly.
[00:35:03] Speaker A: Hobbies and recreation make up a huge percentage of this stuff. Not actual productivity.
So.
But yeah, I agree with you. I would go where others are not. That's a famous Arnold Schwarzenegger approach too:
everyone was buying houses in California, so he bought an apartment building instead,
to generate income. So for social media, I think a better way for us: I'm probably not going to use emojis anymore; I'm going to stick to pictures. I'm always interested in the rougher written posts, maybe with a list, but with less information packed in. You and I did this early, right? We had the emojis and a lot of detail in the post, but now I avoid those because that's all I see. Not that I look at Twitter much anyway.
So even a one liner with a really cool image or a graph probably gets more attention. Or maybe breaking it up into a thread with no emojis and an image per item, so people can get the summary. Just something different, because I avoid that style now. In fact, I barely look at social posts anymore, because I assume they're all AI written. They have that therapy talk kind of language, overly positive, and there's nothing wrong with being positive; I'm just saying it's not how people talk, so it looks fake.
[00:36:29] Speaker B: Yeah, totally. And again, I think what's happening is that people are getting lazy, right? They just copy and paste; they don't even think about it. It's unfortunate. From 2024 to 2025, idea generation went from number one down to number six.
So literally, people are probably just copying and pasting the output rather than using it for ideas to build upon and put their own spin on before throwing it out into the world. So yeah, personally, although I've also been really busy, I've just been trying to stay off, and I don't post very much. We'll probably get back into it.
[00:37:10] Speaker A: Should we, should we tie this in? I know I didn't put this on our show notes. Sorry to interrupt, but should we tie this into an article that I forgot, which is about how AI can dull your thinking skills?
[00:37:19] Speaker B: Sure.
[00:37:20] Speaker A: This was also from Fast Company. It was something like: studies show AI can dull your thinking; here's how that happens, and also how it can sharpen it. So the advice was things like: make time for strategic thinking; use it as a sparring partner, as in "I don't agree with that," pushing back on its answers rather than just accepting them.
The right use cases, that kind of stuff.
[00:37:40] Speaker B: Yeah, yeah, exactly. Even at this conference I attended, there were people saying, and this was just anecdotal, things like: I'm using AI to code every day, and I feel like I'm getting dumber; I'm not thinking as much about the code, just being lazy with it. So again, these are some of the aspects where, if we don't think for ourselves and question the outputs, it's really easy to slide. I look especially at the education sector. Sure, I'm the first to admit it's very hard to prove without a doubt that something is AI generated. But when the claims, or the points you're trying to make, aren't supported by research or legitimate sources, it sounds kind of eloquent, yet when you dig into it there's no substance.
At the end of the day, you're looking at what the end product is, and whether that end product is mediocre. Actually, I read an article I don't think we included in this; I forget which tool it was about, it might have been Perplexity or Anthropic's Claude. But when you're doing the deep research feature, the outputs they're generating are around 10,000 words.
[00:39:18] Speaker A: Yes.
[00:39:19] Speaker B: Just to show that off. And nobody's going to dig through those 10,000 words, but when you have a whole 10,000 words put together, it's going to look impressive whether it actually is good or not. Who knows, right?
[00:39:33] Speaker A: Yeah, right. That's fair.
[00:39:35] Speaker B: Yeah, yeah.
And then I guess the last two things, and I don't know if we need to dig too much into these, but the OpenAI stuff. I think it's kind of cool that OpenAI has finally given a prompting guide and also a guide for building AI agents. In fact, we were even joking about this: hey, OpenAI, if any of you are listening, you're more than welcome; the videos you've produced for OpenAI Academy, I think Eric and I could probably do a better job. But anyway, they actually have videos now, and they're trying to teach people how to use their technology, which is great, especially coming from the people who are creating that technology.
[00:40:29] Speaker A: Yeah, I think they have a great guide for the prompting, like you said, and this one is for GPT-4.1. Is that out yet, or is it only in the Pro subscription? We have the Plus subscription, and I don't have GPT-4.1; I have 4 and 4.5.
Yeah.
So I guess this model is not out yet, or not for most users.
[00:40:55] Speaker B: Yeah. But I think the principles still hold, like the chain of thought prompting and so on. I can't say with 100% certainty, but hopefully they've provided at least some general guidelines that work across different models, or at least help you think about how the transformer model that powers GPT behaves.
[00:41:24] Speaker A: Well, it comes from them, right? And they give examples of prompts, and I appreciate that they're embedded so you can actually cut and paste them and test them yourself. It's really, really well done.
[00:41:33] Speaker B: Yeah, totally.
[00:41:35] Speaker A: And it's better than what I could do, and it comes from the company that makes it. In fact, I think OpenAI has done a really, really good job with the documentation. And you found a lot of that stuff.
I think it's super helpful. I don't know if we've talked about this other one: A Practical Guide to Building Agents. This was the other cookbook. It's a bit beyond me, because there are a lot of Python scripts I'm not familiar with, but it's all about how you would go about building an autonomous AI agent.
So there's a lot I can't speak to, but it's interesting that they had this whole section on guardrails. They said well designed guardrails help you manage data privacy and risk. So it's very prominent: an entire section on guardrails for AI operating independently.
Yeah, clearly this is on their mind for a variety of reasons, right?
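[Editor's note] As a rough illustration of the guardrail idea discussed above, not code from OpenAI's guide, a minimal input check might look like this in Python; the function names and PII patterns here are hypothetical placeholders, and a real agent would have far more thorough checks:

```python
import re

# Hypothetical sketch: screen user input for obvious PII before an
# autonomous agent is allowed to act on it. The patterns below are
# deliberately simple placeholders, not production-grade detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def guardrail_check(text: str) -> tuple[bool, str]:
    """Return (allowed, reason); block text containing obvious PII."""
    if EMAIL_RE.search(text):
        return False, "contains an email address"
    if SSN_RE.search(text):
        return False, "contains a possible SSN"
    return True, "ok"

def run_agent_step(user_input: str) -> str:
    """Gate every agent step behind the guardrail."""
    allowed, reason = guardrail_check(user_input)
    if not allowed:
        return f"Refused: input {reason}."
    # ...here the agent would call the model or its tools...
    return "Agent proceeding with: " + user_input
```

The design point is simply that the check runs before the model or any tool is invoked, which is why a guide about agents operating independently would make guardrails so prominent.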
[00:42:22] Speaker B: Yeah, for sure.
[00:42:23] Speaker A: Yeah.
I thought maybe we could end this episode with a tip. I don't want to dominate the tips, but I did have one based on something I've done recently.
So I have a website, my contact site, erickchristensen.net, and I use a template. It's interesting: I use WordPress.com to host the personal site, and my other site is on an independent host running open source WordPress. It's WordPress for both, but on WordPress.com I have an account where I can edit the template, add code, and all that. The template I use is called Assembler, which is the most open ended one; it's like a blank page, almost one step removed from developing your own template and saving all the template files yourself.
But one of the things I didn't like about it is that there's no hover effect on the navigation.
You hover over the navigation items and nothing happens. I like underlining the navigation on hover rather than changing its color; I use the color hover in CSS for links embedded in posts and such. Anyway, it was just really hard for me to find which element to change using the Inspector in Chrome. The Inspector, for people who don't know, is where you go to look at the code, the HTML and the CSS; you can actually make changes in the Inspector and preview what they would look like.
But WordPress has these big monster templates and big installs, like you alluded to. There are so many classes and divs that it's really tough to figure out. Am I supposed to be targeting the whole thing, or just the label class? It's not always clear; multiple classes make up even just the header.
So what I did was take the classes I thought were the ones I needed to target when adjusting the CSS code.
And I went to ChatGPT. I used the reasoning model o3, which is good for code, math, and analysis. And I said: look, this is what I'm trying to accomplish; I want the navigation to underline on hover. Pretty simple.
And I'm using this template, and I'm not sure which class I'm supposed to target.
And it actually explained that some of that code was legacy,
and said what you should do is just target both.
Okay.
And then it asked, did you want me to write the CSS code? And it did. I didn't like its formatting, so I changed it, not that it looks very good when I dump it in as a code injection anyway. And it actually solved the problem.
And this was something I'd been spinning my wheels on a couple of weeks earlier. Even though I'm relatively competent with CSS, I just didn't get it, and I didn't return to it. So there's an example of a relatively easy thing to do where it was acting as a kind of copilot on my behalf.
Super fun; I was super stoked that it worked. Although the first time I got this to work, it broke something else and I didn't understand why:
everything on the page was being underlined,
and I was like, something is wrong here.
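[Editor's note] A sketch of the kind of fix described above, with hypothetical WordPress block-theme class names; the actual selectors for the Assembler template would come from the Chrome Inspector, not from here:

```css
/* Too broad: a bare a:hover underlines every link on the page,
   which is the "everything was being underlined" mistake.

a:hover { text-decoration: underline; }
*/

/* Scoped: target both the current and the legacy label class,
   as ChatGPT suggested, so only nav items underline on hover. */
.wp-block-navigation-item a:hover,
.wp-block-navigation a:hover .wp-block-navigation-item__label {
  text-decoration: underline;
}
```

Scoping the rule to the navigation classes rather than to all anchors is what keeps the underline from leaking into links inside posts.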
[00:45:29] Speaker B: Yeah, yeah, yeah. Exactly.
[00:45:31] Speaker A: Yeah.
Well, that's it for this episode. I guess we will. We will chat again soon.
[00:45:40] Speaker B: Sounds good.