Episode Transcript
[00:00:08] Speaker A: Welcome to Examining, a technology focused podcast that dives deep.
I'm Eric Christensen.
[00:00:16] Speaker B: And I'm Chris Hans.
[00:00:24] Speaker A: And welcome to another episode of the Examining podcast, the technology focused podcast that dives deep. Today's another special episode. We're going to do an interview with our colleague Hadi Fariborzi at Mount Royal University.
So, Hadi, before we get into some of the details of your background, maybe just introduce yourself, your position, kind of what you do, who you are, so to speak.
[00:00:48] Speaker C: Oh, thank you very much and hello to your listeners. I'm very happy to have been invited on this. Glad to share my expertise.
I'm Hadi Fariborzi. I'm an assistant professor and I teach innovation and entrepreneurship here at the School of Business at Mount Royal University.
Yeah, so a lot of my research is on, obviously, innovation and small new ventures, specifically on their internationalization. But over the past three, four years, I have added a line of research and a whole lot of other work and teaching which is all concerning technology, and specifically AI, in the process of innovation creation, in the process of entrepreneurship, and of course a whole lot of that in research. Specifically in synthesis research: a lot of my research uses a methodology called systematic reviews and meta-analysis, which goes by other names as well, like evidence synthesis and so on. And because of my involvement with that, we have spent a lot of time developing methods and tools and platforms, and then used AI in building those platforms. Over time, other people started using the things that we created, so it turned into a community of researchers using it, which is called Hub Meta. I'm happy to explain more in any part that you think we need to talk about.
[00:02:44] Speaker B: Yeah, that's awesome.
[00:02:45] Speaker A: Well, that's a perfect segue. And so I'd love for you to talk about Hub Meta and how you use it for evidence synthesis. I mean, I don't like to make this about myself, but I'm particularly interested in that because, as a librarian, I've supported that. I used to support health sciences a lot, working with people to do the replicable searches, and how the databases work, and working through all the different keywords with the researchers and refining it.
You know, I even trained at the Evidence Synthesis Institute, which is run out of the University of Victoria. And I really liked that line of research, A, because you don't have to go through ethics, which is difficult and frustrating, but also the impact is high. I'm kind of curious if you could talk a bit about Hub Meta and what drew you to co-found and build that platform?
[00:03:37] Speaker C: For sure, yeah.
So it all started for me as a solution to my own need. When I was doing my PhD, I had to do a meta-analysis, and back when I was being trained by our professor Piers Steel at the University of Calgary, the meta-analyses that we read, published back in 2014, 2015, were like, you know, 40 papers, 50 papers, a hundred of them.
And it was kind of like, you know, still a manageable size to do using pen and paper, you know, brute force, reading a bunch of papers again and again and coding and extracting information.
And then I started collecting and working on my topic, and very easily, without even much effort, I had 100, 150, and it kept growing: the more topics we touched, the bigger the numbers. And then it was not just one simple one-line relationship between variable A and variable B; it was a multiplex of many different variables. So it was no longer something that you could handle with Excel or our old tools.
And so this was a frustration that we had, and I actually had no choice but to build my own tool to analyze what I needed for my PhD. So I created an early version and had that paper for the dissertation. And then, through some casual hallway talk with Piers, who taught me everything about meta-analysis and is a very well-known meta-analyst himself.
He took an interest in the project that I had developed. I had created an early web-based version of a meta-analysis platform; I hired, basically, a freelance programmer. And then Piers saw that and said, okay, this is fantastic, let's join forces, let's start this. And we did. We worked together since 2018 and built many different versions of Hub Meta. We kept improving it, basically for our own projects. But then, because we needed to train more and more RAs, we had to record a bunch of videos on how to do meta-analysis using Hub Meta. I put them on YouTube, and then, organically, more and more users started using Hub Meta until one day we looked at the platform and saw, oh my God, we have like 5,000 users. Like, 5,000 people are using this niche thing that we had developed. So apparently there is a market. And over the years we have integrated more and more AI into it, and last year we completely reshuffled and rebuilt the whole platform.
Now AI is very native to all aspects of this, and now we are doing some really amazing research with that.
[00:07:19] Speaker A: Do you think that AI will fundamentally change meta-analysis methods moving forward? I mean, obviously you've integrated it into your tool, but as you know, there are a lot of manual processes in traditional ways of doing systematic reviews, even scoping reviews, even at small scale. Do you think AI will really enable more people to do that kind of work?
[00:07:43] Speaker C: 100%. I'll explain why. So before, let's say you're doing a topic, let's say the effect of drinking tea on your well-being, on your health; this is my classic example in all my meta-analysis workshops. If you go to the literature, you can easily pull up like 20,000 papers that might capture one or both of these things. So that's your earliest start. I mean, from a librarian perspective, you know that it's not that difficult to end up with such big numbers.
And then the big question becomes, okay, so how do I go from the tens of thousands of potential papers that we find on any topic to the actual final set of papers that I need to analyze and extract data from?
How do I go from 20,000, 50,000 to 3,000?
Before AI, we had to do this manually; we had to go through every single one. Then, with a little bit of machine learning, not exactly generative AI, we made it a bit better so that it would learn from our decisions, and then we would only look at around half of the papers, depending on the topic. So that was a big time saver.
But now with AI, what we have built, for example on Hub Meta, is that you define like 4, 5, 10 different AI bots, you give them the inclusion and exclusion criteria, and within an hour or two they go through all the thousands of papers that you have and they all submit their vote. They say, is it a yes or a no? And then what the platform gives you as a researcher is the average of the votes of this team of AIs. And for each of these AI bots, you can define a little bit more instruction, like, you pay attention to the methods part, you pay attention to that part. You can play around with all of that. But essentially what happens is that four or five AI bots, all of them using generative AI with extremely capable models, are given the abstract or title or even the full text of the paper and asked: given this inclusion criteria, do you think this paper should be included or not? Just say yes or no.
If it is just one, it may not be as accurate. But when you have like five of them and four of them say yes, you kind of say, okay, this is probably a yes. So then you include all the ones at a very high threshold, like above, I don't know, 75% agreement, and reject all the ones on the low side. And this cuts the job of the researcher from looking at 50,000 entries to about 1,000 or 2,000 of them, the ones the AI bots were not quite sure about. So you just go and look at those. So this is really huge. Basically, within hours you go from collecting your data to having a set of final papers. And then when you actually have the papers that you want to extract data from: this used to take, you know, over the years I have hired many, many people and trained them on extracting data from papers.
Even the best of them sometimes could not do more than three papers per day to extract full information from. Now what we have done on Hub Meta with AI is that it has the PDF, it has all the data forms, all the things that we need to extract from the paper, and it extracts from a batch of papers, like 200 of them, within one or two hours. And then all the researcher has to do is go in and verify, compare what the AI has extracted with the paper. So we are talking about days and weeks as opposed to months of work. We are talking about hundreds of dollars of AI credit costs compared with tens of thousands of dollars. We have spent those tens of thousands of dollars of research money on our published meta-analyses in the past.
A whole lot of it is just having people to extract information, just the process I explained to you. So it's a fundamental change.
You can massively expand the number of articles that you target, and that really changes the insights you can get from the literature.
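The voting scheme described above, several AI bots each returning yes or no and a human reviewing only the contested middle, can be sketched in a few lines. This is a hypothetical illustration, not Hub Meta's actual code; the vote format and the exact thresholds are assumptions based on the 75% agreement figure mentioned.

```python
# Illustrative sketch of ensemble AI screening (hypothetical, not Hub Meta's code).
# Each "bot" votes 1 (include) or 0 (exclude); the average vote decides routing.

def route_paper(votes, include_threshold=0.75, exclude_threshold=0.25):
    """Route a paper based on the fraction of bots voting to include it."""
    agreement = sum(votes) / len(votes)
    if agreement >= include_threshold:
        return "include"        # strong agreement: auto-include
    if agreement <= exclude_threshold:
        return "exclude"        # strong agreement: auto-exclude
    return "human_review"       # bots disagree: a researcher looks at it

# Five bots screening three papers against the same inclusion criteria.
papers = {
    "paper_a": [1, 1, 1, 1, 0],  # 80% agreement
    "paper_b": [0, 0, 0, 1, 0],  # 20% agreement
    "paper_c": [1, 1, 0, 0, 1],  # 60% agreement: contested
}
decisions = {pid: route_paper(v) for pid, v in papers.items()}
print(decisions)
```

With five bots, a 4-1 vote lands above the 75% threshold and is auto-included, while a 3-2 split falls into the middle band that only a human reviewer resolves, which is what shrinks 50,000 entries to a couple of thousand.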
[00:12:59] Speaker A: Thank you.
[00:13:01] Speaker B: Actually, out of curiosity, Hadi, for the models that power your technology, what type of models are you using for these bots?
[00:13:14] Speaker C: That's a very good question. The thing is, first of all, on the Hub Meta platform we don't have one model. I have dynamically coded a test of all the 300 models that are available on OpenRouter, which is basically an interface to all AI models: Google's Gemini, OpenAI's, Claude, all of them.
And for each task that we do on the platform, one of these models works. For example, when you have abstracts of papers and the job of the AI is just to say yea or nay to that, that's a fairly easy job in the AI world. So even smaller models, which have less compute per se, are able to do that very well. In those cases we offer our users the choice of models from Cohere, from Qwen, from OpenAI, from Google's Gemma or Mistral. Specifically for some more involved tasks, for example extracting the tables and information from inside PDFs, that's where we go to the Claudes and more involved models like Gemini 3.1; those are the ones where we actually needed a thinking model. And for all of these, in the process of development we have a whole lot of tests that we run. We keep running them with our samples to check the accuracy, check the speed, how well it does in different tasks. And it's only through that rigorous process of testing that we land on one model or provide a list of options to our users. But I would say, basically, there are different levels of tasks that you have to do for research.
For some of them, like basic checking for the presence of some data in a paper, the smaller models are just fine. When it is a more involved process, you also need a bigger model that is more capable. But more important than the model, from my experience in my research, is the prompt, the instructions that you add to the model. For the extraction part, which is the most difficult aspect of the platform that we've built, we have spent weeks just on that one single task, running multiple tests to find out which models work best. Maybe we tested a hundred different models and many different versions of the AI prompt. It's quite the system to get to the final prompt: agentic workflows compared with single-shot workflows, examples, all of that. So a whole lot of research goes into refining how to make this AI work. It's a very involved process that is not just about which model, but also the instructions, how we use it, the temperature, all those different aspects that we can leverage here.
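The per-task model routing described here, cheap models for yes/no screening and stronger models for PDF extraction, might look roughly like the sketch below. The model names, task labels, and routing table are illustrative assumptions, not Hub Meta's configuration; it only shows assembling an OpenAI-style chat payload of the kind a gateway like OpenRouter accepts, without calling any API.

```python
# Hypothetical sketch of routing each task to a model tier, with low temperature
# for deterministic screening. Model names here are placeholders, not real IDs.

TASK_MODELS = {
    "screen_abstract": "small-fast-model",       # yes/no on an abstract: cheap model suffices
    "extract_tables":  "large-reasoning-model",  # pulling tables from PDFs needs more capability
}

def build_request(task, text, temperature=0.0):
    """Assemble a chat-completion payload for the model assigned to this task."""
    return {
        "model": TASK_MODELS[task],
        "temperature": temperature,  # keep outputs reproducible for screening votes
        "messages": [
            {"role": "system", "content": f"Task: {task}. Follow the criteria exactly."},
            {"role": "user", "content": text},
        ],
    }

req = build_request("screen_abstract", "Title and abstract of a candidate paper...")
print(req["model"])
```

The design point is simply that the routing table, the prompt, and the temperature are all tunable per task, which is where the weeks of testing he describes would go.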
[00:17:11] Speaker B: You're taking the next one?
[00:17:14] Speaker A: Well, no, you're going to have to keep going. Just.
[00:17:16] Speaker B: Okay, all right, no worries.
No, I appreciate that. Hopefully I wasn't getting you to divulge your secret sauce.
[00:17:26] Speaker C: It's not like that. If we ever get into it, I'm happy to explain. I mean, with Hub Meta, what we have developed, it's not a commercial platform for us. We are an open science platform.
We thought very long and hard about whether we were going to make it paid subscription software or make it free. And we ended up making it mostly free. Basically, the only thing that people pay for is just the AI credits that they recharge; we just charge them as much as they use the AI, the more involved AI. Otherwise, collecting information, running the deduplication, the analysis, all of that on our platform is completely free.
And we actually have no problem open-sourcing our whole code. The only reason we don't do that is a concern that if we put it out there, a lot of other companies could take it, build their own version, and put it behind a paywall. That's just not what we are after here. Instead, we have it hosted and working well and open to the community.
Every single part of Hub Meta is built on complete transparency of our methods. We explain exactly how, for example, our deduplication algorithm works, how each of our analyses works; every single part of our platform, we have completely shown. We even have a lot of blogs and such to show how each part of the platform works and to explain all the details.
At the end of the day, Piers and I, who created this, thought that hey, we are researchers, we are paid by taxpayer money, we are building this, we are giving it to the public.
We ultimately will publish all the methods that we have.
It just takes the time of the peer reviewed process. So we have absolutely no problem with sharing all the secret sauces up there.
[00:19:58] Speaker B: I really appreciate your approach.
I've known Piers for years, I think probably prior to, I would say, 2016 or so.
I remember him, you know, wanting to work on something and develop some sort of software.
And so it's, it's fantastic. I like your approach.
You know, the open science and sharing and not putting it behind a paywall. That's quite often one of the biggest issues in academia; it's just how the system is set up. You've got to get everything published, it's using, like you say, taxpayers' money, and then it goes behind a paywall and you have to pay who knows how many thousands.
[00:20:39] Speaker C: Exactly. Yeah. There are a lot of evidence synthesis platforms just like that.
And we just thought that this is really not our goal. We want more and more people to use it and publish more research. The best thing for an academic is for their research to be recognized, for other people to see that this research I am doing is actually doing something. So this is really a dream for us, if other people start working on it and publishing with it. I'll tell you, even with our old platform, more than 100 meta-analyses have been published using Hub Meta. A whole lot more are in the process.
So that really gives us a lot of pride.
We are very proud of that result. We hope we can take it to thousands of systematic reviews. Yeah, so that's kind of the approach here.
[00:21:49] Speaker B: Well, that's awesome. I also like what you shared, because I could totally picture this. I mean, I've run into Piers in the hallways and he's pretty sociable. But this is where the importance comes in of having those serendipitous conversations in the hallway that might lead to something. Even this semester with my students, one of the things I'm having them focus on is, you know, the government of Canada has asked their workers to return to the office, and I don't think they've done a good enough job explaining why, but that's beside the point. I mean, this is proof that sometimes you just run into somebody, and who knows what might come out?
[00:22:31] Speaker C: I think that's quite right, actually. I really believe in the power of these short conversations that we share. Some people are really reluctant to have small talk or talk about what they're doing, but I think there is always tons of value in that. First of all, in feeling that you're not working in isolation, in a bubble of your own, not being part of a bigger community.
That sense of community really brings value to work.
So in any office type, in any organization. That's really how human beings have to be: you have to be interconnected. But at the same time, it's about understanding what Covid taught us, understanding the fact that people really need that hybrid model.
You know, they really need a day or two in the week that they can work on their own hours from home, perhaps.
And so, you know, they find better work-life balance. Fully at home, I would say no. Fully in office, I would say no. I think the best approach is hybrid. There is good in both of them; we definitely need both. That's my opinion.
[00:24:02] Speaker A: It's interesting that you say that. I was just listening to a podcast by one of my favorite academics, Cal Newport, of course, who wrote the book Deep Work.
And that book, it makes me feel old, because he just mentioned that it's now a decade old. But someone asked him about hybrid work and in-person work, and he had said the exact same thing: kind of the ability to have those conversations, but then also dip out for focus.
In his latest episode, he was kind of revisiting the idea of in the office, out of the office, with the back and forth.
[00:24:39] Speaker C: Yeah, absolutely. For me, for example, when I work from home, what I get is that deep focus that my job absolutely requires.
Many times, I would say, in research we have to work endlessly towards a goal, and you don't really know what awaits you in the end. You're just chasing and chasing to see, okay, is there anything to report here? Do I see anything in this data?
Can I really make any contribution? Sometimes it takes weeks or months to get to that point, and that needs a lot of focus.
Whereas I also need to be a human: to teach people, to know how to talk to people, to know what matters in the world. That requires me to connect with people, to see how other people work, to talk with my students, to be in an environment which really takes me out of the isolation of my own mind.
And that totally never happened during COVID, because of the online-only aspect of work. I mean, it's just not the same. It would never be the same, even if we all put VR headsets on; it's not the same as seeing someone up close and talking with them. And I don't think I'm being traditional at all. I think that's completely just the nature of human beings.
[00:26:26] Speaker A: Yeah, I agree.
[00:26:29] Speaker B: Oh, that makes sense, I guess. One other question, Hadi, we kind of touched on this, but what drew you the most in terms of evidence synthesis as a methodology?
[00:26:45] Speaker C: This actually is, it has actually very deep roots in that.
I always had this aching for clarity, to understand what is going on. That example of drinking tea and health is very telling for me, because I'm a big tea drinker, and I remember growing up, I kept hearing in the news, oh, the scientists at that university found that drinking a lot of tea has a positive effect on your health. Then a few days later it has a negative effect. And it always bugged me. I thought this was science: you actually went and did an experiment and tested something and found something out. So what is it, then? And then I was introduced to meta-analysis and this idea of evidence synthesis, which is actually a method created just for that kind of controversy, just for that kind of clarity-making, to understand why both of those answers might be true. It's just that you need an explainer; you need to know the environment that the test was done in. So that 10,000-foot picture that you want is very important. Evidence synthesis, to me: individual research, primary research, is like going into a forest and looking at each of the trees, studying them, their diseases, their growth and all of that, which is 100% absolutely essential and important.
But in addition to that, you need that 10,000-foot satellite image to understand the borders of the forest, to see the patterns in the growth of these trees when seen all together, the patterns of disease that you see in them. You need both of them. Because of my personal interest in having that clearer picture, that picture from above, putting the pieces of the puzzle together.
I try to be that evidence synthesis person, because it's just so satisfying to have that feeling that I've read the whole literature and have an opinion, or can explain why we see all these different results here. That's very, very satisfying to me.
[00:29:26] Speaker B: Yeah, no, for sure. I mean, it was kind of cool seeing that article that came out through MRU about you, how you mentioned that, you know, I'm a researcher, and even though science isn't my domain, I can still take what I've learned and see why tea is good or bad at the same time.
[00:29:49] Speaker C: Yeah, that's exactly it. It's pretty amazing that evidence synthesis enables you to actually do that. And one other thing is that, as a researcher, you will be trained in one specific area, and you go and read the literature of that. You can ask me anything in the very niche topic of internationalization of new ventures, or internationalization in general, and I would know a thing or two to say. When it comes to entrepreneurship, I would still know a little bit. But if you go a little bit further from that, my depth of knowledge becomes lower and lower and lower.
So there's no way that I can do research in, you know, psychology, this, that, a lot of other cool topics.
Whereas if I develop the methods, if I become the methodologist and give other people the tools to create their own research, they know the area, they will do that research. I have given them the tools and methods. I can read their work, I can be involved in their work in some way, I can be the methodologist on their team, and with that I can do a whole lot of great research and learn a whole lot of cool things. So that's really another reason. Because the time of any one single researcher is very limited, and the extent of their knowledge is very limited.
This is just, you know, sort of my Trojan horse, if you will, to go into other fields just to observe how other people work and you know, see how cool their research is.
[00:31:39] Speaker B: It's very cool. I like that. The Trojan horse.
[00:31:42] Speaker C: Yeah.
[00:31:43] Speaker B: You know, I, I've been following a lot of the work that you're doing, like through your courses. I think it's really impressive, you know, since you've started as a full time professor at Mount Royal. But you know, now you have multiple courses from what I've seen, like you have the spearheading and navigating product launch, you got your how technology enables innovation, you have the artificial intelligence for business. I've run across like we haven't had a chance to chat much But I've run across many of your students that are taking the courses and they, they're always blown away with how, what, how much they learn from it. But I just want you to kind of just share your thoughts on how you've integrated generative AI into your courses and any kind of key takeaways for sure.
[00:32:31] Speaker C: Yeah, like, I mean, one of the things that really was always in my mind even before the days of AI, is that a good entrepreneur, a good innovator, needs to feel comfortable with technology.
That's a given.
It's what people expect of the new generation of graduates that join the job market: they want to see them feel comfortable with new tech and take on new tools. It became even more important when AI came into play, because everybody knows how to open up chatgpt.com and put in a bunch of prompts, but not a lot of people know how to create something useful with that, how to actually use it to, for example, get insights from data or build something with it.
That was really important for me, and I wanted to share that. The way I approached this was that I started seeing myself as a student of AI, a student who keeps learning new things about AI. From the early days, two, two and a half years ago, when generative AI became really adopted, I kept learning and then turning that into what I teach in class. And I have really kept that approach in designing the AI course and, later, the technology course: I keep learning new stuff and integrating it into the coursework.
No two semesters in a row that I've taught these courses have been alike. I keep changing, I keep adding. Technically we are not allowed to change more than 20% of the content, but with the AI course, I don't change the overall aspects of what I'm teaching; all the tools, all the ways of doing things, a whole lot of that changes, because AI keeps changing. And I guess part of why the students are always excited about the AI course or the other tech course is how cool and how enabling the tools are.
Like in the block week course: we have a one-week course called How Technology Enables Innovation. In a matter of just the five days in class, students go from just an idea on paper to having a fully built online platform ready to serve customers, with a name, with a domain, with great graphics, with AI integrated, with a database, a complete platform. This used to take months and thousands of dollars. We do it with zero money.
And you know what I hope is that students always keep that as a skill with them so that, you know, they wake up one morning, have a cool idea, they spend a weekend building it. And I keep reinforcing that in the class and many times they actually do. And it's, it's pretty cool to watch that.
[00:36:10] Speaker B: Yeah, no, for sure. I mean, years ago I actually taught a course, and we have certain people in common, like Mohammed Kayani.
And so he was on sabbatical, and that's where I kind of got exposed to how technology is empowering entrepreneurship. At that time it was just, you know, the tools, the low code and no code. And now, the generative AI.
[00:36:39] Speaker C: I can only imagine. We all graduated from low code and no code into this. Mohammed and I are in touch weekly; we keep updating each other on the new stuff we teach and the new stuff we build.
But yes, certainly it has been quite the movement from that. And I kind of feel like this is not a niche anymore. I keep telling my students that you can't just say, I will be the non-tech person of the team, there will be other people to take care of that. That is no longer acceptable, because of how easy AI makes it for everyone to have the basics there.
Sure, there are different levels. Some people have it consume all of their days; they become the masters in creating that. But you can't really say, I will not learn this because there will be other people on the team who will learn it. You've got to learn it. You have to know how to use AI to do basic business functions: automating a bunch of stuff, analyzing data that you see, getting insights, building an idea that you have very quickly. All of these are very important skills, so you have to have those skills. If it were up to me, I would make the AI course that I teach mandatory for all business students.
We're getting there. We're getting there. We're talking with the dean to have an intro version of that across the business school. But for that, we first need to make sure that we have enough people teaching it. So it's a big ask.
[00:38:26] Speaker B: Yeah, no, absolutely. I think that's something probably all universities are struggling with because whether we like it or not, the students are using it. I see it everywhere.
And you know, I was just chatting with one of my students, Jonathan Stady; he's actually in charge of the Gemini rollout here at MRU.
And it was funny because, you know, here I am, and I can't remember if I told you, we haven't seen each other in a while, but I'm doing my PhD right now. For one of my courses I'm taking qualitative research, and I decided, because otherwise I'm probably not going to have an opportunity, to go and interview former students of mine that have graduated. They took my course in business communication, where I taught them how to use generative AI for writing.
It was interesting just speaking with them, and I have much more appreciation now, going in on the researcher side.
But just the way that they're using it, I, you know, having taught them how to do prompting, like their prompt was so basic and it's not what I expected. Like the, and so one of the things I'm kind of, I'm calling it, it's, it's like they're using, let's say Chat GPD or Copilot or Gemini or whatever these tools are, they're using it like Google and, and then Jonathan and I, we were chatting about it and because as you probably know, like the, the Gemini is going to be rolled out to everybody now starting in May. And so, but if you look at the interface like Google made it famous having a one line, you know, thing where you put in something. But Gemini, open AI, all of them, they just have this little one line thing. But, and I don't know, like maybe just psychologically that's what's limiting people to just put a one liner.
[00:40:33] Speaker C: Yeah, that's always an issue in my class too. So we spend a whole lot of time working on our prompting skills, and I keep showing them how different the results can be. My approach to teaching is that I open up my own ChatGPT or Claude, whatever, prompt it and ask it for insights or to build something. And I actually give them the prompt, I copy it and give it to them, and we talk a whole lot about what I see. And then my approach is to go to the students and watch how they are doing it.
And when they hit a roadblock, I always tell them, okay, this is probably how I would prompt it to get past this point. Like, say, explain to it what the situation is, what your problem is, give it some examples of where to go from here, and explain what you want in clear words.
And I kind of feel that, you know, in the end, at least what I see in the projects is really amazing, high-quality work.
I hope that has an effect on their general skills of commanding AI, because I can observe how they used to use AI at the start of the course and how they are using it at the end. I hope this is something that stays with them after they finish up and go.
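The prompt structure Hadi describes, explain the situation, state the problem, give an example of where to go, and say clearly what you want, can be sketched as a small reusable template. The wording and function name below are hypothetical illustrations, not material from the course:

```python
def build_prompt(situation: str, problem: str, example: str, ask: str) -> str:
    """Assemble a prompt with the four parts described above:
    context, problem, an example of the direction to take, and a clear task."""
    return (
        f"Context: {situation}\n"
        f"Problem: {problem}\n"
        f"Example of the direction to take: {example}\n"
        f"Task: {ask}"
    )

# A hypothetical use, in the spirit of the classroom scenario:
prompt = build_prompt(
    situation="A student team is building a pitch for a campus venture.",
    problem="The value proposition reads as generic and undifferentiated.",
    example="Something concrete like 'fresh meals in under 15 minutes'.",
    ask="Rewrite the value proposition in two plain-language sentences.",
)
```

The point is less the helper itself than the habit: a one-liner leaves the model guessing, while this structure forces the student to articulate context, constraints, and the desired output.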
[00:42:19] Speaker B: Even one thing that I started doing this semester, Hadi: I would always ask them to give me screenshots and show me how they're using the AI.
But after taking this qualitative research course, because I've never been on that side, I've only done focus groups for industry consulting work.
One thing that really stood out to me was the fact that you keep a research log, and you're constantly reflecting throughout the project.
So, inspired by that course, I decided to implement that for their group project. They maintain a group research, well, AI usage log, and in there I ask them how they used the AI, like the main prompt.
But then I think the most important part, and I just thought it was something simple until Jonathan mentioned it to me: what he really appreciated was the reflection part. Because you could hand people prompts, there could be prompt libraries.
But reflecting on the process and understanding why things happened, he said that was so powerful for him, and he's actually going to go and share that with the administrative folks here at Mount Royal. I honestly thought it was a nothing burger.
I just thought it would be something good for the students, just to document how they're using it.
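A group AI-usage log like the one described could be as simple as an append-only spreadsheet with the prompt, what happened, and a reflection. This is a minimal sketch under that assumption; the field names and function are hypothetical, not the actual template used in the course:

```python
import csv
import os
from datetime import date

# Hypothetical columns: one row per AI interaction worth recording.
LOG_FIELDS = ["date", "tool", "main_prompt", "what_happened", "reflection"]

def log_ai_use(path, tool, main_prompt, what_happened, reflection):
    """Append one entry to a group AI-usage log kept as a CSV file,
    writing the header row only when the file is new or empty."""
    needs_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if needs_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "main_prompt": main_prompt,
            "what_happened": what_happened,
            "reflection": reflection,
        })
```

The reflection column is the part Jonathan singled out: the prompt alone can be copied from a library, but writing down why a prompt worked or failed is where the learning shows up.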
[00:43:49] Speaker C: No, no, no, you're totally right. This reflection on how it works, I can try to think about that and see how I can also implement it somehow in my course. Because it's very important to keep reflecting on what happened during the course of the conversation. You know, I keep telling them that they have to think of these AIs as our new companions, and we have to keep understanding: how do I talk with it so that it does all the good things I want from it and doesn't do anything bad? That mindset always has to be there so that you can actually build. So I guess it's something that we are all learning at the same time as teaching. That makes it cool. That makes it not boring.
[00:44:53] Speaker B: Yeah, no, absolutely. I mean, for me, with the advent of generative AI, I feel invigorated because, like you say, every semester is different. I've always had an appreciation for technology, and now every course is basically a technology course, whether we like it or not.
[00:45:17] Speaker C: Exactly. Yeah, that's absolutely right. I mean, there is no way around it. I can't imagine teaching any research methods now and not spending half of my time teaching tech and teaching AI. This is just part of the nature of things. And it's not just prompting or using ChatGPT; it's a whole lot of other things. In all parts of our analysis, we keep using AI. The whole nature of research thinking, all of that has completely changed.
Before, we used to spend a whole lot of our time basically working on how to frame one tiny sentence.
And if you think about it, that was not research.
When I was spending hours building those two-sentence paragraphs in a manuscript, I wasn't really, you know, finding something scientific or communicating it.
So I don't think it's such a bad thing to ask AI to write, to do the communication part. We do the research part, the insight part, the thinking part, and then command our companion: okay, sit down and write for me. So that part is not the part that really bothers me. The part that bothers me, when researchers use AI, is a dumb use of AI. It's just sitting down and saying, go and do this research for me, find the resources that I need to cite, without having read those things.
So that's, you know, that's important.
[00:47:20] Speaker B: Yeah. You've probably even seen those articles now where there have been papers that were peer reviewed, supposedly, but then within the document, let's say some journal article, they put in a prompt.
[00:47:35] Speaker C: Yeah.
[00:47:35] Speaker B: And probably used white text, right? So those researchers who are peer reviewing, who are not allowed to throw it into an LLM, they're throwing it in there anyway, and yeah, they're getting caught.
[00:47:47] Speaker C: Yeah, that's really, I mean, it makes me sad to see a dumb use of AI when AI is so capable and so great. Honestly, right now, if you prompt it well enough, AI can actually write so human that no one in the world can guess it was written by AI. Like, you know, even theoretically it's impossible to detect.
So I guess, if we are at that point, we have to say, okay, let's fight another battle. It's not about human-written or AI-written.
It really doesn't matter at this point. Research is something else. Research is the thought, you know, the critical thinking, that matters.
[00:48:43] Speaker B: Yeah, well, and like you say, from a tool perspective, it's incredible. I had a student, he just recently won in our Launchpad competition, and he was showing me: he's working with his accountant to file his taxes for his business, and his accountant is asking him for every transaction. So he had Claude go through his inbox, scrape everything together, and put all the invoices into a spreadsheet.
You could click on it, and it actually downloaded the invoice and put it in a Google Drive.
It would have probably taken him weeks, maybe even a month, who knows, because he has so many transactions. But he did it very quickly.
[00:49:26] Speaker C: Like, I mean, I have to do the same, because we have research accounts and we have to report all the costs. Every month I have to report, I don't know, 30, 35 research transactions, each with its own invoice. And this is something I absolutely hate: opening up single PDFs, seeing which company each is from, the date and the amount.
So this round, I asked Claude: this is the folder, go and list them, then create a spreadsheet, clearly say which invoice is which, and rename the files properly to match. And boom, within like two minutes, everything was done. It used to take me two, three hours every time. So it's a very, very different world. I mean, these little time savings are important, because we can do a whole lot more now.
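The bookkeeping step Hadi describes, list a folder of invoices, build a spreadsheet, and rename the files consistently, amounts to something like the sketch below. It assumes the vendor, date, and amount have already been extracted (by hand, or by an AI assistant reading the PDFs, which is the hard part Claude did); the filenames and function are hypothetical:

```python
import csv
from pathlib import Path

def organize_invoices(folder, extracted, index_name="invoices.csv"):
    """Rename invoice PDFs to a consistent date_vendor_amount scheme and
    write a spreadsheet (CSV) listing them.

    `extracted` maps each original filename to a (vendor, date, amount)
    tuple pulled out beforehand. Vendor names are assumed filesystem-safe.
    """
    folder = Path(folder)
    with open(folder / index_name, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "vendor", "date", "amount"])
        for original, (vendor, day, amount) in sorted(extracted.items()):
            new_name = f"{day}_{vendor}_{amount}.pdf"
            (folder / original).rename(folder / new_name)
            writer.writerow([new_name, vendor, day, amount])
```

Even as a plain script this turns the two-to-three-hour monthly chore into seconds; the AI assistant's contribution in the anecdote is reading each PDF to produce the `extracted` mapping in the first place.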
[00:50:27] Speaker B: Yeah. And on the things that you actually enjoy, not,
[00:50:31] Speaker C: You know, the paperwork, the dumb work that you don't like.
[00:50:35] Speaker A: Yeah.
[00:50:35] Speaker B: Well, that's awesome. Well, maybe let's go into, like I mentioned to you, with our guests we do the rapid fire questions. So again, it's up to you, you can expand on them, but usually they're short questions.
So you ready for the rapid fire?
[00:50:53] Speaker C: Let's go. Yeah.
[00:50:54] Speaker B: Okay. Since you are a fan of tea, what is your favorite tea?
[00:51:02] Speaker C: Earl Gray, but the Persian type.
[00:51:05] Speaker B: Okay.
[00:51:07] Speaker C: Yeah.
[00:51:08] Speaker B: All right. Mac or PC?
[00:51:15] Speaker C: 100% Mac. I can't say more than that. 100% Mac.
[00:51:22] Speaker B: Standing or sitting desk?
[00:51:25] Speaker C: Standing desk.
[00:51:27] Speaker B: What's your favorite car?
[00:51:32] Speaker C: Favorite car.
You know, I like Mercedes.
[00:51:40] Speaker B: Okay.
Ebook or paper?
[00:51:43] Speaker C: Ebook.
[00:51:45] Speaker B: What's your favorite open source tech?
[00:51:51] Speaker C: Lots of them. But I'm really appreciative of, you know, all the research libraries that people have written in R for their analysis.
It's not just one, but one that we keep using in meta-analysis is called metafor.
And it's fantastic that this person has written this over like 10 years, and it is open source, and we can go in and see the whole of its code. That's one example. The other one is Zotero, which is a reference manager. It's open source. In the process of building Hub Meta, I learned a lot from their code about how it manages articles and such.
[00:52:46] Speaker B: Yeah, I'm starting to get into Zotero myself just for.
[00:52:50] Speaker C: And it's open source, so you have the whole source there.
[00:52:53] Speaker B: That's.
[00:52:53] Speaker C: I, I really appreciate that.
[00:52:55] Speaker B: Very cool. From a learning perspective.
Synchronous, asynchronous or hybrid?
[00:53:07] Speaker C: Definitely in person.
I mean, async has its own benefits, like the wide reach and, you know, people learning in their own time. But for most topics, in person is the best approach.
You know, if it is not in person, sync or async doesn't really matter much to me. It's mostly the same thing, except for some interaction and collaboration that you can have in class, in sync. But I would definitely say in person has significantly higher value.
[00:53:49] Speaker B: Yeah, absolutely.
What's your favorite web browser?
[00:53:54] Speaker C: Chrome for sure.
[00:53:56] Speaker B: Okay. Do you like VR, AR or both?
[00:54:03] Speaker C: Neither.
They gave me nausea.
I mean, I would see them as cool extensions and cool gadgets, not life-changing tech. At least at this time.
[00:54:22] Speaker B: All right, and the last question we have is who inspires you?
[00:54:28] Speaker C: Oh, who inspires me?
Well, a lot of people. Mostly people who stand by their values and are not afraid of being judged. That's really, you know, amazing. There are a lot of people who fight for their values and get punished, you know, getting jailed fighting dictators and all of that. I really have a lot of respect for them. Those are the people that inspire me.
[00:55:07] Speaker B: All right, awesome. Well, thanks for participating in the rapid fire.
And I, you know, I really appreciate you taking the time to join us for this conversation.
[00:55:17] Speaker C: Oh, absolutely.
[00:55:18] Speaker B: In terms of if people want to find you, what's the best way to, to get a hold of you?
[00:55:23] Speaker C: Like, I mean, I always check my emails. The best way is, like many people do, to drop a note in my Mount Royal email. I kind of feel like I have a duty to answer all those emails and respond.
It just sometimes takes some time for me to get back to people, but I absolutely answer every single email.
[00:55:48] Speaker B: Yeah. All right, sounds good. Okay, well, thanks again, and we'll be sure to share the details in our show notes.
[00:55:59] Speaker C: For sure. Thank you very much, Chris. This was fun.
[00:56:02] Speaker B: Yeah, for sure. Awesome.
[00:56:04] Speaker C: Cheers.