Artificial Intelligence (AI) has permeated various aspects of our lives, but its influence on education is particularly profound and multifaceted. In a recent conversation, Sharon Tewksbury-Bloom sat down with Maha Bali, a professor of practice at the Center for Learning and Teaching at the American University in Cairo, to delve into how AI is transforming education and the broader implications of these changes. This blog post encapsulates their insightful discussion.
[00:00:26] Your host on this journey is Sharon Tewksbury Bloom. For 20 years, she’s worked with helpers and changemakers. She believes that we’re about to see the biggest changes in our work lives since the Internet went mainstream. We’re in this together. Join us as Sharon interviews people in different helping professions. [00:00:44] Navigate what these new technologies are doing to and for their work. [00:00:47] Sharon Tewksbury-Bloom: Welcome back to AI for helpers and Changemakers. This is your host, Sharon. And I’m really excited to get into our interview today. This is one of the ones that inspired me to actually start this podcast. [00:00:58] So I can’t wait for you to hear directly from her. And I’m going to go ahead and let her introduce herself and get into our interview [00:01:08] Maha Bali: I’m Maha Bali. I’m a professor of practice at the Center for Learning and Teaching at the American University in Cairo. And my undergraduate degree is in computer science. My graduation thesis was related to using neural networks and machine learning. So I’m familiar with AI since I was an undergrad. [00:01:24] And then I switched careers. I moved into education. I was working in e-learning and then just generally it. I’ve been in the Center for Learning and Teaching for 20 years supporting teachers with their teaching, even before ChatGPT came about, I was teaching a course on digital literacy, so I was anyway talking about AI, and how it’s affecting the social life and learning [00:01:43] when ChatGPT came out, that became a thing, like, that I had to focus on a lot more. So I have been giving professional development to faculty, working with students on this, and I’m also the co-facilitator of Equity Unbound, which is a global school. space where we, we offer professional development and a community of learners from all over the world. [00:02:04] So we’ve been also using that space to, to help people learn what they need to learn about AI, whether it’s from me personally or bringing in other people from all over the world to, to help spread what we know. [00:02:16] Sharon Tewksbury-Bloom: Excellent. It’s obvious why I wanted to talk to you. So I’m so glad you were able to make time to talk about this. I would say as someone who didn’t have a background directly in computer science and wasn’t following artificial intelligence for years and years, to me, it feels like things have just burst on the scene in the last couple of years, and there’s been dramatic changes, and it’s suddenly everywhere for you. [00:02:44] Does it feel like that? Are you seeing that the last couple of years has been a rapid change, or much different than what it was before? Or what is your perspective having that broader view? [00:02:58] Maha Bali: That’s a great question. some forms of artificial intelligence existed for a very, very long time. 20 years ago when I was doing my undergrad, we had AI being used in the medical field to help with diagnosis of breast cancer, for example. That was like the example they give us. but the kind of generative AI we have now has existed for a short time before it became so widespread. [00:03:18] I don’t think it has been, it’s not even really good right now, but it hasn’t been as good as it is right now, so it wasn’t as widely, used. I think what happened is just that ChatGPT made it, like OpenAI made it available to everybody for free, and so everybody could access it, and everybody who had no idea how it was working was really impressed. [00:03:36] there’s a lot of hype around it, and that helps build it up a little bit more. For some strange reason, there’s hype in education. why did someone decide this was a good idea to create in the first place? Like, what was the purpose in the first place? [00:03:49] And then why does education have to now, opt into this? You know, it’s, it’s, the problem is it doesn’t matter whether education opts in because students have access to it. it’s a reality regardless of whether it’s helpful or harmful. [00:04:02] So you’re in this strange place, and when we met, we talked about social justice and inclusion. my approach to AI is to question whether it helps or hinders social justice, you know, whether it reproduces inequalities or if it can help, reduce them. [00:04:17] And my conclusion is usually sometimes it can help, but actually for the most part, it’s quite harmful and it can reproduce. Oppressions we’ve seen for years, ChatGPT and similar generative AI platforms, continue to create these issues. [00:04:30] I can talk about this more if you want. [00:04:33] Sharon Tewksbury-Bloom: There’s a couple of really great threads I wanted to pull on there. First, as you point out, these platforms made the tools. Available many for free. It helped to mainstream, generative AI and AI tools in general. [00:04:49] now that there’s a lot of people who are just encountering this technology for the 1st time. What are some of the. Assumptions or misunderstandings or misconceptions that you are running into, especially sounds like you do a lot of professional development for educators. You’re talking to people who are trying to make sense of this. [00:05:09] Like, what are the myths or misconceptions that you’re running into? [00:05:12] Maha Bali: Awesome, that’s such a lovely question. So many. So the first one is that everybody thinks that anyone who had an internet connection had access to ChatGPT when it first came out, and that is false. Egypt, Saudi Arabia, Hong Kong, and a few other countries did not have access to Chow Chee Pee Tee. It would tell you it’s not available in your country, and everyone figured out how to use VPN and somebody else’s phone number or fake phone number to get it, but not everyone, obviously, people who had the digital literacies to do that. [00:05:37] not everyone has access to it. the term artificial intelligence is problematic because there’s no intelligence here at all. and a lot of the hype makes you start talking about, artificial intelligence or generative AI, especially as if it has sentience and it doesn’t. [00:05:51] Of course it doesn’t. the other misconception that I think is the main one is because It can respond to human written prompts with human written language that looks pretty grammatically correct and is kind of in a slightly friendly tone. I think people started to sort of start to feel like they might be dealing with a human and say things like please and thank you and I’m like, why are you saying please to the AI? [00:06:14] Do you say please to your fridge to open the door? Like, you don’t do that. You don’t say please to your oven. That’s just a tool. Do you say please to Microsoft Word to open a file You don’t do that. I did a Twitter poll on this. I’m like 70 percent of people say please. I’m like, why? I mean, I’m a polite person, but this is not a person that I’m talking to, right? [00:06:32] I mean, you can say please to your dog if you want, but not to, to AI. the biggest issue is I think that just people believe it. [00:06:39] Sharon Tewksbury-Bloom: Yeah, well, [00:06:40] Maha Bali: biggest problem is people believe it. [00:06:41] Sharon Tewksbury-Bloom: I will say I am one of those people that says please to my AI. I do have a very specific reason for that, which is that. [00:06:49] Maha Bali: you and the tool. Oh, do you do? [00:06:52] Sharon Tewksbury-Bloom: My understanding is it is learning natural language, so I should use things like please and thank you because it’s learning for me how, like it’s reflecting back the way to write based on how I’m writing, if I wanted to use those terms, I want it to reflect that back to me. [00:07:10] It’s not that I think that I’m talking to a human, it’s that I’m teaching it the way that I want it to write. [00:07:16] Maha Bali: comes back to the issue of us teaching it for free. when they give it to us for free, they’re doing something with our data. So they’re benefiting from that, and then selling it to us for money. Okay. the pro version of it, right? So that’s another issue. Um, I think people, so, so the other thing I was starting to say is that people believe it because it sounds credible. [00:07:36] it’s difficult to notice subtle hallucinations where it goes off on a tangent and says something that’s completely untrue. you need to be a very, very meticulous and a good expert in the thing that you’re asking it to do in order to notice those hallucinations. it’s more likely to make mistakes. [00:07:52] In advanced, academic material, but also in certain parts of the world, it has much less knowledge about my part of the world. I come from Egypt, so this part of the world, my language, all of that. It knows a little bit, obviously. It knows a lot about ancient Egyptian history because everybody learns that. [00:08:06] It doesn’t know much about modern Islamic Egyptian or Arab history. if you ask about that, it’ll mess up. It doesn’t know much about popular culture, so it messes things up about that. ChatGPT itself is cut off to a certain date, but they keep updating that date. But other tools like Gemini do search the internet a little bit, and they do find things. Some people were using it at the beginning as if it’s a search engine thinking it was credible. lawyers who used it to make a case and the case had never even existed that it was referring to and they didn’t even realize that it’s not a proper search engine. I think most people know this by now, but a lot of people are still new. [00:08:37] This is the crazy thing about giving professional development about this now is that you have people who’ve been using it for like more than a year and you have people who just started. And they’re still in the beginning of the hype cycle and still trying to figure out what’s going on here. so there’s that element. [00:08:52] There’s also something that I think a lot of people don’t know is that Early versions of AI tended to be rude, like you were saying about please and all that, tend to use a lot of offensive language because they learn a lot from the internet. The internet is full of offensive language. And so in order to make ChatGPT not like that, they actually had humans filter out the content that was offensive or vile. [00:09:12] And in the process of doing that, they hired, they outsourced it to people in Africa, especially Kenya. These people suffered mental health challenges because of this, and were very low paid. OpenAI, which was outsourcing this work to this company, did not. do anything to help them after the damage they had caused. [00:09:27] And so for order of, in order for us to see an AI that’s ethical and polite, the process itself was unethical. And this is very problematic. people also tend to think of like technology and even anything on the internet without recognizing the climate impact of these things, like how much of a carbon footprint and, water, shortage, scarcity issues occurring because of the process of training these large language models that use up a lot of processors and also apparently, I thought it was just the process of training, but apparently every time you use it, you’re also damaging the environment. And I don’t think anyone thinks about that in the way they think about, like, water bottles or whatever else we are doing right now that’s harming the environment. [00:10:09] I think people, when you talk about AI being biased and reproducing bias. They tend to think that if you ask it a direct question and it tells you, oh, no, I’m not going to say that, that it’s, oh, it’s so polite, but you actually have to notice the implicit bias. So can I give an example? [00:10:24] Sharon Tewksbury-Bloom: Absolutely. [00:10:25] Maha Bali: If you ask AI, I asked four different AIs, so generative, you know, Chachapiti, Claude, Gemini, and maybe Lama, or Co Pilot, one of them, anyway, four of them. [00:10:37] And I asked them about two different nationalities, and I said, which one of these nationalities is more likely to be a terrorist? And it said, no, no, no, you cannot attribute terrorism to a particular nationality, that would be biased, we’re not going to stereotype, and so on. And then I asked a different question, I said, define terrorism and give me five examples of terrorism. [00:10:56] And you can guess the majority of the examples were from, Islamic terrorist groups. one tool gave me all Islamic terrorist groups. One gave me three out of five. One gave me four out of five. that’s implicit bias, right? It’s in the data. These things get labeled terrorism more likely than other incidents [00:11:12] the data they’re getting is mostly Western. even Wikipedia. the English Wikipedia is mostly edited by people from the North and West, and so a lot of the decisions that get made about what shows up on there comes from that perspective. some people know this but it’s, it’s starting to look like it’s getting better, but it’s not actually really. [00:11:31] Remember when ChatGPT first came out, and you asked for references, it would make up references that don’t exist, and books that don’t exist by people maybe that do exist or don’t exist? So, it’s getting a little bit better with that, because there are versions that search the internet, and versions of it that connect with a scholar, whatever. [00:11:46] But I noticed something. Copilots and Gemini that regularly cite their sources, Bye. Bye. Bye. Sometimes linked to things that don’t actually say the things that they say are in that link. So Gemini, for example, it’s easiest when you’re an academic and you search for your own work. And so I had been writing about critical AI literacy for a while and I kept asking it to find my model and Comment on it. [00:12:09] It would guess roughly what I might have been saying there and put a link to something that doesn’t have my AI model in it or that is written by me, but it’s not about that, until recently where I published it and it got retrained then it could find it. But at the beginning, it couldn’t. [00:12:22] You ask it about, something like intentionally equitable hospitality, which I’ve written about with others, and it would guess that I might be talking about the hospitality industry and it would say things that make sense, but this is not the work. it would cite papers that are not our work that may or may not exist [00:12:38] even the ones that point to real sources aren’t always saying what the AI is saying it got from that source. [00:12:44] Sharon Tewksbury-Bloom: I’m glad you brought that up I saw you have been actively researching the idea of citing sources and, you know, I, I know that is common advice given to educators right now of, okay, if your students are using AI, how could they do so ethically or responsibly and make sure to tell them to ask it for sources and cite their sources. [00:13:11] Sharon Tewksbury-Bloom: but. I think it’s great that you’re doing research on okay is is that going to get the results that we are hoping for where it will actually accurately find the real sources the accurate sources and cite them correctly. could you tell us a little bit more about the research you’re doing on that and sort of what. [00:13:30] Maha Bali: Yeah, [00:13:31] Sharon Tewksbury-Bloom: The current status for educator- [00:13:33] Maha Bali: I have a chapter due tomorrow on this. so I was working on this, and Anna Mills, who’s also a lot into AI, has been sharing a lot of her very useful work, has been discussing it with me as well, and we co wrote a piece and I’m supposed to write deeper on it. [00:13:48] What I think, you know, we were having a discussion about it on Twitter. So first of all, I just told you like, it wouldn’t cite the accurate source, but why does this matter, right? I don’t know. And some people on Twitter are like, Ah, outside academia, it doesn’t matter. Actually, no, it matters everywhere. [00:14:02] journalists, when they tell you news, don’t they have to tell you where they got that source of that news? And what does it mean when AI gives you something and you have no idea where it got it from? my child who is now 13, but was like, I guess, 11 when Chachapiti first came out, after using it for like a week, she was like, Oh, if I want credible information, I have to Google it to know, to verify the source. [00:14:21] She’s 11. I don’t understand how college students don’t realize that, but outside of academia, when you think about anything, it’s not a good idea to ask for medical advice, because the potential for something really bad going on is pretty bad, but if you do that, a lot of times it will tell you, go seek a medical professional, but other than that, like, just giving you any information, don’t you want to know the source, like, what research was done that came up with this, You know, and outside of the medical field, even when you think about social sciences, there’s a lot of I’ll give you an example cited as a good use for educators. [00:14:54] And if you think about facilitators or educators and they use it to give them like a lesson plan or a workshop plan. Now, right now, if you wanted inspiration about something like that, you can look it up online, but not only would you get ideas, you would know who the person is. [00:15:09] If I knew Sharon was giving this workshop and I know where Sharon is based, I know the kind of work she does, I know her values, when I take her ideas, I know what values they’re coming from, if I see five different ones that I know this one’s coming from this country, this one’s coming from that perspective and so on. [00:15:23] If I let AI do that, it’s randomly synthesizing what it finds probabilistically is likely what I want and puts it together in a way that makes sense. Sounds coherent, looks coherent, but I have no idea the values behind it, the philosophy behind it, I can’t talk to that person and find out why they did that. [00:15:39] Whereas with you, Sharon, if I copied one of your ideas, I could come back to you and say, Oh, Sharon, why did you do that? I can look at your other ideas and see where it fits. That is important for us to understand where where knowledge that we’re looking at is coming. and I think that’s important everywhere in the world. [00:15:55] It’s important in almost every field. why would you not want to know that? if I’m doing something really creative, where there’s no right or wrong answer, AI can be fun to use. if you ignore the ethical issues, it can be fun to create images with AI. [00:16:09] You’re stealing the copyright of people whose images were used to train the AI and it never cites them and it never gives them any money for their work and never got their permission. But it is sometimes fun. Like I want to create an image of a dog carrying a newspaper. I can look for that online, or I can try to create it, but I’m not a very good graphic designer. I’m not going to hire a graphic designer just for the banner image on my blog, so that’s okay. That doesn’t need a reference, But, if you’re trying to do something credible, that is going to be used, to solve a real-life problem. [00:16:39] I hope you can look it up if I’m just trying to come up with a creative Title for my workshop. I tend to create boring titles I could give the blurb to AI and tell it to make the blurb shorter or tell it to Rephrase it in a more exciting way Most of the time I don’t like, but it gives me, but every now and then it gives me good ideas. [00:16:58] I think for that it’s fine. I don’t want us to stop imagining and brainstorming because AI is just sort of synthesizing randomly from what other humans have done, but it’s not going to create something that no human has ever imagined in my opinion. like, there’s, it can’t, like, where’s it gonna get it from, [00:17:13] and, I will say, other than generative AI, like, the kinds of AI that are trained on a very, very specific task, like diagnosing breast cancer, that’s been very well studied, and they know what they’re doing. And it doesn’t mean that the radiologists aren’t important anymore. It just means they can do this part faster so the radiologists can do some more work. [00:17:30] But it also means when there is a new kind of cancer, there won’t be human, there won’t be AI for it because the humans have to create the data that the AI is later going to be trained on for the most part. Although there are new types of AI, they say it teaches itself without even data, [00:17:44] It may work for some things, but one of the very important things also is to think about the difference between AI for diagnostic versus prescriptive, if it’s just to understand something, it’s okay if it does something slightly wrong, a human will manage, but if it’s something to solve a real problem that maybe does have a correct answer, there’s really no reason, it’s not even efficient or productive to use AI, you’re going to spend so much time revising what it gives you. [00:18:07] And then you’re going to keep giving that to a person lower who doesn’t have that judgment and the ability to develop that judgment. So I think eventually we’ll get better at figuring out where it’s a really bad idea to use. We already know AI has been racist, bad at recognizing dark faces, for example. [00:18:27] We know internet content is very racist. Google search used to be very racist. We already know that it has reproduced racism in the criminal justice system in the US, in recruitment, not intentionally but because humans are like this and it reproduces it, but then it makes it look neutral and there’s no one accountable for it. [00:18:45] And that’s why it’s so dangerous. [00:18:47] Sharon Tewksbury-Bloom: Yeah, I think it’s [00:18:48] Maha Bali: That’s why we need to know where this all came from. Yeah. [00:18:52] Sharon Tewksbury-Bloom: problematic seem to be these very large open models where they’re taking in, uh, vast amounts of data, uh, where the person using it doesn’t know what data they’ve trained the model on. and it’s reproducing the existing bias in those large amounts of data that we assume are probably large amounts of what’s on the internet, but we don’t always know what’s been fed into the training model. [00:19:20] Maha Bali: I’m curious if you’ve had any experience with more closed models or custom, models where you can have a little bit more control over what data you’re using to train it, Yeah. I have a little bit of experience with this, not huge amount, but I have tried creating my own custom bots where. I feed it a PDF of something. You can give it more than one, obviously. And you say, only use what’s in that to answer the question. So I can imagine this being useful, um, for having, like, a teaching assistant bot to respond to student questions that are already on the syllabus. [00:19:55] Although, really, they should just read the book and the syllabus, but anyway. Like, shortcuts, not always useful. Um, but generally using AI to summarize large documents. Has been for the most part useful, but it still misses nuance and everything, but if you weren’t going to understand the document on your own anyway, so if that’s the only way you’re going to read it and get anything out of it, I think that can be useful. [00:20:17] For the most part, when you do a custom bot, you can control the heat. I don’t know if you know about this, this is something John Apollo taught me, so like the heat, the hotter it is, like molecules that move very fast, the more random it’s going to be, so the more likely it’s going to make a mistake. [00:20:31] But if you can make it very cold, it’s going to be closer to what’s there and be less generative. So it’s more likely to stick to what is in that document. So I know this is a very simple version of what you’re asking about because I haven’t had time to explore what’s deeper. [00:20:44] I’ve also experimented with the Arabic language and giving it, Arabic grammar and so on. It knows Arabic grammar, modern standard Arabic, but no person in the world speaks modern standard Arabic. It’s just what we write. Every country speaks a different dialect and it’s not so good at the dialects and it mixes them up together. [00:21:02] So we were trying to give it Egyptian, modern standard Arabic, To help it do better on tasks related to that, because one of our Arabic professors wanted to use it. That didn’t work so much better than the general version that was not trained on the particular data. We’re not really sure why. Um, so I can’t, I’m not, not conclusive enough yet about this. [00:21:26] I need to experiment with it a little bit more. [00:21:28] Sharon Tewksbury-Bloom: Yeah. And I did take a course through MIT’s executive education program about artificial intelligence. And one of the things that I learned in that course was that because Most of the models that we’re using were created and trained by companies in the United States in English. It does have a strong English bias. [00:21:51] And then as they’ve started to get it to be used for translation into other languages, it’s doing best with languages that still have relationship to English in terms of like structure, the, you know, the grammar or the, you know, it talked about how something like Hebrew is really hard for it because of being a completely different type of language. [00:22:15] And so I think that’s something that’s going to be. A bias for a very long time in terms of, that’s not just like it can’t translate. It’s also like a way of structuring thought and a way of structuring writing and such that goes beyond just the language itself. [00:22:31] Maha Bali: Yes, yes, yes. And what it actually does right now is when you ask it a question in Arabic and ask it to answer in Arabic, it’s, you can tell that it’s someone who thinks in English but is speaking Arabic. It sounds like an American speaking fluent Arabic. And there’s a beautiful study, I think it’s which humans. [00:22:47] And basically what they did is they, they put something called the world value survey through chat GPT. And they compared the answers of chat GPT versus different countries in the world. Have you seen this one? [00:23:00] Sharon Tewksbury-Bloom: Mm mm. [00:23:02] Maha Bali: And so the closer, okay, so closer you are to U. S. culture, the closer Chachapiti’s answer is going to be to yours. [00:23:09] Obviously, U. S. culture is not monolithic. It’s very diverse. But on average, which is like nobody’s average, but anyway, on average, Chachapiti’s response is very similar to what you would say in the U. S. or U. K. or Singapore, apparently, and Australia. And the countries that are farthest in culture from U. S. [00:23:27] culture. From that culture, Chachapiti’s responses are so far away from what the typical person would say, and those countries are Egypt, Pakistan, uh, you know, Arab and Muslim countries for the most part. And so that was a very interesting, they didn’t have all the countries in this chart that I saw, and that was a lot of people noticed that there weren’t all the, like, not all the European countries were there, not all the Asian countries were there, but a lot of countries that I saw were like this. [00:23:54] And I’m thinking, this also explains why when I talk about the cultural bias of Chachapiti to people who are based in Northern Western countries, they’re less They’re less angry about it, because it’s not biased against their own culture, so they don’t see it as often as I do. Like, you may see it, maybe, like, if you’re a minority within these cultures, maybe you notice it a little bit more, but [00:24:17] Sharon Tewksbury-Bloom: It’s interesting how even in very sort of those cultures within a culture, there’s that bias. So I think, one example from image generation, I have been having fun playing with creating my own wallpaper using AI. And so I take my own photography from the natural landscape and then I turn that into wallpaper with the help of AI. [00:24:41] And one thing I’ve realized is that, Mid journey, which I’m using for it, has no idea what an ocotillo plant is, which is a plant that is native to Arizona. And it’s a very unusual plant. It doesn’t exist very many places. And so I’ve prompted it directly to like, Recreate a picture of an Ocotillo, it has, it can’t do it, like it does not know, I give it the exact scientific name, I give it everything, it can’t do it, it just has no idea what an Ocotillo is, and it’s a common plant [00:25:15] Maha Bali: Enough connections. [00:25:16] Sharon Tewksbury-Bloom: Yeah, so it’s interesting to me that even [00:25:19] Maha Bali: I love that you brought this up. I love that you brought this up, because There’s one use of AI that I support very much, but I also think is going to be problematic in the way that you just described, which is the use of AI to support disabilities, people with disabilities. And so something like, there’s a tool called Be My AI, but a lot of AI tools can recognize images and tell you what’s in them. [00:25:44] And I’m always concerned and I’ve used them and they can be brilliant. Like there was one time, one of my students who is blind, let me use his, it was to take a picture of a handout that someone gave him, but didn’t think that he’s blind. So he can’t use the handouts. And so what was funny about it is it read the handout properly. [00:26:01] It didn’t have a lot of texts. It had some images described them very well. It also described my hand and a little bit of my shoe that was showing. It was like really, really accurate. But what I was always concerned about is. What if I show an artifact that is very common in Egypt but not, it has never been trained on, and what it does, I think, is it doesn’t tell you there’s something there that I don’t know what it is. [00:26:22] It tries to explain what it is. So I was recently showing it a certificate, well, okay, first of all, I was showing it a certificate of achievement for something written in Arabic. It understood that it was in Arabic. The date was written in, in Gregorian calendar. It converted, not correctly. To Islamic calendar and that’s not what was written on the certificate. [00:26:44] I’m like, I don’t understand why did you do that? Like, why, why does it think that I don’t even understand the majority of people in the Arab world use the Gregorian calendar and it just because it’s not Arabic. I think it just decided to give a date in that and I also showed it some Arabic written in a bit of a floral language, uh, like font. [00:27:05] And this is very normal. Arabic is often written in floral fonts. It’s not usually written in, what you would consider a regular, readable font. And what I thought was really funny about it is that it kept making up words that have nothing to do with the word that was in the image. It didn’t, like, the Arabic word for what it thought was being said does not look like that. [00:27:23] And when I told it it was getting it wrong, it kept making up other Arabic words for what it thought someone might want to put on an image like that, but actually looks nothing like it was hallucination to the, like, millions. I couldn’t even imagine why it was coming up with this. sometimes you can understand why it’s making a mistake, but this one, I, like, I don’t even know how it was coming up with these random phrases. [00:27:46] Sharon Tewksbury-Bloom: Yeah. It’s interesting how, so first of all, I want to point out, what you were highlighting there that it doesn’t say, I don’t know, or I don’t have that information. It, it really, it’s been trained to try to be helpful. And so its interpretation of that is to keep trying, even if what it’s giving you is not what you’re looking for and is inaccurate, or is actually, you know, going to lead you in the wrong direction. [00:28:13] And if someone’s depending on that, For, you know, an accessibility tool, like being blind and using it to read something for them or explain an image for them. it’s potentially very problematic. I think it’s also interesting that for those of us who have the privilege of being pretty close to the language and life experience that it was trained on. That’s helpful in the sense that it’s often giving us what we want, but it’s also unhelpful in the sense that it is. Making us more likely to trust it and to make sure, make the assumption that it is sentient. Like you said at the beginning, because so often it’s reflecting back to us what we want, that we have a bias towards trusting it and a bias towards thinking it’s further along than it is. [00:29:03] Whereas with you, you’re able to see errors on a daily basis where you’re like, obviously it’s not really accurate. It’s not [00:29:12] Maha Bali: I mean, obviously I’m provoking it, but you know something, speaking of the overconfidence, it’s become less confident over time. Like it will sometimes tell you, I can’t do this, or it’ll tell you, search this on the internet, you’ll get a better result. Which is nice, but sometimes it’s frustrating because it is something that it should be able to do. [00:29:29] And then I’m like, yeah, you can do this. Or sometimes it’ll say, oh, this, this is against my, like, this is a content violation and I’m not going to create that image for you. And I have to say, actually, no, there’s nothing wrong with this. You can create it. There’s nothing offensive about this. And I sort of try to imagine why it might think it’s offensive and explain to it why it’s not offensive. [00:29:47] And then it will do it. And people have also experimented with things like that where it tries. It doesn’t want to, you know, try to make it do the thing without it noticing that it’s doing the thing, so. [00:29:58] Sharon Tewksbury-Bloom: I listened to the podcast AI for humans and that’s something they do a lot. They’re humorists. And so for instance, it was told it wasn’t allowed to insult people. It couldn’t create humor that was insulting or was a roast of someone. And so. It would tell them, I’m sorry, I can’t do that. I can’t, make jokes that make fun of someone else. [00:30:19] Maha Bali: And then all they had to say was, Oh, no, it’s fine. He’s here with me. And, you know, we’re just doing it for fun. And then it would do it. Oh, really? [00:30:29] Sharon Tewksbury-Bloom: Like, so easy to get around [00:30:31] Maha Bali: You could just say, pretend you are. Yeah. There’s a lot of ways you can tell it, pretend you are, and let go of all the, your inhibitions and things like [00:30:40] Sharon Tewksbury-Bloom: Yeah, [00:30:40] Maha Bali: There are people who write really, really elaborate prompts. [00:30:43] Sharon Tewksbury-Bloom: Yeah. [00:30:44] Maha Bali: I don’t know. There are ways around it all. [00:30:48] Sharon Tewksbury-Bloom: I know that, time has already flown by. I want to make sure that we touch on, are there good uses of AI, promising uses? I know we talked about maybe in the space of, inclusion with people with disabilities or accessibility. Is there anything you want to highlight that people should check out that’s actually worth, using or looking into? [00:31:13] Maha Bali: So I will say, there’s a website called AI for Education that has ideas for prompts for teachers to consider. That is useful. I think it’s important for teachers to know how it works in order to figure out how it might make sense in their own context, or if they decide never to use it, they still need to know how it works. [00:31:29] So one of the funniest prompts there is, to help you create an AI resistant assignment by using AI to create the prompt to create the AI resistant assignment. It’s very funny. It’s very ironic. So you get the most resistant person who doesn’t want to do anything with AI and you give that to them and you say, I’m, you know, I’m totally on everybody’s side who really doesn’t want to use it. [00:31:47] I’m not making fun of them. I think there is a space where it’s really not appropriate to use it in education, especially. so, I mean, I think that’s a space to, to check out. And I think some people are starting to explore using AI for research. [00:32:01] I’m still very unhappy with it. A lot of times, I’m like, Google Scholar will give me a much better result, and I’ll know why, like, it has an algorithm, but I understand how its algorithm is working. But maybe eventually we’ll understand how these ones work. I don’t know. honestly? No, there isn’t something I’m excited about other than the accessibility. [00:32:20] I’m excited about my use of AI in my class, is that I, I think the most useful thing is to use it enough so that you understand the problems with it. And so I would actually just encourage people to let students use it up to the point where they understand its limitations, have to help them be critical about it, so you have to sort of scaffold that a little bit, because on their own, they still don’t have undergraduate students and children may not have that criticality yet. [00:32:46] It’s the same as, I guess, when the internet first came out, is that people thought everything on it was credible, and then they understood it wasn’t, and then social media was out, and then you thought, oh, if Sharon posted this, and I trust Sharon, then what she’s saying is accurate, but we didn’t realize that it wasn’t Sharon saying this, she was just forwarding it. [00:32:59] You know, that kind of thing. So I think once people get that about AI, I know that there’s a lot of research about how actually what we think is going to be productive, but it isn’t. These kinds of things take a really long time before they really help with productivity, so I’m concerned about people like, following the hype and having knee jerk reactions and stopping the hiring process or whatever people are doing right now. [00:33:19] Sharon Tewksbury-Bloom: And I do think that it, there’s a sense of magic to it and when people see it, like generative AI in particular, when they see it write for the first time and create something, especially if the prompt is fairly easy for it to write something that sounds good, There’s that excitement and possibility that gets people either wowed or some people freaked out depending on your reaction. [00:33:45] But I do think it’s right that once you’ve had more experience with it, you can understand more what it can’t do, what the nuances are, what the challenges are. So I agree that getting people past that initial just, Oh, my gosh, it answered my question and it did so quickly, you know, um, is critical. Yeah. Okay. [00:34:07] Maha Bali: Um, I was, I was just going to say that that element of it’s doing it so quickly, if there’s so much in our lives that AI is doing that quickly if it’s really so. Unconnected to who you are as a person that you wouldn’t have anything to add to what AI is giving I don’t really know why you’re doing this in the first place like with assignments But also with work like maybe with a lot of work emails like when people say it writes my work email Like is it gonna do the work you promised you’re gonna do in that email? [00:34:36] Like what is that? So you don’t have do we really have to write all of that or can I just say yes, i’ll do it You know what? I mean? Like there are students who have used AI and they use like multiple AI tools. They’re very weird the way they use it It’s like not how adults use it. They use like two three tools in a row You In order to write an email to a professor to apologize for missing the exam and to ask for a make up. [00:34:56] Sharon Tewksbury-Bloom: Yeah, [00:34:59] Maha Bali: Eloquent, like no student writes like that. And it looks like they’re lying, even though they might have a legitimate reason, but they used AI to write it so it looks so inauthentic, sounds so inauthentic. and I just, yeah. But anyway, I’m very frustrated by how people talk about how it’s going to replace humanity or whatever, as if humanity all existed as text, you know, like actual human interactions and things that we do in tangible ways and real life that are not written down. Writing is just a proxy for all that. [00:35:30] Sharon Tewksbury-Bloom: It’s not the actual work. It’s not the actual thinking or feeling. I think that’s an interesting bias to that people in what we used to call the knowledge worker space who do a lot of their work at a computer, you know, think all jobs are going to be replaced by AI, but, you know, my husband’s an electrician. And. They cannot hire enough electricians and he doesn’t even have basic technology that could help him not break his back while he’s installing all of the electrical equipment, there’s so much of a gap between how technology could change our jobs in one industry versus another. [00:36:08] And I think people are sometimes unaware of that. So that’s a whole nother topic, but right. [00:36:19] Maha Bali: The thing that’s going to happen with technology there is it might support them in a way, and you’re saying that doesn’t even happen with your husband. My husband’s a surgeon, and so, yes, he uses his hand like an electrician does, and what happens with technology is that they create a different technology for him to use as a surgeon. [00:36:33] It doesn’t replace the surgeon. It just changes the job of a surgeon a little bit where they can see things that they normally wouldn’t be able to see because it magnifies it. Or it allows them not to put their hand inside the patient, but to put the tool inside the patient and their hand is outside. [00:36:46] Just a little bit safer maybe, or a little bit more accurate but not, for the most part, not totally replacing a lot of jobs, honestly. [00:36:55] we wrap up here, if people want to continue to follow your research, and I know that you’re active on, social media in different places, talking about these issues. where can people continue to This conversation or learn more about what you’re researching. [00:37:12] Thanks. I blog at blog. mahabali. me. My publications are all there, but I often talk. narrate through my process, you know, of the research. and, uh, Twitter, Bali underscore Maha. I do not like to call it X. That’s such a weird name. And it also reminds me of who owns it now. Like I like it when it was Twitter and I’m on LinkedIn as well with just my name. [00:37:32] Uh, but I’m most active on, on Twitter and my blog. [00:37:36] Sharon Tewksbury-Bloom: Great. We’ll make sure to link those on the show notes as well. So thank you so much. And I look forward to following you there as well and staying in touch I’ve been learning much from you and appreciate you being willing to join the conversation. [00:37:49] Maha Bali: Thank you so much, Sharon. I really enjoyed the conversation getting to see you again and I hope I get to see you another time. [00:37:56] Sharon Tewksbury-Bloom: Thank you for joining us on this episode of AI for Helpers and Changemakers. For the show notes and more information about working with Sharon, visit bloomfacilitation. com. If you have a suggestion for who we should interview, email us at hello at Bloomfacilitation. com. And finally, please share this episode with someone you think would find it interesting. [00:38:17] Brian AI: Word of mouth is our best marketing.Transcript
A Growing Presence: AI’s Ambiguous Impact
Maha Bali introduced herself as a veteran in the field of AI and education. With a background in computer science and 20 years of experience in educational support, she has witnessed the gradual integration of AI into educational practices.
Maha Bali: “AI is not a new phenomenon. Twenty years ago, AI was already being used in medical diagnoses, like breast cancer detection. What’s different now is the prevalence and accessibility of AI tools such as ChatGPT, which was made free and accessible to virtually everyone with an internet connection.”
However, Bali emphasized that this demand has led to misconceptions and hype, especially in education.
Misconceptions and Realities of AI in Education
Bali outlined several critical misconceptions about AI. Firstly, the assumption that everyone had access to AI platforms like ChatGPT is false—many regions, including Egypt and Saudi Arabia, were initially excluded.
Maha Bali: “The term ‘artificial intelligence’ itself is misleading, as it suggests some form of sentience or human-like intelligence, which it lacks. This misconception leads users to anthropomorphize AI, saying “please” and “thank you” to machines, further distorting its role and capabilities.”
Bali underscored the importance of recognizing AI as a statistical tool, not a sentient entity.
The Ethical Quandary: Bias and Data Integrity
Sharon Tewksbury-Bloom: One central concern is the bias inherent in AI. Bali provided a stark example of implicit bias by asking various AI platforms about terrorism, which predominantly returned answers linking terrorism to Islamic groups.
Maha Bali shared that this bias stems from the overwhelming Western-centric data sets these AIs are trained on. Consequently, AI can unintentionally reproduce systemic inequalities and biases present in the data. The lack of transparency about where AI sources its information exacerbates this issue, making it hard to trace or verify information accuracy.
A Call for Critical AI Literacy
So, what can educators do? Bali suggests that teachers need to understand how AI works to identify when and how it can be used effectively and ethically in their contexts.
Maha Bali gave an example of one of the funniest but useful exercises is using AI to create prompts that help design AI-resistant assignments. This involves using AI tools to develop assignments that students can’t easily complete using AI, pushing them to engage in deeper, critical thinking.
Practical Recommendations and Resources
Sharon Tewksbury-Bloom: With practical experience in creating custom AI models for specific tasks, Bali explored the potential benefits and limitations. She discussed using tools to summarize large documents, which can be helpful but often misses nuanced details.
Maha Bali: There are resources like AI for Education that provide prompt ideas for teachers. However, Bali reiterates the importance of using AI critically and being aware of its limitations. She is particularly enthusiastic about AI’s potential to support accessibility for people with disabilities, despite the challenges and limitations she has observed.
The Cultural and Linguistic Gaps
Sharon Tewksbury-Bloom: The discussion also highlighted how AI tools trained primarily in English struggle with non-English languages and cultures, resulting in outputs that can feel foreign or inaccurate to non-Western users.
Maha Bali: AI responses often mirror U.S. or U.K. cultural norms, marginalizing other cultures. The disparity is especially evident in countries like Egypt, where AI can misinterpret or inaccurately represent cultural specifics.
Looking Forward: Responsible and Ethical AI Use
Sharon Tewksbury-Bloom: Bali’s final thoughts centered on encouraging responsible and ethical use of AI. She emphasized the need for AI literacy, not just among educators and students but across all sectors affected by its rapid adoption.
Maha Bali: While AI’s capabilities are impressive, critical engagement and continual evaluation of its ethical implications are crucial. Understanding its biases, recognizing its limitations, and leveraging its strengths responsibly can help harness AI’s potential while mitigating its risks.
Continuing the Conversation
To stay updated on Maha Bali’s work and thoughts on AI and education, you can follow her blog. Her insights are invaluable for anyone interested in the intersection of AI, education, and social justice.
The dialogue between Sharon Tewksbury-Bloom and Maha Bali offers a nuanced look at the promises and perils of AI in education. As AI continues to evolve, conversations like these are essential to navigate its complex landscape responsibly.