Speaking of ... College of Charleston

Navigating AI in Higher Ed: Balancing Innovation and Integrity

University Communications Season 3 Episode 11


On this episode of Speaking Of…College of Charleston, we speak to Ian O'Byrne, associate professor of literacy education at the College, about AI in higher education. His work centers on teaching, learning and technology, and he is incredibly prolific, publishing newsletters, blog posts and videos on digital literacy. O'Byrne is innovative in the classroom and always seeking new ways to keep students engaged.

O'Byrne discusses how higher education can integrate AI to support learning while maintaining academic integrity. He addresses concerns about cheating and discusses how to foster creativity and critical thinking. O'Byrne also highlights the need for authentic assessment and broader ethical considerations, advocating for more inclusive discussions involving students, faculty and alumni.

Most importantly, he encourages educators to have more dialogue about what these things mean for the classroom.

[00:00:00] On this episode of Speaking of College of Charleston, we speak to Ian O'Byrne, Associate Professor of Literacy Education at the College, about AI in higher education. Ian is incredibly [00:00:20] prolific and writes about literacy, technology, and education. Before we dive into questions about AI and higher ed, I'm hoping that you can tell a story, um, paint a picture for our listeners of a day in the life of a professor navigating the challenges of AI.

[00:00:39] What's the [00:00:40] first thing that comes to mind, or what's the biggest issue that you're seeing on a day to day basis in the classroom? So, uh, first of all, thank you for having me here. Uh, very important topic. Um, obviously, uh, it has captured the public's imagination trying to figure out what AI, artificial intelligence, means.

[00:00:59] [00:01:00] Um, most of my interactions with AI are twofold. One, I think that I see a lot of hype and hyperbole and hysteria around AI, a lot of knee-jerk reaction to just ban this tool or these technologies in our lives. But [00:01:20] then on the other hand, I see tremendous opportunity. Um, and we can talk more about what AI looks like in our lives.

[00:01:26] We could talk about how we can think about AI, but I think on an everyday basis for me, um, as an academic, as a professor, most of the dialogue is either a lot of fear and [00:01:40] hype and hyperbole and hysteria, or an indication of opportunities that we can use this tool to improve our lives and the lives around us.

[00:01:49] There's a lot of risk and reward with that. Um, but it's trying to, you know, negotiate and balance those spaces, trying to figure out the best possible path forward. [00:02:00] Right. That makes sense. Um, okay. So diving in, how should higher education institutions approach the integration of AI tools like ChatGPT in the classroom?

[00:02:13] I mean, it's already happening. So how should they continue to do that? What are the potential benefits and [00:02:20] challenges of integrating those tools? Yeah, and I guess we're talking about higher education here, so we're not talking about, um, middle school or elementary school. And so one of the things is, I am an associate professor of literacy education, and a lot of my research has been trying to understand what happens as we move [00:02:40] from print to pixel.

[00:02:41] What are those changes as technology impacts not just teaching, learning and assessment, what's happening in our schools, but how does that impact our lives? Um, I've seen those transitions for decades, and for the most part, when technology changes society and [00:03:00] educational spaces, K-12 and higher ed, we automatically think that students are going to use this to cheat.

[00:03:06] And that's one of the main problems. Um, and this is one of the common threads. Uh, most recently we saw this with COVID, where there was, um, this belief that students needed to all put their webcams on and be in a Zoom [00:03:20] call from 8 in the morning till 6 o'clock at night. Um, and if they didn't have their webcams on, they would be cheating.

[00:03:26] So, one of the things that I think we're seeing right now is there is a, uh, a privileging of one form of learning over others. And what that means to normal people is that educators and [00:03:40] academics have a belief that you need to learn the way that I learned. So you need to learn and understand my content, my discipline, the way that I did.

[00:03:48] You need to read these books. You can't listen to the podcast or watch the webinar; you need to read the book itself. Um, and so one of the things that we're seeing is that [00:04:00] in higher ed, um, there are a lot of voices that will push back against the use of AI. They'll push back against the presence of this in our lives.

[00:04:11] We can talk at some point about the fact that these technologies have been in our lives for some time. This is not new, but there is this [00:04:20] concern that it's going to completely change, and to some extent ruin, what we have going on in higher ed. Um, a lot of the research suggests that if we don't give students the opportunity to use these tools in their practice, if they leave our institutions of higher ed without having [00:04:40] used the tools or thought critically about their usage in their lives or their disciplines, then they are being left behind.

[00:04:46] Exactly. So it's another digital divide. Just the same way that we wouldn't want our students to leave our programs without an understanding of their digital identity or some digital literacy skills, [00:05:00] um, or how to use Zoom or LinkedIn, so too, I would say, we need to think about different generative AI tools that they could use in their practices, in their lives, in their futures.

[00:05:13] If we send them out into the world without a critical examination and use of these [00:05:20] tools, we're not preparing them for their futures, right? We're doing them a disservice. Yeah, they're not prepared. Yeah, exactly. Um, so, talking about trying to limit it, which I think comes so much from fear, like fear of the unknown.

[00:05:34] I think that's where a lot of the [00:05:40] wanting to not allow it in the classroom comes from, along with wanting to do it the way that you have done it in the past as instructors. And so there are a lot of people who want to just ban AI in educational settings, um, like they're trying to ban cell phones in classrooms and all of that.

[00:05:58] So, [00:06:00] what do you think? Let's talk about blanket bans on AI in education. What are the pros and cons? So one of the things is that we know that in all educational spaces, blanket bans don't work. Um, we know that from book bans, we know that from cell phone bans. We know that any time you tell a learner, an individual, [00:06:20] don't do that, don't go there, that's one of the first places they're going to go.

[00:06:24] Um, we also need to understand that when we think about these technologies, when we think about AI, there is this, uh, initial response that we should ban ChatGPT. Keep in mind, ChatGPT is one version. [00:06:40] Around this date, about three years ago, ChatGPT came into our lives. Um, it was during the holiday season; ChatGPT was launched by accident.

[00:06:51] Um, it was version 3.5 of a product by OpenAI. Um, they launched it because they wanted [00:07:00] different people to interact with it and ask illogical, nonsensical questions in different languages, um, and so it would help the model learn. Um, but these technologies, generative AI, have been in our lives for some time.

[00:07:16] Um, you know, AI and machine learning are paying attention [00:07:20] to you. They're helping determine your Netflix queue, your Amazon shopping list. They're paying attention to all the signals that you leave behind online and offline, and they're helping inform how companies and products market to and service you.

[00:07:36] Um, I argue that generative AI, [00:07:40] uh, that this version of it, is an opportunity for us, the users, to leverage some of that technology and some of that power. So now we can, not to its fullest extent, but we can understand what's happening with it and we can use those tools. Um, and so when we think about blanket bans in higher ed, [00:08:00] OpenAI is the company behind ChatGPT.

[00:08:03] OpenAI is heavily funded by Microsoft. Um, and Microsoft is one of the main, you know, technology providers in higher ed. Uh, so at our institution, Microsoft is one of the key [00:08:20] vehicles that we use, one of the key tools that we use. Microsoft is slowly folding a lot of generative AI and machine learning models into their tools and services.

[00:08:31] So just the same way that a couple years ago you would buy a watch or a toaster or a refrigerator and you would decide, do I want, you know, the internet of things, a [00:08:40] little bit of internet connectivity added to that, um, we're seeing more and more tools and products and services that have just a little bit of machine learning built into them.

[00:08:50] Um, and so here at our college, we are a Microsoft school, like I said. Now you have Copilot, [00:09:00] the ChatGPT-style assistant, that is already baked into our browser. It's baked into our office tools, it's baked into our Teams, it's baked into our email. And so even if we would want to ban these things, it's technically impossible.

[00:09:18] Yeah. Um, and [00:09:20] decisions have already been made above our pay grade that have determined that yes, this is something that we have access to. Um, and once again, if we send students out into the field and they don't have these tools and know how to use them, we're keeping them behind. I'd also add that a lot of educators, [00:09:40] especially in higher ed,

[00:09:41] um, we have a lot of work to do during the day, and if there's an opportunity to help us do our work, gain a little bit more work-life balance and be a little bit more of a human being, to me, that's worthwhile. Right, right, yeah. Um, like taking advantage of AI [00:10:00] for some of the tasks that don't require your higher thinking, you know, those day-to-day tasks. If some of those things could be eliminated or reduced

[00:10:15] you know, by using an AI tool, then that would allow you more time to [00:10:20] be using all of the skills that you have. I'm not saying that well. Well, when I give talks about machine learning and generative AI and computer science, there is a worldview that as a human being, your time should be spent on higher-order functioning skills, that you shouldn't waste your [00:10:40] time, you know, washing dishes and stuff like that.

[00:10:42] Um, and so there is a belief that we should take some of the menial, sort of boring tasks and farm them out to someone else. That gives us products like DoorDash; that gives us Uber and Lyft. And so that gives us this worldview that it's okay [00:11:00] to have some stranger pick you up in their car and drive you someplace else.

[00:11:03] Um, and so when we think about generative AI, you know, when I talk about generative AI, I make it clear that our focus is not going to be on universal basic income. We should talk about that, okay, but we're not going to focus [00:11:20] on the singularity and the end of the world, because a lot of the discussions about AI go there at some point. And we're also not going to talk about hacking your work, hacking your job, hacking your life.

[00:11:30] We need to have that discussion. So in higher ed, we should spend more time thinking about how this can help me do my job a little bit better, how it can save me some time. [00:11:40] Um, and what's interesting is that, especially in education, there's a lot of shame in that for some reason. There's a lot of shame in saying, you know, I have a mountain of emails I have to go through.

[00:11:54] Saying that it's going to help me make sense of my emails and write a response that's a little bit more emotionally intelligent, [00:12:00] that that is a shameful act. Um, we have this belief that we know everything. Um, and so I think that there is the opportunity to think about, you know, how we might use and leverage these tools.

[00:12:12] Um, and then what are the implications of that later? Because there are very real implications. There are some very big question [00:12:20] marks that we have, or should be talking about. Yeah, and that's really interesting about the shame part too. I would love to hear more about that. Um, moving on, what are the ethical considerations around AI in higher ed, and how can institutions balance the need for academic integrity with the [00:12:40] benefits of AI-enhanced learning?

[00:12:42] Absolutely. With, um, you know, with every tool, with every opportunity we have in our lives, there's a lot of risk and reward. Um, with generative AI, with machine learning, one of the things we have to think about is the environmental concerns, the environmental impact, and what [00:13:00] power consumption looks like.

[00:13:01] Um, we should have much richer, more fulsome discussions about the ethics involved in this. And so one simple way to think about that is, um, we want to think about the content that people create. So when we talk about generative AI, we're thinking about machine learning [00:13:20] or artificial intelligence that's creating new content.

[00:13:24] And so we want to think about where that content came from. Exactly. Because a lot of these machine learning models, they're just guessing what the next word or phrase would look like or sound like. And so we want to think about [00:13:40] where that content initially came from.

[00:13:43] The initial models that came out, so when ChatGPT came out as version 3.5, it was created using data from the old Google Book Search project that happened decades ago, from scraping Reddit, scraping Wikipedia, [00:14:00] um, some different sources. And it was not able to actively search the internet.

[00:14:05] So when it's creating responses, and that's where people would harp on the hallucinations, it's creating responses from a dataset that was a closed space, right? Newer models can search online. Um, but one of the [00:14:20] things is that all the good and bad that was on the internet, and is on the internet, from Reddit and Wikipedia and stuff like that.

[00:14:29] All of that is in the black box that's being used to generate those responses for you. So in terms of ethics, we want to think about intellectual [00:14:40] property. Right. And think about who created this. Right. Um, we also want to think about what this is doing to us as a society. So, if a lot of the data is primarily from, you know, English-speaking countries, what does that do for other places around the world that are not [00:15:00] English speaking?

[00:15:00] Um, is that basically making the indication that, um, you know, standard academic English is the most important and most proper language? Right. Um, so there are broader ethical implications of this. In terms of what this means for teaching and [00:15:20] learning, um, we need to have discussions about something as simple as a writing sample.

[00:15:25] You know, most of our systems in education, K-12 and higher ed, are based upon your ability to read, write, think, speak, and interact in standard academic English. And so if you're a student [00:15:40] in K-12, you're a student in high school, you're a student at the College of Charleston or in higher ed.

[00:15:44] You need to show that you can speak effectively in standard academic English. And now we have tools that can spit that out in seconds. And so if we have an interview, or we have an essay, or we have some sort of [00:16:00] other non-traditional assessment, it can spit those things out. So we need more time to think about whether those assessments, like an essay, are still valid.

[00:16:12] Um, I would suggest one of the things that we could do is focus more on process over [00:16:20] product. So instead of looking at just that final assessment or final assignment, we can look at the steps along the way: where did you learn, where did you get that information? Um, one of the things that I do in my own personal work, and I urge my students to do, is to integrate AI into those steps along the way.

[00:16:39] [00:16:40] So do a little bit of the project, and then reach out to a friend in class, reach out to an AI model, get some critical feedback, talk about whether you believe or trust what the AI model told you, and then try to revise your work and submit it later. So I think one of the easiest things [00:17:00] I would suggest is just to focus on

[00:17:02] process and learning over time as opposed to just the final result that you turn in. Yeah, I think that makes so much sense. And maybe it could even be a good thing in the classroom, you know. Instead of just turning in papers, maybe more people will start writing [00:17:20] papers in the classroom, or the classroom itself will become more of a laboratory workspace.
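The draft-feedback-revise workflow described here can be sketched as a simple data structure that records the learning trail, not just the final product. This is a hypothetical illustration, not any real gradebook or LMS integration; all class and field names are invented.

```python
# A minimal sketch of assessing "process over product": log each draft
# along with where its feedback came from (a peer, an AI model, the
# instructor), so the trail of revisions is visible alongside the final essay.
from dataclasses import dataclass, field

@dataclass
class DraftStep:
    text: str
    feedback_source: str  # e.g. "peer", "AI model", "instructor"
    feedback: str

@dataclass
class ProcessPortfolio:
    student: str
    steps: list[DraftStep] = field(default_factory=list)

    def add_step(self, text: str, feedback_source: str, feedback: str) -> None:
        self.steps.append(DraftStep(text, feedback_source, feedback))

    def summary(self) -> str:
        # One line per revision: the process an instructor could assess.
        return "\n".join(
            f"draft {i + 1}: feedback from {s.feedback_source}: {s.feedback}"
            for i, s in enumerate(self.steps)
        )

portfolio = ProcessPortfolio("student_a")
portfolio.add_step("First attempt", "AI model", "Thesis is unclear; add evidence.")
portfolio.add_step("Revised attempt", "peer", "Much clearer; tighten the conclusion.")
print(portfolio.summary())
```

The point of the sketch is only that each step, including the AI consultation, becomes part of what gets assessed.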

[00:17:27] Yeah. And if you think about some students, um, the learner in the classroom might not feel as comfortable going to the teacher or the professor and asking for feedback. Yeah. Um, in a [00:17:40] lot of our classrooms, um, you know, the student just wants to figure out what the professor wants, what the teacher wants.

[00:17:45] And so they'll go to you and say, you know, is this good enough? That's problematic. You know, I think that the students should be determining what is important to them, but I think that, you know, there might be a student that is a [00:18:00] little bit hesitant to reach out to a peer in class, or they might not have a friend that they would trust to reach out to.

[00:18:05] And, um, you know, the professor may or may not have the time, depending on the class structure, to give everyone one-on-one support. In K-12, we had this [00:18:20] idea of "ask three and then ask me": ask three of your peers, or look someplace else, before you ask me the question.

[00:18:27] So there is the opportunity to sort of, um, test out your hypothesis or test your work or, you know, critique your work before you send it off [00:18:40] to a human being to review it, before the grade is determined. You can basically send it out and get a little bit of critical feedback before you actually submit it.

[00:18:51] That makes sense. That whole germinating period. Yeah. Yeah. So, um, that said, that takes [00:19:00] us sort of to cheating. How are universities and colleges addressing concerns about AI and cheating? And what role does authentic assessment play in this conversation? And I don't know if you can speak to what we're doing at the College specifically, or just talk in generalities.

[00:19:16] Yeah, I think so. One of the things that we're seeing is that there [00:19:20] are companies that make a lot of money and gain a lot of market share by indicating that they can scan your student work and determine whether it's cheating or not. Um, we're seeing this happen also, and this was pre-generative AI, pre-ChatGPT, as I submit manuscripts for publication: they'll look through and scan it and try to [00:19:40] determine

[00:19:45] whether it was something that I plagiarized from elsewhere. Um, a lot of the tools that will scan student work and determine whether it's cheating or not, a lot of them don't work that [00:20:00] well. Um, and a lot of these tools, once again, will privilege standard academic English. Um, so if I am a non-native English speaker, or if I don't exactly fit the model

[00:20:13] that this tool is looking for, it's going to flag my work. Um, and the [00:20:20] other thing that we're seeing is that some of these tools and products are quite lucrative. Um, and so with a lot of these tools, a lot of these products, um, you know, if an institution were to pay for that tool, keep in mind that when we use generative AI, you are training the [00:20:40] tool.

[00:20:40] You know, earlier I talked about the dataset that was used to train these tools. Uh, some of the challenge that we're having in machine learning now is that we're running out of data. There are no large datasets left that we can use to train these tools. So if I'm looking for a tool to identify how many apples are in a bunch of photos, [00:21:00] or whether this student cheated on this test, I need to have datasets that either have a bunch of photos with and without apples, or a bunch of work by students that includes cheating or doesn't include cheating.

[00:21:11] But across the board, we're running out of data. So a lot of the models, a lot of the products, are either trying to find data or [00:21:20] create data. And so we have companies where generative AI tools are generating data that is being used to train other tools. Or, um, you know, a product or a service is going to look for ways that it can gather a lot of information and use that to train the model.

[00:21:38] So, [00:21:40] put simply, a lot of our tools don't work very well in terms of catching, you know, plagiarism and cheating and stuff like that. Um, but at the same time, we're creating a future market for that tool. So years from now, when they say our tool is great, it may or may not be, [00:22:00] but we've been paying to help build that service.

[00:22:02] Um, in terms of what we do here, you know, what we could do in higher ed, I think that term authentic assessment is one of the key components. It's trying to think about ways that we can really make sense of student learning and student growth over [00:22:20] time, uh, you know, assess and evaluate their progress.

[00:22:24] Um, think about what they need to know, um, and make sure that they're getting a little bit further along that path. Um, one of the best ways that I think we could address this, and I think some of this is happening in our institution, [00:22:40] though it's not happening as much across K-12 and higher ed, is more discussions with human beings.

[00:22:47] You know, in computer science there's this idea of the human in the loop. And so we want to think about where the human is in the loop of these interactions. So where are the learner and [00:23:00] the instructor or the professor as you interact in class, in the assessment and the evaluation, and how are they creating those touch points and thinking about that process? But also, more importantly, having dialogue with faculty, with professors, with staff about what these things mean.

[00:23:19] Um, [00:23:20] you know, staying away from the hype and the hyperbole in this area, and having long-term, meaningful discussions about what we mean by academic rigor, what we mean by authentic assessment. Um, learning what these tools actually are and what they can and cannot do, and then trying to [00:23:40] make decisions about, as you said earlier, you know, the genie's out of the bottle.

[00:23:44] Um, what do these things mean for our classrooms? What do they mean for our disciplines? What does it mean for a student to graduate from a specific institution? So I think the best thing that I could see is more of [00:24:00] that discussion, more of those conversations, um, trying to make sense of what these things mean.

[00:24:06] Um, and I would urge us to have those discussions not just as colleagues and faculty, but to have them with our students. You know, our students are the best part about our institution. They're the best part about all of our programs, and they [00:24:20] are key contributors. They are, you know, our key client here, right?

[00:24:25] Exactly. What do they want? Right. You know, and then talk to alumni. What are you seeing? What questions do you have? Let's have a broader discussion about what these things mean, um, and try not to stick our heads in the sand. Right. And like you [00:24:40] said, it's constantly evolving. So it's not like a problem that's going to be solved or figured out.

[00:24:44] It's ongoing. Um, in what ways can AI be harnessed to support, rather than hinder, students' learning experiences? And how can educators use AI to foster creativity, critical thinking and [00:25:00] ethical collaboration? So, I have a fourth grader and a ninth grader.

[00:25:10] And to me, I dream of a space where we can have a learning space, a digital portfolio, [00:25:20] basically, uh, you know, of things that we've learned over time. And I would love to see an opportunity where we have a bot or a model or an agent that learns along with you. There are products and services out there that will do this.

[00:25:31] Uh, one of the best ones that I've seen so far is Khanmigo from Khan Academy. So you can have a bot or an agent that learns along with you. [00:25:40] So, you know, with my fourth grader right now, she is learning math. She's having success; she's struggling with some pieces. But, um, one of the things that's interesting is that

[00:25:53] You know, her math [00:26:00] problem-solving skills are vastly different than they were when I was in fourth grade. Um, but also, one of the things I've noticed is that the problems are starting to fold letters into the math problems. And so I know that at a later date, that's going to be the beginning of algebra and other more advanced forms of mathematics.

[00:26:17] And so, and that's one of the things I'm doing now, I'm building little AI bots for my kids to learn along with. And so one of the things I'd be interested in is, you know, with my daughter, I can have her learn, and then two years from now, three years from now, when she's in pre-algebra, when she's starting to deal with more advanced [00:26:40] mathematics, the agent, the bot, can say, hey, you sort of struggled with this before when you first saw it,

[00:26:47] and this is how we got around that, this is how we made that connection. Your own personal tutor who's there. And so one of the easiest ways to think about this is with students that are in my [00:27:00] higher ed classes. I think a simple opportunity would be when I teach a class.

[00:27:07] I have 13 to 15 weeks of class. I have a number of research PDFs. I have websites, podcasts, videos that they need to consume. Um, I can [00:27:20] then put all of those into one space. So, uh, I know that we shouldn't be mentioning specific products here, but there's a product out there called NotebookLM by Google.

[00:27:33] And one of the interesting things is that it can create almost like a bounded space. So you can have a [00:27:40] Google Drive folder, uh, and have all the materials for the class in that Google Drive folder. And then you can say to students, okay, if you have a question, all of the materials you need for class are in that Drive folder.

[00:27:53] So you can query, you can question, that Drive folder, and it's going to look at just those materials for the class. So if you have [00:28:00] a student that might be a striving reader or a non-native English speaker, if they need different materials, different opportunities to make sense of that, they can go in and just look at those or search for those specific materials.
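The "bounded space" idea, answering questions only from the course's own materials, can be sketched in a few lines. Tools like NotebookLM use embeddings and a language model for this; the toy version below scores documents by keyword overlap so the example stays self-contained, and all file names and function names are invented for illustration.

```python
# Toy sketch of querying a bounded set of course materials: rank the
# documents in a "folder" by how many of the question's words they contain.
# A real system would use embeddings and an LLM, not raw keyword overlap.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def query_course_folder(materials: dict[str, str], question: str, top_k: int = 2) -> list[str]:
    """Return the names of the course documents most relevant to a question."""
    q_terms = set(tokenize(question))
    scores = {}
    for name, text in materials.items():
        counts = Counter(tokenize(text))
        scores[name] = sum(counts[t] for t in q_terms)  # crude overlap score
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked[:top_k] if scores[name] > 0]

materials = {
    "week1_reading.pdf": "Digital literacy means reading and writing with screens and networks.",
    "week2_reading.pdf": "Generative AI models predict the next word from training data.",
    "syllabus.pdf": "Course policies, grading, and the weekly schedule.",
}
print(query_course_folder(materials, "How do generative AI models work?"))
# -> ['week2_reading.pdf']
```

Because the search space is just the class folder, answers can only point back to materials the instructor chose, which is the appeal for striving readers and non-native English speakers.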

[00:28:13] What's also interesting is that NotebookLM now has the ability where I can give it a couple [00:28:20] of research PDFs or a couple pieces of class content, and it can automatically create for me a 10-minute podcast about those materials. Wow. So I can give it my two or three readings for this week, and students may or may not read that content.

[00:28:35] They may or may not care about that, but I can [00:28:40] automatically create a 10-minute podcast where two hosts are talking back and forth about the finer points of that week's readings, and I can leave that in my course content. That's wild. Yeah. I mean, if you think about it, we can appreciate that, because we're, you know, podcast producers at the college.

[00:28:58] So these are all [00:29:00] AI voices? And they sound really good. If I give students, you know, two, maybe three readings a week, longer readings, and I give them the readings as a PDF, I have tools that will look at the reading and give an AI-generated [00:29:20] overview or blurb of that content. So right off the bat, the student can look at the whole 20-page, 30-page, longer PDF and say, um, I'm not interested in reading this, or I'll pretend that I'm reading it, or I don't have the time, I have things going on in my life. Um, and so it can give a real quick [00:29:40] overview or blurb, two or three sentences, about what that content is.
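The blurb idea can be illustrated with a toy extractive summarizer. Commercial tools use a language model to write an abstractive summary; this stand-in simply keeps the sentences whose words occur most often in the document, which is enough to show what a "two or three sentence overview" might be built from. Everything here is an invented sketch, not any specific product's method.

```python
# Toy extractive "blurb" generator: score each sentence by the document-wide
# frequency of its words, keep the top-scoring sentences in original order.
# (Note this crude score favors longer sentences; real tools use LLMs.)
import re
from collections import Counter

def blurb(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Pick the highest-scoring sentences, then emit them in document order.
    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)

text = ("AI is reshaping education. The weather was nice today. "
        "AI tools help education and help teachers learn.")
print(blurb(text))
# -> AI is reshaping education. AI tools help education and help teachers learn.
```

The point is the workflow: the student reads the machine-made blurb first, then decides whether to dive into the full 20- or 30-page PDF.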

[00:29:43] Yeah, little CliffsNotes, and then the student can decide, okay, I want to dive more in depth into this, I want to read more, and I can pay attention to this and not these other pieces. Um, and then what I'd also like to see, so that's just at a class level, but then I think there's the [00:30:00] opportunity, if I loop back in the story of my daughter, that as we see students proceed through their 3, 4, 10 years in an institution of higher ed, they have a guide or a learner that learns along with them.

[00:30:15] So a lot of schools are playing around with bots or agents that will answer just quick [00:30:20] questions. Um, decades ago in my classes, I would make bots that would act as a virtual TA and answer a lot of the questions that I had already answered in my email or already talked about in class, but a student wasn't there or missed it or whatever.

[00:30:35] Now we have the opportunity to have almost like a plan of study. So as a student [00:30:40] works their way through a program, you can have a plan of study, have a bot or an agent that knows, from seeing other students go through, where some of the hiccups are along the way. Because, you know, there might be some classes that are better to take in a certain sequence, or there might be some classes that connect better [00:31:00] with other areas, or, um, you know, stronger, deeper connections that they could make.

[00:31:05] So if there's a way to create, with machine learning, uh, you know, basically a plan of study, or a cross between a plan of study and an advisor, and include a lot of the lessons learned from multiple [00:31:20] other students, I think it can basically help support our students as they proceed through what can be a stressful time in their lives.

[00:31:27] Which could help you as the instructor too, right? Like, if you're getting feedback from this bot about areas where students were all struggling, I mean, I'm sure you figure that out anyway, but I could see that as [00:31:40] helping the instructor have more insight into your classes and structure.

[00:31:45] And I also use it for, um, you know, I haven't gone to the extent of recording. I previously would record lectures and do lecture capture and share it. What I'm probably going to do is share that back out, uh, through an agent, to tell [00:32:00] me where I can really streamline my thinking. But I have taken my PowerPoints for classes, for lectures, and, you know, saved those as a PDF and sent those through an agent to say, what are areas where I could streamline or extend this thinking?

[00:32:18] Here is my syllabus. [00:32:20] Do my lectures really connect to my student learning objectives? Do all these align, do I have all my I's dotted and T's crossed? And so I think one of the nice things is that, yeah, you know, computers, and I want to make sure that I say this carefully for later on, when the machines do take over, computers are [00:32:40] relatively dumb. Computers need very discrete, specific directions.

[00:32:46] That's why in computational thinking we focus on, you know, very explicit directions. And so computers are going to do what we tell them to do. But the nice thing is a lot of machine learning right now [00:33:00] has almost a beginner's mindset. It approaches the world through fresh eyes. You know, on Reddit there's this thing, "Explain Like I'm Five."

[00:33:07] So a lot of machine learning models will look at the world that way. They can look at your materials, and if you're not clear, if you're too stuck in your head, as people like me, as academics, [00:33:20] sometimes are, um, there's the opportunity to help you just cut the clutter. And so there's opportunities for me to really crystallize and condense and make more concise what I'm trying to express to my students.

[00:33:27] And that's wild, that you have figured all of this out, because this is kind of your [00:33:40] specialty. But I bet there are a lot of other faculty across campus who haven't figured out these things like you have. Like, how are other instructors, faculty, teachers in high school?

[00:33:53] Are there people like you who can pass on kind of these tips? Yeah, but, [00:34:00] I mean, so the shame piece is interesting. So, uh, a couple of weeks ago, the state of South Carolina asked me to go talk to all the teachers of the year that pretty much are still around. Oh, wow. And so we were in, uh, Myrtle Beach, and I had three sessions, all around AI.

[00:34:18] And so I had just [00:34:20] cadres of teachers coming in, and I talked to them about AI. And one of the things that stuck out to me is there was this belief, and we're talking about teachers of the year in the state, right, award-winning folks who are, I'm assuming, pretty high-quality educators, great human beings, um, masters of their craft.

[00:34:39] [00:34:40] Um, and so one of the common points that they brought up is that they, many of them, not their words, my words, are overworked. Okay? They have too many things they need to do. Um, there's a lot of bureaucracy. It is hard work teaching in the classroom and reaching every child, and these are people that are experts [00:35:00] at it.

[00:35:00] And so one of the things that stuck out to me is there was this, um, you know, my word, shame at using generative AI to help write an email or to help build some lecture materials or help build a PowerPoint. And I [00:35:20] was trying to indicate, hey, you have a lot on your plate. Why would you feel shame? You are an award winner.

[00:35:25] Um, I think part of it is that you feel like you are cheating the system. Um, and one of the other things is that, you know, I am not normal. I recognize that. [00:35:40] I like playing with these tools, I like trying new technologies out, I recognize I'm not normal. But also, we want to think about, uh, it's not just the shame or the cheating-the-system thing. It's also that you need time to play with these things.

[00:35:58] You [00:36:00] also want to think about the evaluation measures that we have. So when I play with these tools, I talk to my students and try to create that culture so they understand generally what I'm trying to do. But if I don't have that discussion or that trust in my classroom, will students [00:36:20] be as willing to let me make mistakes?

[00:36:23] Because, you know, it doesn't always work out perfectly. But as somebody that is in education and somebody that studies technology and digital literacy, I sort of have that ability. So I would suggest that, especially in higher ed, [00:36:40] we need to find the people that are doing the work and trying to figure out what this means, you know, amplify those voices, value those voices.

[00:36:50] We need to give our colleagues grace and space so that they have the time to do it. We need to pay our colleagues. Yes. You know, pay for [00:37:00] them to just go sit and play and think about, okay, this O'Byrne guy is talking about using these tools, but we don't really trust him. We want to figure out what generative AI might look like in, you know, biology, or in our business program, or in HRTA.

[00:37:17] What does it look like there? You know, bring in [00:37:20] our alumni, have those discussions. What is the field saying? What might we be able to do? But it's time. It's investing in people to do that work, right? Which is why we're having you on the podcast today, so we can amplify all your wisdom.

[00:37:35] And I'll include a lot of stuff in the show notes. I'll include [00:37:40] a link to your website and blog, so people can go in and see a little bit inside your brain, because your website's amazing. Ian, how can higher education institutions prepare students for a future where AI will likely be integral to many careers,

[00:37:57] without compromising on the quality [00:38:00] and integrity of their education? Great question. One of the things is that this is a moving target. You know, machine learning and AI have been in our lives since the thirties and forties and fifties. Um, you know, it's been following you around, looking at and developing your Netflix queue [00:38:20] and other pieces.

[00:38:20] And now, with ChatGPT, you know, and these generative AI models being launched upon our lives, um, three years ago, I think this is a really powerful opportunity for us to figure out how we could use and leverage these tools. Um, we need to have more discussion [00:38:40] about the ethics. We need to have questions about, uh, what this means academically.

[00:38:44] What does this mean for integrity? What does this mean for academic rigor, whatever that means? Uh, you know, we need to think about the environment and ethics and what it means for us as human beings. Um, the short answer is that we need to have more dialogue. We need [00:39:00] more opportunities where people can come to the table, and bring many stakeholders to the table.

[00:39:06] You know, we need to bring in, once again, our current students, faculty, staff, um, alums. Uh, talk to experts in the field. Where do we think the field is moving? What use [00:39:20] could and should these different tools have in our lives? And keep in mind, these tools are not leaving.

[00:39:29] Um, we will have many more tools that are coming about. Um, you know, Sora was just launched upon the [00:39:40] public two, three days ago. Sora is basically like a ChatGPT for video. So I can add a couple pieces of text, and it will spit out short, pretty high-quality videos. So we're going to see,

[00:39:52] you know, we've had a moderate amount of deepfake videos and stuff like that. We're going to see that [00:40:00] skyrocket over the next couple months, really. Um, and so we're seeing a lot of advances as these tools can deal more with multimodal content. What that means is, you know, most of the tools we've had are large language models, LLMs.

[00:40:17] So it's text in, text out. We're [00:40:20] seeing more of these tools that can move between text and audio, or text and video, or audio and video. So I can record a lecture of my class, or I can have a podcast audio recording, and say, all right, well, I want to cut the part out where, you know, Ian misspoke.

[00:40:38] And [00:40:40] I could just find the transcript, find that part, and basically remove that sentence or two. Or, through generative AI, I can just change it: I can type in what Ian would have said differently if he was a little bit more succinct and eloquent, and it will take my voice and recreate that.
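The text half of this editing workflow, finding a sentence in the transcript and cutting it, is simple to sketch; regenerating matching audio in the speaker's voice is where the generative tools come in. A minimal, illustrative Python version (not any particular product's API):

```python
import re

def cut_sentence(transcript: str, fragment: str) -> str:
    """Remove every sentence containing `fragment` from a transcript --
    the 'find that part and remove that sentence or two' step.
    Audio regeneration is left to a separate generative model."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    kept = [s for s in sentences if fragment not in s]
    return " ".join(kept)
```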

[00:40:57] Doesn't that blow you away? That's wild. And [00:41:00] so, you know, I'm thinking about the instructors, the educators that might be a little bit apprehensive or, you know, not willing to sit in front of a mic or a video camera. Um, so I think one of the things we need to do is think about, you know, what are the hallmarks, what are the things that make a higher ed [00:41:20] education?

[00:41:20] What are the things that make a higher ed education here at the College of Charleston quality, and what does integrity mean to us? And what is the hallmark or the stamp of those two things in the context of education? And then have dialogue, have dialogue with faculty, with staff, with [00:41:40] students. You know, involve students in these discussions.

[00:41:41] And, like you said, with alumni, who can come in and talk about how they're seeing it in their day-to-day work. Yeah. What's happening in the field? You know, what's happening in the real world, and then what should we be doing to prepare people for that environment? Exactly. Yeah. All that said, that sounds [00:42:00] amazing.

[00:42:00] What initiatives are underway on our campus at the College, or other institutions, to explore the potential of AI in education, and how can these initiatives...

[00:42:15] Most of the great stuff that I've been a part of [00:42:20] here is our Center for Teaching and Learning. Uh, so CETL is having a lot of great workshops and professional development, where they'll bring in speakers and people to talk. Um, they will basically have discussions. Um, also at the school level, we're taking time to have [00:42:40] dialogue.

[00:42:40] So in the School of Education, we have a couple faculty members that meet regularly and have dialogue about what this means. Um, so most of what I've seen and been a part of here at CofC is taking time to, um, you know, we have people that are [00:43:00] global experts in their fields, okay? We have people that are experts in their content area, experts in their discipline.

[00:43:06] I think it's getting those people that have, what I'm going to say, the right mindset, um, that have a critical disposition, a critical attitude. They want to try and engage with these [00:43:20] technologies. They want to figure out what they mean. Um, they are not fully drinking the Kool-Aid like I am.

[00:43:26] They're far more critical. Um, it's getting them together, having dialogue, thinking about what these things mean, and trying to figure out, once again, what could and should we be doing with these technologies in our lives. [00:43:40] So the main thing I would pull back on, that I'd reinforce, is bring the humans together.

[00:43:47] You know, where is the human in the loop? Make sure that we have space for them. That's actually a great note to end on, focusing on the people, on the humans. [00:44:00] Yeah. What do they want? What do they need? This is our planet. Um, these are our lives. These are our futures. We have the opportunity to come together.

[00:44:08] And, uh, you know, one of the things we want to think about is, just because we move high tech, we don't need to lose high touch. I love that. That's a great line. Um, thank [00:44:20] you. I have so much to think about and process from this whole conversation. Thank you for coming into the studio.

[00:44:26] It was great to have you here. We really appreciate it. And like I said, I will include a whole bunch of resources in our show notes, so if people have questions, they can find answers there. Yeah, this is the start of the discussion. I think that [00:44:40] there'll be a lot of questions that come after this.

[00:44:41] A lot of things that, you know, we make connections with. Things are going to continue to change, so we'll do this again. Thanks, Ian. Thank you for listening to this episode of Speaking Of…College of Charleston with today's guest, Ian O'Byrne. If you liked this episode, [00:45:00] please help us reach more listeners by sharing it with a friend or leaving a review.

[00:45:05] For show notes and more episodes, visit the College of Charleston's official news site, The College Today, at today.charleston.edu. You can find more episodes on all major podcast platforms. [00:45:20] This episode was produced by Amy Stockwell, with recording and sound engineering by Jesse Kunz from the Division of Information Technology.