Inside the Quest to Tame Gen AI to Benefit Education and Society

In this interview, Dan McFadyen, Managing Director of Edalex, and Julian Ridden, Head of Growth for Australia at Quizizz, delve into the role of AI in education. They discuss the potential benefits and risks of AI, emphasising the need for thoughtful integration in educational settings, and highlight the risks associated with AI, such as bias and data security. The conversation stresses the need for caution in AI adoption and the importance of using AI to solve genuine problems in education. Overall, they express optimism about the future of AI and its ability to enhance human experiences and education, while acknowledging the need for proactive measures to adapt to technological advancements.

The conversation is divided into two parts, each consisting of three thought-provoking chapters that shed light on different aspects of the topics covered.

Part 1

  • 3:09 – Unpacking the Potential of AI in Education and Keeping Up with Rapid Advancements
  • 11:38 – Balancing Mitigated Risk and Reward in Gen AI Integration Strategies
  • 22:20 – Navigating Hidden Risks from Gen AI and its Adoption in Education Settings

Part 2

  • 00:14 – Harnessing the Power of AI – Overcoming Challenges and Opportunities for Educators
  • 13:06 – How to Ride the AI Wave – Crafting EdTech Solutions That Solve Real Problems
  • 18:47 – AI’s Revolutionary Impact on Education

Chapter videos

Click on the videos below to view, or watch on our Edalex YouTube channel – subscribe to receive updates on new videos:


(This transcript has been lightly edited for readability)

Dan McFadyen (DMcF) – Hi, I’m Dan McFadyen, Co-founder and Managing Director at Edalex. I’m thrilled to be joined today by Julian Ridden. Julian is the Head of Growth for Australia at Quizizz. Julian’s experience spans working at educational institutions, LMS providers Moodle and Instructure, as well as other edtech providers. He is an educator, keynote speaker, product leader and lifelong learner. Julian, thank you so much for joining me today.

Julian Ridden – Thanks for having me, Dan. As you know, I hate these intro bios, but thank you for putting that together. It makes me sound a little bit competent and for that, I’m appreciative. Thank you.

DMcF – Always, always – I’m here to help, and that’s part of my service. So to contextualise, you and I both met, it seems, a few lifetimes ago for both of us, it’s quite a few years ago. And we do often catch up, as we’re joking earlier, at conferences – so, at your booth, or my booth – so it’s great to connect again today. Now you’re currently the Head of Growth for Australia at Quizizz. So tell us about Quizizz and your focus there.

JR – Yeah. Thank you for that. And first of all, I just want to let you know, I think anybody who listens to this who works in the edtech space will appreciate this next quote. I always refer to EdTech as the Hotel California. We always try to check out, but we can never leave. So we might be back at the coalface or in a different organisation, but it’s great that we’ve known each other for so long, and there are so many people who, for over 15 – 20 years, every time I see them, they’re just kind of in a new position, or a new company, or a new school – but, yes, we never seem to be able to escape.

Look, thank you for the opportunity. I’ll talk very briefly about Quizizz. Quizizz is a company that provides formative assessment tools, very heavily focused around gamification for the K-12 space, although we have a few universities using it as well. But I’ll be honest, one of my key reasons for joining is that they’re very heavily invested in the AI (Artificial Intelligence) side of things, and I think they’re doing it in a very clever way – which, again, I’m very keen to get into as we talk today: what is good use of AI? Bad use of AI, and risks? So we’ll touch back on that. But yeah, look, my role here is with a company that has had huge success in the US. They kind of launched accidentally at the right time, just in time for COVID, when digital technology, and especially tools to engage students remotely, was in huge demand. And at the moment – I apologise upfront; I only joined this year, so I’ve got to get my data right – they are used by about eight out of ten US schools. We’re doing a big global launch, and I’ve been brought on board to help get it started in the Australian market, which is very exciting. So yeah, that’s what we’re doing here.

Part 1

Chapter 1 – Unpacking the Potential of AI in Education and Keeping Up with Rapid Advancements

DMcF – Brilliant, really fantastic. So in that bio that I cobbled together for you, it shares that – only, and it doesn’t do you justice – that you’ve worked at a range of different educational institutions including colleges, Moodle as well as Instructure, and a number of other companies. And one thing that’s really shone throughout those is your passion for teaching and learning. I’d say over the past year what’s really come to the fore is around AI and how AI, how we can weave that into education for potentially incredible impacts. So let’s start to narrow in on that. So why AI and Education? Do they mix? And if so, why do they mix together so well?

JR – This will surprise you not in the least when I say I want to give a bit of a rambly response to that, right?

DMcF – Time’s up. But it was a great discussion (joking)

JR – Thank you, Matt Damon, we’ll see you next week. It’s an obscure reference for those who pick up on that one. No, um, the reason for the rambly answer is that people look at using EdTech – Educational Technologies – for different reasons. Sometimes it’s because we have to do everything online, you know, in situations like COVID. But for most of us, we use e-learning tools to augment our face-to-face practice; we’re trying to increase efficiency, or engagement, or something like that. And the reason I take the slightly rambly answer is that you mentioned you’ve always been drawn to the passion I have. For me, edtech has been about a passion for how we deliver more quality education at scale. And that for me has always been the driver. Now, that’s much easier said than done. You know, in the early days – I’d like to say early days; I think many people are still guilty of this – but, you know, we go, oh, we have an LMS, and we throw 15 PDFs and a quiz up and go, ‘We’re doing e-learning.’

DMcF – Yes, right.

JR – You know, as Adam Savage from the series MythBusters quite famously once said, ‘Every tool’s a hammer.’ You can look at a problem and every tool is a hammer – the LMS became that. But the reason for the ramble is that good edtech really does help us with this problem of quality at scale. You know, when we’re looking at lecture theatres, we’ve got a higher student-to-teacher ratio than ever before – how do we provide quality? Well, maybe we can put more things online. How do I better support those who need support academically, or with neurodiversity, or physically, whatever those needs are? Well, maybe I can use online tools to provide better differentiation and better support. And that is what has always driven me to the EdTech space: we can use these tools to provide quality at scale, if we do it right, to augment – and I’m going to say the word ‘classroom’, but it could be a university lecture theatre or a K-12 classroom. So the reason I start with that preamble is that this is what has genuinely excited me about the Generative AI space. Here are tools that are built to understand context and language at their core. That’s what we’re seeing with Generative AI – and of course we’re going to talk about all the ways it’s expanded from that. But the ability for me, as an educator who is time-poor and trying to focus on my educational outcomes, to say, here is a tool that can not only genuinely save time, but allow me to create, to remix, to differentiate, to research – or even to provide tools for my students that allow them to do the same – that really is a significant shifting moment for us. And we will spend the next half hour talking about the benefits of that, the risks of that – there is so much to unpack. But at its core, that’s what excites me about the Gen AI movement. It is really the next stepping stone in how we can provide quality at scale, and how we can use computing power to help us reach that goal.

DMcF – Brilliant, and, as you’ve investigated various technologies, you’ve been great in terms of sharing that on LinkedIn and elsewhere. I recall a few videos where you said, ‘My mind has just been blown by the latest technology.’ And then a couple of months passed and you’re like, ‘That’s ancient history. And now look at this, what this new tech can do’. So Gen AI is moving so quickly… there’s so many… we obviously hear about the big names like ChatGPT and OpenAI, but there’s so much else happening out there. And you and I focus on EdTech for a living, but how do teachers, parents, administrators – how should they keep up? How can they keep up?

JR – I like to get the easy questions out first. Thanks for that. I appreciate that…

DMcF – And world peace is up next, so just a heads up…

JR – No, look, my honest and genuine response is a combination of ‘I don’t know’ and yet also ‘It’s a bit easier than we think’, which is a weird fence to try and sit on between those two lines. The speed of movement has been astronomical. It is really weird to think that… again, AI has been around for years; this is not a new thing. We talk about AI as a new thing, but we’ve been using AI for years – look at Grammarly. It’s a wonderful example of a text-based AI that has helped save many of my emails from being a dog’s breakfast. You know, look at Apple Maps, Google Maps. We’ve had AI and data-centered AI tools in active use for over a decade. It’s been around since the 70s. What changed? November a year ago. So again, it’s really important to put this date on it. It has now been, what, 13, 14 months since OpenAI released this little product where even they themselves came out and said, ‘We didn’t think it would have that big an impact.’ They released a product that suddenly made the world take notice of what Generative AI could do – and we’ll unpack that in just a second. But when the world suddenly grasped on to this and went, wow, that’s the capability of these tools, what we have seen in those last 13, 14 months of growth has been astronomical. Every day there’s another five new tools; of those five, four are remixes, reused ideas. But maybe one of them just changes everything. So how do we keep up? To get back to your question: if you’re lucky enough to be people like you and me, we are in the networks. You know, I’m looking at my LinkedIn; I have set up numerous feeds and searches – and yes, I’m using AI to help me do it – that help me keep track of what’s going on and to filter through, to try and separate the wheat from the chaff. But we are unique, we are special individuals, and I don’t say that to make ourselves sound extra special.
But, you know, we have that time and experience, and it kind of becomes part of our jobs. The good news is – and this is why I go from saying it’s really hard, with so much coming through – that education networks are built to share already, be it at organisations like Fader and ASCILITE in higher ed, organisations like My Team here in Australia in the K-12 space, and Vine and others like that. Educators are already getting together and going, ‘Hey, not only have I found this cool new tool, this is how I’m using it in my educational practice.’ And for educators, all I can say is connect to your networks. You will be absolutely amazed at what your peers are doing, and that’s how you can get yourself up to speed – let someone else do the work. Yeah, but engage with that community of practice. And in that last – again, I keep saying a year and a half, 14 months; let’s just average it to 1.5 years for the sake of argument – in that time we have seen so many different educational communities grappling with, and wrangling with, and pulling apart and putting back together, and now starting to have coherent views and coherent thoughts on AI. And I think that really is a magic point, as we come into 2024: those networks really are providing huge value to those inside of them.

Chapter 2 – Balancing Mitigated Risk and Reward in Gen AI Integration Strategies

DMcF – Well said. And I want to pull on that thread. I was fortunate enough to join a leadership group of school principals, senior teachers, and other leaders coming from low socio-economic backgrounds, and it was interesting hearing everything from one individual professing ignorance and fear – ‘I don’t know what it is’ – to someone else who’s actually circumventing, you know, the state restriction banning AI, because they know that’s what the students are doing when they get home, so as to really bring them into that world. And just as with any technology, Gen AI can be used for good or bad, or anywhere in between. And so a lot of the initial reaction was, ‘Oh well, we don’t understand it yet, let’s just ban it’ – and then, not that it goes away, but ‘Let’s study it and figure it out.’

JR – Actually, let me go back a step – I want to unpack that a little bit, because I actually want to make sure… this is a key hill I will die on – I will often defend those who have to make academic or IT decisions at enterprise levels, because they’re always the ones who cop the brunt of, you know, ‘AI just came out and the Department of Education in NSW just went, oh, we’re banning it.’ These are actually really important responses, because when you’re doing it at an enterprise level, when you have tens of thousands, if not more, students under your care, we do need to be at least more aware. Where I won’t die on the hill – and where I’ve been really pleased and grateful to see how Australia has handled it – is that those bans weren’t there to say, and I know you’re saying it jokingly, ‘we’re going to ban it till it goes away.’ That actually hasn’t been how Australia has dealt with it. It’s more ‘we’re banning it until we get a better understanding of risk, and then we’re going to slowly release it.’ Now, this is incredibly irritating for those at the coalface. And I feel for you – I have been that person who actively circumvents those processes. I’m not here to advocate for that. But the reason I do think it’s important to recognise this is that, you know, these things are going so fast… let’s talk about the most famous example, which happened in the first couple of months of ChatGPT. Someone at Samsung decided to upload a load of private corporate data into the GPT because they were going to do some really cool evaluation with it, not realising that the terms and conditions very openly stated that anything shared becomes part of the model – and that data then suddenly became open. The person at Samsung didn’t intentionally choose to make that decision. ChatGPT wasn’t doing anything wrong; no one had quite understood how the tool works yet. So that’s why I kind of interrupted.
So I think it’s important, when we have these levels of bureaucracy, that yes, we do ban – but where things go, for better or for worse, is that organisations either ban until it goes away, and we all know there are organisations of that ilk; they are the ones we try to avoid – or they quickly try to move to understand. And in Australia, the speed of the Department of Education, even the speed of our universities… universities are renowned for – I will never name this person, but years ago I worked for a university here in Sydney. I got pulled into a DVC’s office where I was being berated for my job performance. I was horrified, I was sat down, and these are the exact words, I kid you not: ‘Julian, there is a speed of universities and a speed of icebergs, and I shouldn’t have to tell you which is faster.’ I was being berated for speed. Now, we don’t ever want to get back to that – but universities, which are renowned for not working at speed, have done incredible things. And I will highlight the University of Sydney, Macquarie University and Hong Kong University as having done fantastic things. So again, I know it’s a slight segue, but yes, I do think slowing down in this world is critically important. And as we talk about risks, I want to dive back into that a bit more. But banning it until it goes away? Well, that’s a different story altogether. And for those people, well, ‘Darwinian evolution will take care of itself.’

DMcF – Well said. And that’s a fantastic clarification there. And, you know, as you’ve said, with time, at least here in Australia we now have the Australian Framework for Generative AI in schools. So there’s been thought put into planning, and a number of the blanket bans have since been loosened. And certainly when we’re talking about primary or secondary school children, privacy and security especially have to be paramount. So with this evolution, do you think they’re getting it right? You know, this balance of access, and control, and safety?

JR – I look to the sky because I don’t think there is an answer to that. I think that overall we’re being cautious, and I think that is the correct response. As an AI advocate – anybody who follows my LinkedIn feed, I apologise for the ongoing onslaught of ‘Oh my god, check out this fantastic tool’ that you see there – and as a heavy user of these tools, I have been very glad to see the cautious nature of the approach. That being said, there is good caution, and then there’s an overzealous caution; we’ve seen that in some markets overseas. In Australia, I think we have struck a relatively strong balance. Another piece of interesting history that most people don’t know: it was only in the 2000s – people who watch this after the fact can Google the exact date – that Australian law made it legal to record TV onto a VCR. We had had this tool since the 1970s and 1980s, and it was literally the 2000s before copyright law caught up. How can legal statutes and policy possibly keep up with change? The VCR was around for twenty-odd years before the law finally got there – everybody was doing it, no one was being charged – but it didn’t change the fact that the law took that long to catch up. So how do you have a chance of catching up when the tool is evolving on literally a daily basis? Again, I ramble, but being cautious is important when you’ve got no chance of your law keeping up – and law protects society, right? In theory. But if the laws that protect you can’t keep up, then caution is warranted to make sure that what we are doing is mitigated risk. And that’s a term you’ll hear me use a lot. It’s not about risk – every tool has a risk. The Internet has risk. My computer has risk. My dog has risk. It’s all about the mitigated risk versus reward. And I think we’re doing that well in this country at the moment.

DMcF – Right. And so with that context, how should an institution – be it a school, a university, an RTO or some other provider – approach AI?

JR – All right. Look, again, the old adage is true: there is no right answer to this question, but there are many wrong ones. So let’s talk about the wrong ones first. The wrong way to look at it is to go, ‘Oh, wow, I found this website, There’s An AI For That’ – which, by the way, is an awesome website, check it out, and every day it tells me about 15 new tools I can check out – ‘so I’m going to go and load all of these tools and play with all of them.’ Do not go down that rabbit hole of ‘I have just walked into a Bunnings of AI and I’m going to go down every aisle and just buy everything, and see what’s going to happen.’ Joking about it that way makes it obvious why it’s a bad methodology: you don’t quite know what you’re playing with, you don’t know what it is you’re trying to solve, you’re getting caught by the shiny, you’re being caught by a marketing message, and you don’t understand risk. That’s why just grabbing random tools is something I would never advocate. The good way of doing it is to think about the problems you’re trying to solve first. What is a problem that you have that an AI could potentially help you solve? – ‘Oh, I really want to take this resource and differentiate it, because I’ve got three students with varying needs and it needs to be explained in different ways. I don’t normally have time to rewrite this resource in three different ways. Maybe that’s something an AI could help me with.’ Or, ‘I really want to go multimodal. I want to be able to take my written form, because I’m not good at speaking’ – which is obviously not me; I’m saying this as if I were somebody else, speaking of my ‘weakness’, but not everybody finds it easy. ‘Wait a second, AI can take the written word and turn it into a human-sounding voice. Maybe that’s a problem it can solve.’ Please always focus on: what is the challenge that you are trying to solve?
Out of that – okay, great – what are some AI tools that may be able to help you? That’s where you start your exploration. The last thing is that you don’t have to be the first. Being the first is a skill set that not everybody has. Someone who’s going to be the first, and do it well, is someone who is able to take their passion for what they’re doing, put it aside, and critically think about risk – what are the pros and cons of this? And if that is you listening to this, awesome – please be that person. But then share. And that’s where we started this conversation. If you are not that person who is good at being the first, and you recognise that you get caught up in the shiny thing – which is not a criticism; I bet it’s an awesome thing that you get passionate and want to explore – then go to your networks. Find the networks where you can go, ‘All right, I’ve found this tool, I think it really solves my problem. Can you help me verify that it’s not going to have issues?’ I don’t know if you’d be willing to let me segue a little off our script, because I think this is where we should talk about: what are the issues of AI? I think some are obvious, and some are not.

DMcF – Yes, please.

Chapter 3: Navigating Hidden Risks from Gen AI and its Adoption in Education Settings

JR – So when we talk about the risks of AI, first of all everybody talks about, you know, it hallucinates. And for those of you who are still catching up on terminology, ‘hallucination’ is an interesting word: it’s when the AI didn’t know, so it just made something up – but stated it as fact. We always like to throw this company under the bus – they have fixed it since then – but the first high-profile case was when Google launched Bard on a big keynote stage, live streaming it out to the world. They asked Bard a question along the lines of, what discovered the first exoplanet? And it said that the James Webb telescope did. It didn’t. It completely made up the result and presented it as fact. And that ‘fact’ was repeated by the speaker, who of course isn’t an astrophysicist, who then went, ‘Oh, see, look how awesome Bard is.’ So that’s hallucination – and it’s the one everybody talks about, so we won’t harp on about it. It’s just that with an AI, people think, ah, it’s a computer, it’s going to be right. No, it won’t.

But the next risks are actually a lot more subtle and a lot more dangerous. When we talk about Gen AI – Generative AI – it’s being trained on how things work. It’s been trained on datasets so it can find patterns. If I was trying to explain it in the simplest of terms: if I was to say, ‘Hey, here is a ladybug and here is a red Volkswagen – how are these two things similar?’ Well, on an X-Y plot I could go, ‘This axis is colour and this axis is organic versus non-organic.’ They’re both red, so they’re up here, but they’re on different ends of the other spectrum. And we call these vectors, right? With AI, it’s not a two-dimensional X-Y vector – we’re looking at 500, 800, in fact with each new AI I would say now thousands of dimensions, so that when it combines two things it understands exactly what the context is between those two objects. And this is what makes Gen AI so effective: as I type in my very badly typed, grammatically incorrect and usually tonally inconsistent sentence, it’s able to take that garbage, refactor it, understand it as a question, turn it into a proper question in its own terms, and provide me an answer. That is what Generative AI is doing. And that’s why, when you ask it to ‘make this sound like somebody else’, it knows what that somebody else sounds like – but how? Because there was data it was trained on. So why do I make a big deal about the data? Well, many of our AIs were trained on a wonderful, even-keeled, balanced, clean data source called the Internet.
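(Editor’s aside: the ladybug-and-Volkswagen analogy above can be made concrete with a toy sketch in Python. The dimension names and scores below are invented purely for illustration – real models learn thousands of dimensions from data rather than using hand-picked ones – but the comparison mechanism, measuring the angle between vectors, is the same basic idea.)

```python
import math

# Toy "embeddings": each item scored on hand-picked dimensions
# (redness, organic-ness, size). These numbers are invented for
# illustration; real models learn their dimensions from data.
embeddings = {
    "ladybug":        [0.9, 1.0, 0.1],
    "red Volkswagen": [0.9, 0.0, 0.8],
    "blue whale":     [0.1, 1.0, 1.0],
}

def cosine_similarity(a, b):
    """Similarity by angle: 1.0 = pointing the same way, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The ladybug and the red Volkswagen overlap on "redness" but sit at
# opposite ends of the organic/non-organic dimension.
for a, b in [("ladybug", "red Volkswagen"), ("red Volkswagen", "blue whale")]:
    print(f"{a} vs {b}: {cosine_similarity(embeddings[a], embeddings[b]):.2f}")
```

With only three dimensions the scores are crude; the point is simply that “context between two objects” is computed geometrically, from positions in a shared space.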

DMcF – We’ve got to believe everything that’s on the Internet, right?

JR – And it’s terrifying, because the Internet, as we all know, is full of fantastic information and incredibly scary information. Different AIs have tried to fix that by balancing – ‘we’re only training on these data sources’ – but all of these data sources present different biases. Now, as a teacher, I have inherent bias. Dan, as a speaker, you have inherent bias. It comes from us; it is who we are. It comes from the environment, the culture, the situation that we are in. But the good thing is that the only people impacted by my bias are my students, and that’s thankfully a small subset. The only people impacted by your bias are people who listen to you talk, which is an even smaller sub… no, I didn’t say that. No – wonderful, wonderful audience, but we all know the joke I’m trying to make here. But when you look at the Internet, what we’ve been seeing – and some of the early tests were done with things like DALL-E. DALL-E is an image generation tool, and a professor out of Hong Kong University – again, the name escapes me, my apologies, but go research this; you’ll find a presentation up on YouTube – went into Bing, which is Microsoft’s public version of ChatGPT and is connected to DALL-E. And what this professor asked is, ‘Present me a picture of something beautiful that the world hasn’t seen before.’ And it was nothing but images of landscapes and beautiful women – which, by the way, is our first bias. Is it negative? Ugh. It comes down to definitions of beauty, definitions of sexism, but okay. The terrifying part was the other end of the spectrum, where you rewrite the same question and say, ‘Present me something scary that nobody has seen before.’ And it was pictures of dark houses, and a clown, and a smiling African-American male. That is what we see. And so, listen, we talked about risk.
And that’s why I bring us down this first rabbit hole – the rabbit hole of bias. When you’re dealing with a tool that you’re expecting to generate responses in text or, these days, video, audio or visual form, its responses will carry the inherent biases of the data it came from. And you don’t know what that data is. So that’s the first, more subtle risk to talk about.

The second subtle risk is your data security. I brought this up in our very first example, and this is nowhere near where it needs to be right now. In fact, one of the reasons I use AI a lot is that I use AI on my laptop. Open source has fantastic tools; I’m using a whole bunch of AIs that aren’t web-connected, that run purely on my Mac, so nothing goes out to the web. Why? Because it means that when I’m feeding it – and I am feeding it corporate data; I use it day-to-day in my job here at Quizizz… I told Dan before we started talking here that – oh, I hate bios, but you know what, I’m just going to put my CV in and get AI to generate me an intro for this. So if you’ve come to this because you read the website and thought Julian sounds really interesting – yes, a GPT wrote that; I’ll take no credit. But again, the point I want to make is: the data you provide it – what is happening with that data? OpenAI have very openly said that all data is used to train the system. It was only just last month – and if you’re not aware of this, please go to their website – that you could actually fill out a form to state that you do not want your data being used for training. But unless you go and fill out that form and submit it, the default position is that your data is in there. Just before Christmas, I did a presentation at a university with a live demonstration on screen, where I was actually able to find somebody else’s CV by talking to ChatGPT 4.0. And not only was I able to find someone’s CV, I was able to get their personal information – their email address, their phone number, their postal address – because they had put all of this into ChatGPT, probably to help them with a job application or to rewrite something. But that data became part of a dataset that, by asking the right questions, I was able to pull up in front of an audience – which, by the way, I regret doing, because I didn’t actually expect to be that successful.
So that’s the second risk. Please be aware not just of the data you’re getting out – the ‘hallucinations’ – but of what’s happening with the data you’re putting in. Are you comfortable with that? Those are the two areas I think people don’t think enough about: bias and data security.

The last thing on data security – and then I’ll pass back to you – is what’s already happening: we’re now seeing more and more AIs able to be connected. OpenAI started this by going, ‘Oh, we’ve connected out to the Web. We can search the Web for you.’ And again, it acted like a Year 5 student when you ask them a question: it went to Google and said, ‘Oh, the first three responses will be accurate. Yeah, I’ll take that.’ And yes, it’s getting better, but that’s just connecting to the Internet. Nowadays you can connect lots of your apps. You can connect Gmail to an AI. You can connect to Microsoft, who have released Copilot, which is connected to their own form of what they call an isolated GPT – it’s on their Azure service, so it’s not leaking your data out. Again, do your research, but Microsoft is doing a really good job on this. But when you start going, ‘Oh, I’m going to connect this thing to the AI, I’m going to take my school dataset to an AI’ – do you really know what’s happening with that data? So those are the things we should be thinking about. Yes, what is it going to present to our students? That’s rather obvious. But think about your data, and think about the biases inherent in the system, because both of those things are way more dangerous than I think most people realise.

Part 2 

Chapter 4: Harnessing the Power of AI – Overcoming Challenges and Opportunities for Educators

Summary: The conversation explores the dual nature of AI in education, highlighting both its potential benefits and inherent risks. It suggests a balanced perspective on the role of AI in education, advocating for its responsible use to empower educators and improve learning outcomes.

DMcF – Yeah, wow. Very, very thought-provoking, and I would echo those concerns. So in light of that, what should teachers do? We want them to be aware – alert but not alarmed, right, as the expression goes. But we know, especially here in Australia – though it’s a global phenomenon – there’s a drastic shortage of teachers. So AI seems like a fantastic solution to help them with the system, with their administrative and other activities, and with creating plans for their courses, but also for the students, to develop more of that one-on-one individualised learning. Study after study has shown that that’s how people learn and retain. So what should teachers do? And how do they juggle all this?

JR – All right. Well, you’ve heard me. Every time I do an interview or a talk, you see this Jekyll and Hyde in me, and I always start off with Hyde, because I think Hyde is critically important. You just heard me talk about risks and things to be aware of. Now that you’ve seen my Hyde, I’m going to release my full Jekyll – because AI is so awesome. And it’s worth doing, because as you said, these are brilliant tools. They are fantastic time-saving tools, and in Australia we are actually better positioned than many countries to utilise them. We have been focused in Australia for years, in our K-12 institutions, on the importance of critical thinking. We don’t want students learning by rote. We want to get them understanding. Let’s just talk about the stereotypes for a second – yes, we’re going to talk about Bloom’s. We want them up the top in the creating, the higher order elements, right? And the great thing about having positioned our students that way is we have made them perfect for this AI revolution, because they are already being taught how to critically evaluate. They’re already being taught how to validate, verify, quantify and transform. This is brilliant, and it puts our students higher than many when it comes to utilising these tools, as long as they’re being made aware of the things we talked about at the beginning. When it comes back to tools, well, this is where ‘be alert, not alarmed’ is a great statement. We don’t want to be alarmed, but we can also just do things smart. Now, I’m going to say something that I can’t believe I’m saying out loud, but this is where I do…

DMcF – This isn’t being recorded [joking].

JR – None of these things will come back to haunt me at any point. This is where I do have to have some trust in the big players. Now, what I mean by trust in the big players is that Microsoft has actually been… I’m an Apple person, I’m an open-source person. Hating Microsoft is just part of my DNA, and with [Steve] Ballmer in charge, it was so easy. But I will not stop saying how impressed I am by how the researchers and engineering teams at Microsoft have been working, and how they’ve been saying, “Hey, these tools are really powerful, really good, but we’re going to allow you to create your own isolated areas. We’re going to create tools like Copilot, we’re going to make them accessible for educators, but we’re going to put in a whole bunch of safeguards, and you can control the level of creativity that this thing spits out, so you can control its degree of bias, its degree of flavour,” for lack of a better word. These are fantastic. Apple are also moving towards AI. They’re doing it very slowly, they’re doing it cautiously. Apple are renowned not for being first to market, but for trying to be right to market. I’m not here to say Apple is better than anything else. My point is that you should always read your terms and conditions. We all say that, and yet nobody does – let’s be honest on that. So what I have been doing is having a degree of trust in the larger companies. I did not trust OpenAI at all at the beginning, and it turned out to be for good reason. Not that they are a bad company – again, first mover position – but they made lots of mistakes, and they’ve learned from those mistakes, and they now have a more informed board and they’re trying to move a bit slower. But when I deal with the larger companies, I feel safer.
So, you know, when you work with companies – and I will happily say Edalex is part of it, and Microsoft, and Google, and all these companies – if you work with brands that you can trust, that have proven their intent and proven their capability, then that is always where I’d recommend you start, especially, back to the joke I’ve already covered, if you’re not the kind of person who’s good at being first and bringing a critical eye to it.

Secondly, make sure you teach your students. There is so much out there. When we first banned it – and again, I’ve already come out and said I’ll die on the hill saying that was the right thing to do – it didn’t stop students from using it. And all of a sudden we saw inequality in the market, because private schools started doing more with it first while state schools didn’t have access to it; this university did, but that one didn’t. We started seeing inequity in the market. So students went, I don’t want to wait for you to make it equitable. I want to have the same experience everybody else does, so I’m just going to do it anyway. We can’t stop that. Anybody who thinks they can is the same person saying, “Oh yeah, we’re going to lock down our wireless network at the organisation, that way nobody will access things they’re not supposed to,” forgetting that every student just gets around it. Thankfully, as a whole, leaders have gotten a lot smarter about this. So we can’t ban. We can be reticent, we can be conservative, we can be informed. But then we also have to ensure that we teach our students, and again, it comes back to the fact that I think our students in Australia are right up there because of our focus on critical thinking. So now we just have to give them the same kind of talk I’m giving you now: ‘Okay, how do I evaluate the AI tools, and how do I understand the risks of these tools?’ And then, great, off you go. If you think it’s safe, great, use it. If you don’t think it’s safe, don’t use it. If you’re not sure, ask. It’s simple 101 stuff, and yet we’re not reinforcing it. So that’s really where I try to sit on it. But any idea of ‘I’m just going to tell them no, and if they don’t listen, I’ll say no again’ – look, that’s not going to work.

And let’s finish up this rambling with this last point, because I think it’s important. Over the last year and a half, we have had numerous analogies for where we are in the world. I’ve heard so many people say AI is the next calculator moment. This is not a calculator moment. I purport – and I want to make this really clear – that this is a Gutenberg press moment. For those of you who aren’t familiar, the Gutenberg press is a very famous device; it’s how we started printing our first books. We had the idea of being able to place type, pull the press down and print. It was a technology that significantly shifted society. I’m not saying the Renaissance happened because of the Gutenberg press, but the Renaissance would not have happened without the press. This piece of technology allowed for the faster sharing of information; it allowed for education, where before, some things were handed around as scrolls. My old joke is a monk crying over a desk and another person going, ‘Oh no, the copier I had has broken down.’ We’ve moved on from those days to being able to mass print and share information. Generative AI is causing that kind of societal shift right now, and it is happening faster than ever before in history. We are seeing tools that are replacing not jobs but careers. Copywriting is already a dying art. What happens when a computer can go through and rewrite copy for me? But is this evil? No, because it’s also turbocharging those who use it, especially where we are right now. Right now, most people’s jobs are safe. Do you know why? AI only works if you can ask it the right questions, and most people don’t yet know how to ask. A number of people have come to me and said, Julian, can you give us a quote for an e-learning course? And I might provide a quote of X and they’ll go, ‘Oh, we don’t want to pay that much. We can just go and do it ourselves.’
And I’m like, ‘That’s fine.’ But of course they don’t have the skills and the knowledge: what is educational theory? How should we format these objects? How do we go multimodal? All the things that we, as e-learning instructional designers, know as second nature. Until someone can ask exactly the right question, the AI doesn’t know how to provide the right response. And our AIs are kind of at that level right now. But we’re seeing that shift, and it’s already happening fast. Every week in an Australian newspaper we see another job being let go because it’s being replaced by an AI tool. What I will say, so as not to be doom and gloom: an AI will not replace you, whoever you are listening to this – I’ll guarantee you that. But a person who knows how to use an AI very well might.

DMcF – Yes – and you know, while it didn’t have the same impact, there was a similar question: ‘Well, if my content is in the LMS, do they still need me as a teacher? Or if my lecture is recorded, am I going to be replaced by that lecture capture system?’ Right?

JR – Let me make this personal, because this is one of the reasons I joined Quizizz. And I don’t mind saying this out loud – I’m not here to say, ‘Hey, buy our product, this is awesome.’ That is not the intent of this conversation. What attracted me to this company is that I think they have the balance right. See, Quizizz is not replacing the teacher; it is there to augment. What Quizizz does is allow you to point to something like a YouTube video or one of your documents, and it will go through and create a fun, engaging, formative assessment out of that material for you. Or I can say, ‘Actually, here’s a quiz that I’ve written in multiple choice questions. My students aren’t resonating well with that,’ and it will change it into different question types, maybe matching and something else. I can say, ‘Take this quiz, but do it in the tone of William Shakespeare,’ and it’ll take your questions and rewrite them. None of these things are replacing the teacher: you still have to know what quiz you wanted to create in the first place, and what knowledge you are trying to teach or assess through this quiz. That comes from you. But where it used to take you maybe half an hour to build it, now you put a lesson plan in, it spits questions out, and you’ve just iterated three or four of them. That, for me, is where AI is brilliant today, and it solves that first problem I mentioned at the beginning, which to me has always been the EdTech question: how do we do quality education at scale? Ten PDFs and a quiz was not quality education, but it’s what we had time for. And I never berated somebody who started that way. I berated people who stayed that way – and I fanged them very hard – but I never berated anyone for why they started that way.
And the same with AI: where we start is one thing, but how do we use these tools? Great – I can now take a resource that I’ve got and say, actually, can you break this down into lower order thinking, or can you give me four other examples of this but in a different way? That is huge for us as educators, and allows us to focus on what we want to do, which is more quality time with our students. And that is moving from doom and gloom to the good stuff.

If you’re not using AI right now to differentiate and to help provide multimodal content, you should be. I’m going to throw out some fantastic tools that I use, like ElevenLabs, which goes through and takes any document and turns it into my voice, with tone and passion and intonation – you could not tell it wasn’t me unless I was talking right next to it. I use another tool called HeyGen, which can take any script I put into it and generate a video of me. If you haven’t, just Google ‘HeyGen’. It’s both exciting and terrifying at the same time, but I have used it to create videos of instructional material in Spanish. I don’t speak Spanish – I used an AI to generate the Spanish, then put it into HeyGen, and it created a video of me talking in Spanish that you could not tell was not me. I find it is enabling me to provide better education than I could have before. So yeah, that’s where I think the excitement is, and where that social shift is coming from. It’s enabling us to do more, as long as, again, we make sure we find the right balance.

DMcF – Right, and having seen the video of you speaking fluently in Spanish, then jumping to German, and then Mandarin – knowing you have no knowledge of those languages – it is both exhilarating and awe-inspiring, but also terrifying.

Chapter 5: How to Ride the AI Wave – Crafting EdTech Solutions That Solve Real Problems

Summary: The conversation delves into the role of AI in education, exploring its potential benefits and pitfalls. It starts with a discussion on the importance of AI tools providing genuine solutions to real problems in education. JR stresses the need for caution in AI adoption, highlighting the importance of mitigating risks and ensuring that AI solutions truly address genuine educational needs.

DMcF – I like that. And so, expanding on that theme, given your experience at different EdTechs, and mine as well, putting our EdTech platform provider hats on: what are the good uses of AI, and what are poor choices? And I say that in the context of a well-respected colleague and friend in the Australian EdTech sector who has said every EdTech provider has to be in AI – if you’re not in AI, then you’re dead. So a number of companies are whitewashing and saying, ‘Oh, of course we’ve got AI, we’re all about AI,’ just so they can say they have AI in their platform. So, where do you see it as good, practical and helpful, and where is it potentially harmful?

JR – Well, let me start off with a short, quick response, which is: it’s useful if it provides a genuine solution to a genuine problem. That’s what a good EdTech provider does. Every now and again we’ll see that random tool come out that looks pretty cool, but it didn’t really solve a genuine problem, and it doesn’t last. The only thing that works is solving genuine problems with genuine solutions, matched with an appropriate price point – as I said, we’re EdTech providers, that’s our role, we have to pay our staff. So: a genuine problem, with a genuine solution, at a price point that equates to the market. That’s the short answer. But if we unpack that a little bit, Quizizz, I think, is very much on the right path. One of the reasons I joined them is that teachers regularly build formative assessments – it’s a standard part of education – and the engineering team here do fantastic things trying to solve problems around that. For example: what happens if not all the students have a device? You can print out cards, basically QR codes, that the student can turn to show A, B, C or D, and the teacher just points their phone at the room. It captures all the codes at once and gives you the responses, so you can still build this as an online assessment but have it work in an offline classroom. That solves a real problem, not only where students don’t have access to technology, but where I don’t want every student with an iPad on their desk getting completely distracted. For those who don’t know, my background is that I’m a history teacher by trade. I love teaching history, but history is all about our artefacts and, more importantly, about context, and I’d love to try to contextualise what’s happening in the current day. I’m going to show my age: when I was teaching history, we had Gulf War 2 going on, right?
So I’d show all these wonderful examples of propaganda – I’d show you an Al Jazeera, a BBC and a Fox News. Which one is right? You go, I don’t know, right? But the great thing about the AI tools is I can now work with the things that are happening today. My daughter, a 15 year old, was asking me questions about the Gaza conflict that I could not unpack – that I was too afraid to unpack, let alone informed enough to unpack. But I found a fantastic Behind The News – BTN – piece. I know we have a global audience: BTN is run by the ABC, Australia’s government-funded broadcaster, and it’s a news show for students. They did this fantastic 15-minute package on the history of the Gaza conflict. So not only did I provide that to her, I then connected Quizizz to it and had it generate a quick formative test that allowed her to reinforce that knowledge. All I had to do was find this great video and point this tool at it. I didn’t have to worry about it hallucinating, because it was just going and grabbing the content. I didn’t have to worry about data, because my daughter wasn’t in the system; she was just logged in as an anonymous user. So I was able to do all these cool things.

So, back to the EdTech providers. First of all, I’ll repeat my earlier comment: you’re not going to be replaced by an AI, but you’ll be replaced by somebody who knows how to use an AI – and yes, that’s going to impact us as EdTechs. So whatever problem our tool is actually solving – be it a timetabling solution, knowledge creation, an LMS, engagement, whatever it might be – how can you find the right hammer that helps your existing product, makes it more valuable and makes it more efficient? And, by the way, EdTech – okay, let’s put the black hat on – we also want to save money. So how can I also use this to optimise our workplace and provide a better product for our users? That’s how we should be looking at it. And the answer to your question is literally just that turned on its head. If you’re doing AI but not solving a genuine problem, you’re not going to have long-term or even mid-term success. You’ll have a flash-in-the-pan moment, perhaps, and you will disappear. If you have a tool and you’re not actually thinking about it from a data perspective, about how you’re protecting your users – which you should be doing anyway – chances are you’re going to be sunk. But I do agree with your opening statement: everybody in society, for better or for worse, is being impacted by AI. How we choose to leverage it, utilise it, take advantage of it and risk-manage it will be the secret to our success.

Chapter 6: AI’s Revolutionary Impact on Education and Society in the Future

Summary: In the conversation DMcF and JR discuss the future of AI and its implications for society and education. JR reflects on the rapid advancement of AI, emphasising the need for society to adapt to its increasing capabilities. He acknowledges the potential risks but remains optimistic about AI’s ability to enhance human experiences and education.

DMcF – Brilliant. There are so many thoughts and threads I want to pull on, Julian, but I’m also cognizant of your time, so yes, we may have to make a series of these. So maybe I’ll ask you two final questions. I like to ask people where they see things five years from now. As you pointed out, it was only 14 months to a year and a half ago that ChatGPT broke onto the global scene, so five years is maybe too long in terms of AI. But where do you think AI will be, and where will we be as a society with AI, in three years?

JR – No, that is a really terrifying question. If you had gone back – let me pick a time and place I can remember: Covid has just finished, we’re finally allowed out of our houses, we’re going back to the office – and I were to ask, ‘How do you think your workplace is going to change?’, everybody’s focus would have been on working from home, because that was the immediate impact right then and there. And we’ve had a huge societal shift and an ongoing debate around what we should be doing. The reason I bring up that example is that your view of the societal shift was so narrow, because that was the immediate change you had seen and where you saw it evolving. Right now, when we ask where we see AI going, it’s very easy to follow the logical path of, well, it’s going to keep getting smarter and smarter with each new version. And there are a lot more AIs out there than GPT – there are literally thousands of them: open source, corporate models, Bard, Bing, Midjourney, DALL-E. But let’s just talk about ChatGPT for the sake of this discussion. ChatGPT’s growth and speed are increasing at a faster-than-exponential rate: its capability in terms of data sources, data querying, speed and price to use. Its pricing is based on what are called tokens – we’re not going to unpack that now – but they released GPT-4, and then less than a year later, I think it was only something like seven or eight months, they released GPT-4 Turbo, which made it significantly cheaper. Looking at this fast exponential rate, it’s not too hard to say we’re going to be in a world where the AI will be smarter than me within the next two years. And that’s really scary even for me to say out loud. But this will be huge – and just because something is smarter doesn’t mean it knows what to do with it. I do not believe we’re heading into Terminator, end of the world, where it gains sentience.
By the way, intelligence and sentience are two very distinctly different things. When you speak to AI researchers, they’ll tell you that you do not gain emotion by becoming more intelligent. In fact, many will argue that the more intelligent you become, the less emotional you are – sorry, maths teachers, I’m looking at you. No, we’re just playing to the stereotypes, and everyone watching this knows that was a very badly made joke. But when we do look at the research, emotion doesn’t come out of intelligence. And yet in those end-of-the-world scenarios – the machine is upset that the world did these things to it, and now it’s going to rise up – we are talking about an emotional response. I am more optimistic than that.

The other thing I’m optimistic about is that society is catching up. We will never be ahead, and we have to recognise that. But we are already asking better questions about how we do these things than we did just a year ago, and the year before that. So these things are going to get more and more intelligent. They will be able to do more and more things. We’re going to have more and more risk of not being able to tell what video is real, what text is real, but we’re also going to be able to do really, really exciting things with them. What I find really exciting is the changes that you don’t expect. When AI works well, it’s AI that you don’t see. Again, let’s move to another stupid analogy: the best special effects are the special effects that you don’t see. When I watch Star Wars, I go, oh wow, that was amazing, but I see every special effect – that lightsaber is obviously not real. But when I watch any TV show now, it doesn’t matter what you’re watching: that set is not real, it’s probably green screen, and in fact what that person’s holding probably wasn’t real. They’re drinking water? They’re probably not. You literally don’t see the special effects anymore. They have just become part of the experience. That’s what we’re starting to see now. And by the way, as we have already said, AI is not new. We’ve had lots of AI quietly working in the background. Our traffic lights are driven by an AI system that keeps them optimised so we have less congestion. That’s a silent, transparent AI. As these things get better, we’re going to see more and more of them – self-driving cars, for example. As much as we look at Tesla at the moment and go, ‘I wouldn’t touch that with a barge pole,’ it will get so much better. Old AI was brute-force data. It’s so hard to unpack, and I know this was only meant to be a short question, but I’ll just keep going.
Where old AI brute-forced data, generative AI in all its forms is able to understand context, and to understand and better iterate upon it. So we’re going to see education systems, coaching systems – look at what Khan Academy are launching with Khanmigo, for those who haven’t followed it. That is not replacing a teacher. What Khanmigo brings is this: what if we could create a tutor on demand, built around the Socratic method? It’s not going to provide answers to the student, but it’s connected to an entire database of academic excellence – in this case, what they say is the Khan Academy database. So we’ve got this hopefully non-biased, or relatively non-biased, data set, and a student can ask, how do I do this? And an AI, at any hour of the day, can not just take their question but contextualise it back to them in a way that suits their language, their tone, their level of understanding, and in a Socratic method that leads them to a response. By the way, that is not replacing a teacher; that’s replacing part of a teacher’s role. And that’s a really important distinction to finish on. We will lose lots of parts of our role. That is not a negative, because when I lose that ability – let’s say it can answer those questions accurately for that student – I can now spend more time with that student one-on-one, talking about what drives them and finding out how to make a really good quality education experience. That was the dream of the Industrial Revolution: robots and machinery will make our lives easier, and they have. If there was a zombie apocalypse tomorrow, I’m screwed, because I don’t know how to fix a car, I don’t know how to get an engine working, I don’t even know how to syphon fuel out of a tank. Those are all skills that we lost because we got the comfort of the Industrial Revolution.
Our new AI revolution is going to bring a new level of comfort, and with it will come lost skills that make us even more prone to a zombie apocalypse. But at the same time, it will bring a whole bunch of benefits that will hopefully ensure we never have the zombie apocalypse in the first place, because we’ve already solved all the issues that would have caused the zombies to come. And that is my positive end to this: our intelligence, our capacity, will significantly increase. We have to be wary, we have to be smart, we have to be informed. But as long as we do those things, we will shed parts of the role, which will allow us to focus on the parts that are most important to us. And that is what makes us human.

DMcF – Wonderfully said, and how quickly we’ll forget what we used to have to do and say, ‘Oh, you know, you remember when we used to have to do the things that uh…’

JR – Have you talked to your child about… Remember when you used to have to read an encyclopaedia? Remember when you had to look up an atlas? Remember when I used to have to make 15 phone calls before I went to the city to meet up with my friends? We had all of that, but it all disappeared, right? The first season of 24 doesn’t work when you introduce a mobile phone to it.

DMcF – I’ll have to go back to that – yes.

JR – Yeah. Okay. It was terrifying. And you watch it and just go, why didn’t they just call somebody? The show would have been over in one hour.

DMcF – 24 seconds.

JR – Yes, 24 seconds. That would have been the entire show. But it’s a fun way to finish up. No, as long as technology never replaces humanity – and I know that’s a weird thing to say – but humanity is our connection. It’s our wants, it’s our love, it’s our needs. These are the things that drive us as a species, as a society, as a culture that shapes our environment over time, right? Machines can help or hinder us on that path. War machines have hindered us. Tanks: brilliant invention, awful invention. The Internet: you could probably say a bit of both, right? But it’s how we as a society move with it. And I do have a belief in society – that doesn’t mean I believe in government, but society shapes its government. I have a huge belief in society, and I am genuinely excited about where the future will take us. But yeah, I think AI will become more transparent. It will just change more and more of everyday life, and allow us to focus on what we call those more important things, whatever that happens to be, for you, me and society as a whole.

DMcF – Wonderful. And I’m thinking about my seven- and eight-year-old children and your 15-year-old daughter – what the future holds for them is exciting, as long as we strike that balance. And I love that sentiment you outlined there, Julian, around society driving government, and our decisions as humans. Wonderful! Thank you. There’s so much in that session. I really enjoyed our discussion, as always.

JR – And thank you for having me on.

DMcF – My pleasure, thanks again, and I’m sure I will see you at a booth at a conference somewhere, and we’ll continue this dialogue. And now I have something on record, so three years from now we can revisit this, in 2027.

JR – And when my household has locked me in a room and refused to let me out for my own safety, you can say, ‘Julian you were wrong.’

DMcF – The zombies outside beg to differ.

JR – Yeah, beg to differ. So, look, I will end on this: if anybody is excited about this, I am a passionate sharer of knowledge, and I love communities of practice. I use the handle @eduridden, so if you’re on X (formerly Twitter), Threads or LinkedIn, feel free to find me there. It’s a fascinating space to watch, so make sure you find communities of practice that are talking about these things – hopefully you’ll discover some of that here, but discover some of the others as well. I can’t wait to see people join us.

DMcF – Wonderful. We’ll share those details. Thank you for your time today, and more broadly, thank you for the contributions you make back to that community; they help inform me and so many others around the world. Great catching up again, Julian. Thank you so much.

JR – Many thanks Dan.
