I'm NOT a Robot

Ep.6 - The Human Impact of AI in Hiring with Hilke Schellmann

AI Academy Season 1 Episode 6



In the sixth episode of "I'm Not a Robot," we discuss the use of artificial intelligence (AI) in the hiring process with Hilke Schellmann. 

Hilke is an Emmy award-winning investigative journalist, author of "The Algorithm," and an Assistant Professor of Journalism at New York University. 

She is a keynote speaker known for her critical examination of AI's ethical implications. In this episode, Schellmann shares her journey from being a singing enthusiast to becoming a pivotal voice in the discourse on AI and employment practices.

Through her narrative, we learn about how AI is reshaping hiring and uncovering the biases present in automated systems. Schellmann stresses the importance of transparency, regulation, and human-centered approaches in AI deployment. Her insights not only highlight the challenges but also provide a hopeful path toward leveraging AI to enhance human potential in the workforce.

Join us as we explore the intersection of technology, ethics, and the future of work in a conversation that is enlightening and a testament to the human spirit's resilience against the odds of automation.

Please complete the captcha before listening, and let's keep in touch:

Stay up to date with our newsletter -> https://bit.ly/3RO3oiK
Follow us on Instagram -> https://bit.ly/3RrY1V3
Our website -> https://bit.ly/3H5NCt



 Hello everyone. Welcome to this episode of I'm Not a Robot. Today, I have a very special guest with me who spent the past 20 years researching and experimenting with methods that help connect what we know about how humans learn and how we design learning experiences.  Dr. Philippa Hartmann is now on a mission to help the world design better learning experiences powered by AI, and her take on the topic has been very interesting.

Hi, Phil, welcome. Thank you, Helen. And yeah, uh, it's so great to be here. Thank you for the invitation.  Thank you for coming. I want to start by resolving your CAPTCHA. This is a new thing that we started and, uh, I want to make sure that I'm talking to a human considering all the technology that's out there now.

Sora, all the videos that, um, can be modified by AI. I want to make sure that you're human. So can you tell me something very human about yourself?  I love this, by the way. Um, also just as a side note, uh, quite often I fail those CAPTCHA things, you know, when you have to say what is a bridge and what isn't, and the number of times that I get it wrong and I think, so I'm not actually sure.

Like, there's a caveat. Am I human? I'm pretty confident I am, but CAPTCHA sometimes does not think I am. Uh, but yeah, I think the thing that makes me human, maybe, is my complete inability to resist, um, a dog called Mabel, who is a little, uh, Cocker Spaniel. She's now, uh, I think 11 years old, so she's getting on a bit, but, uh, she belongs to my brother, I dog sit her a lot, and I am just so enchanted by her. Uh, famously, I'm very distracted by work.

Uh, I find it very difficult to not talk about it and not think about it. But the second that Mabel arrives, uh, I am definitely a human being, uh, much more focused on, uh, I don't know, throwing carrots and running in the woods than I am, you know, focused on thinking about learning and education. So hopefully that's, uh, proof enough.

Um, I'm certainly reassured by the fact that I, uh, yeah, can't resist a Cocker Spaniel.  Okay. I think that was a very decent response, but also I understand what you mean when you're solving a CAPTCHA, sometimes there is maybe a bridge and there's just a little part of it. And I'm like, am I going to choose that part, that square?

Is it, does it count? I think so, but yeah, it can be quite stressful. Our CAPTCHA is a bit more easygoing.  Very stressful. And now I think, oh, I'm either a robot or I just overthink everything, or, maybe better, I'm just a very badly programmed robot. So yeah. Uh, as far as I know, I am human.  Great, I'm convinced.

So, um, I want to start by talking about learning a little bit. I think you're the best person I can ask this question to, a question that has been bugging me for so many years. How do we define learning?  Hmm. Yeah. It's a great, I mean, it's a big question. It's a great question. Uh, I think where I would start on this is that I think we've, uh, we've not been very good at answering this question for many, many years.

Uh, usually we think if we, if we think about like our own education experiences, we think we've learned something when we get a good grade,  uh, you know, get a grade A, get a first class degree, whatever it is. It's like, bingo, I learned. In fact, what we tend to do in those situations is prove that we can recall things.

Uh, you know, typically an exam or an essay, uh, tests our ability to memorize. I think particularly, you know, in higher education, post-secondary type education, learning also means, you know, being able to have an opinion, to back up what you're saying, and this kind of thing. So there's more skills to it, but generally, uh, I think definitions of learning have been defined by how we assess it.

And usually that means memorization. Whereas I would say that if we think about learning more broadly conceived, you know, things like, I don't know, learning in life, or becoming more wise, or learning from experiences, uh, it's much more complex and it's very difficult to measure. This is another reason that I think we haven't really defined what learning is: because even if we define it, it's very difficult to measure.

But I think ultimately the goal of an instructional designer, learning designer, an educator, uh, like me and my brilliant community, is always to change the way that people think, on a level that is bigger than just a topic. So yes, I want you to be able to recall and understand these concepts or whatever, but that's in service of actually the ability to, for example, uh, challenge assumptions, ask great questions, um, demand evidence for certain claims, and this kind of thing.

So in my mind, the reason that I've been motivated to be in the world of learning is because I think ultimately learning is teaching people to be, um, I don't know, informed and critical members of, like, democratic systems. It's like, you need to be able to question what's right and what's wrong, this kind of thing.

So I guess your answer is the ability to think and behave in ways which are, uh, which are of value to the individual. But actually, if you were to, you know, ask Google or ChatGPT to define what learning is, I think it would be much more about, you know, education systems and exams and recall of information.

Okay. No, I think you summed it up quite well. And something that we always need to keep in mind is that learning is really based on the context, as you mentioned: what kind of information you need to use in which kind of situations. It can be really different in different contexts. And, um, how do you think AI can improve this process?

Yeah, I mean, this is the question I've been asking for a while: if and how technology in general, and AI specifically, can help people on the ground, ultimately, to learn things better. And I think we've tried to solve this problem in the past, uh, by, well, the answer that we've come up with in the past maybe is, oh, we can use technology to increase access to information.

So to go back to that model of like giving access to content and absorbing information and then being able to regurgitate it on demand, you know, be the person in the pub quiz that gets all the answers, right?  Uh, that conceptualization of learning has led to us using technology basically as a tool for transmission. 

So we've used technology over the past 30 years or so in education, including AI, I should say, it's not entirely new, to, as I say, transmit. So if you picture, for example, the massive open online course, I was incredibly excited about this. I was lucky enough to work with the University of Oxford on their first MOOC.

And the idea was that we take these brilliant professors, uh, from, you know, these brilliant universities, and we put them in front of a camera, and then all of a sudden, anybody can watch this lecture and anybody can be asked questions by this professor and anybody can absorb what they have to tell us.

Um, but I think, um, you know, if we were to assess the success of that project on the ability to increase access to stuff, to content of really high quality, then it was a success. But what we've not really been able to do is to scale the part of the learning experience that actually we know impacts how humans learn.

So content matters, of course, content matters. We need to have something, uh, to learn. We need information in order to grow. But actually what the research shows is that the part of the learning experience that actually enables us to grow is the support that comes with it. So where previously technology has been used to, uh, scale access to information, what I'm excited about in terms of AI is its potential to scale access also to great teaching.

And great teaching is a complex thing. Great teaching requires, of course, domain knowledge. So, you know, I know the topic, but it requires more than that. It requires, uh, deep skills of empathy, communication. It requires time. It requires energy. Uh, it's essentially like having this ideal of a Socratic tutor, you know, like an expert tutor who's always on hand, who knows who we are, who knows how we learn, who knows when we learn best, what kind of things we prefer, you know, need, blah, blah, blah. And so I think this is where I, and a lot of other people, have been excited about AI: it may be, and it's still a question mark, uh, but maybe it is the moment at which we can finally scale access to great learning support, rather than only, uh, learning content.

And there are quite a few examples of this out there already. This relates, by the way, if you want to learn more, to Bloom's two sigma problem. So in the 1980s, Bloom did some research and showed that rather than having one teacher and loads of students, if you have a one to one relationship, it's no surprise, uh, the outcomes are better.

The learning is better. Um, and it's not just better, it's like twice as good. Two sigmas better. And it has a particularly powerful impact on, um, underrepresented groups, which is also interesting. Uh, and so maybe AI is the moment at which we can finally scale that. Because Bloom said, well, the ideal is that everybody has their own teacher, but how the hell do we do that?

And the question now is, can AI do that? And as I say, we're seeing examples already. The most famous is probably, um, Khanmigo, which you may have heard of, which is an extension to Khan Academy. So online learning, STEM learning for kids. Uh, they've been working with OpenAI and they have built a tutor that sits alongside this, uh, that helps the kids, um, to complete the tasks, uh, gives them support.

And what's really interesting, what's potentially very powerful about this and about AI in general, is that we are able to, um, build really great tutors. So often the people who know stuff are not the best teachers. AI enables us to combine the two. So we can train an AI to be, for example, brilliant at maths, you know, I don't know, algebra, but also to be pedagogically brilliant.

So for example, to train it not to give answers. Um, to know how to be a brilliant coach, to present the right problems in the right moment and get students to find their own way through learning. And that's when learning really happens. When we have to try very hard, uh, when we get support, but we are responsible for solving problems. We learn through solving problems rather than being given answers. And that's where AI might, I think, change how we can scale learning in really exciting ways.  That's very exciting. It just made me think, while you were telling me about it, looking at my past, I had a lot of different interests. I love history. I love geography.

I love philosophy. I love AI. And, thinking about it now, I can attach all of those subjects to a teacher of mine who really made an impact on me. And I remember that empathic bond where, um, if there was a topic that I was struggling with, the day that I connected the dots, I wanted to go back to my teacher, share it, and say, look, I got it.

Is this correct? What do you think about it? And having that person who shows empathy and understands the journey that we've been through, I think that motivated me a lot to continue to learn. And this makes me question, in this case: we see a lot of, um, potential in how AI can disrupt learning and change the way we learn, and maybe have tutors that we can make that connection with, that can also enhance the way that we learn.

But how do you think we can balance the use of AI in learning without losing the human touch?  No, it's a really interesting question. And I mean, you're absolutely right on the teacher presence thing. That's what it's referred to as in the research, and the research shows that it's one of the most powerful indicators of effective learning.

So the achievement of intended outcomes depends on having a teacher who, uh, of course knows their domain, but yes, makes you feel seen, heard, all the things you just described. So, so important. Um, but yeah, it's a great question, and it's one that I think we haven't explored enough yet. We don't have an answer to yet.

But, but the question is, what is the human touch?  So I think at the moment we know the formula for engagement, motivation and achievement includes teacher presence, includes support, includes empathy, includes domain knowledge. What we don't know is if it matters that that comes from a human.

So the question here is one of human-machine interaction. And we can learn a lot here, as always, from the medical world, which is, you know, a hundred steps ahead, uh, in terms of its innovation, experimentation, research. And I think what we've seen there is really interesting. So what we've seen is that even when it can be proven that an AI is, for example, more effective at, uh, diagnosing an illness, we still, for some reason, maybe it's habit, uh, maybe it's culture, maybe it's expectation, maybe it's something that we can evolve from, but for some reason right now, we would still rather hear from a human than a machine, uh, when we get our diagnosis. There is something, uh, specific, is what I'm trying to say, about the fact that it is a human.

It's not functional. It's not, I need information. It's, I want to have a connection with a human being who then gives me information. And I wonder if the same is true in learning. Um, I wonder whether we could build maybe the most expert, brilliant, uh, you know, connected, empathetic, emotionally intelligent AI.

We've learned already, like, there are many things that AI can do very well, including things like empathy, um, you know, building affective bonds with humans. But like, how do we react to it as humans? And I think we don't know is the answer, but also I don't think there is one answer to that. So I think, again, it's a bit like learning itself.

It's contextual, uh, it depends on the context. So if, for example, I at the moment have no access to any education whatsoever, I will welcome the introduction of an AI teacher, potentially, if it means that I can, for example, get a degree. Uh, but then conversely, we've seen, uh, you know, research done at places like Georgia Tech.

There's a really interesting experiment done there quite a few years ago now. Uh, they again are way ahead. I'm going to misuse a word here, but a computer science professor effectively cloned himself, uh, and created Jill Watson. Uh, and the students were unable to differentiate, when they were discussing online with a tutor, whether it was the real professor or the AI. Um, but they did report that they would prefer it to be the professor. I think there is a sense there of, like, well, I've paid for this. What I've paid for is access to a human expert.  Uh, what I haven't paid for is, um, you know, basically access to a tool.  Uh, this is a very expensive tool, if that's what I've paid for.

So there are some interesting questions. I think the short answer, or the summary, is that I think it would be possible to build an AI better than a human tutor, um, by which I mean one that has infinite time, energy, resource, uh, access to the most up-to-date information possible in an instant, and instantly understands it. But whether "better" is that, or whether better is actually having a real human, uh, who we can connect with, is TBC, I think. And as I say, I think it depends on what your current access to education is.

Uh, but certainly there's a lot of exciting potential and I am very excited to explore it. Uh, so usually it's called human-machine interaction; I've been thinking about it as, like, uh, learner-machine interaction. What does that look like and how do we, uh, accept it? And I think just one final note on this, which I think is important.

I think sometimes we make the assumption, for example, to go back to Khanmigo, that it is brilliant. It's like, ah, it's opening up access. Any kid now can get access to great maths content, questions, and a tutor.  But actually, if you dig in a little bit and you look at the sorts of people who use these technologies, uh, they are typically white, middle class, educated people, people who are confident in that kind of learning environment, which is very Western.

You know, the idea that there is an activity to do, a teacher here, you work on your own, you're self-led. Um, and so as much as it appears to be opening up access, actually, we also need to be mindful of the fact that, uh, building AI tutors might, um, exclude people also. And I think that's something else we just need to take into account when we're asking, like, what does this dream situation look like?

It's also, I think, if we're going to lean into AI, we need to be mindful that we need to build different types of AI to support different types of students. But ultimately, yeah, maybe as humans, we just prefer human teachers and this will be a flash in the pan. Who knows?  That's super interesting. I actually remember that research.

I don't know if I ever told you, I wrote my thesis on digital developments and their effect on the educational sector, and I read a book by Justin Reich, if I'm not wrong about the surname, and he was talking about this, how technology will help us make education more accessible. But when you look at the research, there were some doubts about, okay, is it still going to benefit everyone, or is it going to benefit a certain part of the world, and what is that going to look like?

So, I think there's still a lot of unknown, as you said, when it comes to learning and how AI fits into that picture. But, I think this is the time to ask the right questions. I think what you're doing, um, in that is super valuable because if we understand the questions that we need to ask, um, for example, what does learning mean for us?

What does human touch mean in learning? Is it something that we can support with AI, or is it something that we need to keep as humans? So, to me, it feels so similar to deciding what part of an AI project you need to automate and what part you need to leave out. Because, at the end of the day, I remember a couple of years ago, there was a lot of emphasis on AI is not just technical, it's for everyone. I think these are the years where we're seeing this becoming the reality, because it comes down to how we perceive things. How do we react to a tutor that is, um, AI-supported? So I think these are super interesting questions. So I, I wanna ask you another thing. Let's imagine for a second that we go back to your childhood and you get to choose when to use AI in your learning path.

How would you change your learning experience?  Such a good question. Um, yeah, and I think actually as a learner, it's really good to get that mindset that you just described of, okay, I have access to AI, which bits of my learning process should I, as a learner, automate and which bits should I leave for me, um, which I guess is what you're asking me.

So I'm now thinking back to, uh, like, university, undergrad type level. And I think actually I did this, um, manually anyway. Uh, what I did in order to do as well as I could at university: I, uh, read a lot of things very, very lightly. So I am, like, the queen of reading the, uh, introduction and the end and then filling in the gap, and just adding that to my list. Because actually the most valuable thing for me was compiling all of this information and then thinking very critically about it, analyzing it for themes, and then, um, effectively developing some key skills around, like, well, how do I articulate an argument in a compelling way?

I should say I was a historian, so this is what it was all about. It was about me trying to prove that I can, and then, uh, have an opinion on it, and come to a conclusion, and maybe even start to think about it in an original way. And that's where the real value was. And as someone who has since taught in university, that is where you see the exceptional.

You know, that's really your dream as an educator. I don't think anybody goes into teaching thinking, oh, well, I really hope that I can teach them to tell me back what I've told them. What I really want is to inspire my students to be able to think in new and original ways, and for them to come up with new ways of thinking, bringing more, uh, different perspectives to old problems and this kind of thing.

Cause that's where the magic happens. And I think I would love to have been able to use AI to focus more on that bit. Because I spent a lot of my time, well, first of all, waiting to get access to information. So I think that's what's really exciting. And I absolutely love AI tools like Perplexity and Elicit, uh, because, basically, I picture them knocking down ivory towers.

Anyone now can get in and very rapidly find research.  And it's not in, like, impenetrable language, like academic language. It can be summarized back to you in whatever way you like. You can ask follow up questions. It suggests follow up questions. And so it feels like that's become much more accessible and fast.

And I actually often think about what on earth I could have done if I'd had these tools. I don't think the answer is, oh, you would have cheated and you wouldn't have got as much out of this. I think I would have got a lot more out, because I would have been able to access more stuff, but more quickly. And there's also an important financial element to that, in that, uh, previously, people have had to pay to get access to research, either by going to university or by subscribing to things.

And it's a lot of money. I mean, as somebody who has subscribed to lots of journals, it's hundreds of pounds a month. Now, not so much. It can be difficult still to access the full versions, but we've opened up access. I would say that AI has done more to open up access in the last 12 months than the MOOC ever did, uh, which is maybe something else that we can get into, but there's an example.

So I would have loved to have AI as a research assistant, I guess, uh, and as, uh, somebody who was able to summarize stuff.  Um, and I think what that would have done is liberate me to do more thinking, more analysis, more ideation, and this kind of thing. And then I think there's, like, a bookend to that, where I also would have loved to have AI take on some of the functional stuff.

So I also used to spend a hell of a lot of time then, you know, uh, writing to the right word count, doing the footnotes in exactly the right way. If you stop in the wrong place, then you're not really a historian. Uh, and so there's something very functional about that, which maybe matters, but if it matters, I would have loved to delegate that to a machine, because actually there's no value in that for me, other than proving that I can do as I'm told.

Um, so I think, uh, yeah, I think it would have been great. And I think I would have done even more, and even better, if I'd had access to support to extend my thinking, deepen my thinking, and then just get rid of some of the pure admin stuff that, you know, was stressful and, I would say, of very low value.

That's super interesting. I was also thinking, um, I have a background in international relations, so I had a lot of readings to do as well, similar to, um, what you did. And I remember I really liked reading. I'm the same. I just have a look at the intro, the end. And I like spending some time with that book or paper that I need to go through.

But what I really spent a lot of time on was coming up with ideas for projects or presentations, which is a part, actually, when I think about it now, where I would have loved to use AI to brainstorm, to come up with different topics, ideas, different historical facts that I could talk about. That would be super interesting.

Yeah. Yeah, yeah. And sorry, just one more thought on that. I've been doing a lot of work recently with both learning designers and educators, people who are teaching, uh, and that is one of the primary use cases, actually. Um, AI is used most often to build content, but very rapidly, uh, you know, competing for first place in use cases is, uh, ideation.

Yeah, so actually putting in your idea and asking AI to tell you why it's not the best idea is so fascinating. And of course, people don't have to agree with it, but what it does is it helps you to understand, I think, your own biases and/or blind spots and/or assumptions, by saying, but there are these five other ways you could think about this. And that is such a valuable tool for learning.

Uh, it's incredible.  That's true. Um, another thing that I wanted to ask you, uh, I know that you also, uh, collaborated in the past with different organizations, different companies who are now, uh, asking these questions of how can I start improving my learning process within the organization with AI? In, um, most of the corporate organizations, there are all kinds of trainings when you just start your new job, uh, starting from why you shouldn't share your password in a coffee shop or, um, how you can be more productive.

And that was a topic I discussed with another guest of ours in the past, I remember, and I'm wondering your take on it. So, what would you suggest organizations do to start, um, improving their learning process with AI?  Yeah, and a lot are. It's very interesting, actually. Particularly in, um, corporate L&D, as you say, there is a lot of appetite to, I guess, do two things.

One, primarily, I should say, is a kind of hair-on-fire problem: the speed at which we can produce training and the cost associated. So the headline is, uh, to design a training in a corporate setting is incredibly time consuming. I think the average length of going from a request to a piece of training is something crazy, like eight months.

I mean, it's unprecedented in its slowness, I think, in the modern world. Um, and then of course the cost of that is humongous, because it's so much time, so much energy. And there are particular pain points within that. So for example, the content creation bit is particularly slow and particularly expensive.

Um, but then there's also this question around, can AI help? And I think this is what I would encourage people to do. So there's something around saying, okay, let's look at our current process. Let's zoom out. Let's take a product mindset and list the jobs to be done in going from an idea through to a created training.

And then let's, almost like, identify the low hanging fruit. So, what is of low value but high resource that could be delegated? That's how I like to think about it. Imagine AI as your, uh, apprentice. What could I delegate with really good instructions? Uh, something that's structured and repeated, relatively simple, but that I find a distraction.

So of course do that. Uh, and many organizations' answer to that is, um, to focus in on the delivery bit, on the content creation bit. So we're seeing lots of organizations invest in tools like Synthesia and, I think it's, Colossyan, uh, basically text-to-video tools, 'cause that's the bit that costs the most and takes the most time.

Uh, we're also seeing lots of organizations invest in, like, I don't know, text-to-quiz tools and translation tools. These are the things that take time and cost money. And that's great. Please do automate away. But I think also, please ask the question: how do we use AI not just to oil the machine, but to make it better?

I think we also need to zoom out, and I talk about this a lot, I know, but I think it's so important. There is a risk at the moment that we're just using AI in corporate L&D, and more broadly, I think, in a learning context, to scale bad practices, scale broken practices, to get more effective at ineffective practice.

Um, and so, uh, we know, for example, that video-based learning that's very inactive, that's long-winded, uh, that's generic, is incredibly expensive and also incredibly ineffective. Uh, we're talking about, at best, um, a reported impact rate on any kind of knowledge or skills of about 12 percent. And that would be like absolutely killing it.

So if you keep in mind that we spend 300 million a year on average making corporate L&D, the fact that we're getting at best a 12 percent return on investment is crazy. And all of this time and energy has gone into it. So the other part of the equation, which I always encourage people to think about, is how do we actually use AI to improve the process, to actually get better, before we think about delivering the thing.

What's the thing you're designing? And let's start to explore. Actually, if we think of this as the ADDIE process, so a process that's, like, analysis, design, development, implementation, evaluation: at the moment, we're focusing on the development. So the building stuff, the content. What if we think instead about AI at the analysis and the design stage?

What if we use AI to better understand actually what's the training we need?  And then having defined that, asking what's the best way to deliver it? Let's not assume it's a video and a quiz.  Because, spoiler, it isn't. Ever.  Uh, what is it then? And how can we use AI to get better at that bit before we build it?

And of course that's a harder question, because it requires more systemic change. Usually we have, uh, an LMS that has certain capabilities, or we have other options, like we can get together in a room, but we have an established way of teaching and learning, both in formal education and in L&D, which is very much, like, content and then some sort of discussion or knowledge check.

And then we're done. And it goes back to that first question you asked, I think, uh, about learning. What is learning? And if learning is just giving you information, then that's enough, but we know it isn't. So yeah, I've been working with a lot of L&D people, like chief learning officers in big corporations, encouraging them to ask the question not just about AI for efficiency, but also for effectiveness.

And I think, you know, seeing through the eyes of an educator, what that means is focusing,  as I say, first on the analysis and the design bit  before you start to speed up the build bit. Because at the moment we're building lots of things which are cheaper and faster, but not better. And I think that is underestimating or yeah, not leveraging AI in the way that we could, if only we were a bit more thoughtful about the first bit. 

That's so interesting. I, to me, the picture that you painted seems like there's a huge budget assigned. There are a lot of people putting time and effort in it. There is the technology that's available and what's left is, what are we going to do with it? Exactly. It's very interesting. It's a human thing. 

Yeah, absolutely. It's a human question. People often say to me, oh, Phil, is AI going to change education? It's like, well, it might, but then technology could have changed education over the last 30 years, and we decided to use it to, uh, shore up, to speed up, and to scale what we already do, which takes us back to the MOOC.

It's a scaled lecture from the Victorian period, you know. So, um, you're absolutely right. It's, um, potentially very disruptive, but it's potentially just going to make us more prolific at what we already do, uh, which arguably is a conservative force. It's more about evolution than revolution, for sure.

Super interesting. Well, I think AI in learning remains one of the most exciting areas to follow, to see the research on, because there's a lot of potential. Everyone agrees on that, but how it's going to work out in the future remains a big question mark. So thank you so much for providing your insights, Phil.

It's been great to have you here. And, um, yeah, we'll see together what the future holds. Yeah, who knows, but thank you. And I'm very, uh, excited and privileged to be part of the conversation. I appreciate you giving me this space today to explore it with you. I've learned a lot as well. So thanks, Helen. 

Thank you. Bye. Bye.